AI

AI Junk Is Starting To Pollute the Internet (wsj.com) 55

Online publishers are inundated with useless article pitches as websites using AI-generated content multiply. From a report: When she first heard of the humanlike language skills of the artificial-intelligence bot ChatGPT, Jennifer Stevens wondered what it would mean for the retirement magazine she edits. Months later, she has a better idea. It means she is spending a lot of time filtering out useless article pitches. People like Stevens, the executive editor of International Living, are among those seeing a growing amount of AI-generated content that is so far beneath their standards that they consider it a new kind of spam.

The technology is fueling an investment boom. It can answer questions, produce images and even generate essays based on simple prompts. Some of these techniques promise to enhance data analysis and eliminate mundane writing tasks, much as the calculator changed mathematics. But they also show the potential for AI-generated spam to surge and spread across the internet. In early May, the news site rating company NewsGuard found 49 fake news websites that were using AI to generate content. By the end of June, the tally had hit 277, according to Gordon Crovitz, the company's co-founder. "This is growing exponentially," Crovitz said. The sites appear to have been created to make money through Google's online advertising network, said Crovitz, formerly a columnist and a publisher at The Wall Street Journal.
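
Crovitz's "growing exponentially" holds up to back-of-envelope arithmetic on NewsGuard's two counts; a quick sketch in Python (treating the gap between counts as roughly eight weeks is an assumption):

    import math

    # NewsGuard's two data points from the report above
    sites_early_may, sites_end_june = 49, 277
    weeks = 8  # approximate early-May to end-of-June window (assumption)

    growth = sites_end_june / sites_early_may      # ~5.7x overall
    weekly = growth ** (1 / weeks) - 1             # ~24% per week
    doubling = math.log(2) / math.log(1 + weekly)  # ~3.2 weeks per doubling

    print(f"{growth:.1f}x in {weeks} weeks: ~{weekly:.0%}/week, doubling every ~{doubling:.1f} weeks")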

Researchers also point to the potential of AI technologies being used to create political disinformation and targeted messages used for hacking. The cybersecurity company Zscaler says it is too early to say whether AI is being used by criminals in a widespread way, but the company expects to see it being used to create high-quality fake phishing webpages, which are designed to trick victims into downloading malicious software or disclosing their online usernames and passwords. On YouTube, the ChatGPT gold rush is in full swing. Dozens of videos offering advice on how to make money from OpenAI's technology have been viewed hundreds of thousands of times. Many of them suggest questionable schemes involving junk content. Some tell viewers that they can make thousands of dollars a week, urging them to write ebooks or sell advertising on blogs filled with AI-generated content that could then generate ad revenue by popping up on Google searches.

Facebook

Why the Early Success of Threads May Crash Into Reality (nytimes.com) 175

Mark Zuckerberg has used Meta's might to push Threads to a fast start -- but that may only work up to a point. Mike Isaac, writing at The New York Times: A big tech company with billions of users introduces a new social network. Leveraging the popularity and scale of its existing products, the company intends to make the new social platform a success. In doing so, it also plans to squash a leading competitor's app. If this sounds like Instagram's new Threads app and its push against its rival Twitter, think again. The year was 2011 and Google had just rolled out a social network called Google+, which it billed as its "Facebook killer." Google thrust the new site in front of many of its users who relied on its search and other products, expanding Google+ to more than 90 million users within the first year.

But by 2018, Google+ was relegated to the ash heap of history. Despite the internet search giant's enormous audience, its social network failed to catch on as people continued flocking to Facebook -- and later to Instagram and other social apps. In the history of Silicon Valley, big tech companies have often become even bigger tech companies by using their scale as a built-in advantage. But as Google+ shows, bigness alone is no guarantee of winning the fickle and faddish social media market.

This is the challenge that Zuckerberg, the chief executive of Meta, which owns Instagram and Facebook, now faces as he tries to dislodge Twitter and make Threads the prime app for real-time, public conversations. If tech history is any guide, size and scale are solid footholds -- but ultimately can only go so far. What comes next is much harder. Mr. Zuckerberg needs people to be able to find friends and influencers on Threads in the serendipitous and sometimes weird ways that Twitter managed to accomplish. He needs to make sure Threads isn't filled with spam and grifters. He needs people to be patient about app updates that are in the works.

Social Networks

As BotDefense Leaves 'Antagonistic' Reddit, Mods Fear Spam Overload (arstechnica.com) 68

"The Reddit community is still reckoning with the consequences of the platform's API price hike..." reports Ars Technica.

"The latest group to announce its departure is BotDefense." BotDefense, which helps remove rogue submission and comment bots from Reddit and which is maintained by volunteer moderators, is said to help moderate 3,650 subreddits. BotDefense's creator told Ars Technica that the team is now quitting over Reddit's "antagonistic actions" toward moderators and developers, with concerning implications for spam moderation on some large subreddits like r/space.

BotDefense started in 2019 as a volunteer project and has been run by volunteer mods, known as "dequeued" and "abrownn" on Reddit. Since then, it claims to have populated its ban list with 144,926 accounts, and it helps moderate subreddits with huge followings, like r/gaming (37.4 million members), /r/aww (34.2 million), r/music (32.4 million), r/Jokes (26.2 million), r/space (23.5 million), and /r/LifeProTips (22.2 million). Dequeued told Ars that other large subreddits BotDefense helps moderate include /r/food, /r/EarthPorn, /r/DIY, and /r/mildlyinteresting. On Wednesday, dequeued announced that BotDefense is ceasing operations. BotDefense has already stopped accepting bot account submissions and will disable future action on bots. BotDefense "will continue to review appeals and process unbans for a minimum of 90 days or until Reddit breaks the code running BotDefense," the announcement said...
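
For a sense of how a shared-banlist service of this kind operates, here is a minimal sketch using the PRAW library. The credentials, ban list, and subreddit names are placeholders, and the logic is an illustration of the general shape, not BotDefense's actual code:

    import praw  # pip install praw

    reddit = praw.Reddit(client_id="...", client_secret="...",
                         username="...", password="...",
                         user_agent="shared-banlist-bot-sketch")

    BAN_LIST = {"spam_bot_123", "karma_farmer_456"}  # hypothetical entries
    SUBREDDITS = "space+LifeProTips"  # subreddits where the bot account is a mod

    # Watch new submissions across all covered subreddits in a single stream
    for post in reddit.subreddit(SUBREDDITS).stream.submissions(skip_existing=True):
        if post.author and post.author.name in BAN_LIST:
            post.mod.remove()  # pull the spam post
            post.subreddit.banned.add(post.author, ban_reason="Known bot (shared list)")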

Dequeued, who said they've been moderating for nearly nine years, said Reddit's "antagonistic actions" toward devs and mods are the only reason BotDefense is closing. The moderator said there were plans for future tools, like a new machine learning system for detecting "many more" bots. Before the API battle turned ugly, dequeued had no plans to stop working on BotDefense...

[S]ubreddits that have relied on BotDefense are uncertain about how they'll manage without the tool, and its impending departure is one more sign of a deteriorating Reddit community.

Ironically, Reddit's largest shareholder — Advance Publications — owns Ars Technica's parent company Condé Nast.

The article notes that Reddit "didn't respond to Ars' request for comment on BotDefense closing, how Reddit fights spam bots and karma farms, or about users quitting Reddit."

Social Networks

AMAs Are the Latest Casualty In Reddit's API War (arstechnica.com) 179

An anonymous reader quotes a report from Ars Technica: Ask Me Anything (AMA) has been a Reddit staple that helped popularize the social media platform. It delivered some unique, personal, and, at times, fiery interviews between public figures and people who submitted questions. The Q&A format became so popular that many people host so-called AMAs these days, but the main subreddit has been r/IAmA, where the likes of then-US President Barack Obama and Bill Gates have sat in the virtual hot seat. But that subreddit, which has been called its own "juggernaut of a media brand," is about to look a lot different and likely less reputable. On July 1, Reddit moved forward with changes to its API pricing that have infuriated a large and influential portion of its user base. High pricing and a 30-day adjustment period resulted in many third-party Reddit apps closing and others moving to paid models that developers aren't sure are sustainable.

The latest casualty in the Reddit battle has a profound impact on one of the most famous forms of Reddit content and signals a potential trend in Reddit content changing for the worse. On Saturday, the r/IAmA moderators announced that they will no longer perform these duties:

- Active solicitation of celebrities or high-profile figures to do AMAs.
- Email and modmail coordination with celebrities and high-profile figures and their PR teams to facilitate, educate, and operate AMAs. (We will still be available to answer questions about posting, though response time may vary).
- Running and maintaining a website for scheduling of AMAs with pre-verification and proof, as well as social media promotion.
- Maintaining a current up-to-date sidebar calendar of scheduled AMAs, with schedule reminders for users.
- Sister subreddits with categorized cross-posts for easy following.
- Moderator confidential verification for AMAs.
- Running various bots, including automatic flairing of live posts.

The subreddit, which has 22.5 million subscribers as of this writing, will still exist, but its moderators contend that most of what makes it special will be undermined. "Moving forward, we'll be allowing most AMA topics, leaving proof and requests for verification up to the community, and limiting ourselves to removing rule-breaking material alone. This doesn't mean we're allowing fake AMAs explicitly, but it does mean you'll need to pay more attention," the moderators said. The mods will also continue to handle bare-minimum tasks like keeping spam out and enforcing the rules, they said. Like many other Reddit moderators Ars has spoken to, some will step away from their duties, and they'll reportedly be replaced "as needed."

AI

'AI is Killing the Old Web' 108

Rapid changes, fueled by AI, are impacting large pockets of the internet, argues a new column. An excerpt: In recent months, the signs and portents have been accumulating with increasing speed. Google is trying to kill the 10 blue links. Twitter is being abandoned to bots and blue ticks. There's the junkification of Amazon and the enshittification of TikTok. Layoffs are gutting online media. A job posting looking for an "AI editor" expects "output of 200 to 250 articles per week." ChatGPT is being used to generate whole spam sites. Etsy is flooded with "AI-generated junk."

Chatbots cite one another in a misinformation ouroboros. LinkedIn is using AI to stimulate tired users. Snapchat and Instagram hope bots will talk to you when your friends don't. Redditors are staging blackouts. Stack Overflow mods are on strike. The Internet Archive is fighting off data scrapers, and "AI is tearing Wikipedia apart." The old web is dying, and the new web struggles to be born. The web is always dying, of course; it's been dying for years, killed by apps that divert traffic from websites or algorithms that reward supposedly shortening attention spans. But in 2023, it's dying again -- and, as the litany above suggests, there's a new catalyst at play: AI.

Social Networks

Is Reddit Dying? (eff.org) 266

"Compared to the website's average daily volume over the past month, the 52,121,649 visits Reddit saw on June 13th represented a 6.6 percent drop..." reports Engadget (citing data provided by internet analytics firm Similarweb). [A]s many subreddits continue to protest the company's plans and its leadership contemplates policy changes that could change its relationship with moderators, the platform could see a slow but gradual decline in daily active users. That's unlikely to bode well for Reddit ahead of its planned IPO and beyond.
In fact, the Financial Times now reports that Reddit "acknowledged that several advertisers had postponed certain premium ad campaigns in order to wait for the blackouts to pass." But they also got this dire prediction from a historian who helps moderate the subreddit "r/AskHistorians" (with 1.8 million subscribers).

"If they refuse to budge in any way I do not see Reddit surviving as it currently exists. That's the kind of fire I think they're playing with."

More people had the same thought. The Reddit protests drew this response earlier this week from EFF's associate director of community organizing: This tension between these communities and their host has, again, fueled more interest in the Fediverse as a decentralized refuge... Unfortunately, discussions of Reddit-like fediverse services Lemmy and Kbin on Reddit were colored by paranoia after the company banned users and subreddits related to these projects (reportedly due to "spam"). While these accounts and subreddits have been reinstated, the potential for censorship around such projects has made a Reddit exodus feel more urgently necessary...
Saturday the EFF official reiterated their concerns when Wired asked: does this really signal the death of Reddit? "I can't see it as anything but that... [I]t's not a big collapse when a social media website starts to die, but it is a slow attrition unless they change their course. The longer they stay in their position, the more loss of users and content they're going to face."

Wired even heard a thought-provoking idea from Amy Bruckman, a regents' professor/senior associate chair at the School of Interactive Computing at Georgia Institute of Technology. Bruckman "advocates for public funding of a nonprofit version of something akin to Reddit."

Meanwhile, hundreds of people are now placing bets on whether Reddit will backtrack on its new upcoming API pricing — or oust CEO Steve Huffman — according to Insider, citing reports from online betting company BetUS.

CEO Huffman's complaint that the moderators were ignoring the wishes of Reddit's users led to a funny counter-response, according to the Verge. After asking users to vote on whether to end the protest, two forums saw overwhelming support instead for the only offered alternative: the subreddits "now only allow posts about comedian and Last Week Tonight host John Oliver."

Both r/pics (more than 30 million subscribers) and r/gifs (more than 21 million subscribers) offered two options to users to vote on... The results were conclusive:

r/pics: return to normal, -2,329 votes; "only allow images of John Oliver looking sexy," 37,331 votes.
r/gifs: return to normal, -1,851 votes; only feature GIFs of John Oliver, 13,696 votes...

On Twitter, John Oliver encouraged the subreddits — and even gave them some fodder. "Dear Reddit, excellent work," he wrote to kick off a thread that included several ridiculous pictures. A spokesperson for Last Week Tonight with John Oliver didn't immediately reply to a request for comment.

Social Networks

Reddit Fight 'Enters New Phase', as Moderators Vow to Pressure Advertisers, CNN Reports (cnn.com) 158

Reddit "appears to be laying the groundwork for ejecting forum moderators committed to continuing the protests," CNN reported Friday afternoon, "a move that could force open some communities that currently remain closed to the public.

"In response, some moderators have vowed to put pressure on Reddit's advertisers and investors." As of Friday morning, nearly 5,000 subreddits were still set to private and inaccessible to the public, reflecting a modest decrease from earlier in the week but still including groups such as r/funny, which claims more than 40 million subscribers, and r/aww and r/music, each with more than 30 million members. But Reddit has portrayed the blacked-out communities as a small slice of its wider platform. Some 100,000 forums remain open, the company said in a blog post, including 80% of its 5,000 most actively engaged subreddits...

Reddit CEO and co-founder Steve Huffman told NBC News the company will soon allow forum users to overrule moderators by voting them out of their positions, a change that may enable communities that do not wish to remain private to reopen. In addition, one company administrator said Thursday, Reddit may soon view communities that remain private as an indicator that the moderators of those communities no longer wish to moderate. That would constitute a form of inactivity for which the moderators can be removed, the company said. "If a moderator team unanimously decides to stop moderating, we will invite new, active moderators to keep these spaces open and accessible to users," the administrator said, adding that Reddit may intervene even if most moderators on a team wish to remain closed and only a single moderator wants to reopen...

Omar, a moderator of a subreddit participating in this week's blackout, told CNN Friday that many subreddits have participated in the blackouts based on member polls that indicate strong support for the protests... Content moderation on Reddit stands to worsen if the company continues with its plan, Omar said, warning that the coming changes will deter developers from creating and maintaining tools that Reddit communities rely on to detect and eliminate spam, hate speech or even child sexual abuse material. "That's both harmful for users and advertisers," Omar said, adding that supporters of the protests have been contacting advertisers to explain how the platform's coming changes may hurt brands. Already, Omar said, the blackout has made it harder for companies to target ads to interest groups; video game companies, for example, can no longer target ads to gaming-focused subreddits that have taken themselves private...

Huffman has also said that the protests have had little impact on the company financially.

NBC News adds: In an interview Thursday with NBC News, Reddit CEO Steve Huffman praised Musk's aggressive cost-cutting and layoffs at Twitter, and said he had chatted "a handful of times" with Musk on the subject of running an internet platform. Huffman said he saw Musk's handling of Twitter, which he purchased last year, as an example for Reddit to follow.

AI

The Problem with the Matrix Theory of AI-Assisted Human Learning (nytimes.com) 28

In an opinion piece for the New York Times, Vox co-founder Ezra Klein worries that early AI systems "will do more to distract and entertain than to focus." (Since they tend to "hallucinate" inaccuracies, and may first be relegated to areas "where reliability isn't a concern" like videogames, song mash-ups, children's shows, and "bespoke" images.)

"The problem is that those are the areas that matter most for economic growth..." One lesson of the digital age is that more is not always better... The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?

You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the "boring apocalypse" scenario for A.I., in which "we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I."
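
Frankle's loop is simple to picture in code; a sketch using the 2023-era openai Python package, with the model choice and prompts purely illustrative:

    import openai  # 0.x-era API

    openai.api_key = "sk-..."  # assumes an OpenAI API key

    def chat(prompt):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    bullets = "- Q3 launch slips two weeks\n- need sign-off on the revised budget"

    # The sender inflates two bullet points into a long, polite email...
    long_email = chat(f"Write a formal, detailed email covering:\n{bullets}")

    # ...and the recipient compresses it straight back down to bullets.
    print(chat(f"Summarize this email as terse bullet points:\n{long_email}"))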

But there's another worry: that the increased efficiency "would come at the cost of new ideas and deeper insights." Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from "The Matrix" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book spent drawing connections to what we know ... that matters...

The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real... To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.

We failed that test with the internet. Let's not fail it with A.I.

The Internet

Phishing Domains Tanked After Meta Sued Freenom (krebsonsecurity.com) 7

An anonymous reader quotes a report from KrebsOnSecurity: The number of phishing websites tied to domain name registrar Freenom dropped precipitously in the months surrounding a recent lawsuit from social networking giant Meta, which alleged the free domain name provider has a long history of ignoring abuse complaints about phishing websites while monetizing traffic to those abusive domains. Freenom is the domain name registry service provider for five so-called "country code top level domains" (ccTLDs), including .cf for the Central African Republic; .ga for Gabon; .gq for Equatorial Guinea; .ml for Mali; and .tk for Tokelau. Freenom has always waived the registration fees for domains in these country-code domains, but the registrar also reserves the right to take back free domains at any time, and to divert traffic to other sites -- including adult websites. And there are countless reports from Freenom users who've seen free domains removed from their control and forwarded to other websites.

By the time Meta initially filed its lawsuit in December 2022, Freenom was the source of well more than half of all new phishing domains coming from country-code top-level domains. Meta initially asked a court to seal its case against Freenom, but that request was denied. Meta withdrew its December 2022 lawsuit and re-filed it in March 2023. "The five ccTLDs to which Freenom provides its services are the TLDs of choice for cybercriminals because Freenom provides free domain name registration services and shields its customers' identity, even after being presented with evidence that the domain names are being used for illegal purposes," Meta's complaint charged. "Even after receiving notices of infringement or phishing by its customers, Freenom continues to license new infringing domain names to those same customers." Meta pointed to research from Interisle Consulting Group, which discovered in 2021 and again last year that the five ccTLDs operated by Freenom made up half of the Top Ten TLDs most abused by phishers.

Interisle partner Dave Piscitello said something remarkable has happened in the months since the Meta lawsuit. "We've observed a significant decline in phishing domains reported in the Freenom commercialized ccTLDs in months surrounding the lawsuit," Piscitello wrote on Mastodon. "Responsible for over 60% of phishing domains reported in November 2022, Freenom's percentage has dropped to under 15%." Piscitello said it's too soon to tell the full impact of the Freenom lawsuit, noting that Interisle's sources of spam and phishing data all have different policies about when domains are removed from their block lists.
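
Interisle's headline numbers come down to a share-by-TLD computation over a feed of reported phishing domains; a hedged sketch (the feed file and its one-domain-per-line format are hypothetical):

    from collections import Counter

    FREENOM_CCTLDS = {".cf", ".ga", ".gq", ".ml", ".tk"}

    def freenom_share(domains):
        """Fraction of reported phishing domains in Freenom-run ccTLDs."""
        counts = Counter("." + d.rstrip(".").rsplit(".", 1)[-1].lower() for d in domains)
        freenom = sum(n for tld, n in counts.items() if tld in FREENOM_CCTLDS)
        return freenom / max(sum(counts.values()), 1)

    # Hypothetical feed: one reported phishing domain per line
    with open("phishing_domains.txt") as feed:
        share = freenom_share(line.strip() for line in feed if line.strip())
    print(f"Freenom ccTLD share: {share:.0%}")  # per Piscitello: >60% down to <15%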

IT

Google Drive Gets a Desperately Needed 'Spam' Folder for Shared Files (arstechnica.com) 9

Fifteen years after launching Google Docs and Sheets with file sharing, Google is adding what sounds like adequate safety controls to the feature. From a report: Google Drive (the file repository interface that contains your Docs, Sheets, and Slides files) is finally getting a spam folder and algorithmic spam filters, just like Gmail has. It sounds like the update will provide a way to limit Drive's unbelievably insecure behavior of allowing random people to add files to your Drive account without your consent or control. Because Google essentially turned Drive file-sharing into email, Google Drive needs every spam control that Gmail has. Anyone with your email address can "share" a file with you, and a ton of spammers already have your email address. Previously, Drive assumed that all shared files were legitimate and wanted, with the only "control" being "security by obscurity" and hoping no one else knew your email address.

Drive shows any shared files in your shared documents folder, notifies you of the share on your phone, highlights the "new recent file" at the top of the Drive interface, lists the file in searches, and sends you an email about it, all without any indication that you know the file sharer at all. For years, some people in my life have been inundated with shared Google Drive files containing porn, ads, dating site scams, and malware. For a long time, there was nothing you could do to support affected users other than disabling Drive notifications, telling them to ignore the highlighted porn ads at the top of their Drive account, and warning them to never click on the "shared files" folder.
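
Until the spam folder ships, about the only programmatic way to audit what strangers have pushed into an account is the Drive API's sharedWithMe query; a minimal sketch, assuming google-api-python-client and a previously completed OAuth flow:

    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    # Assumes a prior OAuth flow saved a token file (setup omitted)
    creds = Credentials.from_authorized_user_file("token.json")
    drive = build("drive", "v3", credentials=creds)

    resp = drive.files().list(
        q="sharedWithMe",
        fields="files(id, name, sharingUser(displayName, emailAddress))",
        pageSize=100,
    ).execute()

    for f in resp.get("files", []):
        sharer = f.get("sharingUser", {})
        # None of these shares needed the account owner's consent to land here
        print(f["name"], "<-", sharer.get("emailAddress", "unknown"))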

Technology

Truecaller Aims To Help WhatsApp Users Combat Spam (reuters.com) 10

Truecaller will soon start making its caller identification service available over WhatsApp and other messaging apps to help users spot potential spam calls over the internet, the company told Reuters on Monday. From a report: The feature, currently in beta phase, will be rolled out globally later in May, Truecaller Chief Executive Alan Mamedi said. Telemarketing and scamming calls have been on the rise in countries like India, where users get about 17 spam calls per month on average, according to a 2021 report by Truecaller. "Over the last two weeks, we have seen a spike in user reports from India about spam calls over WhatsApp," Mamedi said, noting that telemarketers switching to internet calling was fairly new to the market.

The Internet

Imgur To Ban Nudity Or Sexually Explicit Content Next Month 60

Online image hosting service Imgur is updating its Terms of Service on May 15th to prohibit nudity and sexually explicit content, among other things. The news arrived in an email sent to "Imgurians". The changes have since been outlined on the company's "Community Rules" page, which reads: Imgur welcomes a diverse audience. We don't want to create a bad experience for someone that might stumble across explicit images, nor is it in our company ethos to support explicit content, so some lascivious or sexualized posts are not allowed. This may include content containing:

- the gratuitous or explicit display of breasts, butts, and sexual organs intended to stimulate erotic feelings
- full or partial nudity
- any depiction of sexual activity, explicit or implied (drawings, print, animated, human, or otherwise)
- any image taken of or from someone without their knowledge or consent for the purpose of sexualization
- solicitation (the uninvited act of directly requesting sexual content from another person, or selling/offering explicit content and/or adult services)

Content that might be taken down may include: see-thru clothing, exposed or clearly defined genitalia, some images of female nipples/areolas, spread eagle poses, butts in thongs or partially exposed buttocks, close-ups, upskirts, strip teases, cam shows, sexual fluids, private photos from a social media page, or linking to sexually explicit content. Sexually explicit comments that don't include images may also be removed.

Artistic, scientific or educational nude images shared with educational context may be okay here. We don't try to define art or judge the artistic merit of particular content. Instead, we focus on context and intent, as well as what might make content too explicit for the general community. Any content found to be sexualizing and exploiting minors will be removed and, if necessary, reported to the National Center for Missing & Exploited Children (NCMEC). This applies to photos, videos, animated imagery, descriptions and sexual jokes concerning children.
The company is also prohibiting hate speech, abuse or harassment, content that condones illegal or violent activity, gore or shock content, spam or prohibited behavior, content that shares personal information, and posts in general that violate Imgur's terms of service. Meanwhile, "provocative, inflammatory, unsettling, or suggestive content should be marked as Mature," says Imgur.

AI

Reddit Moderators Brace for a ChatGPT Spam Apocalypse (vice.com) 89

Reddit moderators say they already see an increase in spam and that the future will "require a lot of human labor." From a report: In December last year, the moderators of the popular r/AskHistorians Reddit forum noticed posts popping up that appeared to carry the hallmarks of AI-generated text. "They were pretty easy to spot," said Sarah Gilbert, one of the forum's moderators and a postdoctoral associate at Cornell University. "They're not in-depth, they're not comprehensive, and they often contain false information." The team quickly realized their little corner of the internet had become a target for ChatGPT-created content. When ChatGPT launched last year, it set off a seemingly never-ending carousel of hype. According to evangelists, the tech behind ChatGPT may eradicate hundreds of millions of jobs, exhibit "sparks" of singularity-esque artificial general intelligence, and quite possibly destroy the world, but in a way that means you must buy it right now. The less glamorous impacts, like unleashing a tidal wave of AI-produced effluvium on the internet, haven't garnered the same attention so far.

The two-million-strong AskHistorians forum allows non-expert Redditors to submit questions about history topics, and receive in-depth answers from historians. Recent popular posts have probed the hive mind on whether the stress of being "on time" is a modern concept; what a medieval scribe would've done if the monastery cat left an inky paw print on their vellum; and how Genghis Khan got fiber in his diet. Shortly after ChatGPT launched, the forum was experiencing five to 10 ChatGPT posts per day, says Gilbert, which soon ramped up as more people found out about the tool. The frequency has tapered off now, which the team believes may be a consequence of how rigorously they've dealt with AI-produced content: even if the posts aren't being deleted for being written by ChatGPT, they tend to violate the sub's standards for quality.

Security

Novel Social Engineering Attacks Soar 135% Amid Uptake of Generative AI (itpro.com) 15

Researchers from Darktrace have seen a 135% increase in novel social engineering attack emails in the first two months of 2023. IT Pro reports: The cyber security firm said the email attacks targeted thousands of its customers in January and February 2023, an increase which it said matches the adoption rate of ChatGPT. The novel social engineering attacks make use of "sophisticated linguistic techniques," which Darktrace said include increasing text volume, sentence length, and punctuation in emails. Darktrace also found there's been a decrease in the number of malicious emails that are sent with an attachment or link.
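
The signals Darktrace names (text volume, sentence length, punctuation) are straightforward to measure; a toy sketch of how such features might be computed over an email body, with the feature set and weighting purely illustrative rather than Darktrace's actual model:

    import re

    def linguistic_features(body: str) -> dict:
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", body.strip()) if s]
        words = body.split()
        return {
            "text_volume": len(words),
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "punctuation_density": sum(c in ",;:()" for c in body) / max(len(words), 1),
            "has_link": bool(re.search(r"https?://", body)),  # attachments/links now rarer
        }

    print(linguistic_features(
        "Per my earlier note, kindly review the attached invoice; "
        "remittance details, as discussed, follow below."
    ))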

The firm said that this behavior could mean that generative AI, including ChatGPT, is being used by malicious actors to construct targeted attacks rapidly. Survey results indicated that 82% of employees are worried about hackers using generative AI to create scam emails which are indistinguishable from genuine communication. It also found that 30% of employees have fallen for a scam email or text in the past. Darktrace asked survey respondents what the top-three characteristics are that suggest an email is a phish and found:

- 68% said it was being invited to click a link or open an attachment
- 61% said it was due to an unknown sender or unexpected content
- 61% also cited poor use of spelling and grammar

In the last six months, 70% of employees reported an increase in the frequency of scam emails. Additionally, 79% said that their organization's spam filters prevent legitimate emails from entering their inbox. 87% of employees said they were worried about the amount of their personal information online which could be used in phishing or email scams.

Programming

'One In Two New Npm Packages Is SEO Spam Right Now' (sandworm.dev) 37

Gabi Dobocan, writing at auditing firm Sandworm: More than half of all new packages that are currently (29 Mar 2023) being submitted to npm are SEO spam. That is - empty packages, with just a single README file that contains links to various malicious websites. Out of the ~320k new npm packages or versions that Sandworm has scanned over the past week, at least ~185k were labeled as SEO spam. Just in the last hour as of writing this article, 1583 new e-book spam packages have been published. All the identified spam packages are currently live on npmjs.com.
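
Packages like these are easy to fingerprint, since their tarballs contain little beyond a manifest and a link-stuffed README. A heuristic sketch against the public npm registry (the README-only test is an assumption for illustration, not Sandworm's labeling method):

    import io, json, tarfile, urllib.request

    def looks_like_readme_spam(pkg: str) -> bool:
        meta = json.load(urllib.request.urlopen(f"https://registry.npmjs.org/{pkg}"))
        latest = meta["dist-tags"]["latest"]
        data = urllib.request.urlopen(meta["versions"][latest]["dist"]["tarball"]).read()
        with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
            # Member names look like "package/README.md"; drop the prefix
            names = [m.name.split("/", 1)[-1].lower() for m in tar.getmembers()]
        code_files = [n for n in names
                      if n != "package.json" and not n.startswith("readme")]
        return not code_files  # nothing but a manifest and a README

    print(looks_like_readme_spam("left-pad"))  # a real package, so: False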

Microsoft

Microsoft's Outlook Spam Email Filters Are Broken for Many Right Now (theverge.com) 39

New submitter calicuse writes: Microsoft's Outlook spam filters appear to be broken for many users today. I woke up to more than 20 junk messages in my Focused Inbox in Outlook this morning, and spam emails have kept breaking through on an hourly basis today. Many Outlook users in Europe have also spotted the same thing, with some heading to Twitter to complain about waking up to an inbox full of spam messages. Most of the messages that are making it into Outlook users' inboxes are very clearly spam. Today's issues are particularly bad, after weeks of the Outlook spam filter progressively deteriorating for me personally.

Privacy

The Washington Post Says There's 'No Real Reason' to Use a VPN (msn.com) 211

Some people try to hide parts of their email address from online scrapers by spelling out "at" and "dot," notes a Washington Post technology newsletter. But unfortunately, "This spam-fighting trick doesn't work. At all." They warn that it's not just a "piece of anti-spam fiction," but "an example of the digital self-protection myths that drain your time and energy and make you less safe."

"Today, let's kill off four privacy and security bogus beliefs, including that you need a VPN to stay safe online. (No, you probably don't.) Myth No. 3: You need a VPN to stay safe online.

...for most people in the United States and other democracies, "There is no real reason why you should use a VPN," said Frédéric Rivain, chief technology officer of Dashlane, a password management service that also offers a VPN.... If you're researching sensitive subjects like depression and don't want family members to know or corporations to keep records of your activities, Rivain said you might be better off using a privacy-focused web browser such as Brave or the search engine DuckDuckGo. If you use a VPN, that company has records of what you're doing. And advertisers will still figure out how to pitch ads based on your online activities.

P.S. If you're concerned about crooks stealing your info when you use WiFi networks in coffee shops or airports and want to use a VPN to disguise what you're doing, you probably don't need to. Using public WiFi is safe now in most circumstances, my colleague Tatum Hunter has reported.

"Many VPNs are also dodgy and may do far more harm than good," their myth-busting continues, referring readers to an earlier analysis by the Washington Post (with some safe recommendations).

On a more sympathetic note, they acknowledge that "It's exhausting to be a human on the internet. Companies and public officials could be doing far more to protect you."

But as it is, "the internet is a nonstop scam machine and a little paranoia is healthy."
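
The newsletter's opening point, that spelling out "at" and "dot" stops no scraper, is easy to demonstrate: one regex pass normalizes the obfuscated form right back. A minimal sketch:

    import re

    def deobfuscate(text: str) -> str:
        # "jane at example dot com" -> "jane@example.com"
        text = re.sub(r"\s*\(?\bat\b\)?\s*", "@", text, flags=re.I)
        return re.sub(r"\s*\(?\bdot\b\)?\s*", ".", text, flags=re.I)

    print(deobfuscate("reach me: jane at example dot com"))
    # reach me: jane@example.com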

Google

Think Twice Before Using Google To Download Software, Researchers Warn (arstechnica.com) 54

Searching Google for downloads of popular software has always come with risks, but over the past few months, it has been downright dangerous, according to researchers and a pseudorandom collection of queries. Ars Technica reports: "Threat researchers are used to seeing a moderate flow of malvertising via Google Ads," volunteers at Spamhaus wrote on Thursday. "However, over the past few days, researchers have witnessed a massive spike affecting numerous famous brands, with multiple malware being utilized. This is not 'the norm.'"

The surge is coming from numerous malware families, including AuroraStealer, IcedID, Meta Stealer, RedLine Stealer, Vidar, Formbook, and XLoader. In the past, these families typically relied on phishing and malicious spam that attached Microsoft Word documents with booby-trapped macros. Over the past month, Google Ads has become the go-to place for criminals to spread their malicious wares that are disguised as legitimate downloads by impersonating brands such as Adobe Reader, Gimp, Microsoft Teams, OBS, Slack, Tor, and Thunderbird.

On the same day that Spamhaus published its report, researchers from security firm Sentinel One documented an advanced Google malvertising campaign pushing multiple malicious loaders implemented in .NET. Sentinel One has dubbed these loaders MalVirt. At the moment, the MalVirt loaders are being used to distribute malware most commonly known as XLoader, available for both Windows and macOS. XLoader is a successor to malware also known as Formbook. Threat actors use XLoader to steal contacts' data and other sensitive information from infected devices. The MalVirt loaders use obfuscated virtualization to evade end-point protection and analysis. To disguise real C2 traffic and evade network detections, MalVirt beacons to decoy command and control servers hosted at providers including Azure, Tucows, Choopa, and Namecheap.
"Until Google devises new defenses, the decoy domains and other obfuscation techniques remain an effective way to conceal the true control servers used in the rampant MalVirt and other malvertising campaigns," concludes Ars. "It's clear at the moment that malvertisers have gained the upper hand over Google's considerable might."

Security

Yandex Denies Hack, Blames Source Code Leak on Former Employee (bleepingcomputer.com) 11

A Yandex source code repository allegedly stolen by a former employee of the Russian technology company has been leaked as a Torrent on a popular hacking forum. From a report: Yesterday, the leaker posted a magnet link that they claim are 'Yandex git sources' consisting of 44.7 GB of files stolen from the company in July 2022. These code repositories allegedly contain all of the company's source code besides anti-spam rules.

AI

Shutterstock Launches Generative AI Image Tool (gizmodo.com) 34

Shutterstock, one of the internet's biggest sources of stock photos and illustrations, is now offering its customers the option to generate their own AI images. Gizmodo reports: In October, the company announced a partnership with OpenAI, the creator of the wildly popular and controversial DALL-E AI tool. Now, the results of that deal are in beta testing and available to all paying Shutterstock users. The new platform is available in "every language the site offers," and comes included with customers' existing licensing packages, according to a press statement from the company. And, according to Gizmodo's own test, every text prompt you feed Shutterstock's machine results in four images, ostensibly tailored to your request. At the bottom of the page, the site also suggests "More AI-generated images from the Shutterstock library," which offer unrelated glimpses into the void.
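
That four-images-per-prompt behavior matches the shape of OpenAI's public image endpoint, where the caller picks n; a sketch against the 2023-era openai package (this is OpenAI's own API, not Shutterstock's internal integration):

    import openai

    openai.api_key = "sk-..."  # assumes an OpenAI API key

    resp = openai.Image.create(
        prompt="a lighthouse at dusk, watercolor",
        n=4,              # one prompt, four candidate images
        size="512x512",
    )
    for img in resp["data"]:
        print(img["url"])  # temporary URLs to the generated images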

In an attempt to pre-empt concerns about copyright law and artistic ethics, Shutterstock has said it uses "datasets licensed from Shutterstock" to train its DALL-E and LG EXAONE-powered AI. The company also claims it will pay artists whose work is used in its AI-generation. Shutterstock plans to do so through a "Contributor Fund." That fund "will directly compensate Shutterstock contributors if their IP was used in the development of AI-generative models, like the OpenAI model, through licensing of data from Shutterstock's library," the company explains in an FAQ section on its website. "Shutterstock will continue to compensate contributors for the future licensing of AI-generated content through the Shutterstock AI content generation tool," it further says.

Further, Shutterstock includes a clever caveat in their use guidelines for AI images. "You must not use the generated image to infringe, misappropriate, or violate the intellectual property or other rights of any third party, to generate spam, false, misleading, deceptive, harmful, or violent imagery," the company notes. And, though I am not a legal expert, it would seem this clause puts the onus on the customer to avoid ending up in trouble. If a generated image includes a recognizable bit of trademarked material, or spits out a celebrity's likeness -- it's on the user of Shutterstock's tool to notice and avoid republishing the problem content.
