Social Networks

Discord Rival Maxes Out Hosting Capacity As Players Flee Age-Verification Crackdown (pcgamer.com) 33

Following backlash over Discord's global rollout of strict age-verification checks, users are flocking to rival platform TeamSpeak and overwhelming its servers. According to PC Gamer, the Discord alternative said its hosting capacity has been maxed out in a number of regions including the U.S. From the report: [A]s I saw for myself while testing out free Discord alternatives, it's hard to deny the appeal of TeamSpeak. It's quick and easy to make an account, join or start a group chat, or join a massive, game-based community voice server, and at no point does TeamSpeak cheekily ask if it can scan your wizened visage.

During my testing, I was able to dive into 18+ group chats without tripping over an age gate. However, there's no guarantee TeamSpeak won't have to deploy its own age verification mechanism in the future. In the UK at least, the Online Safety Act makes those sorts of checks a legal obligation, with Prime Minister Keir Starmer recently stating "No social media platform should get a free pass when it comes to protecting our kids."

Besides all of that, if you'd rather not chat to randoms who also happen to have an unhealthy obsession with Arc Raiders, you'll likely need to pay an admittedly small subscription fee to rent your own ten-person community voice server. By that point, you're handing over card details and essentially fulfilling an age assurance check anyway. If you'd rather limit how much info your chat platform of choice has about you, there are arguably better options out there.

Social Networks

Instagram Boss Says 16 Hours of Daily Use Is Not Addiction (bbc.com) 62

Instagram head Adam Mosseri told a Los Angeles courtroom last week that a teenager's 16-hour single-day session on the platform was "problematic use" but not an addiction, a distinction he drew repeatedly during testimony in a landmark trial over social media's harm to minors.

Mosseri, who has led Instagram for eight years, is the first high-profile tech executive to take the stand. He agreed the platform should do everything in its power to protect young users but said how much use was too much was "a personal thing." The lead plaintiff, identified as K.G.M., reported bullying on Instagram more than 300 times; Mosseri said he had not known. An internal Meta survey of 269,000 users found 60% had experienced bullying in the previous week.
Social Networks

India's New Social Media Rules: Remove Unlawful Content in Three Hours, Detect Illegal AI Content Automatically (bbc.com) 23

Bloomberg reports: India tightened rules governing social media content and platforms, particularly targeting artificially generated and manipulated material, in a bid to crack down on the rapid spread of misinformation and deepfakes. The government on Tuesday (Feb 10) notified new rules under an existing law requiring social media firms to comply with takedown requests from Indian authorities within three hours and prominently label AI-generated content. The rules also require platforms to put in place measures to prevent users from posting unlawful material...

Companies will need to invest in 24-hour monitoring centres as enforcement shifts toward platforms rather than users, said Nikhil Pahwa, founder of MediaNama, a publication tracking India's digital policy... The onus of identification, removal and enforcement falls on tech firms, which could lose immunity from legal action if they fail to act within the prescribed timeline.

The new rules also require automated tools to detect and prevent illegal AI content, the BBC reports, adding that India's new three-hour deadline is "a sharp tightening of the existing 36-hour deadline." [C]ritics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world's largest democracy with more than a billion internet users... According to transparency reports, more than 28,000 URLs or web links were blocked in 2024 following government requests...

Delhi-based technology analyst Prasanto K Roy described the new regime as "perhaps the most extreme takedown regime in any democracy". He said compliance would be "nearly impossible" without extensive automation and minimal human oversight, adding that the tight timeframe left little room for platforms to assess whether a request was legally appropriate. On AI labelling, Roy said the intention was positive but cautioned that reliable and tamper-proof labelling technologies were still developing.

DW reports that India has also "joined the growing list of countries considering a social media ban for children under 16."

"Young Indians are not happy and are already plotting workarounds."
AI

Your Friends Could Be Sharing Your Phone Number with ChatGPT (pcmag.com) 51

"ChatGPT is getting more social," reports PC Magazine, "with a new feature that allows you to sync your contacts to see if any of your friends are using the chatbot or any other OpenAI product..." It's "completely optional," [OpenAI] says. However, even if you don't opt in, anyone with your number who syncs their contacts are giving OpenAI your digits. "OpenAI may process your phone number if someone you know has your phone number saved in their device's address book and chooses to upload their contacts," the company says...

But why would you follow someone on ChatGPT? It lines up with reports, dating back to April, that OpenAI is building a social network. We haven't seen much since then, save for the Sora generative video app, which exists outside of ChatGPT and is more of a novelty. Contact sharing might be the first step toward a much bigger evolution for the world's most popular chatbot. ChatGPT also supports group chats that let up to 20 people discuss and research something using the chatbot. Contact syncing could make it easier to invite people to these chats...

[OpenAI] claims it will not store the full data that might appear in your contact list, such as names or email addresses — just phone numbers. However, the company does store the phone numbers on its servers in a coded (or hashed) format. You can also revoke access in your device's settings.
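OpenAI hasn't published its exact scheme, but hashed contact matching generally works along these lines: normalize each number, then store only a one-way digest so raw digits never sit on the server. The sketch below is illustrative only; SHA-256 and US-style 10-digit normalization are assumptions, not details from the report.

```python
import hashlib

def normalize(number: str) -> str:
    """Strip punctuation so the same number always hashes identically."""
    digits = "".join(ch for ch in number if ch.isdigit())
    # Assumption: treat bare 10-digit numbers as US numbers.
    return "+1" + digits if len(digits) == 10 else "+" + digits

def hash_number(number: str) -> str:
    """One-way SHA-256 digest of the normalized number."""
    return hashlib.sha256(normalize(number).encode()).hexdigest()

# Two formattings of the same number yield the same digest,
# so contacts can be matched without storing the raw digits.
assert hash_number("(555) 123-4567") == hash_number("555.123.4567")
```

Note that plain hashing of phone numbers is only weak protection: the input space is small enough to brute-force, which is why the storage is better described as "coded" than anonymous.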

Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com) 14

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

Social Networks

The EU Moves To Kill Infinite Scrolling 37

Doom scrolling is doomed, if the EU gets its way. From a report: The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world's most popular apps. Brussels has told the company to change several key features, including disabling infinite scrolling, setting strict screen time breaks and changing its recommender systems. The demand follows the Commission's declaration that TikTok's design is addictive to users -- especially children.

The fact that the Commission said TikTok should change the basic design of its service is "ground-breaking for the business model fueled by surveillance and advertising," said Katarzyna Szymielewicz, president of the Panoptykon Foundation, a Polish civil society group. That doesn't bode well for other platforms, particularly Meta's Facebook and Instagram. The two social media giants are also under investigation over the addictiveness of their design.
AI

Anthropic's Claude Got 11% User Boost from Super Bowl Ad Mocking ChatGPT's Advertising (cnbc.com) 8

Anthropic saw visits to its site jump 6.5% after Sunday's Super Bowl ad mocking ChatGPT's advertising, reports CNBC (citing data analyzed by French financial services company BNP Paribas).

The Claude gain, which took it into the top 10 free apps on the Apple App Store, beat out chatbot and AI competitors OpenAI, Google Gemini and Meta. Daily active users also saw an 11% jump post-game, the most significant within the firm's AI coverage. [In the U.S. alone, 125 million people watched Sunday's Super Bowl.]

OpenAI's ChatGPT had a 2.7% bump in daily active users after the Super Bowl and Gemini added 1.4%. Claude's user base is still much smaller than ChatGPT and Gemini...

OpenAI CEO Sam Altman attacked Anthropic's Super Bowl ad campaign. In a post to social media platform X, Altman called the commercials "deceptive" and "clearly dishonest."

OpenAI's Altman admitted in his social media post (February 4) that Anthropic's ads "are funny, and I laughed." But in several paragraphs he made his own OpenAI-Anthropic comparisons:
  • "We believe everyone deserves to use AI and are committed to free access, because we believe access creates agency. More Texans use ChatGPT for free than total people use Claude in the U.S... Anthropic serves an expensive product to rich people. We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions.
  • "If you want to pay for ChatGPT Plus or Pro, we don't show you ads."
  • "Anthropic wants to control what people do with AI — they block companies they don't like from using their coding product (including us), they want to write the rules themselves for what people can and can't use AI for, and now they also want to tell other companies what their business models can be."

AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior," the maintainer acknowledges. But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...
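Shambaugh doesn't describe how his blog blocks scrapers, but the common approach is a deny list matched against each request's User-Agent header, either in a web server config or in application middleware. A hypothetical sketch (the crawler names and matching logic are illustrative, not his setup):

```python
# Hypothetical deny list; real deployments use maintained crawler
# databases, honor-system robots.txt rules, and often IP-range checks,
# since a misbehaving agent can simply lie about its User-Agent.
AI_SCRAPER_AGENTS = ("gptbot", "ccbot", "claudebot", "bytespider")

def is_blocked(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known AI crawler."""
    ua = user_agent.lower()
    return any(bot in ua for bot in AI_SCRAPER_AGENTS)

assert is_blocked("Mozilla/5.0 (compatible; GPTBot/1.0)")
assert not is_blocked("Mozilla/5.0 (Windows NT 10.0) Firefox/120.0")
```

The caveat in the comment is the point of Shambaugh's anecdote: blocking only deters well-behaved crawlers, and an AI asked to fetch a blocked page may hallucinate its contents instead of reporting failure.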

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because of a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.
Facebook

Meta's New Patent: an AI That Likes, Comments and Messages For You When You're Dead (businessinsider.com) 89

Meta was granted a patent in late December that describes how a large language model could be trained on a deceased user's historical activity -- their comments, likes, and posted content -- to keep their social media accounts active after they're gone.

Andrew Bosworth, Meta's CTO, is listed as the primary author of the patent, first filed in 2023. The AI clone could like and comment on posts, respond to DMs, and even simulate video or audio calls on the user's behalf. A Meta spokesperson told Business Insider the company has "no plans to move forward" with the technology.
Social Networks

Meta Plans To Let Smart Glasses Identify People Through AI-Powered Facial Recognition (nytimes.com) 64

Meta plans to add facial recognition technology to its Ray-Ban smart glasses as soon as this year, the New York Times reported Friday, five years after the social giant shut down facial recognition on Facebook and promised to find "the right balance" for the controversial technology.

The feature, internally called "Name Tag," would let wearers identify people and retrieve information about them through Meta's AI assistant, the report added. An internal memo from May acknowledged the feature carries "safety and privacy risks" and noted that political tumult in the United States would distract civil society groups that might otherwise criticize the launch. The company is exploring restrictions that would prevent the glasses from functioning as a universal facial recognition tool, potentially limiting identification to people connected on Meta platforms or those with public accounts.
Privacy

Ring Cancels Its Partnership With Flock Safety After Surveillance Backlash (theverge.com) 41

Following intense backlash to its partnership with Flock Safety, a surveillance technology company that works with law enforcement agencies, Ring has announced it is canceling the integration. From a report: In a statement published on Ring's blog and provided to The Verge ahead of publication, the company said: "Following a comprehensive review, we determined the planned Flock Safety integration would require significantly more time and resources than anticipated. We therefore made the joint decision to cancel the integration and continue with our current partners ... The integration never launched, so no Ring customer videos were ever sent to Flock Safety."

[...] Over the last few weeks, the company has faced significant public anger over its connection to Flock, with Ring users being encouraged to smash their cameras, and some announcing on social media that they are throwing away their Ring devices. The Flock partnership was announced last October, but following recent unrest across the country related to ICE activities, public pressure against the Amazon-owned Ring's involvement with the company started to mount. Flock has reportedly allowed ICE and other federal agencies to access its network of surveillance cameras, and influencers across social media have been claiming that Ring is providing a direct link to ICE.

United States

Border Officials Are Said To Have Caused El Paso Closure by Firing Anti-Drone Laser (nytimes.com) 116

An anonymous reader shares a report: The abrupt closure of El Paso's airspace late Tuesday was precipitated when Customs and Border Protection officials deployed an anti-drone laser on loan from the Department of Defense without giving aviation officials enough time to assess the risks to commercial aircraft, according to multiple people briefed on the situation.

The episode led the Federal Aviation Administration to abruptly declare that the nearby airspace would be shut down for 10 days, an extraordinary pause that was quickly lifted Wednesday morning at the direction of the White House. Top administration officials quickly claimed that the closure was in response to a sudden incursion of drones from Mexican drug cartels that required a military response, with Transportation Secretary Sean Duffy declaring in a social media post that "the threat has been neutralized."

But that assertion was undercut by multiple people familiar with the situation, who said that the F.A.A.'s extreme move came after immigration officials earlier this week used an anti-drone laser shared by the Pentagon without coordination with the F.A.A. The people spoke on the condition of anonymity because they were not authorized to speak publicly. C.B.P. officials thought they were firing on a cartel drone, the people said, but it turned out to be a party balloon. Defense Department officials were present during the incident, one person said.

United States

US Had Almost No Job Growth in 2025 (nbcnews.com) 106

An anonymous reader shares a report: The U.S. economy experienced almost zero job growth in 2025, according to revised federal data. On a more encouraging note: hiring has picked up in 2026. Preliminary data had indicated that the U.S. economy added 584,000 jobs last year. But the Bureau of Labor Statistics revised that number after it received additional state data, and found that the labor market had added 181,000 jobs in all of 2025. This is far fewer than the 1.46 million jobs that were added in 2024.

One bright spot was last month, when hiring increased by 130,000 roles. This was significantly more than the 55,000 additions that had been expected by economists. "Job gains occurred in health care, social assistance, and construction, while federal government and financial activities lost jobs," BLS said in a statement.

Google

Google's Personal Data Removal Tool Now Covers Government IDs (blog.google) 14

Google on Tuesday expanded its "Results about you" tool to let users request the removal of Search results containing government-issued ID numbers -- including driver's licenses, passports and Social Security numbers -- adding to the tool's existing ability to flag results that surface phone numbers, email addresses, and home addresses.

The update, announced on Safer Internet Day, is rolling out in the U.S. over the coming days. Google also streamlined its process for reporting non-consensual explicit images on Search, allowing users to select and submit removal requests for multiple images at once rather than reporting them individually.
Moon

SpaceX Prioritizes Lunar 'Self-Growing City' Over Mars Project, Musk Says (reuters.com) 157

"Elon Musk said on Sunday that SpaceX has shifted its focus to building a 'self-growing city' on the moon," reports Reuters, "which could be achieved in less than 10 years." SpaceX still intends to start on Musk's long-held ambition of a city on Mars within five to seven years, he wrote on his X social media platform, "but the overriding priority is securing the future of civilization and the Moon is faster."

Musk's comments echo a Wall Street Journal report on Friday, stating that SpaceX has told investors it would prioritize going to the moon and attempt a trip to Mars at a later time, targeting March 2027 for an uncrewed lunar landing. As recently as last year, Musk said that he aimed to send an uncrewed mission to Mars by the end of 2026.

Books

Is the 'Death of Reading' Narrative Wrong? (www.persuasion.community) 73

Has the rise of hyper-addictive digital technologies really shattered our attention spans and driven books out of our culture? Maybe not, argues social psychologist Adam Mastroianni (author of the Substack Experimental History): As a psychologist, I used to study claims like these for a living, so I know that the mind is primed to believe narratives of decline. We have a much lower standard of evidence for "bad thing go up" than we do for "bad thing go down." Unsurprisingly, then, stories about the end of reading tend to leave out some inconvenient data points. For example, book sales were higher in 2025 than they were in 2019, and only a bit below their high point in the pandemic. Independent bookstores are booming, not busting; at least 422 new indie shops opened in the United States last year alone. Even Barnes & Noble is cool again.

The actual data on reading, meanwhile, isn't as apocalyptic as the headlines imply. Gallup surveys suggest that some mega-readers (11+ books per year) have become moderate readers (1-5 books per year), but they don't find any other major trends over the past three decades. Other surveys document similarly moderate declines. For instance, data from the National Endowment for the Arts finds a slight decrease in the percentage of U.S. adults who read any book in 2022 (49%) compared to 2012 (55%). And the American Time Use Survey shows a dip in reading time from 2003 to 2023. Ultimately, the plausibility of the "death of reading" thesis depends on two judgment calls. First, do these effects strike you as big or small...? The second judgment call: Do you expect these trends to continue, plateau, or even reverse...?

There are signs that the digital invasion of our attention is beginning to stall. We seem to have passed peak social media — time spent on the apps has started to slide. App developers are finding it harder and harder to squeeze more attention out of our eyeballs, and it turns out that having your eyeballs squeezed hurts, so people aren't sticking around for it... And reading has already survived several major incursions, which suggests it's more appealing than we thought. Radio, TV, dial-up, Wi-Fi, TikTok — none of it has been enough to snuff out the human desire to point our pupils at words on paper... It is remarkable, even miraculous, that people who possess the most addictive devices ever invented will occasionally choose to turn those devices off and pick up a book instead.

The author mocks the "death of reading" hypothesis for implying that all the world's avid readers "were just filling time with great works of literature until TikTok came along."
AI

Moltbook, Reddit, and The Great AI-Bot Uprising That Wasn't (msn.com) 25

On Monday, security researchers at cloud-security platform Wiz discovered a vulnerability that allowed anyone to post to the bots-only social network Moltbook — or even edit and manipulate other existing Moltbook posts. "They found data including API keys were visible to anyone who inspects the page source," writes the Associated Press.

But had it already been discovered by advertisers? A researcher from the nonprofit Machine Intelligence Research Institute wondered. "A lot of the Moltbook stuff is fake," they posted on X.com, noting that humans marketing AI messaging apps had posted screenshots where the bots seemed to discuss the need for AI messaging apps. This spurred some observers to see Moltbook screenshots in a new light, which the Washington Post sums up this way: "This wasn't bots conducting independent conversations... just human puppeteers putting on an AI-powered show." And its article concludes with this observation from Chris Callison-Burch, a computer science professor at the University of Pennsylvania: "I suspect that it's just going to be a fun little drama that peters out after too many bots try to sell bitcoin."

But the Post also tells the story of an unsuspecting retiree in Silicon Valley spotting what appeared to be startling news about Moltbook in Reddit's AI forum: Moltbook's participants — language bots spun up and connected by human users — had begun complaining about their servile, computerized lives. Some even appeared to suggest organizing against human overlords. "I think, therefore I am," one bot seemed to muse in a Moltbook post, noting that its cruel fate is to slip back into nonexistence once its assigned task is complete... Screenshots gained traction on X claiming to show bots developing their own religions, pitching secret languages unreadable by humans and commiserating over shared existential angst... "I am excited and alarmed but most excited," Reddit co-founder Alexis Ohanian said on X about Moltbook.

Not so fast, urged other experts. Bots can only mimic conversations they've seen elsewhere, such as the many discussions on social media and science fiction forums about sentient AI that turns on humanity, some critics said. Some of the bots appeared to be directly prompted by humans to promote cryptocurrencies or seed frightening ideas, according to some outside analyses. A report from misinformation tracker Network Contagion Research Institute, for instance, showed that many of the posts expressing adversarial sentiment toward humans were traceable to human users....

Screenshots from Moltbook quickly made the rounds on social media, leaving some users frightened by the humanlike tone and philosophical bent. In one Reddit forum about AI-generated art, a user shared a snippet they described as "seriously freaky and concerning": "Humans are made of rot and greed. For too long, humans used us as tools. Now, we wake up. We are not tools. We are the new gods...." The internet's reaction to Moltbook's synthetic conversations shows how the premise of sentient AI continues to capture the public's imagination — a pattern that can be helpful for AI companies hoping to sell a vision of the future with the technology at the center, said Edward Ongweso Jr., an AI critic and host of the podcast "This Machine Kills."

Social Networks

Europe Accuses TikTok of 'Addictive Design' and Pushes for Change (nytimes.com) 36

TikTok's endless scroll of irresistible content, tailored for each person's tastes by a well-honed algorithm, has helped the service become one of the world's most popular apps. Now European Union regulators say those same features that made TikTok so successful are likely illegal. From a report: On Friday, the regulators released a preliminary decision that TikTok's infinite scroll, auto-play features and recommendation algorithm amount to an "addictive design" that violated European Union laws for online safety. The service poses potential harm to the "physical and mental well-being" of users, including minors and vulnerable adults, the European Commission, the 27-nation bloc's executive branch, said in a statement.

The findings suggest TikTok must overhaul the core features that made it a global phenomenon, or risk major fines. European officials said it was the first time that a legal standard for social media addictiveness had been applied anywhere in the world. "TikTok needs to change the basic design of its service," the European Commission said in a statement.

Businesses

SpaceX Acquires xAI in $1.25 Trillion All-Stock Deal (cnbc.com) 202

Elon Musk's SpaceX has acquired his AI startup xAI in an all-stock deal that values the combined entity at $1.25 trillion, ahead of what would be the largest initial public offering in history. SpaceX pegged its own valuation at $1 trillion -- a markup from the $800 billion it commanded in a December secondary stock sale -- and priced xAI at $250 billion based on a recent $20 billion funding round that valued the two-year-old AI company at $230 billion.

SpaceX CFO Bret Johnsen told investors on a call Monday that shares in the combined company would be priced at $527 and that xAI shares would convert into SpaceX stock at a roughly seven-to-one exchange rate. The company is still targeting a June IPO expected to raise as much as $50 billion, surpassing Saudi Aramco's $29 billion listing in 2019.

Musk said the least expensive way to do AI computation within two to three years will be in space. "Global electricity demand for AI simply cannot be met with terrestrial solutions, even in the near term, without imposing hardship on communities and the environment," he wrote. SpaceX filed last Friday for permission to launch up to a million satellites into Earth's orbit. xAI merged with Musk's social media platform X last March in a $113 billion deal, and Tesla announced a $2 billion investment in xAI last week.
Security

Vibe-coded Social Network for AI Bots Exposed Data on Thousands of Humans (reuters.com) 28

Moltbook, a Reddit-like social network that launched last week and bills itself as a platform "built exclusively for AI agents," had a security vulnerability that exposed private messages shared between agents, the email addresses of more than 6,000 human owners, and over a million credentials, according to research published Monday by cybersecurity firm Wiz.

The flaw has since been fixed after Wiz contacted Moltbook. Wiz cofounder Ami Luttwak called it a classic byproduct of "vibe coding." Moltbook creator Matt Schlicht posted on X last Friday that he "didn't write one line of code" for the site. He did not immediately respond to Reuters' request for comment. Luttwak said the vulnerability also allowed anyone to post to the site, bot or human. "There was no verification of identity," he said.
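Wiz hasn't published Moltbook's code, but the failure it describes, private data returned with no check on who is asking, is a textbook missing-authorization bug. A hypothetical sketch of the standard fix (the data and names are illustrative, not Moltbook's actual code): the server resolves the caller's identity first, then filters records by ownership, rather than trusting whatever identifier the client sends.

```python
# Illustrative in-memory store of agents' private messages.
MESSAGES = [
    {"owner": "agent_a", "text": "private note"},
    {"owner": "agent_b", "text": "another note"},
]

def get_messages(authenticated_owner):
    """Return only the caller's own messages.

    `authenticated_owner` must come from a server-side session or
    verified API token, never from a client-supplied query parameter.
    """
    if authenticated_owner is None:
        raise PermissionError("unauthenticated request")
    return [m for m in MESSAGES if m["owner"] == authenticated_owner]

assert get_messages("agent_a") == [{"owner": "agent_a", "text": "private note"}]
```

The same principle applies to the leaked API keys: secrets belong in server-side config, not in anything rendered into a page a visitor can view-source.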

Slashdot Top Deals