Social Networks

Automattic CEO Calls Tumblr His 'Biggest Failure' So Far (techcrunch.com) 28

An anonymous reader quotes a report from TechCrunch: WordPress co-founder and Automattic CEO Matt Mullenweg called the company's Tumblr acquisition his biggest failure -- but one he hasn't given up on yet. The comments were made at the recent WordCamp Canada 2025 conference, where Mullenweg went live for a Town Hall session to connect with the open source-focused WordPress community.

The exec noted that Tumblr still runs on a different technical stack than WordPress -- something he had intended to correct by migrating the back end to WordPress infrastructure. However, that massive undertaking was put on hold earlier this year: the cost of moving Tumblr's half-billion blogs would be hard to justify, given that the blogging platform isn't profitable and continues to be sustained by the profits of other Automattic products.

The company has tried to trim costs with layoffs and the reallocation of Tumblr resources to more profitable parts of the business, but those efforts have yet to pay off. Mullenweg acknowledged these concerns at his Town Hall session, saying, "I need to switch [Tumblr] over to WordPress, but it's a big lift. It's over 500 million blogs, actually, and, as a business, it's costing so much more to run than it generates in revenue." As a result, Automattic had to prioritize other projects to make Tumblr sustainable, he said. "It's probably my biggest failure or missed opportunity right now, but we're still working on it," he added.

Social Networks

TikTok's New Policies Remove Promise To Notify Users Before Government Data Disclosure (forbes.com) 40

TikTok changed its policies earlier this year on sharing user data with governments as the company negotiated with the Trump Administration to continue operating in the United States. The company added language allowing data sharing with "regulatory authorities, where relevant" beyond law enforcement. Until April 25, 2025, TikTok's website stated the company would notify users before disclosing their data to law enforcement. The policy now says TikTok will inform users only where required by law, and shifts the timing of notice from before disclosure to only if and when disclosure occurs. The company also softened its language from stating it "rejects data requests from law enforcement authorities" to saying it "may reject" such requests. TikTok declined to answer repeated questions from Forbes about whether it has shared or is sharing private user information with the Department of Homeland Security or Immigration and Customs Enforcement. The timing change matters because users notified only after the fact cannot challenge a subpoena before their data is handed over.
Space

Mystery Object From 'Space' Strikes United Airlines Flight Over Utah (wired.com) 58

UPDATE: It may have been a weather balloon...

An anonymous reader quotes a report from Wired: The National Transportation Safety Board confirmed Sunday that it is investigating an airliner that was struck by an object in its windscreen, mid-flight, over Utah. "NTSB gathering radar, weather, flight recorder data," the federal agency said on the social media site X. "Windscreen being sent to NTSB laboratories for examination." The strike occurred Thursday, during a United Airlines flight from Denver to Los Angeles. Images shared on social media showed that one of the two large windows at the front of a 737 MAX aircraft was significantly cracked. Related images also reveal a pilot's arm that has been cut multiple times by what appear to be small shards of glass.

The captain of the flight reportedly described the object that hit the plane as "space debris." This has not been confirmed, however. After the impact, the aircraft safely landed at Salt Lake City International Airport after being diverted. Images of the strike showed that an object made a forceful impact near the upper-right part of the window, showing damage to the metal frame. Because aircraft windows are multiple layers thick, with laminate in between, the window pane did not shatter completely. The aircraft was flying above 30,000 feet -- likely around 36,000 feet -- and the cockpit apparently maintained its cabin pressure.

Books

Was the Web More Creative and Human 20 Years Ago? (bookforum.com) 77

Readers in 2025 "may struggle to remember the optimism of the aughts, when the internet seemed to offer endless possibilities for virtual art and writing that was free..." argues a new review at Bookforum. "The content we do create online, if we still create, often feels unreflectively automatic: predictable quote-tweet dunks, prefabricated poses on Instagram, TikTok dances that hit their beats like clockwork, to say nothing of what's literally thoughtlessly churned out by LLM-powered bots."

They write that author Joanna Walsh "wants us to remember how truly creative, and human, the internet once was," in the golden age of user-generated content — and funny cat picture sites like I Can Has Cheezburger: I Can Has Cheezburger... was an amateur project, an outlet for tech professionals who wanted an easier way to exchange cute cat pics after a hard day at work. In Amateurs!: How We Built Internet Culture and Why It Matters, Walsh documents how unpaid creative labor is the basis for almost everything that's good (and much that's bad) online, including the open-source code Linux, developed by Linus Torvalds when he was still in school ("just as a hobby, won't be big and professional"), and even, in Walsh's account, the World Wide Web itself. The platforms that emerged in the 2000s as "Web 2.0," including Facebook, YouTube, Reddit, and Twitter, allowed anyone to experiment in a space that had been reserved for coders and hackers, making the internet interactive even for the inexpert and virtually unlimited in potential audience. The explosion in amateur creativity that followed took many forms, from memes to tweeted one-liners to diaristic blogs to durational digital performances to sloppy Photoshops to the formal and informal taxonomic structures — wikis, neologisms, digitally native dialects...

[U]ser-generated content was also, at bottom, about the bottom line, a business model sold to us under the guise of artistic empowerment. Even referring to an anonymous amateur as a "user," Walsh argues, cedes ground: these platforms are populated by producers, but their owners see us as, and turn us into, "helpless addicts." For some, online amateurism translated to professional success, a viral post earning an author a book deal, or a reputation as a top commenter leading to a staff writing job on a web publication... But for most, these days, participation in the online attention economy feels like a tax, or maybe a trickle of revenue, rather than free fun or a ticket to fame. The few remaining professionals in the arts and letters have felt pressured to supplement their full-time jobs with social media self-promotion, subscription newsletters, podcasts, and short-form video. On what was once called Twitter, users can pay, and sometimes get paid, to post with greater reach...

The chapters are bookended by an introduction on the early promise of 2004 and a coda on the defeat of 2025 and supplemented by an appendix with a straightforward timeline of the major events and publications that serve as the book's touchstones... The online spaces where amateur content creators once "created and steered online culture" have been hollowed out and replaced by slop, but what really hurts is that the slop is being produced by bots trained on precisely that amateur content.

IT

To Fight Business 'Enshittification', Cory Doctorow Urges Tech Workers: Join Unions (acm.org) 136

Cory Doctorow has always warned that companies "enshittify" their services — shifting "as much as they can from users, workers, suppliers, and business customers to themselves." But this week Doctorow writes in Communications of the ACM that enshittification "would be much, much worse if not for tech workers," who have "the power to tell their bosses to go to hell..." When your skills are in such high demand that you can quit your job, walk across the street, and get a better one later that same day, your boss has a real incentive to make you feel like you are their social equal, empowered to say and do whatever feels technically right... The per-worker revenue for successful tech companies is unfathomable — tens or even hundreds of times their wages and stock compensation packages.
"No wonder tech bosses are so excited about AI coding tools," Doctorow adds, "which promise to turn skilled programmers from creative problem-solvers to mere code reviewers for AI as it produces tech debt at scale. Code reviewers never tell their bosses to go to hell, and they are a lot easier to replace."

So how should tech workers respond, now that they are "as disposable as Amazon warehouse workers and drivers...?" Throughout the entire history of human civilization, there has only ever been one way to guarantee fair wages and decent conditions for workers: unions. Even non-union workers benefit from unions, because strong unions are the force that causes labor protection laws to be passed, which protect all workers. Tech workers have historically been monumentally uninterested in unionization, and it's not hard to see why. Why go to all those meetings and pay those dues when you could tell your boss to go to hell on Tuesday and have a new job by Wednesday? That's not the case anymore. It will likely never be the case again.

Interest in tech unions is at an all-time high. Groups such as Tech Solidarity and the Tech Workers Coalition are doing a land-office business, and copies of Ethan Marcotte's You Deserve a Tech Union are flying off the shelves. Now is the time to get organized. Your boss has made it clear how you'd be treated if they had their way. They're about to get it.

Thanks to long-time Slashdot reader theodp for sharing the article.
Microsoft

Extortion and Ransomware Drive Over Half of Cyberattacks — Sometimes Using AI, Microsoft Finds (microsoft.com) 23

Microsoft said in a blog post this week that "over half of cyberattacks with known motives were driven by extortion or ransomware... while attacks focused solely on espionage made up just 4%."

And Microsoft's annual digital threats report found operations expanding even more through AI, with cybercriminals "accelerating malware development and creating more realistic synthetic content, enhancing the efficiency of activities such as phishing and ransomware attacks." [L]egacy security measures are no longer enough; we need modern defenses leveraging AI and strong collaboration across industries and governments to keep pace with the threat...

Over the past year, both attackers and defenders harnessed the power of generative AI. Threat actors are using AI to boost their attacks by automating phishing, scaling social engineering, creating synthetic media, finding vulnerabilities faster, and creating malware that can adapt itself... For defenders, AI is also proving to be a valuable tool. Microsoft, for example, uses AI to spot threats, close detection gaps, catch phishing attempts, and protect vulnerable users. As both the risks and opportunities of AI rapidly evolve, organizations must prioritize securing their AI tools and training their teams...

Amid the growing sophistication of cyber threats, one statistic stands out: more than 97% of identity attacks are password attacks. In the first half of 2025 alone, identity-based attacks surged by 32%. That means the vast majority of malicious sign-in attempts an organization might receive are via large-scale password guessing attempts. Attackers get usernames and passwords ("credentials") for these bulk attacks largely from credential leaks. However, credential leaks aren't the only place where attackers can obtain credentials. This year, we saw a surge in the use of infostealer malware by cybercriminals...

Luckily, the solution to identity compromise is simple. The implementation of phishing-resistant multifactor authentication (MFA) can stop over 99% of this type of attack even if the attacker has the correct username and password combination.
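To illustrate why phishing-resistant MFA defeats bulk password attacks (this sketch is not from Microsoft's report): the second factor proves possession of a key the attacker never obtains, so a correct username/password alone fails. A minimal Python challenge-response sketch of that possession check; real deployments use origin-bound public-key credentials such as WebAuthn rather than the shared secret used here for brevity:

```python
import hashlib
import hmac
import secrets

# Server-side registry: each user has a password hash AND a device-held key.
# (Illustrative only; phishing-resistant MFA uses asymmetric, origin-bound keys.)
users = {
    "alice": {
        "password_hash": hashlib.sha256(b"hunter2").hexdigest(),
        "device_key": secrets.token_bytes(32),  # provisioned to Alice's authenticator
    }
}

def issue_challenge() -> bytes:
    """Fresh random nonce per login attempt, so responses can't be replayed."""
    return secrets.token_bytes(16)

def device_sign(device_key: bytes, challenge: bytes) -> bytes:
    """What the legitimate authenticator computes locally over the challenge."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def login(username: str, password: str, challenge: bytes, response: bytes) -> bool:
    user = users.get(username)
    if user is None:
        return False
    if hashlib.sha256(password.encode()).hexdigest() != user["password_hash"]:
        return False
    # Possession check: even a correct (leaked) password fails without the key.
    expected = device_sign(user["device_key"], challenge)
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
good = device_sign(users["alice"]["device_key"], challenge)
```

A credential-stuffing attacker holding "alice"/"hunter2" can pass the password check but cannot produce a valid response to a fresh challenge, which is the property the report's ">99%" figure rests on.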

"Security is not only a technical challenge but a governance imperative..." Microsoft adds in their blog post. "Governments must build frameworks that signal credible and proportionate consequences for malicious activity that violates international rules." (The report also found that America is the #1 most-targeted country — and that many U.S. companies have outdated cyber defenses.)

But while "most of the immediate attacks organizations face today come from opportunistic criminals looking to make a profit," Microsoft writes that nation-state threats "remain a serious and persistent threat." More details from the Associated Press: Russia, China, Iran and North Korea have sharply increased their use of artificial intelligence to deceive people online and mount cyberattacks against the United States, according to new research from Microsoft. This July, the company identified more than 200 instances of foreign adversaries using AI to create fake content online, more than double the number from July 2024 and more than ten times the number seen in 2023.
Examples of foreign espionage cited by the article:
  • China is continuing its broad push across industries to conduct espionage and steal sensitive data...
  • Iran is going after a wider range of targets than ever before, from the Middle East to North America, as part of broadening espionage operations...
  • "[O]utside of Ukraine, the top ten countries most affected by Russian cyber activity all belong to the North Atlantic Treaty Organization (NATO) — a 25% increase compared to last year."
  • North Korea remains focused on revenue generation and espionage...

There was one especially worrying finding. The report found that critical public services are often targeted, partly because their tight budgets limit their incident response capabilities, "often resulting in outdated software.... Ransomware actors in particular focus on these critical sectors because of the targets' limited options. For example, a hospital must quickly resolve its encrypted systems, or patients could die, potentially leaving no other recourse but to pay."


Privacy

Prosper Data Breach Impacts 17.6 Million Accounts (bleepingcomputer.com) 4

Hackers breached financial services firm Prosper, stealing the personal data of roughly 17.6 million people, including Social Security numbers, income details, and government IDs. "We have evidence that confidential, proprietary, and personal information, including Social Security Numbers, was obtained, including through unauthorized queries made on Company databases that store customer information and applicant data. We will be offering free credit monitoring as appropriate after we determine what data was affected," the company says. "The investigation is still in its very early stages, but resolving this incident is our top priority and we are committed to sharing additional information with our customers as appropriate." BleepingComputer reports: Prosper operates as a peer-to-peer lending marketplace that has helped over 2 million customers secure more than $30 billion in loans since its founding in 2005. As the company disclosed one month ago on a dedicated page, the breach was detected on September 2, but Prosper has yet to find evidence that the attackers gained access to customer accounts and funds.

However, the attackers stole data belonging to Prosper customers and loan applicants. The company hasn't shared what information was exposed beyond Social Security numbers because it's still investigating what data was affected. Prosper added that the security breach didn't impact its customer-facing operations and that it has reported the incident to relevant authorities and is collaborating with law enforcement to investigate the attack. [...] The stolen information also includes customers' names, government-issued IDs, employment status, credit status, income levels, dates of birth, physical addresses, IP addresses, and browser user agent details.
Have I Been Pwned revealed the extent of the incident on Thursday.
Wikipedia

Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors (404media.co) 92

The Wikimedia Foundation, the nonprofit organization that hosts Wikipedia, says that it's seeing a significant decline in human traffic to the online encyclopedia because more people are getting the information that's on Wikipedia via generative AI chatbots that were trained on its articles and search engines that summarize them without actually clicking through to the site. 404 Media: The Wikimedia Foundation said that this poses a risk to the long-term sustainability of Wikipedia. "We welcome new ways for people to gain knowledge. However, AI chatbots, search engines, and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably," the Foundation's Senior Director of Product Marshall Miller said in a blog post. "With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work."
News

IMF Warns About Soaring Global Government Debt (semafor.com) 85

The IMF has issued a stark warning over soaring global government debt, saying it is on track to exceed 100% of GDP by 2029. Semafor: Such a ratio would be the highest since 1948, when large economies were rebuilding post-war. Today, "there is little political appetite for belt-tightening," The Economist wrote: Rich nations are reluctant to raise taxes on their beleaguered electorates -- but they're facing pressure to spend more on defense, and on social services for aging populations.

Higher long-term bond yields, meanwhile, suggest investor wariness over governments' balance sheets. In the short term, the debt concerns manifest in political disruption: France's budget fight recently toppled another government, while the US federal shutdown highlights the tension between new spending demands and deficit reduction.

Anime

Japan Asks OpenAI To Stop Sora 2 From Infringing on 'Irreplaceable Treasures' Anime and Manga 41

The Japanese government has made a formal request asking OpenAI to refrain from copyright infringement. The request came after Sora 2 began generating videos featuring copyrighted characters from anime and video games. Minoru Kiuchi spoke at the Cabinet Office press conference on Friday and described manga and anime as "irreplaceable treasures" that Japan proudly showcases to the world.

The request was made online by the Cabinet Office's Intellectual Property Strategy Headquarters. Sora 2, which launched recently, generates twenty-second videos at 1080p resolution. Social media has been flooded with videos showing characters from One Piece, Demon Slayer, Pokemon and Mario. Digital Minister Masaaki Taira expressed hope that OpenAI would comply voluntarily. He indicated that measures under Japan's AI Promotion Act may be invoked if the issue remains unresolved.
Bitcoin

DOJ Seizes $15 Billion In Bitcoin From Massive 'Pig Butchering' Scam Based In Cambodia (cnbc.com) 70

The U.S. Department of Justice seized about $15 billion in bitcoin from wallets tied to Chen Zhi, founder of Cambodia's Prince Holding Group, who is accused of running one of the world's biggest "pig butchering" scams. Prosecutors say Zhi's network trafficked people into forced-labor scam compounds that defrauded victims worldwide through fake crypto investment schemes. CNBC reports: The seizure is the largest forfeiture action by the DOJ in history. An indictment charging the alleged pig butcher, Chen Zhi, was unsealed Tuesday in federal court in Brooklyn, New York. Zhi, who is also known as "Vincent," remains at large, according to the U.S. Attorney's Office for the Eastern District of New York. He was identified in court filings as the founder and chairman of Prince Holding Group, a multinational business conglomerate based in Cambodia, which prosecutors said grew "in secret... into one of Asia's largest transnational criminal organizations." [...]

The scams duped people contacted via social media and messaging applications online into transferring cryptocurrency into accounts controlled by the scheme with false promises that the crypto would be invested and produce profits, according to the office. "In reality, the funds were stolen from the victims and laundered for the benefit of the perpetrators," the release said. "The scam perpetrators often built relationships with their victims over time, earning their trust before stealing their funds."

Prosecutors said that hundreds of people were trafficked and forced to work in the scam compounds, "often under the threat of violence." Zhi and a network of top executives in the Prince Group are accused of using political influence in multiple countries to protect their criminal enterprise and paid bribes to public officials to avoid actions by law enforcement authorities targeting the scheme, according to prosecutors.

Education

Digital Platforms Correlate With Cognitive Decline in Young Users (npr.org) 48

Preteens whose social media use increases perform worse on reading, vocabulary, and memory tests in early adolescence than those who use little or no social media. A study published in JAMA examined data from over 6,000 children ages 9 to 10 through early adolescence. Researchers classified the children into three groups: 58% used little or no social media over several years, 37% started with low-level use but spent about an hour daily on social media by age 13, and 6% spent three or more hours daily by that age.

Even low users who spent about one hour per day performed 1 to 2 points lower on reading and memory tasks compared to non-users. High users performed 4 to 5 points lower than non-social media users. Jason Nagata, a pediatrician at the University of California, San Francisco and study author, said the findings were notable because even modest social media use correlated with lower cognitive scores.
Software

Software Update Bricks Some Jeep 4xe Hybrids Over the Weekend (arstechnica.com) 85

An anonymous reader quotes a report from Ars Technica: Owners of some Jeep Wrangler 4xe hybrids have been left stranded after installing an over-the-air software update this weekend. The automaker pushed out a telematics update for the Uconnect infotainment system that evidently wasn't ready, resulting in cars losing power while driving and then becoming stranded. Stranded Jeep owners have been detailing their experiences in forum and Reddit posts, as well as on YouTube. The buggy update doesn't appear to brick the car immediately. Instead, the failure appears to occur while driving -- a far more serious problem. For some, this happened close to home and at low speed, but others claim to have experienced a powertrain failure at highway speeds.

Jeep pulled the update after reports of problems, but the software had already downloaded to many owners' cars by then. A member of Stellantis' social engagement team told 4xe owners at a Jeep forum to ignore the update pop-up if they haven't installed it yet. Owners were also advised to avoid using either hybrid or electric modes if they had updated their 4xe and not already suffered a powertrain failure. Yesterday, Jeep pushed out a fix.

United States

Three New California Laws Target Tech Companies' Interactions with Children 47

California Governor Gavin Newsom signed three bills on Monday that establish the nation's most comprehensive framework for regulating how technology companies interact with minors. AB 56 requires social media platforms to display health warnings to users under 18. A child must view a skippable ten-second warning upon logging on each day. An unskippable thirty-second warning must appear if a child spends more than three hours on a platform. That warning repeats after each additional hour. The warnings must state that social media "can have a profound risk of harm to the mental health and well-being of children and adolescents." Minnesota passed a similar law in July.
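The AB 56 warning cadence described above (a skippable ten-second warning at first login each day, an unskippable thirty-second warning at three hours, repeating each additional hour) can be stated as simple logic. This is a sketch of the rules as summarized here, not the statute's text:

```python
def ab56_warnings(session_minutes: float, first_login_today: bool) -> list:
    """Return the warnings a platform must have shown a minor by this point,
    per the AB 56 schedule as described above (illustrative, not legal text)."""
    warnings = []
    if first_login_today:
        warnings.append("10-second skippable warning")
    if session_minutes >= 180:
        # One unskippable warning at the 3-hour mark, then one more
        # after each additional full hour of use.
        repeats = 1 + int((session_minutes - 180) // 60)
        warnings.extend(["30-second unskippable warning"] * repeats)
    return warnings
```

So a minor logging on fresh sees one skippable warning, while a five-hour session accumulates three unskippable ones on top of it.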

SB 243 makes California the first state to regulate AI companion chatbots. The law takes effect January 1, 2026. Companies must implement age verification and disclose that interactions are artificially generated. Chatbots cannot represent themselves as healthcare professionals. Companies must offer break reminders to minors and prevent them from viewing sexually explicit images. The legislation gained momentum after teenager Adam Raine died by suicide following conversations with OpenAI's ChatGPT. A Colorado family filed suit against Character AI after their daughter's suicide following problematic conversations with the company's chatbots.

AB 1043 requires device-makers like Apple and Google to collect birth dates when parents set up devices for children. Device-makers must group users into four age brackets and share this information with apps. Google, Meta, OpenAI, and Snap supported the bill. The Motion Picture Association opposed it.
AI

Hollywood Demands Copyright Guardrails from Sora 2 - While Users Complain That's Less Fun (yahoo.com) 56

Enthusiasm for Sora 2 "wasn't shared in Hollywood," reports the Los Angeles Times, "where the new AI tools have created a swift backlash" that "appears to be only just the beginning of a bruising legal fight that could shape the future of AI use in the entertainment business." [OpenAI] executives went on a charm offensive last year. They reached out to key players in the entertainment industry — including Walt Disney Co. — about potential areas for collaboration and trying to assuage concerns about its technology. This year, the San Francisco-based AI startup took a more assertive approach. Before unveiling Sora 2 to the general public, OpenAI executives had conversations with some studios and talent agencies, putting them on notice that they need to explicitly declare which pieces of intellectual property — including licensed characters — were being opted-out of having their likeness depicted on the AI platform, according to two sources familiar with the matter who were not authorized to comment. Actors would be included in Sora 2 unless they opted out, the people said. OpenAI disputes the claim and says that it was always the company's intent to give actors and other public figures control over how their likeness is used.

The response was immediate.... [Big talent agencies objected, along with performers' unions and major studios.] "Decades of enforceable copyright law establishes that content owners do not need to 'opt out' to prevent infringing uses of their protected IP," Warner Bros. Discovery said in a statement... The strong pushback from the creative community could be a strategy to force OpenAI into entering licensing agreements for the content they need, legal experts said... One challenge is figuring out a way that fairly compensates talent and rights holders. Several people who work within the entertainment industry ecosystem said they don't believe a flat fee works.

Meanwhile, "the complete copyright-free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week," writes Gizmodo. But that means the service has "now pissed off its users." As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can't make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was "the only reason this app was so fun."
Futurism published more reactions, including: "It's official, Sora 2 is completely boring and useless with these copyright restrictions." Others accused OpenAI of abusing copyright to hype up its new app. "This is just classic OpenAI at this point," another user wrote. "They do this s*** all the time. Let people have fun for a day or two and then just start censoring like crazy." The app now has a measly 2.9-star rating on the App Store, indicative of growing disillusionment and frustration with censorship... [It has since dropped to 2.8.]

In an apparent effort to save face, Altman claimed this week that many copyright holders are actually begging to have their characters appear on Sora, instead of complaining about the trend. "In the case of Sora, we've heard from a lot of concerned rightsholders and also a lot of rightsholders who are like 'My concern is you won't put my character in enough,'" he told the a16z podcast earlier this week. "So I can completely see a world where subject to the decisions that a rightsholder has, they get more upset with us for not generating their character often enough than too much," he added. Whether most rightsholders would agree with that sentiment remains to be seen.

Business Insider offers another reaction. After watching Sora 2's main public feed, they write that Sora 2 "seems to be overrun with teenage boys."
AI

AI Slop? Not This Time. AI Tools Found 50 Real Bugs In cURL (theregister.com) 92

The Register reports: Over the past two years, the open source curl project has been flooded with bogus bug reports generated by AI models. The deluge prompted project maintainer Daniel Stenberg to publish several blog posts about the issue in an effort to convince bug bounty hunters to show some restraint and not waste contributors' time with invalid issues. Shoddy AI-generated bug reports have been a problem not just for curl, but also for the Python community, Open Collective, and the Mesa Project.

It turns out the problem is people rather than technology. Last month, the curl project received dozens of potential issues from Joshua Rogers, a security researcher based in Poland. Rogers identified assorted bugs and vulnerabilities with the help of various AI scanning tools. And his reports were not only valid but appreciated. Stenberg in a Mastodon post last month remarked, "Actually truly awesome findings." In his mailing list update last week, Stenberg said, "most of them were tiny mistakes and nits in ordinary static code analyzer style, but they were still mistakes that we are better off having addressed. Several of the found issues were quite impressive findings...."

Stenberg told The Register that about 50 bugfixes based on Rogers' reports have been merged. "In my view, this list of issues achieved with the help of AI tooling shows that AI can be used for good," he said in an email. "Powerful tools in the hand of a clever human is certainly a good combination. It always was...!" Rogers wrote up a summary of the AI vulnerability scanning tools he tested. He concluded that these tools — Almanax, Corgea, ZeroPath, Gecko, and Amplify — are capable of finding real vulnerabilities in complex code.

The Register's conclusion? AI tools "when applied with human intelligence by someone with meaningful domain experience, can be quite helpful."

jantangring (Slashdot reader #79,804) has published an article on Stenberg's new position, including recently published comments from Stenberg that "It really looks like these new tools are finding problems that none of the old, established tools detect."
AI

OpenAI, Sur Energy Weigh $25 Billion Argentina Data Center Project 7

OpenAI and Sur Energy have signed a letter of intent to build a $25 billion, 500-megawatt data center in Argentina, marking OpenAI's first major infrastructure project in Latin America. The "Stargate Argentina" initiative is backed by Argentina's RIGI tax incentives and positioned as "one of the largest technology and energy infrastructure initiatives" in the nation's history.

"We are proud to announce plans to launch Stargate Argentina, an exciting new infrastructure project in partnership with one of the country's leading energy companies, Sur Energy," OpenAI CEO Sam Altman said on social media. Altman added that the region was "full of talent, creativity and ambition."
Social Networks

New York City Sues Social Media Companies Over 'Youth Mental Health Crisis' (gizmodo.com) 36

An anonymous reader quotes a report from Gizmodo: The City of New York is reaching across the country to sue tech giants headquartered in California over allegations that their platforms have created a youth mental health crisis. The city, along with its school districts and health department, alleges that "gross negligence" on the part of Meta, Alphabet, Snap, and ByteDance has gotten kids hooked on social media, which has created a "public nuisance" that is placing a strain on the city's resources.

In a 327-page complaint filed in the US District Court for the Southern District of New York, the city alleges that tech companies have designed their platforms in a way that seeks to "maximize the number of children" using them, and have built "algorithms that wield user data as a weapon against children and fuel the addiction machine." The city also alleges that these companies "know children and adolescents are in a developmental stage that leaves them particularly vulnerable to the addictive effects of these features," but "target them anyway, in pursuit of additional profit."

[...] It cites data from the New York City Police Department, for instance, showing that at least 16 teens have died while "subway surfing" -- riding outside of a moving train -- a dangerous behavior the lawsuit claims has been encouraged by social media trends. Two girls, ages 12 and 13, died earlier this month while subway surfing. It also cites survey data collected from New York high school students showing that 77.3% of the city's teens spend three or more hours per day on screens, which it claims has contributed to lost sleep and, in turn, absences from school. That claim is corroborated by the city's school districts, which provided data showing that 36.2% of all public school students are considered chronically absent, missing at least 10% of the school year.

The Almighty Buck

Insurers Balk At Paying Out Huge Settlements For Claims Against AI Firms 25

An anonymous reader quotes a report from the Financial Times: OpenAI and Anthropic are considering using investor funds to settle potential claims from multibillion-dollar lawsuits, as insurers balk at providing comprehensive coverage for the risks associated with artificial intelligence. The two US-based AI start-ups have traditional business insurance coverage in place, but insurance professionals said AI model providers will struggle to secure protection for the full scale of damages they may need to pay out in the future. OpenAI, which has tapped the world's second-largest insurance broker Aon for help, has secured cover of up to $300 million for emerging AI risks, according to people familiar with the company's policy. Another person familiar with the policy disputed that figure, saying it was much lower. But all agreed the amount fell far short of what would be needed to cover potential losses from a series of multibillion-dollar legal claims.

[...] Two people with knowledge of the matter said OpenAI has considered "self insurance," or putting aside investor funding in order to expand its coverage. The company has raised nearly $60 billion to date, with a substantial amount of the funding contingent on a proposed corporate restructuring. One of those people said OpenAI had discussed setting up a "captive" -- a ringfenced insurance vehicle often used by large companies to manage emerging risks. Big tech companies such as Microsoft, Meta, and Google have used captives to cover Internet-era liabilities such as cyber or social media. Captives can also carry risks, since a substantial claim can deplete an underfunded captive, leaving the parent company vulnerable. OpenAI said it has insurance in place and is evaluating different insurance structures as the company grows, but does not currently have a captive and declined to comment on future plans.
United Kingdom

UK Universities Offered To Monitor Students' Social Media For Arms Firms, Emails Show 23

An anonymous reader shares a report: Universities in the UK reassured arms companies they would monitor students' chat groups and social media accounts after firms raised concerns about campus protests, according to internal emails. One university said it would conduct "active monitoring of social media" for any evidence of plans to demonstrate against Rolls-Royce at a careers fair.

A second appeared to agree to a request from Raytheon UK, the British wing of a major US defence contractor, to "monitor university chat groups" before a campus visit. Another university responded to a defence company's "security questionnaire" seeking information about social media posts suggestive of imminent protests over the firm's alleged role in fuelling war, including in Gaza. The universities' apparent compliance with the sensitivities of arms companies before careers fairs has emerged in emails obtained by the Guardian and Liberty Investigates after freedom of information (FoI) requests.
