Earth

Ocean Damage Nearly Doubles the Cost of Climate Change 38

A new study from Scripps Institution of Oceanography finds that factoring ocean damage into climate economics nearly doubles the estimated global cost of climate change, adding close to $2 trillion per year from losses to fisheries, coral reefs, and coastal infrastructure. "It is the first time a social cost of carbon (SCC) assessment -- a key measure of economic harm caused by climate change -- has included damages to the ocean," reports Inside Climate News. From the report: "For decades, we've been estimating the economic cost of climate change while effectively assigning a value of zero to the ocean," said Bernardo Bastien-Olvera, who led the study during his postdoctoral fellowship at Scripps. "Ocean loss is not just an environmental issue, but a central part of the economic story of climate change."

The social cost of carbon is an accounting method for working out the monetary cost of each ton of carbon dioxide released into the atmosphere. "[It] is one of the most efficient tools we have for internalizing climate damages into economic decision-making," said Amy Campbell, a United Nations climate advisor and former British government COP negotiator. SCC calculations have historically been used by international organizations and government agencies like the U.S. Environmental Protection Agency to assess policy proposals -- though a 2025 White House memo from the Trump administration instructed federal agencies to ignore the data during cost-benefit analyses unless required by law. "It becomes politically contentious when deciding whose damages are counted, which sectors are included and most importantly how future and retrospective harms are valued," Campbell said.

Excluding ocean harm, the social cost of carbon is $51 per ton of carbon dioxide emitted. This increases to $97.20 per ton when the ocean, which covers 70 percent of the planet, is included. In 2024, global CO2 emissions were estimated to be 41.6 billion tons, making the 91 percent cost increase significant. Using greenhouse gas emission predictions, the report estimates the annual damages to traditional markets alone will be $1.66 trillion by 2100.
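The arithmetic behind these figures can be checked directly. A quick sketch (variable names are mine; the input values are the ones reported above):

```python
# Back-of-the-envelope check of the reported SCC figures.
scc_without_ocean = 51.00    # $ per ton of CO2, ocean damages excluded
scc_with_ocean = 97.20       # $ per ton of CO2, ocean damages included
emissions_2024 = 41.6e9      # estimated global CO2 emissions in 2024, tons

# Percentage increase in the social cost of carbon
increase_pct = (scc_with_ocean - scc_without_ocean) / scc_without_ocean * 100

# Incremental annual damages attributable to the ocean at 2024 emission levels
ocean_cost_per_year = (scc_with_ocean - scc_without_ocean) * emissions_2024

print(f"SCC increase: {increase_pct:.0f}%")                                # 91%
print(f"Added ocean damages: ${ocean_cost_per_year / 1e12:.2f} trillion")  # $1.92 trillion
```

The incremental $46.20 per ton multiplied across 2024 emissions lands at about $1.92 trillion per year, matching the "close to $2 trillion" figure in the study's summary.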
Transportation

Germany's EV Subsidies Will Include Chinese Brands (cnevpost.com) 55

Germany is reinstating EV subsidies after a sharp sales drop, rolling out a 3-billion-euro program offering 1,500 to 6,000 euros per buyer starting in May and running through 2029. Unlike some neighboring countries, the incentives are open to all manufacturers, with a focus on low- and middle-income households. From a report: "I cannot see any evidence of this postulated major influx of Chinese car manufacturers in Germany, either in the figures or on the roads -- and that is why we are facing up to the competition and not imposing any restrictions," German Environment Minister Carsten Schneider said at a Monday press conference. The decision is a major boon for affordable Chinese automakers like BYD that are steadily gaining ground in the European market, [Bloomberg noted].

Germany's green light for Chinese EVs stands in stark contrast to other nations' approaches. In the UK, subsidies introduced last year effectively excluded Chinese battery-powered vehicles, while France's so-called social leasing scheme includes similar restrictions. [...] Germany maintains strong diplomatic ties with China. German automakers are among the most significant players in China's automotive industry. In recent years, China's policies -- including purchase subsidies and purchase tax reductions -- have not excluded models or automakers from specific countries. German automakers like Volkswagen and American automakers like Tesla alike enjoy national-level purchase incentive policies in China on par with domestic automakers.

Social Networks

Threads Usage Overtakes X On Mobile (techcrunch.com) 37

New data from Similarweb shows Threads has overtaken X in daily mobile users. However, X still dominates on the web with around 150 million daily web visits compared to Threads' 8.5 million daily visits. TechCrunch reports: Similarweb's data shows that Threads had 141.5 million daily active users on iOS and Android as of January 7, 2026, after months of growth, while X has 125 million daily active users on mobile devices. This appears to be the result of longer-term trends, rather than a reaction to the recent X controversies [...]. Instead, Threads' boost in daily mobile usage may be driven by other factors, including cross-promotions from Meta's larger social apps like Facebook and Instagram (where Threads is regularly advertised to existing users), its focus on creators, and the rapid rollout of new features.

Over the past year, Threads has added features like interest-based communities, better filters, DMs, long-form text, and disappearing posts, and has recently been spotted testing games. Taken together, the daily active user gains suggest that Threads is becoming a regular mobile habit for more people.
Further reading: Threads Now Has More Than 400 Million Monthly Active Users
Electronic Frontier Foundation

Congress Wants To Hand Your Parenting To Big Tech 53

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing [Friday] on "examining the effect of technology on America's youth." Witnesses warned about "addictive" online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and "empower parents."

That's a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill's press release contains soothing language, KOSMA doesn't actually give parents more control. Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That's right -- this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem. [...] This bill doesn't just set an age rule. It creates a legal duty for platforms to police families. Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it "shall terminate any existing account or profile" belonging to that user. And "knows" doesn't just mean someone admits their age. The bill defines knowledge to include what is "fairly implied on the basis of objective circumstances" -- in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won't be kids sneaking around -- it will be minors who are following their parents' guidance, and the parents themselves. Imagine a child using their parent's YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, "Cool video -- I'll show this to my 6th grade teacher!" and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn't matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a "family" account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That's more than enough legal risk to make platforms err on the side of cutting people off. Platforms have no way to remove "just the kid" from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child's use, KOSMA forces Big Tech to override that family decision. [...] These companies don't know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.
AI

Is the Possibility of Conscious AI a Dangerous Myth? (noemamag.com) 221

This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI" written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science: The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.
He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life.") While a computer may seem like the perfect metaphor for a brain, the cognitive science of "dynamical systems" (among other approaches) rejects the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious.

He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines."

But then his essay reaches a surprising conclusion: As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM.

But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be...

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious....

Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves...

The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

Australia

Nearly 5 Million Accounts Removed Under Australia's New Social Media Ban (nytimes.com) 72

An anonymous reader quotes a report from the New York Times: Nearly five million social media accounts belonging to Australian teenagers have been deactivated or removed, a month after a landmark law barring those younger than 16 from using the services took effect, the government said on Thursday. The announcement was the first reported metric reflecting the rollout of the law, which is being closely watched by several other countries weighing whether the regulation can be a blueprint for protecting children from the harms of social media, or a cautionary tale highlighting the challenges of such attempts.

The law required 10 social media platforms, including Instagram, Facebook, Snapchat and Reddit, to prevent users under 16 from accessing their services. Under the law, which came into force in December, failure by the companies to take "reasonable steps" to remove underage users could lead to fines of up to 49.5 million Australian dollars, about $33 million. [...] The number of removed accounts offered only a limited picture of the ban's impact. Many teenagers have said in the weeks since the law took effect that they were able to get around the ban by lying about their age, or that they could easily bypass verification systems.

The regulator tasked with enforcing and tracking the law, the eSafety Commissioner, did not release a detailed breakdown beyond announcing that the companies had "removed access" to about 4.7 million accounts belonging to children under 16. Meta, the parent company of Instagram and Facebook, said this week that it had removed almost 550,000 accounts of users younger than 16 before the ban came into effect.
"Change doesn't happen overnight," said Prime Minister Anthony Albanese. "But these early signs show it's important we've acted to make this change."
Social Networks

Study Finds Weak Evidence Linking Social Media Use to Teen Mental Health Problems (theguardian.com) 40

An anonymous reader quotes a report from the Guardian: Screen time spent gaming or on social media does not cause mental health problems in teenagers, according to a large-scale study. [...] Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties. Participants were asked how much time on a normal weekday in term time they spent on TikTok, Instagram, Snapchat and other social media, or gaming. They were also asked questions about their feelings, mood and wider mental health.

The study found no evidence for boys or girls that heavier social media use or more frequent gaming increased teenagers' symptoms of anxiety or depression over the following year. Increases in girls' and boys' social media use from year 8 to year 9 and from year 9 to year 10 had zero detrimental impact on their mental health the following year, the authors found. More time spent gaming also had no negative effect on pupils' mental health. "We know families are worried, but our results do not support the idea that simply spending time on social media or gaming leads to mental health problems -- the story is far more complex than that," said the lead author Dr Qiqi Cheng.

The research, published in the Journal of Public Health, also examined whether how pupils use social media makes a difference, with participants asked how much time they spent chatting with others, posting stories, pictures and videos, browsing feeds and profiles, or scrolling through photos and stories. The scientists found that neither actively chatting on social media nor passively scrolling feeds appeared to drive mental health difficulties. The authors stressed that the findings did not mean online experiences were harmless. Hurtful messages, online pressures and extreme content could have detrimental effects on wellbeing, but focusing on screen time alone was not helpful, they said.

Social Networks

Digg Launches Its New Reddit Rival To the Public (techcrunch.com) 44

Digg is officially back under the ownership of its original founder, Kevin Rose, along with Reddit co-founder Alexis Ohanian. "Similar to Reddit, the new Digg offers a website and mobile app where you can browse feeds featuring posts from across a selection of its communities and join other communities that align with your interests," reports TechCrunch. "There, you can post, comment, and upvote (or 'digg') the site's content." From the report: [T]he rise of AI has presented an opportunity to rebuild Digg, Rose and Ohanian believe, leading them to acquire Digg last March through a leveraged buyout by True Ventures, Ohanian's firm Seven Seven Six, Rose and Ohanian themselves, and the venture firm S32. The company has not disclosed its funding. They're betting that AI can help to address some of the messiness and toxicity of today's social media landscape. At the same time, social platforms will need a new set of tools to ensure they're not taken over by AI bots posing as people.

"We obviously don't want to force everyone down some kind of crazy KYC process," said Rose in an interview with TechCrunch, referring to the 'know your customer' verification process used by financial institutions to confirm someone's identity. Instead of simply offering verification checkmarks to designate trust, Digg will try out new technologies, like using zero-knowledge proofs (cryptographic methods that verify information without revealing the underlying data) to verify the people using its platform. It could also do other things, like require that people who join a product-focused community verify they actually own or use the product being discussed there.

As an example, a community for Oura ring owners could verify that everyone who posts has proven they own one of the smart rings. Plus, Rose suggests Digg could use signals acquired from mobile devices to help verify members -- for instance, the app could identify when Digg users attended a meetup in the same location. "I don't think there's going to be any one silver bullet here," said Rose. "It's just going to be us saying ... here's a platter of things that you can add together to create trust."

Communications

Widespread Verizon Outage Prompts Emergency Alerts in Washington, New York City (nbcnews.com) 16

Verizon said on Wednesday that its wireless service was suffering an outage impacting cellular data and voice services. From a report: The nation's largest wireless carrier said that its "engineers are engaged and are working to identify and solve the issue quickly." Verizon's statement came after a swath of social media comments directed at Verizon, with users saying that their mobile devices were showing no bars of service or "SOS," indicating a lack of connection.

Verizon, which has more than 146 million customers, appears to have started experiencing service issues around 12:00 p.m. ET, according to comments on social media site X. Users also reported problems with Verizon competitor T-Mobile. But the company said that it was not having any service issues. "T-Mobile's network is keeping our customers connected, and we've confirmed that our network is operating optimally," a spokesperson told NBC News. "However, due to Verizon's reported outage, our customers may not be able to reach someone with Verizon service at this time."

Microsoft

UK Police Blame Microsoft Copilot for Intelligence Mistake (theverge.com) 60

The chief constable of one of Britain's largest police forces has admitted that Microsoft's Copilot AI assistant made a mistake in a football (soccer) intelligence report. From a report: The report, which led to Israeli football fans being banned from a match last year, included a nonexistent match between West Ham and Maccabi Tel Aviv.

Copilot hallucinated the game and West Midlands Police included the error in its intelligence report without fact checking it. "On Friday afternoon I became aware that the erroneous result concerning the West Ham v Maccabi Tel Aviv match arose as result of a use of Microsoft Co Pilot [sic]," says Craig Guildford, chief constable of West Midlands Police, in a letter to the Home Affairs Committee earlier this week. Guildford previously denied in December that the West Midlands Police had used AI to prepare the report, blaming "social media scraping" for the error.

Government

Senate Passes a Bill That Would Let Nonconsensual Deepfake Victims Sue (theverge.com) 63

The U.S. Senate unanimously passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), giving victims of sexually explicit AI deepfakes the right to sue the individuals who created them. The Verge reports: The bill passed with unanimous consent -- meaning there was no roll-call vote, and no Senator objected to its passage on the floor Tuesday. It's meant to build on the work of the Take It Down Act, a law that criminalizes the distribution of nonconsensual intimate images (NCII) and requires social media platforms to promptly remove them. [...] Now the ball is again in the House leadership's court; if they decide to bring the bill to the floor, it will have to pass in order to reach the president's desk.
Power

Trump Says Microsoft To Make Changes To Curb Data Center Power Costs For Americans (cnbc.com) 42

An anonymous reader quotes a report from CNBC: President Donald Trump said in a social media post on Monday that Microsoft will announce changes to ensure that Americans won't see rising utility bills as the company builds more data centers to meet rising artificial intelligence demand. "I never want Americans to pay higher Electricity bills because of Data Centers," Trump wrote on Truth Social. "Therefore, my Administration is working with major American Technology Companies to secure their commitment to the American People, and we will have much to announce in the coming weeks."

[...] Trump congratulated Microsoft on its efforts to keep prices in check, suggesting that other companies will make similar commitments. "First up is Microsoft, who my team has been working with, and which will make major changes beginning this week to ensure that Americans don't 'pick up the tab' for their POWER consumption, in the form of paying higher Utility bills," Trump wrote on Monday. Utilities charged U.S. consumers 6% more for electricity in August from a year earlier, including in states with many data centers, CNBC reported in November.

Microsoft is paying close attention to the impact of its data centers on local residents. "I just want you to know we are doing everything we can, and I believe we're succeeding, in managing this issue well, so that you all don't have to pay more for electricity because of our presence," Brad Smith, the company's president and vice chair, said at a September town hall meeting in Wisconsin, where Microsoft is building an AI data center. While Microsoft is moving forward with some facilities, the company withdrew plans for a data center in Caledonia, Wisconsin, amid loud opposition to its efforts there. The project would have been located 20 miles away from a data center in the village of Mount Pleasant.

AI

Meta Plans To Cut Around 10% of Employees In Reality Labs Division 33

Meta plans to cut roughly 10% of staff in its Reality Labs division, with layoffs hitting metaverse-focused teams hardest. Reuters reports: The cuts to Reality Labs, which has roughly 15,000 employees, could be announced as soon as Tuesday and are set to disproportionately affect those in the metaverse unit who work on virtual reality headsets and virtual social networks, the report said. [...] Meta Chief Technology Officer Andrew Bosworth, who oversees Reality Labs, has called a meeting on Wednesday and has urged staff to attend in person, the NYT reported, citing a memo. [...]

The metaverse had been a massive project spearheaded by CEO Mark Zuckerberg, who prioritized and spent heavily on the venture, only for the business to burn more than $60 billion since 2020. [...] The report comes as the Facebook parent scrambles to stay relevant in Silicon Valley's artificial intelligence race after its Llama 4 model met with a poor reception.
United States

US President Calls for 10% Credit Card Interest Cap, Banks Push Back (pbs.org) 309

President Donald Trump revived a campaign pledge Friday night by calling for a one-year, 10% cap on credit card interest rates, a proposal that banking groups immediately opposed despite the industry's heavy donations to his 2024 campaign and support for his second-term agenda.

Trump posted on Truth Social that he hoped the cap would be in place by January 20, one year after he took office, though he did not specify whether it would come through executive action or legislation.

Americans currently pay between 19.65% and 21.5% interest on credit cards on average and carry roughly $1.23 trillion in credit card debt, according to the New York Federal Reserve. Researchers found that a 10% cap would save Americans roughly $100 billion in interest annually. The American Bankers Association warned that such a cap "would only drive consumers toward less regulated, more costly alternatives."
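As a rough sanity check on those numbers (my own back-of-the-envelope sketch, not the researchers' methodology), applying the full rate difference to the entire debt load gives an upper bound in the same ballpark as the researchers' roughly $100 billion estimate:

```python
# Naive upper bound on annual interest savings from a 10% rate cap.
# Inputs are the New York Fed figures cited above; variable names are mine.
debt = 1.23e12                        # total U.S. credit card debt, $
current_rate = (0.1965 + 0.215) / 2   # midpoint of the 19.65%-21.5% average range
capped_rate = 0.10                    # proposed one-year cap

annual_savings = debt * (current_rate - capped_rate)
print(f"${annual_savings / 1e9:.0f} billion per year")  # $130 billion per year
```

The naive figure overshoots the researchers' estimate somewhat, which is expected: not every balance revolves at the average rate, and some lending would simply contract under a cap rather than reprice.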

Further reading: How Trump's proposed cap on credit card rates could reshape consumer lending.
AI

Amazon's AI Tool Listed Products from Small Businesses Without Their Knowledge (msn.com) 40

Bloomberg reports on Amazon listings "automatically generated by an experimental AI tool" for stores that don't sell on Amazon.

Bloomberg notes that the listings "didn't always correspond to the correct product", leaving the stores to handle the complaints from angry customers: Between the Christmas and New Year holidays, small shop owners and artisans who had found their products listed on Amazon took to social media to compare notes and warn their peers... In interviews, six small shop owners said they found themselves unwittingly selling their products on Amazon's digital marketplace. Some, especially those who deliberately avoided Amazon, said they should have been asked for their consent. Others said it was ironic that Amazon was scouring the web for products with AI tools despite suing Perplexity AI Inc. for using similar technology to buy products on Amazon... Some retailers say the listings displayed the wrong product image or mistakenly showed wholesale pricing. Users of Shopify Inc.'s e-commerce tools said the system flagged Amazon's automated purchases as potentially fraudulent...

In a statement, Amazon spokesperson Maxine Tagay said sellers are free to opt out. Two Amazon initiatives — Shop Direct, which links out to make purchases on other retailers' sites, and Buy For Me, which duplicates listings and handles purchases without leaving Amazon — "are programs we're testing that help customers discover brands and products not currently sold in Amazon's store, while helping businesses reach new customers and drive incremental sales," she said in an emailed statement. "We have received positive feedback on these programs." Tagay didn't say why the sellers were enrolled without notifying them. She added that the Buy For Me selection features more than 500,000 items, up from about 65,000 at launch in April.

The article includes quotes from the owners of affected businesses.
  • A one-person company complained that "If suddenly there were 100 orders, I couldn't necessarily manage. When someone takes your proprietary, copyrighted works, I should be asked about that. This is my business. It's not their business."
  • One business owner said "I just don't want my products on there... It's like if Airbnb showed up and tried to put your house on the market without your permission."
  • One business owner complained "When things started to go wrong, there was no system set up by Amazon to resolve it. It's just 'We set this up for you, you should be grateful, you fix it.'" One Amazon representative even suggested they try opening a $39-a-month Amazon seller account.

Social Networks

Elon Musk: X's New Algorithm Will Be Made Open Source in Seven Days (msn.com) 90

"We will make the new ð algorithm...open source in 7 days," Elon Musk posted Saturday on X.com. Musk says this is "including all code used to determine what organic and advertising posts are recommended to users," and "This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed."

Some context from Engadget: Musk has been making promises of open-sourcing the algorithm since his takeover of Twitter, and in 2023 published the code for the site's "For You" feed on GitHub. But the code wasn't all that revealing, leaving out key details, according to analyses at the time. And it hasn't been kept up to date.
Bloomberg also reported on Saturday's announcement: The billionaire didn't say why X was making its algorithm open source. He and the company have clashed several times with regulators over content being shown to users.

Some X users had previously complained that they were receiving fewer posts on the social media platform from people they follow. In October, Musk confirmed in a post on X that the company had found a "significant bug" in the platform's "For You" algorithm and pledged a fix. The company has also been working to incorporate more artificial intelligence into its recommendation algorithm for X, using Grok, Musk's artificial intelligence chatbot...

In September, Musk wrote that the goal was for X's recommendation engine to "be purely AI" and that the company would share its open source algorithm about every two weeks. "To the degree that people are seeing improvements in their feed, it is not due to the actions of specific individuals changing heuristics, but rather increasing use of Grok and other AI tools," Musk wrote in October. The company was working to have all of the more than 100 million daily posts published to X evaluated by Grok, which would then offer individual users the posts most likely to interest them, Musk wrote. "This will profoundly improve the quality of your feed." He added that the company was planning to roll out the new features by November.

Social Networks

AI-Powered Social Media App Hopes To Build More Purposeful Lives (msn.com) 32

A founder of Twitter and a founder of Pinterest are now working on "social media for people who hate social media," writes a Washington Post columnist.

"When I heard that this platform would harness AI to help us live more meaningful lives, I wanted to know more..." Their bid for redemption is West Co. — the Workshop for Emotional and Spiritual Technology Corporation — and the platform they're testing is called Tangle, a "purpose discovery tool" that uses AI to help users define their life purposes, then encourages them to set intentions toward achieving those purposes, reminds them periodically and builds a community of supporters to encourage steps toward meeting those intentions. "A lot of people, myself included, have been on autopilot," Stone said. "If all goes well, we'll introduce a lot of people to the concept of turning off autopilot."

But will all go well? The entrepreneurs have been at it for two years, and they've scrapped three iterations before even testing them. They still don't have a revenue model. "This is a really hard thing to do," Stone admitted. "If we were a traditional start-up, we would have probably been folded by now." But the two men, with a combined net worth of at least hundreds of millions, and possibly billions, had the luxury of self-funding for a year, and now they have $29 million in seed funding led by Spark Capital...

[T]he project revolves around training existing AI models in "what good intentions and helpful purposes look like," explained Long Cheng, the founding designer. When you join Tangle, which is invitation-only until this spring at the earliest, the AI peruses your calendar, examines your photos, asks you questions and then produces "threads," or categories that define your life purpose. You're free to accept, reject or change the suggestions. It then encourages you to make "intentions" toward achieving your threads, and to add "reflections" when you experience something meaningful in your life. Users then receive encouragement from friends, or "supporters." A few of the "threads" on Tangle are about personal satisfaction (traveler, connoisseur), but the vast majority involve causes greater than self: family (partner, parent, sibling), community (caregiver, connector, guardian), service (volunteer, advocate, healer) and spirituality (seeker, believer). Even the work-related threads (mentor, leader) suggest a higher purpose.
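The concepts the article describes (threads containing intentions, reflections attached to intentions, and a circle of supporters) form a small hierarchical data model. The sketch below is a hypothetical illustration of that structure, not Tangle's actual schema; all class and field names are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical model of the concepts described in the article:
# a user has threads (life-purpose categories), each thread has
# intentions, each intention collects reflections.

@dataclass
class Intention:
    description: str
    reflections: list[str] = field(default_factory=list)

@dataclass
class Thread:
    name: str        # e.g. "mentor", "caregiver", "traveler"
    category: str    # e.g. "family", "community", "service", "work"
    intentions: list[Intention] = field(default_factory=list)

@dataclass
class User:
    name: str
    threads: list[Thread] = field(default_factory=list)
    supporters: list[str] = field(default_factory=list)

user = User("alice", supporters=["bob"])
mentor = Thread("mentor", "work")
mentor.intentions.append(Intention("meet a mentee weekly"))
user.threads.append(mentor)
user.threads[0].intentions[0].reflections.append("first session went well")
print(len(user.threads[0].intentions[0].reflections))  # 1
```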

The column includes this caveat. "I have no idea whether they will succeed. But as a columnist writing about how to keep our humanity in the 21st century, I believe it's important to focus on people who are at least trying..."

"Quite possibly, West Co. and the various other enterprises trying to nudge technology in a more humane direction will find that it doesn't work socially or economically — they don't yet have a viable product, after all — but it would be a noble failure."
AI

AI Is Intensifying a 'Collapse' of Trust Online, Experts Say (nbcnews.com) 60

Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her.

The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces."

Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said.
"In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away."

Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."
Science

Some Super-Smart Dogs Can Learn New Words Just By Eavesdropping (npr.org) 51

An anonymous reader quotes a report from NPR: [I]t turns out that some genius dogs can learn a brand new word, like the name of an unfamiliar toy, by just overhearing brief interactions between two people. What's more, these "gifted" dogs can learn the name of a new toy even if they first hear this word when the toy is out of sight -- as long as their favorite human is looking at the spot where the toy is hidden. That's according to a new study in the journal Science. "What we found in this study is that the dogs are using social communication. They're using these social cues to understand what the owners are talking about," says cognitive scientist Shany Dror of Eotvos Lorand University and the University of Veterinary Medicine, Vienna. "This tells us that the ability to use social information is actually something that humans probably had before they had language," she says, "and language was kind of hitchhiking on these social abilities."

[...] "There's only a very small group of dogs that are able to learn this differentiation and then can learn that certain labels refer to specific objects," she says. "It's quite hard to train this and some dogs seem to just be able to do it." [...] To explore the various ways that these dogs are capable of learning new words, Dror and some colleagues conducted a study that involved two people interacting while their dog sat nearby and watched. One person would show the other a brand new toy and talk about it, with the toy's name embedded into sentences, such as "This is your armadillo. It has armadillo ears, little armadillo feet. It has a tail, like an armadillo tail." Even though none of this language was directed at the dogs, it turns out the super-learners registered the new toy's name and were later able to pick it out of a pile, at the owner's request.

To do this, the dogs had to go into a separate room where the pile was located, so the humans couldn't give them any hints. Dror says that as she watched the dogs on camera from the other room, she was "honestly surprised" because they seemed to have so much confidence. "Sometimes they just immediately went to the new toy, knowing what they're supposed to do," she says. "Their performance was really, really high." She and her colleagues wondered if what mattered was the dog being able to see the toy while its name was said aloud, even if the words weren't explicitly directed at the dog. So they did another experiment that created a delay between the dog seeing a new toy and hearing its name. The dogs got to see the unfamiliar toy and then the owner dropped the toy in a bucket, so it was out of sight. Then the owner would talk to the dog, and mention the toy's name, while glancing down at the bucket. While this was more difficult for dogs, overall they still could use this information to learn the name of the toy and later retrieve it when asked. "This shows us how flexible they are able to learn," says Dror. "They can use different mechanisms and learn under different conditions."

Social Networks

Iran in 'Digital Blackout' as Tehran Throttles Mobile Internet Access (thenationalnews.com) 45

An anonymous reader shares a report: Internet access available through mobile devices in Iran appears to be limited, according to several social media accounts that routinely track such developments. Cloudflare Radar, which monitors internet traffic on behalf of the internet infrastructure firm Cloudflare, said on Thursday that IPv6 (Internet Protocol version 6), a standard widely used for mobile infrastructure, was affected.

"IPv6 address space in Iran dropped by 98.5 per cent, concurrent with IPv6 traffic share dropping from 12 per cent to 1.8 per cent, as the government selectively blocks internet access amid protests," read Cloudflare Radar's social post. NetBlocks, which tracks internet access and digital rights around the world, also confirmed it was seeing problems with connectivity through various internet providers in Iran. "Live network data show Tehran and other parts of Iran are now entering a digital blackout," NetBlocks posted on X.
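The Cloudflare figures quoted above imply a roughly 85 percent relative drop in IPv6 traffic share; a quick arithmetic check:

```python
# Cloudflare Radar reported IPv6 traffic share in Iran falling
# from 12% to 1.8%; that is an 85% relative drop.
before, after = 12.0, 1.8
relative_drop = (before - after) / before * 100
print(round(relative_drop, 1))  # 85.0
```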
