The Internet

Cloudflare Reveals How Bots and Governments Reshaped the Internet in 2025 (nerds.xyz) 23

Cloudflare's sixth annual Year in Review report describes an internet increasingly shaped by two forces: automated traffic and government intervention, as global connectivity grew 19% year over year in 2025.

Google's web crawler now dominates automated traffic, dwarfing other AI and indexing bots to become the single largest source of bot activity on the web. Nearly half of all major internet disruptions globally were linked to government actions, and civil society and non-profit organizations became the most attacked sector for the first time.

Post-quantum encryption crossed a significant threshold, now protecting 52% of human internet traffic observed by Cloudflare. The company also recorded more than 25 record-breaking DDoS attacks throughout the year.
Television

LG's Software Update Forces Microsoft Copilot Onto Smart TVs (tomshardware.com) 57

LG smart TV owners discovered over the weekend that a recent webOS software update had quietly installed Microsoft Copilot on their devices, and the app cannot be uninstalled. Affected users report the feature appears automatically after installing the latest webOS update on certain models, sitting alongside streaming apps like Netflix and YouTube.

LG's support documentation confirms that certain preinstalled or system apps can only be hidden, not deleted. At CES 2025, LG announced plans to integrate Copilot into webOS as part of its "AI TV" strategy, describing it as an extension of its AI Search experience. The current implementation appears to function as a shortcut to a web-based Copilot interface rather than a native application. Samsung TVs include Google's Gemini in a similar fashion. Users wanting to avoid the feature entirely are left with one option: disconnecting their TV from the internet.
Security

Security Researcher Found Critical Kindle Vulnerabilities That Allowed Hijacking Amazon Accounts (thetimes.com) 13

The Black Hat Europe hacker conference in London included a session titled "Don't Judge an Audiobook by Its Cover" about two critical (and now fixed) flaws in Amazon's Kindle. The Times reports both flaws were discovered by engineering analyst Valentino Ricotta (from the cybersecurity research division of Thales), who was awarded a "bug bounty" of $20,000 (£15,000). He said: "What especially struck me with this device, that's been sitting on my bedside table for years, is that it's connected to the internet. It's constantly running because the battery lasts a long time and it has access to my Amazon account. It can even pay for books from the store with my credit card in a single click. Once an attacker gets a foothold inside a Kindle, it could access personal data, your credit card information, pivot to your local network or even to other devices that are registered with your Amazon account."

Ricotta discovered flaws in the Kindle software that scans and extracts information from audiobooks... He also identified a vulnerability in the onscreen keyboard. Through both of these, he tricked the Kindle into loading malicious code, which enabled him to take the user's Amazon session cookies — tokens that give access to the account. Ricotta said that people could be exposed to this type of hack if they "side-load" books on to the Kindle through non-Amazon stores.

Ricotta donated his bug bounties to charity...
Social Networks

Like Australia, Denmark Plans to Severely Restrict Social Media Use for Teenagers (apnews.com) 92

"As Australia began enforcing a world-first social media ban for children under 16 years old this week, Denmark is planning to follow its lead," reports the Associated Press, "and severely restrict social media access for young people." The Danish government announced last month that it had secured an agreement among three governing coalition parties and two opposition parties in parliament to ban access to social media for anyone under the age of 15. Such a measure would be the most sweeping step yet by a European Union nation to limit use of social media among teens and children.

The Danish government's plans could become law as soon as mid-2026. The proposed measure would give some parents the right to let their children access social media from age 13, local media reported, but the ministry has not yet fully shared the plans... [A] new "digital evidence" app, announced by the Digital Affairs Ministry last month and expected to launch next spring, will likely form the backbone of the Danish plans. The app will display an age certificate to ensure users comply with social media age limits, the ministry said.

The article also notes Malaysia "is expected to ban social media accounts for people under the age of 16 starting at the beginning of next year, and Norway is also taking steps to restrict social media access for children and teens.

"China — which manufactures many of the world's digital devices — has set limits on online gaming time and smartphone time for kids."
AI

Podcast Industry Under Siege as AI Bots Flood Airwaves with Thousands of Programs (yahoo.com) 42

An anonymous reader shared this report from the Los Angeles Times: Popular podcast host Steven Bartlett has used an AI clone to launch a new kind of content aimed at the 13 million followers of his podcast "Diary of a CEO." On YouTube, his clone narrates "100 CEOs With Steven Bartlett," which adds AI-generated animation to Bartlett's cloned voice to tell the life stories of entrepreneurs such as Steve Jobs and Richard Branson. Erica Mandy, the Redondo Beach-based host of the daily news podcast called "The Newsworthy," let an AI voice fill in for her earlier this year after she lost her voice from laryngitis and her backup host bailed out...

In podcasting, many listeners feel strong bonds to hosts they listen to regularly. The slow encroachment of AI voices for one-off episodes, canned ad reads, sentence replacement in postproduction or translation into multiple languages has sparked anger as well as curiosity from both creators and consumers of the content. Augmenting or replacing host reads with AI is perceived by many as a breach of trust and as trivializing the human connection listeners have with hosts, said Megan Lazovick, vice president of Edison Research, a podcast research company... Still, platforms such as YouTube and Spotify have introduced features for creators to clone their voice and translate their content into multiple languages to increase reach and revenue. A new generation of voice cloning companies, many with operations in California, offers better emotion, tone, pacing and overall voice quality...

Some are using the tech to carpet-bomb the market with content. Los Angeles podcasting studio Inception Point AI has produced 200,000 podcast episodes, in some weeks accounting for 1% of all podcasts published that week on the internet, according to CEO Jeanine Wright. The podcasts are so cheap to make that they can focus on tiny topics, like local weather, small sports teams, gardening and other niche subjects. Instead of a studio searching for a specific "hit" podcast idea, it takes just $1 to produce an episode so that they can be profitable with just 25 people listening... One of its popular synthetic hosts is Vivian Steele, an AI celebrity gossip columnist with a sassy voice and a sharp tongue... Inception Point has built a roster of more than 100 AI personalities whose characteristics, voices and likenesses are crafted for podcast audiences. Its AI hosts include Clare Delish, a cooking guidance expert, and garden enthusiast Nigel Thistledown...

Across Apple and Spotify, Inception Point podcasts have now garnered 400,000 subscribers.

United States

Repeal Section 230 and Its Platform Protections, Urges New Bipartisan US Bill (eff.org) 168

U.S. Senator Sheldon Whitehouse said Friday he was moving to file a bipartisan bill to repeal Section 230 of America's Communications Decency Act.

"The law prevents most civil suits against users or services that are based on what others say," explains an EFF blog post. "Experts argue that a repeal of Section 230 could kill free speech on the internet," writes LiveMint — though America's last two presidents both supported a repeal: During his first presidency, U.S. President Donald Trump called to repeal the law and signed an executive order attempting to curb some of its protections, though it was challenged in court. Subsequently, former President Joe Biden also voiced his opinion against the law.
An EFF blog post explains the case for Section 230: Congress passed this bipartisan legislation because it recognized that promoting more user speech online outweighed potential harms. When harmful speech takes place, it's the speaker that should be held responsible, not the service that hosts the speech... Without Section 230, the Internet is different. In Canada and Australia, courts have allowed operators of online discussion groups to be punished for things their users have said. That has reduced the amount of user speech online, particularly on controversial subjects. In non-democratic countries, governments can directly censor the internet, controlling the speech of platforms and users. If the law makes us liable for the speech of others, the biggest platforms would likely become locked-down and heavily censored. The next great websites and apps won't even get started, because they'll face overwhelming legal risk to host users' speech.
But "I strongly believe that Section 230 has long outlived its use," Senator Whitehouse said this week, calling Section 230 "a real vessel for evil that needs to come to an end." "The laws that Section 230 protects these big platforms from are very often laws that go back to the common law of England, that we inherited when this country was initially founded. I mean, these are long-lasting, well-tested, important legal constraints that have — they've met the test of time, not by the year or by the decade, but by the century.

"And yet because of this crazy Section 230, these ancient and highly respected doctrines just don't reach these people. And it really makes no sense, that if you're an internet platform you get treated one way; you do the exact same thing and you're a publisher, you get treated a completely different way.

"And so I think that the time has come.... It really makes no sense... [Testimony before the committee] shows how alone and stranded people are when they don't have the chance to even get justice. It's bad enough to have to live through the tragedy... But to be told by a law of Congress, you can't get justice because of the platform — not because the law is wrong, not because the rule is wrong, not because this is anything new — simply because the wrong type of entity created this harm."

Firefox

Firefox Survey Finds Only 16% Feel In Control of Their Privacy Choices Online (mozilla.org) 33

Choosing your browser "is one of the most important digital decisions you can make, shaping how you experience the web, protect your data, and express yourself online," says the Firefox blog. They've urged readers to "take a stand for independence and control in your digital life."

But they also recently polled 8,000 adults in France, Germany, the UK and the U.S. on "how they navigate choice and control both online and offline" (attending in-person events in Chicago, Berlin, LA, Munich, San Diego, and Stuttgart): The survey, conducted by research agency YouGov, showcases a tension between people's desire to have control over their data and digital privacy, and the reality of the internet today — a reality defined by Big Tech platforms that make it difficult for people to exercise meaningful choice online:


— Only 16% feel in control of their privacy choices (highest in Germany at 21%)

— 24% feel it's "too late" because Big Tech already has too much control or knows too much about them. And 36% said the feeling of Big Tech companies knowing too much about them is frustrating — highest among respondents in the U.S. (43%) and the UK (40%)

— Practices respondents said frustrated them were Big Tech using their data to train AI without their permission (38%) and tracking their data without asking (47%; highest in U.S. — 55% and lowest in France — 39%)


And from our existing research on browser choice, we know how hard-to-change defaults and confusing settings can bury alternatives, limiting people's ability to choose for themselves — the real problem that fuels these dynamics.

Taken together, our new and existing insights could also explain why, when asked which actions feel like the strongest expressions of their independence online, choosing not to share their data (44%) was among the top three responses in each country (46% in the UK; 45% in the U.S.; 44% in France; 39% in Germany)... We also see a powerful signal in how people think about choosing the communities and platforms they join — for 29% of respondents, this was one of their top three expressions of independence online.

"For Firefox, community has always been at the heart of what we do," says their VP of Global Marketing, "and we'll keep fighting to put real choice and control back in people's hands so the web once again feels like it belongs to the communities that shape it."

At TwitchCon in San Diego Firefox even launched a satirical new online card game with a privacy theme called Data War.
Google

Google is Building an Experimental New Browser and a New Kind of Web App (theverge.com) 18

Google's Chrome team has built an experimental browser called Disco that takes a query or prompt, opens a cluster of related tabs, and then generates a custom application tailored to whatever task the user is trying to accomplish. The browser launched Thursday as an experiment in Google's Search Labs.

GenTabs, the core feature powering Disco, are information-rich pages created by Google's Gemini AI models -- ask for travel tips and the system builds a planner app; ask for study help and it creates a flashcard system. Disco -- named partly for fun and partly as shorthand for "discovery" -- started as a hackathon project inside Google before catching the team's imagination.

Parisa Tabriz, who leads the Chrome team, said that Disco is not intended as a general-purpose browser and is not an attempt to cannibalize Chrome. The experiment aims to test what happens when users move from simply having tabs to generating personalized, curated applications on demand. The capability relies on features in the recently launched Gemini 3, which can create one-off interactive interfaces and build miniature apps on the fly rather than just returning text or images.
Opera

Opera Wants You To Pay $20 a Month For Its AI Browser (techcrunch.com) 43

Opera has opened its AI-powered browser Neon to the public after a couple of months of testing, and anyone interested in trying it will need to pay $19.90 per month. The Norway-based company first unveiled Neon in May and launched it in early access to select users in October. Like Perplexity's Comet, OpenAI's Atlas, and The Browser Company's Dia, Neon bakes an AI chatbot into its interface that can answer questions about pages, create mini apps and videos, and perform tasks. The browser uses your browsing history as context, so you can ask it to fetch details from a YouTube video you watched last week. The subscription also grants access to AI models including Gemini 3 Pro and GPT-5.1.
The Internet

India Proposes Charging OpenAI, Google For Training AI On Copyrighted Content (techcrunch.com) 10

An anonymous reader quotes a report from TechCrunch: On Tuesday, India's Department for Promotion of Industry and Internal Trade released a proposed framework that would give AI companies access to all copyrighted works for training in exchange for paying royalties to a new collecting body composed of rights-holding organizations, with payments then distributed to creators. The proposal argues that this "mandatory blanket license" would lower compliance costs for AI firms while ensuring that writers, musicians, artists, and other rights holders are compensated when their work is scraped to train commercial models. [...]

The eight-member committee, formed by the Indian government in late April, argues the system would avoid years of legal uncertainty while ensuring creators are compensated from the outset. Defending the system, the committee says in a 125-page submission (PDF) that a blanket license "aims to provide an easy access to content for AI developers[,] reduce transaction costs [and] ensure fair compensation for rightsholders," calling it the least burdensome way to manage large-scale AI training. The submission adds that the single collecting body would function as a "single window," eliminating the need for individual negotiations and enabling royalties to flow to both registered and unregistered creators.

The Internet

Evidence That Humans Now Speak In a Chatbot-Influenced Dialect Is Getting Stronger (gizmodo.com) 85

Researchers and moderators are increasingly concerned that ChatGPT-style language is bleeding into everyday speech and writing. The topic has been explored in the past but "two new, more anecdotal reports suggest that our chatbot dialect isn't just something that can be found through close analysis of data," reports Gizmodo. "It might be an obvious, everyday fact of life now." Slashdot reader joshuark shares an excerpt from the report: Over on Reddit, according to a new Wired story by Kat Tenbarge, moderators of certain subreddits are complaining about AI posts ruining their online communities. It's not new to observe that AI-armed spammers post low-value engagement bait on social media, but these are spaces like r/AmItheAsshole, r/AmIOverreacting, and r/AmITheDevil, where visitors crave the scintillation or outright titillation of bona fide human misbehavior. If, behind the scenes, there's not really a grieving college student having her tuition cut off for randomly flying off the handle at her stepmom, there's no real fun to be had. The mods in the Wired story explain how they detect AI content, and unfortunately their methods boil down to "It's vibes." But one novel struggle in the war against slop, the mods say, is that not only are human-written posts sometimes rewritten by AI, but mods are concerned that humans are now writing like AI. Humans are becoming flesh and blood AI-text generators, muddying the waters of AI "detection" to the point of total opacity.

As "Cassie," an r/AmItheAsshole moderator who only gave Wired her first name, put it, "AI is trained off people, and people copy what they see other people doing." In other words, Cassie said, "People become more like AI, and AI becomes more like people." Meanwhile, essayist Sam Kriss just explored the weird way chatbots "write" for the latest issue of the New York Times Magazine, and he discovered along the way that humans have accidentally taken cues from that weirdness. After parsing chatbots' strange tics and tendencies -- such as overusing the word "delve" most likely because it's in a disproportionate number of texts from Nigeria, where that word is popular -- Kriss refers to a previously reported trend from over the summer. Members of the U.K. Parliament were accused of using ChatGPT to write their speeches.

The thinking goes that ChatGPT-written speeches contained the phrase "I rise to speak," an American phrase, used by American legislators. But Kriss notes that it's not just showing up from time to time. It's being used with downright breathtaking frequency. "On a single day this June, it happened 26 times," he notes. While 26 different MPs using ChatGPT to write speeches is not some scientific impossibility, it's more likely an example of chatbots, "smuggling cultural practices into places they don't belong," to quote Kriss again. So when Kriss points out that when Starbucks locations were closing in September, and signs posted on the doors contained tortured sentences like, "It's your coffeehouse, a place woven into your daily rhythm, where memories were made, and where meaningful connections with our partners grew over the years," one can't state with certainty that this is AI-generated text (although let's be honest: it probably is).

Social Networks

Social Media's Relentless Shopping Machine Has Created an Army of Debt-Laden Buyers (theverge.com) 113

The influencer economy that Goldman Sachs projects will reach nearly half a trillion dollars by 2027 depends on a less-examined population: the influenced, millions of people who find themselves accumulating debt and clutter after years of exposure to what amounts to a 24/7 digital infomercial.

Antoinette Hocbo, a former marketing professional who knows the tricks brands use to chip away at willpower, bought a $199 Pilates program, an iPad, and an arsenal of makeup products after TikTok's algorithm served her a stream of aspirational content. The Pilates gear now sits unused. Elysia Berman accumulated over $50,000 in debt across four credit cards and four buy-now-pay-later services during the pandemic, purchasing items she never wore because influencers recommended them.

A 2024 Pew Research Center survey found 62% of adults on TikTok use the platform to find product reviews and recommendations. Marketing expert Mara Einstein told The Verge that brands now need seven exposures to prompt consumer action, up from three in the pre-social media era. The vastness of the internet has allowed available products to bloat beyond imagination.
Power

Can This Simple Invention Convert Waste Heat Into Electricity? (ajc.com) 48

Nuclear engineer Lonnie Johnson worked on NASA's Galileo mission, has more than 140 patents, and invented the Super Soaker water gun. But now he's working on "a potential key to unlock a huge power source that's rarely utilized today," reports the Atlanta Journal-Constitution. [Alternate URL here.]

Waste heat... The Johnson Thermo-Electrochemical Converter, or JTEC, has few moving parts, no combustion and no exhaust. All the work to generate electricity is done by hydrogen, the most abundant element in the universe. Inside the device, pressurized hydrogen gas is separated by a thin, filmlike membrane, with low pressure gas on one side and high pressure gas on the other. The difference in pressure in this "stack" is what drives the hydrogen to compress and expand, creating electricity as it circulates. And unlike a fuel cell, it does not need to be refueled with more hydrogen. All that's needed to keep the process going and electricity flowing is a heat source.

As it turns out, there are enormous amounts of energy vented or otherwise lost from industrial facilities like power plants, factories, breweries and more. Between 20% and 50% of all energy used for industrial processes is dumped into the atmosphere and lost as waste heat, according to the U.S. Department of Energy. The JTEC works with high temperatures, but the device's ability to generate electricity efficiently from low-grade heat sources is what company executives are most excited about. Inside JTEC's headquarters, engineers show off a demonstration unit that can power lights and a sound system with water that's roughly 200 degrees Fahrenheit — below the boiling point and barely warm enough to brew a cup of tea, said Julian Bell, JTEC's vice president of engineering. Comas Haynes, a research engineer at the Georgia Tech Research Institute specializing in thermal and hydrogen system designs, agrees the company could "hit a sweet spot" if it can capitalize on lower temperature heat...

For Johnson, the potential application he's most excited about lies beneath our feet. Geothermal energy exists naturally in rocks and water beneath the Earth's surface at various depths. Tapping into that resource through abandoned oil and gas wells — a well-known access point for underground heat — offers another opportunity. "You don't need batteries and you can draw power when you need it from just about anywhere," Johnson said. Right now, the company is building its first commercial JTEC unit, which is set to be deployed early next year. Mike McQuary, JTEC's CEO and the former president of the pioneering internet service provider MindSpring, said he couldn't reveal the customer, but said it's a "major Southeast utility company." "Crossing that bridge where you have commercial customers that believe in it and will pay for it is important," McQuary said...

On top of some initial seed money, the company brought in $30 million in Series A funding in 2022 — money that allowed the company to move to its Lee + White headquarters and hire more than 30 engineers. McQuary said the company expects to begin another round of fundraising soon.

"Johnson, meanwhile, hasn't stopped working on new inventions," the article points out. "He continues to refine the design for his solid-state battery..."
Open Source

How Home Assistant Leads a 'Local-First Rebellion' (github.blog) 100

It runs locally: a free/open source home automation platform connecting all your devices together, regardless of brand. And GitHub's senior developer calls it "one of the most active, culturally important, and technically demanding open source ecosystems on the planet," with tens of thousands of contributors and millions of installations.

That's confirmed by this year's "Octoverse" developer survey... Home Assistant was one of the fastest-growing open source projects by contributors, ranking alongside AI infrastructure giants like vLLM, Ollama, and Transformers. It also appeared in the top projects attracting first-time contributors, sitting beside massive developer platforms such as VS Code... Home Assistant is now running in more than 2 million households, orchestrating everything from thermostats and door locks to motion sensors and lighting. All on users' own hardware, not the cloud. The contributor base behind that growth is just as remarkable: 21,000 contributors in a single year...

At its core, Home Assistant's problem is combinatorial explosion. The platform supports "hundreds, thousands of devices... over 3,000 brands," as [maintainer Franck Nijhof] notes. Each one behaves differently, and the only way to normalize them is to build a general-purpose abstraction layer that can survive vendor churn, bad APIs, and inconsistent firmware. Instead of treating devices as isolated objects behind cloud accounts, everything is represented locally as entities with states and events. A garage door is not just a vendor-specific API; it's a structured device that exposes capabilities to the automation engine. A thermostat is not a cloud endpoint; it's a sensor/actuator pair with metadata that can be reasoned about.

That consistency is why people can build wildly advanced automations. Frenck describes one particularly inventive example: "Some people install weight sensors into their couches so they actually know if you're sitting down or standing up again. You're watching a movie, you stand up, and it will pause and then turn on the lights a bit brighter so you can actually see when you get your drink. You get back, sit down, the lights dim, and the movie continues." A system that can orchestrate these interactions is fundamentally a distributed event-driven runtime for physical spaces. Home Assistant may look like a dashboard, but under the hood it behaves more like a real-time OS for the home...
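The entity/state/event model described above can be sketched in a few lines of Python. This is purely illustrative and is not Home Assistant's actual API; the entity IDs and the couch automation here are hypothetical stand-ins for the pattern the article describes, where vendor-specific devices are normalized into entities and automations react to state-change events on a shared bus.

```python
class EventBus:
    """Minimal event bus: automations subscribe, entities publish state changes."""
    def __init__(self):
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def fire(self, entity_id, old_state, new_state):
        for cb in self.listeners:
            cb(entity_id, old_state, new_state)


class Entity:
    """Vendor-neutral wrapper: every state change is broadcast as an event."""
    def __init__(self, entity_id, bus, state=None):
        self.entity_id = entity_id
        self.bus = bus
        self.state = state

    def set_state(self, new_state):
        old, self.state = self.state, new_state
        self.bus.fire(self.entity_id, old, new_state)


bus = EventBus()
couch = Entity("binary_sensor.couch_occupied", bus, state="on")
media = Entity("media_player.living_room", bus, state="playing")
lights = Entity("light.living_room", bus, state="dim")


def couch_automation(entity_id, old, new):
    # Stand up -> pause the movie and brighten the lights; sit down -> resume.
    if entity_id != "binary_sensor.couch_occupied":
        return
    if new == "off":            # viewer stood up
        media.state = "paused"
        lights.state = "bright"
    elif new == "on":           # viewer sat back down
        media.state = "playing"
        lights.state = "dim"


bus.subscribe(couch_automation)

couch.set_state("off")                 # stand up
print(media.state, lights.state)       # paused bright
couch.set_state("on")                  # sit down
print(media.state, lights.state)       # playing dim
```

The point of the abstraction is that the automation never touches a vendor API: it only sees entity IDs and state transitions, which is what lets one rule engine span thousands of device brands.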

The local-first architecture means Home Assistant can run on hardware as small as a Raspberry Pi but must handle workloads that commercial systems offload to the cloud: device discovery, event dispatch, state persistence, automation scheduling, voice pipeline inference (if local), real-time sensor reading, integration updates, and security constraints. This architecture forces optimizations few consumer systems attempt.

"If any of this were offloaded to a vendor cloud, the system would be easier to build," the article points out. "But Home Assistant's philosophy reverses the paradigm: the home is the data center..."

As Nijhof says of other vendor solutions, "It's crazy that we need the internet nowadays to change your thermostat."
Privacy

Woman Hailed As a Hero For Smashing Man's Meta Smart Glasses On Subway (yahoo.com) 154

"Woman Hailed as Hero for Smashing Man's Meta Smart Glasses on Subway," reads the headline at Futurism: As Daily Dot reports, a New York subway rider has accused a woman of breaking his Meta smart glasses. "She just broke my Meta glasses," said the TikTok user, who goes by eth8n, in a video that has since garnered millions of views.

"You're going to be famous on the internet!" he shouted at her through the window after getting off the train. The accused woman, however, peered back at him completely unfazed, as if to say that he had it coming.

"I was making a funny noise people were honestly crying laughing at," he claimed in the caption of a followup video. "She was the only person annoyed..." But instead of coming to his support, the internet wholeheartedly rallied behind the alleged perpetrator, celebrating the woman as a folk hero — and perfectly highlighting how the public feels about gadgets like Meta's smart glasses.

"Good, people are tired of being filmed by strangers," one user commented.

"The fact that no one else on the train is defending him is telling," another wrote...

Others accused the man of fabricating details of the incident. "'People were crying laughing' — I've never heard a less plausible NYC subway story," one user wrote.

In a comment on TikTok, the man acknowledges he'd filmed her on the subway — it looks like he even zoomed in. The man says then her other options were "asking nicely to not post it or blur my face".

He also warns that she could get arrested for breaking his glasses if he "felt like it". (And if he sees her again.) "I filed a claim with the police and it's a misdemeanor charge." A subsequent video's captions describe him unboxing new Meta smart glasses "and I'm about to do my thing again... no crazy lady can stop me now."

I'm imagining being mugged — and then telling the mugger "You're going to be internet famous!" But maybe that just shows how easy it is to weaponize smart glasses and their potential for vast public exposure.
Portables

Why These Parents Want Schools to Stop Issuing iPads to Their Children (nbcnews.com) 48

What happened when a school in Los Angeles gave a sixth grader an iPad for use throughout the school day? "He used the iPad during school to watch YouTube and participate in Fortnite video game battles," reports NBC News.

His mother has now launched a coalition of parents called Schools Beyond Screens "organizing in WhatsApp groups, petition drives and actions at school board meetings and demanding meetings with district administrators, pressuring them to pull back on the school-mandated screen time." Los Angeles Unified is the first district of its size to face an organized — and growing — campaign by parents demanding that schools pull back on mandatory screen time. The discontent in Los Angeles Unified, the second-largest school district in the country, reflects a growing unease nationally about the amount of time children spend learning through screens in classrooms. While a majority of states prohibit children from using cellphones in class, 88% of schools provide students with personal devices, according to the National Center for Education Statistics, often Chromebook laptops or iPads. The parents hope getting a district that has over 409,000 students across nearly 800 schools to change how it approaches screen time would send a signal across public school districts to pull back from a yearslong effort to digitize classrooms....

[In the Los Angeles school district] Students in grade levels as low as kindergarten are provided iPads, and some schools require them to take the tablets home. Some teachers have allowed students to opt out of the iPad-based assignments, but other parents say they've been told that they can't. Parents can also opt their children out of having access to YouTube and several other Google products... The billion-dollar 2014 initiative to give tablet computers to everyone became a scandal after the bidding process appeared to heavily favor Apple, and it faced criticism once it became clear that students could bypass security protocols and that few teachers used the tablets. Currently, the district leaves it up to individual schools to decide whether they want students to take home iPads or Chromebooks every day and how much time they spend on them in class...

Around 300 parents attended listening sessions the district held last month about technology in the classroom. Nearly all who spoke criticized how much screen time schools gave their children in class, pointing to ways their behavior and grades suffered as students watched YouTube and played Minecraft... Several also asked district officials to explain why children as young as kindergartners were asked to sign a device-use form in which they promised to honor intellectual property law and to refrain from meeting in person people they had met online. "Is it possible for children to meet people over the internet on school-issued devices?" one father asked. The district officials declined to answer, saying it was meant to be a listening session.

In 2022, Los Angeles Unified started requiring students to complete benchmark assessments on the educational software i-Ready, the article points out, which generates unique questions for each student. "But parents and teachers are unable to see what children are asked, in part because the company that makes the program considers them proprietary information..."

One teacher says his school's administrators are requiring him to use i-Ready even though it has no material for the science class he actually teaches. He has also noticed that some students use answers from AI chatbots, bypassing the school's monitoring software by creating alternate user profiles. But the monitoring software company suggests the school misconfigured its software's settings, adding, "More commonly, when students attempt to bypass filtering or monitoring, they do so by using proxies."
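The proxy loophole the vendor describes is easy to see in a toy model: a filter that blocks by destination hostname never sees the real site once traffic is wrapped in a proxy. A simplified sketch of the idea (hypothetical hostnames, not any real filtering product's logic):

```python
from urllib.parse import urlparse

# Hypothetical blocklist entry for a chatbot site.
BLOCKED_HOSTS = {"chat.example-ai.com"}

def is_blocked(url: str) -> bool:
    """Hostname-based filtering: block only if the URL's host matches."""
    return urlparse(url).hostname in BLOCKED_HOSTS

# A direct request to the blocked site is caught...
assert is_blocked("https://chat.example-ai.com/ask")
# ...but the same content fetched through a web proxy presents only the
# proxy's hostname to the filter, so it slips through.
assert not is_blocked("https://proxy.example.net/fetch?u=chat.example-ai.com")
```

This is why filtering vendors treat proxy discovery, not just blocklist maintenance, as the harder problem.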

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Social Networks

'Rage Bait' Named Oxford Word of the Year 2025 (bbc.com) 58

Longtime Slashdot reader sinij shares a report from the BBC: Do you find yourself getting increasingly irate while scrolling through your social media feed? If so, you may be falling victim to rage bait, which Oxford University Press has named its word or phrase of the year. It is a term that describes manipulative tactics used to drive engagement online, with usage of it increasing threefold in the last 12 months, according to the dictionary publisher.

Rage bait beat two other shortlisted terms -- aura farming and biohack -- to win the title. The list of words is intended to reflect some of the moods and conversations that have shaped 2025.
"Fundamental problem with social media as a system is that it exploits people's emotional thinking," comments sinij. "Cute cat videos on one end and rage bait on another end of the same spectrum. I suspect future societies will be teaching disassociation techniques in junior school."
The Courts

The New York Times Is Suing Perplexity For Copyright Infringement (techcrunch.com) 68

The New York Times is suing Perplexity for copyright infringement, accusing the AI startup of repackaging its paywalled reporting without permission. TechCrunch reports: The Times joins several media outlets suing Perplexity, including the Chicago Tribune, which also filed suit this week. The Times' suit claims that "Perplexity provides commercial products to its own users that substitute" for the outlet, "without permission or remuneration." [...] "While we believe in the ethical and responsible use and development of AI, we firmly object to Perplexity's unlicensed use of our content to develop and promote their products," Graham James, a spokesperson for The Times, said in a statement. "We will continue to work to hold companies accountable that refuse to recognize the value of our work."

Similar to the Tribune's suit, the Times takes issue with Perplexity's method for answering user queries by gathering information from websites and databases to generate responses via its retrieval-augmented generation (RAG) products, like its chatbots and Comet browser AI assistant. "Perplexity then repackages the original content in written responses to users," the suit reads. "Those responses, or outputs, often are verbatim or near-verbatim reproductions, summaries, or abridgments of the original content, including The Times's copyrighted works."
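The retrieval-augmented generation pattern at issue can be sketched in a few lines: retrieve the passages most relevant to a query, then hand them to a language model as context for its answer. The corpus and overlap scoring below are toy stand-ins for illustration, not Perplexity's actual pipeline:

```python
# Minimal RAG sketch: keyword-overlap retrieval plus prompt assembly.
# A production system would use embeddings and a live web index.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query and return the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble what a RAG system sends its model: sources, then question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

corpus = [
    "The city council approved the new transit budget on Tuesday.",
    "A recipe for sourdough bread requires a mature starter.",
    "The transit budget includes funding for two new subway lines.",
]
prompt = build_prompt("What does the transit budget fund?", corpus)
print(prompt)
```

The Times's complaint targets the output step: when the retrieved "sources" are paywalled articles, the assembled answer can reproduce their substance for users who never visit the publisher.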

Or, as James put it in his statement, "RAG allows Perplexity to crawl the internet and steal content from behind our paywall and deliver it to its customers in real time. That content should only be accessible to our paying subscribers." The Times also claims Perplexity's search engine has hallucinated information and falsely attributed it to the outlet, which damages its brand. "Publishers have been suing new tech companies for a hundred years, starting with radio, TV, the internet, social media, and now AI," Jesse Dwyer, Perplexity's head of communications, told TechCrunch. "Fortunately it's never worked, or we'd all be talking about this by telegraph."

AI

Cloudflare Says It Blocked 416 Billion AI Scraping Requests In 5 Months 43

Cloudflare says it blocked 416 billion AI scraping attempts in five months and warns that AI is reshaping the internet's economic model -- with Google's combined crawler creating a monopoly-style dilemma where opting out of AI means disappearing from search altogether. Tom's Hardware reports: "The business model of the internet has always been to generate content that drives traffic and then sell either things, subscriptions, or ads," [Cloudflare CEO Matthew Prince] told Wired. "What I think people don't realize, though, is that AI is a platform shift. The business model of the internet is about to change dramatically. I don't know what it's going to change to, but it's what I'm spending almost every waking hour thinking about."

While Cloudflare blocks almost all AI crawlers, there's one particular bot it cannot block without affecting its customers' online presence -- Google. The search giant combined its search and AI crawler into one, meaning site owners who opt out of Google's AI crawler won't be indexed in Google search results. "You can't opt out of one without opting out of both, which is a real challenge -- it's crazy," Prince continued. "It shouldn't be that you can use your monopoly position of yesterday in order to leverage and have a monopoly position in the market of tomorrow."
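In robots.txt terms, the coupling Prince describes looks roughly like this: Google does publish a separate Google-Extended token that opts a site out of Gemini model training, but the pages that feed both search results and AI-powered answers are fetched by the same Googlebot, so there is no directive that blocks the latter without also vanishing from search (a simplified illustration, not Cloudflare's configuration):

```
# Opting out of Gemini training is possible via a separate token...
User-agent: Google-Extended
Disallow: /

# ...but the main crawler serves search indexing AND AI answer
# features together. Disallowing it removes the site from search.
User-agent: Googlebot
Disallow: /
```

That all-or-nothing choice is the leverage Prince is objecting to.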
Wireless Networking

Why One Man Is Fighting For Our Right To Control Our Garage Door Openers (nytimes.com) 126

An anonymous reader quotes a report from the New York Times: A few years ago, Paul Wieland, a 44-year-old information technology professional living in New York's Adirondack Mountains, was wrapping up a home renovation when he ran into a hiccup. He wanted to be able to control his new garage door with his smartphone. But the options available, including a product called MyQ, required connecting to a company's internet servers. He believed a "smart" garage door should operate only over a local Wi-Fi network to protect a home's privacy, so he started building his own system to plug into his garage door. By 2022, he had developed a prototype, which he named RATGDO, for Rage Against the Garage Door Opener. He had hoped to sell 100 of his new gadgets just to recoup expenses, but he ended up selling tens of thousands. That's because MyQ's maker did what a number of other consumer device manufacturers have done over the last few years, much to the frustration of their customers: It changed the device, making it both less useful and more expensive to operate.

Chamberlain Group, a company that makes garage door openers, had created the MyQ hubs so that virtually any garage door opener could be controlled with home automation software from Apple, Google, Nest and others. Chamberlain also offered a free MyQ smartphone app. Two years ago, Chamberlain started shutting down support for most third-party access to its MyQ servers. The company said it was trying to improve the reliability of its products. But this effectively broke connections that people had set up to work with Apple's Home app or Google's Home app, among others. Chamberlain also started working with partners that charge subscriptions for their services, though a basic app to control garage doors was still free.

While Mr. Wieland said RATGDO sales spiked after Chamberlain made those changes, he believes the popularity of his device is about more than just opening and closing a garage. It stems from widespread frustration with companies that sell internet-connected hardware that they eventually change or use to nickel-and-dime customers with subscription fees. "You should own the hardware, and there is a line there that a lot of companies are experimenting with," Mr. Wieland said in a recent interview. "I'm really afraid for the future that consumers are going to swallow this and that's going to become the norm." [...] For Mr. Wieland, the fight isn't over. He started a company named RATCLOUD, for Rage Against the Cloud. He said he was developing similar products that were not yet for sale.
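The local-only design Wieland favors reduces to a small controller that takes commands over the home network and pulses the opener's relay directly, with no cloud round trip. The sketch below is a hypothetical illustration of that control loop (RATGDO's real firmware runs on a microcontroller and integrates with home-automation systems; all names here are invented):

```python
# Hypothetical local garage controller: tracks the door's travel cycle
# and simulates pulsing the opener's wall-button relay. A real device
# would expose press() behind a LAN-only HTTP or MQTT endpoint.

from enum import Enum

class DoorState(Enum):
    CLOSED = "closed"
    OPENING = "opening"
    OPEN = "open"
    CLOSING = "closing"

class GarageController:
    # One button press advances the door one step through its cycle
    # (a simplification: real openers can also stop mid-travel).
    _NEXT = {
        DoorState.CLOSED: DoorState.OPENING,
        DoorState.OPENING: DoorState.OPEN,
        DoorState.OPEN: DoorState.CLOSING,
        DoorState.CLOSING: DoorState.CLOSED,
    }

    def __init__(self) -> None:
        self.state = DoorState.CLOSED

    def press(self) -> DoorState:
        """Pulse the relay and return the door's new state."""
        self.state = self._NEXT[self.state]
        return self.state
```

Nothing in this loop needs a vendor server, which is the whole point: the state lives on the device, so no remote API change can break or meter it.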
