Moon

Interlune Signs $300M Deal to Harvest Helium-3 for Quantum Computing from the Moon (msn.com)

An anonymous reader shared this report from the Washington Post: Finnish tech firm Bluefors, a maker of ultracold refrigerator systems critical for quantum computing, has purchased tens of thousands of liters of Helium-3 from the moon — spending "above $300 million" — through a commercial space company called Interlune. The agreement, which has not been previously reported, marks the largest purchase of a natural resource from space.

Interlune, a company founded by former executives from Blue Origin and an Apollo astronaut, has faced skepticism about its mission to become the first entity to mine the moon (which is legal thanks to a 2015 law that grants U.S. space companies the rights to mine on celestial bodies). But advances in its harvesting technology and the materialization of commercial agreements are gradually making this undertaking sound less like science fiction. Bluefors is the third customer to sign up, with an order of up to 10,000 liters of Helium-3 annually for delivery between 2028 and 2037...

Helium-3 is lighter than the Helium-4 gas featured at birthday parties. It's also much rarer on Earth. But moon rock samples from the Apollo days hint at its abundance there. Interlune has placed the market value at $20 million per kilogram (about 7,500 liters). "It's the only resource in the universe that's priced high enough to warrant going out to space today and bringing it back to Earth," said Rob Meyerson [CEO of Interlune and former president of Blue Origin]...

[H]eat, even in small doses, can cause qubits to produce errors. That's where Helium-3 comes in. Bluefors makes the cooling technology that allows the computer to operate — producing chandelier-type structures known as dilution refrigerators. Their fridges, used by quantum computer leader IBM, contain a mixture of Helium-3 and Helium-4 that pushes temperatures below 10 millikelvins (or minus-460 degrees Fahrenheit)... Existing quantum computers have been built with more than a thousand qubits, he said, but a commercial system or data center would need a million or more. That could require perhaps thousands of liters of Helium-3 per quantum computer. "They will need more Helium-3 than is available on planet Earth," said Gary Lai [a co-founder and chief technology officer of Interlune, who was previously the chief architect at Blue Origin]. Most Helium-3 on Earth, he said, comes from the decay of tritium (an isotope of hydrogen) in nuclear weapons stockpiles, but between 22,000 and 30,000 liters are made each year...
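The article's figures can be cross-checked with a bit of arithmetic. The sketch below is a back-of-envelope estimate only, using Interlune's stated $20 million per kilogram (about 7,500 liters) and the reported upper bounds of the Bluefors order; the assumption that the order runs the full ten years at the 10,000-liter maximum is mine:

```python
# Back-of-envelope check of the Helium-3 figures quoted in the article.
PRICE_PER_KG = 20e6        # Interlune's stated market value, USD per kilogram
LITERS_PER_KG = 7500       # gas volume per kilogram, per the article

price_per_liter = PRICE_PER_KG / LITERS_PER_KG   # roughly $2,667 per liter

# Bluefors' order: up to 10,000 liters annually for delivery 2028-2037 (10 years)
order_liters = 10_000 * 10
order_value = order_liters * price_per_liter     # roughly $267 million

# Earth's annual supply from tritium decay: 22,000-30,000 liters, per the article
earth_supply_low, earth_supply_high = 22_000, 30_000

print(f"~${price_per_liter:,.0f} per liter")
print(f"order value ~${order_value / 1e6:,.0f}M")  # consistent with 'above $300 million'
print(f"one 10,000 L/yr customer alone would consume "
      f"{10_000 / earth_supply_high:.0%}-{10_000 / earth_supply_low:.0%} "
      f"of annual terrestrial output")
```

At those rates a single large customer absorbs a third or more of Earth's yearly Helium-3 production, which is the supply gap Interlune is pitching the moon to fill.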

"We estimate there's more than a million metric tons of Helium-3 on the moon," Meyerson said. "And it's been accumulating there for 4 billion years." Now, they just need to get it.

Interlune CEO Meyerson tells the Post: "It's really all about establishing a resilient supply chain for this critical material" — adding that in the long term he could also see Helium-3 being used for other purposes, including fusion energy.
AI

UAE Lab Releases Open-Source Model to Rival China's DeepSeek (gizmodo.com)

"The United Arab Emirates wants to compete with the U.S. and China in AI," writes Gizmodo, "and a new open source model may be its strongest contender yet.

"An Emirati AI lab called the Institute of Foundation Models (IFM) released K2 Think on Tuesday, a model that researchers say rivals OpenAI's ChatGPT and China's DeepSeek in standard benchmark tests." "With just 32 billion parameters, it outperforms flagship reasoning models that are 20x larger," the lab wrote in a press release on Tuesday. DeepSeek's R1 has 671 billion parameters, though only 37 billion are active. Meta's latest Llama 4 models range from 17 billion to 288 billion active parameters. OpenAI doesn't share parameter information.

Researchers also claim that K2 Think leads "all open-source models in math performance" across several benchmarks. The model is intended to be more focused on math, coding, and scientific research than most other AI chatbots. The Emirati lab's selling point for the model is similar to the DeepSeek strategy that disrupted the AI market earlier this year: optimized efficiency that promises the same or better computing power at a lower cost...

The lab is also aiming to be transparent in everything, "open-sourcing not just models but entire development processes" that provide "researchers with complete materials including training code, datasets, and model checkpoints," IFM said in a press release from May.

The UAE and other Arab countries are investing in AI to try to reduce their economic dependence on fossil fuels, the article points out.
United States

US Tech Companies Enabled the Surveillance and Detention of Hundreds of Thousands in China (apnews.com)

An Associated Press investigation based on tens of thousands of leaked documents revealed Tuesday that American technology companies designed and built core components of China's surveillance apparatus over the past 25 years, selling billions of dollars in equipment to Chinese police and government agencies despite warnings about human rights abuses.

IBM partnered with Chinese defense contractor Huadi in 2009 to develop predictive policing systems for the "Golden Shield" project, AP reports, citing classified government blueprints. The technology enabled mass detentions in Xinjiang, where administrators assigned 100-point risk scores to Uyghurs with deductions for growing beards or being aged 15-55. Dell promoted a laptop with "all-race recognition" capabilities on its WeChat account in 2019. Thermo Fisher Scientific marketed DNA kits as "designed" for ethnic minorities including Uyghurs and Tibetans until August 2024.

Oracle, Microsoft, HP, Cisco, Intel, NVIDIA, and VMware sold geographic mapping software, facial recognition systems, and cloud infrastructure to Chinese police through the 2010s. The surveillance network tracks "key persons" whose movements are restricted and monitored, with one estimate suggesting 55,000 to 110,000 people were placed under residential surveillance in the past decade. China now has more surveillance cameras than the rest of the world combined.
IBM

Red Hat Back-Office Team Moving To IBM From 2026 (theregister.com)

Starting in 2026, Red Hat's back-office staff in HR, finance, legal, and accounting will be transferred to IBM, while engineering, product, sales, and marketing teams remain at Red Hat -- at least for now. The Register reports: According to a communication sent to employees, those in General & Administrative areas will join IBM, including the lion's share of the people working in the HR, finance, accounting, and legal units at Red Hat. A source told us the switch will be "implemented this year," although in some countries "it might take longer due to legal constraints." The leadership running those teams will remain within the Red Hat fold. Some are nervous about the move, with tech companies -- notably IBM -- eliminating duplicated roles to consolidate back-office functions. In January -- as has happened in recent years -- IBM again forecast annual savings of $3.5 billion, partly through job cuts.

There is no public data on the size of the G&A population within Red Hat but the total workforce is understood to be about 19,000 worldwide, with the bulk of those employed in the engineering, sales, and support divisions. The team remaining at Red Hat will be part of the central Strategy & Operations group managed by Mike Ferris. As such, engineering, product, sales, and marketing personnel will be unaffected. For now at least.
"Culture has been dead for at least 1 year now," said Reddit user Purple_Afternoon966. "The experience might be different depending on the department, but there is nothing left from the open culture praised. We have now micromanagement, decision making from middle management that clearly have no idea of what we do and how and trying to implement ideas that they read somewhere, with no context, data and not giving answer or addressing feedback."
Google

Google and IBM Believe First Workable Quantum Computer is in Sight (ft.com)

IBM and Google say they will build industrial-scale quantum computers containing one million or more qubits by 2030, following IBM's June publication of a quantum computer blueprint addressing previous design gaps and Google's late-2023 breakthrough in scaling error correction.

Current experimental systems contain fewer than 200 qubits. IBM encountered crosstalk interference when scaling its Condor chip to 433 qubits and subsequently adopted low-density parity-check code requiring 90% fewer qubits than Google's surface code method, though this requires longer connections between distant qubits.

Google plans to reduce component costs tenfold to achieve its $1 billion target price for a full-scale machine. Amazon Web Services quantum hardware executive Oskar Painter told FT he estimates useful quantum computers remain 15-30 years away, citing engineering challenges in scaling despite resolved fundamental physics problems.
AI

LLMs' 'Simulated Reasoning' Abilities Are a 'Brittle Mirage,' Researchers Find (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: In recent months, the AI industry has started moving toward so-called simulated reasoning models that use a "chain of thought" process to work through tricky problems in multiple logical steps. At the same time, recent research has cast doubt on whether those models have even a basic understanding of general logical concepts or an accurate grasp of their own "thought process." Similar research shows that these "reasoning" models can often produce incoherent, logically unsound answers when questions include irrelevant clauses or deviate even slightly from common templates found in their training data.

In a recent pre-print paper, researchers from the University of Arizona summarize this existing work as "suggest[ing] that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text." To pull on that thread, the researchers created a carefully controlled LLM environment in an attempt to measure just how well chain-of-thought reasoning works when presented with "out of domain" logical problems that don't match the specific logical patterns found in their training data. The results suggest that the seemingly large performance leaps made by chain-of-thought models are "largely a brittle mirage" that "become[s] fragile and prone to failure even under moderate distribution shifts," the researchers write. "Rather than demonstrating a true understanding of text, CoT reasoning under task transformations appears to reflect a replication of patterns learned during training." [...]

Rather than showing the capability for generalized logical inference, these chain-of-thought models are "a sophisticated form of structured pattern matching" that "degrades significantly" when pushed even slightly outside of its training distribution, the researchers write. Further, the ability of these models to generate "fluent nonsense" creates "a false aura of dependability" that does not stand up to a careful audit. As such, the researchers warn heavily against "equating [chain-of-thought]-style output with human thinking" especially in "high-stakes domains like medicine, finance, or legal analysis." Current tests and benchmarks should prioritize tasks that fall outside of any training set to probe for these kinds of errors, while future models will need to move beyond "surface-level pattern recognition to exhibit deeper inferential competence," they write.

The Internet

AOL Finally Discontinues Its Dial-Up Internet Access - After 34 Years (pcmag.com)

AOL (now a Yahoo subsidiary) just announced its dial-up internet service will be discontinued at the end of September.

"The change also means the retirement of the AOL Dialer software and the AOL Shield browser, both designed for older operating systems and slow connections that relied on the familiar screech of a modem handshake," remembers Slashdot reader BrianFagioli (noting that dial-up Internet "was once the gateway to the web for millions of households, back when speeds were measured in kilobits and waiting for a picture to load could feel like an eternity.")

AOL's dial-up service "has been publicly available for 34 years," writes Tom's Hardware. But AppleInsider notes the move comes more than 40 years after AOL started "as a very early Apple service." AOL itself started back in 1983 under the name Control Video Corporation, offering online services for the Atari 2600 console. After failing, it became Quantum Computer Services in 1985, eventually launching AppleLink in 1988 to connect Macintosh computers together... With the launch of PC Link for IBM-compatible PCs in 1988 and parting from Apple in October 1989, the company rebranded itself as America Online, or AOL... Even at its height, dial-up connections could reach only 56 kilobits per second under ideal conditions, while modern connections are measured in megabits and gigabits. Most of the service was also what's considered a "walled garden," with features that were only available through AOL itself rather than on the actual, untamed Internet.
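To put that kilobits-versus-megabits gap in perspective, here's a rough comparison. This is a minimal sketch: the 56 kbps figure is the article's ideal-conditions dial-up rate, while the 3 MB photo size and 100 Mbps broadband speed are illustrative assumptions of mine:

```python
# Rough download-time comparison: ideal 56 kbps dial-up vs. an assumed
# 100 Mbps broadband line, for an illustrative 3 MB photo.
def download_seconds(size_bytes: float, bits_per_second: float) -> float:
    """Idealized transfer time, ignoring protocol overhead and modem retraining."""
    return size_bytes * 8 / bits_per_second

photo = 3 * 1024 * 1024          # 3 MB photo (illustrative size)
dialup = 56_000                  # 56 kbps, the article's best case
broadband = 100_000_000          # 100 Mbps (my assumption for "modern")

print(f"dial-up:   {download_seconds(photo, dialup) / 60:.1f} minutes")
print(f"broadband: {download_seconds(photo, broadband):.2f} seconds")
```

About seven and a half minutes versus a quarter of a second, which is why "waiting for a picture to load could feel like an eternity."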
In the 1990s AOL "was how millions of people were introduced to the Internet," the article remembers, adding that "Even after the AOL Time Warner acquisition and the 2015 acquisition by Verizon, AOL was still a popular service. Astoundingly, it counted about two million dial-up subscribers at the time." In the 2021 acquisition of assets from Verizon by Apollo Global Management, AOL was said to have 1.5 million people paying for services. However, this was more for technical support and software, rather than for actual Internet access. A CNBC report at the time said the dial-up user count was "in the low thousands"... While it dies off, not with a bang but a whimper, AOL's dial-up is still remembered as one of the most transformative services in the Internet age.
"This change does not impact the numerous other valued products and services that these subscribers are able to access and enjoy as part of their plans," a Yahoo spokesperson told PC Magazine this week. "There is also no impact to our users' free AOL email accounts." AOL's disastrous 2001 merger with Time Warner and ongoing inability to deliver broadband to its customers... left it on a path to decline that acquiring such widely read sites as Engadget [2005] and TechCrunch [2010] did not stem. By 2014, the number of dial-up AOL customers had collapsed to 2.34 million. A year later, Verizon bought the company for $4.4 billion in an internet-content play that turned out to be as doomed as the Time Warner transaction. In 2021, Verizon unloaded both AOL and Yahoo, which it had separately purchased in 2017, to the private-equity firm Apollo Global Management....

The demise of AOL's dial-up service does not mean the extinction of the oldest form of consumer online access. Estimates from the Census Bureau's 2023 American Community Survey show 163,401 Americans connected to the internet via dial-up that year.

That was by far the smallest segment of the internet-using population, dwarfed by 100,166,949 subscribing to such forms of broadband as "cable, fiber optic, or DSL"; 8,628,648 using satellite; 3,318,901 using "Internet access without a subscription" (which suggests Wi-Fi from coffee shops or public libraries); and 1,445,135 via "other service."

The remaining AOL dial-up subscribers will need to find some sort of replacement, which in rural areas may be limited to fixed wireless or SpaceX's considerably more expensive Starlink. Or they may wind up joining the ranks of Americans with no internet access: 6,866,059, in those 2023 estimates.

IBM

Vortex's Wireless Take On the Model M Keyboard: Cover Band Or New Legend? (ofb.biz)

IBM's legendary Model M keyboard was sturdy and solid. But "What would happen if you took the classic layout and look of the Model M and rebuilt it with modern mechanical guts?" asks long-time Slashdot reader uninet. Writing for the long-running tech blog Open for Business, they review a new wireless keyboard from Vortex that was clearly inspired by the Model M: The result is a unique keyboard with one foot in two different decades... Let's call it the Vortex M for simplicity's sake.

I first became aware of it on a Facebook ad and was immediately fascinated. It looked so close to the original Model M, I wondered if someone else had gotten access to an original mold and was trying Unicomp's game. No, they've just managed to copy the aesthetic to a nearly uncanny level... The Vortex M eschews the normal eye candy we expect on modern keyboards and attempts the closest duplication of IBM's staid early PC design sensibility I can imagine. Off-white, rugged and absolutely no frills of lighting. If you're looking for cutesy, forget it.

The keyboard's casing has the same highly textured plastic that looks and feels instantly familiar to anyone who spent too many hours interacting with early PCs. Model M to a tee. The keycaps likewise look the part... The Vortex M looks like a Model M. Its build quality feels like a Model M. But one key press and it becomes clear this is a different beast. Underneath the Model M-styled skin, Vortex's keyboard is a very modern design — everything the Unicomp is not. For our test, Vortex provided a keyboard with Cherry MX Blues, the classic clicky option the company and I both thought would best match up against Model M's buckling springs...

Vortex's product configurator offers a variety of common and less common Cherry and Gateron options, if you want to get a different sort of feel in lieu of the clicky I tested. This is possible with an MX switch-style keyboard and impossible with buckling springs with their one option of bold clicky. Not only can this be done when ordering, but also later on, thanks to hot swap switches that allow changes without soldering. Following the modern premium board theme, Vortex paired high end switches with a gasket mount and foam padding. The combination provides a solid feeling, sound dampened typing experience. Ironically, though, for a keyboard that apes the design of perhaps the loudest keyboard on the market today, the Vortex M is (relatively) quiet even with the clicky Blues on tap...

The review's highlights:
  • "The keyboard is exquisitely crafted to look like the IBM original..."
  • "The Vortex M supports connecting to three different devices via Bluetooth, along with a 2.4 GHz receiver and a USB Type-C wired connection."
  • There's a full complement of media hot keys — "including an emoji key à la recent Macs."
  • "For repetitive tasks, the keyboard is programmable with macros... And unlike Unicomp's boards, Vortex's can switch between PC and Mac layouts with the press of a hotkey."
  • The keyboard uses AA batteries rather than having a built-in rechargeable battery

The keyboard ultimately gave the reviewer some cognitive dissonance. "How am I typing on a Model M and not making a racket...?"

"Pricing varies based on options, but as tested, it clocked in at $154. That's the low end of the 'premium' market and this is an exceptional board for that price."


Microsoft

Microsoft Ends Tradition of Naming Competitors in Regulatory Filings (cnbc.com)

Microsoft has abandoned a decades-long tradition of calling out the names of its rivals in regulatory documents. From a report: When the 50-year-old technology company released its annual report Wednesday, the 101-page document contained zero references to longtime foes Apple and IBM.

Nor did it mention privately held challengers such as Anthropic or Databricks. Last year's Microsoft annual report officially designated over 25 companies as competitors. The names of Microsoft's enemies have appeared in its annual reports at least since 1994.

IBM

Why IBM's Amazing 'Sliding Keyboard' ThinkPad 701 Never Survived Past 1995 (fastcompany.com)

Fast Company's tech editor Harry McCracken (also harrymcc, Slashdot reader #1,641,347) writes: As part of Fast Company's "1995 week", I wrote about IBM's ThinkPad 701, the famous model with an expanding "butterfly" keyboard [which could be stretched from 9.7 inches to 11.5 inches]. By putting full-sized keys in a subnotebook-sized laptop, it solved one of mobile computing's biggest problems.

IBM discontinued it before the end of the year, and neither it nor anyone else ever made anything similar again. And yet it remains amazing.

Check out this 1995 ad for the keyboard! The article calls the butterfly ThinkPad "one of the best things the technology industry has ever done with moving parts," and revisits 1995's race "to design a subnotebook-sized laptop with a desktop-sized keyboard." It's still comically thick, standing almost as tall as four MacBook Airs stacked on each other. That height is required to accommodate multiple technologies later rendered obsolete by technological progress, such as a dial-up fax/modem, an infrared port, two PCMCIA expansion card slots, and a bulky connector for an external docking station... Lifting the screen set off a system of concealed gears and levers that propelled the two sections of keyboard into position with balletic grace... A BusinessWeek article cited sales of 215,000 units and said it was 1995's best-selling PC laptop. Yet by the time that story appeared in February 1996, the 701 had been discontinued. IBM never made anything like it again. Neither did anyone else...

As portable computers became more popular, progress in display technology had made it possible for PC makers to use larger screens. Manufacturers were also getting better at fitting a laptop's necessary components into less space. These advances let them design a new generation of thin, light laptops that went beyond the limitations of subnotebooks. Once IBM could make a lightweight laptop with a wider screen, "the need for an expanding keyboard was no longer essential," says [butterfly ThinkPad engineer] George Karidis. "It would have just been a novelty."

The article notes a fan's open source guides for repairing butterfly Thinkpads at Project Butterfly, and all the fan-community videos about it on YouTube, "from an excellent documentary to people simply being entranced by it.

"As a thing of wonder, it continues to transcend its own obsolescence."
AI

CEOs Have Started Warning: AI is Coming For Your Job (yahoo.com)

It's not just Amazon's CEO predicting AI will lower their headcount. "Top executives at some of the largest American companies have a warning for their workers: Artificial intelligence is a threat to your job," reports the Washington Post — including IBM, Salesforce, and JPMorgan Chase.

But are they really just trying to impress their shareholders? Economists say there aren't yet strong signs that AI is driving widespread layoffs across industries.... CEOs are under pressure to show they are embracing new technology and getting results — incentivizing attention-grabbing predictions that can create additional uncertainty for workers. "It's a message to shareholders and board members as much as it is to employees," Molly Kinder, a Brookings Institution fellow who studies the impact of AI, said of the CEO announcements, noting that when one company makes a bold AI statement, others typically follow. "You're projecting that you're out in the future, that you're embracing and adopting this so much that the footprint [of your company] will look different."

Some CEOs fear they could be ousted from their job within two years if they don't deliver measurable AI-driven business gains, a Harris Poll survey conducted for software company Dataiku showed. Tech leaders have sounded some of the loudest warnings — in line with their interest in promoting AI's power...

IBM, which recently announced job cuts, said it replaced a couple hundred human resource workers with AI "agents" for repetitive tasks such as onboarding and scheduling interviews. In January, Meta CEO Mark Zuckerberg suggested on Joe Rogan's podcast that the company is building AI that might be able to do what some human workers do by the end of the year.... Marianne Lake, JPMorgan's CEO of consumer and community banking, told an investor meeting last month that AI could help the bank cut headcount in operations and account services by 10 percent. BT Group CEO Allison Kirkby suggested that advances in AI would mean deeper cuts at the British telecom company...

Despite corporate leaders' warnings, economists don't yet see broad signs that AI is driving humans out of work. "We have little evidence of layoffs so far," said Columbia Business School professor Laura Veldkamp, whose research explores how companies' use of AI affects the economy. "What I'd look for are new entrants with an AI-intensive business model, entering and putting the existing firms out of business." Some researchers suggest there is evidence AI is playing a role in the drop in openings for some specific jobs, like computer programming, where AI tools that generate code have become standard... It is still unclear what benefits companies are reaping from employees' use of AI, said Arvind Karunakaran, a faculty member of Stanford University's Center for Work, Technology, and Organization. "Usage does not necessarily translate into value," he said. "Is it just increasing productivity in terms of people doing the same task quicker or are people now doing more high value tasks as a result?"

Lynda Gratton, a professor at London Business School, said predictions of huge productivity gains from AI remain unproven. "Right now, the technology companies are predicting there will be a 30% productivity gain. We haven't yet experienced that, and it's not clear if that gain would come from cost reduction ... or because humans are more productive."

On an earnings call, Salesforce's chief operating and financial officer said AI agents helped them reduce hiring needs — and saved $50 million, according to the article. (And Ethan Mollick, co-director of Wharton School of Business' generative AI Labs, adds that if advanced tools like AI agents can prove their reliability and automate work — that could become a larger disruptor to jobs.) "A wave of disruption is going to happen," he's quoted as saying.

But while the debate continues about whether AI will eliminate or create jobs, Mollick still hedges that "the truth is probably somewhere in between."
Supercomputing

IBM Says It's Cracked Quantum Error Correction (ieee.org)

Edd Gent reporting for IEEE Spectrum: IBM has unveiled a new quantum computing architecture it says will slash the number of qubits required for error correction. The advance will underpin its goal of building a large-scale, fault-tolerant quantum computer, called Starling, that will be available to customers by 2029. Because of the inherent unreliability of the qubits (the quantum equivalent of bits) that quantum computers are built from, error correction will be crucial for building reliable, large-scale devices. Error-correction approaches spread each unit of information across many physical qubits to create "logical qubits." This provides redundancy against errors in individual physical qubits.

One of the most popular approaches is known as a surface code, which requires roughly 1,000 physical qubits to make up one logical qubit. This was the approach IBM focused on initially, but the company eventually realized that creating the hardware to support it was an "engineering pipe dream," Jay Gambetta, the vice president of IBM Quantum, said in a press briefing. Around 2019, the company began to investigate alternatives. In a paper published in Nature last year, IBM researchers outlined a new error-correction scheme called quantum low-density parity check (qLDPC) codes that would require roughly one-tenth of the number of qubits that surface codes need. Now, the company has unveiled a new quantum-computing architecture that can realize this new approach.
"We've cracked the code to quantum error correction and it's our plan to build the first large-scale, fault-tolerant quantum computer," said Gambetta, who is also an IBM Fellow. "We feel confident it is now a question of engineering to build these machines, rather than science."
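The overhead figures in the reporting lend themselves to a quick sketch. The ratios below are order-of-magnitude values from the article (roughly 1,000 physical qubits per logical qubit for a surface code, about one-tenth of that for qLDPC); the 200-logical-qubit target is purely illustrative:

```python
# Error-correction overhead arithmetic implied by the article: a surface code
# needs roughly 1,000 physical qubits per logical qubit, while IBM's qLDPC
# codes need about one-tenth of that. Both figures are order-of-magnitude.
SURFACE_CODE_RATIO = 1000
QLDPC_RATIO = SURFACE_CODE_RATIO // 10   # ~100 physical qubits per logical qubit

def physical_qubits(logical: int, ratio: int) -> int:
    """Physical qubits needed to encode the given number of logical qubits."""
    return logical * ratio

# An illustrative machine with 200 logical qubits (my example, not IBM's spec):
logical = 200
print(f"surface code: {physical_qubits(logical, SURFACE_CODE_RATIO):,} physical qubits")
print(f"qLDPC:        {physical_qubits(logical, QLDPC_RATIO):,} physical qubits")
```

The tenfold reduction is what moves a machine of this scale from hundreds of thousands of physical qubits down to tens of thousands, which is why IBM now frames fault tolerance as an engineering problem rather than a science problem.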
Robotics

Hugging Face Introduces Two Open-Source Robot Designs (siliconangle.com)

An anonymous reader quotes a report from SiliconANGLE: Hugging Face has open-sourced the blueprints of two internally developed robots called HopeJR and Reachy Mini. The company debuted the machines on Thursday. Hugging Face is backed by more than $390 million in funding from Nvidia Corp., IBM Corp. and other investors. It operates a GitHub-like platform for sharing open-source artificial intelligence projects. It says its platform hosts more than 1 million AI models, hundreds of thousands of datasets and various other technical assets.

The company started prioritizing robotics last year after launching LeRobot, a section of its platform dedicated to autonomous machines. The portal provides access to AI models for powering robots and datasets that can be used to train those models. Hugging Face released its first hardware blueprint, a robotic arm design called the SO-100, late last year. The SO-100 was developed in partnership with a startup called The Robot Studio. Hugging Face also collaborated with the company on the HopeJR, the first new robot that debuted this week. According to TechCrunch, it's a humanoid robot that can perform 66 movements including walking.

HopeJR is equipped with a pair of robotic arms that can be remotely controlled by a human using a pair of specialized, chip-equipped gloves. HopeJR's arms replicate the movements made by the wearer of the gloves. A demo video shared by Hugging Face showed that the robot can shake hands, point to a specific text snippet on a piece of paper and perform other tasks. Hugging Face's other new robot, the Reachy Mini, likewise features an open-source design. It's based on technology that the company obtained through the acquisition of a venture-backed startup called Pollen Robotics earlier this year. Reachy Mini is a turtle-like robot that comes in a rectangular case. Its main mechanical feature is a retractable neck that allows it to follow the user with its head or withdraw into the case. This case, which is stationary, is compact and lightweight enough to be placed on a desk.
Hugging Face will offer pre-assembled versions of its open-source Reachy Mini and HopeJR robots for $250 and $3,000, respectively, with the first units starting to ship by the end of the year.
AI

When a Company Does Job Interviews with a Malfunctioning AI - and Then Rejects You (slate.com)

IBM laid off "a couple hundred" HR workers and replaced them with AI agents. "It's becoming a huge thing," says Mike Peditto, a Chicago-area consultant with 15 years of experience advising companies on hiring practices. He tells Slate "I do think we're heading to where this will be pretty commonplace." Although A.I. job interviews have been happening since at least 2023, the trend has received a surge of attention in recent weeks thanks to several viral TikTok videos in which users share videos of their A.I. bots glitching. Although some of the videos were fakes posted by a creator whose bio warns that his content is "all satire," some are authentic — like that of Kendiana Colin, a 20-year-old student at Ohio State University who had to interact with an A.I. bot after she applied for a summer job at a stretching studio outside Columbus. In a clip she posted online earlier this month, Colin can be seen conducting a video interview with a smiling white brunette named Alex, who can't seem to stop saying the phrase "vertical-bar Pilates" in an endless loop...

Representatives at Apriora, the startup company founded in 2023 whose software Colin was forced to engage with, did not respond to a request for comment. But founder Aaron Wang told Forbes last year that the software allowed companies to screen more talent for less money... (Apriora's website claims that the technology can help companies "hire 87 percent faster" and "interview 93 percent cheaper," but it's not clear where those stats come from or what they actually mean.)

Colin (first interviewed by 404 Media) calls the experience dehumanizing — wondering why she was told to dress professionally, since "They had me going the extra mile just to talk to a robot." And after the interview, the robot — and the company — ghosted her with no further contact. "It was very disrespectful and a waste of time."

Houston resident Leo Humphries also "donned a suit and tie in anticipation for an interview" in which the virtual recruiter immediately got stuck repeating the same phrase. Although Humphries tried in vain to alert the bot that it was broken, the interview ended only when the A.I. program thanked him for "answering the questions" and offering "great information" — despite his not being able to provide a single response. In a subsequent video, Humphries said that within an hour he had received an email, addressed to someone else, that thanked him for sharing his "wonderful energy and personality" but let him know that the company would be moving forward with other candidates.
Open Source

OSU's Open Source Lab Eyes Infrastructure Upgrades and Sustainability After Recent Funding Success (osuosl.org) 11

It's a nonprofit that provides hosting for the Linux Foundation, the Apache Software Foundation, Drupal, Firefox, and 160 other projects — delivering nearly 430 terabytes of information every month. (It's currently hosting Debian, Fedora, and Gentoo Linux.) But hosting only provides about 20% of its income, with the rest coming from individual and corporate donors (including Google and IBM). "Over the past several years, we have been operating at a deficit due to a decline in corporate donations," the Open Source Lab's director announced in late April.

It's part of the CS/electrical engineering department at Oregon State University, and while the department "has generously filled this gap, recent changes in university funding makes our current funding model no longer sustainable. Unless we secure $250,000 in committed funds, the OSL will shut down later this year."

But "Thankfully, the call for support worked, paving the way for the OSU Open Source Lab to look ahead, into what the future holds for them," reports the blog It's FOSS.

"Following our OSL Future post, the community response has been incredible!" posted director Lance Albertson. "Thanks to your amazing support, our team is funded for the next year. This is a huge relief and lets us focus on building a truly self-sustaining OSL." To get there, we're tackling two big interconnected goals:

1. Finding a new, cost-effective physical home for our core infrastructure, ideally with more modern hardware.
2. Securing multi-year funding commitments to cover all our operations, including potential new infrastructure costs and hardware refreshes.


Our current data center is over 20 years old and needs to be replaced soon. With Oregon State University evaluating the future of this facility, it's very likely we'll need to relocate in the near future. While migrating to the State of Oregon's data center is one option, it comes with significant new costs. This makes finding free or very low-cost hosting (ideally between Eugene and Portland for ~13-20 racks) a huge opportunity for our long-term sustainability. More power-efficient hardware would also help us shrink our footprint.

Speaking of hardware, refreshing some of our older gear during a move would be a game-changer. We don't need brand new, but even a few-generations-old refurbished systems would boost performance and efficiency. (Huge thanks to the Yocto Project and Intel for a recent hardware donation that showed just how impactful this is!) The dream? A data center partner donating space and cycled-out hardware. Our overall infrastructure strategy is flexible. We're enhancing our OpenStack/Ceph platforms and exploring public cloud credits and other donated compute capacity. But whatever the resource, it needs to fit our goals and come with multi-year commitments for stability. And, a physical space still offers unique value, especially the invaluable hands-on data center experience for our students....

[O]ur big focus this next year is locking in ongoing support — think annualized pledges, different kinds of regular income, and other recurring help. This is vital, especially with potential new data center costs and hardware needs. Getting this right means we can stop worrying about short-term funding and plan for the future: investing in our tech and people, growing our awesome student programs, and serving the FOSS community. We're looking for partners, big and small, who get why foundational open source infrastructure matters and want to help us build this sustainable future together.

The It's FOSS blog adds that "With these prerequisites in place, the OSUOSL intends to expand their student program, strengthen their managed services portfolio for open source projects, introduce modern tooling like Kubernetes and Terraform, and encourage more community volunteers to actively contribute."

Thanks to long-time Slashdot reader I'm just joshin for suggesting the story.
Google

Google Dominates AI Patent Applications (axios.com) 12

Google has overtaken IBM to become the leader in generative AI-related patents and also leads in the emerging area of agentic AI, according to data from IFI Claims. Axios: In the patents-for-agents U.S. rankings, Google and Nvidia top the list, followed by IBM, Intel and Microsoft, according to an analysis released Thursday.

Globally, Google and Nvidia also led the agentic patents list, but three Chinese universities also make the top 10, highlighting China's place as the chief U.S. rival in the field. In global rankings for generative AI, Google was also the leader -- but six of the top 10 global spots were held by Chinese companies or universities. Microsoft was No. 3, with Nvidia and IBM also in the top 10.

United States

Tech Industry Warns US Investment Pledges Hinge on Research Tax Break (bloomberg.com) 64

An anonymous reader shares a report: Major tech companies lobbying to salvage a tax deduction for research and development are warning they may pull back from high-profile pledges of new US investments if Congress doesn't fully reinstate the break.

Big tech companies have pledged more than $1.6 trillion in investments in the US since Donald Trump took office, promising to build factories and data centers in alignment with Trump's push to build in America. But industry representatives are signaling those promises will be imperiled if Congress doesn't fully reinstate the R&D tax deduction, which was pared back to help offset the massive cost of Trump's 2017 tax bill. At the time, it was estimated that limiting the provision would temporarily raise about $120 billion from 2018 to 2027.

"A lot of those announcements are predicated on an expectation the administration and Congress will partner together on reinstating those R&D provisions," said Jason Oxman, president of the Information Technology Industry Council, a trade group that includes among its members Amazon, Apple, Anthropic, Alphabet, and IBM. Lobbyists representing tech companies that announced US investments have made similar claims to congressional aides and lawmakers, according to people familiar with the conversations.

IBM

IBM CEO Says AI Has Replaced Hundreds of Workers But Created New Programming, Sales Jobs (wsj.com) 27

IBM CEO Arvind Krishna said the tech giant has used AI, and specifically AI agents, to replace the work of a couple hundred human resources workers. As a result, it has hired more programmers and salespeople, he said. From a report: Krishna's comments on Monday come as businesses sort through the workforce impacts of AI and AI agents, the independent bots that can autonomously perform tasks like analyze spreadsheets, conduct research and draft emails.

While there haven't yet been widespread layoffs or downsizing as a result of AI across the economy, some business leaders have said they are holding down head count as they investigate the use of the technology.

Meanwhile, the information-technology workforce has continued to shrink as AI weighs on hiring and some workers leave the field. For IBM, which this week hosts its annual Think conference in Boston, AI adoption has led it to boost hiring in some functions.

Patents

OIN Marks 20 Years of Defending Linux and Open Source From Patent Trolls (zdnet.com) 3

An anonymous reader quotes a report from ZDNet: Today, open-source software powers the world. It didn't have to be that way. The Open Invention Network's (OIN) origins are rooted in a turbulent era for open source. In the mid-2000s, Linux faced existential threats from copyright and patent litigation. The infamous SCO lawsuit and Microsoft's claims that Linux infringed on hundreds of its patents cast a shadow over the ecosystem, and business leaders became worried. While SCO's attacks petered out, patent trolls -- formally known as Patent Assertion Entities (PAEs) -- were increasing their attacks. So open-source-friendly industry giants, including IBM, Novell, Philips, Red Hat, and Sony, formed the OIN to create a bulwark against patent threats targeting Linux and open-source technologies. Founded in 2005, the OIN has evolved into a global community comprising over 4,000 participants, ranging from startups to multinational corporations, collectively holding more than three million patents and patent applications.

At the heart of OIN's legal strategy is a royalty-free cross-license agreement. Members agree not to assert their patents against the Linux System, creating a powerful network effect that shields open-source projects from litigation. As OIN CEO Keith Bergelt explained, this model enables "broad-based participation by ensuring patent risk mitigation in key open-source technologies, thereby facilitating open-source adoption." This approach worked then, and it continues to work today. [...] Over the years, OIN's mission has expanded beyond Linux to cover a range of open-source technologies. Its Linux System Definition, which determines the scope of patent cross-licensing, has grown from a few core packages to over 4,500 software components and platforms, including Android, Apache, Kubernetes, and ChromeOS. This expansion has been critical, as open source has become foundational across industries such as finance, automotive, telecommunications, and artificial intelligence.

IBM

IBM Pledges $150 Billion US Investment (reuters.com) 42

IBM announced plans to invest $150 billion in the United States over the next five years, with more than $30 billion earmarked specifically for research and development of mainframes and quantum computing technology. The investment follows similar commitments from tech giants including Apple and Nvidia -- each pledging approximately $500 billion -- in the wake of President Trump's election and tariff threats.

"We have been focused on American jobs and manufacturing since our founding 114 years ago," said IBM CEO Arvind Krishna in a statement. The company currently manufactures its mainframe systems in upstate New York and plans to continue designing and assembling quantum computers domestically. The announcement comes amid challenging circumstances for IBM, which recently saw 15 government contracts shelved under the Trump administration's cost-cutting initiatives.

Further reading: IBM US Cuts May Run Deeper Than Feared - and the Jobs Are Heading To India;
IBM Now Has More Employees In India Than In the US (2017).
