×
AI

OpenAI and Stack Overflow Partner To Bring More Technical Knowledge Into ChatGPT (theverge.com) 5

OpenAI and the developer platform Stack Overflow have announced a partnership that could potentially improve the performance of AI models and bring more technical information into ChatGPT. From a report: OpenAI will have access to Stack Overflow's API and will receive feedback from the developer community to improve the performance of AI models. OpenAI, in turn, will give Stack Overflow attribution -- aka link to its contents -- in ChatGPT. Users of the chatbot will see more information from Stack Overflow's knowledge archive if they ask ChatGPT coding or technical questions. The companies write in the press release that this will "foster deeper engagement with content." Stack Overflow will use OpenAI's large language models to expand its Overflow AI, the generative AI application it announced last year. Further reading: Stack Overflow Cuts 28% Workforce as the AI Coding Boom Continues (October 2023).
AI

40,000 AI-Narrated Audiobooks Flood Audible (techspot.com) 59

A new breed of audiobook is taking over digital bookshelves -- ones narrated not by professional voice actors, but by artificial intelligence voices. It's an AI audiobook revolution that has been turbo-charged by Amazon. From a report: Since announcing a beta tool last year allowing self-published authors to generate AI "virtual voice" narrations for their ebooks, over 40,000 AI-narrated titles have flooded onto Audible, Amazon's audiobook platform. The eye-popping stat, revealed in a recent Bloomberg report, has many authors celebrating but is also raising red flags for human narrators.

For indie writers wanting to crack the lucrative audiobook market without paying hefty professional voiceover fees, Amazon's free virtual narration tool is a game-changer. One blogger cited in the report claimed converting an ebook to audio using the AI narration took just 52 minutes, bypassing the expensive studio recording route. Others have mixed reactions. Last month, an author named George Steffanos launched an audiobook version of his existing book, posting that while he prefers human-generated works to those generated by AI, "the modest sales of my work were never going to support paying anyone for all those hours of narration."

Microsoft

Microsoft Readies New AI Model To Compete With Google, OpenAI (theinformation.com) 20

For the first time since it invested more than $10 billion into OpenAI in exchange for the rights to reuse the startup's AI models, Microsoft is training a new, in-house AI model large enough to compete with state-of-the-art models from Google, Anthropic and OpenAI itself. The Information: The new model, internally referred to as MAI-1, is being overseen by Mustafa Suleyman, the ex-Google AI leader who most recently served as CEO of the AI startup Inflection before Microsoft hired the majority of the startup's staff and paid $650 million for the rights to its intellectual property in March. But this is a Microsoft model, not one carried over from Inflection, although it may build on training data and other tech from the startup. It is separate from the Pi models that Inflection previously released, according to two Microsoft employees with knowledge of the effort.

MAI-1 will be far larger than any of the smaller, open source models that Microsoft has previously trained, meaning it will require more computing power and training data and will therefore be more expensive, according to the people. MAI-1 will have roughly 500 billion parameters, or settings that can be adjusted to determine what models learn during training. By comparison, OpenAI's GPT-4 has more than 1 trillion parameters, while smaller open source models released by firms like Meta Platforms and Mistral have 70 billion parameters. That means Microsoft is now pursuing a dual trajectory of sorts in AI, aiming to develop both "small language models" that are inexpensive to build into apps and that could run on mobile devices, alongside larger, state-of-the-art AI models.

Twitter

Elon Musk's X Launches Grok AI-Powered 'Stories' Feature (techcrunch.com) 66

An anonymous reader shared this report from Mint: Elon Musk-owned social media platform X (formerly Twitter) has launched a new Grok AI-powered feature called 'Stories', which allows users to read summaries of a trending post on the social media platform. The feature is currently only available to X Premium subscribers on the iOS and web versions, and hasn't found its way to the Android application just yet... instead of reading the whole post, they'll have Grok AI summarise it to get the gist of those big news stories. However, since Grok, like other AI chatbots on the market, is prone to hallucination (making things up), X provides a warning at the end of these stories that says: "Grok can make mistakes, verify its outputs."
"Access to xAI's chatbot Grok is meant to be a selling point to push users to buy premium subscriptions," reports TechCrunch: A snarky and "rebellious" AI, Grok's differentiator from other AI chatbots like ChatGPT is its exclusive and real-time access to X data. A post published to X on Friday by tech journalist Alex Kantrowitz lays out Elon Musk's further plan for AI-powered news on X, based on an email conversation with the X owner. Kantrowitz says that conversations on X will make up the core of Grok's summaries. Grok won't look at the article text, in other words, even if that's what people are discussing on the platform.
The article notes that some AI companies have been striking expensive licensing deals with news publishers. But in X's case, "it's able to get at the news by way of the conversation around it — and without having to partner to access the news content itself."
Microsoft

Microsoft's 'Responsible AI' Chief Worries About the Open Web (msn.com) 38

From the Washington Post's "Technology 202" newsletter: As tech giants move toward a world in which chatbots supplement, and perhaps supplant, search engines, the Microsoft executive assigned to make sure AI is used responsibly said the industry has to be careful not to break the business model of the wider web. Search engines citing and linking to the websites they draw from is "part of the core bargain of search," [Microsoft's chief Responsible AI officer] said in an interview Monday....

"It's really important to maintain a healthy information ecosystem and recognize it is an ecosystem. And so part of what I will continue to guide our Microsoft teams toward is making sure that we are citing back to the core webpages from which the content is sourced. Making sure that we've got that feedback loop happening. Because that is part of the core bargain of search, right? And I think it's critical to make sure that we are both providing users with new engaging ways to interact, to explore new ideas — but also making sure that we are building and supporting the great work of our creators."

Asked about lawsuits alleging copyright use without permission, they said "We believe that there are strong grounds under existing laws to train models."

But they also added those lawsuits are "asking legitimate questions" about where the boundaries are, "for which the courts will provide answers in due course."
IT

Some San Francisco Tech Workers are Renting Cheap 'Bed Pods' (sfgate.com) 157

An anonymous reader shared this report from SFGate: Late last year, tales of tech workers paying $700 a month for tiny "bed pods" in downtown San Francisco went viral. The story provided a perfect distillation of SF's wild (and wildly expensive) housing market — and inspired schadenfreude when the city deemed the situation illegal. But the provocative living situation wasn't an anomaly, according to a city official.

"We've definitely seen an uptick of these 'pod'-type complaints," Kelly Wong, a planner with San Francisco's code enforcement and zoning and compliance team, told SFGATE... Wong stressed that it's not that San Francisco is inherently against bed pod-type arrangements, but that the city is responsible for making sure these spaces are safe and legally zoned.


So Brownstone Shared Housing is still renting one bed pod location — but not accepting new tenants — after citations for failing to get proper permits and having a lock on the front door that required a key to exit.

And SFGate also spoke to Alex Akel, general manager of Olive Rooms, which opened up a co-living and co-working space in SoMa earlier this year (and also faced "a flurry of complaints.") "Unfortunately, we had complaints from neighbors because of foot traffic and noise, and since then we cut the number of people to fit the ordinance by the city," Akel wrote. Olive Rooms describes its space as targeted at "tech founders from Central Asia, giving them opportunities to get involved in the current AI boom." Akel added that its residents are "bringing new energy to SF," but that the program "will not accept new residents before we clarify the status with the city."

In April, the city also received a complaint about a group called Let's Be Buds, which rents out 14 pods in a loft on Divisadero Street that start at $575 per month for an upper bunk.

While this recent burst of complaints is new, bed pods in San Francisco have been catching flak for years... a company called PodShare, which rents — you guessed it — bed pods, squared itself away with the city and has operated in SF since 2019.

Brownstone's CEO told SFGate "A lot of people want to be here for AI, or for school, or different opportunities." He argues that "it's literally impossible without a product like ours," and that their residents had said the option "positively changed the trajectory of their lives."
AI

AI-Operated F-16 Jet Carries Air Force Official Into 550-MPH Aerial Combat Test (apnews.com) 83

The Associated Press reports that an F-16 performing aerial combat tests at 550 miles per hour was "controlled by artificial intelligence, not a human pilot."

And riding in the front seat was the U.S. Secretary of the Air Force... AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes, the first of them operating by 2028.

It was fitting that the dogfight took place at [California's] Edwards Air Force Base, a vast desert facility where Chuck Yeager broke the speed of sound and the military has incubated its most secret aerospace advances. Inside classified simulators and buildings with layers of shielding against surveillance, a new test-pilot generation is training AI agents to fly in war. [U.S. Secretary of the Air Force] Frank Kendall traveled here to see AI fly in real time and make a public statement of confidence in its future role in air combat.

"It's a security risk not to have it. At this point, we have to have it," Kendall said in an interview with The Associated Press after he landed... At the end of the hourlong flight, Kendall climbed out of the cockpit grinning. He said he'd seen enough during his flight that he'd trust this still-learning AI with the ability to decide whether or not to launch weapons in war... [T]he software first learns on millions of data points in a simulator, then tests its conclusions during actual flights. That real-world performance data is then put back into the simulator where the AI then processes it to learn more.

"Kendall said there will always be human oversight in the system when weapons are used," the article notes.

But he also said looked for to the cost-savings of smaller and cheaper AI-controlled unmanned jets.

Slashdot reader fjo3 shared a link to this video. (More photos at Sky.com.)
AI

Microsoft Details How It's Developing AI Responsibly (theverge.com) 40

Thursday the Verge reported that a new report from Microsoft "outlines the steps the company took to release responsible AI platforms last year." Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.

The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections where the malicious instructions are part of data ingested by the AI model.

It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models.

Microsoft's chief Responsible AI officer told the Washington Post this week that "We work with our engineering teams from the earliest stages of conceiving of new features that they are building." "The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles...

"When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report."

They also said " it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated," and "there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure."
Government

The US Just Mandated Automated Emergency Braking Systems By 2029 (caranddriver.com) 273

Come 2029, all cars sold in the U.S. "must be able to stop and avoid contact with a vehicle in front of them at speeds up to 62 mph," reports Car and Driver.

"Additionally, the system must be able to detect pedestrians in both daylight and darkness. As a final parameter, the federal standard will require the system to apply the brakes automatically up to 90 mph when a collision is imminent, and up to 45 mph when a pedestrian is detected." Notably, the federal standardization of automated emergency braking systems includes pedestrian-identifying emergency braking, too. Once implemented, the NHTSA projects that this standard will save at least 360 lives a year and prevent at least 24,000 injuries annually. Specifically, the federal agency claims that rear-end collisions and pedestrian injuries will both go down significantly...

"Automatic emergency braking is proven to save lives and reduce serious injuries from frontal crashes, and this technology is now mature enough to require it in all new cars and light trucks. In fact, this technology is now so advanced that we're requiring these systems to be even more effective at higher speeds and to detect pedestrians," said NHTSA deputy administrator Sophie Shulman.

Thanks to long-time Slashdot reader sinij for sharing the article.
AI

AI-Powered 'HorseGPT' Fails to Predict This Year's Kentucky Derby Winner (decrypt.co) 40

In 2016, an online "swarm intelligence" platform generated a correct prediction for the Kentucky Derby — naming all four top finishers, in order. (But the next year their predictions weren't even close, with TechRepublic suggesting 2016's race had an unusual cluster of just a few top racehorses.)

So this year Decrypt.co tried crafting their own system "that can be called up when the next Kentucky Derby draws near. There are a variety of ways to enlist artificial intelligence in horse racing. You could process reams of data based on your own methodology, trust a third-party pre-trained model, or even build a bespoke solution from the ground up. We decided to build a GPT we named HorseGPT to crunch the numbers and make the picks for us...

We carefully curated prompts to instill HorseGPT with expertise in data science specific to horse racing: how weather affects times, the role of jockeys and riding styles, the importance of post positions, and so on. We then fed it a mix of research papers and blogs covering the theoretical aspects of wagering, and layered on practical knowledge: how to read racing forms, what the statistics mean, which factors are most predictive, expert betting strategies, and more. Finally, we gave HorseGPT a wealth of historical Kentucky Derby data, arming it with the raw information needed to put its freshly imparted skills to use.

We unleashed HorseGPT on official racing forms for this year's Derby. We asked HorseGPT to carefully analyze each race's form, identify the top contenders, and recommend wager types and strategies based on deep background knowledge derived from race statistics.

So how did it do? HorseGPT picked two horses to win — both of which failed to do so. (Sierra Leone did finish second — in a rare three-way photo finish. But Fierceness finished... 15th.) It also recommended the same two horses if you were trying to pick the top two finishers in the correct order — a losing bet, since, again, Fierceness finished 15th.

But even worse, HorseGPT recommended betting on Just a Touch to finish in either first or second place. When the race was over, that horse finished dead last. (And when asked to pick the top three finishers in correct order, HorseGPT stuck with its choices for the top two — which finished #2 and #15 — and, again, Just a Touch, who came in last.)

When Google Gemini was asked to pick the winner by The Athletic, it first chose Catching Freedom (who finished 4th). But it then gave an entirely different answer when asked to predict the winner "with an Italian accent."

"The winner of the Kentucky Derby will be... Just a Touch! Si, that's-a right, the underdog! There will be much-a celebrating in the piazzas, thatta-a I guarantee!"
Again, Just a Touch came in last.

Decrypt noticed the same thing. "Interestingly enough, our HorseGPT AI agent and the other out-of-the-box chatbots seemed to agree with each other," the site notes, adding that HorseGPT also seemed to agree "with many expert analysts cited by the official Kentucky Derby website."

But there was one glimmer of insight into the 20-horse race. When asked to choose the top four finishers in order, HorseGPT repeated those same losing picks — which finished #2, #15, and #20. But then it added two more underdogs for fourth place finishers, "based on their potential to outperform expectations under muddy conditions." One of those two horses — Domestic Product — finished in 13th place.

But the other of the two horses was Mystik Dan — who came in first.

Mystik Dan appeared in only one of the six "Top 10 Finishers" lists (created by humans) at the official Kentucky Derby site... in the #10 position.
The Military

US Official Urges China, Russia To Declare AI Will Not Control Nuclear Weapons 80

Senior Department arms control official Paul Dean on Thursday urged China and Russia to declare that artificial intelligence would never make decisions on deploying nuclear weapons. Washington had made a "clear and strong commitment" that humans had total control over nuclear weapons, said Dean. Britain and France have made similar commitments. Reuters reports: "We would welcome a similar statement by China and the Russian Federation," said Dean, principal deputy assistant secretary in the Bureau of Arms Control, Deterrence and Stability. "We think it is an extremely important norm of responsible behaviour and we think it is something that would be very welcome in a P5 context," he said, referring to the five permanent members of the United Nations Security Council.
The Internet

Humans Now Share the Web Equally With Bots, Report Warns (independent.co.uk) 31

An anonymous reader quotes a report from The Independent, published last month: Humans now share the web equally with bots, according to a major new report -- as some fear that the internet is dying. In recent months, the so-called "dead internet theory" has gained new popularity. It suggests that much of the content online is in fact automatically generated, and that the number of humans on the web is dwindling in comparison with bot accounts. Now a new report from cyber security company Imperva suggests that it is increasingly becoming true. Nearly half, 49.6 per cent, of all internet traffic came from bots last year, its "Bad Bot Report" indicates. That is up 2 percent in comparison with last year, and is the highest number ever seen since the report began in 2013. In some countries, the picture is worse. In Ireland, 71 per cent of internet traffic is automated, it said.

Some of that rise is the result of the adoption of generative artificial intelligence and large language models. Companies that build those systems use bots scrape the internet and gather data that can then be used to train them. Some of those bots are becoming increasingly sophisticated, Imperva warned. More and more of them come from residential internet connections, which makes them look more legitimate. "Automated bots will soon surpass the proportion of internet traffic coming from humans, changing the way that organizations approach building and protecting their websites and applications," said Nanhi Singh, general manager for application security at Imperva. "As more AI-enabled tools are introduced, bots will become omnipresent."

AI

AI Engineers Report Burnout, Rushed Rollouts As 'Rat Race' To Stay Competitive Hits Tech Industry (cnbc.com) 36

An anonymous reader quotes a report from CNBC: Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday. There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job. But it was all for nothing. The project was ultimately "deprioritized," the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project.

The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes. Since code can break if the required tests are postponed, the Amazon engineer recalled periods when team members would have to call one another in the middle of the night to fix aspects of the AI feature's software. AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its "iPhone moment."

AI

Nurses Say Hospital Adoption of Half-Cooked 'AI' Is Reckless (techdirt.com) 103

An anonymous reader quotes a report from Techdirt: Last week, hundreds of nurses protested the implementation of sloppy AI into hospital systems in front of Kaiser Permanente. Their primary concern: that systems incapable of empathy are being integrated into an already dysfunctional sector without much thought toward patient care: "No computer, no AI can replace a human touch," said Amy Grewal, a registered nurse. "It cannot hold your loved one's hand. You cannot teach a computer how to have empathy."

There are certainly roles automation can play in easing strain on a sector full of burnout after COVID, particularly when it comes to administrative tasks. The concern, as with other industries dominated by executives with poor judgement, is that this is being used as a justification by for-profit hospital systems to cut corners further. From a National Nurses United blog post (spotted by 404 Media): "Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care," they added.

Kaiser Permanente, for its part, insists it's simply leveraging "state-of-the-art tools and technologies that support our mission of providing high-quality, affordable health care to best meet our members' and patients' needs." The company claims its "Advance Alert" AI monitoring system -- which algorithmically analyzes patient data every hour -- has the potential to save upwards of 500 lives a year. The problem is that healthcare giants' primary obligation no longer appears to reside with patients, but with their financial results. And, that's even true in non-profit healthcare providers. That is seen in the form of cut corners, worse service, and an assault on already over-taxed labor via lower pay and higher workload (curiously, it never seems to impact outsized high-level executive compensation).

AI

Microsoft Bans US Police Departments From Using Enterprise AI Tool 49

An anonymous reader quotes a report from TechCrunch: Microsoft has changed its policy to ban U.S. police departments from using generative AI through the Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI technologies. Language added Wednesday to the terms of service for Azure OpenAI Service prohibits integrations with Azure OpenAI Service from being used "by or for" police departments in the U.S., including integrations with OpenAI's text- and speech-analyzing models. A separate new bullet point covers "any law enforcement globally," and explicitly bars the use of "real-time facial recognition technology" on mobile cameras, like body cameras and dashcams, to attempt to identify a person in "uncontrolled, in-the-wild" environments. [...]

The new terms leave wiggle room for Microsoft. The complete ban on Azure OpenAI Service usage pertains only to U.S., not international, police. And it doesn't cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by U.S. police). That tracks with Microsoft's and close partner OpenAI's recent approach to AI-related law enforcement and defense contracts.
Last week, taser company Axon announced a new tool that uses AI built on OpenAI's GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. It's unclear if Microsoft's updated policy is in response to Axon's product launch.
AI

Microsoft To Invest $2.2 Billion In Cloud and AI Services In Malaysia (reuters.com) 8

An anonymous reader quotes a report from Reuters: Microsoft said on Thursday it will invest $2.2 billion over the next four years in Malaysia to expand cloud and artificial intelligence (AI) services in the company's latest push to promote its generative AI technology in Asia. The investment, the largest in Microsoft's 32-year history in Malaysia, will include building cloud and AI infrastructure, creating AI-skilling opportunities for 200,000 people, and supporting the country's developers, the company said.

Microsoft will also work with the Malaysian government to establish a national AI Centre of Excellence and enhance the nation's cybersecurity capabilities, the company said in a statement. Prime Minister Anwar Ibrahim, who met Nadella on Thursday, said the investment supported Malaysia's efforts in developing its AI capabilities. Microsoft is trying to expand its support for the development of AI globally. Nadella this week announced a $1.7 billion investment in neighboring Indonesia and said Microsoft would open its first regional data centre in Thailand.
"We want to make sure we have world class infrastructure right here in the country so that every organization and start-up can benefit," Microsoft Chief Executive Satya Nadella said during a visit to Kuala Lumpur.
AI

Anthropic Brings Claude AI To the iPhone and iPad (9to5mac.com) 16

Anthropic has released its Claude AI chatbot on the App Store, bringing the company's ChatGPT competitor to the masses. Compared to OpenAI's chatbot, Claude is built with a focus on reducing harmful outputs and promoting safety, with a goal of making interactions more reliable and ethically aware. You can give it a try here. 9to5Mac reports: Anthropic highlights three launch features for Claude on iPhone:

Seamless syncing with web chats: Pick up where you left off across devices.
Vision capabilities: Use photos from your library, take new photos, or upload files so you can have real-time image analysis, contextual understanding, and mobile-centric use cases on the go.
Open access: Users across all plans, including Pro and Team, can download the app free of charge.

The app is also capable of analyzing things that you show it like objects, images, and your environment.

AI

National Archives Bans Employee Use of ChatGPT (404media.co) 10

The National Archives and Records Administration (NARA) told employees Wednesday that it is blocking access to ChatGPT on agency-issued laptops to "protect our data from security threats associated with use of ChatGPT," 404 Media reported Wednesday. From the report: "NARA will block access to commercial ChatGPT on NARANet [an internal network] and on NARA issued laptops, tablets, desktop computers, and mobile phones beginning May 6, 2024," an email sent to all employees, and seen by 404 Media, reads. "NARA is taking this action to protect our data from security threats associated with use of ChatGPT."

The move is particularly notable considering that this directive is coming from, well, the National Archives, whose job is to keep an accurate historical record. The email explaining the ban says the agency is particularly concerned with internal government data being incorporated into ChatGPT and leaking through its services. "ChatGPT, in particular, actively incorporates information that is input by its users in other responses, with no limitations. Like other federal agencies, NARA has determined that ChatGPT's unrestricted approach to reusing input data poses an unacceptable risk to NARA data security," the email reads. The email goes on to explain that "If sensitive, non-public NARA data is entered into ChatGPT, our data will become part of the living data set without the ability to have it removed or purged."

Google

Google Urges US To Update Immigration Rules To Attract More AI Talent (theverge.com) 98

The US could lose out on valuable AI and tech talent if some of its immigration policies are not modernized, Google says in a letter sent to the Department of Labor. From a report: Google says policies like Schedule A, a list of occupations the government "pre-certified" as not having enough American workers, have to be more flexible and move faster to meet demand in technologies like AI and cybersecurity. The company says the government must update Schedule A to include AI and cybersecurity and do so more regularly.

"There's wide recognition that there is a global shortage of talent in AI, but the fact remains that the US is one of the harder places to bring talent from abroad, and we risk losing out on some of the most highly sought-after people in the world," Karan Bhatia, head of government affairs and public policy at Google, tells The Verge. He noted that the occupations in Schedule A have not been updated in 20 years.

Companies can apply for permanent residencies, colloquially known as green cards, for employees. The Department of Labor requires companies to get a permanent labor certification (PERM) proving there is a shortage of workers in that role. That process may take time, so the government "pre-certified" some jobs through Schedule A. The US Citizenship and Immigration Services lists Schedule A occupations as physical therapists, professional nurses, or "immigrants of exceptional ability in the sciences or arts." While the wait time for a green card isn't reduced, Google says Schedule A cuts down the processing time by about a year.

Microsoft

Microsoft Concern Over Google's Lead Drove OpenAI Investment (yahoo.com) 10

Microsoft's motivation for investing heavily and partnering with OpenAI came from a sense of falling badly behind Google, according to an internal email released Tuesday as part of the Justice Department's antitrust case against the search giant. Bloomberg: The Windows software maker's chief technology officer, Kevin Scott, was "very, very worried" when he looked at the AI model-training capability gap between Alphabet's efforts and Microsoft's, he wrote in a 2019 message to Chief Executive Officer Satya Nadella and co-founder Bill Gates. The exchange shows how the company's top executives privately acknowledged they lacked the infrastructure and development speed to catch up to the likes of OpenAI and Google's DeepMind.

[...] Scott, who also serves as executive vice president of artificial intelligence at Microsoft, observed that Google's search product had improved on competitive metrics because of the Alphabet company's advancements in AI. The Microsoft executive wrote that he made a mistake by dismissing some of the earlier AI efforts of its competitors. "We are multiple years behind the competition in terms of machine learning scale," Scott said in the email. Significant portions of the message, titled 'Thoughts on OpenAI,' remain redacted. Nadella endorsed Scott's email, forwarding it to Chief Financial Officer Amy Hood and saying it explains "why I want us to do this."

Slashdot Top Deals