Bug

Everyone Cites That 'Bugs Are 100x More Expensive To Fix in Production' Research, But the Study Might Not Even Exist (theregister.com) 118

"Software research is a train wreck," says Hillel Wayne, a Chicago-based software consultant who specialises in formal methods, instancing the received wisdom that bugs are way more expensive to fix once software is deployed. Wayne did some research, noting that "if you Google 'cost of a software bug' you will get tons of articles that say 'bugs found in requirements are 100x cheaper than bugs found in implementations.' They all use this chart from the 'IBM Systems Sciences Institute'... There's one tiny problem with the IBM Systems Sciences Institute study: it doesn't exist." The Register: Laurent Bossavit, an Agile methodology expert and technical advisor at software consultancy CodeWorks in Paris, has dedicated some time to this matter, and has a post on GitHub called "Degrees of intellectual dishonesty". Bossavit referenced a successful 1987 book by Roger S Pressman called Software Engineering: a Practitioner's Approach, which states: "To illustrate the cost impact of early error detection, we consider a series of relative costs that are based on actual cost data collected for large software projects [IBM81]." The reference to [IBM81] notes that the information comes from "course notes" at the IBM Systems Sciences Institute. Bossavit discovered, though, that many other publications have referenced Pressman's book as the authoritative source for this research, disguising its tentative nature.

Bossavit took the time to investigate the existence of the IBM Systems Science Institute, concluding that it was "an internal training program for employees." No data was available to support the figures in the chart, which shows the cost of fixing a bug rising to a suspiciously neat 100x once software is in maintenance. "The original project data, if any exist, are not more recent than 1981, and probably older; and could be as old as 1967," said Bossavit, who also described "wanting to crawl into a hole when I encounter bullshit masquerading as empirical support for a claim, such as 'defects cost more to fix the later you fix them'."

IBM

What Ever Happened to IBM's Watson? (nytimes.com) 75

After Watson triumphed on the game show "Jeopardy!" in 2011, its star scientist had to convince IBM that it wasn't a magic answer box, and "explained that Watson was engineered to identify word patterns and predict correct answers for the trivia game."

The New York Times looks at what's happened in the decade since: Watson has not remade any industries. And it hasn't lifted IBM's fortunes. The company trails rivals that emerged as the leaders in cloud computing and A.I. — Amazon, Microsoft and Google. While the shares of those three have multiplied in value many times, IBM's stock price is down more than 10 percent since Watson's "Jeopardy!" triumph in 2011.... The company's missteps with Watson began with its early emphasis on big and difficult initiatives intended to generate both acclaim and sizable revenue for the company, according to many of the more than a dozen current and former IBM managers and scientists interviewed for this article... The company's top management, current and former IBM insiders noted, was dominated until recently by executives with backgrounds in services and sales rather than technology product experts. Product people, they say, might have better understood that Watson had been custom-built for a quiz show, a powerful but limited technology...

IBM insists that its revised A.I. strategy — a pared-down, less world-changing ambition — is working... But the grand visions of the past are gone. Today, instead of being a shorthand for technological prowess, Watson stands out as a sobering example of the pitfalls of technological hype and hubris around A.I. The march of artificial intelligence through the mainstream economy, it turns out, will be more step-by-step evolution than cataclysmic revolution.

One example: IBM technologists approached cancer medical centers, but "were frustrated by the complexity, messiness and gaps in the genetic data at the cancer center... At the end of last year, IBM discontinued Watson for Genomics, which grew out of the joint research with the University of North Carolina. It also shelved another cancer offering, Watson for Oncology, developed with another early collaborator, the Memorial Sloan Kettering Cancer Center..." IBM continued to invest in the health industry, including billions on Watson Health, which was created as a separate business in 2015. That includes more than $4 billion to acquire companies with medical data, billing records and diagnostic images on hundreds of millions of patients. Much of that money, it seems clear, IBM is never going to get back. Now IBM is paring back Watson Health and reviewing the future of the business. One option being explored, according to a report in The Wall Street Journal, is to sell off Watson Health...

Many outside researchers long dismissed Watson as mainly a branding campaign. But recently, some of them say, the technology has made major strides... The business side of Watson also shows signs of life. Now, Watson is a collection of software tools that companies use to build A.I.-based applications — ones that mainly streamline and automate basic tasks in areas like accounting, payments, technology operations, marketing and customer service. It is workhorse artificial intelligence, and that is true of most A.I. in business today. A core Watson capability is natural language processing — the same ability that helped power the "Jeopardy!" win. That technology powers IBM's popular Watson Assistant, used by businesses to automate customer service inquiries...

IBM says it has 40,000 Watson customers across 20 industries worldwide, more than double the number four years ago. Watson products and services are being used 140 million times a month, compared with a monthly rate of about 10 million two years ago, IBM says. Some of the big customers are in health, like Anthem, a large insurer, which uses Watson Assistant to automate customer inquiries.

"Adoption is accelerating," Mr. Thomas said.

Open Source

Libre-SOC's Open Hardware 180nm ASIC Submitted To IMEC for Fabrication (openpowerfoundation.org) 38

"We're building a chip. A fast chip. A safe chip. A trusted chip," explains the web page at Libre-SOC.org. "A chip with lots of peripherals. And it's VPU. And it's a 3D GPU... Oh and here, have the source code."

And now there's big news, reports long-time Slashdot reader lkcl: Libre-SOC's entirely Libre 180nm ASIC, which can be replicated down to symbolic level GDS-II with no NDAs of any kind, has been submitted to IMEC for fabrication.

It is the first wholly-independent Power ISA ASIC outside of IBM to go to silicon in 12 years. Microwatt went to Skywater 130nm in March; however, it was developed by IBM itself, as an exceptionally well-made Reference Design, which Libre-SOC used for verification.

Whilst it would seem that Libre-SOC is jumping on the chip-shortage era's innovation bandwagon, Libre-SOC has actually been in development for over three and a half years so far. It even pre-dates the OpenLane initiative, and has the same objectives: fully automated HDL to GDS-II, full transparency and auditability with Libre VLSI tools Coriolis2 and Libre Cell Libraries from Chips4Makers.

With €400,000 in funding from the NLNet Foundation [a long-standing non-profit supporting privacy, security, and the "open internet"], plus an application to NGI Pointer under consideration, the next steps are to continue development of Draft Cray-style Vectors (SVP64) to the already supercomputer-level Power ISA, under the watchful eye of the upcoming OpenPOWER ISA Workgroup.

Open Source

Experimental Rust Support Patches Submitted to Linux Kernel Mailing List (theregister.com) 55

"The Rust for Linux project, sponsored by Google, has advanced..." reported the Register earlier this week: A new set of patches submitted to the Linux kernel mailing list summarizes the progress of the project to enable Rust to be used alongside C for implementing the Linux kernel. The progress is significant.

- ARM and RISC-V architectures are now supported, thanks to work on rustc_codegen_gcc, which is a GCC codegen for rustc. This means that rustc does the initial compilation of Rust code but GCC (the GNU Compiler Collection) does the backend compilation, enabling support for the architectures that GCC supports...

- Overall, "the Rust support is still to be considered experimental. However, as noted back in April, support is good enough that kernel developers can start working on the Rust abstractions for subsystems and write drivers and other modules," continued project leader Miguel Ojeda, a computer scientist at CERN in Geneva, Switzerland, now working full time on Rust for Linux...

There is substantial support for the project across the industry. Google said in April "we feel that Rust is now ready to join C as a practical language for implementing the kernel" and that it would reduce the number of potential bugs and security vulnerabilities. Google is sponsoring Ojeda to work full time on the project for a year, via the ISRG (Internet Security Research Group), which said last month that it is part of "efforts to move the internet's critical software infrastructure to memory safe code," under the project name Prossimo. The ISRG is also the nonprofit organisation behind Let's Encrypt free security certificates. Ojeda also mentioned that Microsoft's Linux Systems Group is contributing and hopes to submit "select Hyper-V drivers written in Rust." Arm is promising assistance with Rust for Linux on ARM-based systems. IBM has contributed Rust kernel support for its PowerPC processor.

More detail is promised at the forthcoming Linux Plumbers Conference in September. In the meantime, the project is on GitHub here.

"In addition, we would like to announce that we are organizing a new conference that focuses on Rust and the Linux kernel..." Ojeda posted. "Details will be announced soon." And for context, the Register adds: Linus Torvalds has said on several occasions that he welcomes the possibility of using Rust alongside C for kernel development, and told IT Wire in April that it is "getting to the point where maybe it might be mergeable for 5.14 or something like that."

United States

New York City Launches a Cyberdefense Center in Manhattan (engadget.com) 14

Infrastructure cyberattacks are quickly becoming a significant problem in the US, and New York City is opening a facility that could help fend off those potentially dangerous hacks. From a report: The Wall Street Journal reports that NYC has launched a long-in-the-making Cyber Critical Services and Infrastructure (CCSI) operations center in Manhattan to defend against major cyberattacks. The initiative's members are a mix of public and private sector organizations that include Amazon, the Federal Reserve Bank, IBM, the New York Police Department and multiple healthcare providers. If a cyberattack hits, they'll ideally cooperate closely to both overcome the attack and muster a city response if the digital offensive hobbles NYC's infrastructure. Politicians first floated the idea in 2017, but CCSI has been a strictly virtual initiative until now.

Businesses

Jim Whitehurst Steps Down as President at IBM Just 14 Months After Taking Role (techcrunch.com) 57

In a surprise move today, IBM announced that Jim Whitehurst, who came over in the Red Hat deal, would be stepping down as company president just 14 months after taking over in that role. From a report: IBM didn't give a lot of details as to why he was stepping away, but acknowledged his key role in helping bring the 2018 $34 billion Red Hat deal to fruition and helping bring the two companies together after the deal closed. "Jim has been instrumental in articulating IBM's strategy, but also, in ensuring that IBM and Red Hat work well together and that our technology platforms and innovations provide more value to our clients," the company stated.

He will stay on as a senior advisor to CEO Arvind Krishna, but it raises the question of why he is leaving after such a short time in the role, and what he plans to do next. Oftentimes after a deal of this magnitude closes, there is an agreement as to how long key executives will stay. It could be simply that the period has expired and Whitehurst wants to move on, but some saw him as the heir apparent to Krishna and the move comes as a surprise when looked at in that context.

IBM

IBM's 18-Month Company-Wide Email System Migration Has Been a Disaster (theregister.com) 117

An anonymous reader quotes a report from The Register: IBM's planned company-wide email migration has gone off the rails, leaving many employees unable to use email or schedule calendar events. And this has been going on for several days. Current and former IBMers have confirmed to The Register that the migration, 18 months in the making, has been a disaster. We've been told that email service has been intermittent for the past four or five days, and not everyone has been affected in the same way. Lack of access has been shorter for some -- one source told us email was back after two days of downtime. Slack is said to be working, though Outlook, Verse (IBM's webmail), and Notes have been unreliable.

"Outlook won't work with the new system, IBM Notes won't work and the online email called Verse has now gone down," a tipster told us. "Everyone has been affected and no fix is in sight." One source we spoke with laid the blame on IBM CFO James Kavanaugh for cutting costs and not hiring the right people. Over the weekend, a source told us, a blog post to IBM's internal network w3 said the migration had been planned for 18 months and that everything should go fine provided everyone follows the instructions emailed to them. Evidently, this did not happen. Since then, a banner has gone up on w3 pointing people to a Slack channel for updates on the situation, and IBM's CIO has posted a note to employees addressing the problems. Presently, the w3 status page returns an error. We're told that the migration plan followed from IBM's decision in 2018 to sell various software products, including Notes, to India-based HCL Technologies. Following the sale, Big Blue didn't want its data on HCL's servers. "They were transitioning to IBM-owned servers," one source told us. "That's where it broke down."
There's also concern that "disappeared messages may not be restored," says The Register. "We've even heard that IBM employees have been approached by recruiters posing questions like, 'Why are you still at IBM? They can't even get email straight.'"

Hardware

Quantum-Computing Startup Rigetti To Offer Modular Processors (arstechnica.com) 10

An anonymous reader quotes a report from Ars Technica: A quantum-computing startup announced Tuesday that it will make a significant departure in its designs for future quantum processors. Rather than building a monolithic processor as everyone else has, Rigetti Computing will build smaller collections of qubits on chips that can be physically linked together into a single functional processor. This isn't multiprocessing so much as a modular chip design. The decision has several consequences, both for Rigetti processors and quantum computing more generally. We'll discuss them below.

Rigetti's computers rely on a technology called a "transmon," based on a superconducting wire loop linked to a resonator. That's the same qubit technology used by larger competitors like Google and IBM. Transmons are set up so that the state of one can influence that of its neighbors during calculations, an essential feature of quantum computing. To an extent, the topology of connections among transmon qubits is a key contributor to the machine's computational power. Two other factors that currently hold back performance are the error rate of individual qubits and the qubit count. Scaling up the qubit count can boost the computational power of a processor -- but only if all the added qubits are of sufficiently high quality that the error rate doesn't limit the ability to perform accurate computations. Once qubit counts reach the thousands, error correction becomes possible, which changes the process significantly. At the moment, though, we're stuck with fewer than 100 qubits. So this change is still in the indefinite future.

For Rigetti, the ability to merge several smaller processors -- which it has already shown it can produce -- into a single larger one should let it run up its qubit count relatively rapidly. In today's announcement, the company expects that an 80-qubit processor will be available within the next few months. (For context, IBM's roadmap includes plans for a 127-qubit processor sometime this year.) The other advantage of moving away from a monolithic design is that most chips tend to have one or more qubits that are either defective or have an unacceptably high error rate. By going with a modular design, the consequences of that are reduced. Rigetti can manufacture a large collection of modules and assemble chips from those with the fewest defects. Alternately, the company can potentially select for the modules that have qubits with low error rates and build the equivalent of an all-star processor. The reduced error rate could possibly offset the impact of a lower qubit count.
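
The "all-star" assembly described above reduces to a simple selection problem. Here is a toy Python sketch of the idea; the module names, counts, and error figures are invented for illustration and are not Rigetti data:

    # Toy illustration of modular binning: assemble a processor from the best
    # modules in a fabricated batch. All names and numbers here are invented.
    from dataclasses import dataclass

    @dataclass
    class Module:
        name: str
        defective_qubits: int   # qubits that are dead or out of spec
        mean_error_rate: float  # average gate error across the module

    def assemble(batch, modules_needed):
        # Rank by fewest defects first, then by lowest error rate.
        ranked = sorted(batch, key=lambda m: (m.defective_qubits, m.mean_error_rate))
        return ranked[:modules_needed]

    batch = [Module("A1", 0, 0.012), Module("A2", 2, 0.010),
             Module("A3", 0, 0.009), Module("A4", 1, 0.015)]
    print([m.name for m in assemble(batch, 2)])  # ['A3', 'A1']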

Science

How Quantum Computers are Already Untangling Nature's Mysteries (wired.co.uk) 36

Wired published a long extract from Amit Katwala's book Quantum Computing: How It Works and How It Could Change the World — explaining how it's already being put to use to explore some of science's biggest secrets by simulating nature itself: Some of the world's top scientists are engaged in a frantic race to find new battery technologies that can replace lithium-ion with something cleaner, cheaper and more plentiful. Quantum computers could be their secret weapon... Although we've known all the equations we need to simulate chemistry since the 1930s, we've never had the computing power available to do it...

In January 2020, researchers at IBM published an early glimpse of how quantum computers could be useful in the Noisy Intermediate-Scale Quantum Computing era. Working with the German car manufacturer Daimler on improving batteries for electric vehicles, they used a small-scale quantum computer to simulate the behaviour of three molecules containing lithium, which could be used in the next generation of lithium-sulphur batteries that promise to be more powerful and cheaper than today's power cells...

Some other examples:
  • "Chemistry challenges just waiting for a quantum computer powerful and reliable enough to crack them range from the extraction of metals by catalysis through to carbon dioxide fixation, which could be used to capture emissions and slow climate change. But the one with the potential for the biggest impact might be fertiliser production... Some plants rely on bacteria which use an enzyme called nitrogenase to 'fix' nitrogen from the atmosphere and incorporate it into ammonia. Understanding how this enzyme works would be an important step towards...creating less energy-intensive synthetic fertilisers."
  • "Solar panels are another area where quantum computers could help, by accelerating the search for new materials. This approach could also help to identify new materials for batteries, and superconductors that work at room temperature, which would drive advances in motors, magnets and perhaps even quantum computers themselves...."
  • "Quantum computing could help scientists model complex interactions and processes in the body, enabling the discovery of new treatments for diseases such as Alzheimer's, or a quicker understanding of new diseases such as Covid-19. Artificial intelligence is already being used by companies such as DeepMind to gain insight into protein folding — a key facet of growth and disease — and quantum computers will accelerate this effort."

AI

Robotic AI-Powered Ship Tries Retracing Mayflower's Voyage, Has to Turn Back (apnews.com) 34

Check out this video footage of the sleek Mayflower 400 slicing through the water as it tries to retrace the historic 1620 journey of the famous ship that carried pilgrims to America. Unfortunately, unlike the real Mayflower, this robotic 21st-century doppelganger "had to turn back Friday to fix a mechanical problem," reports the Associated Press: Nonprofit marine research organization ProMare, which worked with IBM to build the autonomous ship, said it made the decision to return to base "to investigate and fix a minor mechanical issue" but hopes to be back on the trans-Atlantic journey as soon as possible.

With no humans on board the ship, there's no one to make repairs while it's at sea.

Piloted by artificial intelligence technology, the 50-foot (15-meter) Mayflower Autonomous Ship began its trip early Tuesday, departing from Plymouth, England, and spending some time off the Isles of Scilly before it headed for deeper waters.

It was supposed to take up to three weeks to reach Provincetown on Cape Cod before making its way to Plymouth, Massachusetts. If successful, it would be the largest autonomous vessel to cross the Atlantic.

Linux

Linux Foundation Readies Global COVID Certificate Network (zdnet.com) 131

An anonymous reader quotes a report from ZDNet: The Linux Foundation Public Health (LFPH) is getting the Global COVID Certificate Network (GCCN) ready for deployment. The GCCN [...] really is a coronavirus vaccine passport. It will work by establishing a global trust registry network, which will enable interoperable and trustworthy exchanges of COVID certificates among countries for safe reopening, and will provide related technology and guidance for implementation. It's being built by the Linux Foundation Public Health and its allies, Affinidi, AOKPass, Blockchain Labs, Evernym, IBM, Indicio.Tech, LACChain, Lumedic, Proof Market, and ThoughtWorks. These companies have already implemented COVID certificate or pass systems for governments and industries. Together they will define and implement GCCN. This, it's hoped, will be the model for a true international vaccine registry.

Once completed, the GCCN's trust registry network will enable each country to publish a list of the authorized issuers of COVID certificates that can be digitally verified by authorities in other countries. This will bridge the gap between technical specifications (e.g. W3C Verifiable Credentials or SMART Health Card) and a complete trust architecture required for safe reopening. This is vital because as Brian Behlendorf, the Linux Foundation's General Manager for Blockchain, Healthcare, and Identity explained, "The first wave of apps for proving one's COVID status did not allow that proof to be shown beyond a single state or nation, did not avoid vendor lock-in and did not distinguish between rich health data and simple passes. The Blueprint gives this industry a way to solve those issues while meeting a high bar for privacy and integrity, and GCCN turns those plans into action."

Once in place, the GCCN will support Global COVID Certificates (GCC). These certificates will have three use cases: Vaccination, recovery from infection, and test results. They will be available in both paper and digital formats. Participating governments and industry alliances will decide what COVID certificates they issue and accept. The GCC schema definitions and minimal datasets will follow the recommendations of the Blueprint, as well as GCCN's technical and governance documents, implementation guide, and open-source reference implementations, which will be developed in collaboration with supporting organizations and the broader LFPH community. Besides setting the specs and designs, the GCCN community will also offer peer-based implementation and governance guidance to governments and industries to help them implement COVID certificate systems. This will include how to build national and state trust registries and infrastructure. They'll also provide guidance on how to leverage GCC into their existing coronavirus vaccine systems.
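
The trust-registry mechanics are straightforward to sketch. The following minimal Python example shows the lookup-then-verify flow described above; the registry layout, issuer identifiers, and the verify_signature stub are assumptions for illustration, not anything from the GCCN specifications:

    # Illustrative only: data shapes and names are assumptions, not GCCN's spec.
    # Each country publishes the issuers it has authorized; a verifier abroad
    # accepts a certificate only if its issuer appears in that country's list
    # and the digital signature checks out.
    trust_registry = {
        "FR": {"did:example:fr-health-ministry"},  # hypothetical issuer IDs
        "JP": {"did:example:jp-mhlw"},
    }

    def is_certificate_trusted(cert, verify_signature):
        authorized = trust_registry.get(cert["country"], set())
        return cert["issuer"] in authorized and verify_signature(cert)

    cert = {"country": "FR", "issuer": "did:example:fr-health-ministry",
            "type": "vaccination"}
    print(is_certificate_trusted(cert, verify_signature=lambda c: True))  # True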

IBM

Will Labor Shortages Give Workers More Power? (msn.com) 174

It's been argued that technology (especially automation) will continue weakening the position of workers. But today the senior economics correspondent for The New York Times argues that a "profound shift" happening in America points the other way.

"For the first time in a generation, workers are gaining the upper hand..." Up and down the wage scale, companies are becoming more willing to pay a little more, to train workers, to take chances on people without traditional qualifications, and to show greater flexibility in where and how people work. The erosion of employer power began during the low-unemployment years leading up to the pandemic and, given demographic trends, could persist for years. March had a record number of open positions, according to federal data that goes back to 2000, and workers were voluntarily leaving their jobs at a rate that matches its historical high. Burning Glass Technologies, a firm that analyzes millions of job listings a day, found that the share of postings that say "no experience necessary" is up two-thirds over 2019 levels, while the share of those promising a starting bonus has doubled.

People are demanding more money to take a new job. The "reservation wage," as economists call the minimum compensation workers would require, was 19 percent higher for those without a college degree in March than in November 2019, a jump of nearly $10,000 a year, according to a survey by the Federal Reserve Bank of New York... [T]he demographic picture is not becoming any more favorable for employers eager to fill positions. Population growth for Americans between ages 20 and 64 turned negative last year for the first time in the nation's history. The Congressional Budget Office projects that the potential labor force will grow a mere 0.3 to 0.4 percent annually for the remainder of the 2020s; the size of the work force rose an average of 0.8 percent a year from 2000 to 2020.

The article describes managers now "being forced to learn how to operate amid labor scarcity... At the high end of the labor market, that can mean workers are more emboldened to leave a job if employers are insufficiently flexible on issues like working from home..."

But it also notes a ride-sharing driver who switched to an IBM apprenticeship for becoming a cloud storage engineer, and former Florida nightclub bouncer Alex Lorick, who became an IBM mainframe technician, "part of a deliberate effort by IBM to rethink how it hires and what counts as a qualification for a given job." [IBM] executives concluded that the qualifications for many jobs were unnecessarily demanding. Postings might require applicants to have a bachelor's degree, for example, in jobs that a six-month training course would adequately prepare a person for.

"By creating your own dumb barriers, you're actually making your job in the search for talent harder," said Obed Louissaint, IBM's senior vice president for transformation and culture. In working with managers across the company on training initiatives like the one under which Mr. Lorick was hired, "it's about making managers more accountable for mentoring, developing and building talent versus buying talent."

"I think something fundamental is changing, and it's been happening for a while, but now it's accelerating," Mr. Louissaint said.

China

The Woman Who Mastered IBM's 5,400-character Chinese Typewriter (fastcompany.com) 58

Fast Company's technology editor harrymcc writes: In the 1940s, IBM tried to market a typewriter capable of handling all 5,400 Chinese characters. The catch was that using it required memorizing a 4-digit code for each character. But a young woman named Lois Lew tackled the challenge and demoed the typewriter for the company in presentations from Manhattan to Shanghai.

More than 70 years later, Lew, now in her 90s, told her remarkable story to Thomas S. Mullaney for Fast Company.

AI

RAI's Certification Process Aims To Prevent AIs From Turning Into HALs (engadget.com) 71

An anonymous reader quotes a report from Engadget: [T]he Responsible Artificial Intelligence Institute (RAI) -- a non-profit developing governance tools to help usher in a new generation of trustworthy, safe, Responsible AIs -- hopes to offer a more standardized means of certifying that our next HAL won't murder the entire crew. In short, they want to build "the world's first independent, accredited certification program of its kind." Think of the LEED green building certification system used in construction but with AI instead. Work towards this certification program began nearly half a decade ago alongside the founding of RAI itself, at the hands of Dr. Manoj Saxena, University of Texas Professor on Ethical AI Design, RAI Chairman and a man widely considered to be the "father" of IBM Watson, though his initial inspiration came even further back.

Certifications are awarded in four levels -- basic, silver, gold, and platinum (sorry, no bronze) -- based on the AI's scores along the five OECD principles of Responsible AI: interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation, and data quality/privacy. The certification is administered via questionnaire and a scan of the AI system. Developers must score 60 points to reach the base certification, 70 points for silver and so on, up to 90 points-plus for platinum status. [Mark Rolston, founder and CCO of argodesign] notes that design analysis will play an outsized role in the certification process. "Any company that is trying to figure out whether their AI is going to be trustworthy needs to first understand how they're constructing that AI within their overall business," he said. "And that requires a level of design analysis, both on the technical front and in terms of how they're interfacing with their users, which is the domain of design."
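
The scoring ladder is simple enough to sketch in Python. The 60, 70, and 90-plus thresholds come from the article; the 80-point cutoff for gold is an inference from "and so on" and should be treated as an assumption:

    # Sketch of the RAI certification ladder described above.
    LEVELS = [(90, "platinum"), (80, "gold"), (70, "silver"), (60, "basic")]

    def certification_level(score):
        # Return the highest level whose threshold the score meets.
        for threshold, level in LEVELS:
            if score >= threshold:
                return level
        return None  # below 60 points: no certification

    print(certification_level(72))  # silver
    print(certification_level(93))  # platinum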

RAI expects to find (and in some cases has already found) a number of willing entities from government, academia, enterprise corporations, or technology vendors for its services, though the two are remaining mum on specifics while the program is still in beta (until November 15th, at least). Saxena hopes that, like the LEED certification, RAI will eventually evolve into a universalized certification system for AI. He argues it will help accelerate the development of future systems by eliminating much of the uncertainty and liability exposure today's developers -- and their harried compliance officers -- face while building public trust in the brand. "We're using standards from IEEE, we are looking at things that ISO is coming out with, we are looking at leading indicators from the European Union like GDPR, and now this recently announced algorithmic law," Saxena said. "We see ourselves as the 'do tank' that can operationalize those concepts and those think tanks' work."

Hardware

Taiwan's TSMC Claims Breakthrough On 1nm Chips (taiwannews.com.tw) 70

Hmmmmmm shares a report from Taiwan News: Taiwan Semiconductor Manufacturing Co. (TSMC), National Taiwan University (NTU), and the Massachusetts Institute of Technology (MIT) have made a significant breakthrough in the development of 1-nanometer chips, reports said Tuesday (May 18). The joint announcement has trumped IBM's statement earlier in the month about the development of a 2nm semiconductor, British website Verdict reported. While at present the most advanced chips are 5nm, TSMC's breakthrough is likely to lead to power savings and higher speeds for future electric vehicles, artificial intelligence, and other new technologies.

The discovery was first made by the MIT team, with elements optimized by TSMC and improved by NTU's Department of Electrical Engineering and Optoelectronics, according to a report in the journal Nature. The key element of the research outcome was that using the semi-metal bismuth as the contact electrode of a two-dimensional material to replace silicon can cut resistance and increase the current, Verdict reported. Energy efficiency would thus increase to the highest possible level for semiconductors.

Government

US Lawmakers Could Restrict the Use of Non-Compete Agreements (protocol.com) 119

Politico's technology site Protocol reports that some U.S. lawmakers are getting angry about an unpopular but widespread corporate policy -- the non-compete agreement: Non-compete agreements prohibit employees who leave their jobs from taking similar positions with potential competitors for a certain period of time. In the U.S., somewhere between 27.8% and 46.5% of private-sector workers are subject to non-compete agreements, according to a 2019 Economic Policy Institute study.

Such agreements are unenforceable in California and limited in nearby Washington, but they can still have adverse effects on employees nationwide. That's why a current piece of legislation, the Workforce Mobility Act, seeks at the federal level to restrict the use of non-compete agreements in most situations. Sens. Chris Murphy and Todd Young introduced the bill, which would only allow non-competes in certain "necessary" situations... Non-compete legislation also has the support of President Joe Biden, who said during his campaign he would support such a bill. John Lettieri, president and CEO of the Economic Innovation Group, is a proponent of the Workforce Mobility Act and suggested the bill should enjoy broad support. "We believe we're in a position where it's possible for this to become law," Lettieri told Protocol.

"Whether you're a free market conservative or whether you're a pro-worker progressive, you can come from either of those ends of the spectrum and end up in the same place. And this is a special issue for that reason... Competition is generally good and for workers, competition among businesses for your labor is the most fundamental bargaining power you've got," he said. But if companies hinder that with non-compete agreements, they create "a downstream series of consequences that really are bad for the worker, they're bad for the broader labor market and it's increasingly clear they're bad for the broader economy as well...."

Companies such as Amazon and Microsoft — both headquartered in Washington state — and New York-headquartered IBM have all sued employees for breaking the terms of their non-compete agreements.

Programming

IBM's CodeNet Dataset Can Teach AI To Translate Computer Languages (engadget.com) 40

IBM announced during its Think 2021 conference on Monday that its researchers have crafted a Rosetta Stone for programming code. Engadget reports: In effect, we've taught computers how to speak human, so why not also teach computers to speak more computer? That's what IBM's Project CodeNet seeks to accomplish. "We need our ImageNet, which can snowball the innovation and can unleash this innovation in algorithms," [Ruchir Puri, IBM Fellow and Chief Scientist at IBM Research, said during his Think 2021 presentation]. CodeNet is essentially the ImageNet of computers. It's an expansive dataset designed to teach AI/ML systems how to translate code and consists of some 14 million snippets and 500 million lines spread across more than 55 legacy and active languages -- from COBOL and FORTRAN to Java, C++, and Python.

"Since the data set itself contains 50 different languages, it can actually enable algorithms for many pairwise combinations," Puri explained. "Having said that, there has been work done in human language areas, like neural machine translation which, rather than doing pairwise, actually becomes more language-independent and can derive an intermediate abstraction through which it translates into many different languages." In short, the dataset is constructed in a manner that enables bidirectional translation. That is, you can take some legacy COBOL code -- which, terrifyingly, still constitutes a significant amount of this country's banking and federal government infrastructure -- and translate it into Java as easily as you could take a snippet of Java and regress it back into COBOL.

CodeNet can be used for functions like code search and clone detection, in addition to its intended translational duties and serving as a benchmark dataset. Also, each sample is labeled with its CPU run time and memory footprint, allowing researchers to run regression studies and potentially develop automated code correction systems. Project CodeNet consists of more than 14 million code samples along with 4000-plus coding problems collected and curated from decades of programming challenges and competitions across the globe. "The way the data set actually came about," Puri said, "there are many kinds of programming competitions and all kinds of problems -- some of them more businesslike, some of them more academic. These are the languages that have been used over the last decade and a half in many of these competitions with 1000s of students or competitors submitting solutions." Additionally, users can run individual code samples "to extract metadata and verify outputs from generative AI models for correctness," according to an IBM press release. "This will enable researchers to program intent equivalence when translating one programming language into another." [...] IBM intends to release the CodeNet data to the public domain, allowing researchers worldwide equal and free access.
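
Puri's point about pairwise combinations is easy to see with a sketch. Assuming, hypothetically, that each sample carries a problem ID and a language label (this is not CodeNet's actual schema), parallel translation pairs fall out of a simple grouping:

    # Hypothetical illustration, not CodeNet's real schema: accepted solutions
    # to the same problem in two languages form a parallel (source, target)
    # pair for training a code-translation model.
    from itertools import product

    samples = [
        {"problem": "p00001", "lang": "COBOL",  "code": "...cobol solution..."},
        {"problem": "p00001", "lang": "Java",   "code": "...java solution..."},
        {"problem": "p00001", "lang": "Python", "code": "...python solution..."},
        {"problem": "p00002", "lang": "Java",   "code": "...java solution..."},
    ]

    def translation_pairs(samples, src_lang, dst_lang):
        by_problem = {}
        for s in samples:
            by_problem.setdefault(s["problem"], {}).setdefault(s["lang"], []).append(s["code"])
        for langs in by_problem.values():
            # Every (src, dst) combination for this problem is a training pair.
            yield from product(langs.get(src_lang, []), langs.get(dst_lang, []))

    print(list(translation_pairs(samples, "COBOL", "Java")))
    # [('...cobol solution...', '...java solution...')]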

Hardware

'Despite Chip Shortage, Chip Innovation Is Booming' (nytimes.com) 33

The New York Times reports on surprising silver linings of the global chip shortage: Even as a chip shortage is causing trouble for all sorts of industries, the semiconductor field is entering a surprising new era of creativity, from industry giants to innovative start-ups seeing a spike in funding from venture capitalists that traditionally avoided chip makers. Taiwan Semiconductor Manufacturing Company and Samsung Electronics, for example, have managed the increasingly difficult feat of packing more transistors on each slice of silicon. IBM on Thursday announced another leap in miniaturization, a sign of continued U.S. prowess in the technology race. Perhaps most striking, what was a trickle of new chip companies is now approaching a flood.

Equity investors for years viewed semiconductor companies as too costly to set up, but in 2020 plowed more than $12 billion into 407 chip-related companies, according to CB Insights. Though a tiny fraction of all venture capital investments, that was more than double what the industry received in 2019 and eight times the total for 2016. Synopsys, the biggest supplier of software that engineers use to design chips, is tracking more than 200 start-ups designing chips for artificial intelligence, the ultrahot technology powering everything from smart speakers to self-driving cars. Cerebras, a start-up that sells massive artificial-intelligence processors that span an entire silicon wafer, for example, has attracted more than $475 million. Groq, a start-up whose chief executive previously helped design an artificial-intelligence chip for Google, has raised $367 million.

"It's a bloody miracle," said Jim Keller, a veteran chip designer whose resume includes stints at Apple, Tesla and Intel and who now works at the A.I. chip start-up Tenstorrent. "Ten years ago you couldn't do a hardware start-up...."

More companies are concluding that software running on standard Intel-style microprocessors is not the best solution for all problems. For that reason, companies like Cisco Systems and Hewlett Packard Enterprise have long designed specialty chips for products such as networking gear. Giants like Apple, Amazon and Google more recently have gotten into the act. Google's YouTube unit recently disclosed its first internally developed chip to speed video encoding.

And Volkswagen even said last week that it would develop its own processor to manage autonomous driving.

IBM

IBM Creates First 2nm Chip (anandtech.com) 74

An anonymous reader shares a report: Every decade is the decade that tests the limits of Moore's Law, and this decade is no different. With the arrival of Extreme Ultraviolet (EUV) technology, the intricacies of multipatterning techniques developed on previous technology nodes can now be applied with the finer resolution that EUV provides. That, along with other more technical improvements, can lead to a decrease in transistor size, enabling the future of semiconductors. To that end, today IBM is announcing it has created the world's first 2 nanometer node chip. Just to clarify here, while the process node is being called '2 nanometer,' nothing about transistor dimensions resembles a traditional expectation of what 2nm might be. In the past, the dimension used to be an equivalent metric for 2D feature size on the chip, such as 90nm, 65nm, and 40nm. However with the advent of 3D transistor design with FinFETs and others, the process node name is now an interpretation of an 'equivalent 2D transistor' design.

Some of the features on this chip are likely to be low single digits in actual nanometers, such as transistor fin leakage protection layers, but it's important to note the disconnect in how process nodes are currently named. Often the argument pivots to transistor density as a more accurate metric, and this is something that IBM is sharing with us. Today's announcement states that IBM's 2nm development will improve performance by 45% at the same power, or use 75% less energy at the same performance, compared to modern 7nm processors. IBM is keen to point out that it was the first research institution to demonstrate 7nm in 2015 and 5nm in 2017, the latter of which upgraded from FinFETs to nanosheet technologies that allow for a greater customization of the voltage characteristics of individual transistors. IBM states that the technology can fit '50 billion transistors onto a chip the size of a fingernail.' We reached out to IBM to ask for clarification on what the size of a fingernail was, given that internally we were coming up with numbers from 50 square millimeters to 250 square millimeters. IBM's press relations stated that a fingernail in this context is 150 square millimeters. That puts IBM's transistor density at 333 million transistors per square millimeter (MTr/mm^2).
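
That density figure follows directly from the two numbers IBM supplied, as a quick check shows:

    # Reproducing the quoted density: 50 billion transistors on 150 mm^2.
    transistors = 50e9
    area_mm2 = 150
    print(transistors / area_mm2 / 1e6)  # ~333.3 million transistors per mm^2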

Technology

CES 2022 Will Return To Las Vegas in Person (cnet.com) 14

CES 2022 is going back to Las Vegas following this year's all-digital event, the Consumer Technology Association said Thursday, as coronavirus restrictions ease in the US. The event will take place Jan. 5-8, with media days taking place Jan. 3-4. From a report: Around 1,000 companies -- including Amazon, AMD, AT&T, Daimler AG, Dell, Google, Hyundai, IBM, Intel, Lenovo, LG, Panasonic, Qualcomm, Samsung and Sony -- are on board for the event, according to the CTA, with more being added. You'll also be able to attend digitally. Plans for the event will evolve depending on coronavirus safety measures from the Centers for Disease Control and Prevention, the CTA noted.
