China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention in a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and the Beijing Academy of Artificial Intelligence, which describes a pretrained machine learning model called BaGuaLu running across more than 37 million cores and 14.5 trillion parameters (presumably with FP32 single precision), with the capability to scale to 174 trillion parameters (approaching what is called "brain-scale," where the number of parameters nears the number of synapses in the human brain)....
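For a rough sense of scale for those parameter counts, a back-of-the-envelope sketch (assuming plain FP32 weight storage with no optimizer state, which the excerpt only presumes) works out to tens of terabytes of parameters:

```python
# Back-of-the-envelope parameter-memory estimate (illustrative only).
# Assumes 4 bytes per parameter (FP32) and ignores optimizer state,
# activations, and any replication across the 37-million-plus cores.
BYTES_PER_FP32 = 4

for name, params in [("BaGuaLu run", 14.5e12), ("claimed ceiling", 174e12)]:
    terabytes = params * BYTES_PER_FP32 / 1e12
    print(f"{name}: {params:.3g} parameters ~ {terabytes:,.0f} TB of FP32 weights")

# BaGuaLu run: 1.45e+13 parameters ~ 58 TB of FP32 weights
# claimed ceiling: 1.74e+14 parameters ~ 696 TB of FP32 weights
```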

Add it all up, and the 105 cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets (also a base 2 number), OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratory today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.
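The cabinet extrapolation is straightforward arithmetic. A small sketch (using only the article's figures, and assuming 1,024 nodes per cabinet, which is implied by the 160-cabinet/163,840-node pairing rather than stated outright) reproduces the peak numbers:

```python
# Reproduce the article's back-of-the-envelope peak-performance scaling.
# Assumption: 1,024 single-socket SW26010-Pro nodes per cabinet, implied by
# the 160 cabinets -> 163,840 nodes pairing in the text.
NODES_PER_CABINET = 1024
TESTED_NODES = 107_250          # processors in the 105-cabinet BaGuaLu run
TESTED_PEAK_EXAFLOPS = 1.51     # peak FP64/FP32 for the tested partition

peak_per_node = TESTED_PEAK_EXAFLOPS / TESTED_NODES   # exaflops per node

for cabinets in (120, 160):
    nodes = cabinets * NODES_PER_CABINET
    print(f"{cabinets} cabinets -> {nodes:,} nodes -> "
          f"{nodes * peak_per_node:.2f} exaflops peak")

# 120 cabinets -> 122,880 nodes -> 1.73 exaflops peak (article rounds to 1.72)
# 160 cabinets -> 163,840 nodes -> 2.31 exaflops peak ("just under 2.3")
```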

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....
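The cost speculation turns on yield. A standard dies-per-wafer and yield estimate shows why a mature 14 nanometer process with good yield can undercut a bleeding-edge die; every number below is a hypothetical placeholder for illustration, not a figure from the article:

```python
import math

# Classic cost-per-good-die estimate (illustrative; all inputs below are
# hypothetical placeholders, not data from the article).
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    # Standard approximation accounting for edge loss.
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

def cost_per_good_die(die_area_mm2, wafer_cost, defects_per_cm2):
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_cm2)
    return wafer_cost / good_dies

# Hypothetical comparison: a large die on a cheaper, mature wafer with better
# yield vs. the same die on a pricier, leakier leading-edge wafer.
print(f"${cost_per_good_die(die_area_mm2=600, wafer_cost=3000, defects_per_cm2=0.10):.0f} per good die")
print(f"${cost_per_good_die(die_area_mm2=600, wafer_cost=9000, defects_per_cm2=0.15):.0f} per good die")
```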

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

Science

Computers Uncover 100,000 Novel Viruses in Old Genetic Data (science.org) 50

sciencehabit writes: It took just one virus to cripple the world's economy and kill millions of people; yet virologists estimate that trillions of still-unknown viruses exist, many of which might be lethal or have the potential to spark the next pandemic. Now, they have a new -- and very long -- list of possible suspects to interrogate. By sifting through unprecedented amounts of existing genomic data, scientists have uncovered more than 100,000 novel viruses, including nine coronaviruses and more than 300 related to the hepatitis Delta virus, which can cause liver failure. "It's a foundational piece of work," says J. Rodney Brister, a bioinformatician at the National Center for Biotechnology Information's National Library of Medicine who was not involved in the new study. The work expands the number of known viruses that use RNA instead of DNA for their genes by an order of magnitude. It also "demonstrates our outrageous lack of knowledge about this group of organisms," says disease ecologist Peter Daszak, president of the EcoHealth Alliance, a nonprofit research group in New York City that is raising money to launch a global survey of viruses. The work will also help launch so-called petabyte genomics -- the analyses of previously unfathomable quantities of DNA and RNA data.

That wasn't exactly what computational biologist Artem Babaian had in mind when he was in between jobs in early 2020. Instead, he was simply curious about how many coronaviruses -- aside from the virus that had just launched the COVID-19 pandemic -- could be found in sequences in existing genomic databases. So, he and independent supercomputing expert Jeff Taylor scoured cloud-based genomic data that had been deposited to a global sequence database and uploaded by the U.S. National Institutes of Health. As of now, the database contains 16 petabytes of archived sequences, which come from genetic surveys of everything from fugu fish to farm soils to the insides of human guts. (A database with a digital photo of every person in the United States would take up about the same amount of space.) The genomes of viruses infecting different organisms in these samples are also captured by sequencing, but they usually go undetected.
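That 16-petabyte figure is easier to picture with a quick check of the photo comparison; the population and photo-size figures below are rough assumptions, not from the article:

```python
# Rough sanity check of the "photo of every American" comparison.
# Assumptions (not from the article): ~330 million people.
ARCHIVE_PETABYTES = 16
US_POPULATION = 330e6

bytes_per_person = ARCHIVE_PETABYTES * 1e15 / US_POPULATION
print(f"~{bytes_per_person / 1e6:.0f} MB per person")   # ~48 MB, i.e. a small
                                                        # album of high-resolution photos
```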

Hardware

First RISC-V Computer Chip Lands At the European Processor Initiative (theregister.com) 27

An anonymous reader quotes a report from The Register: The European Processor Initiative (EPI) has run the successful first test of its RISC-V-based European Processor Accelerator (EPAC), touting it as the initial step towards homegrown supercomputing hardware. EPI, launched back in 2018, aims to increase the independence of Europe's supercomputing industry from foreign technology companies. At its heart is the adoption of the free and open-source RISC-V instruction set architecture for the development and production of high-performance chips within Europe's borders. The project's latest milestone is the delivery of 143 samples of EPAC chips, accelerators designed for high-performance computing applications and built around that same RISC-V architecture. Intended to prove the processor's design, the 22nm test chips -- fabbed at GlobalFoundries, the not-terribly-European semiconductor manufacturer spun out of AMD back in 2009 -- have passed initial testing, running a bare-metal "hello, world" program as proof of life.

It's a rapid turnaround. The EPAC design was proven on FPGA in March and the project announced silicon tape-out for the test chips in June -- hitting a 26.97mm2 area with 14 million placeable instances, equivalent to 93 million gates, including 991 memory instances. While the FPGA variant, which implemented a subset of the functions of the full EPAC design, was shown booting a Linux operating system, the physical test chips have so far only been tested with basic bare-metal workloads -- leaving plenty of work to be done.
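For a sense of how dense that test silicon is, the quoted figures imply a couple of simple ratios (a quick sketch using only the numbers in the article):

```python
# Simple ratios from the quoted EPAC test-chip figures.
area_mm2 = 26.97
placeable_instances = 14e6
gate_equivalents = 93e6

print(f"{gate_equivalents / area_mm2 / 1e6:.1f} million gate-equivalents per mm^2")
print(f"{gate_equivalents / placeable_instances:.1f} gate-equivalents per placed instance")
# ~3.4 million gates/mm^2 and ~6.6 gates per instance on the 22nm process
```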
AI

What Does It Take to Build the World's Largest Computer Chip? (newyorker.com) 23

The New Yorker looks at Cerebras, a startup which has raised nearly half a billion dollars to build massive plate-sized chips targeted at AI applications — the largest computer chip in the world. In the end, said Cerebras's co-founder Andrew Feldman, the mega-chip design offers several advantages. Cores communicate faster when they're on the same chip: instead of being spread around a room, the computer's brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that's ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home...

A typical, large computer chip might draw three hundred and fifty watts of power, but Cerebras's giant chip draws fifteen kilowatts — enough to run a small house. "Nobody ever delivered that much power to a chip," Feldman said. "Nobody ever had to cool a chip like that." In the end, three-quarters of the CS-1, the computer that Cerebras built around its WSE-1 chip, is dedicated to preventing the motherboard from melting. Most computers use fans to blow cool air over their processors, but the CS-1 uses water, which conducts heat better; connected to piping and sitting atop the silicon is a water-cooled plate, made of a custom copper alloy that won't expand too much when warmed, and polished to perfection so as not to scratch the chip. On most chips, data and power flow in through wires at the edges, in roughly the same way that they arrive at a suburban house; for the more metropolitan Wafer-Scale Engines, they needed to come in perpendicularly, from below. The engineers had to invent a new connecting material that could withstand the heat and stress of the mega-chip environment. "That took us more than a year," Feldman said...
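To put the 15-kilowatt figure in context, a quick power-density comparison helps; it assumes the commonly cited WSE-1 die area of roughly 46,225 mm² and a ~800 mm² conventional GPU-class die, neither of which appears in the excerpt:

```python
# Power-density comparison (illustrative; die areas are outside assumptions:
# ~46,225 mm^2 is the commonly cited WSE-1 size, ~800 mm^2 a large GPU die).
chips = {
    "typical large chip": (350, 800),        # watts, die area in mm^2
    "Cerebras WSE-1":     (15_000, 46_225),
}
for name, (watts, area_mm2) in chips.items():
    print(f"{name}: {watts / (area_mm2 / 100):.0f} W per cm^2")
# The wafer-scale part draws ~43x the total power, but spread over a far
# larger area its W/cm^2 is in the same ballpark; the hard part is removing
# 15 kW from a single package, hence the water-cooled cold plate.
```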

[I]n a rack in a data center, it takes up the same space as fifteen of the pizza-box-size machines powered by G.P.U.s. Custom-built machine-learning software works to assign tasks to the chip in the most efficient way possible, and even distributes work in order to prevent cold spots, so that the wafer doesn't crack.... According to Cerebras, the CS-1 is being used in several world-class labs — including the Lawrence Livermore National Laboratory, the Pittsburgh Supercomputing Center, and E.P.C.C., the supercomputing centre at the University of Edinburgh — as well as by pharmaceutical companies, industrial firms, and "military and intelligence customers." Earlier this year, in a blog post, an engineer at the pharmaceutical company AstraZeneca wrote that it had used a CS-1 to train a neural network that could extract information from research papers; the computer performed in two days what would take "a large cluster of G.P.U.s" two weeks.

The U.S. National Energy Technology Laboratory reported that its CS-1 solved a system of equations more than two hundred times faster than its supercomputer, while consuming "a fraction" of the power. "To our knowledge, this is the first ever system capable of faster-than real-time simulation of millions of cells in realistic fluid-dynamics models," the researchers wrote. They concluded that, because of scaling inefficiencies, there could be no version of their supercomputer big enough to beat the CS-1.... Bronis de Supinski, the C.T.O. for Livermore Computing, told me that, in initial tests, the CS-1 had run neural networks about five times as fast per transistor as a cluster of G.P.U.s, and had accelerated network training even more.
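The "no supercomputer big enough" conclusion is a statement about strong scaling: past some point, adding nodes adds more communication time than it removes compute time. A toy model (not the researchers' actual analysis, just an illustration of the effect, with made-up constants) makes the argument concrete:

```python
# Toy strong-scaling model: compute time shrinks with node count, but
# per-step communication/synchronization cost grows. Purely illustrative;
# not the NETL researchers' actual performance model.
def step_time(nodes, parallel_work=1000.0, comm_per_node=0.002, latency=0.05):
    compute = parallel_work / nodes                 # perfectly parallel part
    communicate = latency + comm_per_node * nodes   # grows with machine size
    return compute + communicate

times = {n: step_time(n) for n in (64, 256, 1024, 4096, 16384, 65536)}
for n, t in times.items():
    print(f"{n:>6} nodes: {t:8.3f} time units")
best = min(times, key=times.get)
print(f"fastest at ~{best} nodes; beyond that, adding hardware makes each step slower")
```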

It all suggests one possible work-around for Moore's Law: optimizing chips for specific applications. "For now," Feldman tells the New Yorker, "progress will come through specialization."
Supercomputing

World's Fastest AI Supercomputer Built from 6,159 NVIDIA A100 Tensor Core GPUs (nvidia.com) 57

Slashdot reader 4wdloop shared this report from NVIDIA's blog, joking that maybe this is where all NVIDIA's chips are going: It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated Thursday at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses. And that performance doesn't even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.
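The "nearly four exaflops" figure lines up with NVIDIA's published per-GPU tensor-core peaks; a quick check (using NVIDIA's spec-sheet numbers of 312 TFLOPS dense and 624 TFLOPS sparse FP16/BF16 tensor throughput per A100, which are not in the excerpt):

```python
# Sanity check of the "nearly four exaflops of AI performance" claim.
# Per-GPU peaks are NVIDIA spec-sheet figures (not from the article):
# 312 TFLOPS dense FP16/BF16 tensor, 624 TFLOPS with structured sparsity.
GPUS = 6_159
DENSE_TFLOPS = 312
SPARSE_TFLOPS = 624

print(f"dense : {GPUS * DENSE_TFLOPS / 1e6:.2f} exaflops")   # ~1.92 EF
print(f"sparse: {GPUS * SPARSE_TFLOPS / 1e6:.2f} exaflops")  # ~3.84 EF, i.e. "nearly four"
```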

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more. In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure. Researchers need the speed of Perlmutter's GPUs to process dozens of exposures from one night to know where to point DESI the next night. Preparing a year's worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days.

"I'm really happy with the 20x speedups we've gotten on GPUs in our preparatory work," said Rollin Thomas, a data architect at NERSC who's helping researchers get their code ready for Perlmutter. DESI's map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe.

A similar spirit fuels many projects that will run on NERSC's new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels. Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time. "In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that," said Brandon Cook, an applications performance specialist at NERSC who's helping researchers launch such projects. That's where Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating point math for simulations and the mixed-precision calculations required for deep learning.
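The mixed-precision pattern the article alludes to is the standard automatic-mixed-precision recipe; a minimal PyTorch-style sketch (a generic illustration, not NERSC's or any specific project's code) looks like this:

```python
# Minimal automatic-mixed-precision training loop (generic sketch, not any
# specific NERSC project's code). Matmuls run in FP16/BF16 on tensor cores
# where available; loss scaling protects small gradients.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
use_amp = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for _ in range(3):  # a few dummy steps with random data
    x = torch.randn(32, 1024, device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = model(x).square().mean()     # forward pass in mixed precision
    scaler.scale(loss).backward()           # scaled backward pass
    scaler.step(optimizer)
    scaler.update()
```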

Supercomputing

Google Plans To Build a Commercial Quantum Computer By 2029 (engadget.com) 56

Google developers are confident they can build a commercial-grade quantum computer by 2029. Engadget reports: Google CEO Sundar Pichai announced the plan during today's I/O stream, and in a blog post, quantum AI lead engineer Erik Lucero further outlined the company's goal to "build a useful, error-corrected quantum computer" within the decade. Executives also revealed Google's new campus in Santa Barbara, California, which is dedicated to quantum AI. The campus has Google's first quantum data center, hardware research laboratories, and the company's very own quantum processor chip fabrication facilities.

"As we look 10 years into the future, many of the greatest global challenges, from climate change to handling the next pandemic, demand a new kind of computing," Lucero said. "To build better batteries (to lighten the load on the power grid), or to create fertilizer to feed the world without creating 2 percent of global carbon emissions (as nitrogen fixation does today), or to create more targeted medicines (to stop the next pandemic before it starts), we need to understand and design molecules better. That means simulating nature accurately. But you can't simulate molecules very well using classical computers."

Australia

Ancient Australian 'Superhighways' Suggested By Massive Supercomputing Study (sciencemag.org) 56

sciencehabit shares a report from Science Magazine: When humans first set foot in Australia more than 65,000 years ago, they faced the perilous task of navigating a landscape they'd never seen. Now, researchers have used supercomputers to simulate 125 billion possible travel routes and reconstruct the most likely "superhighways" these ancient immigrants used as they spread across the continent. The project offers new insight into how landmarks and water supplies shape human migrations, and provides archaeologists with clues for where to look for undiscovered ancient settlements.

It took weeks to run the complex simulations on a supercomputer operated by the U.S. government. But the number crunching ultimately revealed a network of "optimal superhighways" that had the most attractive combinations of easy walking, water, and landmarks. Optimal road map in hand, the researchers faced a fundamental question, says lead author Stefani Crabtree, an archaeologist at Utah State University, Logan, and the Santa Fe Institute: Was there any evidence that real people had once used these computer-identified corridors? To find out, the researchers compared their routes to the locations of the roughly three dozen archaeological sites in Australia known to be at least 35,000 years old. Many sites sat on or near the superhighways. Some corridors also coincided with ancient trade routes known from indigenous oral histories, or aligned with genetic and linguistic studies used to trace early human migrations. "I think all of us were surprised by the goodness of the fit," says archaeologist Sean Ulm of James Cook University, Cairns.
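The "superhighway" reconstruction is, at heart, a least-cost path computation over a landscape grid. The sketch below is a deliberately tiny stand-in for that idea (toy grid, made-up cost weights), not the authors' actual continental-scale model:

```python
import heapq

# Toy least-cost-path search over a small landscape grid. Cell cost stands in
# for a mix of walking difficulty and distance from water; the real study
# combined terrain, water and visible landmarks across the whole continent.
# All weights here are made up for illustration.
grid = [
    [1, 1, 4, 9, 9],
    [2, 1, 1, 8, 9],
    [9, 8, 1, 1, 2],
    [9, 9, 7, 1, 1],
]

def cheapest_path_cost(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = [(grid[start[0]][start[1]], start)]
    best = {start: grid[start[0]][start[1]]}
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = cost + grid[nr][nc]
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc)))
    return float("inf")

print(cheapest_path_cost(grid, start=(0, 0), goal=(3, 4)))  # path hugs the low-cost cells
```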

The map has also highlighted little-studied migration corridors that could yield future archaeological discoveries. For example, some early superhighways sat on coastal lands that are now submerged, giving marine researchers a guide for exploration. Even more intriguing, the authors and others say, are major routes that cut across several arid areas in Australia's center and in the northeastern state of Queensland. Those paths challenge a "long-standing view that the earliest people avoided the deserts," Ulm says. The Queensland highway, in particular, presents "an excellent focus point" for future archaeological surveys, says archaeologist Shimona Kealy of the Australian National University.
The study has been published in the journal Nature Human Behaviour.
Intel

Nvidia To Make CPUs, Going After Intel (bloomberg.com) 111

Nvidia said it's offering the company's first server microprocessors, extending a push into Intel's most lucrative market with a chip aimed at handling the most complicated computing work. Intel shares fell more than 2% on the news. From a report: The graphics chipmaker has designed a central processing unit, or CPU, based on technology from Arm, a company it's trying to acquire from Japan's SoftBank Group. The Swiss National Supercomputing Centre and U.S. Department of Energy's Los Alamos National Laboratory will be the first to use the chips in their computers, Nvidia said Monday at an online event. Nvidia has focused mainly on graphics processing units, or GPUs, which are used to power video games and data-heavy computing tasks in data centers. CPUs, by contrast, are a type of chip that's more of a generalist and can do basic tasks like running operating systems. Expanding into this product category opens up more revenue opportunities for Nvidia.

Founder and Chief Executive Officer Jensen Huang has made Nvidia the most valuable U.S. chipmaker by delivering on his promise to give graphics chips a major role in the explosion in cloud computing. Data center revenue contributes about 40% of the company's sales, up from less than 7% just five years ago. Intel still has more than 90% of the market in server processors, which can sell for more than $10,000 each. The CPU, named Grace after the late pioneering computer scientist Grace Hopper, is designed to work closely with Nvidia graphics chips to better handle new computing problems that will come with a trillion parameters. Systems working with the new chip will be 10 times faster than those currently using a combination of Nvidia graphics chips and Intel CPUs. The new product will be available at the beginning of 2023, Nvidia said.

Supercomputing

US Adds Chinese Supercomputing Entities To Economic Blacklist (reuters.com) 81

The U.S. Commerce Department said Thursday it was adding seven Chinese supercomputing entities to a U.S. economic blacklist for assisting Chinese military efforts. From a report: The department is adding Tianjin Phytium Information Technology, Shanghai High-Performance Integrated Circuit Design Center, Sunway Microelectronics, the National Supercomputing Center Jinan, the National Supercomputing Center Shenzhen, the National Supercomputing Center Wuxi, and the National Supercomputing Center Zhengzhou to its blacklist. The Commerce Department said the seven were "involved with building supercomputers used by China's military actors, its destabilizing military modernization efforts, and/or weapons of mass destruction programs." The Chinese Embassy in Washington did not immediately respond to requests for comment. "Supercomputing capabilities are vital for the development of many -- perhaps almost all -- modern weapons and national security systems, such as nuclear weapons and hypersonic weapons," Commerce Secretary Gina Raimondo said in a statement.
Hardware

Samsung Unveils 512GB DDR5 RAM Module (engadget.com) 33

Samsung has unveiled a new RAM module that shows the potential of DDR5 memory in terms of speed and capacity. Engadget reports: The 512GB DDR5 module is the first to use High-K Metal Gate (HKMG) tech, delivering 7,200 Mbps speeds -- over double that of DDR4, Samsung said. Right now, it's aimed at data-hungry supercomputing, AI and machine learning functions, but DDR5 will eventually find its way to regular PCs, boosting gaming and other applications. First commercialized by Intel, HKMG replaces the usual silicon dioxide gate insulator with a hafnium-based material and swaps the normal polysilicon gate electrodes for metal ones. All of that allows for higher chip densities, while reducing current leakage.

Each chip uses eight layers of 16Gb DRAM chips for a capacity of 128Gb, or 16GB. As such, Samsung would need 32 of those to make a 512GB RAM module. On top of the higher speeds and capacity, Samsung said that the chip uses 13 percent less power than non-HKMG modules -- ideal for data centers, but not so bad for regular PCs, either. With 7,200 Mbps speeds, Samsung's latest module would deliver around 57.6 GB/s transfer speeds on a single channel.
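Those capacity and bandwidth figures are easy to verify from the quoted numbers (a standard 64-bit DDR channel width is assumed, since the article doesn't state it):

```python
# Verify the capacity and bandwidth arithmetic from the quoted figures.
GBITS_PER_DIE = 16
DIES_PER_STACK = 8
STACKS_PER_MODULE = 32
MTPS = 7_200                 # mega-transfers per second
CHANNEL_BITS = 64            # standard DDR data-bus width (assumed)

stack_gbytes = GBITS_PER_DIE * DIES_PER_STACK / 8       # 16 GB per stacked package
module_gbytes = stack_gbytes * STACKS_PER_MODULE         # 512 GB per module
bandwidth_gbps = MTPS * CHANNEL_BITS / 8 / 1000          # 57.6 GB/s per channel

print(stack_gbytes, module_gbytes, bandwidth_gbps)       # 16.0 512.0 57.6
```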

HP

Hewlett Packard Enterprise Will Build a $160 Million Supercomputer in Finland (venturebeat.com) 9

Hewlett Packard Enterprise (HPE) today announced it has been awarded over $160 million to build a supercomputer called LUMI in Finland. LUMI will be funded by the European Joint Undertaking EuroHPC, a joint supercomputing collaboration between national governments and the European Union. From a report: The supercomputer will have a theoretical peak performance of more than 550 petaflops and is expected to best the RIKEN Center for Computational Science's top-performing Fugaku petascale computer, which reached 415.5 petaflops in June 2020.
Power

Researchers Use Supercomputer to Design New Molecule That Captures Solar Energy (liu.se) 36

A reader shares some news from Sweden's Linköping University: The Earth receives many times more energy from the sun than we humans can use. This energy is absorbed by solar energy facilities, but one of the challenges of solar energy is to store it efficiently, such that the energy is available when the sun is not shining. This led scientists at Linköping University to investigate the possibility of capturing and storing solar energy in a new molecule.

"Our molecule can take on two different forms: a parent form that can absorb energy from sunlight, and an alternative form in which the structure of the parent form has been changed and become much more energy-rich, while remaining stable. This makes it possible to store the energy in sunlight in the molecule efficiently", says Bo Durbeej, professor of computational physics in the Department of Physics, Chemistry and Biology at LinkÃping University, and leader of the study...

It's common in research that experiments are done first and theoretical work subsequently confirms the experimental results, but in this case the procedure was reversed. Bo Durbeej and his group work in theoretical chemistry, and conduct calculations and simulations of chemical reactions. This involves advanced computer simulations, which are performed on supercomputers at the National Supercomputer Centre, NSC, in Linköping. The calculations showed that the molecule the researchers had developed would undergo the chemical reaction they required, and that it would take place extremely fast, within 200 femtoseconds. Their colleagues at the Research Centre for Natural Sciences in Hungary were then able to build the molecule, and perform experiments that confirmed the theoretical prediction...

"Most chemical reactions start in a condition where a molecule has high energy and subsequently passes to one with a low energy. Here, we do the opposite — a molecule that has low energy becomes one with high energy. We would expect this to be difficult, but we have shown that it is possible for such a reaction to take place both rapidly and efficiently", says Bo Durbeej.

The researchers will now examine how the stored energy can be released from the energy-rich form of the molecule in the best way...

Supercomputing

ARM Not Just For Macs: Might Make Weather Forecasting Cheaper Too (nag.com) 41

An anonymous reader writes: The fact that Apple is moving away from Intel to ARM has been making a lot of headlines recently — but that's not the only new place where ARM CPUs have been making a splash.

ARM has also been turning heads in High Performance Computing (HPC), and an ARM-based system is now the world's most powerful supercomputer (Fugaku). AWS recently made its 2nd-generation ARM Graviton chips available, which lets anyone test HPC workloads on ARM silicon. A company called The Numerical Algorithms Group recently published a small benchmark study that compared weather simulations on Intel, AMD and ARM instances on AWS and reported that although the ARM silicon is slowest, it is also the cheapest for this benchmark.
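"Slowest but cheapest" is just cost-per-run arithmetic: multiply wall-clock time by the instance's hourly price. The sketch below uses made-up runtimes and prices purely to illustrate the comparison, not the NAG study's actual measurements:

```python
# Cost-per-simulation comparison (illustrative only; runtimes and hourly
# prices below are made up, not the NAG benchmark's actual numbers).
instances = {
    # name: (runtime in hours, on-demand price in $/hour)
    "x86 vendor A": (1.00, 3.00),
    "x86 vendor B": (1.10, 2.60),
    "ARM Graviton2": (1.30, 1.80),
}
for name, (hours, dollars_per_hour) in instances.items():
    print(f"{name}: {hours:.2f} h -> ${hours * dollars_per_hour:.2f} per run")
# The slowest instance can still win on cost per completed forecast if its
# hourly price is low enough.
```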

The benchmark test concludes the ARM processor provides "a very cost-efficient solution...and performance is competitive to other, more traditional HPC processors."
Businesses

Nvidia Reportedly Could Be Pursuing ARM In Disruptive Acquisition Move (hothardware.com) 89

MojoKid writes: Word across a number of business and tech press publications tonight is that NVIDIA is reportedly pursuing a possible acquisition of Arm, the chip IP juggernaut whose designs power everything from virtually every smartphone on the planet (including iPhones) to a myriad of devices in the IoT and embedded spaces, as well as supercomputing and datacenter hardware. NVIDIA has risen in the ranks over the past few years to become a force in the chip industry, and more recently has even been trading places with Intel as the most valuable chipmaker in the United States, with a current market cap of $256 billion. NVIDIA has found major success in consumer and pro graphics, the data center, artificial intelligence/machine learning and automotive sectors in recent years; meanwhile, CEO Jensen Huang has expressed a desire to further branch out into the growing Internet of Things (IoT) market, where Arm chip designs flourish. However, Arm's current parent company, SoftBank, is looking for a hefty return on its investment, and Arm reportedly could be valued at around $44 billion if it were to go public. A deal with NVIDIA, however, would short-circuit those IPO plans and potentially send shockwaves through the semiconductor market.
Supercomputing

A Volunteer Supercomputer Team is Hunting for Covid Clues (defenseone.com) 91

The world's fastest computer is now part of "a vast supercomputer-powered search for new findings pertaining to the novel coronavirus' spread" and "how to effectively treat and mitigate it," according to an emerging tech journalist at Nextgov.

It's part of a consortium currently facilitating over 65 active research projects, for which "Dozens of national and international members are volunteering free compute time...providing at least 485 petaflops of capacity and steadily growing, to more rapidly generate new solutions against COVID-19."

"What started as a simple concept has grown to span three continents with over 40 supercomputer providers," Dario Gil, director of IBM Research and consortium co-chair, told Nextgov last week. "In the face of a global pandemic like COVID-19, hopefully a once-in-a-lifetime event, the speed at which researchers can drive discovery is a critical factor in the search for a cure and it is essential that we combine forces...."

[I]ts resources have been used to sort through billions of molecules to identify promising compounds that can be manufactured quickly and tested for potency to target the novel coronavirus, produce large data sets to study variations in patient responses, perform airflow simulations on a new device that will allow doctors to use one ventilator to support multiple patients — and more. The complex systems are powering calculations, simulations and results in a matter of days that several scientists have noted would take a matter of months on traditional computers.

The Undersecretary for Science at America's Energy Department said "What's really interesting about this from an organizational point of view is that it's basically a volunteer organization."

The article identifies some of the notable participants:
  • IBM was part of the joint launch with America's Office of Science and Technology Policy and its Energy Department.
  • The chief of NASA's Advanced Supercomputing says they're "making the full reserve portion of NASA supercomputing resources available to researchers working on the COVID-19 response, along with providing our expertise and support to port and run their applications on NASA systems."
  • Amazon Web Services "saw a clear opportunity to bring the benefits of cloud... to bear in the race for treatments and a vaccine," according to a company executive.
  • Japan's Fugaku — "which surpassed leading U.S. machines on the Top 500 list of global supercomputers in late June" — also joined the consortium in June.

Other consortium members:

  • Google Cloud
  • Microsoft
  • Massachusetts Institute of Technology
  • Rensselaer Polytechnic Institute
  • The National Science Foundation
  • Argonne, Lawrence Livermore, Los Alamos, Oak Ridge and Sandia National laboratories.
  • National Center for Atmospheric Research's Wyoming Supercomputing Center
  • AMD
  • NVIDIA
  • Dell Technologies. ("The company is now donating cycles from the Zenith supercomputer and other resources.")

Security

Supercomputers Breached Across Europe To Mine Cryptocurrency (zdnet.com) 43

An anonymous reader quotes ZDNet: Multiple supercomputers across Europe have been infected this week with cryptocurrency mining malware and have shut down to investigate the intrusions. Security incidents have been reported in the UK, Germany, and Switzerland, while a similar intrusion is rumored to have also happened at a high-performance computing center located in Spain.

Cado Security, a US-based cyber-security firm, said the attackers appear to have gained access to the supercomputer clusters via compromised SSH credentials... Once attackers gained access to a supercomputing node, they appear to have used an exploit for the CVE-2019-15666 vulnerability to gain root access and then deployed an application that mined the Monero cryptocurrency.

Biotech

Quantum Computing Milestone: Researchers Compute With 'Hot' Silicon Qubits (ieee.org) 18

"Two research groups say they've independently built quantum devices that can operate at temperatures above 1 Kelvin — 15 times hotter than rival technologies can withstand," reports IEEE Spectrum. (In an article shared by Slashdot reader Wave723.)

"The ability to work at higher temperatures is key to scaling up to the many qubits thought to be required for future commercial-grade quantum computers..." HongWen Jiang, a physicist at UCLA and a peer reviewer for both papers, described the research as "a technological breakthrough for semiconductor based quantum computing." In today's quantum computers, qubits must be kept inside large dilution refrigerators at temperatures hovering just above absolute zero. Electronics required to manipulate and read the qubits produce too much heat and so remain outside of the fridge, which adds complexity (and many wires) to the system...

"To me, these works do represent, in rapid succession, pretty big milestones in silicon spin qubits," says John Gamble, a peer reviewer for one of the papers and a senior quantum engineer at Microsoft. "It's compelling work...." Moving forward, Gamble is interested to see if the research groups can scale their approach to include more qubits. He's encouraged by their efforts so far, saying, "The fact that we're seeing these types of advances means the field is progressing really well and that people are thinking of the right problems."

Besides Microsoft, Google and IBM have also "invested heavily in superconducting qubits," the article points out. And there's also a hopeful comment from Lee Bassett, a physicist focused on quantum systems at the University of Pennsylvania. "Each time these silicon devices pass a milestone — and this is an important milestone — it's closer and closer to the inflection point.

"This infrastructure of integrated, silicon-based electronics could take over, and this technology could just explode."
Supercomputing

NVIDIA Is Contributing Its AI Smarts To Help Fight COVID-19 (engadget.com) 12

NVIDIA is contributing its background in AI and supercomputer throughput optimization to the COVID-19 High Performance Computing Consortium, which plans to support researchers by giving them time on 30 supercomputers offering a combined 400 petaflops of performance. Engadget reports: NVIDIA will add to this by providing expertise in AI, biology and large-scale computing optimizations. The company likened the Consortium's efforts to the Moon race. Ideally, this will speed up work for scientists who need to run modelling and other demanding tasks that would otherwise take a long time. NVIDIA has a number of existing contributions to coronavirus research, including the 27,000 GPUs inside the Summit supercomputer and those inside many of the computers from the crowdsourced Folding@Home project. This is still a significant step forward, though, and might prove lifesaving if it leads to a vaccine or more effective containment.
Supercomputing

D-Wave Makes Its Quantum Computers Free To Anyone Working On Coronavirus Crisis 18

An anonymous reader quotes a report from VentureBeat: D-Wave today made its quantum computers available for free to researchers and developers working on responses to the coronavirus (COVID-19) crisis. D-Wave partners and customers Cineca, Denso, Forschungszentrum Jülich, Kyocera, MDR, Menten AI, NEC, OTI Lumionics, QAR Lab at LMU Munich, Sigma-i, Tohoku University, and Volkswagen are also offering to help. They will provide access to their engineering teams with expertise on how to use quantum computers, formulate problems, and develop solutions.

Quantum computing leverages qubits to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing. D-Wave says the move to make access free is a response to a cross-industry request from the Canadian government for solutions to the COVID-19 pandemic. Free and unlimited commercial contract-level access to D-Wave's quantum computers is available in 35 countries across North America, Europe, and Asia via Leap, the company's quantum cloud service. Just last month, D-Wave debuted Leap 2, which includes a hybrid solver service and solves problems of up to 10,000 variables.
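For readers who haven't formulated a problem for a quantum annealer before, the workflow the partners are offering help with boils down to expressing a problem as a binary quadratic model. A minimal sketch with D-Wave's open-source Ocean dimod package, solved locally with a brute-force sampler rather than the Leap cloud service, looks like this:

```python
# Tiny binary-quadratic-model example using D-Wave's open-source dimod
# package. Solved here with the local brute-force ExactSolver; submitting to
# Leap's hardware or hybrid solvers only changes the sampler object.
import dimod

# Toy problem: pick at most one of x and y, preferring to pick one.
# Energy = -x - y + 2*x*y, which is minimized (-1) at x=1,y=0 or x=0,y=1.
bqm = dimod.BinaryQuadraticModel(
    {"x": -1.0, "y": -1.0},      # linear biases reward choosing each variable
    {("x", "y"): 2.0},           # quadratic coupling penalizes choosing both
    0.0,                         # constant offset
    dimod.BINARY,
)

sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first.sample, sampleset.first.energy)   # e.g. {'x': 1, 'y': 0} -1.0
```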
Hardware

Open Source CPU Architecture RISC-V Is Gaining Momentum (insidehpc.com) 41

The CEO of the RISC-V Foundation (a former IBM executive) touted the open-source CPU architecture at this year's HiPEAC conference, arguing there's "a growing demand for custom processors purpose-built to meet the power and performance requirements of specific applications..." As I've been travelling across the globe to promote the benefits of RISC-V at events and meet with our member companies, it's really struck me how the level of commitment to drive the mainstream adoption of RISC-V is like nothing I've seen before. It's exhilarating to witness our community collaborate across industries and geographies with the shared goal of accelerating the RISC-V ecosystem...With more than 420 organizations, individuals and universities that are members of the RISC-V Foundation, there is a really vibrant community collaborating together to drive the progression of ratified specs, compliance suites and other technical deliverables for the RISC-V ecosystem.

While RISC-V has a BSD open source license, designers are welcome to develop proprietary implementations for commercial use as they see fit. RISC-V offers a variety of commercial benefits, enabling companies to accelerate development time while also reducing strategic risk and overall costs. Thanks to these design and cost benefits, I'm confident that members will continue to actively contribute to the RISC-V ecosystem to not only drive innovation forward, but also benefit their bottom line... I don't have a favorite project, but rather I love the amazing spectrum that RISC-V is engaged in — from a wearable health monitor to scaled out cloud data centres, from universities in Pakistan to the University of Bologna in Italy or Barcelona Supercomputing Center in Spain, from design tools to foundries, from the most renowned global tech companies to entrepreneurs raising their first round of capital. Our community is broad, deep, growing and energized...

The RISC-V ecosystem is poised to significantly grow over the next five years. Semico Research predicts that the market will consume a total of 62.4 billion RISC-V central processing unit (CPU) cores by 2025! By that time I look forward to seeing many new types of RISC-V implementations including innovative consumer devices, industrial applications, high performance computing applications and much more... Unlike legacy instruction set architectures (ISAs) which are decades old and are not designed to handle the latest workloads, RISC-V has a variety of advantages including its openness, simplicity, clean-slate design, modularity, extensibility and stability. Thanks to these benefits, RISC-V is ushering in a new era of silicon design and processor innovation.

They also highlighted a major advantage. RISC-V "provides the flexibility to create thousands of possible custom processors. Since implementation is not defined at the ISA level, but rather by the composition of the system-on-chip and other design attributes, engineers can choose to go big, small, powerful or lightweight with their designs."
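Because the base ISA encoding is fixed while implementations vary freely, even a few lines of code can decode a real RV32I instruction. The sketch below (a generic illustration, not code from any particular RISC-V project) pulls apart the standard encoding of `addi x1, x0, 5`:

```python
# Decode one RV32I I-type instruction word to show the fixed base encoding
# that every RISC-V implementation, big or small, agrees on.
def decode_itype(word):
    opcode = word & 0x7F
    rd     = (word >> 7)  & 0x1F
    funct3 = (word >> 12) & 0x07
    rs1    = (word >> 15) & 0x1F
    imm    = word >> 20
    if imm & 0x800:               # sign-extend the 12-bit immediate
        imm -= 0x1000
    return opcode, rd, funct3, rs1, imm

word = 0x00500093                  # standard encoding of: addi x1, x0, 5
opcode, rd, funct3, rs1, imm = decode_itype(word)
assert (opcode, funct3) == (0x13, 0)   # OP-IMM major opcode, ADDI funct3
print(f"addi x{rd}, x{rs1}, {imm}")    # -> addi x1, x0, 5
```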
