Hardware

Open Source CPU Architecture RISC-V Is Gaining Momentum (insidehpc.com) 41

The CEO of the RISC-V Foundation (a former IBM executive) touted the open-source CPU architecture at this year's HiPEAC conference, arguing there's "a growing demand for custom processors purpose-built to meet the power and performance requirements of specific applications..." As I've been travelling across the globe to promote the benefits of RISC-V at events and meet with our member companies, it's really struck me how the level of commitment to drive the mainstream adoption of RISC-V is like nothing I've seen before. It's exhilarating to witness our community collaborate across industries and geographies with the shared goal of accelerating the RISC-V ecosystem... With more than 420 organizations, individuals and universities that are members of the RISC-V Foundation, there is a really vibrant community collaborating together to drive the progression of ratified specs, compliance suites and other technical deliverables for the RISC-V ecosystem.

While RISC-V has a BSD open source license, designers are welcome to develop proprietary implementations for commercial use as they see fit. RISC-V offers a variety of commercial benefits, enabling companies to accelerate development time while also reducing strategic risk and overall costs. Thanks to these design and cost benefits, I'm confident that members will continue to actively contribute to the RISC-V ecosystem to not only drive innovation forward, but also benefit their bottom line... I don't have a favorite project, but rather I love the amazing spectrum that RISC-V is engaged in — from a wearable health monitor to scaled out cloud data centres, from universities in Pakistan to the University of Bologna in Italy or Barcelona Supercomputing Center in Spain, from design tools to foundries, from the most renowned global tech companies to entrepreneurs raising their first round of capital. Our community is broad, deep, growing and energized...

The RISC-V ecosystem is poised to significantly grow over the next five years. Semico Research predicts that the market will consume a total of 62.4 billion RISC-V central processing unit (CPU) cores by 2025! By that time I look forward to seeing many new types of RISC-V implementations including innovative consumer devices, industrial applications, high performance computing applications and much more... Unlike legacy instruction set architectures (ISAs) which are decades old and are not designed to handle the latest workloads, RISC-V has a variety of advantages including its openness, simplicity, clean-slate design, modularity, extensibility and stability. Thanks to these benefits, RISC-V is ushering in a new era of silicon design and processor innovation.

They also highlighted a major advantage. RISC-V "provides the flexibility to create thousands of possible custom processors. Since implementation is not defined at the ISA level, but rather by the composition of the system-on-chip and other design attributes, engineers can choose to go big, small, powerful or lightweight with their designs."
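The modularity described above starts with a deliberately small, fixed-format base ISA. As an illustration (our own sketch, not RISC-V Foundation code), decoding an R-type instruction from the base RV32I specification takes only a few bit masks; the field layout follows the ratified spec, while the helper name is hypothetical:

```python
# Hypothetical sketch: splitting a 32-bit RV32I R-type instruction word
# into its fixed fields. The bit layout is from the ratified base spec;
# the function name and dict representation are our own.

def decode_rtype(word: int) -> dict:
    """Decode a 32-bit R-type instruction into its six fields."""
    return {
        "opcode": word & 0x7F,           # bits 6:0
        "rd":     (word >> 7)  & 0x1F,   # bits 11:7  (destination register)
        "funct3": (word >> 12) & 0x07,   # bits 14:12
        "rs1":    (word >> 15) & 0x1F,   # bits 19:15 (source register 1)
        "rs2":    (word >> 20) & 0x1F,   # bits 24:20 (source register 2)
        "funct7": (word >> 25) & 0x7F,   # bits 31:25
    }

# 0x002081B3 encodes "add x3, x1, x2" in RV32I
fields = decode_rtype(0x002081B3)
print(fields["rd"], fields["rs1"], fields["rs2"])  # 3 1 2
```

Because implementations only have to honor this instruction-level contract, everything below it (pipeline depth, core count, custom accelerators) is left to the designer.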
PlayStation (Games)

The Rise and Fall of the PlayStation Supercomputers (theverge.com) 50

"On the 25th anniversary of the original Sony PlayStation, The Verge shares the story of the PlayStation supercomputers," writes Slashdot reader jimminy_cricket. From the report: Dozens of PlayStation 3s sit in a refrigerated shipping container on the University of Massachusetts Dartmouth's campus, sucking up energy and investigating astrophysics. It's a popular stop for tours trying to sell the school to prospective first-year students and their parents, and it's one of the few living legacies of a weird science chapter in PlayStation's history. Those squat boxes, hulking on entertainment systems or dust-covered in the back of a closet, were once coveted by researchers who used the consoles to build supercomputers. With the racks of machines, the scientists were suddenly capable of contemplating the physics of black holes, processing drone footage, or winning cryptography contests. It only lasted a few years before tech moved on, becoming smaller and more efficient. But for that short moment, some of the most powerful computers in the world could be hacked together with code, wire, and gaming consoles. "The game consoles entered the supercomputing scene in 2002 when Sony released a kit called Linux for the PlayStation 2," reports The Verge. Craig Steffen, senior research scientist at the National Center for Supercomputing Applications, and his group hooked up between 60 and 70 PlayStation 2s, wrote some code, and built out a library.

"The PS3 entered the scene in late 2006 with powerful hardware and an easier way to load Linux onto the devices," the report adds. "Researchers would still need to link the systems together, but suddenly, it was possible for them to imagine linking together all of those devices into something that was a game-changer instead of just a proof-of-concept prototype."
United States

The World's Fastest Supercomputers Hit Higher Speeds Than Ever With Linux (zdnet.com) 124

An anonymous reader quotes a report from ZDNet: In the latest Top 500 supercomputer ratings, the average speed of these Linux-powered racers is now an astonishing 1.14 petaflops. The fastest of the fast machines haven't changed since the June 2019 Top 500 supercomputer list. Leading the way is Oak Ridge National Laboratory's Summit system, which holds top honors with an HPL result of 148.6 petaflops. This is an IBM-built supercomputer using Power9 CPUs and NVIDIA Tesla V100 GPUs. In a rather distant second place is another IBM machine: Lawrence Livermore National Laboratory's Sierra system. It uses the same chips, but it "only" hit a speed of 94.6 petaflops.

Close behind at No. 3 is the Sunway TaihuLight supercomputer, with an HPL mark of 93.0 petaflops. TaihuLight was developed by China's National Research Center of Parallel Computer Engineering and Technology (NRCPC) and is installed at the National Supercomputing Center in Wuxi. It is powered exclusively by Sunway's SW26010 processors. Sunway is followed by the Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China. Powered by Intel Xeon CPUs and Matrix-2000 accelerators, it has a top speed of 61.4 petaflops. Coming in at No. 5 is the Dell-built Frontera, a Dell C6420 system powered by Intel Xeon Platinum processors. It speeds along at 23.5 petaflops and lives at the Texas Advanced Computing Center of the University of Texas. The most powerful new supercomputer on the list is AiMOS, at the Rensselaer Polytechnic Institute Center for Computational Innovations (CCI). It made the list in the 25th position with 8.0 petaflops. The IBM-built system, like Summit and Sierra, is powered by Power9 CPUs and NVIDIA V100 GPUs.
In closing, ZDNet's Steven J. Vaughan-Nichols writes: "Regardless of the hardware, all 500 of the world's fastest supercomputers have one thing in common: They all run Linux."
Oracle

Oracle's New Supercomputer Has 1,060 Raspberry Pis (tomshardware.com) 71

An anonymous reader quotes Tom's Hardware: One Raspberry Pi can make a nice web server, but what happens if you put more than 1,000 of them together? At Oracle's OpenWorld convention on Monday, the company showed off a Raspberry Pi Supercomputer that combines 1,060 Raspberry Pis into one powerful cluster.

According to ServeTheHome, which first reported the story, the supercomputer features scores of racks with 21 Raspberry Pi 3 B+ boards each. To make everything run well together, the system runs on Oracle Autonomous Linux... Every unit connects to a single rebranded Supermicro 1U Xeon server, which functions as a central storage server for the whole supercomputer. The Oracle team also created custom, 3D printed brackets to help support all the Pis and connecting components...

ServeTheHome asked Oracle why it chose to create a cluster of Raspberry Pis instead of using a virtualized Arm server and one company rep said simply that "...a big cluster is cool."

Supercomputing

University of Texas Announces Fastest Academic Supercomputer In the World (utexas.edu) 31

On Tuesday the University of Texas at Austin launched the fastest supercomputer at any academic facility in the world.

The computer -- named "Frontera" -- is also the fifth most-powerful supercomputer on earth. Slashdot reader aarondubrow quotes their announcement: The Texas Advanced Computing Center (TACC) at The University of Texas is also home to Stampede2, the second fastest supercomputer at any American university. The launch of Frontera solidifies UT Austin among the world's academic leaders in this realm...

Joined by representatives from the National Science Foundation (NSF) -- which funded the system with a $60 million award -- UT Austin, and technology partners Dell Technologies, Intel, Mellanox Technologies, DataDirect Networks, NVIDIA, IBM, CoolIT and Green Revolution Cooling, TACC inaugurated a new era of academic supercomputing with a resource that will help the nation's top researchers explore science at the largest scale and make the next generation of discoveries.

"Scientific challenges demand computing and data at the largest and most complex scales possible. That's what Frontera is all about," said Jim Kurose, assistant director for Computer and Information Science and Engineering at NSF. "Frontera's leadership-class computing capability will support the most computationally challenging science applications that U.S. scientists are working on today."

Frontera has been supporting science applications since June and has already enabled more than three dozen teams to conduct research on a range of topics from black hole physics to climate modeling to drug design, employing simulation, data analysis, and artificial intelligence at a scale not previously possible.

Here are more technical details from the announcement about just how fast this supercomputer really is.
AI

Microsoft Invests $1 Billion in OpenAI To Develop AI Technologies on Azure (venturebeat.com) 28

Microsoft today announced that it would invest $1 billion in OpenAI, the San Francisco-based AI research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman. From a report: In a blog post, Brockman said the investment will support the development of artificial general intelligence (AGI) -- AI with the capacity to learn any intellectual task that a human can -- with "widely distributed" economic benefits. To this end, OpenAI intends to partner with Microsoft to jointly develop new AI technologies for the Seattle company's Azure cloud platform and will enter into an exclusivity agreement with Microsoft to "further extend" large-scale AI capabilities that "deliver on the promise of AGI." Additionally, OpenAI will license some of its technologies to Microsoft, which will commercialize them and sell them to as-yet-unnamed partners, and OpenAI will train and run AI models on Azure as it works to develop new supercomputing hardware while "adhering to principles on ethics and trust."

According to Brockman, the partnership was motivated in part by OpenAI's continued pursuit of enormous computational power. Its researchers recently released analysis showing that from 2012 to 2018 the amount of compute used in the largest AI training runs grew by more than 300,000 times, with a 3.5-month doubling time, far exceeding the pace of Moore's Law. Perhaps exemplifying the trend is OpenAI's OpenAI Five, an AI system that squared off against professional players of the video game Dota 2 last summer. On Google's Cloud Platform -- in the course of training -- it played 180 years' worth of games every day on 256 Nvidia Tesla P100 graphics cards and 128,000 processor cores, up from 60,000 cores just a few years ago.
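Those growth figures imply a concrete timeline. A quick sanity check (our arithmetic, not OpenAI's analysis code) shows that 300,000x growth at a 3.5-month doubling time works out to a bit over five years, consistent with the 2012-2018 window:

```python
import math

# How many months of 3.5-month doublings does a 300,000x growth
# in training compute imply? (Our back-of-the-envelope check.)
growth = 300_000
doubling_months = 3.5

doublings = math.log2(growth)         # ~18.2 doublings needed
months = doublings * doubling_months  # ~63.7 months, roughly 5.3 years
print(round(months, 1))
```

By comparison, Moore's Law's roughly two-year doubling would cover only about three doublings in the same window.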

Earth

How The Advance Weather Forecast Got Good (npr.org) 80

NPR notes today's "supercomputer-driven" weather modelling can crunch huge amounts of data to accurately forecast the weather a week in advance -- pointing out that "a six-day weather forecast today is as good as a two-day forecast was in the 1970s."

Here are some highlights from their interview with Andrew Blum, author of The Weather Machine: A Journey Inside the Forecast: One of the things that's happened as the scale in the system has shifted to the computers is that it's no longer bound by past experience. It's no longer, the meteorologists say, "Well, this happened in the past, we can expect it to happen again." We're more ready for these new extremes because we're not held down by past expectations...

The models are really a kind of ongoing concern. ... They run ahead in time, and then every six hours or every 12 hours, they compare their own forecast with the latest observations. And so the models in reality are ... sort of dancing together, where the model makes a forecast and it's corrected slightly by the observations that are coming in...

It's definitely run by individual nations -- but individual nations with their systems tied together... It's a 150-year-old system of governments collaborating with each other as a global public good... The positive example from last month was with Cyclone Fani in India. And this was a very similar storm to one 20 years ago, in which tens of thousands of people died. This time around, the forecast came far enough in advance and with enough confidence that the Indian government was able to move a million people out of the way.

China

China Has Almost Half of The World's Supercomputers, Explores RISC-V and ARM (techtarget.com) 90

Slashdot reader dcblogs quotes Tech Target: Ten years ago, China had 21 systems on the Top500 list of the world's largest supercomputing systems. It now has 219, according to the biannual listing, which was updated just this week. At its current pace of development, China may have half of the supercomputing systems on the Top500 list by 2021.... U.S. supercomputers account for 116 systems on the latest Top500 list.

Despite being well behind China in total system count, the U.S. leads in overall performance, as measured by the High Performance Linpack (HPL) benchmark. The HPL benchmark is used to solve linear equations. The U.S. has about 38% of the aggregate Top500 list performance. China is in second, at nearly 30% of the performance total. But this performance metric has flip-flopped between China and the U.S., because it's heavily weighted by the largest systems. The U.S. owns the top two spots on the latest Top500 list, thanks to two IBM supercomputers at U.S. national laboratories. These systems, Summit and Sierra, alone, represent 15.6% of the HPL performance measure.
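The report's percentages can be cross-checked against each other. A back-of-the-envelope calculation (ours, using only the figures quoted above) shows what aggregate list performance the Summit and Sierra share implies:

```python
# Summit (148.6 PF) and Sierra (94.6 PF) together are said to represent
# 15.6% of the list's aggregate HPL performance; the implied total follows.
summit_pf = 148.6
sierra_pf = 94.6
share = 0.156

implied_total_pf = (summit_pf + sierra_pf) / share  # ~1,559 petaflops
print(round(implied_total_pf))
```

That puts the whole list's combined HPL throughput at roughly 1.56 exaflops, which is why just two machines can swing the U.S.-China performance totals.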

Nathan Brookwood, principal analyst at Insight 64, says China is concerned the U.S. may limit its x86 chip imports, and while China may look to ARM, they're also investigating the RISC-V processor architecture.

Paresh Kharya, director of product marketing at Nvidia, tells Tech Target "We expect x86 CPUs to remain dominant in the short term. But there's growing interest in ARM for supercomputing, as evidenced by projects in the U.S., Europe and Japan. Supercomputing centers want choice in CPU architecture."
Supercomputing

Nvidia Will Support ARM Hardware For High-Performance Computing (venturebeat.com) 24

An anonymous reader quotes a report from VentureBeat: At the International Supercomputing Conference (ISC) in Frankfurt, Germany this week, Santa Clara-based chipmaker Nvidia announced that it will support processors architected by British semiconductor design company Arm. Nvidia anticipates that the partnership will pave the way for supercomputers capable of "exascale" performance -- in other words, of completing at least a quintillion floating point computations ("flops") per second, where a flop corresponds to a single floating point operation, such as multiplying two 15-digit numbers. Nvidia says that by 2020 it will contribute its full stack of AI and high-performance computing (HPC) software to the Arm ecosystem, which by Nvidia's estimation now accelerates over 600 HPC applications and machine learning frameworks. Among other resources and services, it will make available CUDA-X libraries, graphics-accelerated frameworks, software development kits, PGI compilers with OpenACC support, and profilers. Nvidia founder and CEO Jensen Huang pointed out in a statement that, thanks to this commitment, Nvidia will soon accelerate all major processor architectures: x86, IBM's Power, and Arm. "As traditional compute scaling has ended, the world's supercomputers have become power constrained," said Huang. "Our support for Arm, which designs the world's most energy-efficient CPU architecture, is a giant step forward that builds on initiatives Nvidia is driving to provide the HPC industry a more power-efficient future."
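To put "exascale" in perspective, a quick arithmetic sketch (ours, not Nvidia's) compares the target against Summit's HPL result quoted earlier in this digest:

```python
# One exaflop = 10^18 floating point operations per second.
EXAFLOP = 1e18
summit_hpl = 148.6e15         # Summit's HPL result, in flops/s

shortfall = EXAFLOP / summit_hpl
print(round(shortfall, 2))    # ~6.73x below sustained exascale
```

Even the current Top500 leader would need to run nearly seven times faster to sustain one exaflop, which is why power efficiency dominates exascale designs.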
Businesses

Hewlett Packard Enterprise To Acquire Supercomputer Maker Cray for $1.3 Billion (anandtech.com) 101

Hewlett Packard Enterprise will be buying the supercomputer maker Cray for roughly $1.3 billion, the companies said this morning. HPE intends to use Cray's knowledge and technology to bolster its own supercomputing and high-performance computing offerings; when the deal closes, HPE will become the world leader for supercomputing technology. From a report: Cray of course needs no introduction. The current leader in the supercomputing field and founder of supercomputing as we know it, Cray has been a part of the supercomputing landscape since the 1970s. Starting at the time with fully custom systems, in more recent years Cray has morphed into an integrator and scale-out specialist, combining processors from the likes of Intel, AMD, and NVIDIA into supercomputers, and applying their own software, I/O, and interconnect technologies. The timing of the acquisition announcement closely follows other major news from Cray: the company just landed a $600 million US Department of Energy contract to supply the Frontier supercomputer to Oak Ridge National Laboratory in 2021. Frontier is one of two exascale supercomputers Cray is involved in -- the other being a subcontractor for the 2021 Aurora system -- and in fact Cray is involved in the only two exascale systems ordered by the US Government thus far. So in both a historical and modern context, Cray was and is one of the biggest players in the supercomputing market.
Supercomputing

'Pi VizuWall' Is a Beowulf Cluster Built With Raspberry Pi's (raspberrypi.org) 68

Why would someone build their own Beowulf cluster -- a high-performance parallel computing prototype -- using 12 Raspberry Pi boards? The cluster uses the standard Beowulf architecture found in about 88% of the world's largest parallel computing systems, with an MPI (Message Passing Interface) layer that distributes the load across all the nodes.

Matt Trask, a long-time computer engineer now completing his undergraduate degree at Florida Atlantic University, explains how it grew out of his work on "virtual mainframes": In the world of parallel supercomputers (branded 'high-performance computing', or HPC), system manufacturers are motivated to sell their HPC products to industry, but industry has pushed back due to what they call the "Ninja Gap". MPI programming is hard. It is usually not learned until the programmer is in grad school at the earliest, and given that it takes a couple of years to achieve mastery of any particular discipline, most of the proficient MPI programmers are PhDs. And this is the Ninja Gap -- industry understands that the academic system cannot and will not be able to generate enough 'ninjas' to meet the needs of industry if industry were to adopt HPC technology.

As part of my research into parallel computing systems, I have studied the process of learning to program with MPI and have found that almost all current practitioners are self-taught, coming from disciplines other than computer science. Actual undergraduate CS programs rarely offer MPI programming. Thus my motivation for building a low-cost cluster system with Raspberry Pis, in order to drive down the entry-level costs. This parallel computing system, with a cost of under $1000, could be deployed at any college or community college rather than just at elite research institutions, as is done [for parallel computing systems] today.

The system is entirely open source, using only standard Raspberry Pi 3B+ boards and Raspbian Linux. The version of MPI that is used is called MPICH, another open-source technology that is readily available.
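The scatter-compute-reduce pattern that MPI distributes across the cluster's nodes can be modeled in a few lines. This is our own toy sketch, not the project's code: real nodes would call MPI_Scatter and MPI_Reduce via MPICH, while here the "ranks" are simulated sequentially to keep the example stdlib-only.

```python
# Toy model of the MPI work-distribution pattern: rank 0 scatters slices
# of the work, every rank computes a partial result locally, and a
# reduce combines the partials. (Ours; not MPICH or Pi VizuWall code.)

def scatter(work, nranks):
    """Split the work list into one roughly equal slice per rank."""
    return [work[r::nranks] for r in range(nranks)]

def local_compute(work_slice):
    """Each node's local job: here, a partial sum of squares."""
    return sum(x * x for x in work_slice)

def reduce_sum(partials):
    """Combine partial results, like MPI_Reduce with MPI_SUM."""
    return sum(partials)

work = list(range(1000))
partials = [local_compute(s) for s in scatter(work, nranks=12)]  # 12 Pis
print(reduce_sum(partials) == sum(x * x for x in work))  # True
```

The point of MPI is that each `local_compute` runs on a different physical node, with the library handling the message passing that this sketch elides.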

But there's an added visual flourish, explains long-time Slashdot reader iamacat. "To visualize computing, each node is equipped with a servo motor to position itself according to its current load -- lying flat when fully idle, standing up 90 degrees when fully utilized."

Its data comes from the /proc filesystem, and the necessary hinges for this prototype were all generated with a 3D printer. "The first lesson is to use CNC'd aluminum for the motor housings instead of 3D-printed plastic," writes Trask. "We've seen some minor distortion of the printed plastic from the heat generated in the servos."
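The load-to-angle mapping described above is simple to sketch. The exact mapping Pi VizuWall uses isn't given here, so this is our assumption: a linear scale from 0 degrees (flat, idle) to 90 degrees (upright, fully utilized), with the 1-minute load average read from /proc/loadavg as the article says.

```python
import os

# Assumed linear mapping from node load to servo angle (our sketch):
# lying flat (0 deg) at idle, standing upright (90 deg) at full load.

def load_to_angle(load_1min: float, ncpus: int) -> float:
    """Map a 1-minute load average onto a 0-90 degree servo angle."""
    utilization = min(load_1min / ncpus, 1.0)  # clamp: load can exceed ncpus
    return utilization * 90.0

def current_angle() -> float:
    """Read this node's own load from /proc, as the cluster nodes do."""
    with open("/proc/loadavg") as f:
        load = float(f.read().split()[0])  # first field: 1-minute average
    return load_to_angle(load, os.cpu_count() or 1)

print(load_to_angle(0.0, 4), load_to_angle(4.0, 4))  # 0.0 90.0
```

Clamping matters because Linux load averages count runnable processes and can exceed the CPU count; without it the servo would be driven past its 90-degree limit.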
AI

The World's Fastest Supercomputer Breaks an AI Record (wired.com) 66

Along America's west coast, the world's most valuable companies are racing to make artificial intelligence smarter. Google and Facebook have boasted of experiments using billions of photos and thousands of high-powered processors. But late last year, a project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government. From a report: The record-setting project involved the world's most powerful supercomputer, Summit, at Oak Ridge National Lab. The machine captured that crown in June last year, reclaiming the title for the US after five years of China topping the list. As part of a climate research project, the giant computer booted up a machine-learning experiment that ran faster than any before. Summit, which occupies an area equivalent to two tennis courts, used more than 27,000 powerful graphics processors in the project. It tapped their power to train deep-learning algorithms, the technology driving AI's frontier, chewing through the exercise at a rate of a billion billion operations per second, a pace known in supercomputing circles as an exaflop.

"Deep learning has never been scaled to such levels of performance before," says Prabhat, who leads a research group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Lab. His group collaborated with researchers at Summit's home base, Oak Ridge National Lab. Fittingly, the world's most powerful computer's AI workout was focused on one of the world's largest problems: climate change. Tech companies train algorithms to recognize faces or road signs; the government scientists trained theirs to detect weather patterns like cyclones in the copious output from climate simulations that spool out a century's worth of three-hour forecasts for Earth's atmosphere.

Education

A Supercomputer In a 19th Century Church Is 'World's Most Beautiful Data Center' (vice.com) 62

"Motherboard spoke to the Barcelona Supercomputing Center about how it outfitted a deconsecrated 19th century chapel to host the MareNostrum 4 -- the 25th most powerful supercomputer in the world," writes Slashdot reader dmoberhaus. From the report: Heralded as the "most beautiful data center in the world," the MareNostrum supercomputer came online in 2005, but was originally hosted in a different building at the university. Meaning "our sea" in Latin, the original MareNostrum was capable of performing 42.35 teraflops -- 42.35 trillion operations per second -- making it one of the most powerful supercomputers in Europe at the time. Yet the MareNostrum rightly became known for its aesthetics as much as its computing power. According to Gemma Maspoch, head of communications for Barcelona Supercomputing Center, which oversees the MareNostrum facility, the decision to place the computer in a giant glass box inside a chapel was ultimately for practical reasons.

"We were in need of hundreds of square meters without columns and the capacity to support 44.5 tons of weight," Maspoch told me in an email. "At the time there was not much available space at the university and the only room that satisfied our requirements was the Torre Girona chapel. We did not doubt it for a moment and we installed a supercomputer in it." According to Maspoch, the chapel required relatively few modifications to host the supercomputer, such as reinforcing the soil around the church so that it would hold the computer's weight and designing a glass box that would house the computer and help cool it.
The supercomputer has been beefed up over the years. Most recently, the fourth iteration came online in 2017 "with a peak computing capacity of 11 thousand trillion operations per second (11.15 petaflops)," reports Motherboard. "MareNostrum 4 is spread over 48 server racks comprising a total of 3,456 nodes. A node consists of two Intel chips, each of which has 24 processors."
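The MareNostrum 4 figures quoted above fit together arithmetically; a quick check (ours, derived only from the numbers in the report):

```python
# Cross-checking the quoted MareNostrum 4 specs against each other.
nodes = 3456
racks = 48
chips_per_node = 2
cores_per_chip = 24

cores_per_node = chips_per_node * cores_per_chip  # 48 cores per node
total_cores = nodes * cores_per_node              # 165,888 cores in all
nodes_per_rack = nodes // racks                   # 72 nodes per rack
print(total_cores, nodes_per_rack)
```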
Cloud

Is Linux Taking Over The World? (networkworld.com) 243

"2019 just might be the Year of Linux -- the year in which Linux is fully recognized as the powerhouse it has become," writes Network World's "Unix dweeb." The fact is that most people today are using Linux without ever knowing it -- whether on their phones, online when using Google, Facebook, Twitter, GPS devices, and maybe even in their cars, or when using cloud storage for personal or business use. While the presence of Linux on all of these systems may go largely unnoticed by consumers, the role that Linux plays in this market is a sign of how critical it has become. Most IoT and embedded devices -- those small, limited functionality devices that require good security and a small footprint and fill so many niches in our technology-driven lives -- run some variety of Linux, and this isn't likely to change. Instead, we'll just be seeing more devices and a continued reliance on open source to drive them.

According to the Cloud Industry Forum, for the first time, businesses are spending more on cloud than on internal infrastructure. The cloud is taking over the role that data centers used to play, and it's largely Linux that's making the transition so advantageous. Even on Microsoft's Azure, the most popular operating system is Linux. In its first Voice of the Enterprise survey, 451 Research predicted that 60 percent of nearly 1,000 IT leaders surveyed plan to run the majority of their IT off premises by 2019. That equates to a lot of IT efforts relying on Linux. Gartner states that 80 percent of internally developed software is now either cloud-enabled or cloud-native.

The article also cites Linux's use in AI, data lakes, and in the Sierra supercomputer that monitors America's nuclear stockpile, concluding that "In its domination of IoT, cloud technology, supercomputing and AI, Linux is heading into 2019 with a lot of momentum."

And there's even a long list of upcoming Linux conferences...
Intel

Intel Cascade Lake-AP Xeon CPUs Embrace the Multi-Chip Module (techreport.com) 72

Ahead of the annual Supercomputing 2018 conference next week, Intel today announced part of its upcoming Cascade Lake strategy. From a report: The company teased plans for a new Xeon platform called Cascade Lake Advanced Performance, or Cascade Lake-AP, this morning ahead of the Supercomputing 2018 conference. This next-gen platform doubles the cores per socket from an Intel system by joining a number of Cascade Lake Xeon dies together on a single package with the blue team's Ultra Path Interconnect, or UPI. Intel will allow Cascade Lake-AP servers to employ up to two-socket (2S) topologies, for as many as 96 cores per server.

Intel chose to share two competitive performance numbers alongside the disclosure of Cascade Lake-AP. One of these is that a top-end Cascade Lake-AP system can put up 3.4x the Linpack throughput of a dual-socket AMD Epyc 7601 platform. This benchmark hits AMD where it hurts. The AVX-512 instruction set gives Intel CPUs a major leg up on the competition in high-performance computing applications where floating-point throughput is paramount. Intel used its own compilers to create binaries for this comparison, and that decision could create favorable Linpack performance results versus AMD CPUs, as well.

Operating Systems

Finally, It's the Year of the Linux... Supercomputer (zdnet.com) 171

Beeftopia writes: From ZDNet: "The latest TOP500 Supercomputer list is out. What's not surprising is that Linux runs on every last one of the world's fastest supercomputers. Linux has dominated supercomputing for years. But Linux only took over supercomputing lock, stock, and barrel in November 2017. That was the first time all of the TOP500 machines were running Linux. Before that, IBM AIX, a Unix variant, was hanging on for dear life low on the list."

An interesting architectural note: "GPUs, not CPUs, now power most of supercomputers' speed."

IT

HPE Announces World's Largest ARM-based Supercomputer (zdnet.com) 57

The race to exascale speed is getting a little more interesting with the introduction of HPE's Astra -- what will be the world's largest ARM-based supercomputer. From a report: HPE is building Astra for Sandia National Laboratories and the US Department of Energy's National Nuclear Security Administration (NNSA). The NNSA will use the supercomputer to run advanced modeling and simulation workloads for things like national security, energy, science and health care.

HPE is involved in building other ARM-based supercomputing installations, but when Astra is delivered later this year, "it will hands down be the world's largest ARM-based supercomputer ever built," Mike Vildibill, VP of Advanced Technologies Group at HPE, told ZDNet. The HPC system comprises 5,184 ARM-based processors -- the Thunder X2 processor, built by Cavium. Each processor has 28 cores and runs at 2 GHz. Astra will deliver over 2.3 theoretical peak petaflops of performance, which should put it well within the top 100 supercomputers ever built -- a milestone for an ARM-based machine, Vildibill said.

Cloud

Nvidia Debuts Cloud Server Platform To Unify AI and High-Performance Computing (siliconangle.com) 15

Hoping to maintain the high ground in AI and high-performance computing, Nvidia late Tuesday debuted a new computing architecture that it claims will unify both fast-growing areas of the industry. From a report: The announcement of the HGX-2 cloud-server platform, made by Nvidia Chief Executive Jensen Huang at its GPU Technology Conference in Taipei, Taiwan, is aimed at many new applications that combine AI and HPC. "We believe the future requires a unified platform for AI and high-performance computing," Paresh Kharya, product marketing manager for Nvidia's accelerated-computing group, said during a press call Tuesday.

Others agree. "I think that AI will revolutionize HPC," Karl Freund, a senior analyst at Moor Insights & Strategy, told SiliconANGLE. "I suspect many supercomputing centers will deploy HGX2 as it can add dramatic computational capacity for both HPC and AI." More specifically, the new architecture enables applications involving scientific computing and simulations, such as weather forecasting, as well as both training and running of AI models such as deep learning neural networks, for jobs such as image and speech recognition and navigation for self-driving cars.

Network

On This Day 25 Years Ago, the Web Became Public Domain (popularmechanics.com) 87

On April 30, 1993, CERN -- the European Organization for Nuclear Research -- announced that it was putting a piece of software developed by one of its researchers, Tim Berners-Lee, into the public domain. That software was a "global computer networked information system" called the World Wide Web, and CERN's decision meant that anyone, anywhere, could run a website and do anything with it. From a report: While the proto-internet dates back to the 1960s, the World Wide Web as we know it had been invented four years earlier in 1989 by CERN employee Tim Berners-Lee. The internet at that point was growing in popularity among academic circles but still had limited mainstream utility. Scientists Robert Kahn and Vinton Cerf had developed Transmission Control Protocol and Internet Protocol (TCP/IP), which allowed for easier transfer of information. But there was the fundamental problem of how to organize all that information.

In the late 80s, Berners-Lee suggested a web-like system of management, tied together by a series of what he called hyperlinks. In a proposal, Berners-Lee asked CERN management to "imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document you could skip to them with a click of the mouse."

Four years later, the project was still growing. In January 1993, the first major web browser, known as MOSAIC, was released by the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign. While there was a free version of MOSAIC, for-profit software companies purchased nonexclusive licenses to sell and support it. Licensing MOSAIC at the time cost $100,000 plus $5 each for any number of copies.

The Internet

Mosaic, the First HTML Browser That Could Display Images Alongside Text, Turns 25 (wired.com) 132

NCSA Mosaic 1.0, the first web browser to achieve popularity among the general public, was released on April 22, 1993. It was developed by a team of students at the University of Illinois' National Center for Supercomputing Applications (NCSA), and had the ability to display text and images inline, meaning you could put pictures and text on the same page together, in the same window. Wired reports: It was a radical step forward for the web, which was, at that point, a rather dull experience. It took the boring "document" layout of your standard web page and transformed it into something much more visually exciting, like a magazine. And, wow, it was easy. If you wanted to go somewhere, you just clicked. Links were blue and underlined, easy to pick out. You could follow your own virtual trail of breadcrumbs backwards by clicking the big button up there in the corner. At the time of its release, NCSA Mosaic was free software, but it was available only on Unix. That made it common at universities and institutions, but not on Windows desktops in people's homes.

The NCSA team put out Windows and Mac versions in late 1993. They were also released under a noncommercial software license, meaning people at home could download it for free. The installer was very simple, making it easy for just about anyone to get up and running on the web. It was then that the excitement really began to spread. Mosaic made the web come to life with color and images, something that, for many people, finally provided the online experience they were missing. It made the web a pleasure to use.
