How artificial intelligence is bringing new superpowers to supercomputers

The world’s fastest and most powerful supercomputers are capable of many things, but increasingly the world of high-performance computing (HPC) relies on artificial intelligence (AI).

At the International Supercomputing Conference (ISC) 2022, which took place from May 29 to June 2 in Hamburg, Germany, vendors announced new hardware and software systems for the world’s fastest supercomputers.

Among the top announcements, AMD revealed that its silicon now powers the most powerful supercomputer ever built, the Frontier system, which was built by Hewlett Packard Enterprise and is deployed at Oak Ridge National Laboratory in Tennessee. Not to be outdone, Intel announced silicon efforts that will power future HPC systems, including the Sapphire Rapids CPU and upcoming Rialto Bridge GPU technologies.

Nvidia used ISC 2022 as the venue to announce that its Grace Hopper superchip will power the Venado supercomputer at Los Alamos National Laboratory. Nvidia also detailed multiple case studies on how its HPC innovations are being used to enable AI for nuclear fusion and brain health research. HPC isn’t just about the world’s fastest supercomputers, either. Linux vendor Red Hat announced that it is working with the U.S. Department of Energy to help bridge the gap between cloud environments and HPC.

The intersection of HPC and artificial intelligence/machine learning (AI/ML) is an area the ISC conference is likely to continue to highlight in the coming years.

“It is clear that AI/ML will continue to play a greater role in HPC, but not all AI/ML is HPC or even HPC relevant,” John Shalf, program chair for ISC, told VentureBeat. “We really want to dig deeper into the AI/ML applications and implementations that have a direct impact on scientific and engineering applications in both industry and academia.”

Intel sees increasing role for HPC and AI workloads

For Intel, the intersection of HPC and AI is relatively clear.

Anil Nanduri, vice president for strategy and market initiatives at Intel’s Super Compute Group, explained to VentureBeat that HPC workloads are uniquely demanding, require powerful clusters of computing power, and are typically used for scientific computing. He added that most of the Top500 supercomputers are prime examples of high-value applications, where the scientific community researches new drugs and materials, and runs climate models, manufacturing simulations, complex fluid dynamics models and more.

“Like these traditional HPC workloads, AI/ML workloads are becoming more complex with higher compute requirements,” said Nanduri. “There are large-scale AI models running on data center infrastructures that require comparable compute performance to some of the leading HPC clusters.”

Nanduri sees continued demand and potential for HPC-powered AI as it can help improve performance and increase productivity.

“As AI workloads scale with massive data sets requiring HPC-level analytics, we will see more AI in HPC and more HPC computing requirements in AI,” Nanduri added.

How AI makes HPC more powerful

One of the big announcements at last week’s ISC was the unveiling of the Frontier system, which has been crowned the world’s fastest supercomputer.

According to Yan Fisher, global evangelist for emerging technologies at Red Hat, the adoption of AI/ML will take the computing power of supercomputers to a whole new level. Take FLOPS (floating point operations per second), for example, the primary benchmark used in the Top500 supercomputer list. Fisher explained that FLOPS is designed to express a supercomputer’s ability to perform floating point calculations at very high precision. These complex calculations take time and a lot of computing power to complete.
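As a rough illustration of how a FLOPS figure is derived (not how the actual HPL benchmark is implemented), the sketch below times a dense matrix multiply in NumPy and divides the known operation count by the elapsed time. The matrix size is arbitrary and the result reflects only one node and one library.

```python
# Rough FLOPS estimate: time an n x n matrix multiply (about 2*n^3 operations)
# and divide by elapsed time. Illustrative only; real benchmarks are far more careful.
import time
import numpy as np

n = 2048
A = np.random.rand(n, n)
B = np.random.rand(n, n)

start = time.perf_counter()
C = A @ B
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} GFLOPS (double precision, single node)")
```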

“Using AI, on the other hand, helps to get results much faster by performing calculations with lower precision and then evaluating the outcome to fine-tune the answer with a high degree of accuracy,” Fisher told VentureBeat. “The Frontier system, using the HPL-AI benchmark, has demonstrated the ability to perform more than six times more AI-focused computations per second than traditional floating point computations, greatly expanding that system’s computational capabilities.”
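The solve-in-low-precision-then-refine idea Fisher describes can be sketched in a few lines of NumPy. This is not the HPL-AI benchmark itself, just a minimal illustration of mixed-precision iterative refinement on an assumed, well-conditioned test matrix; production codes reuse the low-precision factorization rather than re-solving at each step.

```python
# Minimal sketch of mixed-precision iterative refinement: solve fast in low
# precision, then refine the answer in double precision until the residual shrinks.
import numpy as np

rng = np.random.default_rng(0)
n = 512
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix (illustrative)
b = rng.standard_normal(n)

# Step 1: low-precision solve (float32 stands in for FP16/TF32 hardware paths)
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

# Step 2: refine in double precision
for _ in range(5):
    r = b - A @ x                                  # residual computed in FP64
    d = np.linalg.solve(A.astype(np.float32), r.astype(np.float32))
    x += d.astype(np.float64)

print("final residual norm:", np.linalg.norm(b - A @ x))
```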

From HPC supercomputers to enterprise-level improvements in AI

HPC powers large systems, but what is the impact of AI innovations for supercomputers on business users? Fisher noted that companies are adopting AI/ML as they undergo a digital transformation.

What’s more interesting, he said, is that once enterprises have figured out how to deploy and benefit from AI/ML, the demand for AI/ML infrastructure begins to rise. That demand drives the next stage of adoption: the ability to scale.

“This is where HPC has traditionally been ahead of the pack, breaking big problems into smaller chunks and running them in parallel, or just in a more optimal way,” Fisher said.
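A toy Python sketch of the divide-and-conquer pattern Fisher describes: split one large job into chunks and farm them out to worker processes. The workload here (summing squares) is purely illustrative; real HPC codes typically use MPI or similar frameworks across many nodes.

```python
# Split a large workload into chunks and process them in parallel across worker processes.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker handles one chunk of the overall problem.
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    total_items = 10_000_000
    chunk_size = 1_000_000
    chunks = [range(s, min(s + chunk_size, total_items))
              for s in range(0, total_items, chunk_size)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```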

On the other hand, Fisher noted that in the HPC space the use of containers is less common, and where they are present, they are not the traditional application containers seen in enterprise and cloud deployments. That’s one of the reasons Red Hat is partnering with the Department of Energy national laboratories, whose IT infrastructure teams want to better support their scientists with modern infrastructure tools.

At Intel, Nanduri said he sees growing demand for compute acceleration across general purpose computing, HPC and AI workloads. He noted that Intel plans to deliver a diverse portfolio of heterogeneous architectures coupled with software and systems.

“These architectures, software and systems will allow us to improve performance by an order of magnitude while reducing power requirements for HPC and general AI/ML workloads,” said Nanduri. “The beauty of the Cambrian explosion in AI is that any innovations driven by the need for scalable computing will enable enterprises to leapfrog without having to invest in lengthy research cycles.”
