
Wickedly Fast Frontier Supercomputer Officially Heralds the Next Computer Age

Today Oak Ridge National Laboratory’s Frontier supercomputer was crowned fastest in the world on the latest semiannual Top500 list. Frontier more than doubled the speed of the previous title holder, Japan’s Fugaku supercomputer, and is the first to officially clock speeds of over a quintillion calculations per second – a milestone computing has pursued for 14 years.

That’s a big number. So before we go any further, it’s worth speaking in more human terms.

Imagine giving all 7.9 billion people on Earth a pencil and a list of simple addition or multiplication problems. Now ask everyone to solve one problem per second for four and a half years. By pooling the math skills of the world’s population for nearly half a decade, you have now solved more than a quintillion problems.

Frontier can do the same job in a second and keep it going indefinitely. A thousand years of computation by everyone on Earth would take Frontier just under four minutes.
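For readers who want to check that analogy, here is a rough back-of-the-envelope version in Python. The population and speed figures are the ones cited in this article; everything else is simple arithmetic:

```python
# Back-of-the-envelope check of the analogy above, using the article's figures.
people = 7.9e9                       # roughly the world's population
seconds = 4.5 * 365.25 * 24 * 3600   # seconds in four and a half years
problems = people * seconds          # one problem per person per second

print(f"{problems:.2e} problems")    # ~1.1e18, just over a quintillion

# Frontier's measured speed is ~1.102 exaflops (1.102e18 operations/second),
# so the same pile of problems takes it about a second.
frontier_flops = 1.102e18
print(f"{problems / frontier_flops:.1f} seconds on Frontier")
```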

This blistering achievement heralds a new era known as exascale computing.

The Age of Exascale

The number of floating point operations, or simple math problems, a computer solves per second is measured in FLOP/s, popularly shortened to “flops.” Progress is tracked in multiples of a thousand: a thousand flops equals a kiloflop, a million flops equals a megaflop, and so on.

The ASCI Red supercomputer was the first to record speeds of a trillion flops, or a teraflop, in 1997. (Notably, an Xbox Series X game console now packs 12 teraflops.) Roadrunner first broke the petaflop barrier, a quadrillion flops, in 2008. Since then, the fastest computers have been measured in petaflops. Frontier is the first to officially hit speeds over an exaflop – 1.102 exaflops, to be exact – or roughly 1,000 times faster than Roadrunner.
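The prefix ladder and milestones above fit in a few lines; this sketch just restates them as numbers (the machine figures are approximate and taken from this article):

```python
# The flops prefix ladder, each step 1,000 times the previous one.
prefixes = {
    "kiloflop":  1e3,
    "megaflop":  1e6,
    "gigaflop":  1e9,
    "teraflop":  1e12,   # first reached by ASCI Red in 1997
    "petaflop":  1e15,   # first reached by Roadrunner in 2008
    "exaflop":   1e18,   # first officially reached by Frontier in 2022
    "zettaflop": 1e21,   # the next milestone, discussed later in the article
}

for name, flops in prefixes.items():
    print(f"1 {name:9} = {flops:.0e} floating point operations per second")

# Frontier (~1.102 exaflops) versus the first petaflop machine: roughly a
# 1,000-fold speedup over 14 years.
print(f"{1.102e18 / 1e15:,.0f}x")
```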

It’s true that today’s supercomputers are much faster than older machines, but they still take up entire rooms, with rows of cabinets full of wires and chips. Frontier, in particular, is a liquid-cooled system from HPE Cray with 8.73 million AMD processing cores. Not only is it the fastest in the world, but it’s also the second most efficient – surpassed only by a test system consisting of one of its cabinets – with a rating of 52.23 gigaflops/watt.
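Those two figures, speed and efficiency, together imply the machine’s rough power draw. A quick, hedged calculation using only the numbers quoted above:

```python
# Rough power draw implied by the two figures quoted above.
speed_flops = 1.102e18        # ~1.102 exaflops
efficiency = 52.23e9          # 52.23 gigaflops per watt

power_watts = speed_flops / efficiency
print(f"{power_watts / 1e6:.1f} megawatts")   # roughly 21 MW
```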

So, What Are Supercomputers For?

Most supercomputers are funded, built, and managed by government agencies. They are used by scientists to model physical systems, such as the climate or the structure of the universe, as well as by the military for nuclear weapons research.

Supercomputers are now also tailor-made to run the latest artificial intelligence algorithms. Indeed, a few years ago, Top500 added a new lower-precision benchmark to measure supercomputing speed on AI applications. By that mark, Fugaku surpassed an exaflop back in 2020. The Fugaku system set the most recent machine learning record at 2 exaflops. Frontier broke that record with AI speeds of 6.86 exaflops.
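The lower-precision benchmark works because AI workloads tolerate less exact arithmetic, and hardware can push far more low-precision operations per second. Here is a toy NumPy illustration of the accuracy side of that trade-off; it is not the actual Top500 benchmark, and NumPy itself won’t show the hardware speedup:

```python
# Toy illustration of the low- vs. full-precision trade-off (not the actual
# Top500 AI benchmark): the 16-bit result is close to the 64-bit one, and
# specialized hardware can execute low-precision math far faster.
import numpy as np

n = 512
a = np.random.rand(n, n)
b = np.random.rand(n, n)

exact = a @ b                                                   # 64-bit
approx = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float64)

print(f"worst relative error: {np.max(np.abs(exact - approx) / exact):.2e}")
```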

As very large machine learning algorithms have emerged in recent years, private companies have started building their own machines alongside governments. Microsoft and OpenAI made headlines in 2020 with a machine they said was the fifth fastest in the world. In January, Meta said its upcoming RSC supercomputer would be the world’s fastest for AI at 5 exaflops. (It looks like they’ll need a few more chips to match Frontier.)

Frontier and other private supercomputers will enable machine learning algorithms to push the boundaries further. Today’s most advanced algorithms have hundreds of billions of parameters – or internal connections – but emerging algorithms are likely to grow to trillions.
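To get a feel for why trillion-parameter models call for machines of this class, here is a rough, assumption-heavy estimate of the memory needed just to hold the weights (the byte count per parameter is an assumption, not a figure from the article):

```python
# Assumption-heavy sketch: memory needed just to hold a trillion weights.
params = 1e12                 # a trillion parameters
bytes_per_param = 2           # assuming 16-bit weights (an assumption,
                              # not a figure from the article)

weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e12:.0f} TB for the weights alone")   # ~2 TB

# Training needs several times more for gradients and optimizer state, which
# is why such models are split across many nodes of a machine like Frontier.
```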

So exascale supercomputers will allow researchers to advance the technology and take on science that was once impractical on slower machines.

Is Frontier Really the First Exascale Machine?

When exactly supercomputing broke the exaflop barrier depends in part on how you define it and what was measured.

Folding@home, a distributed system consisting of a motley collection of volunteer laptops, broke an exaflop at the start of the pandemic. But according to Top500 co-founder Jack Dongarra, Folding@home is a specialized system that is “embarrassingly parallel” and only works on problems with pieces that can be solved completely independently.
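“Embarrassingly parallel” means the work splits into pieces that never need to communicate, so a loose collection of volunteer machines can handle it. Here is a minimal sketch of the pattern in Python; the work function is a made-up stand-in, not anything Folding@home actually runs:

```python
# "Embarrassingly parallel": every work item is independent, so machines (or
# local processes, as here) can crunch them with no coordination beyond
# handing out inputs and collecting results.
from multiprocessing import Pool

def score_candidate(seed: int) -> float:
    # Stand-in for one independent unit of work, e.g. simulating a single
    # folding trajectory. No item depends on any other item.
    x = seed
    for _ in range(100_000):
        x = (1103515245 * x + 12345) % (2**31)   # toy pseudo-random walk
    return x / 2**31

if __name__ == "__main__":
    with Pool() as pool:                         # one worker per CPU core
        results = pool.map(score_candidate, range(32))
    print(max(results))
```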

Even more pertinently, last year rumors circulated that China had as many as two exascale supercomputers operating in secret. Researchers published some details about the machines in papers late last year, but they have yet to be officially benchmarked by Top500. In an IEEE Spectrum interview last December, Dongarra speculated that if exascale machines exist in China, the government may be trying to keep them out of the spotlight to avoid stoking geopolitical tensions that could push the US to limit exports of key technology.

So it’s possible that China beat the US to the exascale punch, but going by the Top500, a metric the supercomputing field has used to determine top dog since the early 1990s, Frontier still gets the official nod.

Next: Zettascale?

It took about 12 years to go from terascale to petascale and another 14 to get to exascale. The next great leap forward could take just as long or longer. The computer industry continues to make steady progress on chips, but the pace has slowed and each step has become more expensive. Moore’s Law isn’t dead, but it’s not as steady as it used to be.

For supercomputers, the challenge goes beyond pure computing power. It may seem like you should be able to scale any system to reach any benchmark you want: just make it bigger. But scale also requires efficiency, or energy needs get out of hand. It is also more difficult to write software to solve problems in parallel across increasingly large systems.

The next 1,000-fold jump, known as zettascale, will require innovations in chips, the systems that connect them into supercomputers, and the software that runs on them. A team of Chinese researchers predicted we would reach zettascale computing by 2035. But of course, nobody really knows for sure. Exascale, once expected to arrive by 2018 or 2020, made the scene a few years behind schedule.
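To see why efficiency, and not just size, is the gating factor, here is a quick extrapolation. Frontier’s efficiency is the figure quoted earlier in the article; the 50-megawatt power budget is an assumption for illustration:

```python
# What zettascale would cost in power if efficiency stayed where it is today.
zettaflop = 1e21
frontier_efficiency = 52.23e9          # flops per watt, quoted earlier

power_watts = zettaflop / frontier_efficiency
print(f"{power_watts / 1e9:.1f} gigawatts")      # ~19 GW, many power plants

# At an assumed ~50 MW budget for a single facility, that implies needing
# roughly a 400-fold improvement in flops per watt, hence new chips,
# interconnects, and software rather than just more cabinets.
target_budget_watts = 50e6                       # assumption for illustration
print(f"{power_watts / target_budget_watts:.0f}x efficiency gain needed")
```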

What is certain is that the hunger for more computing power is unlikely to abate. Consumer applications, such as self-driving cars and mixed reality, and research applications, such as modeling and artificial intelligence, will require faster, more efficient computers. If necessity is the mother of invention, you can expect ever faster computers for a while yet.

Image credit: Oak Ridge National Laboratory (ORNL)
