
How GPUs Revolutionized Supercomputing and AI

November 18, 2025


The trajectory of computing power has undergone a dramatic reversal over the past 15 years. Where processing capability once flowed from massive supercomputers down to consumer devices, graphics processing units originally developed for gaming have surged upstream to redefine high-performance computing. This fundamental shift has positioned GPU-accelerated systems as the new standard for scientific research and artificial intelligence.

The JUPITER supercomputer at Germany's Forschungszentrum Jülich exemplifies this transformation. Ranking among the world's most efficient systems at 63.3 gigaflops per watt, it also delivers staggering AI performance of 116 AI exaflops, up from the 92 reported at the recent ISC High Performance conference. This represents more than just raw power—it signals how AI capabilities have become central to scientific computing.

The statistics reveal an unmistakable trend. In 2019, approximately 70% of TOP100 high-performance computing systems relied solely on CPUs. Today, that figure has plummeted below 15%, with 88 of the TOP100 systems now accelerated and 80 of those powered by NVIDIA GPUs. Across the broader TOP500 list, 388 systems—representing 78% of the total—now incorporate NVIDIA technology.

NVIDIA founder and CEO Jensen Huang anticipated this shift years before the current generative AI boom. At SC16, he described deep learning as arriving "like Thor's hammer falling from the sky," providing researchers with unprecedented tools to tackle complex global challenges. The mathematical realities of power consumption were already pushing computing toward GPUs, but the AI revolution accelerated this transition dramatically.

The CUDA-X computing platform built on NVIDIA GPUs enabled supercomputers to handle diverse precision formats—from double precision (FP64) for traditional scientific computing to mixed precision (FP32, FP16) and ultra-efficient formats like INT8 that form the backbone of modern AI. This flexibility allowed researchers to maximize performance within constrained power budgets, running larger simulations and training more sophisticated neural networks.
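To make the precision trade-off concrete, here is a minimal sketch using NumPy dtypes as stand-ins for the GPU number formats named above. FP64, FP32, FP16, and INT8 are all standard formats; the sample value and the symmetric quantization scheme are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Compare how the same value survives in progressively smaller formats.
# Smaller formats cost less memory and energy per operation, which is
# the trade-off that lets AI workloads run within tight power budgets.
value = 3.141592653589793

for dtype in (np.float64, np.float32, np.float16):
    x = dtype(value)                         # round into the target format
    size = np.dtype(dtype).itemsize          # storage cost in bytes
    print(f"{np.dtype(dtype).name}: {size} bytes, stored as {float(x)!r}")

# INT8 has no fractional part: AI inference typically scales values into
# the integer range first (simple symmetric quantization shown here).
scale = 127 / value                          # illustrative scale factor
q = np.int8(round(value * scale))            # quantized representation
print(f"int8: 1 byte, quantized to {q}, dequantized ~= {q / scale:.4f}")
```

Running this shows FP16 already losing digits of pi (it stores 3.140625), while INT8 keeps only a scaled integer—acceptable for neural network weights, but not for the double-precision demands of traditional simulation.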

Even before AI became dominant, power efficiency concerns were driving the shift toward acceleration. Systems like Titan at Oak Ridge National Laboratory in 2012 and Piz Daint in Europe in 2013 demonstrated how combining CPUs with GPUs could unlock massive performance gains. By 2017, leadership systems like Summit and Sierra established acceleration-first as the new standard for scientific computing.

The convergence of simulation and AI represents the latest evolution in this transformation. JUPITER's combination of 116 AI exaflops alongside 1 exaflop of traditional computing power illustrates how scientific research now blends these approaches. Power efficiency hasn't just made exascale computing attainable—it has made AI at exascale practical, enabling breakthroughs in climate modeling, drug discovery, and quantum simulation.

This shift began as a response to power constraints, evolved into an architectural advantage, and has now matured into a scientific capability that combines simulation and AI at unprecedented scales. As these technologies continue to advance, the rest of the computing world appears poised to follow the path blazed by scientific computing.