NVIDIA's AI Supercomputing Push: New Chips, Quantum Links

November 17, 2025 · 3 min read

At the SC25 supercomputing conference, NVIDIA unveiled a sweeping set of advances across its accelerated computing portfolio, signaling a major push to cement its role in the next generation of AI infrastructure. The announcements span data center networking, quantum computing integration, and new desktop-scale supercomputers, all designed to meet the exploding computational demands of trillion-parameter AI models.

Among the most notable reveals was DGX Spark, which NVIDIA began shipping last month. Billed as the world's smallest AI supercomputer, it packs a petaflop of performance into a desktop form factor. Built on the Grace Blackwell architecture, it integrates NVIDIA's GPUs, CPUs, and networking technologies, offering developers the ability to run inference on models up to 200 billion parameters and fine-tune them locally.

The company also introduced NVIDIA Apollo, a family of open models for AI physics that aims to accelerate scientific simulation across fields from semiconductors to weather forecasting. Industry leaders including Applied Materials, Cadence, and Siemens are reportedly adopting these models to streamline design processes. Apollo incorporates neural operators, transformers, and diffusion models, providing pretrained checkpoints and reference workflows for developers to customize.

Complementing these efforts is NVIDIA Warp, an open-source Python framework that promises up to 245x GPU acceleration for computational physics and AI workloads. By combining Python's accessibility with performance approaching native CUDA code, Warp allows developers to build GPU-accelerated 3D simulations that integrate with machine learning pipelines in PyTorch and JAX without leaving their programming environment.

On the networking front, NVIDIA highlighted its BlueField-4 data processing units (DPUs), which offload critical data center functions to free up CPUs and GPUs for compute-intensive tasks. Storage innovators DDN, VAST Data, and WEKA are adopting BlueField-4 to enhance performance for AI and scientific workloads, transforming storage into what NVIDIA describes as a 'performance multiplier' for supercomputing infrastructure.

Perhaps the most ambitious networking development is the NVIDIA Quantum-X Photonics platform, which uses co-packaged optics to drastically reduce energy consumption and improve resiliency in AI factories. Early adopters including TACC, Lambda, and CoreWeave plan to integrate these switches into next-generation systems as early as next year, addressing power and signal-integrity challenges at massive scale.

In quantum computing, NVIDIA's NVQLink technology is gaining traction at more than a dozen top scientific computing centers worldwide. This universal interconnect links quantum processors with NVIDIA GPUs, enabling real-time quantum error correction and hybrid applications. Quantinuum recently demonstrated the world's first real-time decoding of scalable quantum error-correction codes using NVQLink, achieving 99% fidelity.

The company also announced a partnership with RIKEN in Japan to build two new GPU-accelerated supercomputers featuring 2,140 Blackwell GPUs, strengthening the country's sovereign AI strategy. These systems, scheduled for operation in spring 2026, build on RIKEN's collaboration with Fujitsu and NVIDIA to co-design FugakuNEXT, the successor to the Fugaku supercomputer.