NVIDIA releases two tools for quantum labs: Ising-Decoding, an Apache-2.0 neural pre-decoder framework for surface codes, and Ising-Calibration-1, a 35B-parameter MoE vision-language model that analyzes calibration plots.
NVIDIA has moved deeper into quantum computing with two releases this week that target two of the hardest bottlenecks of running a real quantum lab: error correction and calibration analysis.
The first is Ising-Decoding, an open-source Apache-2.0 framework for training neural-network pre-decoders that run alongside classical decoders like PyMatching. A 3D convolutional neural network consumes detector syndromes across space and time, predicts corrections that reduce syndrome density, and hands a cleaner signal to the final decoder. The repository ships three model variants at different receptive fields (R=9, 13, 17) and deployment paths through PyTorch, ONNX, and TensorRT, with optional INT8 and FP8 quantization.
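The pre-decoder idea can be sketched in PyTorch. This is a minimal illustration, not NVIDIA's implementation: the layer count, channel width, and sigmoid output head are assumptions; only the 3D-convolution-over-space-and-time design and the R=9 receptive field come from the release.

```python
import torch
import torch.nn as nn

class PreDecoder3D(nn.Module):
    """Toy 3D CNN pre-decoder sketch (hypothetical sizes).

    Input:  (batch, 1, T, H, W) detector-syndrome volume over time and space.
    Output: per-detector correction probabilities, same shape as the input.
    """
    def __init__(self, width: int = 32, n_layers: int = 4):
        super().__init__()
        layers = [nn.Conv3d(1, width, kernel_size=3, padding=1), nn.ReLU()]
        # Each additional 3x3x3 conv grows the receptive field by 2,
        # so 4 stacked convs give R = 9, matching the smallest variant.
        for _ in range(n_layers - 1):
            layers += [nn.Conv3d(width, width, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv3d(width, 1, kernel_size=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, syndromes: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(syndromes))
```

In a deployment like the one described, corrections above some confidence threshold would be applied first, and only the residual, sparser syndrome handed to PyMatching.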
The approach is detailed in a new NVIDIA Research paper, "Fast AI-based Pre-decoders for Surface Codes," dated April 2026. The README includes performance charts for X-basis decoding at physical error rates p=0.003 and p=0.006, the regime where surface codes become interesting for fault-tolerant quantum computation.
The second release is Ising-Calibration-1-35B-A3B, a vision-language model fine-tuned to analyze plots from quantum calibration experiments. Built on Qwen3.5-35B-A3B, a sparse mixture-of-experts architecture with 256 experts and 8 active per token, the model has 35B total parameters but activates only 3B per inference step. It runs on two NVIDIA L40S GPUs (48 GB each) or a single H100 (80 GB).
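The 35B-total / 3B-active split pins down the rough parameter budget. A back-of-envelope calculation, assuming the stated figures and that everything outside the shared trunk sits in the expert FFNs (real layouts differ):

```python
# Back-of-envelope MoE parameter split. Assumes all non-shared
# parameters live in the 256 expert FFNs; actual architectures
# also vary expert size per layer.
total_params = 35e9    # 35B total
active_params = 3e9    # 3B active per token
n_experts, top_k = 256, 8

# total  = shared + 256 * per_expert
# active = shared + 8   * per_expert
per_expert = (total_params - active_params) / (n_experts - top_k)
shared = active_params - top_k * per_expert

print(f"per expert: {per_expert/1e9:.2f}B, shared trunk: {shared/1e9:.2f}B")
# roughly 0.13B per expert and ~2B of always-on shared weights
```

The takeaway is that per-token compute scales with the ~3B active parameters, which is what makes a 35B model fit the modest GPU footprint quoted above.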
Benchmarks that matter to quantum labs
On NVIDIA's QCalEval benchmark, the fine-tuned model scored 74.7 overall versus 55.5 for the Qwen3.5 base, a roughly 35% relative gain, as scored by an averaged panel of GPT-5.4 and Gemini-3.1-Pro judges. The largest gaps appear in analytical tasks: experimental conclusion jumps from 39.9 to 67.1, fit quality assessment from 52.7 to 90.5, and experiment success classification from 50.6 to 75.3.
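The headline relative gain follows directly from the two overall scores:

```python
base, tuned = 55.5, 74.7  # QCalEval overall scores
relative_gain = (tuned - base) / base
print(f"{relative_gain:.1%}")  # 34.6%, reported as ~35%
```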
Technical description barely moves (86.8 to 87.8), which is the expected pattern. Base models already describe what they see. The jump is in reasoning about whether an experiment actually worked, extracting parameters, and ranking fit quality, tasks that normally require a senior calibration engineer.
Why two releases in parallel
The strategic logic is straightforward. Surface codes need pre-decoders that run in microseconds to keep pace with syndrome generation. Calibration engineers need analysis tools that can read a plot, classify it, and decide whether to re-run or accept. Both problems are computationally hard in practice, both lend themselves to neural-network approximation, and both are now accessible to any lab with a handful of NVIDIA GPUs.
Ising-Decoding targets superconducting qubits using standard surface-code syndromes. Ising-Calibration-1 supports both superconducting and neutral-atom systems, which is notable because neutral-atom platforms from QuEra and similar vendors have their own calibration pipelines that mainstream lab software has historically struggled to support.
Licensing separates ops from models
Ising-Decoding ships under Apache 2.0, making it fully redistributable and embeddable in closed-source stacks. Ising-Calibration-1 ships under the NVIDIA Open Model License, which permits commercial and non-commercial use but carries NVIDIA's standard terms on model weights. The base Qwen3.5 weights remain Apache 2.0.
That split is deliberate. The training framework is infrastructure NVIDIA wants everyone building on. The fine-tuned domain-expert model is a product moat.
What this changes for the field
Neural decoders have been a research topic for years. Moving them to Apache 2.0 with TensorRT deployment paths means any quantum hardware vendor can now plug NVIDIA's pre-decoder into its control stack without retraining from scratch. The Ising-Calibration model lowers the barrier further for labs that lack senior calibration engineers. The documentation repeatedly warns that outputs should be validated by domain experts, but the 74.7 QCalEval score suggests the model is already useful as a first-pass filter.
For the broader AI industry, the pattern is worth watching: domain-specialized MoE models targeting physical-science workflows. NVIDIA is not alone here. Google's quantum AI team and IBM Quantum both have internal calibration tooling. Open releases of this weight class tuned for a single scientific domain are still rare.
What to watch next
The release notes tag Ising-Decoding as v0.1.0, suggesting more model sizes and error models are coming. The paper's April 2026 date points to a conference submission at QIP or a QEC venue in the next cycle. And the fact that NVIDIA is shipping a 35B vision model for plot analysis signals that the company sees quantum calibration as a workflow problem big enough to fund a domain-specific foundation model, not just decoders.
For labs running surface-code experiments today, the practical question is whether a TensorRT-deployed 3D CNN fits inside their existing syndrome-generation latency budget. If yes, this is a plug-in upgrade.
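That budget question can be answered empirically. Here is a minimal sketch of the measurement a lab would run, using a hypothetical stand-in torch model rather than the released decoder, and a placeholder budget rather than any figure from the release:

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in for the deployed pre-decoder; a real check
# would load the released TensorRT engine instead.
net = nn.Conv3d(1, 8, kernel_size=3, padding=1).eval()
x = torch.zeros(1, 1, 9, 9, 9)  # one syndrome window

with torch.no_grad():
    for _ in range(10):          # warm-up runs
        net(x)
    n = 200
    t0 = time.perf_counter()
    for _ in range(n):
        net(x)
    latency_us = (time.perf_counter() - t0) / n * 1e6

budget_us = 1.0  # placeholder syndrome-cycle budget, not from the release
print(f"mean latency: {latency_us:.1f} us, fits budget: {latency_us < budget_us}")
```

On real hardware, the comparison would use the TensorRT engine with INT8 or FP8 quantization enabled, measured on the control-stack GPU rather than a development box.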
