Today at CES, NVIDIA unveiled a transformative leap in personal AI computing: NVIDIA® Project DIGITS. Powered by the brand-new NVIDIA GB10 Grace Blackwell Superchip, Project DIGITS condenses a petaflop of AI performance into a compact desktop system. With 128GB of unified memory, a Linux-based NVIDIA DGX OS, and the full NVIDIA AI software stack preinstalled, this personal AI supercomputer gives researchers, data scientists, engineers, and students the ability to prototype and deploy large models—up to 200 billion parameters—all from a single, power-efficient machine.
In the words of Jensen Huang, founder and CEO of NVIDIA, “AI will be mainstream in every application for every industry. With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers. Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI.”
Below, we dive into the foundational technologies behind Project DIGITS, explore key specifications, discuss developer workflows, and highlight why this launch represents a critical milestone in AI adoption.
1. Meet the NVIDIA GB10 Grace Blackwell Superchip
At the heart of Project DIGITS lies the new NVIDIA GB10 Grace Blackwell Superchip—a singular system-on-a-chip (SoC) that fuses CPU and GPU architectures:
- Grace Blackwell Architecture
- Grace CPU: Based on 20 power-efficient Arm cores.
- Blackwell GPU: Featuring latest-generation CUDA® cores and fifth-generation Tensor Cores.
- Petaflop-Class AI Throughput
- Capable of delivering 1 petaflop of AI performance at FP4 precision.
- Ideal for large-scale model experimentation, fine-tuning, and inference across a wide range of AI applications.
- MediaTek Collaboration
- MediaTek’s deep expertise in Arm-based SoC designs ensures best-in-class power efficiency, performance, and connectivity.
- Despite its unparalleled power, the GB10 Superchip sips electricity from a standard electrical outlet, enabling AI experimentation in a typical office environment.
- NVLink®-C2C Chip-to-Chip Interconnect
- GPU and CPU are tightly bound by NVIDIA NVLink®-C2C, optimizing data flow and drastically reducing latency.
- This synergy empowers Project DIGITS to handle memory-intensive workloads—vital for today’s deep learning tasks, which involve continuous, large-scale data processing.
2. Bringing Petascale AI to Your Desk
Project DIGITS sets a new bar in how developers and researchers access cutting-edge AI performance at home, in universities, and across enterprise labs:
- Petaflop in a Compact Form Factor
The system’s sleek desktop design belies the immense power under the hood. By housing the Grace CPU and Blackwell GPU on a single superchip, Project DIGITS yields powerful performance once reserved for data center racks, now miniaturized to fit easily on a workbench.
- 128GB Unified, Coherent Memory
Every Project DIGITS system boasts 128GB of memory, ensuring that large datasets, advanced models, and computational graphs coexist seamlessly. This unified memory approach helps avoid the data bottlenecks typical of discrete CPU-GPU designs.
- Up to 4TB of NVMe Storage
With substantial high-speed storage available locally, developers can iterate quickly on massive training sets, high-fidelity simulations, or production-scale inference tasks without constantly streaming data from external sources.
- Scalable to 405-Billion-Parameter Models
By harnessing NVIDIA ConnectX® networking, two Project DIGITS machines can be linked to reach an astonishing 405-billion-parameter capacity. This expands the horizons of model exploration and large-scale AI experimentation on premises.
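A rough back-of-envelope check (our own arithmetic, not an official NVIDIA sizing guide) shows why these parameter counts line up with the memory figures: FP4 precision stores each weight in 4 bits, or half a byte per parameter.

```python
# Back-of-envelope sizing: FP4 stores each weight in 4 bits (0.5 bytes).
# This covers model weights only; activations, KV caches, and framework
# overhead all need additional headroom beyond these figures.

BYTES_PER_FP4_PARAM = 0.5  # 4 bits per parameter

def weights_gb(num_params: float) -> float:
    """Approximate weight footprint in gigabytes at FP4 precision."""
    return num_params * BYTES_PER_FP4_PARAM / 1e9

single_system_gb = weights_gb(200e9)  # one system, 128GB unified memory
dual_system_gb = weights_gb(405e9)    # two linked systems, 256GB combined

print(f"200B params @ FP4: {single_system_gb:.0f} GB (vs 128 GB per system)")
print(f"405B params @ FP4: {dual_system_gb:.1f} GB (vs 256 GB across two)")
```

At FP4, 200 billion parameters occupy about 100 GB, comfortably inside one system's 128GB, while 405 billion parameters occupy about 202.5 GB, which is why the larger figure requires two linked machines.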
3. Seamless Workflow: From Local Prototyping to Data Center Production
One of the greatest strengths of Project DIGITS is its versatility. It is designed to enable end-to-end AI workflows, from development and fine-tuning to final deployment:
- Local Development
- Run large models on your own desktop, with rapid iteration cycles.
- Experiment freely with advanced training configurations, parameter tuning, and performance optimizations without sharing data across distant cloud nodes.
- Cloud Integration
- When your application scales, effortlessly move your Project DIGITS workloads to the NVIDIA DGX Cloud™—maintaining consistency in architecture and software stacks.
- The same Grace Blackwell architecture underpins both environments, eliminating the headache of refactoring code for different GPU or CPU architectures.
- Data Center Deployment
- For enterprises that already leverage on-premises NVIDIA DGX systems or other accelerated infrastructures, Project DIGITS integrates seamlessly with the broader cluster environment.
- Build, test, and refine models locally, then deploy them with confidence on your data center cluster using the same NVIDIA AI Enterprise software platform.
4. NVIDIA AI Software Stack: Powering the Future of AI Innovation
Project DIGITS arrives with the entire NVIDIA AI ecosystem at developers’ fingertips, unlocking powerful tools and frameworks right out of the box.
- NVIDIA DGX OS (Linux-Based)
A specialized operating system used in NVIDIA’s high-performance computing and AI systems. Developers can benefit from advanced kernel optimizations, container orchestration, and security features tailored for deep learning workloads.
- NVIDIA NGC Catalog and Developer Portal
Access prebuilt software development kits (SDKs), containers, and pretrained models. Experiment with NVIDIA’s recommended reference designs or tailor an existing model to specific tasks.
- NVIDIA NeMo™
A robust framework for building, customizing, and deploying large language models, speech AI, and generative AI workflows. Perfect for prototyping billion-parameter-scale networks locally on Project DIGITS.
- NVIDIA RAPIDS™
Accelerates data science pipelines using GPU-accelerated libraries. Data preprocessing, feature engineering, and model training all benefit from the Blackwell GPU’s massive parallelism.
- Popular Frameworks and Tools
Project DIGITS supports frameworks such as PyTorch, TensorFlow, and JAX, as well as Python, Jupyter notebooks, and a host of specialized data science toolkits. This open environment encourages frictionless experimentation.
- NVIDIA Blueprints and NVIDIA NIM™
For those crafting next-gen agentic AI applications, NVIDIA Blueprints and NIM microservices can help orchestrate complex multi-agent workflows. These are available for experimentation and testing via the NVIDIA Developer Program.
- NVIDIA AI Enterprise License
When ready to shift from prototype to production, an enterprise license offers commercial support, security updates, and validated product releases. This ensures a smooth and secure transition to mission-critical deployments.
5. Spec Highlights and Technical Overview
- Grace CPU
- 20 Arm-based cores designed for power efficiency and robust computing.
- Ultra-fast NVLink®-C2C for direct communication with the Blackwell GPU.
- Blackwell GPU
- Latest-generation CUDA cores for parallel computation.
- Fifth-generation Tensor Cores specialized for deep learning tasks and accelerated matrix operations.
- Memory
- 128GB of unified, coherent memory shared across CPU and GPU.
- Up to 4TB of NVMe local storage for data, checkpoints, and model weights.
- Performance
- 1 petaflop of AI performance (FP4) from a single Project DIGITS system.
- Scale to 405-billion-parameter models by linking two systems.
- Software Environment
- Preinstalled with Linux-based NVIDIA DGX OS.
- Bundled with AI frameworks and development tools from the NVIDIA NGC catalog.
- Form Factor & Power Consumption
- Compact footprint that fits on a desk.
- Runs on a standard electrical outlet, offering a petaflop of AI without requiring specialized data center power or cooling.
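To put the petaflop figure in perspective, here is a rough, unofficial estimate of peak generation throughput for a 200-billion-parameter model. It assumes the common rule of thumb that a decoder forward pass costs roughly 2 FLOPs per parameter per token; that assumption and the resulting ceiling are ours, not an NVIDIA benchmark, and real throughput will be lower once memory bandwidth and utilization are accounted for.

```python
# Hypothetical peak-throughput estimate. Assumes ~2 FLOPs per parameter
# per generated token (a common approximation for decoder inference) and
# ignores memory-bandwidth limits, so actual numbers will be lower.

PEAK_FLOPS_FP4 = 1e15  # 1 petaflop of FP4 AI performance
PARAMS = 200e9         # 200-billion-parameter model

flops_per_token = 2 * PARAMS  # ~2 FLOPs per parameter per token
peak_tokens_per_sec = PEAK_FLOPS_FP4 / flops_per_token

print(f"Compute cost per token: {flops_per_token:.1e} FLOPs")
print(f"Theoretical ceiling: {peak_tokens_per_sec:.0f} tokens/sec")
```

Under these assumptions the compute-bound ceiling is 2,500 tokens per second; treat it as an upper bound for intuition, not a performance claim.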
6. Availability and Pricing
NVIDIA will release Project DIGITS in May, starting at $3,000, in partnership with top system integrators and distributors. This entry-level pricing opens new possibilities for teams and individuals who historically lacked access to high-powered GPU clusters or enterprise-scale HPC environments.
To be first in line for preorders and availability updates, sign up for notifications today.
Conclusion
NVIDIA Project DIGITS signals a definitive shift in the AI landscape—one where the immense power of large-scale neural networks becomes democratized, accessible, and easy to integrate into everyday development processes. By harnessing the NVIDIA GB10 Grace Blackwell Superchip, Project DIGITS delivers jaw-dropping petaflop performance on a single desktop, bridging the gap between prototype-scale experiments and full production deployment.
Whether you’re an AI researcher pushing the boundaries of model complexity, a data scientist accelerating workflows, a student entering the world of machine learning, or an enterprise professional deploying next-gen AI solutions, Project DIGITS stands ready to propel your AI aspirations to unprecedented heights.