Nvidia's Vera Rubin NVL72, unveiled at CES 2026, introduces rack-scale encryption, marking a potential turning point for enterprise AI security. The new platform encrypts every bus across 72 GPUs, 36 CPUs, and the entire NVLink fabric, offering confidential computing across CPU, GPU, and NVLink domains for the first time at this scale.
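To make the scope of "confidential computing across CPU, GPU, and NVLink domains" concrete, the sketch below models the kind of check a rack operator might run: every CPU, GPU, and NVLink domain must report both link encryption and a valid attestation before the rack is treated as confidential. The types and field names are hypothetical illustrations of the concept, not Nvidia's actual management API.

```python
from dataclasses import dataclass
from enum import Enum


class Domain(Enum):
    CPU = "cpu"
    GPU = "gpu"
    NVLINK = "nvlink"


@dataclass
class DomainReport:
    """Hypothetical per-domain status a rack controller might expose."""
    domain: Domain
    device_id: str
    encryption_enabled: bool
    attested: bool


def rack_is_confidential(reports: list[DomainReport]) -> bool:
    """A rack only counts as confidential if every CPU, GPU, and NVLink
    domain reports both link encryption and a valid attestation."""
    domains_seen = {r.domain for r in reports}
    all_domains_present = domains_seen == set(Domain)
    all_secure = all(r.encryption_enabled and r.attested for r in reports)
    return all_domains_present and all_secure


# Example: 72 GPUs, 36 CPUs, and one NVLink fabric, all reporting healthy.
reports = (
    [DomainReport(Domain.GPU, f"gpu-{i}", True, True) for i in range(72)]
    + [DomainReport(Domain.CPU, f"cpu-{i}", True, True) for i in range(36)]
    + [DomainReport(Domain.NVLINK, "nvlink-fabric-0", True, True)]
)
print(rack_is_confidential(reports))  # True
```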
The Vera Rubin NVL72 addresses growing concerns about the security of AI models and the infrastructure that trains and serves them. As AI model training becomes increasingly expensive, with Epoch AI research indicating costs growing at roughly 2.4x per year since 2016, trained models become higher-value targets and the need for robust security measures intensifies. The Rubin platform aims to provide a more secure environment for these valuable AI assets.
According to Louis Columbus, "For security leaders, this fundamentally shifts the conversation. Rather than attempting to secure complex hybrid cloud configurations through contractual trust with cloud providers, they can verify them cryptographically." This shift is particularly relevant given the increasing sophistication and frequency of cyberattacks, including those launched by nation-state adversaries.
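A minimal sketch of what "verify them cryptographically" can look like in practice, assuming the platform exposes a signed attestation report and a signer certificate: the verifier checks the report's signature with a standard cryptography library and compares the reported measurement against an expected value. The report layout, signature scheme, and expected hash here are placeholder assumptions, not Nvidia's attestation protocol.

```python
import hmac

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509 import load_pem_x509_certificate


def verify_attestation(report: bytes,
                       signature: bytes,
                       signer_cert_pem: bytes,
                       expected_measurement: bytes) -> bool:
    """Check that the attestation report is signed by the expected key
    and that the measurement it carries matches the value we expect."""
    cert = load_pem_x509_certificate(signer_cert_pem)
    public_key = cert.public_key()
    try:
        # Assumes an ECDSA P-384 signature over the raw report bytes.
        public_key.verify(signature, report, ec.ECDSA(hashes.SHA384()))
    except InvalidSignature:
        return False

    # Placeholder layout: treat the first 48 bytes of the report as the
    # SHA-384 measurement of the code and configuration that booted.
    measurement = report[:48]
    return hmac.compare_digest(measurement, expected_measurement)
```

In a real deployment, the signer certificate would itself be validated against the hardware vendor's root of trust rather than accepted directly, so the chain of evidence runs from silicon to the verifier rather than from a contract to a dashboard.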
The core issue is that traditional security approaches are struggling to keep pace with the rapid advances in AI model training. Existing security budgets and methods may not be sufficient to protect the massive investments behind frontier model training. The Vera Rubin NVL72 aims to address this with a hardware-based encryption solution that scales to the demands of modern AI workloads.
The implications of rack-scale encryption extend beyond mere data protection. It enables organizations to maintain greater control over their AI infrastructure and data, reducing reliance on third-party cloud providers for security. This is especially important for industries dealing with sensitive data, such as healthcare, finance, and defense.
The introduction of the Vera Rubin NVL72 represents a significant step toward securing the future of AI. By providing a comprehensive encryption solution at the rack scale, Nvidia is addressing a critical vulnerability in the AI ecosystem and enabling organizations to develop and deploy AI models with greater confidence. The platform is expected to be available in the latter half of 2026, with further details on pricing and specific configurations to be released closer to the launch date.