
Presented by NVIDIA
The AI revolution is accelerating, driven by the billion-parameter reasoning models needed to develop agentic and physical AI. As NVIDIA founder and CEO Jensen Huang shared in his GTC keynote, the move from training to full-production inference is causing AI compute demand to skyrocket as data centers worldwide transform into AI factories designed to serve millions of user queries efficiently and effectively. To meet this $1 trillion opportunity, NVIDIA at GTC unveiled major advancements — from the Blackwell Ultra AI platform and an operating system for AI factories to advances in networking, robotics and accelerated computing.
Blackwell is already in full production — delivering an astonishing 40x performance boost over Hopper. This architecture is redefining AI model training and inference, making AI applications more efficient and more scalable. And coming in the second half of 2025 is the next evolution of the Blackwell AI factory platform: Blackwell Ultra — a powerhouse GPU with expanded memory to support the next generation of AI models.
NVIDIA continues to move fast, committed to an annual AI architecture refresh. NVIDIA Vera Rubin is designed to supercharge AI data center performance and efficiency.
Beyond GPUs, AI infrastructure is undergoing a seismic shift with innovations in photonics, AI-optimized storage and advanced networking. These breakthroughs will dramatically improve scalability and efficiency while reducing energy consumption across massive AI data centers.
Meanwhile, physical AI for robotics and industry is a colossal $50 trillion opportunity, according to Huang. From manufacturing and logistics to healthcare and beyond, AI-powered automation is poised to reshape entire industries. NVIDIA Isaac and Cosmos platforms are at the forefront, driving the next era of AI-driven robotics.
Some of the NVIDIA announcements at GTC
NVIDIA Roadmap: The NVIDIA roadmap includes Vera Rubin, to be released in the second half of 2026, followed by the launch of Vera Rubin Ultra in 2027. The Rubin chips and servers boast improved speeds, especially in data transfers between chips — a critical feature for large AI systems with many chips. And scheduled for 2028 is Feynman, the next architecture to be released, making use of next-gen HBM memory.
DGX Personal AI computers: Powered by the NVIDIA Grace Blackwell platform, DGX Spark and DGX Station are designed for developing, fine-tuning and running inference on large models from the desktop. They’ll be manufactured by a number of companies, including ASUS, Dell and HP.
Spectrum-X and Quantum-X networking platforms: These silicon photonics networking switches help AI factories connect millions of GPUs across sites while dramatically reducing energy consumption. The Quantum-X Photonics InfiniBand switches will be available later this year, and Spectrum-X Photonics Ethernet switches will arrive in 2026.
Dynamo software: Released for free, the open-source Dynamo software helps speed multi-step reasoning, improving efficiency and reducing time to innovation in AI factories.
NVIDIA Accelerated Quantum Research Center: A Boston-based research center will provide cutting-edge technologies to advance quantum computing in collaboration with leading hardware and software makers.
NVIDIA Isaac GR00T N1: A foundation model for humanoid robots, GR00T N1 is the world’s first open, fully customizable foundation model for generalized humanoid reasoning and skills. It features a dual-system architecture — similar to that of reasoning models — for both fast and slow thinking.
Newton Physics Engine: NVIDIA also announced a collaboration with Google DeepMind and Disney Research to develop Newton, an open-source physics engine that lets robots learn how to handle complex tasks with greater precision.
These are just the highlights — don’t miss the full GTC recap, live on NVIDIA’s blog.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].