Nvidia's GTC conference kicked off with a long keynote from CEO Jensen Huang, a roadmap extending to 2028, and an integrated AI stack that's hard for rivals to match.
Here's a look at the questions lingering after GTC's opening.
Can Nvidia's cadence keep demand going?
Nvidia's ability to cannibalize itself with an annual cadence and dangle enough value and performance to convince customers to upgrade has been impressive.
What's unclear is whether this roadmap can keep being Nvidia's greatest trick. Nvidia CEO Jensen Huang joked during his keynote that his salespeople aren't going to be happy that he keeps dissing Hopper, the GPU that arguably started the AI boom. But Blackwell is way better. Blackwell Ultra will be better than Blackwell. Vera Rubin, Rubin Ultra and Feynman will each be better than what came the year before.
Huang's bet is that AI will lead to a continuing cycle of scaling up and scaling out AI factories. Once you scale up, scaling out will lead to better cost of ownership. "Rubin will bring costs down dramatically," said Huang.
- Nvidia launches Blackwell Ultra, Dynamo; outlines roadmap through 2027
- Oracle Cloud adds Nvidia AI Enterprise, Nvidia Blackwell GB200 NVL72
- Nvidia launches DGX Spark, DGX Station personal AI supercomputers
- Nvidia's model parade: Llama Nemotron, Cosmos additions, Isaac GROOT N1
Here's the catch: Agentic AI will lead to more AI infrastructure. Cheaper models will bring more consumption, as will enterprise use cases. The wrinkle is that Nvidia's big customers--AWS, Microsoft Azure, Google Cloud and Meta--are all building custom silicon to lessen their dependence on Nvidia. Can hyperscalers catch up, and do they even have to if good-enough AI infrastructure becomes the norm? Nvidia's answer is that it can deliver performance and value faster.
Can DeepSeek and cheaper models add to Nvidia's moat?
Nvidia's GTC opener revolved around reasoning models and ways to scale. There's a good reason for that--Wall Street is worried that reasoning models will lessen the need to spend on Nvidia gear.
The jury is still out on the DeepSeek impact, but I'd call the impact on Nvidia mostly a coin flip. Cheaper models may speed up enterprise usage and benefit Nvidia. Or cheaper models may mean good-enough AI infrastructure becomes the norm and the latest GPU can wait.
- DeepSeek: What CxOs and enterprises need to know
- DeepSeek's real legacy: Shifting the AI conversation to returns, value, edge
- Nvidia's strong Q4, outlook eases AI infrastructure spending fears for now
Will the AI factory vision become reality?
The short answer is that Nvidia's AI factory vision is going to be reality. The debate is over timing and whether there will be hiccups or overcapacity at some point.
Nvidia's roadmap is public and on a one-year rhythm because you need time to plan AI factories. You need energy, which is the gating factor for AI, as well as land and everything else these facilities require beyond the hardware itself.
Nvidia has a roadmap to Gigawatt AI factories. How fast that road gets paved remains to be seen.
Is Nvidia now the de facto enterprise infrastructure provider?
If you believe that AI will be at the center of every workload, it's a no-brainer to think that Nvidia will power most data centers. There's a reason that Nvidia has expanded so heavily into networking and even desktops. It wants to offer you the full stack.
It remains to be seen whether enterprises build out on-prem AI operations, but that's why Nvidia is also focused on software and open-sourcing models. Its Llama-based models tailored for industry use cases will be used by SAP, ServiceNow and others.
Whether the Nvidia stack becomes the enterprise stack remains to be seen, but I wouldn't rule it out. GM is betting on Nvidia for its AI factory and the industry references cited by Huang are impressive.
All you have to do is look at Nvidia's networking and storage plans to realize the company has more on its mind than GPUs. The key vendors in compute, storage and networking are all following Nvidia's lead.
How long until Nvidia's robotics vision becomes reality?
Huang spent a lot of time talking about models for robotics and the future.
Nvidia's bet is that there will be billions of digital workers collaborating with humans, and that robots will fill a looming shortage of employees. Robots are likely to cost less than human workers, but don't bet against a $50,000 annual cost.
There was some evidence that Nvidia's autonomous vehicle business was gaining traction in its most recent quarter. Robots--even the humanoid variety--may arrive sooner than you'd think due to models that can do a lot more than language. Watch Nvidia's physical AI push closely since it's the enabler for robotics going forward.
How underappreciated is Nvidia's software stack?
Yes, Nvidia pays the bills with accelerated computing systems, but its software stack is what maintains the company's dominance.
Aside from the bevy of models advancing various enterprise use cases, Nvidia Dynamo is the sleeper hit of GTC. Huang called Dynamo the "operating system of the AI factory."
Dynamo separates the prompt-processing and token-generation phases of large language models onto different GPUs. Nvidia said Dynamo optimizes each phase independently to maximize resource use.
By breaking AI workloads up and optimizing compute, Dynamo may become the enabler for Nvidia's entire stack. When running the DeepSeek-R1 model on a large cluster of GB200 NVL72 racks, Nvidia said Dynamo boosts the number of tokens generated per GPU by 30x.
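To make the disaggregation idea concrete, here's a minimal conceptual sketch in Python--not Nvidia's actual API, and all the names here are illustrative assumptions. It shows why splitting the two phases helps: prompt processing (prefill) is compute-bound and runs once per request, while token generation (decode) is memory-bandwidth-bound and runs once per token, so each can be scheduled on a GPU pool sized for its own bottleneck.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    max_new_tokens: int

def prefill(req: Request, gpu_pool: str) -> dict:
    # Compute-bound phase: process the entire prompt in one pass,
    # producing a KV cache that the decode phase will consume.
    return {"kv_cache": f"kv({req.prompt})", "pool": gpu_pool}

def decode(state: dict, req: Request, gpu_pool: str) -> list:
    # Memory-bandwidth-bound phase: generate tokens one at a time
    # from the handed-off KV cache (placeholder tokens here).
    return [f"tok{i}" for i in range(req.max_new_tokens)]

def serve(req: Request) -> list:
    # Each phase runs on its own pool, so prefill and decode
    # capacity can be scaled independently.
    state = prefill(req, gpu_pool="prefill-gpus")
    return decode(state, req, gpu_pool="decode-gpus")

print(serve(Request("hello", 4)))  # four placeholder tokens
```

In a real system the KV-cache handoff between pools is the hard part; the sketch only captures the scheduling split that the "operating system of the AI factory" framing refers to.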