

Last year, Nvidia’s annual GTC conference, hailed as the “Woodstock of AI,” drew a crowd of 18,000 to a packed arena befitting rock legends like the Rolling Stones. On stage, CEO Jensen Huang, clad in a shiny black leather jacket, delivered the keynote for the AI chip behemoth’s developer conference with the flair of a headlining act.
Today, a year later, Huang was onstage once again, firing off T-shirt cannons and clad this time in an edgier black motorcycle jacket worthy of a halftime show. This time, Nvidia-watchers tossed around the metaphor of the “Super Bowl of AI” like a football. Nvidia did not shy away from the pigskin comparison, offering a keynote “pre-game” event and a live broadcast in which guest commentators like Dell CEO Michael Dell called plays on how Nvidia would continue to rule the AI world.
As Huang took the stage in front of a stadium-sized image of the Nvidia headquarters, making sure to highlight for his high-tech audience the “Gaussian splatting” 3D rendering technique behind it, his message was clear, even if unspoken: Nvidia’s best defense is a strong offense. With recent reasoning models from Chinese startup DeepSeek shaking up AI, followed by others from companies including OpenAI, Baidu, and Google, Nvidia wants its business customers to know they need its GPUs and software more than ever.
That’s because DeepSeek’s R1 model, which debuted in January, created some doubts about Nvidia’s momentum. The new model, its maker claimed, had been trained for a fraction of the cost and computing power of U.S. models. As a result, Nvidia’s stock took a beating from investors worried that companies would no longer need to buy as many of Nvidia’s chips.
Reasoning models require more computing power
But Huang thinks those selling off made a big mistake. Reasoning models, he said, require more computing power, not less. A lot more, in fact, because of the longer, more detailed answers they generate, the stage of AI work that practitioners call “inference.” The ChatGPT revolution was about a chatbot spitting out quick answers to queries, but today’s models must “think” harder, which means consuming more “tokens,” the fundamental units of text a model reads and writes, whether a whole word or just a fragment of one.
The more tokens a model churns through, the more efficiency customers will demand, and the more computing power reasoning models will require. Making sure Nvidia customers can process more tokens, faster, is the not-so-secret Nvidia play, and Huang made the point so thoroughly that he did not even need to mention DeepSeek by name until an hour into the keynote.
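For the technically inclined, a token count is easy to see firsthand. Below is a minimal sketch using OpenAI’s open-source tiktoken library; the sample sentence and encoding choice are illustrative assumptions, not Nvidia’s or DeepSeek’s tooling:

import tiktoken

# Load a tokenizer; "cl100k_base" is the encoding used by GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Reasoning models think in tokens."
token_ids = enc.encode(text)

# Each ID maps back to a whole word, a word fragment, or punctuation.
for tid in token_ids:
    print(tid, enc.decode_single_token_bytes(tid))

print(f"{len(token_ids)} tokens for {len(text)} characters")

A longer, more deliberate answer simply means more of these tokens, and therefore more GPU time, which is exactly the demand curve Huang is betting on.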
All of the Nvidia GTC announcements that followed were positioned with that in mind. Stock-watchers might well have wanted to see an accelerated timeline for Nvidia’s next AI chip, the Vera Rubin, due at the end of 2026, or more details about the company’s short-term roadmap. Instead, Huang pushed back on the AI pundits who have insisted over the past year that AI’s once-rapid pace of improvement is slowing: Nvidia believes the computing power needed to keep scaling AI is growing faster than ever. That trend would, of course, flow straight to Nvidia’s revenue. “The amount of computation we need as a result of agentic AI, as a result of reasoning, is easily 100 times more than we thought we needed this time last year,” Huang said.
Will Nvidia’s efforts to drive growth be enough to win?
Nvidia’s announcements that followed were all about making sure customers understand they will have everything they need to keep up in a world where extreme speed at delivering detailed answers, and better reasoning, will be the difference between a company’s AI success and failure. Blackwell GPUs, Nvidia’s latest top-of-the-line AI chips, are in full production, with 3.6 million of them already in use. An upgraded version, the Blackwell Ultra, boasts three times the performance. The new Vera Rubin chip and its surrounding infrastructure are on the way. Nvidia’s “world’s smallest AI supercomputer” is at the ready. And software for AI agents is quickly moving into the physical world, including self-driving cars, robotics, and manufacturing.
But will Nvidia’s efforts to drive growth be enough to keep enterprise companies investing in Nvidia products? Will its AI chips, which can cost between $30,000 and $40,000 each, prove too expensive given the still-unclear ROI of AI investments? Ultimately, Nvidia’s premium picks and shovels require enough customers willing to keep digging.
Huang is confident that there are enough—and that Nvidia’s Super Bowl win is not just a victory for the 31-year-old company. “Everyone wins,” he insisted.
Perhaps, but there is no doubt that as Nvidia seeks to establish a dynasty in the AI era, expectations remain higher than ever. Huang, for his part, appears undaunted even as AI continues to evolve at high speed. He’s always reaching for the brass ring, it seems. Or, in this case, the Super Bowl ring.