Nvidia sees massive datacentre compute growth, says networking is next


Nvidia has reported a record quarter in its datacentre division, which posted revenue of $22.6bn, up 427% from a year ago. The company said the results reflect higher shipments of the Nvidia Hopper GPU computing platform used for training and inferencing of large language models, recommendation engines and generative AI (GenAI) applications.

Overall, the company posted revenue of $26bn for the first quarter of fiscal 2025, up 18% from the previous quarter and up 262% from a year ago.

In a transcript of the earnings call posted on Seeking Alpha, Nvidia’s chief financial officer, Colette Kress, said: “In our trailing four quarters, we estimate that inference drove about 40% of our datacentre revenue. Both training and inference are growing significantly. Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production – what we refer to as AI factories.”

In the latest quarter, she said Nvidia had worked with 100 customers building AI factories ranging in size from hundreds to tens of thousands of graphics processing units (GPUs), with some reaching 100,000 GPUs.

During the earnings call, when CEO Jensen Huang was asked how the company would ensure greater utilisation of its GPUs, he said: “The computer is no longer an instruction-driven only computer. It’s an intention-understanding computer. Every aspect of the computer is changing in such a way that instead of retrieving pre-recorded files, it is now generating contextually relevant, intelligent answers. That’s going to change computing stacks all over the world.”

Huang said the company’s Blackwell platform represented the next wave of growth. According to Huang, Blackwell forms the foundation for trillion-parameter-scale GenAI. Within its datacentre business, Nvidia’s networking arm reported revenue of $3.2bn, an increase of 242% from the previous year.

Huang used the earnings presentation to highlight one of the opportunities in networking, saying the company’s Spectrum-X product opens a brand-new market to bring large-scale AI to ethernet-only datacentres.

The product family includes the Spectrum-4 switch, the BlueField-3 DPU (data processing unit) and new software technologies, which, according to Kress, overcome the challenges of AI on ethernet to deliver 1.6 times higher networking performance for AI processing compared with traditional ethernet.

Discussing the networking opportunity, Huang said: “Ethernet is a network, and with Spectrum-X, we’re going to make it a much better computing fabric.”

He said the company would continue to support its NVLink computing fabric for single computing domains, InfiniBand computing fabric and ethernet networking computing fabric. “We’re going to take all three of them forward at a very fast clip. You’re going to see new switches, new NICs [network interface cards], new capability, and new software stacks that run on all three of them.”

Kress added: “Spectrum-X opens a brand-new market to Nvidia networking and enables ethernet-only datacentres to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year.”


