Nvidia Deep Dive: Do Its Current Results Make Chips From AMD Irrelevant?
Abridged Version for Free Subscribers
Topics
Nvidia’s Cadence of New Chips Per Year
Nvidia’s Sales and Competition in China
Analysis of Nvidia’s Software Competition
Data Center Growth
What About AMD
Comparing Nvidia and AMD Data Center Metrics
Comparing Nvidia and AMD Non-Data Center Metrics
Investor Takeaway
Nvidia (NVDA) reported record-breaking revenue this quarter as the surge in corporate demand for artificial intelligence continues. For the first quarter of fiscal year 2025, Nvidia achieved $26 billion in revenue, marking an 18% increase from the previous quarter and a staggering 262% rise from the same period last year. The company's net profit soared to $14.88 billion, up from $2 billion a year ago, reflecting a 644% year-on-year increase. Nvidia also projects revenue to reach $28 billion in the current quarter ending in July, surpassing Wall Street expectations and more than doubling from the same period last year.
Nvidia's adjusted gross margin for the first quarter was 78.9%, exceeding expectations of 77%. For the second quarter, the company anticipates an adjusted gross margin of 75.5%, plus or minus 50 basis points, against analysts' forecast of 75.8%. The AI chipmaker, considered a bellwether for AI's ongoing transformation, reported earnings of $5.98 per share, up 21% from the previous quarter and 629% from the same period last year, beating analysts' expectations of $5.59 per share on $24.65 billion in revenue. Nvidia also announced a 10-for-1 stock split effective June 7, with shares currently trading at about $962.
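For readers who want to verify the growth math, here is a minimal sketch in Python. The prior-period baselines are not quoted above; they are back-calculated from the percentages in the text and rounded, so treat them as approximations rather than reported figures.

# Quick sanity check of the QoQ and YoY growth rates implied by the
# reported Q1 FY2025 figures (dollar amounts in billions, rounded).

def pct_change(current: float, prior: float) -> float:
    """Percentage change from the prior period to the current period."""
    return (current - prior) / prior * 100

revenue_q1_fy25 = 26.0    # reported
revenue_q4_fy24 = 22.1    # approx. prior quarter, back-calculated from the +18% QoQ change
revenue_q1_fy24 = 7.19    # approx. year-ago quarter, back-calculated from the +262% YoY change

eps_q1_fy25 = 5.98        # reported earnings per share
eps_q1_fy24 = 0.82        # approx. year-ago EPS implied by the +629% increase

print(f"Revenue QoQ: {pct_change(revenue_q1_fy25, revenue_q4_fy24):+.0f}%")  # ~ +18%
print(f"Revenue YoY: {pct_change(revenue_q1_fy25, revenue_q1_fy24):+.0f}%")  # ~ +262%
print(f"EPS YoY:     {pct_change(eps_q1_fy25, eps_q1_fy24):+.0f}%")          # ~ +629%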
Chart 1 below illustrates revenue growth by segment in the most recent quarter; a short sketch after the chart backs out the prior-quarter figures these changes imply:
Revenue grew +18% QoQ to $26.0 billion.
Data Center jumped +23% QoQ to $22.6 billion.
Gaming dropped 8% QoQ to $2.6 billion.
Professional Visualization declined 8% QoQ to $0.4 billion.
Automotive leaped +17% QoQ to $0.3 billion.
OEM & Other dropped 13% QoQ to $0.1 billion.
Chart 1
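As a companion to Chart 1, this minimal Python sketch backs out the approximate prior-quarter revenue for each segment from the rounded QoQ changes listed above; the implied prior-quarter figures are estimates derived from those percentages, not reported numbers.

# Back out approximate prior-quarter revenue per segment from the QoQ
# changes listed above (current revenue in $B, QoQ change in percent).
segments = {
    "Data Center":                (22.6, +23),
    "Gaming":                     (2.6,  -8),
    "Professional Visualization": (0.4,  -8),
    "Automotive":                 (0.3,  +17),
    "OEM & Other":                (0.1,  -13),
}

total_current = 0.0
for name, (current, qoq_pct) in segments.items():
    prior = current / (1 + qoq_pct / 100)  # implied prior-quarter revenue
    total_current += current
    print(f"{name:<28} now ${current:.1f}B, prior ~${prior:.1f}B")

print(f"Segment total: ${total_current:.1f}B")  # ~ $26.0B, matching reported revenue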
Nvidia's market value has skyrocketed this year, adding over $1.1 trillion to its worth. At the end of 2022, Nvidia's market cap stood at $359 billion. By mid-2024, with the stock up about 90% year to date, the company's valuation has surged to $2.33 trillion, trailing only Apple (by roughly $500 billion) and Microsoft (by roughly $900 billion).
It's important to note that a significant portion of Nvidia's new revenue comes from a select group of customers. Amazon, Meta Platforms, Microsoft, and Alphabet's Google collectively account for about 40% of Nvidia's sales. CEO Jensen Huang is diversifying by producing complete computers, software, and services to help more businesses and government agencies deploy AI systems.
Nvidia's data center unit, the largest contributor to its sales, generated $22.6 billion in revenue, while gaming chips brought in $2.6 billion. Analysts had forecasted $21 billion for the data center unit and $2.6 billion for the gaming unit. Nvidia also emphasized the strong performance of its networking components, which are crucial for connecting large clusters of chips. The company reported $3.2 billion in networking revenue, primarily from its InfiniBand products, tripling from the same period last year.
Nvidia holds over 80% of the AI chip market, positioning it as both the primary driver and beneficiary of AI's rapid growth. The high performance of Nvidia's chips makes them indispensable in current AI data centers, and its proprietary CUDA software framework, essential for programming AI processors, solidifies its leadership.
The company's flagship processor, the H100, is in high demand, powering AI applications like OpenAI's ChatGPT. While most high-end processors cost a few thousand dollars, the H100 ranges from $15,000 to $40,000 per unit, depending on production volumes and other factors, according to analysts.
CFO Colette Kress highlighted that Nvidia has worked with over 100 customers recently, building new “AI factories” with hundreds to tens of thousands of GPUs, some with as many as 100,000.
Nvidia’s Cadence of New Chips Per Year
According to CEO Huang, "After Blackwell, we have another chip, and our cadence is to release one once a year."
Nvidia is set to roll out a successor to the H100, code-named Blackwell, announced in March. Demand for this new chip is already high, suggesting that some customers might wait for it instead of purchasing the H100. Despite this, Nvidia's recent results showed no signs of a slowdown. Kress mentioned that demand for Blackwell chips far exceeds supply and is expected to continue outstripping supply next year. Huang added that the new chips will be operational in data centers later this year, contributing significantly to revenue and preparing for the next wave of growth.
Nvidia has a track record of introducing new architectures every two years—Ampere in 2020, Hopper in 2022, and Blackwell in 2024. The H100 AI chip belongs to the Hopper architecture, while the upcoming B200 will be part of Blackwell, with both architectures also used in gaming and creator GPUs.
Looking ahead, Nvidia's next-generation AI chip will be known as R100. The R100 is expected to be built on TSMC's N3 process, a 3nm-class node, while the recent Blackwell chips use TSMC's N4P process, an enhanced 5nm-class node. The R100 AI chips are expected to enter mass production in Q4 2025.
….
This is an abridged version of a full, more complete article that is available for Paid Subscribers only. Sign up today.