And the Winners Are… Nvidia and TSMC
Note: I no longer write for Seeking Alpha, so all my technical-marketing-investor content will now be on Substack. As a heads-up, all articles over the next week will be free to readers as an introductory program. Following that, I will initiate a paid-subscriber policy, so sign up now.
Nvidia (NVDA) today reported revenue of $30.0 billion for fiscal Q2 2025, which ended July 28, 2024, up 15% from the previous quarter and up 122% from a year ago.
For the quarter, GAAP earnings per diluted share was $0.67, up 12% from the previous quarter and up 168% from a year ago. Non-GAAP earnings per diluted share was $0.68, up 11% from the previous quarter and up 152% from a year ago.
“Hopper demand remains strong, and the anticipation for Blackwell is incredible,” said Jensen Huang, founder and CEO of Nvidia, in the company’s press release. “Nvidia achieved record revenues as global data centers are in full throttle to modernize the entire computing stack with accelerated computing and generative AI.”
Colette Kress, Nvidia’s executive vice president and chief financial officer, noted:
“We shipped customer samples of our Blackwell architecture in the second quarter. We executed a change to the Blackwell GPU mask to improve production yield. Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal 2026. In the fourth quarter, we expect to ship several billion dollars in Blackwell revenue. Hopper demand is strong, and shipments are expected to increase in the second half of fiscal 2025.”
I discussed this possible delay in my August 25, 2024 Substack article entitled “Nvidia: Even if Rumors of Delays are True, Nvidia has Other Solutions.”
Second-quarter Data Center revenue was a record $26.3 billion, up 16% from the previous quarter and up 154% from a year ago. As shown in Chart 1, Data Center growth has been strong for the past five quarters. Strong data center spending by members of the “Magnificent Seven” has been responsible, as I discuss below.
Chart 1
And the Winner is - Nvidia
In the second quarter of 2024, several leading technology companies reported strong financial results and notable strategic initiatives. Alphabet (Google), Microsoft, Meta, Amazon, Apple, and Tesla all reported significant developments in their revenues, profits, and capital expenditures, reflecting ongoing growth and investment in key areas.
Alphabet (Google) achieved strong financial results with second-quarter revenue of $84.742 billion, marking a 13.59% increase year-over-year and a 5.22% rise from the previous quarter. The company's net profit reached $23.619 billion, up 28.59% compared to the same period last year, though down a slight 0.18% sequentially. Alphabet's capital expenditure rose substantially, totaling $13.186 billion, a 91.43% year-over-year increase and a 9.77% rise quarter-over-quarter. Noteworthy developments for Alphabet include the impressive performance of the TPU v6, which offers 4.7 times the peak computing power of the TPU v5e, as well as the doubling of the Gemini 1.5 Pro context window from 1 million to 2 million tokens. Additionally, the company is gearing up for its "Made by Google 2024" event, where the Pixel 9 smartphone is anticipated to be unveiled.
Microsoft (MSFT) also reported strong results, with second-quarter revenue of $64.727 billion, reflecting a 15.2% increase year-over-year and a 4.64% rise from the previous quarter. The company’s net profit was $22.036 billion, a 9.74% increase year-over-year and a modest 0.44% gain sequentially. Microsoft’s capital expenditure for the quarter was $13.873 billion, marking 55.13% year-over-year growth and a notable 26.67% increase from the previous quarter. The company has seen a boost in its intelligent cloud business, with AI services now contributing 8 percentage points to Azure’s revenue growth, up from 7 points in the previous quarter. Azure AI’s customer base has expanded to more than 60,000, a nearly 60% year-over-year increase, and GitHub Copilot has gained more than 77,000 customers, an impressive 180% rise year-over-year.
Meta (META) reported a second-quarter revenue of $39.071 billion, a 22.1% increase year-over-year and a 7.18% rise from the previous quarter. The company's net profit surged to $13.465 billion, marking a substantial 72.89% year-over-year increase and an 8.86% sequential growth. Meta's capital expenditure was $8.173 billion, up 33.24% year-over-year and 27.7% from the previous quarter. Meta’s business performance includes a 7% year-over-year increase in daily active users, averaging 3.27 billion, and a 10% increase in advertising impressions from its applications. Additionally, the average price per advertisement rose by 10% year-over-year.
Amazon (AMZN) delivered strong results with a revenue of $147.977 billion in the second quarter, representing a 10.12% year-over-year increase and a 3.25% rise from the previous quarter. The company's net profit doubled to $13.485 billion, reflecting a 99.78% year-over-year increase and a 29.28% sequential growth. Amazon's capital expenditure reached $16.393 billion, up 57.44% year-over-year and 17.64% quarter-over-quarter. The company has launched several AI-driven consumer features, such as Rufus, a shopping assistant, and Maestro, a playlist builder. In addition, Amazon's Zoox self-driving taxis are now being tested on public roads in Austin and Miami.
Apple (AAPL) reported second-quarter revenue of $85.777 billion, an increase of 4.87% year-over-year but a decrease of 5.48% from the previous quarter. The company’s net profit was $21.448 billion, up 7.88% year-over-year but down 9.26% sequentially. Apple’s capital expenditure totaled $2.151 billion, a 2.77% increase year-over-year and a 7.77% rise quarter-over-quarter. Apple is preparing to introduce new features in iOS 18, iPadOS 18, and macOS Sequoia this summer, with the new iPhone 16 expected to launch in September.
Tesla (TSLA) saw second-quarter revenue of $25.5 billion, representing a 2.3% year-over-year increase and a significant 19.71% rise from the previous quarter. The company's net profit was $1.478 billion, a decrease of 45.32% year-over-year and 30.91% sequentially. Tesla's capital expenditure amounted to $2.272 billion, up 10.29% year-over-year and 18.19% from the previous quarter. The company highlighted advancements such as the second-generation Optimus robot, which boasts improved performance and a reduction in weight. Tesla's Texas supercomputing cluster, "Cortex," now includes about 100,000 Nvidia H100 and H200 GPUs designed for training its autonomous driving and humanoid robot technologies.
Table 1 summarizes the CY Q2 revenues reported by the above companies. Data center customers Alphabet, Microsoft, Meta, and Amazon reported double-digit YoY increases and positive QoQ increases.
Table 2 shows the capex spend of these companies. Capex combines data center construction as well as expenditures for Nvidia (and other) chips. As with the data in Table 1, the data center hyperscalers increased capital expenditures by double digits YoY and QoQ in Q2.
And the Winner is - TSMC
In addition to leveraging Nvidia’s cutting-edge AI GPU processors, such as the H100 and A100, these companies are also investing significantly in the development of their own internal AI processors. This dual approach allows them to tailor their hardware to specific needs, optimize performance, and gain a competitive edge in the rapidly evolving AI landscape.
Alphabet (Google). Internal Processor Development: Google’s internal AI processors are known as Tensor Processing Units (TPUs). The latest iteration, TPU v6, represents a significant leap in performance and efficiency. Google designs these chips to optimize AI and machine learning workloads specifically for its extensive cloud infrastructure and data centers. The TPU v6 offers impressive processing power, reportedly around 85 TFLOPS, enabling rapid training and inference of large-scale AI models. This custom silicon is integral to Google Cloud's AI services, providing a substantial performance boost over general-purpose GPUs.
Microsoft. Internal Processor Development: Microsoft has developed custom AI processors for use within its Azure cloud platform. These processors, tailored for AI tasks, complement Nvidia GPUs by offering optimized performance for specific applications. The Azure AI processors are designed to handle diverse machine learning tasks efficiently and are used to support Microsoft’s extensive cloud-based services. With around 45 TFLOPS of processing power, these processors are crucial for managing large-scale AI operations and providing scalable solutions to customers.
Meta (Facebook). Internal Processor Development: Meta Platforms has invested heavily in designing custom AI processors to support its expansive AI research and development efforts. These processors are developed to handle the complex and demanding AI workloads associated with Meta’s vast social media and advertising platforms. With around 35 TFLOPS of processing power, Meta’s processors are engineered to drive advancements in machine learning, data analysis, and content personalization.
Amazon (AWS). Internal Processor Development: Amazon Web Services (AWS) has created its own AI inference processor known as Inferentia. This custom chip is optimized for machine learning inference tasks and is used extensively in AWS’s cloud services to accelerate AI applications. With a performance of approximately 25 TFLOPS, Inferentia is designed to handle high-throughput inference workloads efficiently, complementing the capabilities of Nvidia’s GPUs in AWS’s data centers.
Apple. Internal Processor Development: Apple has developed its own Neural Engine, integrated into its A-series and M-series chips, to enhance on-device AI capabilities. This internal processor, which offers around 20 TFLOPS of performance, is used to support a wide range of AI tasks on Apple devices, including image recognition, natural language processing, and augmented reality. The Neural Engine is designed to deliver high performance while optimizing power consumption, making it well-suited for mobile and edge computing applications.
Tesla. Internal Processor Development: Tesla’s custom silicon includes its in-vehicle Full Self-Driving (FSD) chips and the D1 chips that power its Dojo training supercomputer, both designed for autonomous driving and related AI applications. With approximately 40 TFLOPS of processing power, these chips are integral to Tesla’s autonomous driving technology, enabling advanced features such as real-time object detection, lane-keeping, and navigation. The FSD chips are part of Tesla’s broader strategy to develop in-house hardware that is closely integrated with its software and vehicle systems.
Table 4 summarizes the internal processors developed and used by these companies and the number of units deployed in the 2023-2024 timeframe. In this context, "deployment" refers to the process of integrating and using purchased Nvidia AI processors or internal AI processors in a company's infrastructure or products.
But how do these internal processors compare to those of Nvidia? In Table 5, I use a common metric for AI processing power, TFLOPS, which indicates the number of trillion floating-point operations per second a processor can handle and is crucial for tasks like AI model training and inference.
These processors are listed below, followed by a short sketch that tabulates and ranks the same figures:
Nvidia H100: The H100 Tensor Core processor leads in AI processing power with approximately 60 TFLOPS, making it ideal for demanding AI training and inference tasks.
Nvidia A100: The A100 provides strong performance at around 20 TFLOPS, but it is less powerful compared to the H100. It is well-suited for a wide range of AI applications but has been surpassed by newer models.
Alphabet (TPU v6): Google's TPU v6 outperforms both Nvidia models with approximately 85 TFLOPS, highlighting its significant advantage in AI processing.
Microsoft (Azure AI): Microsoft's custom-built AI processors offer around 45 TFLOPS, placing them between the H100 and A100 in terms of performance.
Meta (Facebook AI Research): Meta's processors provide around 35 TFLOPS, indicating high performance but still lower than TPU v6 and H100.
Amazon (AWS Inferentia): With 25 TFLOPS, AWS Inferentia is more focused on inference rather than training, making it less powerful compared to the H100 but adequate for its intended applications.
Apple (Neural Engine): Apple’s Neural Engine delivers around 20 TFLOPS, similar to the A100, but optimized for on-device tasks.
Tesla (FSD Chips): Tesla’s D1 chips, with about 40 TFLOPS, are designed specifically for autonomous driving and show high performance tailored to this application.
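For readers who want to reproduce this ranking, the short sketch below simply tabulates the TFLOPS figures quoted above (the same estimates that appear in Table 5, not official vendor datasheet numbers) and sorts them from highest to lowest.

```python
# Minimal sketch: rank the processors by the TFLOPS estimates quoted above.
# These are the article's Table 5 figures, not vendor datasheet specs.
processors = {
    "Alphabet TPU v6": 85,
    "Nvidia H100": 60,
    "Microsoft Azure AI": 45,
    "Tesla FSD/D1": 40,
    "Meta (FAIR)": 35,
    "AWS Inferentia": 25,
    "Nvidia A100": 20,
    "Apple Neural Engine": 20,
}

# Print the processors from highest to lowest claimed TFLOPS.
for name, tflops in sorted(processors.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:<20} {tflops:>3} TFLOPS")
```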
Taiwan Semiconductor (TSM), better known as TSMC, dominates all semiconductor foundries for AI processors, as shown in Table 6. All of the processors above are manufactured by TSMC at the 7nm node or below.
Financial Metrics of Nvidia and TSMC
In Chart 2, I plot TSMC's High Performance Computing (HPC) revenues against Nvidia's Data Center revenues from Q1 2021 to Q2 2024. To make the two series visually comparable, I scaled TSMC's HPC revenues (in NT$ billion) by a factor of 15, while Nvidia's revenues are presented in $ million.
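As a rough illustration of how two series with different units can be overlaid this way, the sketch below applies the same x15 scaling before plotting. The quarterly values are placeholders for illustration only; they are not the data behind Chart 2.

```python
import matplotlib.pyplot as plt

# Illustrative sketch of the Chart 2 scaling: TSMC HPC revenue (NT$ billion)
# is multiplied by 15 so it can share an axis with Nvidia Data Center
# revenue ($ million). All values below are placeholders, not chart data.
quarters = ["Q1'23", "Q2'23", "Q3'23", "Q4'23", "Q1'24", "Q2'24"]
tsmc_hpc_ntd_b = [230, 210, 230, 270, 280, 350]            # placeholder NT$ billion
nvda_dc_usd_m = [4300, 10300, 14500, 18400, 22600, 26300]  # placeholder $ million

plt.plot(quarters, [v * 15 for v in tsmc_hpc_ntd_b], marker="o",
         label="TSMC HPC (NT$ B x 15)")
plt.plot(quarters, nvda_dc_usd_m, marker="s",
         label="Nvidia Data Center ($ M)")
plt.ylabel("Scaled revenue")
plt.legend()
plt.show()
```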
The chart supports the argument that there was a close relationship between TSMC's HPC and Nvidia's Data Center revenues up until Q2 2023, when Nvidia's data center revenues started skyrocketing.
To be clear, TSMC's HPC products include personal computer central processing units ("CPUs"), graphics processing units ("GPUs"), field programmable gate arrays ("FPGAs"), server processors, accelerators, and high-speed networking chips.
Chart 2
In Chart 3, I add AMD's (AMD) Data Center revenues for the same period. Revenues began increasing in the fourth quarter of 2023, when AMD launched its MI300 accelerator family in December with strong partner and ecosystem support from multiple large cloud providers, all the major OEMs, and many leading AI developers, according to the company. Data Center growth for AMD was 42.8% QoQ in Q4, but it dropped to just 2.4% in Q1 before increasing 21.3% in Q2 2024.
TSMC’s HPC growth was 17% QoQ in Q4, but it dropped to just 3.1% in Q1 before increasing 27.6% in Q2 2024.
Chart 3
Investor Takeaway
For fiscal Q3 2025, Nvidia forecasts revenue of $32.5 billion, plus or minus 2%. At the midpoint of this guidance, the implied sequential growth rate is about 8%. Should Nvidia achieve the guidance midpoint, it would represent a significant slowdown in growth compared to the first half of the year: sequential revenue growth was 15% in Q2 and 18% in Q1.
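The implied growth rate follows directly from the guidance midpoint versus the just-reported quarter; a quick check, using the $30.0 billion Q2 figure above:

```python
# Implied sequential growth from the fiscal Q3 2025 guidance midpoint.
q2_revenue = 30.0        # $ billion, reported fiscal Q2 2025 revenue
q3_guidance_mid = 32.5   # $ billion, guidance midpoint (+/- 2%)

qoq_growth_pct = (q3_guidance_mid / q2_revenue - 1) * 100
print(f"Implied sequential growth: {qoq_growth_pct:.1f}%")  # ~8.3%
```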
Because of this lackluster guidance, Nvidia stock was down 7% after hours. I suggest that Nvidia may have foreseen potential market disappointment, leading it to announce a new $50 billion share buyback to reassure investors. Despite this substantial buyback, however, the decline in the stock has not yet reversed, and investor sentiment remains negative.
Recently, technology stocks, particularly Nvidia and the "Magnificent 7" (Apple, Microsoft, Nvidia, Alphabet, Meta, Amazon, and Tesla), have experienced notable fluctuations driven by several key factors beyond recent earnings reports. High valuations have been a significant concern, as elevated stock prices for these tech giants have made them vulnerable to market corrections and shifts in investor sentiment.
Additionally, there has been a noticeable movement of investment capital into other sectors such as energy, biotechnology, and financial technology. This diversification is partly driven by the search for growth opportunities outside the highly competitive and mature technology sector.
In Chart 4, I show the price % change off high for Nvidia, TSMC, and AMD over the past year. To date, and prior to the after-hours performance following today's call, Nvidia is just 7.26% off its high, compared with 11.72% for TSMC and 30.8% for AMD. As a comparison, I show the S&P Technology Select Sector index (^IXT), which is off 7.81%. A short sketch after Chart 4 shows how this metric is computed.
Chart 4
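For clarity on the metric, "price % change off high" is simply the latest price relative to the period high. The minimal sketch below computes it from a hypothetical series of daily closes, not actual market data.

```python
def pct_off_high(closes):
    """Percent change of the latest close from the period high (negative = below the high)."""
    period_high = max(closes)
    return (closes[-1] / period_high - 1) * 100

# Hypothetical daily closes over the lookback window, not actual NVDA data.
closes = [95.0, 110.0, 135.0, 131.0, 125.2]
print(f"{pct_off_high(closes):.2f}% off the period high")  # prints about -7.26%
```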
In other words, Nvidia is off its high by roughly the same amount as the S&P Technology Select Sector index. With a potential recovery in the technology sector, this relative strength should help the stock recover quickly from the probable near-term downturn following the lackluster guidance.
As for TSMC, its stock was down just 3% after hours on Nvidia sentiment. I expect a rapid recovery, as I anticipate strong near-term demand for chips from Apple, TSMC’s largest customer.
I rate Nvidia a Buy and TSMC a Strong Buy.