Dr. Robert Castellano's Semiconductor Deep Dive Newsletter

Nvidia’s Downgraded H20 Strategy for China: Technical and Financial Tradeoffs

Dr. Robert Castellano
May 21, 2025

In my recent article “Nvidia and AMD Can Recapture $4.5B and $700M from China Sanctioned Chips by Selling Them to Data Centers as Inference Accelerators,” I made the case that these companies could still salvage meaningful revenue by repositioning restricted chips as inference-only hardware. Nvidia’s latest move with the H20 confirms that strategy. Rather than abandon the China market, Nvidia is downgrading the H20’s performance, stripping out high-bandwidth memory and replacing it with GDDR7 to bring the chip into compliance with U.S. export controls. That choice reshapes the chip’s role, limits its training capability, and slashes its price, but it keeps the product alive in one of Nvidia’s biggest markets.

LPDDR5 and GDDR7 vs HBM3e

The switch from HBM3e to GDDR7 is all about tradeoffs. HBM offers massive bandwidth, which is ideal for AI training, but it is expensive and power-hungry. LPDDR5 and GDDR7 can’t match that bandwidth, but they’re cheaper, easier to integrate, and much better suited to inference, which tolerates reduced memory bandwidth far better than training does. In workloads that don’t need top-tier memory throughput, the cheaper memory gets the job done more efficiently.
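To put rough numbers on that tradeoff, here is a minimal back-of-envelope sketch in Python of bandwidth-bound decode throughput. The bandwidth figures (roughly 4.0 TB/s for an HBM3e-equipped H20 and roughly 1.7 TB/s for a GDDR7 variant) and the 70B-parameter FP8 model are illustrative assumptions made for the arithmetic, not confirmed specifications.

```python
# Back-of-envelope, memory-bandwidth-bound decode throughput.
# All figures below are illustrative assumptions, not Nvidia specifications.

def decode_tokens_per_sec(params_billion: float,
                          bytes_per_param: float,
                          mem_bandwidth_tbps: float) -> float:
    """Low-batch decode is roughly bandwidth-bound: each generated token
    requires streaming the full weight set from memory once."""
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    return (mem_bandwidth_tbps * 1e12) / bytes_per_token

# Assumed memory bandwidths for the two configurations.
configs = {
    "H20 with HBM3e (~4.0 TB/s, assumed)": 4.0,
    "Downgraded H20 with GDDR7 (~1.7 TB/s, assumed)": 1.7,
}

# Hypothetical 70B-parameter model served in FP8 (1 byte per weight).
for name, bandwidth in configs.items():
    tps = decode_tokens_per_sec(params_billion=70,
                                bytes_per_param=1.0,
                                mem_bandwidth_tbps=bandwidth)
    print(f"{name}: ~{tps:.0f} tokens/s per GPU at batch size 1")
```

On these assumed numbers, the GDDR7 configuration gives up well over half the per-GPU generation rate yet still delivers serviceable inference throughput, which is consistent with repositioning the part as an inference accelerator rather than a training chip.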

How the New H20 Compares to the Old
