Nvidia’s Downgraded H20 Strategy for China: Technical and Financial Tradeoffs
In my recent article “Nvidia and AMD Can Recapture $4.5B and $700M from China Sanctioned Chips by Selling Them to Data Centers as Inference Accelerators,” I made the case that these companies could still salvage meaningful revenue by repositioning restricted chips as inference-only hardware. Nvidia’s latest move with the H20 confirms that strategy. Rather than abandon the China market, Nvidia is cutting down the H20’s performance by removing high-bandwidth memory and replacing it with GDDR7, making it compliant with U.S. export controls. That choice reshapes the chip’s role, limits its training capability, and slashes its price—but keeps the product alive in one of Nvidia’s biggest markets.
LPDDR5 and GDDR7 vs HBM3e
The switch from HBM3e to GDDR7 is a deliberate tradeoff. HBM delivers massive bandwidth, which is what AI training demands, but it is expensive, power-hungry, and supply-constrained. LPDDR5 and GDDR7 can't match that bandwidth, but they're cheaper, easier to integrate, and well suited to inference. For workloads that don't need peak memory throughput, they get the job done more efficiently.
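The bandwidth tradeoff can be made concrete with a back-of-the-envelope roofline: in batch-1 LLM decoding, every generated token has to stream the full set of weights from memory, so throughput is capped at bandwidth divided by model size. The sketch below uses illustrative, assumed bandwidth figures, not published H20 or GDDR7-variant specs:

```python
# Roofline estimate for memory-bandwidth-bound LLM decoding.
# Each decoded token reads every weight from memory once, so:
#   tokens/s  <=  memory bandwidth (GB/s) / weights size (GB)
# All figures below are illustrative assumptions, not real chip specs.

def decode_tokens_per_sec(mem_bandwidth_gb_s: float, weights_gb: float) -> float:
    """Upper bound on decode throughput when bandwidth-bound."""
    return mem_bandwidth_gb_s / weights_gb

WEIGHTS_GB = 70.0  # e.g. a 70B-parameter model stored at 8-bit precision

configs = {
    "HBM3e (assumed 4000 GB/s)": 4000.0,
    "GDDR7 (assumed 1000 GB/s)": 1000.0,
}

for name, bandwidth in configs.items():
    limit = decode_tokens_per_sec(bandwidth, WEIGHTS_GB)
    print(f"{name}: <= {limit:.0f} tokens/s per user")
```

Under these assumed numbers, the GDDR7 configuration tops out at roughly a quarter of the HBM3e configuration's decode rate, which is why the downgraded part still makes sense for inference serving, where batching and cost per token matter more than single-stream speed.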