🇺🇸 US vs 🇨🇳 China numbers here are unbelievable.
The US controls the absolute majority of known AI training compute on the planet and continues to build the biggest, most power-hungry clusters.
China is spending heavily to close the gap. Recent reporting pegs 2025 AI capital expenditure in China at up to $98B, up 48% from 2024, with about $56B from government programs and about $24B from major internet firms. Capacity will grow, but translating capex into competitive training compute takes time, especially under export controls.
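The reported figures can be sanity-checked with simple arithmetic (a sketch using only the numbers above; the "other sources" residual is my label for whatever the reporting does not itemize, not a figure from the reporting itself):

```python
# Sanity-check the reported 2025 China AI capex figures.
capex_2025 = 98e9          # reported upper estimate, USD
growth_vs_2024 = 0.48      # reported year-over-year growth

implied_2024 = capex_2025 / (1 + growth_vs_2024)
print(f"Implied 2024 capex: ${implied_2024 / 1e9:.1f}B")  # ≈ $66.2B

government = 56e9          # reported government programs
internet_firms = 24e9      # reported major internet firms
other = capex_2025 - government - internet_firms
print(f"Other sources: ${other / 1e9:.0f}B")              # ≈ $18B
```

The residual roughly $18B would come from sources outside the two reported categories, which is consistent with the reporting describing them as "about" rather than exhaustive.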
With US controls constraining access to top Nvidia and AMD parts, Chinese firms are leaning more on domestic accelerators. Huawei plans mass shipments of the Ascend 910C in 2025, a two-die package built from 910B chips. US officials argue domestic output is limited this year, and Chinese buyers still weigh tradeoffs in performance, memory, and software.
📜 Chips and policy are moving targets
The policy environment shifted again this week.
A new US arrangement now lets Nvidia and AMD resume limited AI chip sales to China in exchange for a 15% revenue share paid to the US government, covering products like Nvidia H20 and AMD MI308.
This could boost near-term Chinese access to mid-tier training parts, yet it does not restore availability of the top US chips.
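For a sense of scale, the 15% arrangement is easy to work out per sale (a sketch; the unit price below is a hypothetical round number for illustration, not a reported figure):

```python
# Illustrative split of a single chip sale under the reported 15% arrangement.
unit_price = 12_000        # hypothetical USD price per unit, NOT a reported figure
us_share_rate = 0.15       # reported revenue share paid to the US government

us_share = unit_price * us_share_rate
vendor_keeps = unit_price - us_share
print(f"US government share: ${us_share:,.0f}")     # $1,800
print(f"Vendor retains:      ${vendor_keeps:,.0f}") # $10,200
```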
Beijing is cautious about reliance on these parts. Chinese regulators have urged companies to pause H20 purchases pending review, and local media describe official pressure to prefer domestic chips.
🇺🇸 Why performance still favors US chips like Nvidia's
Independent analysts compare Nvidia’s export-grade H20 with Huawei’s Ascend 910B and find the Nvidia part still holds advantages in memory capacity and bandwidth, which matter for training large models.
But software maturity gaps around Huawei's stack remain, and they reduce effective throughput even when nominal specs look close to those of older Nvidia parts like the A100.
These issues make it harder for Chinese labs to match US training runs at the same wall-clock cost.