DeepSeek V4: official open-source release with Day-0 adaptation for Huawei Ascend

DeepSeek

Models / LLM · official + media · 5 sources · ~1 min read

DeepSeek officially released the V4 lineup as open source under the MIT license on April 29. The lineup includes DeepSeek-V4-Pro at 1.6T parameters (49B active) and DeepSeek-V4 at 284B parameters (13B active) — both MoE models with native 1M-token context. The release claims roughly a 9.5x reduction in memory requirements versus V3.2 and a nearly closed gap with frontier closed models on reasoning benchmarks. A defining feature of the release is optimization for Chinese accelerators: Huawei Ascend, Cambricon, Hygon, and Moore Threads completed Day-0 adaptation on release day, with multi-node deployment on Ascend 950 expected in the second half of the year.
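To make the total-versus-active parameter split concrete, here is a back-of-the-envelope sketch of MoE memory and compute footprints. The 1-byte-per-parameter (FP8) figure is an assumption for illustration, not a detail from the release; actual serving footprints also include KV cache and activations, which this ignores.

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float = 1.0) -> float:
    """Rough weight-storage estimate in GB.

    params_billions * 1e9 parameters * bytes_per_param bytes, divided by 1e9
    bytes/GB, simplifies to params_billions * bytes_per_param.
    Assumes FP8 (1 byte/param) by default -- an illustrative assumption.
    """
    return params_billions * bytes_per_param

# DeepSeek-V4 (284B total, 13B active), per the release figures:
storage = weight_memory_gb(284)  # all expert weights must be resident: ~284 GB
compute = weight_memory_gb(13)   # weights actually touched per token: ~13 GB

print(f"resident weights: ~{storage:.0f} GB, active per token: ~{compute:.0f} GB")
```

The gap between the two numbers is the MoE trade-off in miniature: the full expert set dominates memory capacity, while per-token compute and bandwidth scale with the much smaller active subset.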

Why it matters

This is the first major frontier open-weights release purpose-built for Ascend rather than Nvidia — an infrastructure shift for the Chinese AI stack and a signal that US export restrictions have accelerated the formation of a self-sufficient inference ecosystem.

Importance: 5/5

Paradigm shift: open-weights frontier release with Day-0 adaptation for a non-Nvidia stack; ≥4 independent primary-media confirmations.

Sources