April 25, 2026 AI News Digest: Breakthroughs in Long-Context Models and Resilient AI Training

DeepSeek AI Releases DeepSeek-V4: Compressed Sparse Attention and Heavily Compressed Attention Enable One-Million-Token Contexts

DeepSeek-AI has released preview versions of the DeepSeek-V4 series, consisting of two Mixture-of-Experts (MoE) language models designed to make one-million-token context windows practical and affordable. The DeepSeek-V4-Pro …
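
The excerpt cuts off before describing how the compressed sparse attention works, so the details of DeepSeek-V4's mechanism are not available here. As a rough, generic illustration of the family of techniques the headline names, the toy NumPy sketch below pools keys into block summaries, lets each query pick its top-k blocks by coarse score, and runs exact attention only inside those blocks, so per-query cost scales with `topk * block` instead of the full sequence length. The block size, top-k selection, and mean-pooling are all assumptions for illustration, not the model's actual design.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compressed_sparse_attention(q, k, v, block=64, topk=4):
    """Toy single-head attention over a long sequence.

    Keys are mean-pooled into per-block summaries (the "compressed"
    representation); each query scores the summaries, keeps the top-k
    blocks, and attends exactly within those blocks only.
    Hypothetical sketch -- not DeepSeek-V4's actual kernel.
    """
    T, d = k.shape
    nb = T // block
    # Compressed key summaries: one pooled vector per block.
    k_blocks = k[: nb * block].reshape(nb, block, d).mean(axis=1)  # (nb, d)
    out = np.zeros((q.shape[0], d))
    for i, qi in enumerate(q):
        # Coarse scores against block summaries; keep the top-k blocks.
        coarse = k_blocks @ qi / np.sqrt(d)                        # (nb,)
        keep = np.argsort(coarse)[-topk:]
        idx = np.concatenate(
            [np.arange(b * block, (b + 1) * block) for b in keep]
        )
        # Exact attention restricted to the selected token positions.
        w = softmax(k[idx] @ qi / np.sqrt(d))
        out[i] = w @ v[idx]
    return out

rng = np.random.default_rng(0)
T, d = 1024, 32
q = rng.normal(size=(8, d))
k = rng.normal(size=(T, d))
v = rng.normal(size=(T, d))
print(compressed_sparse_attention(q, k, v).shape)  # (8, 32)

With these toy settings, each query attends to 4 * 64 = 256 of the 1,024 positions, a quarter of the dense cost; at a one-million-token context the same fixed top-k budget would make the saving far larger, which is the general argument for why compressed sparse attention makes such windows affordable.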