Nvidia has unveiled a technique that drastically reduces the memory costs of large language model (LLM) reasoning, while Chinese AI startup MiniMax has released a new language model that promises to make high-end AI more affordable, according to reports from February 12 and 13, 2026. Nvidia's dynamic memory sparsification (DMS) can reportedly cut LLM reasoning costs by up to eight times, and MiniMax's M2.5 model offers a cost-effective alternative to existing high-end AI solutions.
Nvidia's DMS technique compresses the key-value (KV) cache, the temporary memory LLMs use to process prompts and reason through problems. According to VentureBeat, experiments showed that DMS lets LLMs "think" longer and explore more candidate solutions without increasing memory usage. Other methods have been proposed to compress this cache, but Nvidia's approach maintains, and in some cases improves, the model's reasoning capabilities.
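The report does not detail how DMS decides what to evict (the eviction policy is learned when a model is retrofitted with the technique), so the sketch below only illustrates the general idea behind KV-cache sparsification: dropping low-importance entries so the cache stays within a fixed memory budget. The function name and the importance heuristic are illustrative assumptions, not Nvidia's actual method.

```python
import numpy as np

def sparsify_kv_cache(keys, values, importance, budget):
    """Keep only the `budget` most important KV-cache entries.

    Toy illustration of KV-cache sparsification. Real methods such as
    Nvidia's DMS learn which entries to evict; here a caller-supplied
    importance score (e.g. accumulated attention weight) stands in.
    """
    if keys.shape[0] <= budget:
        return keys, values
    keep = np.argsort(importance)[-budget:]  # indices of the top-`budget` entries
    keep.sort()                              # preserve original positional order
    return keys[keep], values[keep]

# Example: a 12-token cache compressed to an 8-entry budget.
rng = np.random.default_rng(0)
seq_len, head_dim, budget = 12, 4, 8
keys = rng.standard_normal((seq_len, head_dim))
values = rng.standard_normal((seq_len, head_dim))
importance = rng.random(seq_len)             # stand-in for attention mass
k, v = sparsify_kv_cache(keys, values, importance, budget)
print(k.shape, v.shape)                      # (8, 4) (8, 4): one third smaller
```

The design point is that a cache held at constant size lets a model generate longer reasoning chains for the same memory footprint, which is the trade-off behind the reported up-to-eightfold cost reduction.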
Meanwhile, MiniMax, headquartered in Shanghai, launched its M2.5 language model in two variants. VentureBeat reported that the model promises to make high-end AI so cheap that users might stop worrying about the cost. The weights were released as open source on Hugging Face under a modified MIT License, which requires that anyone using the model commercially "prominently display 'MiniMax M2.5' on the user interface of such product or service."
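For readers who want to try the release, a minimal sketch of loading an open-weights model from Hugging Face with the transformers library follows. The repository id below is a hypothetical placeholder, since the report does not name it; check the MiniMax organization page on Hugging Face for the real one, and note that a model of this class will need substantial hardware.

```python
# Minimal sketch of loading an open-weights model from Hugging Face.
# The repo id is a hypothetical placeholder, not confirmed by the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MiniMaxAI/MiniMax-M2.5"  # hypothetical placeholder id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Explain KV-cache compression in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# Per the modified MIT License, a commercial product built on the model
# must prominently display "MiniMax M2.5" in its user interface.
```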
These advancements come as the AI industry undergoes significant shifts. Fifteen years after Marc Andreessen predicted that software would "eat the world," that prediction has come to fruition in ways that were not fully anticipated, according to Fortune. Software has indeed transformed industries such as retail, video, music, and telecommunications.
In other news, Venezuela is debating a sweeping amnesty for political prisoners, as reported by NPR News on February 13, 2026. Additionally, a study suggests that moderate caffeine intake might reduce dementia risk, according to Nature News.