Nvidia unveiled a new technique to drastically reduce the memory costs of large language model (LLM) reasoning, while OpenAI launched a new coding model powered by Cerebras chips, marking a shift away from its reliance on Nvidia hardware. These developments, alongside ongoing research into brain aging and language learning, highlight the rapid advancements in artificial intelligence and related fields.
According to VentureBeat, Nvidia's new technique, called dynamic memory sparsification (DMS), can compress the key-value (KV) cache – the temporary memory LLMs use during inference – by up to eight times. This allows LLMs to "think" longer and explore more solutions without running out of memory. While other methods have attempted to compress the cache, Nvidia's approach reportedly maintains or even improves the model's reasoning capabilities.
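To make the scale of the savings concrete, here is a minimal back-of-the-envelope sketch of KV-cache memory and what an eight-fold compression would mean. The model dimensions below are hypothetical, chosen only for illustration; they are not Nvidia's configuration, and the function is not the DMS algorithm itself, just standard KV-cache size arithmetic.

```python
# Illustrative arithmetic only: estimate the KV-cache footprint of a
# transformer at a given sequence length, then apply the up-to-8x
# compression ratio VentureBeat attributes to DMS.
# All model dimensions are hypothetical, not Nvidia's actual setup.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Two tensors (K and V) per layer, each of shape
    # [kv_heads, seq_len, head_dim], stored at fp16 (2 bytes).
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 7B-class model running a 32k-token reasoning trace.
baseline = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, seq_len=32_768)
compressed = baseline // 8  # the reported up-to-8x compression

print(f"baseline KV cache:   {baseline / 2**30:.2f} GiB")   # 4.00 GiB
print(f"compressed KV cache: {compressed / 2**30:.2f} GiB") # 0.50 GiB
```

The point of the sketch is that cache size grows linearly with sequence length, so an 8x reduction directly translates into roughly 8x longer reasoning traces in the same memory budget.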
OpenAI's move to Cerebras chips for its new GPT-5.3-Codex-Spark coding model represents a significant departure from its traditional reliance on Nvidia. VentureBeat noted that this model is designed for near-instantaneous response times and is OpenAI's first major inference partnership outside of Nvidia. The partnership comes at a pivotal time for OpenAI, which is navigating a strained relationship with Nvidia, criticism over its decision to introduce advertisements into ChatGPT, a newly announced Pentagon contract, and internal organizational upheaval.
In other news, research continues to explore the impact of caffeine on brain aging. A study of 130,000 people suggests that moderate caffeine intake might reduce dementia risk, according to Nature News. The Nature Podcast also discussed the use of AI to decode the rules of a long-forgotten ancient Roman board game.
Meanwhile, the tech community continues to innovate. The open-source project "zed" is working on reimplementing its Linux renderer with wgpu, as indicated by a pull request on GitHub. Additionally, the website "lairner" offers courses in over 60 languages, including rare and endangered languages, according to Hacker News.