Nvidia unveiled a new technique to reduce the memory costs of large language model (LLM) reasoning by up to eight times, while OpenAI deployed Cerebras chips for "near-instant" code generation, marking a significant move away from its traditional reliance on Nvidia. These developments come amid a flurry of activity in the AI and tech sectors, including a fusion energy milestone and a startup that tripled revenue without increasing headcount.
Nvidia's new technique, called dynamic memory sparsification (DMS), compresses the key-value (KV) cache, the temporary memory LLMs use to process prompts and reason through problems, according to VentureBeat. Experiments showed that DMS allows LLMs to "think" longer and explore more solutions without increasing memory demands. Meanwhile, OpenAI launched GPT-5.3-Codex-Spark, a coding model designed for rapid response times, running on hardware from Cerebras Systems. This partnership represents OpenAI's first major inference collaboration outside of Nvidia, as reported by VentureBeat.
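To make the idea concrete, here is a toy sketch of KV-cache sparsification. This is NOT Nvidia's actual DMS algorithm (whose details are not given here); it only illustrates the general principle that cache entries queries rarely attend to can be evicted, shrinking memory by roughly 1/keep_ratio (8x at a keep ratio of 0.125). The function name, cache layout, and scores are all hypothetical.

```python
def sparsify_kv_cache(cache, attn_scores, keep_ratio=0.125):
    """Keep only the most-attended fraction of cached (key, value) pairs.

    Toy illustration of KV-cache sparsification, not Nvidia's DMS.
    `cache` is a list of (key, value) tuples, one per cached token;
    `attn_scores` holds a per-slot importance score (e.g. accumulated
    attention weight). Low-scoring slots are evicted.
    """
    k = max(1, int(len(cache) * keep_ratio))
    # Rank cache slots by attention score, keep the top k,
    # and preserve their original positional order.
    ranked = sorted(range(len(cache)), key=lambda i: attn_scores[i], reverse=True)
    keep = sorted(ranked[:k])
    return [cache[i] for i in keep]

# 64 cached tokens with synthetic keys/values and importance scores.
cache = [(f"k{i}", f"v{i}") for i in range(64)]
scores = [(i * 37) % 64 for i in range(64)]  # arbitrary deterministic scores

compressed = sparsify_kv_cache(cache, scores)
print(len(compressed))  # 8 entries survive: an 8x memory reduction
```

In a real system the savings compound: a smaller cache means longer reasoning traces fit in the same memory budget, which is the trade-off the DMS experiments reportedly exploit.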
The tech industry is also seeing advancements in other areas. Helion Energy, a fusion power developer chaired by Sam Altman, announced that it achieved record plasma temperatures of 150 million degrees Celsius, roughly ten times hotter than the sun's core. This achievement is part of Helion's goal to bring fusion power to the grid in Washington state by 2028, as reported by Fortune. However, some remain skeptical of Helion's timeline and technological approach.
In other news, the startup Abacum raised over $100 million and tripled revenue without increasing headcount, according to Fortune. Rather than hire its way out of problems, the company opted to address the underlying issues.
In addition to these developments, the company Asimov (YC W26) is hiring for a remote position. The company is building training data for humanoid robots by collecting egocentric video of people doing everyday tasks. According to Hacker News, the role involves wearing a phone mounted on a headband while performing daily activities.