OpenAI launched a new coding model, GPT-5.3-Codex-Spark, designed for rapid response times, marking its first major move beyond its traditional reliance on Nvidia chips, according to VentureBeat. The model, running on Cerebras Systems hardware, aims for "near-instantaneous" code generation. Meanwhile, Nvidia researchers developed a technique to reduce the memory costs of large language model reasoning by up to eight times, as reported by VentureBeat.
The new OpenAI model represents a significant shift for the company, which has primarily depended on Nvidia for its infrastructure. The launch comes against a backdrop of a strained relationship with Nvidia, criticism over ChatGPT advertisements, a new Pentagon contract, and internal organizational changes, according to VentureBeat. The partnership with Cerebras Systems, a chipmaker specializing in low-latency AI workloads, is seen as a strategic bet on speed.
Nvidia's new technique, called dynamic memory sparsification (DMS), compresses the key-value (KV) cache, the temporary memory an LLM builds up as it generates each token. Experiments show that DMS enables LLMs to "think" longer and explore more solutions without a loss in accuracy, according to VentureBeat.
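The reporting doesn't detail how DMS decides what to evict (the actual method involves a learned, trained eviction mechanism). As a rough illustration of the general idea only, the sketch below prunes a toy KV cache down to its highest-scoring entries, with an arbitrary random importance score standing in for the learned eviction decision; the function name, shapes, and 8x keep ratio are illustrative assumptions, not DMS itself.

```python
import numpy as np

def compress_kv_cache(keys, values, scores, keep_ratio=0.125):
    """Keep only the top-scoring fraction of cached entries.

    At keep_ratio=0.125 this yields 8x fewer cached entries,
    mirroring the "up to eight times" memory reduction reported.
    `scores` stands in for whatever importance signal a real
    method would learn; here it is simply supplied by the caller.
    """
    n = keys.shape[0]
    k = max(1, int(n * keep_ratio))
    # Indices of the top-k entries, restored to their original order
    # so positional structure in the cache is preserved.
    idx = np.sort(np.argsort(scores)[-k:])
    return keys[idx], values[idx]

# Toy cache: 64 cached tokens, head dimension 8.
rng = np.random.default_rng(0)
keys = rng.normal(size=(64, 8))
values = rng.normal(size=(64, 8))
scores = rng.uniform(size=64)  # stand-in importance per cached token

ck, cv = compress_kv_cache(keys, values, scores)
print(ck.shape)  # 8 entries kept out of 64
```

Real systems operate per attention head and per layer, and the savings compound across long reasoning traces, which is why a smaller cache lets the model "think" longer within the same memory budget.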
In other tech news, Waymo, the self-driving developer, is seeking regulatory changes in Washington, DC, to allow its robotaxis to operate without human drivers, according to Wired. The company has been pushing city officials to pass new regulations for over a year.
Also, a wave of unexplained bot traffic is sweeping the web, as reported by Wired. One data analyst saw a sudden surge of traffic from China and Singapore to his website, which publishes articles about paranormal activities.
Finally, Wired reported on several other developments, including ICE's plans to expand across the US and Palantir CEO Alex Karp's response to employee concerns about working with ICE. Additionally, a Wired writer experimented with an AI assistant, OpenClaw, to see how it could manage daily tasks.