Anthropic's Claude AI model, using sixteen agents working in concert, successfully created a new C compiler from scratch in a two-week period, according to a recent blog post by Anthropic researcher Nicholas Carlini. The project, which cost approximately $20,000 in API fees, demonstrates the growing capabilities of AI agents in complex coding tasks.
The agents, running on Anthropic's Claude Opus 4.6 model, were given minimal supervision and tasked with building the compiler, producing a roughly 10,000-line codebase. The achievement highlights rapid progress in multi-agent coding systems, an area where both Anthropic and OpenAI have recently released tooling.
In other news, defense attorneys are seeking access to investigative files related to the killing of Renee Nicole Good by ICE agent Jonathan Ross. The attorneys, representing Roberto Carlos Muñoz-Guatemala, who was convicted of assaulting Ross, are requesting training records and investigative files related to the January 7th shooting. Muñoz-Guatemala's attorneys are seeking to understand the circumstances surrounding Good's death, as Ross was the same officer involved in both incidents.
Meanwhile, the "OpenClaw moment" marks what is described as the first time autonomous AI agents have moved beyond the lab and into the general workforce. Originally developed as "Clawdbot" by engineer Peter Steinberger, the framework was renamed "Moltbot" before settling on "OpenClaw" in late January 2026. Unlike previous chatbots, OpenClaw can execute shell commands, manage local files, and navigate messaging platforms with persistent, root-level permissions.
A separate report details a new attack chain, dubbed the identity and access management (IAM) pivot, that can compromise cloud environments within minutes. According to CrowdStrike Intelligence research published on January 29, the attack begins with a seemingly legitimate LinkedIn message to a developer. The targeted developer is then tricked into installing a malicious package that exfiltrates cloud credentials, granting the adversary access to the cloud environment.
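The report does not include code, but the mechanism is easy to illustrate from the defender's side: long-lived cloud credentials often sit in well-known files on a developer's machine, which is exactly what a malicious package's install hook can read and exfiltrate. The sketch below is a minimal local audit for such files; the paths are common defaults and are assumptions on our part, not details from the CrowdStrike research.

```python
# Minimal audit sketch: list local cloud-credential files that a
# malicious package installed on a developer workstation could read.
# The candidate paths are common defaults (an assumption, not taken
# from the CrowdStrike report).
from pathlib import Path

CANDIDATE_PATHS = [
    "~/.aws/credentials",                                        # AWS CLI/SDK
    "~/.config/gcloud/application_default_credentials.json",     # Google Cloud ADC
    "~/.azure/msal_token_cache.json",                            # Azure CLI token cache
]

def exposed_credentials() -> list[Path]:
    """Return the candidate credential files that exist on this machine."""
    return [p for p in (Path(c).expanduser() for c in CANDIDATE_PATHS)
            if p.exists()]

if __name__ == "__main__":
    for path in exposed_credentials():
        print(f"long-lived credential file present: {path}")
```

The point of the exercise is that any code running as the developer, including a package's post-install script, can read these files; short-lived, role-scoped credentials shrink that window.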
Finally, researchers from Stanford, Nvidia, and Together AI have developed a new technique, called Test-Time Training to Discover (TTT-Discover), that optimizes GPU kernels. The technique lets the model keep training during inference, updating its weights for the specific problem at hand. With this approach the researchers optimized a critical GPU kernel to run twice as fast as the previous state of the art, which had been written by human experts. According to the researchers, this challenges current enterprise AI strategies, which often rely on "frozen" models.
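TTT-Discover itself targets GPU kernels, but the core idea, taking extra gradient steps on the specific problem at inference time instead of serving frozen weights, can be shown in a toy form. The sketch below is illustrative only and is not the researchers' method: a "pretrained" linear model adapts its weights to one test instance with a few gradient steps before answering.

```python
# Toy illustration of test-time training: instead of freezing weights
# after pretraining, run a few gradient steps on the specific problem
# encountered at inference time. Not the TTT-Discover method itself.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pretrained weights of a small linear model.
w = rng.normal(size=3)

def loss(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

def grad(w, x, y):
    # Gradient of the mean-squared error with respect to w.
    return 2.0 * x.T @ (x @ w - y) / len(y)

# The single problem instance seen at inference time.
x_test = rng.normal(size=(32, 3))
y_test = x_test @ np.array([1.0, -2.0, 0.5])

before = loss(w, x_test, y_test)

# Test-time training loop: adapt w to this one instance.
for _ in range(200):
    w -= 0.05 * grad(w, x_test, y_test)

after = loss(w, x_test, y_test)
print(f"loss before adaptation: {before:.4f}, after: {after:.6f}")
```

A frozen model would answer with the pretrained `w`; the test-time-trained model spends extra compute per query to specialize its weights, which is the trade-off the researchers argue enterprises should reconsider.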