A New York federal judge took the rare step of terminating a case this week after an attorney repeatedly misused AI in drafting filings. Meanwhile, the AI coding wars heated up as OpenAI and Anthropic launched competing models and prepared Super Bowl advertisements, malicious packages on the npm and PyPI repositories were caught stealing wallet credentials, and researchers developed a new technique to optimize GPU kernels.
District Judge Katherine Polk Failla ruled that extraordinary sanctions were warranted against attorney Steven Feldman after he repeatedly submitted filings containing fake citations and "conspicuously florid prose," according to Ars Technica. The judge's decision highlights growing concerns about the misuse of AI in legal contexts.
In the tech world, the competition between AI giants OpenAI and Anthropic intensified. OpenAI released GPT-5.3-Codex, its most capable coding agent to date, coinciding with Anthropic's unveiling of its upgraded Claude Opus 4.6. This synchronized launch marked the opening of what industry observers are calling the "AI coding wars," a battle to capture the enterprise software development market, as reported by VentureBeat. The companies are also set to air competing Super Bowl advertisements.
Meanwhile, security researchers discovered malicious packages on the npm and PyPI repositories that stole wallet credentials from dYdX developers and backend systems and, in some cases, backdoored devices, Ars Technica reported. The compromised packages put any application depending on them at risk, potentially leading to complete wallet compromise and irreversible cryptocurrency theft.
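The report does not detail the delivery mechanism, but npm supply-chain attacks of this kind commonly abuse install-time lifecycle scripts that run automatically on `npm install`. A minimal triage sketch, using a hypothetical manifest for illustration, might flag those hooks before installation:

```python
import json

# Lifecycle hooks that npm runs automatically at install time; malicious
# packages often abuse these to execute a payload the moment they are installed.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def flag_install_scripts(package_json_text: str) -> list[str]:
    """Return any install-time lifecycle scripts declared in a package.json."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return sorted(hook for hook in scripts if hook in RISKY_HOOKS)

# Hypothetical manifest resembling a credential-stealing package, where a
# postinstall hook launches an attacker-controlled script.
suspicious_manifest = json.dumps({
    "name": "some-helper-lib",
    "version": "1.0.0",
    "scripts": {"postinstall": "node ./collect.js", "test": "jest"},
})

print(flag_install_scripts(suspicious_manifest))  # ['postinstall']
```

A check like this is only a first filter; packages can also run code at import time, so reviewing unfamiliar dependencies before installing them remains the safer habit.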
Separately, researchers from Stanford, Nvidia, and Together AI developed a technique called Test-Time Training to Discover (TTT-Discover), which lets a model continue training during inference, updating its weights for the specific problem at hand. Using it, they optimized a critical GPU kernel to run twice as fast as the previous state of the art written by human experts, according to VentureBeat.
In a separate incident, a developer received a LinkedIn message from a recruiter that led to a coding assessment requiring installation of a package. The package then exfiltrated cloud credentials, including GitHub personal access tokens and AWS API keys, granting the adversary access to the cloud environment within minutes, VentureBeat reported. This attack chain is becoming known as the identity and access management (IAM) pivot, and it represents a fundamental gap in how enterprises monitor identity-based attacks.
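Attacks like this work because installed code can read whatever the shell environment exposes. A hypothetical pre-flight check, run before executing untrusted code such as a take-home assessment, could at least enumerate which credential-shaped environment variables would be exposed; the variable names and patterns below are illustrative assumptions, not an exhaustive list:

```python
import re

# Patterns for environment variable names that commonly hold cloud credentials.
# Illustrative only; real tooling would cover many more providers and formats.
SENSITIVE_PATTERNS = [
    re.compile(r"^AWS_(SECRET_)?ACCESS_KEY"),
    re.compile(r"^AWS_SESSION_TOKEN$"),
    re.compile(r"^GITHUB_TOKEN$"),
    re.compile(r"^GH_TOKEN$"),
]

def exposed_credentials(env: dict[str, str]) -> list[str]:
    """Return names of environment variables that look like credentials."""
    return sorted(
        name for name in env
        if any(p.match(name) for p in SENSITIVE_PATTERNS)
    )

# Simulated environment resembling the one described in the attack.
fake_env = {
    "AWS_ACCESS_KEY_ID": "AKIA...",
    "AWS_SECRET_ACCESS_KEY": "...",
    "GITHUB_TOKEN": "ghp_...",
    "PATH": "/usr/bin",
}
print(exposed_credentials(fake_env))
# → ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'GITHUB_TOKEN']
```

The more robust mitigation is running untrusted assessments in a disposable container or VM with no credentials mounted at all.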