AI Advances Spark Both Innovation and Concern
Artificial intelligence is rapidly evolving, delivering breakthroughs in coding and data processing while raising concerns about cybersecurity and research integrity. Recent developments include OpenAI's advances in AI-powered coding and Fundamental's new approach to tabular data, alongside corrections issued for previously published cancer research.
OpenAI's newest model, GPT-5.3-Codex, demonstrated a significant leap in coding capability, surpassing rival systems on coding benchmarks, according to Fortune. The model improved on previous generations of both OpenAI's and Anthropic's models. However, the company is proceeding with caution, implementing tight controls and delaying full developer access because of cybersecurity risks associated with the model's capabilities. The same features that make GPT-5.3-Codex effective at writing, testing, and reasoning about code also raise serious concerns about potential misuse.
Meanwhile, Fundamental, a San Francisco-based AI firm, launched NEXUS, a native foundation model for tabular data, VentureBeat reported on February 5, 2026. NEXUS aims to bypass the manual ETL (extract, transform, load) process traditionally used in data science. According to VentureBeat, this new approach addresses a "curious blind spot" in the deep learning revolution, where structured, relational data has been treated as just another file format. The company, co-founded by DeepMind alumni, seeks to streamline the forecasting of business outcomes, which has typically relied on labor-intensive data science processes.
In other news, Nature issued corrections for two previously published articles. One correction, published on November 6, 2024, addressed errors in figures within an article on colibactin-driven colon cancer, Nature News reported. Specifically, several labels in Figs. 2 and 3 were incorrect, requiring adjustments to accurately reflect the experimental data. Another correction, published on May 18, 2022, concerned assembly inaccuracies in the Extended Data of a manuscript on PHGDH heterogeneity and cancer metastasis, according to Nature News. The raw data published in the Supplementary Information were always correct, but human errors occurred during the assembly of the Extended Data Figure panels. The labeling of uncut western blots provided in the Supplementary Information was also refined.
Separately, a security concern was raised about LinkedIn's practices. According to Hacker News, LinkedIn silently probes for 2,953 Chrome extensions on every page load. A GitHub repository documents the extensions LinkedIn checks for, listing their names and Chrome Web Store links, and provides tools to identify the probing.
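The reports do not spell out the exact mechanism, but a common way a web page can detect an installed extension is to request one of the extension's web-accessible resources and check whether the request succeeds. The sketch below illustrates that general technique only; the extension ID and resource path are hypothetical placeholders, not values taken from the repository.

```typescript
// Minimal sketch of extension probing via web-accessible resources.
// The extension ID and resource path used here are hypothetical placeholders.
async function probeExtension(extensionId: string, resourcePath: string): Promise<boolean> {
  try {
    // If the extension is installed and exposes this resource to web pages,
    // the fetch resolves; otherwise the browser rejects it with a network error.
    const response = await fetch(`chrome-extension://${extensionId}/${resourcePath}`);
    return response.ok;
  } catch {
    return false; // not installed, or the resource is not web-accessible
  }
}

// Usage: probe a hypothetical extension for an icon it exposes.
probeExtension("abcdefghijklmnopabcdefghijklmnop", "icons/icon128.png")
  .then((installed) => console.log(installed ? "extension detected" : "extension not found"));
```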