Meditation Study Reveals Heightened Brain Activity; AI Developments Show Progress and Risks
ROME, ITALY - In a study that challenges conventional wisdom, researchers found that meditation is not a state of brain rest but rather one of heightened cerebral activity. Simultaneously, the AI landscape saw significant developments: a Chinese AI startup achieved a record-low hallucination rate, and MIT researchers unveiled a new fine-tuning method for large language models. Concerns about the security of AI assistants persist, however, as highlighted by a viral new tool from an independent software engineer.
Researchers from the University of Montreal and Italy's National Research Council analyzed the brain activity of 12 monks of the Thai Forest Tradition using magnetoencephalography (MEG), according to a report from Wired. The study, conducted at a Buddhist monastery outside Rome, revealed that meditation profoundly alters brain dynamics.
Meanwhile, in the AI world, Zhipu AI (z.ai) launched its new large language model, GLM-5, which achieved a record-low hallucination rate on the independent Artificial Analysis Intelligence Index v4.0, according to VentureBeat. The model scored -1 on the AA-Omniscience Index, a 35-point improvement over its predecessor, placing GLM-5 ahead of competitors like Google, OpenAI, and Anthropic in knowledge reliability.
MIT researchers also made strides in AI development. They developed a new technique called self-distillation fine-tuning (SDFT) that allows LLMs to learn new skills without forgetting their existing knowledge, VentureBeat reported. This method leverages the in-context learning abilities of modern LLMs and consistently outperforms traditional supervised fine-tuning.
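The core idea reported for self-distillation fine-tuning, using a model's in-context learning to generate its own training labels so new skills are absorbed without overwriting old knowledge, can be illustrated with a toy sketch. Everything below (the dict-based stand-in "model" and the names `model_answer` and `self_distill`) is an illustrative assumption, not MIT's actual implementation:

```python
# Toy illustration of the self-distillation pattern: the same model, given a
# demonstration in its context, labels prompts; those self-generated labels
# are then "fine-tuned" into the model's weights (here, a plain dict).
# This is a conceptual sketch only, not MIT's SDFT code.

def model_answer(weights, prompt, context=None):
    """Stand-in 'model': answers from its weights, falling back to any
    demonstration supplied in context (the in-context learning step)."""
    if prompt in weights:
        return weights[prompt]
    if context and prompt in context:
        return context[prompt]
    return None

def self_distill(weights, demonstrations, prompts):
    """Generate labels with demonstrations in context, then update the
    weights on the self-generated labels without discarding old entries."""
    labels = {p: model_answer(weights, p, context=demonstrations) for p in prompts}
    new_weights = dict(weights)  # existing knowledge is preserved...
    new_weights.update({p: a for p, a in labels.items() if a is not None})
    return new_weights           # ...while the new skill is absorbed

base = {"capital of France?": "Paris"}   # prior knowledge
demos = {"2+2?": "4"}                    # new skill shown only in-context
tuned = self_distill(base, demos, ["2+2?"])

print(tuned["capital of France?"])  # prior knowledge retained
print(tuned["2+2?"])                # new skill learned
```

The point of the pattern is that training targets come from the model's own in-context behavior rather than from an external labeled dataset, which is what the article credits with avoiding catastrophic forgetting.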
However, the rapid advancement of AI also raises security concerns. An independent software engineer, Peter Steinberger, created OpenClaw, a tool that lets users build their own bespoke AI assistants, according to MIT Technology Review. The project went viral in late January 2026. The article notes that "AI agents are a risky business" and that even within a chat window, LLMs can make mistakes.
These developments come amid other global events, including U.S. allegations, reported by NPR Politics, that China is conducting secret nuclear tests and may be developing new nuclear warheads for its hypersonic weapons.