AI agents and their impact on cybersecurity and web browsing are making headlines. Google's new Auto Browse agent, part of Chrome, is rolling out to AI Pro and AI Ultra subscribers, while experts highlight security concerns surrounding AI assistants. At the same time, z.ai's GLM-5 model has posted a record-low hallucination rate, and NanoClaw has emerged as a more secure version of the open-source AI assistant OpenClaw.
Google's Auto Browse agent, which lets the AI surf the web on a user's behalf, is currently in preview for AI Pro and AI Ultra subscribers, according to Ars Technica. The release comes as the AI landscape shifts from chatbots toward autonomous agents. The article notes, however, that AI agents remain "rough around the edges," suggesting it may be premature to rely on them for critical tasks.
Meanwhile, the rapid adoption of OpenClaw, an open-source AI assistant developed by Peter Steinberger that lets users autonomously complete tasks across their devices using natural language prompts, has raised security concerns. Its "permissionless" architecture prompted the creation of NanoClaw, a more secure version that debuted under an open-source MIT license and addresses those vulnerabilities, according to VentureBeat.
In the realm of AI models, z.ai's GLM-5 has achieved a record-low hallucination rate, according to VentureBeat. The model, released under an open-source MIT license, scored -1 on the AA-Omniscience Index, a 35-point improvement over its predecessor. That result places GLM-5 ahead of competitors from Google, OpenAI, and Anthropic in knowledge reliability.
These advancements in AI are also intersecting with cybersecurity. AI is already making online crime easier, according to MIT Technology Review: hackers are using it to cut the time and effort needed to orchestrate attacks, lowering the barrier for less experienced attackers. Some experts warn of the potential for fully automated attacks, while others emphasize the more immediate risks of AI-enhanced scams.