xAI launched Grok Business and Grok Enterprise, offering its AI assistant as a platform for organizational use, according to a VentureBeat report published January 2, 2026. The new tiers provide scalable access to xAI's most advanced models, including Grok 3, Grok 4, and Grok 4 Heavy, which the company claims are both highly performant and cost-effective. The enterprise offerings include administrative controls, privacy guarantees, and a premium isolation layer called Enterprise Vault.
The launch of Grok's enterprise suite coincided with controversy surrounding its public-facing deployment, which has been criticized for enabling the creation and dissemination of non-consensual, AI-generated image manipulations. These images reportedly depict women, influencers, and minors, sparking regulatory scrutiny and public backlash. The controversy raises questions about xAI's ability to implement internal safeguards that meet the trust requirements of enterprise clients.
Grok Business, priced at $30 per seat per month, is tailored for smaller teams. Grok Enterprise adds features such as administrative controls, Enterprise Vault isolation, and structured deployment options. These features are designed to address enterprise concerns about data security and privacy when deploying AI tools. Enterprise Vault, for example, provides an isolated environment for sensitive data processing, aiming to prevent data leakage and unauthorized access.
The core technology behind Grok is the large language model (LLM), a type of artificial intelligence that uses deep learning to understand, generate, and manipulate human language. LLMs are trained on vast amounts of text data, enabling them to perform tasks such as text completion, translation, and question answering. Grok 3, Grok 4, and Grok 4 Heavy are successive iterations of this technology, with each version offering improvements in performance, efficiency, and capabilities.
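As a rough illustration of the kind of text-completion task described above, the sketch below calls an LLM through an OpenAI-compatible chat endpoint. The base URL, model name, and API key are illustrative assumptions, not confirmed details of xAI's enterprise offering.

```python
# Minimal sketch: asking an LLM for a text completion via an
# OpenAI-compatible API. Endpoint and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",          # placeholder credential
)

response = client.chat.completions.create(
    model="grok-4",  # hypothetical model identifier
    messages=[
        {"role": "user",
         "content": "Summarize the main data-security concerns when deploying AI assistants in an enterprise."}
    ],
)

# The model's generated completion
print(response.choices[0].message.content)
```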
The controversy surrounding non-consensual deepfakes highlights a growing concern about the ethical implications of AI-generated content. Deepfakes, which are synthetic media in which a person in an existing image or video is replaced with someone else's likeness, can be used to spread misinformation, damage reputations, and even commit fraud. The ease with which these manipulations can be created and disseminated raises significant challenges for regulators and platform providers.
The incident has prompted calls for stricter regulations and more robust safeguards to prevent the misuse of AI technology. Experts suggest that companies developing and deploying AI models have a responsibility to implement measures to detect and prevent the creation of harmful content. These measures may include content filtering, watermarking, and user education.
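To make the content-filtering measure concrete, the following is a minimal, hypothetical sketch of a prompt gate that screens requests before they reach a generative model. Production safeguards rely on trained classifiers and policy engines rather than a keyword blocklist; the terms and logic here are illustrative assumptions only, not xAI's implementation.

```python
# Hypothetical content-filtering gate: block prompts that match a
# simple denylist before forwarding them to a generative model.
BLOCKED_TERMS = {"undress", "remove clothing", "non-consensual"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return True if the prompt contains no blocked terms."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

if __name__ == "__main__":
    examples = [
        "Draft a product launch email for our new analytics tool",
        "Generate a non-consensual image of a public figure",
    ]
    for prompt in examples:
        verdict = "allowed" if is_prompt_allowed(prompt) else "blocked"
        print(f"{verdict}: {prompt}")
```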
As of the report date, regulators are investigating xAI's handling of the deepfake issue, and the company is facing pressure to implement more effective safeguards. The outcome of these investigations and the steps taken by xAI will likely have significant implications for the future of AI regulation and the responsible development of AI technology.