Carl Franzen
January 30, 2026
Credit: VentureBeat made with Flux-1 on fal.ai

San Francisco-based AI lab Arcee made waves last year as one of the only U.S. companies to train large language models (LLMs) from scratch and release them to the public under open or partially open source licenses, enabling developers, solo entrepreneurs, and even medium-to-large enterprises to use the powerful AI models for free and customize them at will.

Now Arcee is back this week with the release of its largest, most performant open language model to date: Trinity Large, a 400-billion-parameter mixture-of-experts (MoE) model, available now in preview.

Alongside the flagship release, Arcee is shipping a "raw" checkpoint, Trinity-Large-TrueBase, that lets researchers study what a 400B sparse MoE learns from raw data alone, before instruction tuning and reinforcement learning have been applied.

By providing a clean slate at the 10-trillion-token mark, Arcee enables AI builders in highly regulated industries to perform authentic audits and conduct their own specialized alignments without inheriting the "black box" biases or formatting quirks of a general-purpose chat model. This transparency allows for a deeper understanding of the distinction between a model's intrinsic reasoning capabilities and the helpful behaviors dialed in during the final stages of post-training.

The launch arrives as powerful Chinese open-source LLM alternatives from the likes of Alibaba (Qwen), z.AI (Zhipu), DeepSeek, Mo