Mistral AI Ships Devstral 2 Coding Models And Mistral Vibe CLI For Agentic, Terminal-Native Development

Mistral AI has introduced Devstral 2, a next-generation coding model family for software engineering agents, together with Mistral Vibe CLI, an open-source command-line coding assistant that runs inside the terminal or in IDEs that support the Agent Communication Protocol. https://mistral.ai/news/devstral-2-vibe-cli Devstral 2 and Devstral Small 2: model sizes, context and benchmarks. Devstral 2 …
Continue reading Mistral AI Ships Devstral 2 Coding Models And Mistral Vibe CLI For Agentic, Terminal-Native Development

Zhipu AI Releases GLM-4.6V: A 128K Context Vision Language Model with Native Tool Calling

Zhipu AI has open-sourced the GLM-4.6V series, a pair of vision-language models that treat images, video, and tools as first-class inputs for agents rather than as afterthoughts bolted on top of text. Model lineup and context length: the series comprises two models. GLM-4.6V is a 106B-parameter foundation model for cloud and …
Continue reading Zhipu AI Releases GLM-4.6V: A 128K Context Vision Language Model with Native Tool Calling

Z.ai debuts open source GLM-4.6V, a native tool-calling vision model for multimodal reasoning

Chinese AI startup Zhipu AI, also known as Z.ai, has released its GLM-4.6V series, a new generation of open-source vision-language models (VLMs) optimized for multimodal reasoning, frontend automation, and high-efficiency deployment. The release includes two models in "large" and "small" sizes:

- GLM-4.6V (106B), a larger 106-billion-parameter model aimed at cloud-scale inference
- GLM-4.6V-Flash (9B), a smaller model …
Continue reading Z.ai debuts open source GLM-4.6V, a native tool-calling vision model for multimodal reasoning
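The "native tool calling" that both GLM-4.6V excerpts highlight means the model accepts tool definitions alongside multimodal inputs in a single request. As a minimal sketch only, assuming an OpenAI-compatible chat endpoint: the model identifier, the `get_dom_snapshot` tool, and the message schema below are illustrative assumptions, not taken from Zhipu's documentation.

```python
# Hypothetical sketch of a tool-calling request for a GLM-4.6V-style model,
# assuming an OpenAI-compatible chat API. Names and schema are assumptions.

def build_tool_call_request(user_prompt: str, image_url: str) -> dict:
    """Assemble a multimodal chat payload that also declares a callable tool."""
    return {
        "model": "glm-4.6v",  # assumed model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": user_prompt},
                    # the image rides along as a first-class input, not an afterthought
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    # hypothetical frontend-automation tool for illustration
                    "name": "get_dom_snapshot",
                    "description": "Return the DOM of the page in the screenshot.",
                    "parameters": {
                        "type": "object",
                        "properties": {"url": {"type": "string"}},
                        "required": ["url"],
                    },
                },
            }
        ],
    }

payload = build_tool_call_request(
    "Which button submits this form?", "https://example.com/screenshot.png"
)
```

With a schema like this, the model can choose between answering from the image and emitting a structured call to the declared tool; the exact request format for GLM-4.6V should be checked against Zhipu's own API documentation.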