
Tsinghua University's THUDM team has released a new open-source AI model, GLM-4-9B, now available on the Hugging Face hub. Developed with Zhipu AI, the 9-billion-parameter model was trained on 10 trillion tokens spanning 26 languages and ships in base and chat versions, with a long-context chat variant that supports up to 1 million tokens. The standard chat model handles 128k-token contexts and supports advanced features such as function calling, web browsing, code execution, and long-text reasoning, while an accompanying vision language model handles 8k-token contexts. GLM-4-9B is designed to compete with models like GPT-4, Mistral, and Llama 3 8B, making it a strong candidate for on-device applications.
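Since the weights are on the Hugging Face hub, the chat variant can be tried with the standard transformers workflow. The sketch below is a minimal example, assuming the repo id THUDM/glm-4-9b-chat, a GPU with enough memory for bfloat16 weights, and that the model's bundled custom code is accepted via trust_remote_code; it is illustrative rather than an official quickstart.

```python
# Minimal sketch: load the GLM-4-9B chat model from the Hugging Face hub
# and run a single chat turn. Repo id and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat"  # assumed repo id on the Hugging Face hub

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half-precision weights to fit on one GPU
    device_map="auto",            # place layers automatically across devices
    trust_remote_code=True,       # the repo ships custom modeling code
)

# Build a chat prompt with the model's own chat template.
messages = [{"role": "user", "content": "Summarize GLM-4-9B's key features."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```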
MASSIVE - GLM-4 9B, base, chat (& 1M variant), vision language model New Model 🔥 📌They released 3 models: 1 VLM with 8k tokens, 1 LLM with 128k tokens, and 1 LLM with 1M tokens. 📌Needle-in-a-haystack test shows some pretty insane results. 📌License is not the worst,… https://t.co/OzL0WawPCk
🚀 Check out GLM-4! This open-source, multilingual, multimodal chat model supports 26 languages and offers advanced features like code execution and long-text reasoning. Perfect for AI enthusiasts and developers! 🌐 #AI #OpenSource #MachineLearning https://t.co/YGex8f9Ppv
NEW DROP: GLM 4 from THUDM Tsinghua University! 🔥 > GLM 4 9B base, chat (& 1M variant), vision language model > Beats Mistral and Llama 3 8B (looks like a pretty strong model for on-device) > Trained on 10T tokens spanning 26 languages > Supports function calling, web browsing,… https://t.co/CiLZxJFFkE
