
The newly released Mixtral 8x22B Instruct model from Mistral AI is setting new benchmarks in the AI community. Released under the Apache 2.0 license, it is a sparse mixture-of-experts model that activates 39 billion of its 141 billion total parameters per token, making it efficient and cost-effective to run. It offers a 64K-token context window and supports function calling. Mixtral 8x22B Instruct performs strongly across a range of benchmarks: it is fluent in English, French, Italian, German, and Spanish, and posts high scores on math evaluations such as GSM8K and Math maj@4. On several benchmarks it outperforms other open models such as LLaMA 2 70B. The model is now available on multiple platforms, including Hugging Face and Clarifai, and is priced competitively at $0.65 per million tokens on DeepInfra and $0.90 per million tokens on Anyscale Compute.
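As a back-of-the-envelope illustration of the figures above, the sketch below compares the two quoted per-million-token rates and the model's active-parameter fraction. The provider rates come from the text; the 50-million-token workload is a made-up example.

```python
# Per-million-token prices quoted in the text above.
PRICES_PER_MILLION = {
    "DeepInfra": 0.65,
    "Anyscale Compute": 0.90,
}

def cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost of processing `tokens` tokens at a flat per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

# Hypothetical workload: 50 million tokens.
for provider, rate in PRICES_PER_MILLION.items():
    print(f"{provider}: ${cost_usd(50_000_000, rate):.2f}")
# DeepInfra: $32.50
# Anyscale Compute: $45.00

# Sparse-MoE efficiency: only 39B of 141B parameters are active per token.
active_fraction = 39 / 141
print(f"Active parameter fraction: {active_fraction:.1%}")  # about 27.7%
```

The point of the last line is why the model is cheap to serve relative to its size: inference cost scales with the active parameters, not the full 141B.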

Cool. Now we have full-precision Mixtral-8x22B available in @togethercompute API https://t.co/VgtMmoFN1G
Mixtral 8x22B is now available via the @MistralAILabs La Plateforme API. If you previously installed the llm-mistral plugin, run "llm mistral refresh" to refresh the list of available models; otherwise a fresh install will provide it. Released 0.3.1 anyway: https://t.co/aDeGBvnTju https://t.co/cMwlJoLAY8
Exciting news! Mixtral-8x22B and other Mistral AI models are now live on Promptly! 🎉 Test the model in our Playground and get ready to build innovative apps! 💡 https://t.co/ieupeLJfDv https://t.co/R0FfTUrWP6