Mistral Small 3.2 (24B) is a new open-weights (Apache 2.0) model from @MistralAI. TLDR: an update to 3.1 with better instruction following, fewer infinite generation issues with challenging prompts, and an improved tone. Available in LM Studio 👾 https://t.co/FMScGXzZta
Managed to mostly fix Mistral 3.2 tool calling for GGUF / transformers! 1. 3.2 tool calling is different from 3.1. 2. The timedelta(days=1) ("yesterday") computation in the system prompt was replaced with an if-else that supports dates from 2024 to 2028 - so the system prompt is now word-for-word the same! 3. Made an experimental FP8 quant as well! https://t.co/707mzU7d6I
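The if-else workaround mentioned above replaces a `timedelta(days=1)` "yesterday" computation, reportedly because the template engine used for GGUF chat templates lacks timedelta support. A minimal sketch of what such branch-only date arithmetic could look like, written as plain Python rather than the actual Jinja template (the function name and exact logic here are assumptions, not the released fix):

```python
from datetime import date

def yesterday_no_timedelta(today: date) -> date:
    # Hypothetical reimplementation of "yesterday" using only if-else
    # arithmetic, no timedelta. Mirrors the idea of the template fix:
    # valid for a bounded year range (2024-2028 per the tweet).
    if today.day > 1:
        # Simple case: same month, previous day.
        return date(today.year, today.month, today.day - 1)
    if today.month == 1:
        # January 1st rolls back to December 31st of the prior year.
        return date(today.year - 1, 12, 31)
    # First day of any other month: last day of the previous month.
    # The %4 leap check is sufficient for 2024-2028 (2024, 2028 are leap).
    days_in_month = [31, 29 if today.year % 4 == 0 else 28, 31, 30, 31,
                     30, 31, 31, 30, 31, 30, 31]
    return date(today.year, today.month - 1, days_in_month[today.month - 2])
```

The month-length table only needs the simplified leap-year rule because the supported range (2024 to 2028) contains no century boundary.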
Creative Sound Blaster AWE32 fixes arrive for the Linux 6.16-rc3 kernel. https://t.co/5GeBkL1daa
Mistral AI has released an update to its open-weights language model, upgrading Mistral Small 3.1 to Mistral Small 3.2. The 24-billion-parameter model features improved instruction following, fewer infinite-generation and repetitive-output issues, and a more robust function-calling template. The update also brings a more refined tone and better handling of challenging prompts. Mistral Small 3.2 is available under the Apache 2.0 license and has been integrated into platforms such as LM Studio and Ollama. Additionally, community developers have addressed tool-calling compatibility for the GGUF format and the transformers library, including support for date ranges from 2024 to 2028 and an experimental FP8 quantization. This release aims to provide a smarter and more reliable model experience while maintaining open access for the AI community.
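The function-calling improvements above revolve around the JSON tool schema that gets rendered into the chat template. A minimal sketch of assembling such a schema, assuming the OpenAI-style function format that transformers' `apply_chat_template(tools=...)` accepts; the `get_weather` tool and the `make_tool` helper are hypothetical examples, not part of the Mistral release:

```python
import json

def make_tool(name: str, description: str,
              properties: dict, required: list) -> dict:
    # Hypothetical helper that builds one tool entry in the OpenAI-style
    # function schema commonly passed to chat-template tool calling.
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

# Example tool definition (entirely illustrative).
weather_tool = make_tool(
    "get_weather",
    "Look up the current weather for a city.",
    {"city": {"type": "string", "description": "City name"}},
    ["city"],
)
print(json.dumps(weather_tool, indent=2))
```

A list of such dictionaries would then be handed to the tokenizer's chat template, which is the layer the compatibility fixes target.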