LocalAI has announced the release of several new AI models, expanding its catalogue for developers and researchers. Key additions include 't.e-8.1-iq-imatrix-request', 'Violet_Twilight-v0.2-iq-imatrix', and 'gemma-2-9b-it-abliterated', each targeting a specific use case. The Llama model family also continues to evolve with the launch of Llama 3.2, which adds vision and voice capabilities and posts competitive benchmark results against larger models. In addition, Rhymes has unveiled 'Aria', presented as the first open-source multimodal mixture-of-experts (MoE) model, with 24.9 billion total parameters and a 64,000-token context window; it is designed to handle text, code, image, and video tasks. Other notable releases include several fine-tuned variants of Llama 3.2 aimed at improving performance in specific applications.
"Introducing llama-3.2-3b-agent007! 🐒 This new model is a quantized version of EpistemeAI/Llama-3.2-3B-Agent007, trained 2x faster with Unsloth & Huggingface's TRL library. Try it out in LocalAI with `local-ai run llama-3.2-3b-agent007` #LocalAI #AIModels"
🎉 New model alert! 🎉 Check out "nihappy-l3.1-8b-v0.09", a quantized role-playing model focused on dynamic storytelling. To use it, run `local-ai run nihappy-l3.1-8b-v0.09` #LocalAI #AIModel #NewRelease
🚀🎉 New model alert! Introducing "gemma-2-ataraxy-v3i-9b"! An experimental higher-density merge of pre-trained language models, trained on Gutenberg. Get a taste of it by running `local-ai run gemma-2-ataraxy-v3i-9b`! #LocalAI #Gemma #NewModel #AIModel
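Once a model has been pulled with `local-ai run`, LocalAI serves it through an OpenAI-compatible API. As a minimal sketch, assuming the default endpoint on `localhost:8080`, one of the models above can be queried like this:

```bash
# Query a locally running model through LocalAI's OpenAI-compatible
# chat completions endpoint (assumes the default http://localhost:8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.2-3b-agent007",
    "messages": [{"role": "user", "content": "Briefly explain what a mixture-of-experts model is."}],
    "temperature": 0.7
  }'
```

The same request shape works for any of the models listed here; only the `model` field changes.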