Several notable AI model releases have landed recently. Meta's Llama 3.2 adds new vision-language models alongside compact text models with 1 billion and 3 billion parameters; the models are pre-trained and aligned, offer drop-in vision capabilities, and are available through Meta AI and Hugging Face. A free, open-source local real-time voice mode built on Llama 3 has also been introduced. For cybersecurity, the model 'llama-3.1-whiterabbitneo-2-8b' has been released as a public preview to assess its societal impact, and can be installed and run with LocalAI. Meanwhile, Hugging Face has launched SmolLM2, a compact model that reportedly outperforms larger LLaMa models on multiple benchmarks while using only 1/8th of their parameters; it is available on Bakery by Bagel Network. Finally, new Smol TTS models featuring zero-shot voice cloning, built on a LLaMa architecture, run on-device with llama.cpp.
Smol TTS models are here! OuteTTS-0.1-350M - Zero shot voice cloning, built on LLaMa architecture, CC-BY license! 🔥 > Pure language modeling approach to TTS > Zero-shot voice cloning > LLaMa architecture w/ Audio tokens (WavTokenizer) > BONUS: Works on-device w/ llama.cpp ⚡… https://t.co/lXKAGwvvvH
The small model race is on fire! @huggingface has launched SmolLM2—a compact model outperforming larger LLaMa models on multiple benchmarks with just 1/8th of their parameters! It’s now live on the Bakery by @bagel_network for everyone to explore. Give it a try. (Link in… https://t.co/uIixiGFEgp
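For readers who want to try SmolLM2 directly, a minimal sketch of loading it with Hugging Face `transformers` follows. The checkpoint id `HuggingFaceTB/SmolLM2-135M` is an assumption here (SmolLM2 ships in several sizes); swap in whichever variant you want to explore.

```python
# Minimal sketch: running a SmolLM2 checkpoint locally with transformers.
# The checkpoint id below is an assumption; substitute another SmolLM2 size
# (or the Bakery-hosted variant) as appropriate.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-135M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Tokenize a prompt and greedily generate a short continuation.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same three-line load/generate pattern works for any of the causal LM checkpoints mentioned above that are published on the Hugging Face Hub.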
🚀 New Model Alert! 🚀 Introducing "llama-3.1-whiterabbitneo-2-8b"! A powerful #AI model for #cybersecurity, released as a public preview to assess its societal impact. Install & try it with `local-ai run llama-3.1-whiterabbitneo-2-8b`. #LocalAI #AI #MachineLearning