📢 New model alert! 🚨 We've added LFM2-1.2B, a lightweight text model, to LocalAI! 🎉 Just run: `local-ai run lfm2-1.2b` 🚀 Get exploring! ✨
🔥 New model alert! 🔥 LFM2-VL-1.6B is now available in LocalAI! It's a vision-language model. 🤩 Install it with: `local-ai run lfm2-vl-1.6b` #LocalAI #AI #newmodel #LFM2
🔥 New model alert! 🔥 Check out LFM2-VL-450M, a multimodal model from LiquidAI! It handles images & text. 🖼️ ➡️ 💬 Install with: `local-ai run lfm2-vl-450m` 🚀 #LocalAI #AI #Multimodal
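Once one of these models is installed, LocalAI serves it through its OpenAI-compatible API, so existing client code can simply point at the local instance. The sketch below assumes a default instance listening on port 8080 and the `lfm2-1.2b` model name used in the install command above; adjust both for your setup.

```bash
# Minimal sketch: query an installed model through LocalAI's OpenAI-compatible
# chat completions endpoint. Assumes a default local instance on port 8080 and
# the model name used at install time; adjust both for your setup.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lfm2-1.2b",
    "messages": [
      {"role": "user", "content": "Summarise what running models at the edge means in one sentence."}
    ]
  }'
```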
Open-source developers have released a series of lightweight artificial-intelligence models aimed at bringing multimodal capabilities to consumer devices with limited processing power. LiquidAI published quantized checkpoints for its new vision-language models, LFM2-VL-450M and the larger LFM2-VL-1.6B, on the Hugging Face platform in the GGUF format, enabling deployment through the popular llama.cpp runtime. The company says the models are efficient enough to run on smartwatches and other small devices.

The community project LocalAI simultaneously added support for LFM2-VL-450M, LFM2-VL-1.6B and the text-only LFM2-1.2B, letting users install the models with a one-line command on local hardware, without relying on cloud services. The releases expand LocalAI's catalogue of edge-ready generative models and highlight growing interest in running AI workloads outside large data centres.

Separately, contributors to the mlx-audio toolkit published version 0.2.4, introducing two speech models, IndexTTS and Voxtral, alongside multi-model visualisation, codec fixes and an updated troubleshooting guide for Swift developers. The parallel upgrades underscore the continuing momentum behind open-source alternatives to proprietary AI platforms.
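For the GGUF checkpoints mentioned above, the llama.cpp deployment path comes down to a single command-line invocation. The sketch below is illustrative only: the quantized model and vision-projector filenames are hypothetical placeholders for whichever files are downloaded from Hugging Face, and the multimodal CLI binary name can differ between llama.cpp releases.

```bash
# Rough sketch: running a GGUF-quantized LFM2-VL checkpoint with llama.cpp's
# multimodal CLI. The filenames are hypothetical placeholders for the
# quantization and vision-projector files you actually download, and the
# binary name may vary between llama.cpp builds.
./llama-mtmd-cli \
  -m LFM2-VL-450M-Q4_K_M.gguf \
  --mmproj mmproj-LFM2-VL-450M.gguf \
  --image photo.jpg \
  -p "Describe this image in one sentence."
```

Quantized GGUF files are what make this practical on low-power hardware: they trade a little precision for a much smaller memory footprint, and the same pattern applies to the larger 1.6B checkpoint with only the filenames changed.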