Alibaba Cloud has launched Qwen3, the latest iteration in the Qwen series of large language models (LLMs), which reportedly outperforms leading models such as DeepSeek-R1, o1, and Gemini-2.5-Pro. Qwen3 features 32 billion parameters, supports hybrid reasoning and multilingual use, and is aimed at tasks such as solving STEM problems and executing agentic workflows; it is also available through Lambda's Inference API for broader accessibility. Concurrently, LocalAI has added several new models, including Qwen2.5-Omni-7B, a multimodal model that accepts text, image, and audio inputs, and the pku-ds-lab_fairyr1 series (14B-preview and 32B), efficient LLMs built on the DeepSeek-R1 architecture. LocalAI has also made Moondream2, a compact vision-language model, available. Finally, the DeepSeek-R1-0528 model is now offered with a promotional discount on inference.
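For readers who want to try Qwen3 right away, the sketch below shows one way to query it through an OpenAI-compatible endpoint such as Lambda's Inference API. The base URL, API key, and model identifier are assumptions (placeholders); consult the provider's documentation for the exact values.

```python
# Minimal sketch: calling Qwen3 via an assumed OpenAI-compatible endpoint.
# base_url, api_key, and model name below are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.lambda.ai/v1",   # assumed endpoint for Lambda's Inference API
    api_key="YOUR_LAMBDA_API_KEY",         # placeholder credential
)

response = client.chat.completions.create(
    model="qwen3-32b",  # hypothetical model identifier
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)
print(response.choices[0].message.content)
```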
As promised, always delivering the latest models to your door right after they emerge. Llama4, Qwen3... and now it is time for the new DeepSeek-R1-0528! This time with a BIG discount for you to run inference 👊 🔗 in the comments. https://t.co/noDhlPhDpi
✨ New model alert! ✨ Moondream2 is here! 🤩 It's a small, efficient vision-language model. 🖼️➡️💬 Try it out with LocalAI: `local-ai run moondream2-20250414` 🚀 #LocalAI #moondream2 #multimodal #AI
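Once the model is pulled with `local-ai run moondream2-20250414`, it can be queried through LocalAI's OpenAI-compatible API. The sketch below assumes LocalAI is listening on its default local port and accepts OpenAI-style vision messages; the image URL is a placeholder.

```python
# Minimal sketch: asking moondream2 about an image through a locally running LocalAI server.
# Assumes the default address http://localhost:8080 and OpenAI-style vision message format.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed default LocalAI address
    api_key="not-needed-locally",         # LocalAI does not require a real key by default
)

response = client.chat.completions.create(
    model="moondream2-20250414",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder image
            ],
        }
    ],
)
print(response.choices[0].message.content)
```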