NEW LLM from @alibabacloud now available at Serenity Star 🚀 Qwen3 is live with 3 hostings: @togethercompute, @OpenRouterAI, @FireworksAI_HQ. Go try it out now⬇ https://t.co/sPw6C98AGy It combines speed and depth to solve everything from code to complex problems. And this is https://t.co/45N3CxcUwQ
Huge improvements over Qwen 2.5 Coder 32B! Can't wait to see where this will lead us! Qwen is killing it! 🚀 https://t.co/z21N8GDWFY
Let's build an Agentic RAG app using Qwen 3 LLM (running locally):
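A minimal sketch of what such an agentic RAG loop could look like. The Qwen3 call is stubbed out so the example runs offline; in a real app you would replace `call_qwen3` with a request to a locally hosted Qwen3 endpoint (e.g. via Ollama's OpenAI-compatible API), and `retrieve` with embedding-based vector search. All function names and the tiny corpus are illustrative assumptions, not an official API.

```python
# Tiny in-memory "knowledge base" standing in for a real vector store.
CORPUS = {
    "qwen3": "Qwen3 ships dense and MoE models from 0.6B to 235B parameters.",
    "hosting": "Qwen3 is hosted by Together Compute, OpenRouter, and Fireworks AI.",
}

def retrieve(query: str) -> str:
    """Naive keyword retrieval; a real app would use embeddings."""
    hits = [text for key, text in CORPUS.items() if key in query.lower()]
    return "\n".join(hits) if hits else "No documents found."

def call_qwen3(prompt: str) -> str:
    """Stub for a local Qwen3 call (e.g. via Ollama).
    Echoes the prompt so the sketch stays runnable offline."""
    return f"[Qwen3 answer based on]: {prompt}"

def agentic_rag(question: str) -> str:
    # Agent step 1: fetch context relevant to the question.
    context = retrieve(question)
    # Agent step 2: answer grounded in the retrieved context.
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return call_qwen3(prompt)

print(agentic_rag("Which vendors offer qwen3 hosting?"))
```

The "agentic" part in a full app would let the model decide when to retrieve, re-query, or answer directly; here that loop is collapsed into a single retrieve-then-answer pass for brevity.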
Alibaba Cloud has expanded its AI large language model (LLM) family, Qwen, with the release of Qwen3, which includes dense and mixture-of-experts (MoE) models ranging from 0.6 billion to 235 billion parameters. Qwen3 has been recognized as the top-ranked open-source AI model globally, surpassing DeepSeek's R1 in evaluations of coding, mathematics, data analysis, and language instruction capabilities. The Qwen3 32B model is available on SambaNova's cloud platform, offering high processing speeds of 282 tokens per second and strong multilingual and reasoning performance. The model's hybrid reasoning approach enables step-by-step processing for complex prompts and rapid responses for simpler tasks. Qwen3 is also hosted by multiple vendors, including Together Compute, OpenRouter, and Fireworks AI. Local startups in Japan, such as Abeja, are leveraging Qwen to develop competitive AI models. Qwen3 shows notable improvements over the earlier Qwen 2.5 Coder 32B variant, indicating ongoing development. The model is being adopted for various applications, including agentic retrieval-augmented generation (RAG) apps, reflecting its versatility and growing influence in the open-source AI ecosystem.
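The hybrid reasoning approach described above can be sketched as a simple prompt router: complex prompts go to a slow, step-by-step "thinking" mode and simple ones to a fast direct mode. The mode names and the heuristic below are assumptions for illustration only; Qwen3 itself exposes this behavior through a thinking toggle in its chat interface rather than an external router.

```python
def choose_mode(prompt: str) -> str:
    """Illustrative heuristic: long prompts or ones containing
    reasoning-heavy keywords get the step-by-step mode."""
    complex_markers = ("prove", "derive", "step by step", "debug", "optimize")
    if len(prompt.split()) > 30 or any(m in prompt.lower() for m in complex_markers):
        return "thinking"  # slow, step-by-step reasoning
    return "fast"          # quick direct answer

print(choose_mode("What is the capital of France?"))              # fast
print(choose_mode("Prove that the sum of two even numbers is even."))  # thinking
```

In practice the model makes this decision internally (or per the user's toggle), so no such external classifier is needed; the sketch only illustrates the two response regimes the paragraph describes.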