Congrats @Alibaba_Qwen on the new Qwen1.5-32B release🔥 We have shipped it to Arena, thanks to the fast & reliable endpoint support by @togethercompute! Qwen1.5 72B has been the best open model on Chatbot Arena leaderboard. Very excited to see how the 32B performs! https://t.co/raPCjdNibJ https://t.co/kyrWjij4VL
Today, we release a new model of the Qwen1.5 series: Qwen1.5-32B and Qwen1.5-32B-Chat! Blog: https://t.co/HG9xXU3Bn1 HF: https://t.co/oE1DBcrRNq , search repos with “Qwen1.5-32B” in model names. GitHub: https://t.co/5vKV1KFwfy For a long time, our users have been requesting us… https://t.co/EtpmtB36rT
New open LLM from @Alibaba_Qwen! Qwen1.5 32B is a new multilingual dense LLM with a context of 32k, outperforming Mixtral on the open LLM Leaderboard! 🌍🚀 TL;DR 🧮 32B with 32k context size 💬 Chat model used DPO for preference training 📜 Custom License, commercially useable… https://t.co/8FkH021SPz


Alibaba_Qwen announced the release of the latest open models in the Qwen1.5 series, Qwen1.5-32B and Qwen1.5-32B-Chat, which support 10 languages and come with open weights for research. The new multilingual dense LLM, with 32B parameters and a 32k context window, has been praised for its language understanding, multilingual support, coding, and mathematical abilities, and is reported to handle Dutch particularly well. It outperforms Mixtral on the Open LLM Leaderboard and approaches the performance of the 72B model. The chat model uses DPO for preference training, and the weights ship under a custom license that allows commercial use. Following the release, the model was integrated into Arena by lmsysorg, with endpoint support from togethercompute, and is expected to perform well on the Chatbot Arena leaderboard.
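
For readers who want to try the checkpoints mentioned above, here is a minimal sketch of loading the chat model with Hugging Face transformers. The repo id "Qwen/Qwen1.5-32B-Chat", the sample prompt, and the generation settings are assumptions based on the "Qwen1.5-32B" naming in the announcement, not details from the original posts; a recent transformers version (and accelerate for device_map="auto") is assumed.

# Sketch: load the Qwen1.5-32B chat model and generate one reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-32B-Chat"  # assumed Hugging Face Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # spread the 32B weights across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Qwen1.5-32B release in one sentence."},
]
# Chat models ship a chat template; apply_chat_template builds the prompt tensor.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))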