
Alibaba's Qwen team has announced the release of its latest open-source language models, Qwen1.5-32B and Qwen1.5-32B-Chat, marking a notable step for multilingual large language models (LLMs). The new models support 10 languages and ship with open weights, aiming to strengthen the open-source community. Qwen1.5-32B, with 32 billion parameters and a 32k-token context window, delivers performance competitive with the much larger Qwen1.5-72B and with peers such as Mixtral across benchmarks covering language understanding, coding, and math, at a fraction of the inference cost. The chat model was aligned with DPO for preference training, and both ship under a custom license that permits commercial use; early feedback has praised their code, reasoning, and math benchmark results as well as their multilingual capabilities. Separately, the h2o-danube2-1.8b model was released, featuring 1.8 billion parameters and trained on an additional 2T tokens, claiming top results on the Open LLM Leaderboard for its size class.
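For readers trying the chat model locally: Qwen1.5 chat models consume prompts in the ChatML conversation format (each turn wrapped in <|im_start|>/<|im_end|> markers). In practice the Hugging Face tokenizer's chat template handles this for you; the sketch below only illustrates roughly what that template produces. The helper name and exact whitespace are our assumptions, not part of the release.

```python
def build_chatml_prompt(messages, add_generation_prompt=True):
    """Rough sketch of a ChatML-style prompt as used by Qwen1.5 chat models.

    messages: list of {"role": ..., "content": ...} dicts.
    Exact spacing may differ from the official chat template.
    """
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


# Example: a system instruction followed by a user question.
prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Qwen1.5-32B?"},
])
print(prompt)
```

With the real model, prefer `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` from Hugging Face transformers, which emits the officially correct formatting.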
More options for Qwen, more choices for MetaGPT. Qwen1.5-32B has just been released. 🙌Head over to MetaGPT to configure and try it out: https://t.co/1P8w3gIb8l 👇 https://t.co/52OBvDSJOL
Qwen 1.5 is out! @ollama #ai https://t.co/iFcrYUCIzt
