The recent release of open-weights models has sparked excitement in the AI community, with many users calling it a historic moment. Amid the enthusiasm, there are calls for greater recognition of Qwen models, which offer open weights, high quality, long context, multilingual support, and permissive licensing. Critics point out that closed-weights models, such as the newly released #ministral model, receive considerable attention while Qwen models from Alibaba are often overlooked. The discussion suggests growing interest in open-weights models relative to their closed counterparts, indicating a shift in focus within the AI landscape.
Qwen models don't get talked about enough. They offer open weights, top quality, long context, multilingual support, a permissive license, and a range of sizes and formats. No other model series can boast this.
Open weights are so good now that I think I'm more excited for the next Qwen release than the Anthropic one 🤗
Qwen is not getting enough recognition. A closed-weights, opaque #ministral model drops and everyone's hyped, yet almost no critical voices ask why the announcement blog doesn't compare it to Qwen. People are seriously sleeping on @Alibaba_Qwen models and I wonder why!