Qwen has released a preview version of its new reasoning model, QwQ-Max-Preview, which is now available on Qwen Chat. Built on the Qwen2.5-Max base model, it shows enhanced capabilities in deep reasoning, mathematics, coding, general-domain tasks, and agent-related workloads. The preview is a sneak peek at the upcoming official release of QwQ-Max, with further refinements still ongoing.

Qwen has announced plans to open-source both QwQ-Max and Qwen2.5-Max under the Apache 2.0 license in the near future, a move expected to democratize access to advanced reasoning capabilities and foster innovation across a wide range of applications.

Alongside the open-source releases, Qwen is set to launch a dedicated Qwen Chat app aimed at making advanced AI accessible to a broader audience. The app will offer a user-friendly interface for tasks such as problem-solving, code generation, and logical reasoning, with real-time responsiveness and integration with collaboration tools. Qwen also plans to release smaller variants of QwQ, such as QwQ-32B, designed for local deployment and suited to privacy-sensitive applications or low-latency workflows, with customization options. On LiveCodeBench, QwQ-Max-Preview has demonstrated performance on par with o3-mini (medium), indicating a competitive edge in reasoning and coding tasks.
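For readers who want to try one of the smaller QwQ variants locally once the weights are published, the sketch below shows a typical Hugging Face transformers loading flow. It is only an illustration under assumptions: the repo id "Qwen/QwQ-32B" is a placeholder based on how earlier Qwen checkpoints are distributed, and none of the specifics here are confirmed by the announcement itself.

```python
# Minimal sketch: running a locally deployable QwQ variant with Hugging Face transformers.
# ASSUMPTION: the repo id "Qwen/QwQ-32B" is hypothetical; substitute whatever checkpoint
# Qwen actually publishes under the Apache 2.0 license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"  # hypothetical identifier, not confirmed by the announcement

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint
    device_map="auto",    # requires `accelerate`; spreads layers across available devices
)

messages = [{"role": "user", "content": "How many prime numbers are there below 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models tend to emit long chains of thought, so allow a generous token budget.
outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

A 32B model at 16-bit precision needs roughly 64 GB of accelerator memory, so local runs on consumer hardware would in practice rely on quantized builds; the snippet above only illustrates the generic loading pattern.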
I lowkey suspect that Qwen had planned the open release for today already, but after seeing 3.7 they redefined it as a «preview» again and decided to go for a few thousand more steps. Likewise with Grok's turn to reasoning. Maybe a kooky theory, but, well, just a feeling.
Don't sleep on @Alibaba_Qwen: "official Apache 2.0-licensed open-source launch of QwQ-Max and Qwen2.5-Max planned soon." https://t.co/YpfRcPZXaK
Interesting: Qwen will release a reasoning model that has performance similar to o3-mini (medium) on LiveCodeBench. https://t.co/gnMAni8NWl