Sources
- Bruno Oliveira (effigy/acc)
On today's adventure in fine-tuning completely useless language models, behold: a CIFAR-10 solver built on Qwen2.5-Coder-0.5B (only 37.4% avg accuracy D:). https://t.co/pviKYtKnkI
- Rohan Paul
Quantization matters: the impact of quantization on the Aider benchmark, comparing Qwen 2.5 32B across 4 different providers/quantizations. The best version of the model rivals GPT-4o, while the worst performer is more like GPT-3.5 Turbo. https://t.co/q1vOfl6AoP
- CAMEL-AI.org
📢 We've just added support for the @Alibaba_Qwen models from Tongyi Qianwen in the 🐫 CAMEL framework! 🚀 By integrating the Qwen series of models, including Qwen2.5-Coder (specialized in code generation and repair), Qwen-max (a high-performance model), and Qwen-plus (optimized for… https://t.co/7iuDtJtSFH