
Microsoft has made significant strides in AI with Orca-Math, a specialized small language model that excels at solving mathematical word problems requiring multi-step reasoning. Created by fine-tuning the Mistral-7B model, Orca-Math achieves an impressive 86.81% score on the GSM8k benchmark, outperforming models that are ten times its size or trained with ten times more data. Notably, it does so without code execution, verifiers, or ensembling tricks, showcasing its efficiency and effectiveness. Alongside Orca-Math, Microsoft has also released new Orca-based models and its first Orca dataset, a significant contribution to the field that demonstrates the potential of using feedback to improve language models.
Microsoft's new Orca-Math AI outperforms models 10x larger https://t.co/ow7TqKbVGB
Microsoft’s Orca-Math, a specialized small language model, outperforms much larger models at solving math problems that require multi-step reasoning and shows the potential of using feedback to improve language models. Learn more. https://t.co/lz72MdVQWy https://t.co/Pm5ooTjhLB
🙌 We're thrilled to announce the release of our chat models Sailor-Chat (and their gguf versions!) from 0.5B to 7B, built on the awesome OpenOrca 🐬 @alignment_lab and Aya 🍀 @sarahookr @CohereForAI datasets! A massive thank you to the teams for open-sourcing these amazing… https://t.co/V1vXvOISTH
