“Orca-Math: Demonstrating the potential of SLMs with model specialization” Dataset: https://t.co/1NRDMCQjAD
MathScale: Scaling Instruction Tuning for Mathematical Reasoning. Large language models (LLMs) have demonstrated remarkable capabilities in problem-solving. However, their proficiency in solving mathematical problems remains inadequate. https://t.co/lVLWdmZFzL
Microsoft's new Orca-Math model based on Mistral uses multiple passes with @ContextualAI's KTO approach to achieve superior performance on math word problems. https://t.co/Hif3tpznF9


Microsoft's Orca-Math, a specialized small language model, was developed to solve mathematical word problems that require multi-step reasoning. The model outperforms much larger models and demonstrates the potential of using feedback to improve language models. Orca-Math is based on Mistral and uses a multi-pass approach with ContextualAI's KTO (Kahneman-Tversky Optimization) to achieve superior performance on math word problems.
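
To make the multi-pass idea concrete, the sketch below shows what one pass of preference-data construction for KTO-style tuning could look like: sample several candidate solutions per word problem, label each one by whether its final answer matches the reference answer, and collect prompt/completion/label rows of the kind a KTO trainer consumes. This is an illustration only, not the Orca-Math training code; `sample_solutions` and `extract_final_answer` are hypothetical stand-ins for model generation and answer checking.

```python
# Illustrative sketch of building KTO-style preference data from sampled
# solutions to math word problems. Not the Orca-Math implementation.
import re
import random


def extract_final_answer(solution):
    """Pull the last number out of a worked solution as its final answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", solution)
    return numbers[-1] if numbers else None


def sample_solutions(problem, n=4):
    """Hypothetical stand-in for sampling n candidate solutions from the model.

    A real pipeline would call the model's generate() with sampling enabled.
    """
    return [
        f"Step-by-step reasoning for '{problem}'... The answer is {random.randint(1, 20)}."
        for _ in range(n)
    ]


def build_kto_records(problems):
    """Turn (question, gold answer) pairs into prompt/completion/label rows,
    the unpaired format that KTO-style preference tuning expects."""
    records = []
    for item in problems:
        for solution in sample_solutions(item["question"]):
            # Label is True (desirable) when the candidate reaches the gold answer.
            label = extract_final_answer(solution) == item["answer"]
            records.append(
                {"prompt": item["question"], "completion": solution, "label": label}
            )
    return records


if __name__ == "__main__":
    dataset = build_kto_records(
        [
            {
                "question": "Tom has 3 apples and buys 5 more. How many apples does he have now?",
                "answer": "8",
            }
        ]
    )
    for row in dataset:
        print(row["label"], row["completion"][:60])
```

In a multi-pass setup, these labeled records would drive a preference-optimization step (KTO), and the improved model would then be sampled again to build the next round of data.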