Sources
- David Borish
In a field where bigger usually means better, DeepScaleR has just turned the AI world on its head. This tiny 1.5B-parameter model has achieved what tech giants spend millions trying to do: outperform OpenAI's o1-preview on complex mathematical reasoning.
- AK
Logical Reasoning in Large Language Models: A Survey https://t.co/Fv5dsgkByt
- arXivGPT
🏷️:Competitive Programming with Large Reasoning Models 🔗:https://t.co/yiv0GrTmNW https://t.co/88AFh0B8nd