DeepNewz
People-sourced. AI-powered. Unbiased News.
Feb 6, 05:15 PM
OpenAI's o3-mini Debuts 63% Cheaper Than o1-mini as Stanford, UW, and Others Introduce New Test-Time Scaling Approach
AI Modeling
AI

Authors
  • Brian Roemmele
  • TuringPost
  • Towards AI
Researchers from Stanford University, the University of Washington, the Allen Institute for AI, and Contextual AI have introduced a new approach to test-time scaling for large language models (LLMs), aiming to improve reasoning performance to the level of OpenAI's o1 model. The work arrives alongside OpenAI's own o3-mini, which is reported to be faster and smarter than o1-mini while costing 63% less, and 93% less than the original o1. Both developments reflect a broader shift in AI toward models that rely on slow, step-by-step reasoning and self-correction at inference time, techniques seen as crucial for advancing AI research and applications.
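To make "test-time scaling" concrete, here is a minimal sketch of one common form of it: self-consistency sampling, where spending more inference compute (drawing more reasoning chains and majority-voting their answers) improves accuracy. This is an illustration of the general idea, not the specific method in the paper; the `sample_answer` function is a hypothetical stub standing in for a real LLM call.

```python
import random
from collections import Counter

def sample_answer(question: str, rng: random.Random) -> str:
    # Hypothetical stub for one sampled reasoning chain from an LLM:
    # returns the correct answer 70% of the time, a wrong one otherwise.
    return "42" if rng.random() < 0.7 else rng.choice(["41", "43"])

def self_consistency(question: str, n_samples: int, seed: int = 0) -> str:
    # Test-time scaling: more samples = more inference compute.
    # The final answer is the plurality vote across sampled chains.
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistency("What is 6 * 7?", n_samples=101))
```

Because wrong answers scatter while the correct answer concentrates, the voted answer is right far more often than any single sample, and accuracy rises as `n_samples` grows.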

Written with ChatGPT (GPT-4o mini).

Additional media

Four story images (alt text only; images not reproduced here).