Microsoft has unveiled the Phi-3 Vision AI model at Build 2024, featuring multimodal capabilities. The Phi-3 models are setting new performance standards for Small Language Models (SLMs): Phi-3 Vision, with just 4.2B parameters, outperforms larger models in visual reasoning and OCR.
The past week has changed the future of AI ✨
> OpenAI’s new GPT-4o model
> Google’s new AI Overview for search
> Gemini 1.5 Flash
A lot of announcements were made. Check our article for the most important highlights 👇🏼 https://t.co/Wb51hUdznF
🤖🇺🇸 AI Models Shrink to Fit: A New Era of Computing Begins! 🌐 Microsoft’s pocket-sized AI models, like Phi-3-mini, are revolutionizing computing with capabilities similar to ChatGPT—right from your laptop or smartphone. Dive into the future! https://t.co/G6q8ggSpZl
The new Phi-3 models from Microsoft are here, setting new performance standards for SLMs (Small Language Models). Phi-3 Vision, with just 4.2B parameters, for example, outperforms larger models like Anthropic Claude-3 Haiku and Google Gemini 1.0 Pro V in visual reasoning, OCR, and… https://t.co/70gSsINnTo