Congrats @Microsoft on the latest phi-4 release! A 14b model surpassing GPT-4o in hard benchmarks like GPQA/MATH! Now for the real challenge—Phi-4 is live in Arena for human evaluation. Bring your toughest prompts to https://t.co/gxIFU9kIc2 and let’s see how it performs! https://t.co/Myc3gEeaqz
Phi-4 got reasonably high scores on BigCodeBench among the open-weight models 🤔 https://t.co/APuyIFOJDQ https://t.co/XzWsIpNFpY
Did you know @MSFTResearch's phi-4 can be adapted to the @AIatMeta Llama architecture? That is possible by separating the QKV and gate/up layers, allowing more accurate LoRA fine-tuning. This means all of your Llama-built tooling can be used with phi-4. 👀 phi-4 is an… https://t.co/8Z91kKmwML
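For readers curious what that weight-splitting looks like, here is a minimal sketch. It assumes phi-4 uses the Phi-3-style layout with a fused qkv_proj and gate_up_proj in each decoder layer, and that the targets are the separate Llama-style matrices (q_proj/k_proj/v_proj and gate_proj/up_proj); the key names, layer count, and split sizes are assumptions that should be checked against the actual checkpoint.

    import torch

    def split_fused_phi_weights(state_dict, num_layers, hidden_size,
                                num_kv_heads, head_dim, intermediate_size):
        """Hypothetical helper: split Phi-3-style fused projections into
        separate Llama-style matrices so per-matrix tooling (e.g. LoRA
        target modules) can address Q, K, V, gate, and up individually.
        Key names and split sizes follow the assumed Phi-3/Llama layouts."""
        out = dict(state_dict)
        q_rows = hidden_size                 # rows of the query projection
        kv_rows = num_kv_heads * head_dim    # rows of each key / value projection
        for i in range(num_layers):
            qkv_key = f"model.layers.{i}.self_attn.qkv_proj.weight"
            if qkv_key in out:
                q, k, v = torch.split(out.pop(qkv_key), [q_rows, kv_rows, kv_rows], dim=0)
                out[f"model.layers.{i}.self_attn.q_proj.weight"] = q
                out[f"model.layers.{i}.self_attn.k_proj.weight"] = k
                out[f"model.layers.{i}.self_attn.v_proj.weight"] = v
            gu_key = f"model.layers.{i}.mlp.gate_up_proj.weight"
            if gu_key in out:
                gate, up = torch.split(out.pop(gu_key), [intermediate_size, intermediate_size], dim=0)
                out[f"model.layers.{i}.mlp.gate_proj.weight"] = gate
                out[f"model.layers.{i}.mlp.up_proj.weight"] = up
        return out

Once the fused matrices are split this way, the usual Llama-oriented fine-tuning configs that list q_proj, k_proj, v_proj, gate_proj, and up_proj as adapter targets can be reused without modification.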
Microsoft has launched its Phi-4 model, a 14-billion-parameter AI, now available on Hugging Face at a competitive price of 7 cents per million input tokens and 14 cents per million output tokens. Phi-4 has demonstrated the ability to outperform larger models, including OpenAI's o1, by 4.5% on mathematical reasoning tasks. This release marks a significant step in enhancing the capabilities of small language models (SLMs) through Microsoft's 'rStar-Math' technique. Phi-4 has also achieved high scores on BigCodeBench, indicating its effectiveness among open-weight models, and it can be adapted to the Llama architecture developed by Meta, allowing for more precise fine-tuning. Microsoft aims to further evaluate Phi-4's performance through human testing in challenging scenarios.
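Since the weights are on Hugging Face, trying the model locally is a few lines of standard transformers code. This is a minimal sketch, assuming the checkpoint is published under the microsoft/phi-4 repo id and loads through the usual causal-LM API; adjust the repo id, dtype, and device placement to your setup.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/phi-4"  # assumed Hugging Face repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    # Quick math-flavored prompt to exercise the model's reasoning.
    prompt = "If f(x) = x^3 + 2x, what is f'(2)?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))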