Microsoft's Phi-4 Model, With 14 Billion Parameters, Outperforms Larger AI Models in Math
Dec 13, 2024, 04:36 AM
Microsoft Corp. has unveiled Phi-4, a new small language model with 14 billion parameters developed by Microsoft Research. The model specializes in complex reasoning and mathematics, and it outperforms larger models such as GPT-4, Claude 3.5, Llama 3.3, and Gemini Pro 1.5 on competition-math benchmarks, achieving 91.8% accuracy on AMC 10/12 math competition problems. It also scored 56.1 on GPQA, 80.4 on MATH, and 82.6 on HumanEval. Rather than increasing model size, its development emphasized synthetic data quality and innovative training techniques, marking a shift from the traditional 'scale-first' mindset in AI. Phi-4 is currently available on Azure AI Foundry and will soon be accessible on Hugging Face.
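Since the announcement says the model will soon be accessible on Hugging Face, here is a minimal sketch of how one might load and prompt it with the Hugging Face transformers library once the weights are published. The repository id "microsoft/phi-4" and the example prompt are assumptions, not details confirmed by the announcement.

```python
# A minimal sketch, assuming Phi-4 appears on the Hugging Face Hub
# under the id "microsoft/phi-4" (the exact repository name is an assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # hypothetical Hub id; confirm once the model is listed

# Load tokenizer and model; device_map="auto" places weights on available GPUs/CPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# Example math-style prompt, in line with the model's stated focus on reasoning and math.
prompt = "Solve: what is the sum of the first 50 positive odd integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```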
Markets
- Options: No • 50%, Yes • 50%. Resolution source: public reports and announcements by Fortune 500 companies.
- Options: Yes • 50%, No • 50%. Resolution source: published benchmark results from academic or industry sources.
- Options: No • 50%, Yes • 50%. Resolution source: market share reports from Hugging Face.
- Options: NeurIPS • 25%, Other • 25%, AAAI • 25%, ICML • 25%. Resolution source: official competition results and rankings.
- Options: Other • 25%, GPT-5 • 25%, Claude 4 • 25%, Llama 4 • 25%. Resolution source: benchmark reports and academic publications.
- Options: Education • 25%, Technology • 25%, Finance • 25%, Healthcare • 25%. Resolution source: industry reports and adoption announcements.