Will Phi-4 achieve a benchmark score above 95% on AMC 10/12 by end of 2025?
Yes • 50%
No • 50%
Published benchmark results from academic or industry sources
Microsoft's Phi-4 Model, With 14 Billion Parameters, Outperforms Larger AI Models in Math
Dec 13, 2024, 04:36 AM
Microsoft Corp. has unveiled Phi-4, a new small language model with 14 billion parameters developed by Microsoft Research. The model specializes in complex reasoning and math and shows strong performance across benchmarks: it outperforms larger models such as GPT-4, Claude 3.5, Llama 3.3, and Gemini Pro 1.5 on competition math and math benchmarks, achieving a score of 91.8 on AMC 10/12 math competition problems. Its development focuses on synthetic data quality and innovative training techniques rather than increased model size, marking a shift away from the traditional 'scale-first' mindset in AI. Phi-4 also scored 56.1 on GPQA, 80.4 on MATH, and 82.6 on HumanEval. It is currently available on Azure AI Foundry and will soon be accessible on Hugging Face.
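Since the summary notes the model is coming to Hugging Face, the following is a minimal sketch of how it could be queried there with the standard transformers text-generation workflow. The checkpoint id "microsoft/phi-4" and the sample competition-style prompt are illustrative assumptions, not details confirmed by the story.

# Minimal sketch (assumptions: the checkpoint ships under the Hugging Face id
# "microsoft/phi-4" and follows the standard transformers causal-LM interface).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # assumed model id for the upcoming Hugging Face release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 14B parameters: half precision keeps memory manageable
    device_map="auto",
)

# Example competition-style math prompt (illustrative only, not an official AMC item).
messages = [{"role": "user", "content": "What is the remainder when 7^2024 is divided by 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))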