Microsoft's Phi-3 AI Models Outperform Competitors, Excel in Benchmarks
Apr 23, 2024, 02:47 AM
Microsoft has announced the release of its new Phi-3 series of language models. The Phi-3-mini, a 3.8 billion parameter model trained on 3.3 trillion tokens, is designed to rival larger models such as Mixtral 8x7B and GPT-3.5. The Phi-3-medium, with 14 billion parameters trained on 4.8 trillion tokens, scores 78% on the MMLU benchmark and 8.9 on MT-bench, outperforming Llama-3 8B, GPT-3.5, and the Mixtral 8x7B MoE on most benchmarks, while the Phi-3-mini also surpasses Llama-3 8B on MMLU and HellaSwag. The series further includes the 7 billion parameter Phi-3-small, which outperforms Llama-3 8B and scores 75.3 on MMLU. The models, which share a similar architecture with Llama-2, are part of Microsoft's effort to support the open-source community.