Mixture-of-Agents Approach Enhances Large Language Model Capabilities, Scores 65.1% on AlpacaEval 2.0
Jun 10, 2024, 07:45 PM
Recent advancements in large language models (LLMs) have demonstrated significant improvements in natural language understanding and generation tasks. A new approach, termed Mixture-of-Agents (MoA), has been introduced by researchers J Wang, J Wang, B Athiwaratkun, C Zhang, and J Zou from Duke University and Together AI. The method constructs a layered architecture in which each layer comprises multiple LLM agents, and each agent uses the outputs of the previous layer as auxiliary information when generating its response. The MoA approach achieves state-of-the-art performance on benchmarks such as AlpacaEval 2.0, MT-Bench, and FLASK, surpassing GPT-4 Omni. Notably, MoA, using only open-source LLMs, scored 65.1% on AlpacaEval 2.0, significantly higher than GPT-4 Omni's 57.5%.
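The sketch below illustrates the layered flow described above: several "proposer" layers each query multiple models, each layer conditioning on the previous layer's answers, before a final model synthesizes the result. This is a minimal illustration, not the authors' reference implementation; `query_model`, the model names, the prompt wording, and the three-layer depth are all assumptions standing in for whatever LLM client and configuration is actually used.

```python
# Minimal sketch of a Mixture-of-Agents (MoA) style pipeline.
# NOTE: query_model is a hypothetical stand-in for a real LLM API call;
# model names and prompt templates here are illustrative assumptions.
from typing import List


def query_model(model: str, prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client for an open-source model endpoint."""
    return f"[{model} answer to: {prompt[:40]}...]"


def moa_layer(models: List[str], user_prompt: str, prior_answers: List[str]) -> List[str]:
    """One MoA layer: every agent answers the prompt, conditioned on the previous layer's outputs."""
    if prior_answers:
        context = "\n\n".join(
            f"Previous answer {i + 1}:\n{a}" for i, a in enumerate(prior_answers)
        )
        prompt = f"{context}\n\nUsing the answers above as references, respond to:\n{user_prompt}"
    else:
        prompt = user_prompt
    return [query_model(m, prompt) for m in models]


def mixture_of_agents(layers: List[List[str]], aggregator: str, user_prompt: str) -> str:
    """Run the proposer layers in sequence, then have an aggregator model produce the final answer."""
    answers: List[str] = []
    for models in layers:
        answers = moa_layer(models, user_prompt, answers)
    return moa_layer([aggregator], user_prompt, answers)[0]


if __name__ == "__main__":
    proposer_layers = [["model-a", "model-b", "model-c"]] * 3  # assumed three-layer depth
    print(mixture_of_agents(proposer_layers, aggregator="model-d",
                            user_prompt="Summarize the MoA approach."))
```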
Markets
Market 1
No • 50%
Yes • 50%
Resolution source: Results published on FLASK benchmark official website or research papers

Market 2
Yes • 50%
No • 50%
Resolution source: Official announcements from major tech companies or press releases

Market 3
No • 50%
Yes • 50%
Resolution source: Results published on MT-Bench official website or research papers

Market 4
Above 75% • 25%
65% - 70% • 25%
70% - 75% • 25%
Below 65% • 25%
Resolution source: AlpacaEval 2.0 official results or related research papers

Market 5
Other • 34%
MT-Bench • 33%
FLASK • 33%
Resolution source: Results published on official benchmark websites or research papers

Market 6
Natural Language Processing • 25%
Computer Vision • 25%
Reinforcement Learning • 25%
Other • 25%
Resolution source: Official announcements, research papers, or industry reports