Which algorithm will dominate preference optimization by end of 2025?
SimPO • 33%
DPO • 33%
ORPO • 33%
Resolution source: AI research publications and industry adoption reports
Researchers Introduce SimPO, Llama-3-8B Model Achieves 44.7% LC Win Rate
May 25, 2024, 03:24 PM
Researchers from the University of Virginia and Princeton University have introduced SimPO (Simple Preference Optimization), a new offline preference optimization algorithm. Developed by Yu Meng, Mengzhou Xia, and Danqi Chen in 2024, SimPO is designed to improve the simplicity and training stability of offline preference tuning, and it significantly outperforms existing methods such as DPO (Direct Preference Optimization) and ORPO. The Llama-3-8B-SimPO model has achieved notable results, including a 44.7% length-controlled (LC) win rate on AlpacaEval 2 and a 33.8% win rate on Arena-Hard. The algorithm is reference-free: instead of the separate reference model that DPO requires, it uses the average log probability of a sequence as the implicit reward, making it a simpler yet effective alternative for reinforcement learning from human feedback (RLHF). Experts have praised SimPO's effectiveness, with some noting that it excels at open-domain queries.
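For concreteness, the SimPO objective described in the paper can be sketched in a few lines of PyTorch. The sketch below is illustrative, not the authors' implementation: the function name, tensor conventions (summed token log probabilities passed in alongside response lengths), and the beta/gamma values are assumptions chosen for the example.

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, chosen_lens, rejected_lens,
               beta=2.0, gamma=1.0):
    """Sketch of the SimPO objective: the length-normalized (average)
    log probability of a response under the policy serves as the
    implicit reward, with no reference model and a target reward
    margin gamma between preferred and rejected responses.
    beta and gamma here are illustrative hyperparameter values."""
    # Average per-token log probability acts as the implicit reward.
    chosen_rewards = beta * chosen_logps / chosen_lens
    rejected_rewards = beta * rejected_logps / rejected_lens
    # Bradley-Terry-style preference loss with margin gamma.
    return -F.logsigmoid(chosen_rewards - rejected_rewards - gamma).mean()

# Dummy usage: summed token log-probs and lengths for two preference pairs.
loss = simpo_loss(torch.tensor([-40.0, -35.0]), torch.tensor([-60.0, -55.0]),
                  torch.tensor([20.0, 18.0]), torch.tensor([25.0, 22.0]))
print(loss.item())
```

The length normalization is the key departure from DPO: dividing by response length keeps the implicit reward from favoring longer outputs, and dropping the reference model removes a full forward pass per training step.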