Will SimPO achieve over a 50% win rate in future evaluations by end of 2024?
Yes • 50% | No • 50%
Resolution source: future publications or reports from the AlpacaEval platform or similar evaluation systems
Researchers Introduce SimPO, Llama-3-8B Model Achieves 44.7% LC Win Rate
May 25, 2024, 03:24 PM
Researchers from the University of Virginia and Princeton University have introduced SimPO (Simple Preference Optimization), a new offline preference optimization algorithm. Developed in 2024 by Yu Meng, Mengzhou Xia, and Danqi Chen, SimPO is designed to improve the simplicity and training stability of offline preference tuning, and it significantly outperforms existing methods such as DPO (Direct Preference Optimization) and ORPO. The Llama-3-8B-SimPO model, trained with SimPO, has achieved notable results, including a 44.7% length-controlled (LC) win rate on AlpacaEval 2 and a 33.8% win rate on Arena-Hard. The algorithm is reference-free, requiring no reference model during training, and uses the average log probability of a response as its implicit reward, making it a simpler yet effective alternative within reinforcement learning from human feedback (RLHF). Experts have praised SimPO's effectiveness, with some noting its strength on open-domain queries.
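To make the description above concrete, the sketch below shows the general shape of SimPO's pairwise objective as described in the paper: the implicit reward for each response is its length-normalized (average) log probability scaled by a coefficient, and the loss is a negative log-sigmoid of the reward margin between the chosen and rejected responses minus a target margin. The parameter names (`beta`, `gamma`) and the scalar-input form are simplifications for illustration; a real implementation would operate on batched tensors from the policy model.

```python
import math

def simpo_loss(logp_chosen, len_chosen, logp_rejected, len_rejected,
               beta=2.0, gamma=0.5):
    """Sketch of the SimPO pairwise loss for a single preference pair.

    logp_*: summed token log probability of the response under the policy
    len_*:  number of tokens in the response
    beta:   reward scaling coefficient (illustrative default)
    gamma:  target reward margin (illustrative default)
    """
    # Reference-free implicit reward: length-normalized log probability.
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    # Bradley-Terry-style objective: negative log-sigmoid of the margin.
    margin = reward_chosen - reward_rejected - gamma
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Because the reward is the policy's own average log probability, no frozen reference model needs to be kept in memory, which is the source of the method's simplicity relative to DPO.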