Llama3-8B surpasses Llama3-70B on reasoning benchmark by end of 2024?
Yes • 50%
No • 50%
Resolution source: publicly available results from standardized reasoning benchmarks such as GLUE or SuperGLUE
New BoT Method Enhances LLM Reasoning, Llama3-8B Surpasses Llama3-70B
Jun 7, 2024, 04:17 PM
Researchers from Peking University and UC Berkeley, including L Yang, Z Yu, and T Zhang, have introduced a new prompting method for large language models (LLMs) called Buffer of Thoughts (BoT). This thought-augmented reasoning framework aims to enhance the accuracy, efficiency, and robustness of LLM-based reasoning. The BoT method leverages a meta-buffer containing high-level thoughts, or thought templates, distilled from previous tasks. Notably, the Llama3-8B model integrated with BoT has demonstrated the potential to surpass the performance of the larger Llama3-70B model on reasoning tasks.
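The meta-buffer described above can be pictured as a store of reusable high-level "thought templates" that are retrieved and instantiated for a new task. A minimal sketch of that retrieve-and-instantiate loop follows; the template texts, task categories, and keyword-based retrieval are hypothetical simplifications (the actual BoT framework uses LLM-based distillation and retrieval), so treat this as an illustration of the idea, not the paper's implementation.

```python
# Illustrative sketch of the Buffer of Thoughts idea: a meta-buffer of
# thought templates, a retriever, and prompt instantiation. All names and
# template texts are hypothetical.

META_BUFFER = {
    # category -> high-level thought template distilled from past tasks
    "arithmetic": (
        "Break the problem into sub-expressions, evaluate each one, "
        "then combine the intermediate results."
    ),
    "logic": (
        "List the given premises, derive implications step by step, "
        "and check the conclusion against each premise."
    ),
}

def retrieve_template(task: str) -> str:
    """Pick the template whose category keyword appears in the task.

    A naive keyword match stands in for the embedding-based retrieval
    a real system would use.
    """
    lowered = task.lower()
    for category, template in META_BUFFER.items():
        if category in lowered:
            return template
    return "Reason step by step."  # fallback when nothing matches

def build_prompt(task: str) -> str:
    """Instantiate the retrieved thought template into a reasoning prompt."""
    template = retrieve_template(task)
    return f"Thought template: {template}\nTask: {task}\nSolution:"

print(build_prompt("Solve this arithmetic puzzle: 17 * 24 + 3"))
```

The appeal of this structure is that a small model reuses distilled reasoning strategies instead of rediscovering them per query, which is how an 8B model augmented this way could plausibly compete with a much larger unaugmented one.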