New BoT Method Enhances LLM Reasoning, Llama3-8B Surpasses Llama3-70B
Jun 7, 2024, 04:17 PM
Researchers from Peking University and UC Berkeley, including L Yang, Z Yu, and T Zhang, have introduced a new prompting method for large language models (LLMs) called Buffer of Thoughts (BoT). This thought-augmented reasoning framework aims to improve the accuracy, efficiency, and robustness of LLM reasoning. BoT maintains a meta-buffer of high-level "thought templates" distilled from previously solved tasks; for each new problem, a relevant template is retrieved and instantiated to guide the model's reasoning. Notably, Llama3-8B combined with BoT has been reported to have the potential to surpass the larger Llama3-70B model on reasoning tasks.
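To make the retrieve-instantiate-update loop described above concrete, here is a minimal Python sketch of a BoT-style pipeline. It is an illustration under stated assumptions, not the authors' implementation: the names `ThoughtTemplate`, `MetaBuffer`, `solve_with_bot`, and the generic `llm(prompt) -> str` completion function are all hypothetical, and the exact-match retrieval below stands in for the similarity-based template retrieval the paper describes.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class ThoughtTemplate:
    task_type: str  # e.g. "math_word_problem" (illustrative label)
    template: str   # high-level reasoning recipe distilled from past tasks


@dataclass
class MetaBuffer:
    templates: list = field(default_factory=list)

    def retrieve(self, task_type: str) -> Optional[ThoughtTemplate]:
        # Toy retrieval by exact task type; the actual framework retrieves
        # the most similar template, e.g. via embedding similarity.
        for t in self.templates:
            if t.task_type == task_type:
                return t
        return None

    def distill(self, task_type: str, solution_trace: str) -> None:
        # Buffer update: store a generalized template so future problems
        # of the same type can reuse it instead of reasoning from scratch.
        self.templates.append(ThoughtTemplate(task_type, solution_trace))


def solve_with_bot(llm: Callable[[str], str], buffer: MetaBuffer,
                   problem: str, task_type: str) -> str:
    tmpl = buffer.retrieve(task_type)
    if tmpl is not None:
        # Thought-augmented prompt: instantiate the retrieved high-level
        # template with the concrete problem.
        prompt = (f"Reasoning template:\n{tmpl.template}\n\n"
                  f"Apply this template step by step to solve:\n{problem}")
    else:
        prompt = f"Solve step by step:\n{problem}"
    answer = llm(prompt)
    if tmpl is None:
        # No template existed for this task type; distill one from the
        # fresh solution trace so the meta-buffer grows over time.
        buffer.distill(task_type, answer)
    return answer
```

In this sketch, reuse of a stored template replaces repeated from-scratch chain-of-thought prompting, which is the source of the efficiency and accuracy gains the framework claims.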
Markets
Yes • 50%
No • 50%
Resolution source: Official competition results from major NLP challenges such as Kaggle competitions, NeurIPS challenges, or other recognized NLP competitions

Yes • 50%
No • 50%
Resolution source: Official announcements or product releases from major commercial LLM developers such as OpenAI, Google, or Microsoft

Yes • 50%
No • 50%
Resolution source: Publicly available results from standardized reasoning benchmarks such as GLUE or SuperGLUE

Google • 25%
Anthropic • 25%
OpenAI • 25%
Microsoft • 25%
Resolution source: Official announcements or product releases from the companies

Llama3-8B • 33%
Llama3-70B • 33%
Other • 34%
Resolution source: Publicly available benchmark results from standardized reasoning tasks

Peking University • 33%
UC Berkeley • 33%
Other • 34%
Resolution source: Citation counts from Google Scholar or similar academic citation-tracking services