Google Unveils 'Infini-attention' for Unlimited Context in LLMs
Apr 20, 2024, 03:52 PM
Google has introduced a machine learning technique called 'Infini-attention' that enables language models to handle unbounded context while maintaining a fixed-size memory footprint. The approach, detailed in the recent papers 'Mixture of Depths' and 'Efficient Infinite Context Transformers with Infini-attention', extends pre-trained large language models (LLMs) by applying local attention to each input chunk while carrying relevant information across chunks in a fixed-size compressive memory. The development is seen as a significant step toward context lengths long enough for most industry use cases.
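The chunked mechanism described above can be sketched in a few lines of NumPy. The sketch below follows the update rules given in the Infini-attention paper (a linear-attention-style compressive memory with an ELU+1 nonlinearity and a learned gate blending memory output with local attention); the dimensions, projection matrices, and single-head setup are illustrative assumptions, not Google's implementation.

```python
import numpy as np

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, keeping dot products with the memory positive.
    return np.where(x > 0, x + 1.0, np.exp(x))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def infini_attention(segments, Wq, Wk, Wv, beta):
    """Process a stream of input segments with local attention plus a
    fixed-size compressive memory, per the Infini-attention update rules.

    segments: list of (seg_len, d_model) arrays
    Wq, Wk, Wv: (d_model, d_head) projections (hypothetical sizes)
    beta: scalar gate parameter mixing memory and local attention
    """
    d_head = Wq.shape[1]
    M = np.zeros((d_head, d_head))   # compressive memory, size never grows
    z = np.zeros((d_head,))          # normalization term
    outputs = []
    for x in segments:
        Q, K, V = x @ Wq, x @ Wk, x @ Wv
        sQ, sK = elu_plus_one(Q), elu_plus_one(K)
        # 1) Retrieve long-range context written by earlier segments.
        A_mem = (sQ @ M) / (sQ @ z + 1e-6)[:, None]
        # 2) Standard causal dot-product attention within the segment.
        scores = Q @ K.T / np.sqrt(d_head)
        scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -1e9
        A_local = softmax(scores) @ V
        # 3) Learned gate blends memory output and local attention.
        g = 1.0 / (1.0 + np.exp(-beta))
        outputs.append(g * A_mem + (1.0 - g) * A_local)
        # 4) Write this segment into memory before moving on.
        M = M + sK.T @ V
        z = z + sK.sum(axis=0)
    return np.concatenate(outputs, axis=0)

# Usage with toy shapes: three 4-token segments, 16-dim model, 8-dim head.
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((16, 8)) * 0.1 for _ in range(3))
segs = [rng.standard_normal((4, 16)) for _ in range(3)]
print(infini_attention(segs, Wq, Wk, Wv, beta=0.0).shape)  # (12, 8)
```

However many segments are streamed through, M and z stay at (d_head, d_head) and (d_head,), which is what gives the fixed memory footprint regardless of total context length.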
Markets
No • 50%
Yes • 50%
Resolution source: US Patent and Trademark Office public records

Yes • 50%
No • 50%
Resolution source: official results released on NLP benchmark platforms such as GLUE or SQuAD

No • 50%
Yes • 50%
Resolution source: public announcements or credible news sources confirming adoption

OpenAI • 20%
IBM • 20%
Facebook • 20%
Amazon • 20%
Microsoft • 20%
Resolution source: public announcements or credible tech news

Financial Services • 25%
Natural Language Processing • 25%
Autonomous Vehicles • 25%
Healthcare • 25%
Resolution source: market analysis reports and tech news outlets

No increase • 25%
Up to 10% increase • 25%
10% to 20% increase • 25%
Over 20% increase • 25%
Resolution source: financial reports and market analysis