Will LiquidAI's 40B model achieve SOTA on SuperGLUE by end of 2024?
Yes • 50%
No • 50%
Official SuperGLUE benchmark results published on the SuperGLUE leaderboard
MIT Spinoff LiquidAI Debuts State-of-the-Art Non-Transformer AI Models with Context Length Scaling
Sep 30, 2024, 05:14 PM
MIT spinoff LiquidAI has introduced Liquid Foundation Models (LFMs), a new series of language models that represent a significant advance in AI technology. The series includes 1B, 3B, and 40B parameter models, which are notable for their state-of-the-art performance despite not being based on the traditional Transformer architecture. The LFMs are designed to have minimal memory footprints, making them suitable for edge deployments. The research team, which includes notable figures such as Joscha Bach and Mikhail Parakhin, has reimagined AI from first principles, producing models that the company says outperform traditional architectures. LiquidAI's approach draws inspiration from biological systems and aims to rethink every part of the AI pipeline, from architecture design to post-training. The models also feature context length scaling and achieve SOTA performance without relying on a GPT-style Transformer architecture.
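As a rough illustration of why a non-Transformer, recurrent design can keep a minimal memory footprint as context length grows, the sketch below implements a toy liquid time-constant style cell in the spirit of the team's earlier liquid neural network research. It is not LiquidAI's LFM implementation; the class name, dimensions, and gating form are all assumptions made purely for illustration. The point is that the hidden state stays a fixed size regardless of sequence length, unlike a Transformer's key/value cache, which grows with every token.

```python
# Illustrative sketch only: a toy liquid time-constant (LTC) style recurrent cell.
# All names, shapes, and the gating form are assumptions; this is NOT LFM code.
import numpy as np


class LiquidCellSketch:
    """Hypothetical liquid-style cell with a fixed-size hidden state."""

    def __init__(self, input_dim: int, hidden_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_dim)
        self.W_in = rng.normal(0, scale, (hidden_dim, input_dim))    # input projection
        self.W_rec = rng.normal(0, scale, (hidden_dim, hidden_dim))  # recurrent mixing
        self.tau = np.ones(hidden_dim)                               # base time constants
        self.hidden_dim = hidden_dim

    def step(self, h: np.ndarray, x: np.ndarray, dt: float = 1.0) -> np.ndarray:
        # Input-dependent gating modulates the effective time constant: the
        # "liquid" idea that the dynamics adapt to the incoming token.
        drive = np.tanh(self.W_rec @ h + self.W_in @ x)
        gate = 1.0 / (self.tau + np.abs(drive))   # assumed gating form
        return h + dt * gate * (drive - h)        # Euler step toward the drive

    def run(self, sequence: np.ndarray) -> np.ndarray:
        # Memory stays O(hidden_dim) no matter how long the sequence is,
        # unlike attention, where cached keys/values grow per token.
        h = np.zeros(self.hidden_dim)
        for x in sequence:
            h = self.step(h, x)
        return h


if __name__ == "__main__":
    cell = LiquidCellSketch(input_dim=16, hidden_dim=64)
    for length in (128, 4096):
        seq = np.random.default_rng(1).normal(size=(length, 16))
        state = cell.run(seq)
        # The state is the same size for a 128-token and a 4096-token context.
        print(f"context length {length:5d} -> state size {state.shape}")
```

Under these assumptions, the per-step memory cost is constant, which is one way a recurrent, biologically inspired design could support the edge deployments and context length scaling described above.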