Which LiquidAI model will achieve the highest score on SuperGLUE by end of 2024?
1B parameter model • 25%
3B parameter model • 25%
40B parameter model • 25%
None of the models achieve the highest score • 25%
Resolution source: official SuperGLUE benchmark results published on the SuperGLUE leaderboard
MIT Spinoff LiquidAI Debuts State-of-the-Art Non-Transformer AI Models with Context Length Scaling
Sep 30, 2024, 05:14 PM
MIT spinoff LiquidAI has introduced Liquid Foundation Models (LFMs), a new series of language models available in 1B, 3B, and 40B parameter sizes. The LFMs are notable for achieving state-of-the-art performance without relying on the traditional Transformer architecture, and their minimal memory footprint makes them suitable for edge deployments. The research team, which includes notable figures such as Joscha Bach and Mikhail Parakhin, has reimagined AI from first principles, drawing inspiration from biological systems and rethinking the entire AI pipeline from architecture design to post-training. The resulting models feature context-length scaling and outperform traditional architectures without using a GPT-style design.