Which benchmark will LiquidAI's models achieve SOTA performance in by June 30, 2024?
MMLU • 25%
ARC • 25%
GSM8K • 25%
None by June 30, 2024 • 25%
Resolution source: Publicly available benchmark results
LiquidAI Introduces SOTA Liquid Foundation Models: 1B, 3B, 40B
Sep 30, 2024, 04:20 PM
LiquidAI has introduced a new series of Liquid Foundation Models (LFMs) at 1B, 3B, and 40B parameters. The models are built on a custom architecture rather than the traditional Transformer design and are positioned as state-of-the-art (SOTA) in performance, with a minimal memory footprint and efficient inference that make them suitable for edge deployments. They are general-purpose sequence models capable of handling text and audio tasks. Key figures involved in the announcement include Joscha Bach and Mikhail Parakhin. LiquidAI reports that the LFMs outperform Transformer-based models in the same parameter range on benchmarks such as MMLU, ARC, and GSM8K, and describes them as built from first principles to achieve groundbreaking performance.
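The claimed gains on MMLU, ARC, and GSM8K can in principle be checked against publicly available benchmark results, which is what this market resolves on. As a rough illustration only, the sketch below shows how such a comparison is typically scored with EleutherAI's lm-evaluation-harness, assuming a checkpoint is reachable through the Hugging Face backend; the model ID `LiquidAI/lfm-3b` is a placeholder, since the LFMs were announced via hosted endpoints rather than as open weights.

```python
# Hypothetical scoring sketch using EleutherAI's lm-evaluation-harness
# (pip install lm-eval). "LiquidAI/lfm-3b" is a placeholder model ID --
# substitute whatever checkpoint or API wrapper you actually have access to.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                # Hugging Face backend
    model_args="pretrained=LiquidAI/lfm-3b",   # placeholder checkpoint ID
    tasks=["mmlu", "arc_challenge", "gsm8k"],  # the benchmarks cited in the story
    num_fewshot=5,                             # one few-shot setting for all tasks; a simplification
    batch_size=8,
)

# Print the per-task metrics for a quick side-by-side comparison.
for task, metrics in results["results"].items():
    print(task, metrics)
```

Running the same evaluation against a Transformer baseline of comparable size would yield the per-task numbers needed to judge the "same parameter range" claim.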