Cerebras Launches AI Inference Service, 20x Faster with 1,850 Tokens/sec for 8B Model
Aug 27, 2024, 05:22 PM
Cerebras Systems has launched what it claims is the world's fastest AI inference service, directly challenging Nvidia's dominance in AI computing. The new service, Cerebras Inference, runs on the company's custom wafer-scale AI accelerator chips and processes Llama 3.1 models at 1,850 tokens per second for the 8B model and 450 tokens per second for the 70B model. Cerebras asserts that its inference service is 20x faster than traditional GPU-based systems, and at 60 cents per million tokens it is priced at a fifth of what hyperscalers charge. The launch aims to provide a cost-effective and efficient alternative to Nvidia's GPUs, with claims of 100x better price-performance.
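To put the quoted figures in perspective, a quick back-of-envelope calculation (assuming, as a simplification, that the claimed 8B throughput and the $0.60-per-million-token price hold for a continuously sustained stream) shows what a single stream would generate and cost per day:

```python
# Sanity check of Cerebras's published numbers (assumptions: sustained
# throughput around the clock and the quoted list price; real cost
# depends on workload mix and utilization).

TOKENS_PER_SEC_8B = 1850          # claimed Llama 3.1 8B throughput
PRICE_PER_MILLION_TOKENS = 0.60   # claimed price, USD

SECONDS_PER_DAY = 86_400
tokens_per_day = TOKENS_PER_SEC_8B * SECONDS_PER_DAY
cost_per_day = tokens_per_day / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"{tokens_per_day:,} tokens/day at ${cost_per_day:.2f}/day")
```

One sustained stream at the claimed rate works out to roughly 160 million tokens, or under $100 of billed usage, per day.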
Markets
Yes • 50%
No • 50%
Resolution source: Official announcements from major tech companies or Cerebras Systems

No • 50%
Yes • 50%
Resolution source: Market analysis reports from reputable firms like Gartner or IDC

Yes • 50%
No • 50%
Resolution source: Official announcements from Nvidia

Cerebras significantly outperforms Nvidia • 25%
Nvidia significantly outperforms Cerebras • 25%
Nvidia slightly outperforms Cerebras • 25%
Cerebras slightly outperforms Nvidia • 25%
Resolution source: Benchmark reports from independent testing labs or AI research institutions

Nvidia offers significantly better price-performance • 25%
Cerebras offers significantly better price-performance • 25%
Cerebras offers slightly better price-performance • 25%
Nvidia offers slightly better price-performance • 25%
Resolution source: Benchmark reports from independent testing labs or AI research institutions

Google DeepMind • 25%
OpenAI • 25%
Microsoft Research • 25%
Other • 25%
Resolution source: Official announcements from AI research institutions or Cerebras Systems