Anthropic Launches Prompt Caching with 90% Cost and 80% Latency Reductions
Aug 14, 2024, 04:49 PM
Anthropic has introduced prompt caching in its API, currently available in beta. By storing and reusing context across requests, prompt caching can cut API input costs by up to 90% and reduce latency by up to 80%. The feature is particularly beneficial for applications that repeatedly send long, static instructions, since that shared prefix no longer has to be reprocessed on every call, and it is expected to have a substantial impact on patterns such as Retrieval-Augmented Generation (RAG). Under Anthropic's pricing model, writing content to the cache is charged separately, and cached content has a five-minute lifetime that refreshes each time it is used. The feature supports Claude 3 Haiku, Claude 3 Opus, and Claude 3.5 Sonnet.
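For readers who want to try the beta, the sketch below shows one way to mark a long system prompt as cacheable in Python. It assumes the `cache_control` block and the `anthropic-beta: prompt-caching-2024-07-31` request header from Anthropic's beta documentation; the model ID, the placeholder prompt, and the variable name LONG_STATIC_INSTRUCTIONS are illustrative, not taken from the story above.

```python
import anthropic

# Hypothetical placeholder: the long, static context (instructions, reference
# material, few-shot examples) that you want the API to cache and reuse.
LONG_STATIC_INSTRUCTIONS = (
    "You are a support assistant. <several thousand tokens of policy text...>"
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The cache_control block marks the system prompt as cacheable, and the beta
# header opts the request into prompt caching. Within the five-minute cache
# lifetime (refreshed on each use), repeat requests read this prefix from the
# cache at the discounted rate instead of reprocessing it.
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_STATIC_INSTRUCTIONS,
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarize the refund policy."}],
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)
print(response.content[0].text)
```

Only the cache write on the first request is billed at the higher write rate; subsequent requests that reuse the same prefix within the cache lifetime pay the reduced cached-input price, which is where the up-to-90% input-cost saving comes from.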
Markets
- Yes • 50% / No • 50%
  Resolution source: Official statements from Anthropic or client testimonials

- Yes • 50% / No • 50%
  Resolution source: Official announcement from Anthropic or updates on their website

- Yes • 50% / No • 50%
  Resolution source: Official reports or statements from Anthropic

- Less than 70% • 25% / 70% to 75% • 25% / 75% to 80% • 25% / More than 80% • 25%
  Resolution source: Performance metrics published by Anthropic

- Retrieval-Augmented Generation (RAG) • 25% / Chatbots • 25% / Document Summarization • 25% / Other • 25%
  Resolution source: Performance reports or case studies published by Anthropic or its clients

- Claude 3 Haiku • 25% / Claude 3 Opus • 25% / Claude 3.5 Sonnet • 25% / Other • 25%
  Resolution source: Usage statistics or reports from Anthropic