Mistral AI Launches Codestral-22B Code Model with 32K Context Length, Outperforms Larger Models
May 29, 2024, 02:56 PM
Mistral AI has launched its first code model, Codestral-22B, designed for code generation tasks. The 22B-parameter dense model is trained on more than 80 programming languages and has a 32K context length. On benchmarks such as RepoBench and HumanEval, Codestral-22B outperforms several larger models, including LLaMA 3 70B and DeepSeek Coder 33B. The open-weight model is available through an instruct endpoint on Mistral's API platform and can be tried for free on Le Chat. Mistral has also introduced a new Mistral AI Non-Production License (MNPL) that allows developers to use the model for research and other non-commercial purposes. Codestral-22B is integrated with VS Code, supports fill-in-the-middle completion, and is available on Continue.
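The fill-in-the-middle capability mentioned above can be exercised programmatically. The sketch below is a minimal illustration, not an official client: the endpoint path, the `codestral-latest` model name, and the payload fields are assumptions modeled on Mistral's public API conventions and are not taken from this story. It only contacts the API when a `MISTRAL_API_KEY` environment variable is set.

```python
import json
import os
import urllib.request

# Hypothetical sketch: the endpoint URL, model name, and payload fields
# below are assumptions, not confirmed by the announcement above.
API_URL = "https://api.mistral.ai/v1/fim/completions"


def build_fim_request(prefix: str, suffix: str, model: str = "codestral-latest") -> dict:
    """Build a fill-in-the-middle request body: the model is asked to
    generate the code that belongs between `prefix` and `suffix`."""
    return {
        "model": model,
        "prompt": prefix,
        "suffix": suffix,
        "max_tokens": 64,
    }


payload = build_fim_request(
    prefix="def fibonacci(n):\n",
    suffix="\nprint(fibonacci(10))",
)

api_key = os.environ.get("MISTRAL_API_KEY")
if api_key:
    # Only contact the API when a key is configured.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())
else:
    # No key set: just show the request body we would send.
    print(json.dumps(payload, indent=2))
```

In a fill-in-the-middle request the model completes the span between the prefix and the suffix, which is what powers in-editor completion in integrations such as Continue for VS Code.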
Markets
- Yes • 50% / No • 50% (resolution source: Mistral AI official download statistics)
- No • 50% / Yes • 50% (resolution source: Announcements from major coding platforms, Mistral AI press releases)
- No • 50% / Yes • 50% (resolution source: Market research reports, industry analysis publications)
- Yes • 50% / No • 50% (resolution source: Official API usage statistics from Mistral AI)
- Yes • 50% / No • 50% (resolution source: Publicly available user adoption statistics from GitHub and Mistral AI)
- No • 50% / Yes • 50% (resolution source: Official results of the 2024 Codex Programming Challenge)
- No • 50% / Yes • 50% (resolution source: Official announcements from Mistral AI and major IDEs)
- No • 50% / Yes • 50% (resolution source: Benchmark results published by reputable sources like RepoBench and HumanEval)
- Yes • 50% / No • 50% (resolution source: Market analysis reports from credible firms)
- Yes • 50% / No • 50% (resolution source: RepoBench benchmark results published on official websites or benchmark reports)
- Yes • 50% / No • 50% (resolution source: Official announcements from Mistral AI)
- No • 50% / Yes • 50% (resolution source: Official announcements from Mistral AI)
- Yes (6-10 languages) • 33% / No new languages • 34% / Yes (1-5 languages) • 33% (resolution source: Official announcements from Mistral AI)
- Yes (3-5 IDEs) • 33% / No new IDEs • 34% / Yes (1-2 IDEs) • 33% (resolution source: Official integration announcements from Mistral AI and major IDEs)
- 2nd place • 33% / 1st place • 33% / 3rd place • 34% (resolution source: Official RepoBench rankings and results)
- Mistral AI • 25% / Meta (LLaMA 3) • 25% / DeepSeek • 25% / Other • 25% (resolution source: HumanEval official benchmark results)
- Mistral AI • 25% / Other • 25% / DeepSeek • 25% / Meta (LLaMA 3) • 25% (resolution source: RepoBench official benchmark results)
- Other • 25% / Codestral-22B • 25% / LLaMA 3 70B • 25% / DeepSeek Coder 33B • 25% (resolution source: GitHub repository statistics)
- Codestral-75B • 25% / Other • 25% / Codestral-100B • 25% / Codestral-50B • 25% (resolution source: Official announcements from Mistral AI)
- Performance • 25% / Context length • 25% / Language support • 25% / Licensing model • 25% (resolution source: Performance reports and user surveys)
- Visual Studio • 25% / IntelliJ IDEA • 25% / PyCharm • 25% / Eclipse • 25% (resolution source: Official releases from IDE providers)
- DeepSeek Coder 33B • 33% / Codestral-22B • 33% / LLaMA 3 70B • 33% (resolution source: HumanEval benchmark results published on official websites or benchmark reports)
- DeepSeek Coder 33B • 33% / Codestral-22B • 33% / LLaMA 3 70B • 33% (resolution source: Usage statistics published by Mistral AI)