Which feature will be most enhanced in the next Fugaku-LLM version by 2025?
Token handling capacity • 33%
Processing speed • 33%
Energy efficiency • 34%
Resolution source: Technical documentation or official announcements from the development team
Japan's Fugaku-LLM AI Model: 13B Parameters, 400 Tokens
May 11, 2024, 05:24 AM
Fugaku-LLM, a large language model developed by a team of Japanese researchers, has been released, marking a significant advance for AI in Japan. The model has 13 billion parameters and handles up to 400 tokens, and it was trained on the Fugaku supercomputer, leveraging the machine's CPU-based architecture rather than GPUs. The project is one of the earliest large language model initiatives in Japan; it was spearheaded by the founders of Kotoba and led by researcher Rio Yokota, with a collaborative team of top researchers using distributed training methods.
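The announcement does not include usage instructions, but for context, the sketch below shows one common way to load and prompt a released 13-billion-parameter causal language model with the Hugging Face transformers library. The repository identifier Fugaku-LLM/Fugaku-LLM-13B, the Japanese prompt, and the generation settings are illustrative assumptions, not details taken from the announcement.

# Minimal sketch: loading a 13B-parameter causal LM with Hugging Face transformers.
# The repository id below is an assumption; substitute the team's official checkpoint name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fugaku-LLM/Fugaku-LLM-13B"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to fit a 13B model in memory
    device_map="auto",            # spread layers across available devices
)

prompt = "スーパーコンピュータ「富岳」とは"  # "What is the Fugaku supercomputer?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))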