Most acclaimed feature of Fugaku-LLM by next ISC conference
Performance efficiency • 25%
Scalability • 25%
Language versatility • 25%
Integration capabilities • 25%
Resolution source: expert reviews, tech blogs, and panel discussions at major tech conferences
Japan's CPU-Trained Fugaku-LLM, a 13B Language Model, Integrated into SambaNova
May 13, 2024, 06:28 AM
The newly released Fugaku-LLM, a 13-billion-parameter large language model trained on roughly 400 billion tokens, was trained on the Fugaku supercomputer using primarily CPU resources. The initiative is one of the first major LLM projects in Japan, spearheaded by the founders of Kotoba and led by notable figures such as Rio Yokota. Developed by a collaborative team of top researchers and organizations including CyberAgent and Fujitsu, the model is notable both for its size and for its use of distributed training methods. Fugaku-LLM has also been integrated into SambaNova Systems' Samba-1 and is being showcased at ISC24 in booth A11, Hall H.
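For readers who want to try the model themselves, the following is a minimal inference sketch using the Hugging Face transformers library. The repository identifier Fugaku-LLM/Fugaku-LLM-13B is an assumption about where the public checkpoint is hosted, not something stated in the story; check the Hub for the official name.

```python
# Minimal sketch: loading Fugaku-LLM for inference with Hugging Face
# transformers. The repo id "Fugaku-LLM/Fugaku-LLM-13B" is an assumed
# Hub identifier, not confirmed by the story above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fugaku-LLM/Fugaku-LLM-13B"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 13B params; bf16 halves memory vs fp32
    device_map="auto",           # spread layers across available devices
)

# Fugaku-LLM is primarily a Japanese-language model; prompt accordingly.
prompt = "スーパーコンピュータ「富岳」とは"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```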