Meta Introduces 34B Parameter Chameleon Mixed-Modal Models
May 17, 2024, 01:20 AM
Meta has introduced Chameleon, a family of early-fusion, token-based mixed-modal models capable of understanding and generating images and text in arbitrary interleaved sequences. The research, conducted by FAIR, presents a stable training recipe for these fully token-based, early-fusion autoregressive multi-modal models. The largest variant, a 34B parameter autoregressive model, was trained on approximately 10 trillion tokens. The models are designed for interleaved text and image understanding and generation, demonstrating state-of-the-art performance and versatile capabilities.
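The core idea of early fusion is that images are quantized into discrete tokens from a separate codebook, and those tokens are interleaved with text tokens into a single sequence modeled by one autoregressive transformer. The sketch below illustrates that interleaving with stand-in tokenizers; the function names, vocabulary sizes, and sentinel tokens are illustrative assumptions, not Meta's actual implementation (Chameleon uses a learned VQ image tokenizer and a BPE text tokenizer).

```python
# Hypothetical sketch of early-fusion mixed-modal tokenization.
# All names and sizes here are assumptions for illustration only.

TEXT_VOCAB_SIZE = 65536        # assumed text vocabulary size
IMAGE_VOCAB_SIZE = 8192        # assumed image codebook size
BOI, EOI = 0, 1                # assumed sentinels marking an image span

def tokenize_text(text):
    """Stand-in for a BPE tokenizer: map bytes into the text id range."""
    return [2 + (b % (TEXT_VOCAB_SIZE - 2)) for b in text.encode("utf-8")]

def tokenize_image(pixels):
    """Stand-in for a VQ image tokenizer: quantize 'pixels' to codebook
    ids, offset past the text vocabulary so both modalities share one
    embedding table in the same transformer."""
    return [TEXT_VOCAB_SIZE + (p % IMAGE_VOCAB_SIZE) for p in pixels]

def fuse(segments):
    """Flatten interleaved (modality, payload) segments into one token
    sequence that a single autoregressive model can be trained on."""
    seq = []
    for modality, payload in segments:
        if modality == "text":
            seq.extend(tokenize_text(payload))
        else:  # image: wrap its tokens in begin/end sentinels
            seq.append(BOI)
            seq.extend(tokenize_image(payload))
            seq.append(EOI)
    return seq

sequence = fuse([
    ("text", "A photo of a cat:"),
    ("image", [12, 993, 7500, 42]),   # toy stand-in pixel values
    ("text", "Caption: a tabby cat."),
])
```

Because text and image tokens live in one flat vocabulary, the same next-token prediction objective covers both modalities, which is what lets the model generate text and images in any order at inference time.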