Hedra Labs Launches Character-1: New Multimodal Model for Controllable Video Generation, Even Handles Beards
Jun 18, 2024, 04:08 PM
Hedra Labs has launched Character-1, a new multimodal foundation model for controllable video generation that focuses on expressive characters with full-motion video and synced sound. Developed by a team of ex-Stanford researchers, the model supports the generation of dynamic 3D content and human-centric videos, and aims to give creators tools for expressive control that make dedicated lip-sync apps obsolete. It can animate faces from audio or text, letting users make any image talk or sing, and can even handle beards. Early-access users have praised its capabilities and its potential for creative applications.
Markets
Outcomes: No • 50% | Yes • 50%
Resolution source: Industry rankings and reports from credible sources like Gartner or Forrester

Outcomes: No • 50% | Yes • 50%
Resolution source: Announcements from Hedra Labs or video editing software companies

Outcomes: Yes • 50% | No • 50%
Resolution source: Hedra Labs official announcements or press releases

Outcomes: Virtual influencers • 33% | Animated storytelling • 33% | Lip sync videos • 33%
Resolution source: User application reports from Hedra Labs

Outcomes: Real-time collaboration • 33% | Expanded language support • 33% | Improved beard rendering • 33%
Resolution source: Official feature release notes from Hedra Labs

Outcomes: Educational Institutions • 25% | Content Creators • 25% | Marketing Professionals • 25% | Entertainment Industry • 25%
Resolution source: User demographic reports from Hedra Labs