Will SAM 2 be cited in at least one major academic paper or conference by end of 2024?
Yes • 50%
No • 50%
Resolution source: academic databases and conference proceedings (e.g., IEEE, ACM)
Meta Unveils SAM 2 for Real-Time Object Segmentation in Images and Videos Under Apache 2.0
Jul 30, 2024, 12:52 AM
Meta has unveiled the Segment Anything Model 2 (SAM 2), a model for real-time, promptable object segmentation in both images and videos. SAM 2 generalizes its predecessor, SAM 1, from image segmentation to video segmentation, and is available under the Apache 2.0 license. The model can track objects across frames and drive video effects, making it a valuable tool for video editors, and it offers improved zero-shot accuracy over SAM 1. Alongside the model, Meta released a dataset of approximately 51,000 videos and 600,000 masklets (spatio-temporal masks). The release, with contributions from Nikhil Ravi, underscores Meta's commitment to open-source AI.
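To make the term concrete: a "masklet" in the released dataset is a spatio-temporal mask, i.e., one binary mask per video frame for a single tracked object. The sketch below (illustrative only, not Meta's code or the SAM 2 API) represents a masklet as a boolean array of shape (frames, height, width) and computes per-frame intersection-over-union, a standard way video segmentation quality is measured.

```python
import numpy as np

def masklet_iou(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Per-frame intersection-over-union between two masklets.

    Both inputs are boolean arrays of shape (frames, height, width).
    Returns an array of one IoU value per frame.
    """
    inter = np.logical_and(pred, gt).sum(axis=(1, 2))
    union = np.logical_or(pred, gt).sum(axis=(1, 2))
    # Frames where both masks are empty count as a perfect match (IoU = 1).
    return np.where(union > 0, inter / np.maximum(union, 1), 1.0)

# Toy 2-frame, 4x4 masklets: the object shifts one pixel between frames.
gt = np.zeros((2, 4, 4), dtype=bool)
gt[0, 1:3, 1:3] = True   # frame 0: a 2x2 square
gt[1, 1:3, 2:4] = True   # frame 1: the square shifted one pixel right

pred = np.zeros_like(gt)
pred[0, 1:3, 1:3] = True  # frame 0: exact match
pred[1, 1:3, 1:3] = True  # frame 1: tracker lagged one pixel behind

print(masklet_iou(pred, gt))  # frame 0: IoU = 1.0; frame 1: IoU = 1/3
```

Tracking an object through a video, as SAM 2 does, amounts to producing one such masklet per prompted object; the dataset's 600,000 masklets are ground-truth annotations of this form.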
Yes, at NeurIPS 2024 • 25%
Yes, at ICML 2024 • 25%
Yes, at both NeurIPS and ICML • 25%
No, it will not be featured in any major AI conferences • 25%