Will Nomic AI's semantic search tool be integrated into The Met's public website by September 30, 2024?
Yes • 50%
No • 50%
Resolution source: an official announcement from The Met or Nomic AI, or the tool being visible on The Met's public website.
Jina AI and Nomic AI Unveil Superior Multimodal Embedding Models for The Met's 250,000 Artworks
Jun 5, 2024, 03:44 PM
Jina AI and Nomic AI have released new state-of-the-art multimodal embedding models that outperform OpenAI CLIP in text-image retrieval. Jina AI's Jina CLIP v1 ships with ONNX weights, making it compatible with Transformers.js v3 and able to run with WebGPU acceleration. Nomic AI's Nomic Embed Vision integrates with Nomic's text embeddings in a shared multimodal space, enabling high-quality image, text, and multimodal tasks; it outperforms both OpenAI CLIP and text-embedding-3-small, supports an 8k context length, and also surpasses Jina CLIP. In addition, Nomic AI's embeddings have been used to build a semantic search tool over The Met's collection of 250,000 artworks, enabling efficient and precise search across the dataset using vector databases such as MongoDB and Weaviate. The tool is the first of its kind.
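To illustrate the kind of retrieval such a tool performs, here is a minimal sketch of CLIP-style text-to-image search: embed artwork images and a text query into the same vector space, then rank images by cosine similarity. It is not the Met tool's actual implementation (which is not public); it uses the openly available openai/clip-vit-base-patch32 checkpoint as a stand-in for Jina CLIP v1 or Nomic Embed Vision, a local "artworks" folder of JPEGs as hypothetical data, and an in-memory index where a production system would use a vector database such as MongoDB Atlas Vector Search or Weaviate.

```python
# Minimal sketch of CLIP-style semantic image search (illustrative only).
# Assumes a local folder of artwork images; uses OpenAI's public CLIP
# checkpoint as a stand-in for Jina CLIP v1 / Nomic Embed Vision.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    """Return L2-normalized image embeddings for a list of file paths."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def embed_text(query):
    """Return an L2-normalized embedding for a text query."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Build a tiny in-memory index; a real deployment would store these
# vectors in a vector database (e.g., MongoDB Atlas Vector Search, Weaviate).
image_paths = sorted(Path("artworks").glob("*.jpg"))
index = embed_images(image_paths)

# Rank artworks by cosine similarity to the text query.
query_vec = embed_text("impressionist painting of a garden in spring")
scores = (index @ query_vec.T).squeeze(1)
for score, path in sorted(zip(scores.tolist(), image_paths), reverse=True)[:5]:
    print(f"{score:.3f}  {path.name}")
```

The same ranking logic applies whichever embedding model is used; swapping in Jina CLIP v1 or Nomic Embed Vision only changes how the vectors are produced, not how they are searched.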