Which AI model will win the Best AI Innovation Award 2024?
Jina AI's Jina CLIP v1 • 33%
Nomic AI's Nomic-Embed-Vision • 33%
OpenAI's CLIP • 33%
Resolution source: official announcement from the awarding organization
Jina AI and Nomic AI Unveil Superior Multimodal Embedding Models for The Met's 250,000 Artworks
Jun 5, 2024, 03:44 PM
Jina AI and Nomic AI have released new state-of-the-art multimodal embedding models that outperform OpenAI's CLIP on text-image retrieval. Jina AI's Jina CLIP v1 ships with ONNX weights, making it compatible with Transformers.js v3 and able to run with WebGPU acceleration in the browser (see the sketches below). Nomic AI's Nomic Embed Vision integrates text embeddings into a shared multimodal space for high-quality image, text, and cross-modal tasks; it supports an 8k context length and is reported to outperform OpenAI's CLIP, text-embedding-3-small, and Jina CLIP v1. Nomic's embeddings have also been used to build what is described as the first semantic search tool over The Met's collection of 250,000 artworks, enabling efficient and precise searches over large datasets with databases such as MongoDB and Weaviate.
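The Transformers.js v3 claim maps to roughly the following browser-side usage. This is a minimal sketch, assuming jinaai/jina-clip-v1 loads through Transformers.js's CLIP projection classes and that the browser exposes WebGPU; the query text and image URL are invented for illustration, so check the model card for the exact API.

```ts
// Sketch: text–image retrieval with Jina CLIP v1 via Transformers.js v3.
// Assumption: the model's ONNX weights load through the CLIP*WithProjection
// classes, and `device: 'webgpu'` enables WebGPU where the browser supports it.
import {
  AutoTokenizer,
  AutoProcessor,
  CLIPTextModelWithProjection,
  CLIPVisionModelWithProjection,
  RawImage,
  cos_sim,
} from '@huggingface/transformers';

const modelId = 'jinaai/jina-clip-v1';

// Load the text tower (tokenizer + projection head) and the vision tower.
const tokenizer = await AutoTokenizer.from_pretrained(modelId);
const textModel = await CLIPTextModelWithProjection.from_pretrained(modelId, {
  device: 'webgpu',
});
const processor = await AutoProcessor.from_pretrained(modelId);
const visionModel = await CLIPVisionModelWithProjection.from_pretrained(modelId, {
  device: 'webgpu',
});

// Embed a text query and an image into the shared space.
const textInputs = tokenizer(['a woman reading in a garden'], {
  padding: true,
  truncation: true,
});
const { text_embeds } = await textModel(textInputs);

const image = await RawImage.read('https://example.com/painting.jpg'); // hypothetical URL
const imageInputs = await processor(image);
const { image_embeds } = await visionModel(imageInputs);

// Cosine similarity between the two embeddings ranks candidate images.
console.log(cos_sim(text_embeds.data, image_embeds.data));
```

Running the same code over many images and sorting by the similarity score gives the text-to-image retrieval the story benchmarks against OpenAI's CLIP.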
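The Met search tool itself is not shown here, so the following is only a toy illustration of the retrieval step: brute-force cosine similarity over precomputed embeddings. The story's production setup uses vector databases (MongoDB, Weaviate) rather than an in-memory array, and the artwork titles and vectors below are invented.

```ts
// Toy in-memory semantic search over artwork embeddings, standing in for a
// vector-database query. All data is made up for illustration.
type Artwork = { title: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank artworks by similarity to a query embedding (e.g. produced by a
// multimodal model's text tower) and return the top k matches.
function search(query: number[], artworks: Artwork[], k = 5): Artwork[] {
  return [...artworks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}

// Usage with dummy 3-dimensional embeddings:
const collection: Artwork[] = [
  { title: 'Wheat Field with Cypresses', embedding: [0.9, 0.1, 0.2] },
  { title: 'Bridge over a Pond of Water Lilies', embedding: [0.2, 0.8, 0.1] },
];
console.log(search([0.85, 0.15, 0.2], collection, 1)[0].title);
```

A dedicated vector store replaces the linear scan with an approximate nearest-neighbor index, which is what makes search over 250,000 artworks fast in practice.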