How multimodal search works
Every content type is transformed into a unified vector space, enabling cross-modal search in milliseconds.
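The idea above can be sketched in a few lines: once text and images live in the same vector space, cross-modal search is just nearest-neighbor ranking by similarity. The vectors below are tiny made-up examples; a real multimodal model produces embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A text query and two images, all embedded into the same space.
query_vec = [0.9, 0.1, 0.3]  # embedding of "red running shoes" (illustrative)
image_vecs = {
    "shoe.jpg":   [0.8, 0.2, 0.25],
    "sunset.jpg": [0.1, 0.9, 0.6],
}

# Rank images by similarity to the text query.
ranked = sorted(
    image_vecs,
    key=lambda name: cosine_similarity(query_vec, image_vecs[name]),
    reverse=True,
)
```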
Search every content type
From images to video to audio – one unified search API for all your media.
Text-to-image search
Find images with natural language. Describe what you want, get matches instantly.
Visual similarity
Find visually similar content via vector embeddings. Images, thumbnails, product photos.
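A minimal sketch of what a "find similar images" request body could look like, modeled on Meilisearch's similar-documents route. The document id ("sku-123") and embedder name ("image") are placeholders, not real values.

```python
import json

def similar_payload(document_id, embedder, limit=10):
    """Build a similar-documents request body (Meilisearch-style shape)."""
    return {"id": document_id, "embedder": embedder, "limit": limit}

# Serialize the body you would POST to the similar-documents endpoint.
body = json.dumps(similar_payload("sku-123", "image", limit=5))
```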
Video & audio search
Index and search video transcripts, audio files, and rich media alongside your text documents.
Hybrid retrieval
Combine keyword and vector search. Tune the semantic ratio to fit your use case.
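A sketch of tuning that ratio in a hybrid search request body, following Meilisearch's `hybrid` parameter shape: a `semanticRatio` of 0.0 is pure keyword search, 1.0 is pure vector search. The embedder name "default" is a placeholder.

```python
def hybrid_search_payload(query, semantic_ratio=0.5, embedder="default"):
    """Build a hybrid search body mixing keyword and vector relevance."""
    if not 0.0 <= semantic_ratio <= 1.0:
        raise ValueError("semantic_ratio must be between 0 and 1")
    return {
        "q": query,
        "hybrid": {"semanticRatio": semantic_ratio, "embedder": embedder},
    }

# Lean 80% on semantic relevance, 20% on keyword matching.
payload = hybrid_search_payload("vintage camera", semantic_ratio=0.8)
```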
Multi-embedder support
Different embedding models per content type. Composite embedders mix multiple sources.
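Per-content-type embedders can be declared in index settings; the sketch below follows the shape of Meilisearch's `embedders` setting. The model name, endpoint URL, and dimension count are placeholder assumptions, not recommendations.

```python
# One embedder per content type: an OpenAI-style text embedder and a
# custom REST embedder for images. All names/URLs here are placeholders.
embedder_settings = {
    "embedders": {
        "text": {
            "source": "openAi",
            "model": "text-embedding-3-small",  # placeholder model name
            "documentTemplate": "{{doc.title}} {{doc.description}}",
        },
        "image": {
            "source": "rest",
            "url": "https://example.com/embed",  # placeholder endpoint
            "dimensions": 512,                   # placeholder dimension count
        },
    }
}
```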
Metadata filtering
Filter by tags, dimensions, duration, format, or any custom attribute.
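A sketch of combining such filters in a search body, using Meilisearch's filter-expression style. The attribute names (`format`, `duration`) are examples; attributes must be declared filterable in the index settings before they can be used.

```python
def and_filter(*clauses):
    """Join filter clauses with AND, filter-expression style."""
    return " AND ".join(clauses)

# Search for jpeg images under five minutes of associated video.
payload = {
    "q": "mountain lake",
    "filter": and_filter("format = 'jpeg'", "duration < 300"),
}
```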
Works with multimodal embedding providers
Native support for models that understand images, video, and text.
5 models from 4 providers:
Cohere: 1 model
Voyage AI: 1 model
Jina AI: 1 model
Custom (any provider): 2 models
Meilisearch is compatible with any embedding model that exposes a REST API.
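Plugging in a custom provider boils down to speaking its REST contract. The sketch below assumes a hypothetical `/embed` endpoint that accepts `{"input": ...}` and replies with `{"embedding": [...]}`; match these shapes to whatever your provider actually returns.

```python
import json

def build_embed_request(text):
    """JSON body our hypothetical /embed endpoint would accept."""
    return json.dumps({"input": text})

def parse_embed_response(raw):
    """Extract the vector from a hypothetical {"embedding": [...]} reply."""
    return json.loads(raw)["embedding"]

# Round-trip a canned response as the endpoint is only illustrative.
vector = parse_embed_response('{"embedding": [0.1, 0.2, 0.3]}')
```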
Built for every use case
From e-commerce visual search to enterprise media libraries, multimodal search powers discovery across industries.
E-commerce
Let shoppers search by photo or description. Find products visually similar to what they already love.