

Vector storage is coming to Meilisearch to empower search through AI

We're thrilled to release vector storage for Meilisearch to bring fast, relevant search to AI-powered applications.

05 Jul 2023 · 3 min read

Laurent Cazanove, Developer Experience Engineer (@StriftCodes)

Vector search enables efficient retrieval of objects that share similar characteristics. This AI-powered search technique relies on embedding vectors: mathematical representations of objects generated by machine learning models (such as LLMs). Starting with the v1.3 release, Meilisearch supports storing and searching vectors.

Meilisearch v1.3 is out! Read the release notes

Vector search might be the dawn of a new era for search, and its use cases are numerous. In ecommerce, it enables recommendations for similar products. It also allows building multi-modal search, like image or audio search. Combined with conversational AI technologies, it enables the creation of Q&A applications. Adding user-provided information like geolocation and search history can power even more contextual search experiences.

Vector search is also the foundation for semantic search, which aims to understand the query’s meaning. By contrast, traditional lexical search only matches keywords. With semantic search, a "warm clothes" query could return results like gloves, coats, and other items related to winter clothing.

Vector search unlocks a world of new capabilities for search. Take a look at what some users have already implemented: for instance, Meilisearch is available as a LangChain vector store.

Getting Started with Meilisearch Vector Store

Starting with v1.3, you can use Meilisearch as a vector store. Meilisearch allows you to conveniently store vector embeddings alongside your documents. You will need to generate the embeddings with a third-party tool of your choice (e.g., Hugging Face or OpenAI).

First, spin up a Meilisearch instance. You can install Meilisearch locally or create a Meilisearch Cloud account.

Then, enable the vector store experimental feature:

curl \
  -X PATCH 'http://localhost:7700/experimental-features/' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "vectorStore": true
  }'

This guide uses `curl` to make HTTP requests to communicate with Meilisearch. In practice, we recommend using our SDKs instead.
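If you prefer working from code, the same request can be sketched in Python. This is a minimal illustration using the `requests` library rather than an official SDK; the host URL is assumed to be a local, unsecured instance, and you should adapt it (and add an API key header) to your own setup. Building the request with `requests.Request` lets you inspect it before sending:

```python
import requests

MEILI_HOST = "http://localhost:7700"  # assumed local instance, no API key

def enable_vector_store(host: str = MEILI_HOST) -> requests.PreparedRequest:
    """Build the PATCH request that enables the experimental vector store."""
    request = requests.Request(
        "PATCH",
        f"{host}/experimental-features/",
        headers={"Content-Type": "application/json"},
        json={"vectorStore": True},
    )
    return request.prepare()

# Once an instance is running, sending it is a one-liner:
# response = requests.Session().send(enable_vector_store())
```

Separating request construction from sending also makes the call easy to unit-test without a live server.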

Meilisearch now accepts a `_vectors` field in your documents. Use this field to store the vector embedding corresponding to each document.

curl -X POST -H 'content-type: application/json' \
     'localhost:7700/indexes/songs/documents' \
     --data-binary '[
         { "id": 0, "_vectors": [0, 0.8, -0.2], "title": "Across The Universe" },
         { "id": 1, "_vectors": [1, -0.2, 0], "title": "All Things Must Pass" },
         { "id": 2, "_vectors": [0.5, 3, 1], "title": "And Your Bird Can Sing" }
     ]'

After storing your vectorized documents in Meilisearch, you can query them using the search or multi-search routes. To do this, compute the vector of your query (using the same third-party tool) and send it in the `vector` field.

curl -X POST -H 'content-type: application/json' \
   'localhost:7700/indexes/songs/search' \
   --data-binary '{ "vector": [0, 1, 2] }'

When using vector search, returned documents include a `_semanticScore` field:

{
  "hits": [
    { "id": 0, "_vectors": [0, 0.8, -0.2], "title": "Across The Universe", "_semanticScore": 0.6754 },
    { "id": 1, "_vectors": [1, -0.2, 0], "title": "All Things Must Pass", "_semanticScore": 0.7546 },
    { "id": 2, "_vectors": [0.5, 3, 1], "title": "And Your Bird Can Sing", "_semanticScore": 0.78 }
  ],
  "query": "",
  "vector": [0, 1, 2],
  "processingTimeMs": 0,
  "limit": 20,
  "offset": 0,
  "estimatedTotalHits": 3
}
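To build intuition for these scores, here is a small self-contained sketch that ranks the example documents by cosine similarity to the query vector. Note that this is illustrative only: Meilisearch computes `_semanticScore` internally, and its exact metric and values may differ from plain cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0, 1, 2]
documents = {
    "Across The Universe": [0, 0.8, -0.2],
    "All Things Must Pass": [1, -0.2, 0],
    "And Your Bird Can Sing": [0.5, 3, 1],
}

# Rank titles by similarity to the query vector, most similar first.
ranked = sorted(
    documents,
    key=lambda title: cosine_similarity(query, documents[title]),
    reverse=True,
)
print(ranked[0])  # "And Your Bird Can Sing" points most in the query's direction
```

Documents whose vectors point in a similar direction to the query vector score higher, which is what lets a semantic query match documents sharing no keywords with it.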

This API is experimental. You can help us improve it by sharing your feedback in this GitHub discussion.

Vector search is our first step towards semantic search, but our long-term goal is to provide hybrid search, combining the benefits of full-text and semantic search for the most relevant search experience. Clément Renault, founder and CTO of Meilisearch, shared his thoughts on GitHub about exploring semantic search; read it for a founder’s perspective. We can’t wait to share more with you!

Subscribe to our newsletter to follow our progress with AI-powered search. We’ll keep you posted on all updates regarding vector search and semantic search.


We’re excited to take our first steps toward semantic search. We can’t wait to hear your thoughts on using Meilisearch as a vector store. You can share your feedback in this GitHub discussion.

You can stay in the loop by subscribing to our newsletter. To learn more about Meilisearch's future and help shape it, look at our roadmap and participate in our Product Discussions.

For anything else, join our developers community on Discord.

I’ll see you there.
