# Documentation
Source: https://www.meilisearch.com/docs/home
Discover our guides, examples, and APIs to build fast and relevant search experiences with Meilisearch.
## Overview
Get an overview of Meilisearch features and philosophy.
See how Meilisearch compares to alternatives.
Use Meilisearch with your favorite language and framework.
## Use case demos
Take a look at example applications built with Meilisearch.
Search through multiple Eloquent models with Laravel.
Browse millions of products in our Nuxt 3 e-commerce demo app.
Search across the TMDB movie database using Next.js.
# Which embedder should I choose?
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/choose_an_embedder
General guidance on how to choose the embedder best suited for projects using AI-powered search.
Meilisearch officially supports many different embedders, such as OpenAI, Hugging Face, and Ollama, as well as the majority of embedding generators with a RESTful API.
This article contains general guidance on how to choose the embedder best suited for your project.
## When in doubt, choose OpenAI
OpenAI returns relevant search results across different subjects and datasets. It is suited for the majority of applications and Meilisearch actively supports and improves OpenAI functionality with every new release.
In the majority of cases, and especially if this is your first time working with LLMs and AI-powered search, choose OpenAI.
## If you are already using a specific AI service, choose the REST embedder
If you are already using a specific model from a compatible embedder, choose Meilisearch's REST embedder. This ensures you continue building upon tooling and workflows already in place with minimal configuration necessary.
## If dealing with non-textual content, choose the user-provided embedder
Meilisearch does not support searching images, audio, or any other content not presented as text. This limitation applies to both queries and documents. For example, Meilisearch's built-in embedder sources cannot search using an image instead of text. They also cannot use text to search for images without attached textual metadata.
In these cases, you will have to supply your own embeddings.
## Only choose Hugging Face when self-hosting small static datasets
Although it returns very relevant search results, the Hugging Face embedder runs directly on your server. This may lead to lower performance and extra costs when you are hosting Meilisearch on a service like DigitalOcean or AWS.
That said, Hugging Face can be a good embedder for datasets under 10k documents that you don't plan to update often.
Meilisearch Cloud does not support embedders with `{"source": "huggingFace"}`.
To implement Hugging Face embedders on Meilisearch Cloud, use [Hugging Face Inference Endpoints with the REST embedder](/guides/embedders/huggingface).
# Configure a REST embedder
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/configure_rest_embedder
Create Meilisearch embedders using any provider with a REST API
You can integrate any text embedding generator with Meilisearch if your chosen provider offers a public REST API.
The process of integrating a REST embedder with Meilisearch varies depending on the provider and the way it structures its data. This guide shows you where to find the information you need, then walks you through configuring your Meilisearch embedder based on the information you found.
## Find your embedder provider's documentation
Each provider requires queries to follow a specific structure.
Before beginning to create your embedder, locate your provider's documentation for embedding creation. This should contain the information you need regarding API requests, request headers, and responses.
For example, [Mistral's embeddings documentation](https://docs.mistral.ai/api/#tag/embeddings) is part of their API reference. In the case of [Cloudflare's Workers AI](https://developers.cloudflare.com/workers-ai/models/bge-base-en-v1.5/#Parameters), expected input and response are tied to your chosen model.
## Set up the REST source and URL
Open your text editor and create an embedder object. Give it a name and set its source to `"rest"`:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest"
}
}
```
Next, configure the URL Meilisearch should use to contact the embedding provider:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL"
}
}
```
Setting an embedder name, a `source`, and a `url` is mandatory for all REST embedders.
## Configure the data Meilisearch sends to the provider
Meilisearch's `request` field defines the structure of the input it will send to the provider. The way you must fill this field changes for each provider.
For example, Mistral expects two mandatory parameters: `model` and `input`. It also accepts one optional parameter: `encoding_format`. Cloudflare instead only expects a single field, `text`.
### Choose a model
In many cases, your provider requires you to explicitly set which model you want to use to create your embeddings. For example, in Mistral, `model` must be a string specifying a valid Mistral model.
Update your embedder object, adding this field and its value:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME"
}
}
}
```
In Cloudflare's case, the model is part of the API route itself and doesn't need to be specified in your `request`.
### The embedding prompt
The prompt corresponds to the data that the provider will use to generate your document embeddings. Its specific name changes depending on the provider you chose. In Mistral, this is the `input` field. In Cloudflare, it's called `text`.
Most providers accept either a string or an array of strings. A single string will generate one request per document in your database:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": "{{text}}"
}
}
}
```
`{{text}}` indicates Meilisearch should replace the contents of a field with your document data, as indicated in the embedder's [`documentTemplate`](/reference/api/settings/update-embedders).
An array of strings allows Meilisearch to send up to 10 documents in one request, reducing the number of API calls to the provider:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": [
"{{text}}",
"{{..}}"
]
}
}
}
```
When using array prompts, the first item must be `{{text}}`. If you want to send multiple documents in a single request, the second array item must be `{{..}}`. When using `"{{..}}"`, it must be present in both `request` and `response`.
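To see why batching matters, here is a small back-of-the-envelope sketch in Python (the document counts are made up for illustration; the batch size of 10 is the limit mentioned above):

```python theme={null}
import math

def request_count(num_documents: int, batch_size: int) -> int:
    """How many embedding API calls are needed to process all documents."""
    return math.ceil(num_documents / batch_size)

# Single-string prompt: one request per document
print(request_count(250, batch_size=1))   # 250
# Array prompt: up to 10 documents per request
print(request_count(250, batch_size=10))  # 25
```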
When using other embedding providers, `input` might be called something else, like `text` or `prompt`:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"text": "{{text}}"
}
}
}
```
### Provide other request fields
You may add as many fields to the `request` object as you need. Meilisearch will include them when querying the embeddings provider.
For example, Mistral allows you to optionally configure an `encoding_format`. Set it by declaring this field in your embedder's `request`:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": ["{{text}}", "{{..}}"],
"encoding_format": "float"
}
}
}
```
## The embedding response
You must indicate where Meilisearch can find the document embeddings in the provider's response. Consult your provider's API documentation, paying attention to where it places the embeddings.
Cloudflare's embeddings are located in an array inside `response.result.data`. Describe the full path to the embedding array in your embedder's `response`. The first array item must be `"{{embedding}}"`:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"text": "{{text}}"
},
"response": {
"result": {
"data": ["{{embedding}}"]
}
}
}
}
```
If the response contains multiple embeddings, use `"{{..}}"` as its second value:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": [
"{{text}}",
"{{..}}"
]
},
"response": {
"data": [
{
"embedding": "{{embedding}}"
},
"{{..}}"
]
}
}
}
```
When using `"{{..}}"`, it must be present in both `request` and `response`.
It is possible the response contains a single embedding outside of an array. Use `"{{embedding}}"` as its value:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": "{{text}}"
},
"response": {
"data": {
"text": "{{embedding}}"
}
}
}
}
```
It is also possible the response is a single item or array not nested in an object:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": [
"{{text}}",
"{{..}}"
]
},
"response": [
"{{embedding}}",
"{{..}}"
]
}
}
```
The prompt data type does not necessarily match the response data type. For example, Cloudflare always returns an array of embeddings, even if the prompt in your request was a string.
Meilisearch silently ignores `response` fields not pointing to an `"{{embedding}}"` value.
## The embedding header
Your provider might also require you to add specific headers to your requests. For example, Azure's AI services require an `api-key` header containing an API key.
Add the `headers` field to your embedder object:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"text": "{{text}}"
},
"response": {
"result": {
"data": ["{{embedding}}"]
}
},
"headers": {
"FIELD_NAME": "FIELD_VALUE"
}
}
}
```
By default, Meilisearch includes a `Content-Type` header. It may also include an authorization bearer token, if you have supplied an API key.
## Configure the remainder of the embedder
`request`, `response`, and `headers` are the only fields specific to REST embedders.
Like other remote embedders, you're likely required to supply an `apiKey`:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": ["{{text}}", "{{..}}"],
"encoding_format": "float"
},
"response": {
"data": [
{
"embedding": "{{embedding}}"
},
"{{..}}"
]
},
"apiKey": "PROVIDER_API_KEY",
}
}
```
You should also set a `documentTemplate`. Good templates are short and include only highly relevant document data:
```json theme={null}
{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": ["{{text}}", "{{..}}"],
"encoding_format": "float"
},
"response": {
"data": [
{
"embedding": "{{embedding}}"
},
"{{..}}"
]
},
"apiKey": "PROVIDER_API_KEY",
"documentTemplate": "SHORT_AND_RELEVANT_DOCUMENT_TEMPLATE"
}
}
```
## Update your index settings
Now that the embedder object is complete, update your index settings:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/embedders' \
-H 'Content-Type: application/json' \
--data-binary '{
"EMBEDDER_NAME": {
"source": "rest",
"url": "PROVIDER_URL",
"request": {
"model": "MODEL_NAME",
"input": ["{{text}}", "{{..}}"]
},
"response": {
"data": [
{ "embedding": "{{embedding}}" },
"{{..}}"
]
},
"apiKey": "PROVIDER_API_KEY",
"documentTemplate": "SHORT_AND_RELEVANT_DOCUMENT_TEMPLATE"
}
}'
```
## Conclusion
In this guide you have seen a few examples of how to configure a REST embedder in Meilisearch. Though it used Mistral and Cloudflare, the general steps remain the same for all providers:
1. Find the provider's REST API documentation
2. Identify the embedding creation request parameters
3. Include parameters in your embedder's `request`
4. Identify the embedding creation response
5. Reproduce the path to the returned embeddings in your embedder's `response`
6. Add any required HTTP headers to your embedder's `headers`
7. Update your index settings with the new embedder
# Differences between full-text and AI-powered search
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/difference_full_text_ai_search
Meilisearch offers two types of search: full-text search and AI-powered search. This article explains their differences and intended use cases.
## Full-text search
This is Meilisearch's default search type. When performing a full-text search, Meilisearch checks the indexed documents for acceptable matches to a set of search terms. It is a fast and reliable search method.
For example, when searching for `"pink sandals"`, full-text search will only return clothing items explicitly mentioning these two terms. Searching for `"pink summer shoes for girls"` is likely to return fewer and less relevant results.
## AI-powered search
AI-powered search is Meilisearch's newest search method. It returns results based on a query's meaning and context.
AI-powered search uses LLM providers such as OpenAI and Hugging Face to generate vector embeddings representing the meaning and context of both query terms and documents. It then compares these vectors to find semantically similar search results.
When using AI-powered search, Meilisearch returns both full-text and semantic results by default. This is also called hybrid search.
With AI-powered search, searching for `"pink sandals"` remains just as effective, and queries for `"cute pink summer shoes for girls"` will also return relevant results, including light-colored open shoes.
## Use cases
Full-text search is a reliable choice that works well in most scenarios. It is fast, less resource-intensive, and requires no extra configuration. It is best suited for situations where you need precise matches to a query and your users are familiar with the relevant keywords.
AI-powered search combines the flexibility of semantic search with the performance of full-text search. Most searches, whether short and precise or long and vague, will return very relevant search results. In most cases, AI-powered search will offer your users the best search experience, but will require extra configuration. AI-powered search may also entail extra costs if you use a third-party service such as OpenAI to generate vector embeddings.
# Document template best practices
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/document_template_best_practices
This guide shows you what to do and what to avoid when writing a `documentTemplate`.
When using AI-powered search, Meilisearch generates prompts by filling in your embedder's `documentTemplate` with each document's data. The better your prompt is, the more relevant your search results.
This guide shows you what to do and what to avoid when writing a `documentTemplate`.
## Sample document
Take a look at this document from a database of movies:
```json theme={null}
{
"id": 2,
"title": "Ariel",
"overview": "Taisto Kasurinen is a Finnish coal miner whose father has just committed suicide and who is framed for a crime he did not commit. In jail, he starts to dream about leaving the country and starting a new life. He escapes from prison but things don't go as planned...",
"genres": [
"Drama",
"Crime",
"Comedy"
],
"poster": "https://image.tmdb.org/t/p/w500/ojDg0PGvs6R9xYFodRct2kdI6wC.jpg",
"release_date": 593395200
}
```
## Do not use the default `documentTemplate`
Use a custom `documentTemplate` value in your embedder configuration.
The default `documentTemplate` includes all searchable fields with non-`null` values. In most cases, this adds noise and more information than the embedder needs to provide relevant search results.
## Only include highly relevant information
Take a look at your document and identify the most relevant fields. A good `documentTemplate` for the sample document could be:
```
"A movie called {{doc.title}} about {{doc.overview}}"
```
In the sample document, `poster` and `id` contain data that has little semantic importance and can be safely excluded. The data in `genres` and `release_date` is very useful for filters, but say little about this specific film.
This leaves two relevant fields: `title` and `overview`.
## Keep prompts short
For the best results, keep prompts somewhere between 15 and 45 words:
```
"A movie called {{doc.title}} about {{doc.overview | truncatewords: 20}}"
```
In the sample document, the `overview` alone is 49 words. Use Liquid's [`truncate`](https://shopify.github.io/liquid/filters/truncate/) or [`truncatewords`](https://shopify.github.io/liquid/filters/truncatewords/) to shorten it.
Overly short prompts do not give the embedder enough context to properly understand a document. Overly long prompts instead provide too much information and make it hard for the embedder to identify what is truly relevant about a document.
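As a rough illustration of what `truncatewords` does, here is a Python sketch of the same idea (not Shopify's Liquid implementation, which by default also appends an ellipsis when it cuts text):

```python theme={null}
def truncatewords(text: str, count: int, ellipsis: str = "...") -> str:
    """Keep the first `count` words, appending an ellipsis when text was cut."""
    words = text.split()
    if len(words) <= count:
        return text
    return " ".join(words[:count]) + ellipsis

overview = "Taisto Kasurinen is a Finnish coal miner whose father has just committed suicide"
print(truncatewords(overview, 5))  # Taisto Kasurinen is a Finnish...
```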
## Add guards for missing fields
Some documents might not contain all the fields you expect. If your template directly references a missing field, Meilisearch will throw an error when indexing documents.
To prevent this, use Liquid’s `if` statements to add guards around fields:
```
{% if doc.title %}
A movie called {{ doc.title }}
{% endif %}
```
This ensures the template only tries to include data that already exists in a document. If a field is missing, the embedder still receives a valid and useful prompt without errors.
## Conclusion
In this article you saw the main steps to generating prompts that lead to relevant AI-powered search results:
* Do not use the default `documentTemplate`
* Only include relevant data
* Truncate long fields
* Add guards for missing fields
# Getting started with AI-powered search
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/getting_started_with_ai_search
AI-powered search uses LLMs to retrieve search results. This tutorial shows you how to configure an OpenAI embedder and perform your first search.
[AI-powered search](https://meilisearch.com/solutions/vector-search), sometimes also called vector search or hybrid search, uses [large language models (LLMs)](https://en.wikipedia.org/wiki/Large_language_model) to retrieve search results based on the meaning and context of a query.
This tutorial will walk you through configuring AI-powered search in your Meilisearch project. You will see how to set up an embedder with OpenAI, generate document embeddings, and perform your first search.
## Requirements
* A running Meilisearch project
* An [OpenAI API key](https://platform.openai.com/api-keys)
* A command-line console
## Create a new index
First, create a new Meilisearch project. If this is your first time using Meilisearch, follow the [quick start](/learn/getting_started/cloud_quick_start) then come back to this tutorial.
Next, create a `kitchenware` index and add [this kitchenware products dataset](/assets/datasets/kitchenware.json) to it. It will take Meilisearch a few moments to process your request, but you can continue to the next step while your data is indexing.
## Generate embeddings with OpenAI
In this step, you will configure an OpenAI embedder. Meilisearch uses **embedders** to translate documents into **embeddings**, which are mathematical representations of a document's meaning and context.
Open a blank file in your text editor. You will only use this file to build your embedder one step at a time, so there's no need to save it if you plan to finish the tutorial in one sitting.
### Choose an embedder name
In your blank file, create your `embedder` object:
```json theme={null}
{
"products-openai": {}
}
```
`products-openai` is the name of your embedder for this tutorial. You can name your embedder anything you want, but try to keep it short, simple, and easy to remember.
### Choose an embedder source
Meilisearch relies on third-party services to generate embeddings. These services are often referred to as the embedder source.
Add a new `source` field to your embedder object:
```json theme={null}
{
"products-openai": {
"source": "openAi"
}
}
```
Meilisearch supports several embedder sources. This tutorial uses OpenAI because it is a good option that fits most use cases.
### Choose an embedder model
Models supply the information required for embedders to process your documents.
Add a new `model` field to your embedder object:
```json theme={null}
{
"products-openai": {
"source": "openAi",
"model": "text-embedding-3-small"
}
}
```
Each embedder service supports different models targeting specific use cases. `text-embedding-3-small` is a cost-effective model for general usage.
### Create your API key
Log into OpenAI, or create an account if this is your first time using it. Generate a new API key using [OpenAI's web interface](https://platform.openai.com/api-keys).
Add the `apiKey` field to your embedder:
```json theme={null}
{
"products-openai": {
"source": "openAi",
"model": "text-embedding-3-small",
"apiKey": "OPEN_AI_API_KEY",
}
}
```
Replace `OPEN_AI_API_KEY` with your own API key.
You may use any key tier for this tutorial. Use at least [Tier 2 keys](https://platform.openai.com/docs/guides/rate-limits/usage-tiers?context=tier-two) in production environments.
### Design a prompt template
Meilisearch embedders only accept textual input, but documents can be complex objects containing different types of data. This means you must convert your documents into a single text field. Meilisearch uses [Liquid](https://shopify.github.io/liquid/basics/introduction/), an open-source templating language, to help you do that.
A good template should be short and only include the most important information about a document. Add the following `documentTemplate` to your embedder:
```json theme={null}
{
"products-openai": {
"source": "openAi",
"model": "text-embedding-3-small",
"apiKey": "OPEN_AI_API_KEY",
"documentTemplate": "An object used in a kitchen named '{{doc.name}}'"
}
}
```
This template starts by giving the general context of the document: `An object used in a kitchen`. Then it adds the information that is specific to each document: `doc` represents your document, and you can access any of its attributes using dot notation. `name` is an attribute with values such as `wooden spoon` or `rolling pin`. Since it is present in all documents in this dataset and describes the product in a few words, it is a good choice to include in the template.
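As an illustration, here is a minimal Python sketch of what the rendering step produces for a sample document. It uses naive string substitution rather than a real Liquid engine, so it only covers simple `{{doc.FIELD}}` placeholders:

```python theme={null}
def render_template(template: str, doc: dict) -> str:
    """Naive stand-in for Liquid rendering: replace {{doc.FIELD}} placeholders."""
    for field, value in doc.items():
        template = template.replace("{{doc." + field + "}}", str(value))
    return template

doc = {"id": 4, "name": "wooden spoon"}
prompt = render_template("An object used in a kitchen named '{{doc.name}}'", doc)
print(prompt)  # An object used in a kitchen named 'wooden spoon'
```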
### Create the embedder
Your embedder object is ready. Send it to Meilisearch by updating your index settings:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/kitchenware/settings/embedders' \
-H 'Content-Type: application/json' \
--data-binary '{
"products-openai": {
"source": "openAi",
"apiKey": "OPEN_AI_API_KEY",
"model": "text-embedding-3-small",
"documentTemplate": "An object used in a kitchen named '\''{{doc.name}}'\''"
}
}'
```
Replace `MEILISEARCH_URL` with the address of your Meilisearch project, and `OPEN_AI_API_KEY` with your [OpenAI API key](https://platform.openai.com/api-keys).
Meilisearch and OpenAI will start processing your documents and updating your index. This may take a few moments, but once it's done you are ready to perform an AI-powered search.
## Perform an AI-powered search
AI-powered searches are very similar to basic text searches. You must query the `/search` endpoint with a request containing both the `q` and the `hybrid` parameters:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/kitchenware/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "kitchen utensils made of wood",
"hybrid": {
"embedder": "products-openai"
}
}'
```
For this tutorial, `hybrid` is an object with a single `embedder` field.
Meilisearch will then return an equal mix of semantic and full-text matches.
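By default, the `hybrid` object's optional `semanticRatio` parameter is `0.5`, which produces the equal mix mentioned above. You can adjust it between `0` (pure full-text) and `1` (pure semantic); for example, a value of `0.8` weights results toward semantic matches:

```bash cURL theme={null}
curl \
  -X POST 'MEILISEARCH_URL/indexes/kitchenware/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "q": "kitchen utensils made of wood",
    "hybrid": {
      "embedder": "products-openai",
      "semanticRatio": 0.8
    }
  }'
```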
## Conclusion
Congratulations! You have created an index, added a small dataset to it, and activated AI-powered search. You then used OpenAI to generate embeddings out of your documents, and performed your first AI-powered search.
## Next steps
Now that you have an overview of the basic steps required to set up and perform AI-powered searches, you might want to implement this feature in your own application.
For practical information on implementing AI-powered search with other services, consult our [guides section](/guides/embedders/openai). There you will find specific instructions for embedders such as [LangChain](/guides/langchain) and [Cloudflare](/guides/embedders/cloudflare).
For more in-depth information, consult the API reference for [embedder settings](/reference/api/settings/get-embedders) and [the `hybrid` search parameter](/reference/api/search/search-with-post#body-hybrid-one-of-1).
# Image search with multimodal embeddings
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/image_search_with_multimodal_embeddings
This article shows you the main steps for performing multimodal text-to-image searches
This guide shows the main steps to search through a database of images using Meilisearch's experimental multimodal embeddings.
## Requirements
* A database of images
* A Meilisearch project
* Access to a multimodal embedding provider (for example, [VoyageAI multimodal embeddings](https://docs.voyageai.com/reference/multimodal-embeddings-api))
## Enable multimodal embeddings
First, enable the `multimodal` experimental feature:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/experimental-features/' \
-H 'Content-Type: application/json' \
--data-binary '{
"multimodal": true
}'
```
You may also enable multimodal in your Meilisearch Cloud project's general settings, under "Experimental features".
## Configure a multimodal embedder
Much like other embedders, multimodal embedders must set their `source` to `rest` and explicitly declare their `url`. Depending on your chosen provider, you may also have to specify `apiKey`.
All multimodal embedders must contain an `indexingFragments` field and a `searchFragments` field. Fragments describe how to build embeddings out of specific parts of your document and search query data.
Fragments must follow the structure defined by the REST API of your chosen provider.
### `indexingFragments`
Use `indexingFragments` to tell Meilisearch how to send document data to the provider's API when generating document embeddings.
For example, when using VoyageAI's multimodal model, an indexing fragment might look like this:
```json theme={null}
"indexingFragments": {
"TEXTUAL_FRAGMENT_NAME": {
"value": {
"content": [
{
"type": "text",
"text": "A document named {{doc.title}} described as {{doc.description}}"
}
]
}
},
"IMAGE_FRAGMENT_NAME": {
"value": {
"content": [
{
"type": "image_url",
"image_url": "{{doc.poster_url}}"
}
]
}
}
}
```
The example above requests Meilisearch to create two sets of embeddings during indexing: one for the textual description of an image, and another for the actual image.
Any JSON string value appearing in a fragment is handled as a Liquid template, where you can interpolate document data present in `doc`. In `IMAGE_FRAGMENT_NAME`, that's `image_url`, which outputs the plain URL string stored in the document's `poster_url` field. In `TEXTUAL_FRAGMENT_NAME`, `text` contains a longer string contextualizing two document fields, `title` and `description`.
### `searchFragments`
Use `searchFragments` to tell Meilisearch how to send search query data to the chosen provider's REST API when converting them into embeddings:
```json theme={null}
"searchFragments": {
"USER_TEXT_FRAGMENT": {
"value": {
"content": [
{
"type": "text",
"text": "{{q}}"
}
]
}
},
"USER_SUBMITTED_IMAGE_FRAGMENT": {
"value": {
"content": [
{
"type": "image_base64",
"image_base64": "data:{{media.image.mime}};base64,{{media.image.data}}"
}
]
}
}
}
```
In this example, two modes of search are configured:
1. A textual search based on the `q` parameter, which will be embedded as text
2. An image search based on a [data URL](https://developer.mozilla.org/en-US/docs/Web/URI/Reference/Schemes/data) rebuilt from the `image.mime` and `image.data` fields inside the query's `media` parameter
Search fragments have access to data present in the query parameters `media` and `q`.
Each semantic search query for this embedder should match exactly one of its search fragments, so each fragment should include at least one disambiguating field.
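On the client side, building the `media` parameter for the image fragment above is a matter of Base64-encoding the file. A minimal Python sketch, with field names following the fragment defined above:

```python theme={null}
import base64

def build_media(image_bytes: bytes, mime: str = "image/jpeg") -> dict:
    """Build the `media` search parameter matching the fragment above."""
    return {
        "image": {
            "mime": mime,
            "data": base64.b64encode(image_bytes).decode("ascii"),
        }
    }

media = build_media(b"\xff\xd8\xff\xe0fake-jpeg-bytes")
# Meilisearch renders the fragment into a standard data URL:
data_url = f"data:{media['image']['mime']};base64,{media['image']['data']}"
print(data_url.startswith("data:image/jpeg;base64,"))  # True
```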
### Complete embedder configuration
Your embedder should look similar to this example with all fragments and embedding provider data:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings' \
-H 'Content-Type: application/json' \
--data-binary '{
"embedders": {
"MULTIMODAL_EMBEDDER_NAME": {
"source": "rest",
"url": "https://api.voyageai.com/v1/multimodal-embeddings",
"apiKey": "VOYAGE_API_KEY",
"indexingFragments": {
"TEXTUAL_FRAGMENT_NAME": {
"value": {
"content": [
{
"type": "text",
"text": "A document named {{doc.title}} described as {{doc.description}}"
}
]
}
},
"IMAGE_FRAGMENT_NAME": {
"value": {
"content": [
{
"type": "image_url",
"image_url": "{{doc.poster_url}}"
}
]
}
}
},
"searchFragments": {
"USER_TEXT_FRAGMENT": {
"value": {
"content": [
{
"type": "text",
"text": "{{q}}"
}
]
}
},
"USER_SUBMITTED_IMAGE_FRAGMENT": {
"value": {
"content": [
{
"type": "image_base64",
"image_base64": "data:{{media.image.mime}};base64,{{media.image.data}}"
}
]
}
}
},
"request": {
"inputs": ["{{fragment}}", "{{..}}"],
"model": "voyage-multimodal-3"
},
"response": {
"data": [
{ "embedding": "{{embedding}}" },
"{{..}}"
]
}
}
}
}'
```
Since the `source` of this embedder is `rest`, you must also specify the `request` and `response` fields. These respectively instruct Meilisearch how to structure the request sent to the embeddings provider and where to find the embeddings in the provider's response.
## Add documents
Once your embedder is configured, you can [add documents to your index](/learn/getting_started/cloud_quick_start) with the [`/documents` endpoint](/reference/api/documents/list-documents-with-get).
During indexing, Meilisearch will automatically generate multimodal embeddings for each document using the configured `indexingFragments`.
## Perform searches
The final step is to perform searches using different types of content.
### Use text to search for images
Use the following search query to retrieve a mix of documents with images matching the description and documents containing the specified keywords:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "a mountain sunset with snow",
"hybrid": {
"embedder": "MULTIMODAL_EMBEDDER_NAME"
}
}'
```
### Use an image to search for images
You can also use an image to search for other, similar images:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"media": {
"image": {
"mime": "image/jpeg",
"data": ""
}
},
"hybrid": {
"embedder": "MULTIMODAL_EMBEDDER_NAME"
}
}'
```
In most cases, you will need a GUI that allows users to submit their images and converts those images to Base64. Creating this interface is outside the scope of this guide.
## Conclusion
With multimodal embedders you can:
1. Configure Meilisearch to embed both images and queries
2. Add image documents — Meilisearch automatically generates embeddings
3. Accept text or image input from users
4. Run hybrid searches using a mix of text and other media types, or run pure semantic searches using only non-textual input
# Image search with user-provided embeddings
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/image_search_with_user_provided_embeddings
This article shows you the main steps for performing text-to-image searches with user-provided embeddings
This article shows you the main steps for performing multimodal searches where you can use text to search through a database of images with no associated metadata.
## Requirements
* A database of images
* A Meilisearch project
* An embedding generation provider you can install locally
## Configure your local embedding generation pipeline
First, set up a system that sends your images to your chosen embedding generation provider, then integrates the returned embeddings into your dataset.
The exact procedure depends heavily on your specific setup, but should include these main steps:
1. Choose a provider you can run locally
2. Choose a model that supports both image and text input
3. Send your images to the embedding generation provider
4. Add the returned embeddings to the `_vectors` field for each image in your database
In most cases your system should run these steps periodically or whenever you update your database.
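For illustration, a minimal Python version of steps 3 and 4 might look like this. Here `embed_image` stands in for a call to your chosen provider and returns a fake deterministic vector so the sketch is self-contained:

```python
import hashlib

def embed_image(image_bytes: bytes) -> list:
    # Placeholder for your local embedding provider; a real pipeline
    # would call a multimodal model (e.g. a CLIP-style encoder) here.
    digest = hashlib.sha256(image_bytes).digest()
    return [b / 255 for b in digest[:8]]

def vectorize(documents: list, embedder_name: str) -> list:
    """Attach an embedding to each image document under `_vectors`."""
    for doc in documents:
        doc["_vectors"] = {embedder_name: embed_image(doc.pop("image_bytes"))}
    return documents

docs = vectorize([{"id": 0, "image_bytes": b"..."}], "EMBEDDER_NAME")
```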
## Configure a user-provided embedder
Configure the `embedder` index setting, setting its source to `userProvided`:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings' \
-H 'Content-Type: application/json' \
--data-binary '{
"embedders": {
"EMBEDDER_NAME": {
"source": "userProvided",
"dimensions": MODEL_DIMENSIONS
}
}
}'
```
Replace `EMBEDDER_NAME` with the name you wish to give your embedder. Replace `MODEL_DIMENSIONS` with the number of dimensions of your chosen model.
## Add documents to Meilisearch
Next, use [the `/documents` endpoint](/reference/api/documents/add-or-replace-documents) to upload the vectorized images.
In most cases, you should automate this step so Meilisearch is up to date with your primary database.
## Set up pipeline for vectorizing queries
Since you are using a `userProvided` embedder, you must also generate the embeddings for the search query. This process should be similar to generating embeddings for your images:
1. Receive user query from your front-end
2. Send query to your local embedding generation provider
3. Perform search using the returned query embedding
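Sketched in Python, with a stub standing in for your local embedding provider, the query-side pipeline reduces to:

```python
def embed_query(text: str) -> list:
    # Stub for your local embedding provider; replace with a real call
    # to the same model used to vectorize your images.
    return [float(len(text)), 0.0, 0.0]

def search_body(query: str, embedder_name: str) -> dict:
    """Vectorize the user query and build the /search request body."""
    return {
        "vector": embed_query(query),
        "hybrid": {"embedder": embedder_name},
    }
```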
## Vector search with user-provided embeddings
Once you have the query's vector, pass it to the `vector` search parameter to perform a semantic AI-powered search:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"vector": VECTORIZED_QUERY,
"hybrid": { "embedder": "EMBEDDER_NAME" }
}'
```
Replace `VECTORIZED_QUERY` with the embedding generated by your provider and `EMBEDDER_NAME` with your embedder.
If your images have any associated metadata, you may perform a hybrid search by including the original `q`:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"vector": VECTORIZED_QUERY,
"hybrid": { "embedder": "EMBEDDER_NAME" },
"q": "QUERY"
}'
```
## Conclusion
You have seen the main steps for implementing image search with Meilisearch:
1. Prepare a pipeline that converts your images into vectors
2. Index the vectorized images with Meilisearch
3. Prepare a pipeline that converts your users' queries into vectors
4. Perform searches using the converted queries
# Retrieve related search results
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/retrieve_related_search_results
This guide shows you how to use the similar documents endpoint to create an AI-powered movie recommendation workflow.
This guide shows you how to use the [similar documents endpoint](/reference/api/similar-documents/get-similar-documents-with-post) to create an AI-powered movie recommendation workflow.
First, you will create an embedder and add documents to your index. You will then perform a search, and use the top result's primary key to retrieve similar movies in your database.
## Prerequisites
* A running Meilisearch project
* A [tier >=2](https://platform.openai.com/docs/guides/rate-limits#usage-tiers) OpenAI API key
## Create a new index
Create an index called `movies` and add this `movies.json` dataset to it. If necessary, consult the [getting started guide](/learn/getting_started/cloud_quick_start) for instructions on index creation.
Each document in the dataset represents a single movie and has the following structure:
* `id`: a unique identifier for each document in the database
* `title`: the title of the movie
* `overview`: a brief summary of the movie's plot
* `genres`: an array of genres associated with the movie
* `poster`: a URL to the movie's poster image
* `release_date`: the release date of the movie, represented as a Unix timestamp
## Configure an embedder
Next, use the Cloud UI to configure an OpenAI embedder:
You may also use the `/settings/embedders` API subroute to configure your embedder:
Replace `MEILISEARCH_URL`, `MEILISEARCH_API_KEY`, and `OPENAI_API_KEY` with the corresponding values in your application.
Meilisearch will start generating the embeddings for all movies in your dataset. Use the returned `taskUid` to [track the progress of this task](/learn/async/asynchronous_operations). Once it is finished, you are ready to start searching.
## Perform a hybrid search
With your documents added and all embeddings generated, you can perform a search:
This request returns a list of movies. Pick the top result and take note of its primary key in the `id` field. In this case, it's the movie "Batman" with `id` 192.
## Return similar documents
Pass the `id` of "Batman" to your index's [`/similar` route](/reference/api/similar-documents/get-similar-documents-with-post), specifying `movies-text` as your embedder:
Meilisearch will return a list of the 20 documents most similar to the movie you chose. You may then choose to display some of these similar results to your users, pointing them to other movies that may also interest them.
## Conclusion
Congratulations! You have successfully built an AI-powered movie search and recommendation system using Meilisearch by:
* Setting up a Meilisearch project and configuring it for AI-powered search
* Implementing hybrid search combining keyword and semantic search capabilities
* Integrating Meilisearch's similarity search for movie recommendations
In a real-life application, you would now start integrating this workflow into a front end, like the one in this [official Meilisearch blog post](https://www.meilisearch.com/blog/add-ai-powered-search-to-react).
# Use AI-powered search with user-provided embeddings
Source: https://www.meilisearch.com/docs/learn/ai_powered_search/search_with_user_provided_embeddings
This guide shows how to perform AI-powered searches with user-generated embeddings instead of relying on a third-party tool.
## Requirements
* A Meilisearch project
## Configure a custom embedder
Configure the `embedder` index setting, setting its source to `userProvided`:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings' \
-H 'Content-Type: application/json' \
--data-binary '{
"embedders": {
"EMBEDDER_NAME": {
"source": "userProvided",
"dimensions": MODEL_DIMENSIONS
}
}
}'
```
Embedders with `source: userProvided` are incompatible with `documentTemplate` and `documentTemplateMaxBytes`.
## Add documents to Meilisearch
Next, use [the `/documents` endpoint](/reference/api/documents/add-or-replace-documents?utm_campaign=vector-search\&utm_source=docs\&utm_medium=vector-search-guide) to upload vectorized documents. Place vector data in your documents' `_vectors` field:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/documents' \
-H 'Content-Type: application/json' \
--data-binary '[
{ "id": 0, "_vectors": { "EMBEDDER_NAME": [0, 0.8, -0.2]}, "text": "frying pan" },
{ "id": 1, "_vectors": { "EMBEDDER_NAME": [1, -0.2, 0]}, "text": "baking dish" }
]'
```
## Vector search with user-provided embeddings
When using a custom embedder, you must vectorize both your documents and user queries.
Once you have the query's vector, pass it to the `vector` search parameter to perform an AI-powered search:
```bash cURL theme={null}
curl -X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"vector": [0, 1, 2],
"hybrid": {
"embedder": "EMBEDDER_NAME"
}
}'
```
```python Python theme={null}
client.index('books').search('',{
"vector": [0, 1, 2],
"hybrid": {
"embedder": "EMBEDDER_NAME"
}
})
```
```rust Rust theme={null}
let results = index
.search()
.with_vector(&[0.0, 1.0, 2.0])
.with_hybrid("EMBEDDER_NAME", 1.0)
.execute()
.await
.unwrap();
```
`vector` must be an array of numbers indicating the search vector. You must generate these yourself when using vector search with user-provided embeddings.
`vector` can be used together with [other search parameters](/reference/api/search/search-with-post?utm_campaign=vector-search\&utm_source=docs\&utm_medium=vector-search-guide), including [`filter`](/reference/api/search/search-with-post#body-filter) and [`sort`](/reference/api/search/search-with-post#body-sort):
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"vector": [0, 1, 2],
"filter": "price < 10",
"sort": ["price:asc"],
"hybrid": {
"embedder": "EMBEDDER_NAME"
}
}'
```
# Analytics metrics reference
Source: https://www.meilisearch.com/docs/learn/analytics/analytics_metrics_reference
This reference describes the metrics you can find in the Meilisearch Cloud analytics interface.
## Total searches
Total number of searches made during the specified period. Multi-search and federated search requests count as a single search.
## Total users
Total number of users who performed a search in the specified period.
Include the [user ID](/learn/analytics/bind_events_user) in your search request headers for the most accurate metrics. If search requests do not provide a user ID, the total number of unique users will be inflated, as each request is assigned its own unique user ID.
## No result rate
Percentage of searches that did not return any results.
## Click-through rate
The ratio between the number of times users clicked on a result and the number of times Meilisearch showed that result. Since users will click on results that potentially match what they were looking for, a higher number indicates better relevancy.
Meilisearch does not have access to this information by default. You must [configure your application to submit click events](/learn/analytics/configure_analytics_events) to Meilisearch if you want to track it in the Meilisearch Cloud interface.
## Average click position
The average list position of clicked search results. A lower number means users have clicked on the first search results and indicates good relevancy.
Meilisearch does not have access to this information by default. You must [configure your application to submit click events](/learn/analytics/configure_analytics_events) to Meilisearch if you want to track it in the Meilisearch Cloud interface.
## Conversion
The percentage of searches resulting in a conversion event in your application. Conversion events vary depending on your application and indicate a user has performed a specific desired action. For example, a conversion for an e-commerce website might mean a user has bought a product.
You must explicitly [configure your application to send conversion](/learn/analytics/configure_analytics_events) events when conditions are met.
It is not possible to associate multiple `conversion` events with the same query.
## Search requests
Total number of search requests within the specified time period.
## Search latency
The amount of time between a user making a search request and Meilisearch Cloud returning search results. A lower number indicates users receive search results more quickly.
## Most searched queries
Most common query terms users have used while searching.
## Searches without results
Most common query terms that did not return any search results.
## Countries with most searches
List of countries that generate the largest amount of search requests.
# Bind search analytics events to a user
Source: https://www.meilisearch.com/docs/learn/analytics/bind_events_user
This guide shows you how to manually differentiate users across search analytics using the X-MS-USER-ID HTTP header.
This article refers to a new version of the Meilisearch Cloud analytics that is being rolled out in November 2025. Some features described here may not yet be available to your account. Contact support for more information.
## Requirements
* A Meilisearch Cloud project
* A method for identifying users
* A pipeline for submitting analytics events
## Assign user IDs to search requests
You can assign user IDs to search requests by including an `X-MS-USER-ID` header with your query:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer DEFAULT_SEARCH_API_KEY' \
-H 'X-MS-USER-ID: SEARCH_USER_ID' \
--data-binary '{}'
```
Replace `SEARCH_USER_ID` with any value that uniquely identifies that user. This may be an authenticated user's ID when running searches from your own back end, or a hash of the user's IP address.
Assigning user IDs to search requests is optional. If a Meilisearch Cloud search request does not have an ID, Meilisearch will automatically generate one.
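One way to produce such an ID, sketched in Python under the assumption that you hash IP addresses rather than use account IDs (the salt value here is an arbitrary example):

```python
import hashlib

def user_id_from_ip(ip: str, salt: str = "my-app-salt") -> str:
    """Derive a stable, pseudonymous user ID from an IP address.

    Salting keeps the ID consistent across requests for analytics
    purposes without storing the raw IP anywhere.
    """
    return hashlib.sha256(f"{salt}:{ip}".encode()).hexdigest()[:16]

headers = {
    "Content-Type": "application/json",
    "X-MS-USER-ID": user_id_from_ip("203.0.113.7"),
}
```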
## Assign user IDs to analytics events
You can assign a user ID to analytics `/events` requests in two ways: via an HTTP header or by including it in the event payload.
If using HTTP headers, include an `X-MS-USER-ID` header with your query:
```bash cURL theme={null}
curl \
-X POST 'https://PROJECT_URL/events' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer DEFAULT_SEARCH_API_KEY' \
-H 'X-MS-USER-ID: SEARCH_USER_ID' \
--data-binary '{
"eventType": "click",
"eventName": "Search Result Clicked",
"indexUid": "products",
"objectId": "0",
"position": 0
}'
```
If you prefer to include the user ID in the event payload, add a `userId` field to your request.
Replace `SEARCH_USER_ID` with any value that uniquely identifies that user. This may be an authenticated user's ID when running searches from your own back end, or a hash of the user's IP address.
It is mandatory to specify a user ID when sending analytics events.
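As a sketch, assuming a Python back end, the payload variant might be assembled like this before being POSTed to `/events`:

```python
import json

def click_event(index_uid: str, object_id: str, position: int, user_id: str) -> str:
    """Serialize a `click` event carrying the user ID in the payload
    instead of the X-MS-USER-ID header."""
    return json.dumps({
        "eventType": "click",
        "eventName": "Search Result Clicked",
        "indexUid": index_uid,
        "objectId": object_id,
        "position": position,
        "userId": user_id,
    })

body = click_event("products", "0", 0, "SEARCH_USER_ID")
```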
## Conclusion
In this guide you have seen how to bind analytics events to specific users by specifying an HTTP header for the search request, and either an HTTP header or a `userId` field for the analytics event.
# Configure Meilisearch Cloud analytics events
Source: https://www.meilisearch.com/docs/learn/analytics/configure_analytics_events
By default, Meilisearch Cloud analytics tracks metrics such as number of users and latency. Follow this guide to track advanced events such as user conversion and click-through rates.
This article refers to a new version of the Meilisearch Cloud analytics that is being rolled out in November 2025. Some features described here may not yet be available to your account. Contact support for more information.
## Requirements
You must have a [Meilisearch Cloud](https://meilisearch.com/cloud) account to access search analytics.
## Configure click-through rate and average click position
To track click-through rate and average click position, Meilisearch Cloud needs to know when users click on search results.
Every time a user clicks on a search result, your application must send a `click` event to the `POST` endpoint of Meilisearch Cloud's `/events` route:
```bash cURL theme={null}
curl \
-X POST 'https://PROJECT_URL/events' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer DEFAULT_SEARCH_API_KEY' \
--data-binary '{
"eventType": "click",
"eventName": "Search Result Clicked",
"indexUid": "products",
"userId": "SEARCH_USER_ID",
"queryUid": "019a01b7-a1c2-7782-a410-bb1274c81393",
"objectId": "0",
"objectName": "DOCUMENT_DESCRIPTION",
"position": 0
}'
```
You must explicitly submit a `userId` associated with the event. This can be any arbitrary string you use to identify the user, such as their profile ID in your application or their hashed IP address. You may submit user IDs directly in the event payload, or by setting an `X-MS-USER-ID` request header.
Specifying a `queryUid` is optional but recommended as it ensures Meilisearch correctly associates the search query with the event. You can find the query UID in the [`metadata` field present in Meilisearch Cloud's search query responses](/reference/api/headers#search-metadata).
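As an illustration in Python, assuming the search response exposes the UID as `metadata.queryUid` (check the `metadata` field of your own responses for the exact shape), an event builder might attach it only when present:

```python
def click_event(search_response: dict, object_id: str, position: int, user_id: str) -> dict:
    """Build a click event, attaching queryUid when the search
    response included a metadata field."""
    event = {
        "eventType": "click",
        "eventName": "Search Result Clicked",
        "indexUid": "products",
        "objectId": object_id,
        "position": position,
        "userId": user_id,
    }
    query_uid = search_response.get("metadata", {}).get("queryUid")
    if query_uid:
        event["queryUid"] = query_uid  # optional but recommended
    return event
```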
For more information, consult the [analytics events endpoint reference](/learn/analytics/events_endpoint).
## Configure conversion rate
To track conversion rate, first identify what should count as a conversion for your application. For example, in a web shop a conversion might be a user finalizing the checkout process.
Once you have established what counts as a conversion in your application, configure it to send a `conversion` event to the `POST` endpoint of Meilisearch Cloud's `/events` route:
```bash cURL theme={null}
curl \
-X POST 'https://PROJECT_URL/events' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer DEFAULT_SEARCH_API_KEY' \
--data-binary '{
"eventType": "conversion",
"eventName": "Product Added To Cart",
"indexUid": "products",
"userId": "SEARCH_USER_ID",
"objectId": "0",
"objectName": "DOCUMENT_DESCRIPTION",
"position": 0
}'
```
You must explicitly submit a `userId` associated with the event. This can be any arbitrary string you use to identify the user, such as their profile ID in your application or their hashed IP address. You may submit user IDs directly in the event payload, or by setting an `X-MS-USER-ID` request header.
Specifying a `queryUid` is optional but recommended as it ensures Meilisearch correctly associates the search query with the event. You can find the query UID in the `metadata` field present in Meilisearch Cloud's search query response.
It is not possible to associate multiple `conversion` events with the same query.
For more information, consult the [analytics events endpoint reference](/learn/analytics/events_endpoint).
# Analytics events endpoint
Source: https://www.meilisearch.com/docs/learn/analytics/events_endpoint
Use `/events` to submit analytics events such as `click` and `conversion` to Meilisearch Cloud.
This article refers to a new version of the Meilisearch Cloud analytics that is being rolled out in November 2025. Some features described here may not yet be available to your account. Contact support for more information.
## Send an event
Send an analytics event to Meilisearch Cloud.
### Body
| Name | Type | Default value | Description |
| :----------- | :------ | :------------ | :------------------------------------------------------------------------------ |
| `eventType` | String | N/A | The event type, such as `click` or `conversion`, required |
| `eventName` | String | N/A | A string describing the event, required |
| `indexUid` | String | N/A | The name of the index of the clicked document, required |
| `queryUid` | String | N/A | The [search query's UID](/reference/api/headers#search-metadata) |
| `objectId` | String | N/A | The clicked document's primary key value |
| `objectName` | String | N/A | A string describing the document |
| `position` | Integer | N/A | An integer indicating the clicked document's position in the search result list |
| `userId` | String | N/A | An arbitrary string identifying the user who performed the action |
```json theme={null}
{
"eventType": "click",
"eventName": "Search Result Clicked",
"indexUid": "products",
"objectId": "0",
"position": 0
}
```
You must provide a string identifying your user if you want Meilisearch Cloud to track conversion and click events.
You may do that in two ways:
* Specify the user ID in the payload, using the `userId` field
* Specify the user ID with the `X-MS-USER-ID` header with your `/events` and search requests
#### Example
##### Response: `201 Created`
# Migrate to the November 2025 Meilisearch Cloud analytics
Source: https://www.meilisearch.com/docs/learn/analytics/migrate_analytics_monitoring
Follow this guide to ensure your Meilisearch Cloud analytics configuration is up to date after the November 2025 release.
## Analytics and monitoring are always active
Analytics and monitoring are now active in all Meilisearch Cloud projects. Basic functionality requires no extra configuration. Tracking user conversion, click-through, and clicked result position must instead be explicitly configured.
## Update URLs in your application
Meilisearch no longer requires `edge.meilisearch.com` to track search analytics. Update your application so all API requests, including click and conversion events, point to your project URL:
```sh theme={null}
curl \
-X POST 'https://PROJECT_URL/indexes/products/search' \
-H 'Content-Type: application/json' \
--data-binary '{ "q": "green socks" }'
```
`edge.meilisearch.com` was deprecated on February 28, 2026 and is no longer functional. You must update all API requests to use your project URL. If you created any custom API keys using the previous URL, you will also need to replace them.
# Tasks and asynchronous operations
Source: https://www.meilisearch.com/docs/learn/async/asynchronous_operations
Meilisearch uses a task queue to handle asynchronous operations. This in-depth guide explains tasks, their uses, and how to manage them using Meilisearch's API.
Many operations in Meilisearch are processed **asynchronously**. These API requests are not handled immediately—instead, Meilisearch places them in a queue and processes them in the order they were received.
## Which operations are asynchronous?
Every operation that might take a long time to be processed is handled asynchronously. Processing operations asynchronously allows Meilisearch to handle resource-intensive tasks without impacting search performance.
Currently, these are Meilisearch's asynchronous operations:
* Creating an index
* Updating an index
* Swapping indexes
* Deleting an index
* Updating index settings
* Adding documents to an index
* Updating documents in an index
* Deleting documents from an index
* Canceling a task
* Deleting a task
* Creating a dump
* Creating snapshots
## Understanding tasks
When an API request triggers an asynchronous process, Meilisearch creates a task and places it in a [task queue](#task-queue).
### Task objects
Tasks are objects containing information that allow you to track their progress and troubleshoot problems when things go wrong.
A [task object](/reference/api/async-task-management/get-task) includes data not present in the original request, such as when the request was enqueued, the type of request, and an error code when the task fails:
```json theme={null}
{
"uid": 1,
"indexUid": "movies",
"status": "enqueued",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 67493,
"indexedDocuments": null
},
"error": null,
"duration": null,
"enqueuedAt": "2021-08-10T14:29:17.000000Z",
"startedAt": null,
"finishedAt": null
}
```
For a comprehensive description of each task object field, consult the [task API reference](/reference/api/async-task-management/get-task).
#### Summarized task objects
When you make an API request for an asynchronous operation, Meilisearch returns a [summarized version](/reference/api/async-task-management/get-task) of the full `task` object.
```json theme={null}
{
"taskUid": 0,
"indexUid": "movies",
"status": "enqueued",
"type": "indexCreation",
"enqueuedAt": "2021-08-11T09:25:53.000000Z"
}
```
Use the summarized task's `taskUid` to [track the progress of a task](/reference/api/async-task-management/get-task).
#### Task `status`
Tasks always contain a field indicating the task's current `status`. This field has one of the following possible values:
* **`enqueued`**: the task has been received and will be processed soon
* **`processing`**: the task is being processed
* **`succeeded`**: the task has been successfully processed
* **`failed`**: a failure occurred when processing the task. No changes were made to the database
* **`canceled`**: the task was canceled
`succeeded`, `failed`, and `canceled` tasks are finished tasks. Meilisearch keeps them in the task database but has finished processing these tasks. It is possible to [configure a webhook](/reference/api/webhooks/list-webhooks) to notify external services when a task is finished.
`enqueued` and `processing` tasks are unfinished tasks. Meilisearch is either processing them or will do so in the future.
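The finished/unfinished distinction translates directly into a polling helper. This sketch takes any `get_task` callable (for example, a thin wrapper around `GET /tasks/{task_uid}`) so it stays server-agnostic:

```python
import time

FINISHED_STATUSES = {"succeeded", "failed", "canceled"}

def wait_for_task(get_task, task_uid: int, interval: float = 0.5, timeout: float = 30.0) -> dict:
    """Poll a task until it reaches a finished status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = get_task(task_uid)
        if task["status"] in FINISHED_STATUSES:
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_uid} still unfinished after {timeout}s")
```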
#### Global tasks
Some task types are not associated with a particular index but apply to the entire instance. These tasks are called global tasks. Global tasks always display `null` for the `indexUid` field.
Meilisearch considers the following task types as global:
* `dumpCreation`
* `taskCancelation`
* `taskDeletion`
* `indexSwap`
* `snapshotCreation`
In a protected instance, your API key must have access to all indexes (`"indexes": ["*"]`) to view global tasks.
### Task queue
After creating a task, Meilisearch places it in a queue. Enqueued tasks are processed one at a time, following the order in which they were requested.
When the task queue reaches its limit (about 10GiB), it will throw a `no_space_left_on_device` error. Users will need to delete tasks using the [delete tasks endpoint](/reference/api/async-task-management/delete-tasks) to continue write operations.
#### Task queue priority
Meilisearch considers certain tasks high-priority and always places them at the front of the queue.
The following types of tasks are always processed as soon as possible in this order:
1. `taskCancelation`
2. `upgradeDatabase`
3. `taskDeletion`
4. `indexCompaction`
5. `export`
6. `snapshotCreation`
7. `dumpCreation`
All other tasks are processed in the order they were enqueued.
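The ordering above can be expressed as a sort key: high-priority types first, everything else FIFO by task `uid`. This is an illustration of the rule, not Meilisearch's actual scheduler code:

```python
# Priority order from the list above; lower index = processed sooner.
PRIORITY = [
    "taskCancelation",
    "upgradeDatabase",
    "taskDeletion",
    "indexCompaction",
    "export",
    "snapshotCreation",
    "dumpCreation",
]

def queue_order(tasks: list) -> list:
    """Sort enqueued tasks: priority types first, then FIFO by uid."""
    def key(task):
        kind = task["type"]
        rank = PRIORITY.index(kind) if kind in PRIORITY else len(PRIORITY)
        return (rank, task["uid"])
    return sorted(tasks, key=key)
```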
## Task workflow
When you make a [request for an asynchronous operation](#which-operations-are-asynchronous), Meilisearch processes all tasks following the same steps:
1. Meilisearch creates a task, puts it in the task queue, and returns a [summarized `task` object](/reference/api/async-task-management/get-task). Task `status` set to `enqueued`
2. When your task reaches the front of the queue, Meilisearch begins working on it. Task `status` set to `processing`
3. Meilisearch finishes the task. Status set to `succeeded` if task was successfully processed, or `failed` if there was an error
**Terminating a Meilisearch instance in the middle of an asynchronous operation is completely safe** and will never adversely affect the database.
### Task batches
Meilisearch processes tasks in batches, grouping tasks for the best possible performance. In most cases, batching should be transparent and have no impact on the overall task workflow. Use [the `/batches` route](/reference/api/async-task-management/list-batches) to obtain more information on batches and how they are processing your tasks.
### Canceling tasks
You can cancel a task while it is `enqueued` or `processing` by using [the cancel tasks endpoint](/reference/api/async-task-management/cancel-tasks). Doing so changes a task's `status` to `canceled`.
Tasks are not canceled when you terminate a Meilisearch instance. Meilisearch discards all progress made on `processing` tasks and resets them to `enqueued`. Task handling proceeds as normal once the instance is relaunched.
### Deleting tasks
[Finished tasks](#task-status) remain visible in [the task list](/reference/api/async-task-management/list-tasks). To delete them manually, use the [delete tasks route](/reference/api/async-task-management/delete-tasks).
Meilisearch stores up to 1M tasks in the task database. If enqueuing a new task would exceed this limit, Meilisearch automatically tries to delete the oldest 100K finished tasks. If there are no finished tasks in the database, Meilisearch does not delete anything and enqueues the new task as usual.
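The eviction rule can be sketched as follows. The limits are parameters here so the logic is easy to follow; in Meilisearch they are fixed at 1M and 100K:

```python
FINISHED = {"succeeded", "failed", "canceled"}

def enqueue(tasks: list, new_task: dict, limit: int = 1_000_000, evict: int = 100_000) -> list:
    """Sketch of the task-database rule: when full, drop the oldest
    finished tasks before enqueuing; never drop unfinished ones."""
    if len(tasks) >= limit:
        oldest_finished = [t["uid"] for t in sorted(tasks, key=lambda t: t["uid"])
                           if t["status"] in FINISHED][:evict]
        dropped = set(oldest_finished)
        tasks = [t for t in tasks if t["uid"] not in dropped]
    return tasks + [new_task]
```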
#### Examples
Suppose you add a new document to your instance using the [add documents endpoint](/reference/api/documents/add-or-replace-documents) and receive a `taskUid` in response.
When you query the [get task endpoint](/reference/api/async-task-management/get-task) using this value, you see that it has been `enqueued`:
```json theme={null}
{
"uid": 1,
"indexUid": "movies",
"status": "enqueued",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 67493,
"indexedDocuments": null
},
"error": null,
"duration": null,
"enqueuedAt": "2021-08-10T14:29:17.000000Z",
"startedAt": null,
"finishedAt": null
}
```
Later, you check the task's progress one more time. It was successfully processed and its `status` changed to `succeeded`:
```json theme={null}
{
"uid": 1,
"indexUid": "movies",
"status": "succeeded",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 67493,
"indexedDocuments": 67493
},
"error": null,
"duration": "PT1S",
"enqueuedAt": "2021-08-10T14:29:17.000000Z",
"startedAt": "2021-08-10T14:29:18.000000Z",
"finishedAt": "2021-08-10T14:29:19.000000Z"
}
```
Had the task failed, the response would have included a detailed `error` object:
```json theme={null}
{
"uid": 1,
"indexUid": "movies",
"status": "failed",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 67493,
"indexedDocuments": 0
},
"error": {
"message": "Document does not have a `:primaryKey` attribute: `:documentRepresentation`.",
"code": "internal",
"type": "missing_document_id",
"link": "https://docs.meilisearch.com/errors#missing-document-id"
},
"duration": "PT1S",
"enqueuedAt": "2021-08-10T14:29:17.000000Z",
"startedAt": "2021-08-10T14:29:18.000000Z",
"finishedAt": "2021-08-10T14:29:19.000000Z"
}
```
If the task had been [canceled](/reference/api/async-task-management/cancel-tasks) while it was `enqueued` or `processing`, it would have the `canceled` status and a non-`null` value for the `canceledBy` field.
After a task has been [deleted](/reference/api/async-task-management/delete-tasks), trying to access it returns a [`task_not_found`](/reference/errors/error_codes#task_not_found) error.
# Filtering tasks
Source: https://www.meilisearch.com/docs/learn/async/filtering_tasks
This guide shows you how to use query parameters to filter tasks and obtain a more readable list of asynchronous operations.
Querying the [get tasks endpoint](/reference/api/async-task-management/list-tasks) returns all tasks that have not been deleted. This unfiltered list may be difficult to parse in large projects.
This guide shows you how to use query parameters to filter tasks and obtain a more readable list of asynchronous operations.
Filtering batches with [the `/batches` route](/reference/api/async-task-management/list-batches) follows the same rules as filtering tasks. Keep in mind that many `/batches` parameters such as `uids` target the tasks included in batches, instead of the batches themselves.
## Requirements
* a command-line terminal
* a running Meilisearch project
## Filtering tasks with a single parameter
Use the get tasks endpoint to fetch all `failed` tasks:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks?statuses=failed'
```
```javascript JS theme={null}
client.tasks.getTasks({ statuses: ['failed', 'canceled'] })
```
```python Python theme={null}
client.get_tasks({'statuses': ['failed', 'canceled']})
```
```php PHP theme={null}
$client->getTasks((new TasksQuery())->setStatuses(['failed', 'canceled']));
```
```java Java theme={null}
TasksQuery query = new TasksQuery().setStatuses(new String[] {"failed", "canceled"});
client.getTasks(query);
```
```ruby Ruby theme={null}
client.get_tasks(statuses: ['failed', 'canceled'])
```
```go Go theme={null}
client.GetTasks(&meilisearch.TasksQuery{
Statuses: []meilisearch.TaskStatus{
meilisearch.TaskStatusFailed,
meilisearch.TaskStatusCanceled,
},
})
```
```csharp C# theme={null}
await client.GetTasksAsync(new TasksQuery { Statuses = new List { TaskInfoStatus.Failed, TaskInfoStatus.Canceled } });
```
```rust Rust theme={null}
let mut query = TasksQuery::new(&client);
let tasks = query
.with_statuses(["failed"])
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
client.getTasks(params: TasksQuery(statuses: [.failed, .canceled])) { result in
switch result {
case .success(let taskResult):
print(taskResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.getTasks(
params: TasksQuery(
statuses: ['failed', 'canceled'],
),
);
```
Use a comma to separate multiple values and fetch both `canceled` and `failed` tasks:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks?statuses=failed,canceled'
```
```rust Rust theme={null}
let mut query = TasksQuery::new(&client);
let tasks = query
.with_statuses(["failed", "canceled"])
.execute()
.await
.unwrap();
```
You may filter tasks based on `uid`, `status`, `type`, `indexUid`, `canceledBy`, or date. Consult the [API reference](/reference/api/async-task-management/list-tasks) for a full list of task filtering parameters.
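For example, the date parameters accept RFC 3339 timestamps. The sketch below (a hypothetical client-side helper, not an SDK call) shows how a `/tasks` URL combining `statuses` with `afterEnqueuedAt` might be assembled; `MEILISEARCH_URL` is the usual placeholder for your instance address:

```python Python theme={null}
from urllib.parse import urlencode

# Build a query string that fetches failed tasks enqueued after a given date.
# `afterEnqueuedAt` expects an RFC 3339 timestamp; see the API reference for
# the full list of date filtering parameters.
params = {
    "statuses": "failed",
    "afterEnqueuedAt": "2024-01-01T00:00:00Z",
}
url = f"MEILISEARCH_URL/tasks?{urlencode(params)}"
print(url)
```

Note that `urlencode` percent-encodes the colons in the timestamp, which Meilisearch accepts.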
## Combining filters
Use the ampersand character (`&`) to combine filters, equivalent to a logical `AND`:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks?indexUids=movies&types=documentAdditionOrUpdate,documentDeletion&statuses=processing'
```
```javascript JS theme={null}
client.tasks.getTasks({
indexUids: ['movies'],
types: ['documentAdditionOrUpdate','documentDeletion'],
statuses: ['processing']
})
```
```python Python theme={null}
client.get_tasks(
{
'indexUids': ['movies'],
'types': ['documentAdditionOrUpdate', 'documentDeletion'],
'statuses': ['processing'],
}
)
```
```php PHP theme={null}
$client->getTasks(
(new TasksQuery())
->setStatuses(['processing'])
->setUids(['movies'])
->setTypes(['documentAdditionOrUpdate', 'documentDeletion'])
);
```
```java Java theme={null}
TasksQuery query =
new TasksQuery()
.setStatuses(new String[] {"processing"})
.setTypes(new String[] {"documentAdditionOrUpdate", "documentDeletion"})
.setIndexUids(new String[] {"movies"});
client.getTasks(query);
```
```ruby Ruby theme={null}
client.get_tasks(index_uids: ['movies'], types: ['documentAdditionOrUpdate', 'documentDeletion'], statuses: ['processing'])
```
```go Go theme={null}
client.GetTasks(&meilisearch.TasksQuery{
IndexUIDS: []string{"movies"},
Types: []meilisearch.TaskType{
meilisearch.TaskTypeDocumentAdditionOrUpdate,
meilisearch.TaskTypeDocumentDeletion,
},
Statuses: []meilisearch.TaskStatus{
meilisearch.TaskStatusProcessing,
},
})
```
```csharp C# theme={null}
var query = new TasksQuery { IndexUids = new List<string> { "movies" }, Types = new List<TaskInfoType> { TaskInfoType.DocumentAdditionOrUpdate, TaskInfoType.DocumentDeletion }, Statuses = new List<TaskInfoStatus> { TaskInfoStatus.Processing } };
await client.GetTasksAsync(query);
```
```rust Rust theme={null}
let mut query = TasksQuery::new(&client);
let tasks = query
.with_index_uids(["movies"])
.with_types(["documentAdditionOrUpdate","documentDeletion"])
.with_statuses(["processing"])
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
client.getTasks(params: TasksQuery(indexUids: ["movies"], types: ["documentAdditionOrUpdate", "documentDeletion"], statuses: ["processing"])) { result in
switch result {
case .success(let taskResult):
print(taskResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.getTasks(
params: TasksQuery(
indexUids: ['movies'],
types: ['documentAdditionOrUpdate', 'documentDeletion'],
statuses: ['processing'],
),
);
```
This code sample returns all tasks in the `movies` index that have the type `documentAdditionOrUpdate` or `documentDeletion` and have a `status` of `processing`.
**`OR` operations between different filters are not supported.** For example, you cannot view tasks which have a type of `documentAdditionOrUpdate` **or** a status of `failed`.
# Managing the task database
Source: https://www.meilisearch.com/docs/learn/async/paginating_tasks
Meilisearch uses a task queue to handle asynchronous operations. This document describes how to navigate long task queues with filters and pagination.
By default, Meilisearch returns a list of 20 tasks for each request when you query the [get tasks endpoint](/reference/api/async-task-management/list-tasks). This guide shows you how to navigate the task list using query parameters.
Paginating batches with [the `/batches` route](/reference/api/async-task-management/list-batches) follows the same rules as paginating tasks.
## Configuring the number of returned tasks
Use the `limit` parameter to change the number of returned tasks:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks?limit=2&from=10'
```
```javascript JS theme={null}
client.tasks.getTasks({ limit: 2, from: 10 })
```
```python Python theme={null}
client.get_tasks({
'limit': 2,
'from': 10
})
```
```php PHP theme={null}
$taskQuery = (new TasksQuery())->setLimit(2)->setFrom(10);
$client->getTasks($taskQuery);
```
```java Java theme={null}
TasksQuery query = new TasksQuery()
.setLimit(2)
.setFrom(10);
client.getTasks(query);
```
```ruby Ruby theme={null}
client.tasks(limit: 2, from: 10)
```
```go Go theme={null}
client.GetTasks(&meilisearch.TasksQuery{
Limit: 2,
From: 10,
});
```
```csharp C# theme={null}
var taskResult = await client.GetTasksAsync(new TasksQuery { Limit = 2, From = 10 });
```
```rust Rust theme={null}
let mut query = TasksSearchQuery::new(&client)
.with_limit(2)
.with_from(10)
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
client.getTasks(params: TasksQuery(limit: 2, from: 10)) { result in
switch result {
case .success(let taskResult):
print(taskResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.getTasks(params: TasksQuery(limit: 2, from: 10));
```
Meilisearch will return a set of tasks. Each set of returned tasks is often called a "page" of tasks, and the size of that page is determined by `limit`:
```json theme={null}
{
"results": [
…
],
"total": 50,
"limit": 2,
"from": 10,
"next": 8
}
```
It is possible none of the returned tasks are the ones you are looking for. In that case, you will need to use the [get all tasks request response](/reference/api/async-task-management/list-tasks) to navigate the results.
## Navigating the task list with `from` and `next`
Use the `next` value included in the response to your previous query together with `from` to fetch the next set of results:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks?limit=2&from=8'
```
```javascript JS theme={null}
client.tasks.getTasks({ limit: 2, from: 8 })
```
```python Python theme={null}
client.get_tasks({
'limit': 2,
'from': 8
})
```
```php PHP theme={null}
$taskQuery = (new TasksQuery())->setLimit(2)->setFrom(8);
$client->getTasks($taskQuery);
```
```java Java theme={null}
TasksQuery query = new TasksQuery()
.setLimit(2)
.setFrom(8);
client.getTasks(query);
```
```ruby Ruby theme={null}
client.tasks(limit: 2, from: 8)
```
```go Go theme={null}
client.GetTasks(&meilisearch.TasksQuery{
Limit: 2,
From: 8,
});
```
```csharp C# theme={null}
var taskResult = await client.GetTasksAsync(new TasksQuery { Limit = 2, From = 8 });
```
```rust Rust theme={null}
let mut query = TasksSearchQuery::new(&client)
.with_limit(2)
.with_from(8)
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
client.getTasks(params: TasksQuery(limit: 2, from: 8)) { result in
switch result {
case .success(let taskResult):
print(taskResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.getTasks(params: TasksQuery(limit: 2, from: 8));
```
This will return a new batch of tasks:
```json theme={null}
{
"results": [
…
],
"total": 50,
"limit": 2,
"from": 8,
"next": 6
}
```
When the value of `next` is `null`, you have reached the final set of results.
Use `from` and `limit` together with task filtering parameters to navigate filtered task lists.
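Putting `from`, `limit`, and `next` together, a client-side pagination loop might look like the following sketch. `fetch_tasks` is a hypothetical stand-in for your HTTP call to the `/tasks` endpoint; here it simulates a queue of five tasks so the loop logic can be followed end to end:

```python Python theme={null}
def fetch_tasks(limit, from_uid=None):
    """Stand-in for a GET /tasks request; replace with a real HTTP call."""
    # Simulated task uids 4..0, newest first; `next` is None on the last page.
    all_uids = [4, 3, 2, 1, 0]
    start = 0 if from_uid is None else all_uids.index(from_uid)
    page = all_uids[start:start + limit]
    next_uid = all_uids[start + limit] if start + limit < len(all_uids) else None
    return {"results": [{"uid": uid} for uid in page], "limit": limit, "next": next_uid}

def all_tasks(limit=2):
    """Collect every task by following `next` until it is null."""
    tasks, from_uid = [], None
    while True:
        response = fetch_tasks(limit, from_uid)
        tasks.extend(response["results"])
        if response["next"] is None:  # final page reached
            break
        from_uid = response["next"]
    return tasks

print([task["uid"] for task in all_tasks()])
```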
# Using task webhooks
Source: https://www.meilisearch.com/docs/learn/async/task_webhook
Learn how to use webhooks to react to changes in your Meilisearch database.
This guide teaches you how to configure a single webhook via instance options to notify a URL when Meilisearch completes a [task](/learn/async/asynchronous_operations).
If you are using Meilisearch Cloud or need to configure multiple webhooks, use the [`/webhooks` API route](/reference/api/webhooks) instead.
## Requirements
* a command-line console
* a self-hosted Meilisearch instance
* a server configured to receive `POST` requests with an ndjson payload
## Configure the webhook URL
Restart your Meilisearch instance and provide the webhook URL to `--task-webhook-url`:
```sh theme={null}
meilisearch --task-webhook-url http://localhost:8000
```
You may also define the webhook URL in the configuration file or with the `MEILI_TASK_WEBHOOK_URL` environment variable.
## Optional: configure an authorization header
Depending on your setup, you may need to provide an authorization header. Provide it to `--task-webhook-authorization-header`:
```sh theme={null}
meilisearch \
--task-webhook-url http://localhost:8000 \
--task-webhook-authorization-header 'Bearer aSampleMasterKey'
```
## Test the webhook
A common asynchronous operation is adding or updating documents in an index. The following example adds a test document to our `movies` index:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movies/documents' \
-H 'Content-Type: application/json' \
--data-binary '[
{
"id": 287947,
"title": "Shazam",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"release_date": "2019-03-23"
}
]'
```
```javascript JS theme={null}
client.index('movies').addDocuments([{
id: 287947,
title: 'Shazam',
poster: 'https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg',
overview: 'A boy is given the ability to become an adult superhero in times of need with a single magic word.',
release_date: '2019-03-23'
}])
```
```python Python theme={null}
client.index('movies').add_documents([{
'id': 287947,
'title': 'Shazam',
'poster': 'https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg',
'overview': 'A boy is given the ability to become an adult superhero in times of need with a single magic word.',
'release_date': '2019-03-23'
}])
```
```php PHP theme={null}
$client->index('movies')->addDocuments([
[
'id' => 287947,
'title' => 'Shazam',
'poster' => 'https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg',
'overview' => 'A boy is given the ability to become an adult superhero in times of need with a single magic word.',
'release_date' => '2019-03-23'
]
]);
```
```java Java theme={null}
client.index("movies").addDocuments("[{"
+ "\"id\": 287947,"
+ "\"title\": \"Shazam\","
+ "\"poster\": \"https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg\","
+ "\"overview\": \"A boy is given the ability to become an adult superhero in times of need with a single magic word.\","
+ "\"release_date\": \"2019-03-23\""
+ "}]"
);
```
```ruby Ruby theme={null}
client.index('movies').add_documents([
{
id: 287947,
title: 'Shazam',
poster: 'https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg',
overview: 'A boy is given the ability to become an adult superhero in times of need with a single magic word.',
release_date: '2019-03-23'
}
])
```
```go Go theme={null}
documents := []map[string]interface{}{
{
"id": 287947,
"title": "Shazam",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"release_date": "2019-03-23",
},
}
client.Index("movies").AddDocuments(documents, nil)
```
```csharp C# theme={null}
var movie = new[]
{
new Movie
{
Id = "287947",
Title = "Shazam",
Poster = "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
Overview = "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
ReleaseDate = "2019-03-23"
}
};
await index.AddDocumentsAsync(movie);
```
```rust Rust theme={null}
let task: TaskInfo = client
.index("movies")
.add_or_replace(&[
Movie {
id: 287947,
title: "Shazam".to_string(),
poster: "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg".to_string(),
overview: "A boy is given the ability to become an adult superhero in times of need with a single magic word.".to_string(),
release_date: "2019-03-23".to_string(),
}
], None)
.await
.unwrap();
```
```swift Swift theme={null}
let documentJsonString = """
[
{
"id": 287947,
"title": "Shazam",
"poster": "https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg",
"overview": "A boy is given the ability to become an adult superhero in times of need with a single magic word.",
"release_date": "2019-03-23"
}
]
"""
let documents: Data = documentJsonString.data(using: .utf8)!
client.index("movies").addDocuments(documents: documents) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movies').addDocuments([
{
'id': 287947,
'title': 'Shazam',
'poster':
'https://image.tmdb.org/t/p/w1280/xnopI5Xtky18MPhK40cZAGAOVeV.jpg',
'overview':
'A boy is given the ability to become an adult superhero in times of need with a single magic word.',
'release_date': '2019-03-23'
}
]);
```
When Meilisearch finishes indexing this document, it will send a `POST` request to the URL you configured with `--task-webhook-url`. The request body will be one or more task objects in [ndjson](https://github.com/ndjson/ndjson-spec) format:
```ndjson theme={null}
{"uid":4,"indexUid":"movies","status":"succeeded","type":"documentAdditionOrUpdate","canceledBy":null,"details":{"receivedDocuments":1,"indexedDocuments":1},"duration":"PT0.001192S","enqueuedAt":"2022-08-04T12:28:15.159167Z","startedAt":"2022-08-04T12:28:15.161996Z","finishedAt":"2022-08-04T12:28:15.163188Z"}
```
If Meilisearch has batched multiple tasks, it will only trigger the webhook once all tasks in a batch are finished. In this case, the response payload will include all tasks, each separated by a new line:
```ndjson theme={null}
{"uid":4,"indexUid":"movies","status":"succeeded","type":"documentAdditionOrUpdate","canceledBy":null,"details":{"receivedDocuments":1,"indexedDocuments":1},"duration":"PT0.001192S","enqueuedAt":"2022-08-04T12:28:15.159167Z","startedAt":"2022-08-04T12:28:15.161996Z","finishedAt":"2022-08-04T12:28:15.163188Z"}
{"uid":5,"indexUid":"movies","status":"succeeded","type":"documentAdditionOrUpdate","canceledBy":null,"details":{"receivedDocuments":1,"indexedDocuments":1},"duration":"PT0.001192S","enqueuedAt":"2022-08-04T12:28:15.159167Z","startedAt":"2022-08-04T12:28:15.161996Z","finishedAt":"2022-08-04T12:28:15.163188Z"}
{"uid":6,"indexUid":"movies","status":"succeeded","type":"documentAdditionOrUpdate","canceledBy":null,"details":{"receivedDocuments":1,"indexedDocuments":1},"duration":"PT0.001192S","enqueuedAt":"2022-08-04T12:28:15.159167Z","startedAt":"2022-08-04T12:28:15.161996Z","finishedAt":"2022-08-04T12:28:15.163188Z"}
```
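On the receiving end, each line of the payload is an independent JSON object. A minimal parsing sketch in Python (the helper name is illustrative, not part of any SDK):

```python Python theme={null}
import json

def parse_webhook_payload(body: str) -> list[dict]:
    """Split an ndjson webhook body into one task object per line."""
    return [json.loads(line) for line in body.splitlines() if line.strip()]

payload = (
    '{"uid":4,"indexUid":"movies","status":"succeeded","type":"documentAdditionOrUpdate"}\n'
    '{"uid":5,"indexUid":"movies","status":"succeeded","type":"documentAdditionOrUpdate"}\n'
)
tasks = parse_webhook_payload(payload)
print([task["uid"] for task in tasks])  # → [4, 5]
```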
# Working with tasks
Source: https://www.meilisearch.com/docs/learn/async/working_with_tasks
In this tutorial, you'll use the Meilisearch API to add documents to an index, and then monitor its status.
[Many Meilisearch operations are processed asynchronously](/learn/async/asynchronous_operations) in a task. Asynchronous tasks allow you to make resource-intensive changes to your Meilisearch project without any downtime for users.
In this tutorial, you'll use the Meilisearch API to add documents to an index, and then monitor its status.
## Requirements
* a running Meilisearch project
* a command-line console
## Adding a task to the task queue
Operations that require indexing, such as adding and updating documents or changing an index's settings, will always generate a task.
Start by creating an index, then add a large number of documents to this index:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movies/documents'\
-H 'Content-Type: application/json' \
--data-binary @movies.json
```
```javascript JS theme={null}
const movies = require('./movies.json')
client.index('movies').addDocuments(movies).then((res) => console.log(res))
```
```python Python theme={null}
import json
json_file = open('movies.json', encoding='utf-8')
movies = json.load(json_file)
client.index('movies').add_documents(movies)
```
```php PHP theme={null}
$moviesJson = file_get_contents('movies.json');
$movies = json_decode($moviesJson);
$client->index('movies')->addDocuments($movies);
```
```java Java theme={null}
import com.meilisearch.sdk.*;
import org.json.JSONArray;
import java.nio.file.Files;
import java.nio.file.Path;
Path fileName = Path.of("movies.json");
String moviesJson = Files.readString(fileName);
Client client = new Client(new Config("MEILISEARCH_URL", "masterKey"));
Index index = client.index("movies");
index.addDocuments(moviesJson);
```
```ruby Ruby theme={null}
require 'json'
movies_json = File.read('movies.json')
movies = JSON.parse(movies_json)
client.index('movies').add_documents(movies)
```
```go Go theme={null}
import (
"encoding/json"
"os"
)
file, _ := os.ReadFile("movies.json")
var movies interface{}
json.Unmarshal([]byte(file), &movies)
client.Index("movies").AddDocuments(&movies, nil)
```
```csharp C# theme={null}
// Make sure to add this using to your code
using System.IO;
var jsonDocuments = await File.ReadAllTextAsync("movies.json");
await client.Index("movies").AddDocumentsJsonAsync(jsonDocuments);
```
```rust Rust theme={null}
use meilisearch_sdk::{
indexes::*,
client::*,
search::*,
settings::*
};
use serde::{Serialize, Deserialize};
use std::{io::prelude::*, fs::File};
use futures::executor::block_on;
fn main() { block_on(async move {
let client = Client::new("MEILISEARCH_URL", Some("masterKey"));
// reading and parsing the file
let mut file = File::open("movies.json")
.unwrap();
let mut content = String::new();
file
.read_to_string(&mut content)
.unwrap();
let movies_docs: Vec<Movie> = serde_json::from_str(&content)
.unwrap();
// adding documents
client
.index("movies")
.add_documents(&movies_docs, None)
.await
.unwrap();
})}
```
```swift Swift theme={null}
let path = Bundle.main.url(forResource: "movies", withExtension: "json")!
let documents: Data = try Data(contentsOf: path)
client.index("movies").addDocuments(documents: documents) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
// import 'dart:io';
// import 'dart:convert';
final json = await File('movies.json').readAsString();
await client.index('movies').addDocumentsJson(json);
```
Instead of processing your request immediately, Meilisearch will add it to a queue and return a summarized task object:
```json theme={null}
{
"taskUid": 0,
"indexUid": "movies",
"status": "enqueued",
"type": "documentAdditionOrUpdate",
"enqueuedAt": "2021-08-11T09:25:53.000000Z"
}
```
The summarized task object is confirmation your request has been accepted. It also gives you information you can use to monitor the status of your request, such as the `taskUid`.
You can add documents to a new Meilisearch Cloud index using the Cloud interface. To get the `taskUid` of this task, visit the "Tasks" overview and look for a "Document addition or update" task associated with your newly created index.
## Monitoring task status
Meilisearch processes tasks in the order they were added to the queue. You can check the status of a task using the Meilisearch Cloud interface or the Meilisearch API.
### Monitoring task status in the Meilisearch Cloud interface
Log into your [Meilisearch Cloud](https://meilisearch.com/cloud) account and navigate to your project. Click the "Tasks" link in the project menu:
This will lead you to the task overview, which shows a list of all batches enqueued, processing, and completed in your project:
All Meilisearch tasks are processed in batches. When the batch containing your task changes its `status` to `succeeded`, Meilisearch has finished processing your request.
If the `status` changes to `failed`, Meilisearch was not able to fulfill your request. Check the object's `error` field for more information.
### Monitoring task status with the Meilisearch API
Use the `taskUid` from your request's response to check the status of a task:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks/1'
```
```javascript JS theme={null}
client.tasks.getTask(1)
```
```python Python theme={null}
client.get_task(1)
```
```php PHP theme={null}
$client->getTask(1);
```
```java Java theme={null}
client.getTask(1);
```
```ruby Ruby theme={null}
client.task(1)
```
```go Go theme={null}
client.GetTask(1);
```
```csharp C# theme={null}
TaskInfo task = await client.GetTaskAsync(1);
```
```rust Rust theme={null}
let task: Task = client
.get_task(1)
.await
.unwrap();
```
```swift Swift theme={null}
client.getTask(taskUid: 1) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.getTask(1);
```
This will return the full task object:
```json theme={null}
{
"uid": 4,
"indexUid": "movies",
"status": "succeeded",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
…
},
"error": null,
"duration": "PT0.001192S",
"enqueuedAt": "2022-08-04T12:28:15.159167Z",
"startedAt": "2022-08-04T12:28:15.161996Z",
"finishedAt": "2022-08-04T12:28:15.163188Z"
}
```
If the task is still `enqueued` or `processing`, wait a few moments and query the database once again. You may also [set up a webhook listener](/reference/api/webhooks/list-webhooks).
When `status` changes to `succeeded`, Meilisearch has finished processing your request.
If the task `status` changes to `failed`, Meilisearch was not able to fulfill your request. Check the task object's `error` field for more information.
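The polling described above can be sketched as a small helper. `get_task` stands in for the SDK call of your choice; the simulated client below succeeds on the third poll so the loop can be followed without a running instance:

```python Python theme={null}
import time

def wait_for_task(get_task, task_uid, interval=0.05, timeout=5.0):
    """Poll until the task leaves the `enqueued`/`processing` states."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = get_task(task_uid)
        if task["status"] not in ("enqueued", "processing"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_uid} still pending after {timeout}s")

# Simulated client: the task succeeds on the third poll.
responses = iter([
    {"status": "enqueued"},
    {"status": "processing"},
    {"status": "succeeded"},
])
final = wait_for_task(lambda uid: next(responses), 1)
print(final["status"])  # → succeeded
```

In production, prefer the webhook approach above over tight polling loops where possible.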
## Conclusion
You have seen what happens when an API request adds a task to the task queue, and how to check the status of that task. Consult the [task API reference](/reference/api/async-task-management/list-tasks) and the [asynchronous operations explanation](/learn/async/asynchronous_operations) for more information on how tasks work.
# Chat tooling reference
Source: https://www.meilisearch.com/docs/learn/chat/chat_tooling_reference
An exhaustive reference of special chat tools supported by Meilisearch
When creating your conversational search agent, you may be able to extend the model's capabilities with a number of tools. This page lists Meilisearch-specific tools that may improve user experience.
This is an experimental feature. Use the Meilisearch Cloud UI or the experimental features endpoint to activate it:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/experimental-features/' \
-H 'Content-Type: application/json' \
--data-binary '{
"chatCompletions": true
}'
```
## Meilisearch chat tools
For the best user experience, configure all of the following tools.
1. **Handle progress updates** by displaying search status to users during streaming
2. **Append conversation messages** as requested to maintain context for future requests
3. **Display source documents** to users for transparency and verification
4. **Use `call_id`** to associate progress updates with their corresponding source results
These special tools are handled internally by Meilisearch and are not forwarded to the LLM provider. They serve as a communication mechanism between Meilisearch and your application to provide enhanced user experience features.
### `_meiliSearchProgress`
This tool reports real-time progress of internal search operations. When declared, Meilisearch will call this function whenever search operations are performed in the background.
**Purpose**: Provides transparency about search operations and reduces perceived latency by showing users what's happening behind the scenes.
**Arguments**:
* `call_id`: Unique identifier to track the search operation
* `function_name`: Name of the internal function being executed (e.g., `_meiliSearchInIndex`)
* `function_parameters`: JSON-encoded string containing search parameters like `q` (query) and `index_uid`
**Example Response**:
```json theme={null}
{
"function": {
"name": "_meiliSearchProgress",
"arguments": "{\"call_id\":\"89939d1f-6857-477c-8ae2-838c7a504e6a\",\"function_name\":\"_meiliSearchInIndex\",\"function_parameters\":\"{\\\"index_uid\\\":\\\"movies\\\",\\\"q\\\":\\\"search engine\\\"}\"}"
}
}
```
### `_meiliAppendConversationMessage`
Since the `/chats/{workspace}/chat/completions` endpoint is stateless, this tool helps maintain conversation context by requesting the client to append internal messages to the conversation history.
**Purpose**: Maintains conversation context for better response quality in subsequent requests by preserving tool calls and results.
**Arguments**:
* `role`: Message author role ("user" or "assistant")
* `content`: Message content (for tool results)
* `tool_calls`: Array of tool calls made by the assistant
* `tool_call_id`: ID of the tool call this message responds to
**Example Response**:
```json theme={null}
{
"function": {
"name": "_meiliAppendConversationMessage",
"arguments": "{\"role\":\"assistant\",\"tool_calls\":[{\"id\":\"call_ijAdM42bixq9lAF4SiPwkq2b\",\"type\":\"function\",\"function\":{\"name\":\"_meiliSearchInIndex\",\"arguments\":\"{\\\"index_uid\\\":\\\"movies\\\",\\\"q\\\":\\\"search engine\\\"}\"}}]}"
}
}
```
### `_meiliSearchSources`
This tool provides the source documents that were used by the LLM to generate responses, enabling transparency and allowing users to verify information sources.
**Purpose**: Shows users which documents were used to generate responses, improving trust and enabling source verification.
**Arguments**:
* `call_id`: Matches the `call_id` from `_meiliSearchProgress` to associate queries with results
* `documents`: JSON object containing the source documents with only displayed attributes
**Example Response**:
```json theme={null}
{
"function": {
"name": "_meiliSearchSources",
"arguments": "{\"call_id\":\"abc123\",\"documents\":[{\"id\":197302,\"title\":\"The Sacred Science\",\"overview\":\"Diabetes. Prostate cancer...\",\"genres\":[\"Documentary\",\"Adventure\",\"Drama\"]}]}"
}
}
```
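Since these three tool calls arrive as regular function-call objects in the completion stream, your application needs to route them by name. A minimal dispatch sketch (the handler names and return values are illustrative, not part of any SDK):

```python Python theme={null}
import json

def handle_progress(args):   # show a search status indicator to the user
    return ("progress", args["call_id"])

def handle_append(args):     # store the message for the next stateless request
    return ("append", args["role"])

def handle_sources(args):    # render source documents for verification
    return ("sources", args["call_id"])

HANDLERS = {
    "_meiliSearchProgress": handle_progress,
    "_meiliAppendConversationMessage": handle_append,
    "_meiliSearchSources": handle_sources,
}

def dispatch(tool_call: dict):
    """Route a Meilisearch-specific tool call; ignore anything unknown."""
    handler = HANDLERS.get(tool_call["function"]["name"])
    if handler is None:
        return None
    arguments = json.loads(tool_call["function"]["arguments"])
    return handler(arguments)

result = dispatch({
    "function": {
        "name": "_meiliSearchProgress",
        "arguments": '{"call_id": "abc123", "function_name": "_meiliSearchInIndex", "function_parameters": "{}"}',
    }
})
print(result)  # → ('progress', 'abc123')
```

Matching `call_id` values lets the progress handler and the sources handler update the same UI element.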
### Sample OpenAI tool declaration
Include these tools in your request's `tools` array to enable enhanced functionality:
```json theme={null}
{
…
"tools": [
{
"type": "function",
"function": {
"name": "_meiliSearchProgress",
"description": "Provides information about the current Meilisearch search operation",
"parameters": {
"type": "object",
"properties": {
"call_id": {
"type": "string",
"description": "The call ID to track the sources of the search"
},
"function_name": {
"type": "string",
"description": "The name of the function we are executing"
},
"function_parameters": {
"type": "string",
"description": "The parameters of the function we are executing, encoded in JSON"
}
},
"required": ["call_id", "function_name", "function_parameters"],
"additionalProperties": false
},
"strict": true
}
},
{
"type": "function",
"function": {
"name": "_meiliAppendConversationMessage",
"description": "Append a new message to the conversation based on what happened internally",
"parameters": {
"type": "object",
"properties": {
"role": {
"type": "string",
"description": "The role of the messages author, either `user` or `assistant`"
},
"content": {
"type": "string",
"description": "The contents of the `assistant` or `tool` message. Required unless `tool_calls` is specified."
},
"tool_calls": {
"type": ["array", "null"],
"description": "The tool calls generated by the model, such as function calls",
"items": {
"type": "object",
"properties": {
"function": {
"type": "object",
"description": "The function that the model called",
"properties": {
"name": {
"type": "string",
"description": "The name of the function to call"
},
"arguments": {
"type": "string",
"description": "The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."
}
}
},
"id": {
"type": "string",
"description": "The ID of the tool call"
},
"type": {
"type": "string",
"description": "The type of the tool. Currently, only function is supported"
}
}
}
},
"tool_call_id": {
"type": ["string", "null"],
"description": "Tool call that this message is responding to"
}
},
"required": ["role", "content", "tool_calls", "tool_call_id"],
"additionalProperties": false
},
"strict": true
}
},
{
"type": "function",
"function": {
"name": "_meiliSearchSources",
"description": "Provides sources of the search",
"parameters": {
"type": "object",
"properties": {
"call_id": {
"type": "string",
"description": "The call ID to track the original search associated to those sources"
},
"documents": {
"type": "object",
"description": "The documents associated with the search (call_id). Only the displayed attributes of the documents are returned"
}
},
"required": ["call_id", "documents"],
"additionalProperties": false
},
"strict": true
}
}
]
}
```
# What is conversational search?
Source: https://www.meilisearch.com/docs/learn/chat/conversational_search
Conversational search allows people to make search queries using natural language.
Conversational search is an AI-powered search feature that allows users to ask questions in everyday language and receive answers based on the information in Meilisearch's indexes.
## When to use conversational vs traditional search
Use conversational search when:
* Users need easy-to-read answers to specific questions
* You are handling information-dense content, such as knowledge bases
* Natural language interaction improves user experience
Use traditional search when:
* Users need to browse multiple options, such as an ecommerce website
* Approximate answers are not acceptable
* Your users need very quick responses
Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to monitor them closely in production environments.
## Conversational search user workflow
### Traditional search workflow
1. User enters keywords
2. Meilisearch returns matching documents
3. User reviews results to find answers
### Conversational search workflow
1. User asks a question in natural language
2. Meilisearch retrieves relevant documents
3. AI generates a direct answer based on those documents
## Implementation strategies
### Retrieval Augmented Generation (RAG)
In the majority of cases, you should use the [`/chats` route](/reference/api/chats/update-chat) to build a Retrieval Augmented Generation (RAG) pipeline. RAGs excel when working with unstructured data and emphasize high-quality responses.
Meilisearch's chat completions API consolidates RAG creation into a single process:
1. **Query understanding**: automatically transforms questions into search parameters
2. **Hybrid retrieval**: combines keyword and semantic search for better relevancy
3. **Answer generation**: uses your chosen LLM to generate responses
4. **Context management**: maintains conversation history by constantly pushing the full conversation to the dedicated tool
Follow the [chat completions tutorial](/learn/chat/getting_started_with_chat) for information on how to implement a RAG with Meilisearch.
### Model Context Protocol (MCP)
An alternative method is using a Model Context Protocol (MCP) server. MCPs are designed for broader uses that go beyond answering questions, but can be useful in contexts where having up-to-date data is more important than comprehensive answers.
Follow the [dedicated MCP guide](/guides/ai/mcp) if you want to implement it in your application.
# Getting started with conversational search
Source: https://www.meilisearch.com/docs/learn/chat/getting_started_with_chat
This article walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application.
To successfully implement a conversational search interface, you must follow three steps: configure indexes for chat usage, create a chat workspace, and build a chat interface.
## Prerequisites
Before starting, ensure you have:
* A [secure](/learn/security/basic_security) Meilisearch >= v1.15.1 project
* An API key from an LLM provider
* At least one index with searchable content
## Setup
### Enable the chat completions feature
First, enable the chat completions experimental feature:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/experimental-features/' \
-H 'Content-Type: application/json' \
--data-binary '{
"chatCompletions": true
}'
```
Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to monitor them closely in production environments.
### Find your chat API key
When Meilisearch runs with a master key on an instance created after v1.15.1, it automatically generates a "Default Chat API Key" with `chatCompletions` and `search` permissions on all indexes. Check if you have the key using:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/keys' \
-H 'Authorization: Bearer MASTER_KEY'
```
```javascript JS theme={null}
const client = new MeiliSearch({ host: 'MEILISEARCH_URL', apiKey: 'masterKey' })
client.getKeys()
```
```python Python theme={null}
client = Client('MEILISEARCH_URL', 'masterKey')
client.get_keys()
```
```php PHP theme={null}
$client = new Client('MEILISEARCH_URL', 'masterKey');
$client->getKeys();
```
```java Java theme={null}
Client client = new Client(new Config("MEILISEARCH_URL", "masterKey"));
client.getKeys();
```
```ruby Ruby theme={null}
client = MeiliSearch::Client.new('MEILISEARCH_URL', 'masterKey')
client.keys
```
```go Go theme={null}
client := meilisearch.New("MEILISEARCH_URL", meilisearch.WithAPIKey("masterKey"))
client.GetKeys(nil);
```
```csharp C# theme={null}
MeilisearchClient client = new MeilisearchClient("MEILISEARCH_URL", "masterKey");
var keys = await client.GetKeysAsync();
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("MASTER_KEY"));
let keys = client.get_keys().await.unwrap();
```
```swift Swift theme={null}
client = try MeiliSearch(host: "MEILISEARCH_URL", apiKey: "masterKey")
client.getKeys { result in
switch result {
case .success(let keys):
print(keys)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
var client = MeiliSearchClient('MEILISEARCH_URL', 'masterKey');
await client.getKeys();
```
Look for the key with the description "Default Chat API Key".
#### Troubleshooting: Missing default chat API key
If your instance does not have a Default Chat API Key, create one manually:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/keys' \
-H 'Authorization: Bearer MEILISEARCH_KEY' \
-H 'Content-Type: application/json' \
--data-binary '{
"name": "Chat API Key",
"description": "API key for chat completions",
"actions": ["search", "chatCompletions"],
"indexes": ["*"],
"expiresAt": null
}'
```
## Configure your indexes
After activating the `/chats` route and obtaining an API key with chat permissions, configure the `chat` settings for each index you want to make searchable via the chat interface:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings' \
-H 'Authorization: Bearer MEILISEARCH_KEY' \
-H 'Content-Type: application/json' \
--data-binary '{
"chat": {
"description": "A comprehensive database of TYPE_OF_DOCUMENT containing titles, descriptions, genres, and release dates to help users searching for TYPE_OF_DOCUMENT",
"documentTemplate": "{% for field in fields %}{% if field.is_searchable and field.value != nil %}{{ field.name }}: {{ field.value }}\n{% endif %}{% endfor %}",
"documentTemplateMaxBytes": 400
}
}'
```
* `description` gives the LLM the initial context of the conversation. A good description improves the relevance of the chat's answers
* `documentTemplate` defines the document data Meilisearch sends to the AI provider. This template outputs all searchable fields in your documents, which may not be ideal if your documents have many fields. Consult the [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance
* `documentTemplateMaxBytes` sets a size limit for rendered document templates. Documents larger than this limit are truncated to ensure a good balance between speed and relevancy
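To make the interaction between `documentTemplate` and `documentTemplateMaxBytes` concrete, the following is a minimal Python simulation of what the default template above produces for a flat document. This is an illustration only: Meilisearch actually renders Liquid templates internally, and `render_document_template` is a hypothetical helper, not part of any SDK.

```python theme={null}
# Simulate the default documentTemplate: output every non-nil field as
# "name: value", one per line, then truncate to documentTemplateMaxBytes.

def render_document_template(document: dict, max_bytes: int = 400) -> str:
    lines = []
    for name, value in document.items():
        if value is not None:  # the template skips nil fields
            lines.append(f"{name}: {value}")
    rendered = "\n".join(lines) + "\n"
    # Rendered templates larger than the limit are truncated
    return rendered.encode("utf-8")[:max_bytes].decode("utf-8", errors="ignore")

movie = {"title": "Carol", "overview": "A 1950s romance.", "poster": None}
print(render_document_template(movie))
```

Running this prints `title: Carol` and `overview: A 1950s romance.` while omitting the nil `poster` field, which is the behavior the Liquid template's `if` condition encodes.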
## Configure a chat completions workspace
The next step is to create a workspace. Chat completion workspaces are isolated configurations targeting different use cases. Each workspace can:
* Use different LLM providers (OpenAI, Azure OpenAI, Mistral, vLLM)
* Establish separate conversation contexts via baseline prompts
* Access a specific set of indexes
For example, you may have one workspace for publicly visible data, and another for data only available to logged-in users.
Create a workspace, setting your LLM provider as its `source`:
```bash OpenAI theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
-H 'Authorization: Bearer MEILISEARCH_KEY' \
-H 'Content-Type: application/json' \
--data-binary '{
"source": "openAi",
"apiKey": "PROVIDER_API_KEY",
"baseUrl": "PROVIDER_API_URL",
"prompts": {
"system": "You are a helpful assistant. Answer questions based only on the provided context."
}
}'
```
```bash Azure OpenAI theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
-H 'Authorization: Bearer MEILISEARCH_KEY' \
-H 'Content-Type: application/json' \
--data-binary '{
"source": "azureOpenAi",
"apiKey": "PROVIDER_API_KEY",
"baseUrl": "PROVIDER_API_URL",
"prompts": {
"system": "You are a helpful assistant. Answer questions based only on the provided context."
}
}'
```
```bash Mistral theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
-H 'Authorization: Bearer MEILISEARCH_KEY' \
-H 'Content-Type: application/json' \
--data-binary '{
"source": "mistral",
"apiKey": "PROVIDER_API_KEY",
"prompts": {
"system": "You are a helpful assistant. Answer questions based only on the provided context."
}
}'
```
```bash vLLM theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
-H 'Authorization: Bearer MEILISEARCH_KEY' \
-H 'Content-Type: application/json' \
--data-binary '{
"source": "vLlm",
"baseUrl": "PROVIDER_API_URL",
"prompts": {
"system": "You are a helpful assistant. Answer questions based only on the provided context."
}
}'
```
Which fields are mandatory depends on your chosen provider `source`. In most cases, you will have to provide an `apiKey` to access the provider.
`baseUrl` indicates the URL Meilisearch queries when users submit questions to your chat interface. It is only mandatory for the Azure OpenAI and vLLM sources.
`prompts.system` gives the conversational search agent the baseline context of your users and their questions. [The `prompts` object accepts a few other fields](/reference/api/chats/update-chat) that provide more information to improve how the agent uses the information it finds via Meilisearch. In real-life scenarios, filling in these fields improves the quality of conversational search results.
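The per-source requirements described above can be sketched as a small validation helper. This is a hedged illustration of the rules in this section, not an SDK function: `build_workspace_settings` is a hypothetical name, and the payload it returns is what you would send to the workspace settings endpoint.

```python theme={null}
# Hypothetical helper assembling a chat workspace settings payload.
# Per the rules above: OpenAI, Azure OpenAI, and Mistral need an apiKey;
# baseUrl is mandatory only for the azureOpenAi and vLlm sources.

def build_workspace_settings(source, api_key=None, base_url=None, system_prompt=None):
    if source in ("openAi", "azureOpenAi", "mistral") and not api_key:
        raise ValueError(f"source {source!r} requires an apiKey")
    if source in ("azureOpenAi", "vLlm") and not base_url:
        raise ValueError(f"source {source!r} requires a baseUrl")
    settings = {"source": source}
    if api_key:
        settings["apiKey"] = api_key
    if base_url:
        settings["baseUrl"] = base_url
    if system_prompt:
        settings["prompts"] = {"system": system_prompt}
    return settings
```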
## Send your first chat completions request
You have finished configuring your conversational search agent. To verify everything is working as expected, send a streaming `curl` request to the chat completions API route:
```bash cURL theme={null}
curl -N \
-X POST 'MEILISEARCH_URL/chats/WORKSPACE_NAME/chat/completions' \
-H 'Authorization: Bearer MEILISEARCH_API_KEY' \
-H 'Content-Type: application/json' \
--data-binary '{
"model": "PROVIDER_MODEL_UID",
"messages": [
{
"role": "user",
"content": "USER_PROMPT"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "_meiliSearchProgress",
"description": "Reports real-time search progress to the user"
}
},
{
"type": "function",
"function": {
"name": "_meiliSearchSources",
"description": "Provides sources and references for the information"
}
}
]
}'
```
* `model` is mandatory and must indicate a model supported by your chosen `source`
* `messages` contains the messages exchanged between the conversational search agent and the user
* `tools` sets up two optional but highly [recommended tools](/learn/chat/chat_tooling_reference):
* `_meiliSearchProgress`: shows users what searches are being performed
* `_meiliSearchSources`: displays the actual documents used to generate responses
If Meilisearch returns a stream of data containing the chat agent response, you have correctly configured Meilisearch for conversational search:
```sh theme={null}
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{"content":"Meilisearch"},"finish_reason":null}]}
```
If Meilisearch returns an error, consult the [troubleshooting section](#troubleshooting) to diagnose and fix the issues you encountered.
## Next steps
In this article, you have seen how to activate the chat completions route, prepare your indexes to serve as a base for your AI agent, and perform your first conversational search.
In most cases, that is only the beginning of adding conversational search to your application. Next, you will most likely want to add a graphical user interface to it.
### Building a chat interface using the OpenAI SDK
Meilisearch's chat endpoint was designed to be OpenAI-compatible. This means you can use the official OpenAI SDK in any supported programming language, even if your provider is not OpenAI.
Integrating Meilisearch and the OpenAI SDK with JavaScript would look like this:
```javascript theme={null}
import OpenAI from 'openai';
const client = new OpenAI({
baseURL: 'MEILISEARCH_URL/chats/WORKSPACE_NAME',
apiKey: 'PROVIDER_API_KEY',
});
const completion = await client.chat.completions.create({
model: 'PROVIDER_MODEL_UID',
messages: [{ role: 'user', content: 'USER_PROMPT' }],
stream: true
});
for await (const chunk of completion) {
console.log(chunk.choices[0]?.delta?.content || '');
}
```
Take particular note of the last lines, which output the streamed responses to the browser console. In a real-life application, you would instead print the response chunks to the user interface.
## Troubleshooting
### Common issues and solutions
#### Empty reply from server (curl error 52)
**Causes:**
* Meilisearch not started with a master key
* Experimental features not enabled
* Missing authentication in requests
**Solution:**
1. Restart Meilisearch with a master key: `meilisearch --master-key yourKey`
2. Enable experimental features (see setup instructions above)
3. Include Authorization header in all requests
#### "Invalid API key" error
**Cause:** Using the wrong type of API key
**Solution:**
* Use either the master key or the "Default Chat API Key"
* Don't use search or admin API keys for chat endpoints
* Find your chat key with the [list keys endpoint](/reference/api/keys/list-api-keys)
#### "Socket connection closed unexpectedly"
**Cause:** Usually means the OpenAI API key is missing or invalid in workspace settings
**Solution:**
1. Check workspace configuration:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
-H "Authorization: Bearer MEILISEARCH_KEY"
```
2. Update with valid API key:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
-H "Authorization: Bearer MEILISEARCH_KEY" \
-H "Content-Type: application/json" \
--data-binary '{ "apiKey": "your-valid-api-key" }'
```
#### Chat not searching the database
**Cause:** Missing Meilisearch tools in the request
**Solution:**
* Include `_meiliSearchProgress` and `_meiliSearchSources` tools in your request
* Ensure indexes have proper chat descriptions configured
# Configuring index settings
Source: https://www.meilisearch.com/docs/learn/configuration/configuring_index_settings
This tutorial shows how to check and change an index setting using the Meilisearch Cloud interface.
This tutorial will show you how to check and change an index setting using the [Meilisearch Cloud](https://cloud.meilisearch.com/projects/) interface.
## Requirements
* an active [Meilisearch Cloud](https://cloud.meilisearch.com/projects/) account
* a Meilisearch Cloud project with at least one index
## Accessing a project's index settings
Log into your Meilisearch account and navigate to your project. Then, click on "Indexes":
Find the index you want to configure and click on its "Settings" button:
## Checking a setting's current value
Using the menu on the left-hand side, click on "Attributes":
The first setting is "Searchable attributes" and lists all attributes in your dataset's documents:
Clicking on other settings will show you similar interfaces that allow visualizing and editing all Meilisearch index settings.
## Updating a setting
All documents include a primary key attribute. In most cases, this attribute does not contain information relevant for searches, so you can improve your application's search by explicitly removing it from the searchable attributes list.
Find your primary key, then click on the bin icon:
Meilisearch will display a pop-up window asking you to confirm you want to remove the attribute from the searchable attributes list. Click on "Yes, remove attribute":
Most updates to an index's settings cause Meilisearch to re-index all its data. Wait a few moments for this operation to complete. You cannot update any index settings during this time.
Once Meilisearch finishes indexing, the primary key will no longer appear in the searchable attributes list:
If you deleted the wrong attribute, click on "Add attributes" to add it back to the list. You may also click on "Reset to default", which returns the searchable attributes list to its original state from when you first added documents to this index:
## Conclusion
You have used the Meilisearch Cloud interface to check the value of an index setting. This revealed an opportunity to improve your project's performance, so you updated this index setting to make your application better and more responsive.
This tutorial used the "Searchable attributes" setting, but the procedure is the same no matter which index setting you are editing.
## What's next
If you prefer to access the settings API directly through your console, you can also [configure index settings using the Meilisearch Cloud API](/learn/configuration/configuring_index_settings_api).
For a comprehensive reference of all index settings, consult the [settings API reference](/reference/api/settings/list-all-settings).
# Configuring index settings with the Meilisearch API
Source: https://www.meilisearch.com/docs/learn/configuration/configuring_index_settings_api
This tutorial shows how to check and change an index setting using the Meilisearch API.
This tutorial shows how to check and change an index setting using one of the setting subroutes of the Meilisearch API.
If you are a Meilisearch Cloud user, you may also [configure index settings using the Meilisearch Cloud interface](/learn/configuration/configuring_index_settings).
## Requirements
* a new [Meilisearch Cloud](https://cloud.meilisearch.com/projects/) project or a self-hosted Meilisearch instance with at least one index
* a command-line terminal with `curl` installed
## Getting the value of a single index setting
Start by checking the value of the searchable attributes index setting.
Use the `GET` endpoint of the `/settings/searchable-attributes` subroute, replacing `INDEX_NAME` with your index:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/searchable-attributes'
```
```ruby Ruby theme={null}
client.index('INDEX_NAME').searchable_attributes
```
```rust Rust theme={null}
let searchable_attributes: Vec<String> = index
.get_searchable_attributes()
.await
.unwrap();
```
Depending on your setup, you might also need to replace `MEILISEARCH_URL` with your instance's address and port.
You should receive a response immediately:
```json theme={null}
[
"*"
]
```
If this is a new index, you should see the default value, `["*"]`. This indicates Meilisearch looks through all document attributes when searching.
## Updating an index setting
All documents include a primary key attribute. In most cases, this attribute does not contain any relevant data, so you can improve your application search experience by explicitly removing it from your searchable attributes list.
Use the `PUT` endpoint of the `/settings/searchable-attributes` subroute, replacing `INDEX_NAME` with your index and the sample attributes `"title"` and `"overview"` with attributes present in your dataset:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/searchable-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"title",
"overview"
]'
```
```rust Rust theme={null}
let task = index
.set_searchable_attributes(["title", "overview"])
.await
.unwrap();
```
This time, Meilisearch will not process your request immediately. Instead, it will return a summarized task object and update your index setting as soon as it has enough resources:
```json theme={null}
{
"taskUid": 1,
"indexUid": "INDEX_NAME",
"status": "enqueued",
"type": "settingsUpdate",
"enqueuedAt": "2021-08-11T09:25:53.000000Z"
}
```
Processing the index setting change might take some time, depending on how many documents you have in your index. Wait a few seconds and use the task object's `taskUid` to monitor the status of your request:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks/TASK_UID'
```
```rust Rust theme={null}
let task_status = index.get_task(&task).await.unwrap();
```
Meilisearch will respond with a task object:
```json theme={null}
{
"uid": 1,
"indexUid": "INDEX_NAME",
"status": "succeeded",
"type": "settingsUpdate",
…
}
```
If `status` is `enqueued` or `processing`, wait a few more moments and check the task status again. If `status` is `failed`, make sure you have used a valid index and attributes, then try again.
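The wait-and-retry logic above can be sketched as a small polling loop. This is an illustration, not an SDK function: `fetch_task` is a placeholder for whatever client call or HTTP request you use to hit the `/tasks/TASK_UID` endpoint. Many official SDKs bundle an equivalent helper.

```python theme={null}
import time

# Poll a task until it reaches a terminal status.
# `fetch_task` stands in for your actual call to GET /tasks/TASK_UID.
def wait_for_task(fetch_task, task_uid, interval_s=0.5, timeout_s=30):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        task = fetch_task(task_uid)
        if task["status"] in ("succeeded", "failed", "canceled"):
            return task
        time.sleep(interval_s)  # still enqueued or processing: wait and retry
    raise TimeoutError(f"task {task_uid} did not finish within {timeout_s}s")
```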
If task `status` is `succeeded`, you successfully updated your index's searchable attributes. Use the subroute to check the new setting's value:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/searchable-attributes'
```
```ruby Ruby theme={null}
client.index('INDEX_NAME').searchable_attributes
```
```rust Rust theme={null}
let searchable_attributes: Vec<String> = index
.get_searchable_attributes()
.await
.unwrap();
```
Meilisearch should return an array with the new values:
```json theme={null}
[
"title",
"overview"
]
```
## Conclusion
You have used the Meilisearch API to check the value of an index setting. This revealed an opportunity to improve your project's performance, so you updated this index setting to make your application better and more responsive.
This tutorial used the searchable attributes setting, but the procedure is the same no matter which index setting you are editing.
For a comprehensive reference of all index settings, consult the [settings API reference](/reference/api/settings/list-all-settings).
# Exporting and importing dumps
Source: https://www.meilisearch.com/docs/learn/data_backup/dumps
Dumps are data backups containing all data related to a Meilisearch instance. They are often useful when migrating to a new Meilisearch release.
A [dump](/learn/data_backup/snapshots_vs_dumps#dumps) is a compressed file containing an export of your Meilisearch instance. Use dumps to migrate to new Meilisearch versions. This tutorial shows you how to create and import dumps.
Creating a dump is also referred to as exporting it. Launching Meilisearch with a dump is referred to as importing it.
## Creating a dump
### Creating a dump in Meilisearch Cloud
**You cannot manually export dumps in Meilisearch Cloud**. To [migrate your project to the most recent Meilisearch release](/learn/update_and_migration/updating), use the Cloud interface:
If you need to create a dump for reasons other than upgrading, contact the support team via the Meilisearch Cloud interface or the [official Meilisearch Discord server](https://discord.meilisearch.com).
### Creating a dump in a self-hosted instance
To create a dump, use the [create a dump endpoint](/reference/api/backups/create-dump):
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/dumps'
```
```javascript JS theme={null}
client.createDump()
```
```python Python theme={null}
client.create_dump()
```
```php PHP theme={null}
$client->createDump();
```
```java Java theme={null}
client.createDump();
```
```ruby Ruby theme={null}
client.create_dump
```
```go Go theme={null}
resp, err := client.CreateDump()
```
```csharp C# theme={null}
await client.CreateDumpAsync();
```
```rust Rust theme={null}
client
.create_dump()
.await
.unwrap();
```
```swift Swift theme={null}
client.createDump { result in
switch result {
case .success(let dumpStatus):
print(dumpStatus)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.createDump();
```
This will return a [summarized task object](/reference/api/async-task-management/get-task) that you can use to check the status of your dump.
```json theme={null}
{
"taskUid": 1,
"indexUid": null,
"status": "enqueued",
"type": "dumpCreation",
"enqueuedAt": "2022-06-21T16:10:29.217688Z"
}
```
Dump creation is an asynchronous task that takes time proportional to the size of your database. Check its status using the tasks endpoint, replacing `1` with the `taskUid` returned by the previous command:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks/1'
```
```javascript JS theme={null}
client.tasks.getTask(1)
```
```python Python theme={null}
client.get_task(1)
```
```php PHP theme={null}
$client->getTask(1);
```
```java Java theme={null}
client.getTask(1);
```
```ruby Ruby theme={null}
client.task(1)
```
```go Go theme={null}
client.GetTask(1);
```
```csharp C# theme={null}
TaskInfo task = await client.GetTaskAsync(1);
```
```rust Rust theme={null}
let task: Task = client
.get_task(1)
.await
.unwrap();
```
```swift Swift theme={null}
client.getTask(taskUid: 1) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.getTask(1);
```
This should return an object with detailed information about the dump operation:
```json theme={null}
{
"uid": 1,
"indexUid": null,
"status": "succeeded",
"type": "dumpCreation",
"canceledBy": null,
"details": {
"dumpUid": "20220621-161029217"
},
"error": null,
"duration": "PT0.025872S",
"enqueuedAt": "2022-06-21T16:10:29.217688Z",
"startedAt": "2022-06-21T16:10:29.218297Z",
"finishedAt": "2022-06-21T16:10:29.244169Z"
}
```
All indexes of the current instance are exported along with their documents and settings and saved as a single `.dump` file. The dump also includes any tasks registered before Meilisearch starts processing the dump creation task.
Once the task `status` changes to `succeeded`, find the dump file in [the dump directory](/learn/self_hosted/configure_meilisearch_at_launch#dump-directory). By default, this folder is named `dumps` and can be found in the same directory where you launched Meilisearch.
If a dump file is visible in the file system, the dump process was successfully completed. **Meilisearch will never create a partial dump file**, even if you interrupt an instance while it is generating a dump.
Since the `key` field depends on the master key, it is not propagated to dumps. If a malicious user ever gets access to your dumps, they will not have access to your instance's API keys.
## Importing a dump
Import a dump by launching a Meilisearch instance with the [`--import-dump` configuration option](/learn/self_hosted/configure_meilisearch_at_launch#import-dump):
```bash theme={null}
./meilisearch --import-dump /dumps/20200813-042312213.dump
```
Depending on the size of your dump file, importing it might take a significant amount of time. You will only be able to access Meilisearch and its API once this process is complete.
Meilisearch imports all data in the dump file. If you have already added data to your instance, existing indexes with the same `uid` as an index in the dump file will be overwritten.
Do not use dumps to migrate from a new Meilisearch version to an older release. Doing so might lead to unexpected behavior.
# Exporting and using Snapshots
Source: https://www.meilisearch.com/docs/learn/data_backup/snapshots
Snapshots are exact copies of Meilisearch databases. They are often useful for periodical backups.
A [snapshot](/learn/data_backup/snapshots_vs_dumps#snapshots) is an exact copy of the Meilisearch database. Snapshots are useful as quick backups, but cannot be used to migrate to a new Meilisearch release.
This tutorial shows you how to schedule snapshot creation to ensure you always have a recent backup of your instance ready to use. You will also see how to start Meilisearch from this snapshot.
Meilisearch Cloud does not support snapshots.
## Scheduling periodic snapshots
It is good practice to create regular backups of your Meilisearch data. This ensures that you can recover from critical failures quickly in case your Meilisearch instance becomes compromised.
Use the [`--schedule-snapshot` configuration option](/learn/self_hosted/configure_meilisearch_at_launch#schedule-snapshot-creation) to create snapshots at regular time intervals:
```bash theme={null}
meilisearch --schedule-snapshot
```
The first snapshot is created on launch. You will find it in the [snapshot directory](/learn/self_hosted/configure_meilisearch_at_launch#snapshot-destination), `/snapshots`. Meilisearch will then create a new snapshot every 24 hours until you terminate your instance.
Meilisearch **automatically overwrites** old snapshots during snapshot creation. Only the most recent snapshot will be present in the folder at any given time.
In cases where your database is updated several times a day, it might be better to modify the interval between each new snapshot:
```bash theme={null}
meilisearch --schedule-snapshot=3600
```
This instructs Meilisearch to create a new snapshot once every hour.
If you need to generate a single snapshot without relaunching your instance, use [the `/snapshots` route](/reference/api/backups/create-snapshot).
## Starting from a snapshot
To import snapshot data into your instance, launch Meilisearch using `--import-snapshot`:
```bash theme={null}
meilisearch --import-snapshot mySnapShots/data.ms.snapshot
```
Because snapshots are exact copies of your database, starting a Meilisearch instance from a snapshot is much faster than adding documents manually or starting from a dump.
For security reasons, Meilisearch will never overwrite an existing database. By default, Meilisearch will throw an error when importing a snapshot if there is any data in your instance.
You can change this behavior by specifying [`--ignore-snapshot-if-db-exists=true`](/learn/self_hosted/configure_meilisearch_at_launch#ignore-dump-if-db-exists). This will cause Meilisearch to launch with the existing database and ignore the snapshot without throwing an error.
# Snapshots and dumps
Source: https://www.meilisearch.com/docs/learn/data_backup/snapshots_vs_dumps
Meilisearch offers two types of backups: snapshots and dumps. Snapshots are mainly intended as a safeguard, while dumps are useful when migrating Meilisearch.
This article explains Meilisearch's two backup methods: snapshots and dumps.
## Snapshots
A snapshot is an exact copy of the Meilisearch database, located by default in `./data.ms`. [Use snapshots for quick and efficient backups of your instance](/learn/data_backup/snapshots).
The documents in a snapshot are already indexed and ready to go, greatly increasing import speed. However, snapshots are not compatible between different versions of Meilisearch. Snapshots are also significantly bigger than dumps.
In short, snapshots are a safeguard: if something goes wrong in an instance, you're able to recover and relaunch your database quickly. You can also schedule periodic snapshot creation.
## Dumps
A dump isn't an exact copy of your database like a snapshot. Instead, it is closer to a blueprint which Meilisearch can later use to recreate a whole instance from scratch.
Importing a dump requires Meilisearch to re-index all documents. This process uses a significant amount of time and memory proportional to the size of the database. Compared to the snapshots, importing a dump is a slow and inefficient operation.
At the same time, dumps are not bound to a specific Meilisearch version. This means dumps are ideal for migrating your data when you upgrade Meilisearch.
Use dumps to transfer data from an old Meilisearch version into a more recent release. Do not transfer data from a new release into a legacy Meilisearch version.
For example, you can import a dump from Meilisearch v1.2 into v1.6 without any problems. Importing a dump generated in v1.7 into a v1.2 instance, however, can lead to unexpected behavior.
## Snapshots vs dumps
Both snapshots and dumps are data backups, but they serve different purposes.
Snapshots are highly efficient, but not portable between different versions of Meilisearch. **Use snapshots for periodic data backups.**
Dumps are portable between different Meilisearch versions, but not very efficient. **Use dumps when updating to a new Meilisearch release.**
# Concatenated and split queries
Source: https://www.meilisearch.com/docs/learn/engine/concat
When a query contains several terms, Meilisearch looks for both individual terms and their combinations.
## Concatenated queries
When your search contains several words, Meilisearch applies a concatenation algorithm to it.
When a search query contains several words, Meilisearch also searches for the concatenations of those words. Only adjacent words are concatenated: the first and third words in a query will never be combined without the second word between them.
### Example
A search on `The news paper` will also search for the following concatenated queries:
* `Thenews paper`
* `The newspaper`
* `Thenewspaper`
This concatenation is done on a **maximum of 3 words**.
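As an illustration of the rule described above, the Python sketch below generates the concatenation variants for a query. It reflects the documented behavior, not Meilisearch's actual implementation:

```python theme={null}
# Generate the concatenation variants Meilisearch also searches for:
# only adjacent words are joined, up to 3 words at a time.
def concatenation_variants(query: str, max_words: int = 3) -> list:
    words = query.split()
    variants = []
    for size in range(2, max_words + 1):          # join 2, then 3 adjacent words
        for start in range(len(words) - size + 1):
            joined = "".join(words[start:start + size])
            variant = words[:start] + [joined] + words[start + size:]
            variants.append(" ".join(variant))
    return variants

print(concatenation_variants("The news paper"))
# ['Thenews paper', 'The newspaper', 'Thenewspaper']
```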
## Split queries
When you perform a search, Meilisearch **applies the splitting algorithm to every word** (*string separated by a space*).
The algorithm finds the most promising place to split a word and creates a parallel search query with that proposition.
It does this by comparing the frequencies of the candidate halves against the dictionary of all words in the dataset, making sure that both halves return a minimum of interesting results, and not just one of them.
Split words are not considered as multiple words in a search query because they must stay next to each other.
### Example
A search for `newspaper` will be split into `news` and `paper`, and not into `new` and `spaper`.
A document containing `news` and `paper` separated by other words will not be relevant to the search.
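The frequency-based idea behind this example can be sketched as follows: pick the split position that maximizes the smaller of the two halves' frequencies in the dataset's word dictionary. This is purely illustrative; the sample frequencies are invented and the function is not part of Meilisearch:

```python theme={null}
# Choose the best place to split a word: both halves should be frequent
# in the dataset's word dictionary, not just one of them.
def best_split(word, frequencies):
    best, best_score = None, 0
    for i in range(1, len(word)):
        left, right = word[:i], word[i:]
        # score a split by its weaker half, so both halves must be frequent
        score = min(frequencies.get(left, 0), frequencies.get(right, 0))
        if score > best_score:
            best, best_score = (left, right), score
    return best

freq = {"news": 120, "paper": 80, "new": 300, "spaper": 0}
print(best_split("newspaper", freq))  # ('news', 'paper')
```

Note how `new`/`spaper` loses despite `new` being very frequent: `spaper` never appears in the dictionary, so that split's score is zero.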
# Data types
Source: https://www.meilisearch.com/docs/learn/engine/datatypes
Learn about how Meilisearch handles different data types: strings, numerical values, booleans, arrays, and objects.
This article explains how Meilisearch handles the different types of data in your dataset.
**The behavior described here concerns only Meilisearch's internal processes** and can be helpful in understanding how the tokenizer works. For all practical purposes not related to Meilisearch's inner workings, document fields remain unchanged.
## String
String is the primary type for indexing data in Meilisearch, as it defines the content in which to search. Strings are processed as detailed below.
String tokenization is the process of **splitting a string into a list of individual terms that are called tokens.**
A string is passed to a tokenizer and is then broken into separate string tokens. A token is a **word**.
### Tokenization
Tokenization relies on two main mechanisms to identify words and separate them into tokens: separators and dictionaries.
#### Separators
Separators are characters that indicate where one word ends and another word begins. In languages using the Latin alphabet, for example, words are usually delimited by white space. In Japanese, word boundaries are more commonly indicated in other ways, such as appending particles like `に` and `で` to the end of a word.
There are two kinds of separators in Meilisearch: soft and hard. Hard separators signal a significant context switch such as a new sentence or paragraph. Soft separators only delimit one word from another but do not imply a major change of subject.
The list below presents some of the most common separators in languages using the Latin alphabet:
* **Soft spaces** (distance: 1): whitespaces, quotes, `'-' | '_' | '\'' | ':' | '/' | '\\' | '@' | '"' | '+' | '~' | '=' | '^' | '*' | '#'`
* **Hard spaces** (distance: 8): `'.' | ';' | ',' | '!' | '?' | '(' | ')' | '[' | ']' | '{' | '}'| '|'`
For more separators, including those used in other writing systems like Cyrillic and Thai, [consult this exhaustive list](https://docs.rs/charabia/0.8.3/src/charabia/separators.rs.html#16-62).
#### Dictionaries
For the tokenization process, dictionaries are lists of groups of characters which should be considered as a single term. Dictionaries are particularly useful when identifying words in languages like Japanese, where words are not always marked by separator tokens.
Meilisearch comes with a number of general-use dictionaries for its officially supported languages. When working with documents containing many domain-specific terms, such as legal documents or academic papers, providing a [custom dictionary](/reference/api/settings/get-dictionary) may improve search result relevancy.
### Distance
Distance plays an essential role in determining whether documents are relevant since [one of the ranking rules is the **proximity** rule](/learn/relevancy/relevancy). The proximity rule sorts the results by increasing distance between matched query terms. Thus, two words separated by a soft space are closer, and therefore considered **more relevant**, than two words separated by a hard space.
After the tokenizing process, each word is indexed and stored in the global dictionary of the corresponding index.
### Examples
To demonstrate how a string is split by space, let's say you have the following string as an input:
```
"Bruce Willis,Vin Diesel"
```
In the example above, the distance between `Bruce` and `Willis` is equal to **1**. The distance between `Vin` and `Diesel` is also **1**. However, the distance between `Willis` and `Vin` is equal to **8**. The same calculations apply to `Bruce` and `Diesel` (10), `Bruce` and `Vin` (9), and `Willis` and `Diesel` (9).
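These cumulative distances can be reproduced with a short sketch (the separator set below is abridged from the lists above; this is not the engine's actual tokenizer):

```python
import re

HARD = set(".;,!?()[]{}|")  # hard separators cost 8; everything else soft, cost 1

def proximity(text: str, a: str, b: str) -> int:
    """Cumulative proximity distance between two words in a string.

    Each gap between adjacent words costs 1 (soft separator) or 8
    (hard separator); the distance between any two words is the sum
    of the gap costs between them.
    """
    parts = re.split(r"(\W+)", text)       # words and separator runs alternate
    words, gaps = parts[0::2], parts[1::2]
    costs = [8 if any(c in HARD for c in gap) else 1 for gap in gaps]
    i, j = words.index(a), words.index(b)
    return sum(costs[min(i, j):max(i, j)])
```

For example, `proximity("Bruce Willis,Vin Diesel", "Willis", "Vin")` returns `8`, and the distance between `Bruce` and `Diesel` sums to `10`, matching the calculations above.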
Let's see another example. Given two documents:
```json theme={null}
[
{
"movie_id": "001",
"description": "Bruce.Willis"
},
{
"movie_id": "002",
"description": "Bruce super Willis"
}
]
```
When making a query on `Bruce Willis`, `002` will be the first document returned, and `001` will be the second one. This will happen because the proximity distance between `Bruce` and `Willis` is equal to **2** in the document `002`, whereas the distance between `Bruce` and `Willis` is equal to **8** in the document `001` since the full-stop character `.` is a hard space.
## Numeric
A numeric type (`integer`, `float`) is converted to a human-readable decimal number string representation. Numeric types can be searched as they are converted to strings.
You can add [custom ranking rules](/learn/relevancy/custom_ranking_rules) to create an ascending or descending sorting rule on a given attribute that has a numeric value in the documents.
You can also create [filters](/learn/filtering_and_sorting/filter_search_results). The `>`, `>=`, `<`, `<=`, and `TO` relational operators apply only to numerical values.
## Boolean
A Boolean value, which is either `true` or `false`, is received and converted to a lowercase human-readable text (`true` and `false`). Booleans can be searched as they are converted to strings.
## `null`
The `null` type can be pushed into Meilisearch but it **won't be taken into account for indexing**.
## Array
An array is an ordered list of values. These values can be of any type: number, string, boolean, object, or even other arrays.
Meilisearch flattens arrays and concatenates them into strings. Non-string values are converted as described in this article's previous sections.
### Example
The following input:
```json theme={null}
[
[
"Bruce Willis",
"Vin Diesel"
],
"Kung Fu Panda"
]
```
Will be processed as if all elements were arranged at the same level:
```json theme={null}
"Bruce Willis. Vin Diesel. Kung Fu Panda."
```
Once the above array has been flattened, it will be parsed exactly as explained in the [string example](/learn/engine/datatypes#examples).
## Objects
When a document field contains an object, Meilisearch flattens it and brings the object's keys and values to the root level of the document itself.
Keep in mind that the flattened objects represented here are an intermediary snapshot of internal processes. When searching, the returned document will keep its original structure.
In the example below, the `patient_name` key contains an object:
```json theme={null}
{
"id": 0,
"patient_name": {
"forename": "Imogen",
"surname": "Temult"
}
}
```
During indexing, Meilisearch uses dot notation to eliminate nested fields:
```json theme={null}
{
"id": 0,
"patient_name.forename": "Imogen",
"patient_name.surname": "Temult"
}
```
Using dot notation, no information is lost when flattening nested objects, regardless of nesting depth.
Imagine that the example document above includes an additional object, `address`, containing home and work addresses, each of which are objects themselves. After flattening, the document would look like this:
```json theme={null}
{
"id": 0,
"patient_name.forename": "Imogen",
"patient_name.surname": "Temult",
"address.home.street": "Largo Isarco, 2",
"address.home.postcode": "20139",
"address.home.city": "Milano",
"address.work.street": "Ca' Corner Della Regina, 2215",
"address.work.postcode": "30135",
"address.work.city": "Venezia"
}
```
Meilisearch's internal flattening process also eliminates nesting in arrays of objects. In this case, values are grouped by key. Consider the following document:
```json theme={null}
{
"id": 0,
"patient_name": "Imogen Temult",
"appointments": [
{
"date": "2022-01-01",
"doctor": "Jester Lavorre",
"ward": "psychiatry"
},
{
"date": "2019-01-01",
"doctor": "Dorian Storm"
}
]
}
```
After flattening, it would look like this:
```json theme={null}
{
"id": 0,
"patient_name": "Imogen Temult",
"appointments.date": [
"2022-01-01",
"2019-01-01"
],
"appointments.doctor": [
"Jester Lavorre",
"Dorian Storm"
],
"appointments.ward": [
"psychiatry"
]
}
```
Once all objects inside a document have been flattened, Meilisearch will continue processing it as described in the previous sections. For example, arrays will be flattened, and numeric and boolean values will be turned into strings.
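The flattening rules described in this section can be sketched as a short recursive function (an illustration of the documented behavior, not Meilisearch's internal code):

```python
def flatten(doc: dict, prefix: str = "") -> dict:
    """Flatten nested objects with dot notation; group arrays of
    objects by key, as described above."""
    flat = {}
    for key, value in doc.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # nested object: recurse with a dotted prefix
            flat.update(flatten(value, name + "."))
        elif isinstance(value, list) and value and all(isinstance(v, dict) for v in value):
            # array of objects: collect each subfield's values into a list
            for obj in value:
                for subkey, subvalue in flatten(obj).items():
                    flat.setdefault(f"{name}.{subkey}", []).append(subvalue)
        else:
            flat[name] = value
    return flat
```

Running it on the appointments document above yields `appointments.date`, `appointments.doctor`, and `appointments.ward` fields grouped by key, exactly as shown.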
### Nested document querying and subdocuments
Meilisearch has no concept of subdocuments and cannot perform nested document querying. In the previous example, the relationship between an appointment's date and doctor is lost when flattening the `appointments` array:
```json theme={null}
…
"appointments.date": [
"2022-01-01",
"2019-01-01"
],
"appointments.doctor": [
"Jester Lavorre",
"Dorian Storm"
],
…
```
This may lead to unexpected behavior during search. The following dataset shows two patients and their respective appointments:
```json theme={null}
[
{
"id": 0,
"patient_name": "Imogen Temult",
"appointments": [
{
"date": "2022-01-01",
"doctor": "Jester Lavorre"
}
]
},
{
"id": 1,
"patient_name": "Caleb Widowgast",
"appointments": [
{
"date": "2022-01-01",
"doctor": "Dorian Storm"
},
{
"date": "2023-01-01",
"doctor": "Jester Lavorre"
}
]
}
]
```
The following query returns patients `0` and `1`:
```sh theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/clinic_patients/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "",
"filter": "(appointments.date = 2022-01-01 AND appointments.doctor = '\''Jester Lavorre'\'')"
}'
```
Meilisearch is unable to return only patients who had an appointment with `Jester Lavorre` on `2022-01-01`. Instead, it returns patients who had an appointment with `Jester Lavorre`, and patients who had an appointment on `2022-01-01`.
The best way to work around this limitation is reformatting your data. The above example could be fixed by merging appointment data in a new `appointmentsMerged` field so the relationship between appointment and doctor remains intact:
```json theme={null}
[
{
"id": 0,
"patient_name": "Imogen Temult",
"appointmentsMerged": [
"2022-01-01 Jester Lavorre"
]
},
{
"id": 1,
"patient_name": "Caleb Widowgast",
"appointmentsMerged": [
"2023-01-01 Jester Lavorre",
"2022-01-01 Dorian Storm"
]
}
]
```
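This reformatting is a pre-processing step applied before sending documents to Meilisearch. A minimal sketch, using the `appointments` and `appointmentsMerged` field names from the example above:

```python
def merge_appointments(patient: dict) -> dict:
    """Replace the appointments array with merged date-doctor strings
    so each appointment survives flattening as a single value."""
    merged = dict(patient)
    appointments = merged.pop("appointments", [])
    merged["appointmentsMerged"] = [
        f"{a['date']} {a['doctor']}" for a in appointments
    ]
    return merged
```

After indexing the transformed documents, a filter like `appointmentsMerged = '2022-01-01 Jester Lavorre'` matches only patients who saw that doctor on that date.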
### Updating object fields
Object fields cannot be partially updated. Updating an object field with either the `PUT` or `POST` routes with an object fully replaces that value and removes any omitted subfields. Dot notation is also not supported when updating a document.
## Possible tokenization issues
Even if it behaves exactly as expected, the tokenization process may lead to counterintuitive results in some cases, such as:
```
"S.O.S"
"George R. R. Martin"
10,3
```
In the first two strings, the full stops `.` will be considered as hard spaces.
`10,3` will be broken into two strings—`10` and `3`—instead of being processed as a numeric type.
# Prefix search
Source: https://www.meilisearch.com/docs/learn/engine/prefix
Prefix search is a core part of Meilisearch's design and allows users to receive results even when their query only contains a single letter.
In Meilisearch, **you can perform a search with only a single letter as your query**. This is because we follow the philosophy of **prefix search**.
Prefix search is when document sorting starts by comparing the search query against the beginning of each word in your dataset. All documents with words that match the query term are added to the [bucket sort](https://en.wikipedia.org/wiki/Bucket_sort), before the [ranking rules](/learn/relevancy/ranking_rules) are applied sequentially.
In other words, prefix search means that it's not necessary to type a word in its entirety to find documents containing that word—you can just type the first one or two letters.
Prefix search is only performed on the last word in a search query—prior words must be typed out fully to get accurate results.
Searching by prefix (rather than using complete words) has a significant impact on search time. The shorter the query term, the more possible matches in the dataset.
### Example
Given a set of words in a dataset:
`film` `cinema` `movies` `show` `harry` `potter` `shine` `musical`
query: `s`:
response:
* `show`
* `shine`
but not
* `movies`
* `musical`
query: `sho`:
response:
* `show`
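The matching behavior in this example boils down to a prefix check over the dataset's vocabulary (a sketch that ignores typo tolerance and ranking):

```python
def prefix_candidates(term: str, vocabulary: list[str]) -> list[str]:
    """Words in the dataset vocabulary matching the query term as a prefix."""
    return sorted(word for word in vocabulary if word.startswith(term))
```

`prefix_candidates("s", vocabulary)` returns `shine` and `show` but not `movies` or `musical`, since only the beginning of each word is compared.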
Meilisearch also handles typos while performing the prefix search. You can [read more about the typo rules on the dedicated page](/learn/relevancy/typo_tolerance_settings).
We also [apply splitting and concatenating on search queries](/learn/engine/concat).
# Storage
Source: https://www.meilisearch.com/docs/learn/engine/storage
Learn about how Meilisearch stores and handles data in its LMDB storage engine.
Meilisearch is in many ways a database: it stores indexed documents along with the data needed to return relevant search results.
## Database location
Meilisearch creates the database the moment you first launch an instance. By default, you can find it inside a `data.ms` folder located in the same directory as the `meilisearch` binary.
The database location can change depending on a number of factors, such as whether you have configured a different database path with the [`--db-path` instance option](/learn/self_hosted/configure_meilisearch_at_launch#database-path), or if you're using an OS virtualization tool like [Docker](https://docker.com).
## LMDB
Creating a database from scratch and managing it is hard work. It would make no sense to try and reinvent the wheel, so Meilisearch uses a storage engine under the hood. This allows the Meilisearch team to focus on improving search relevancy and search performance while abstracting away the complicated task of creating, reading, and updating documents on disk and in memory.
Our storage engine is called [Lightning Memory-Mapped Database](http://www.lmdb.tech/doc/) (LMDB for short). LMDB is a transactional key-value store written in C that was developed for OpenLDAP and has ACID properties. Though we considered other options, such as [Sled](https://github.com/spacejam/sled) and [RocksDB](https://rocksdb.org/), we chose LMDB because it provided us with the best combination of performance, stability, and features.
### Memory mapping
LMDB stores its data in a [memory-mapped file](https://en.wikipedia.org/wiki/Memory-mapped_file). All data fetched from LMDB is returned straight from the memory map, which means there is no memory allocation or memory copy during data fetches.
All documents stored on disk are automatically loaded in memory when Meilisearch asks for them. This ensures LMDB will always make the best use of the RAM available to retrieve the documents.
For best performance, Meilisearch works optimally when the full dataset fits in RAM. In practice, however, we consistently observe that a **RAM‑to‑disk ratio around 1/3 does not materially impact performance**, and for many workloads even \~1/10 works well. The effective memory requirement is highly use‑case‑dependent and varies with search and indexing pressure. RAM can be increased later to unlock more performance, but Meilisearch will not crash simply because the dataset size on disk exceeds the available RAM.
Disk latency is also important for performance: using a **low‑latency disk** (for example, an NVMe SSD) will give better results than a **high‑latency disk** (for example, HDD, NFS, or other network‑mounted storage).
### Understanding LMDB
The choice of LMDB comes with certain pros and cons, especially regarding database size and memory usage. We summarize the most important aspects of LMDB here, but check out this [blog post by LMDB's developers](https://www.symas.com/post/understanding-lmdb-database-file-sizes-and-memory-utilization) for more in-depth information.
#### Database size
When deleting documents from a Meilisearch index, you may notice disk space usage remains the same. This happens because LMDB internally marks that space as free, but does not make it available for the operating system at large. This design choice leads to better performance, as there is no need for periodic compaction operations. As a result, disk space occupied by LMDB (and thus by Meilisearch) tends to increase over time. It is not possible to calculate the precise maximum amount of space a Meilisearch instance can occupy.
#### Memory usage
Since LMDB is memory mapped, it is the operating system that manages the real memory allocated (or not) to Meilisearch.
Thus, if you run Meilisearch as a standalone program on a server, LMDB will use as much RAM as it can. More RAM means more of the dataset stays in cache and fewer reads hit disk, but a [RAM‑to‑disk ratio of around 1/3 does not materially impact performance](#memory-mapping) for most workloads.
On the other hand, if you run Meilisearch along with other programs, the OS will manage memory based on everyone's needs. This makes Meilisearch's memory usage quite flexible when used in development.
**Virtual Memory != Real Memory**
Virtual memory is the disk space a program requests from the OS. It is not the memory that the program will actually use.
Meilisearch will always demand a certain amount of space to use as a [memory map](#memory-mapping). This space will be used as virtual memory, but the amount of real memory (RAM) used will be much smaller.
## Measured disk usage
The following measurements were taken using `movies.json`, a 9.1 MB JSON dataset containing 19,553 documents.
After indexing, the dataset takes up about 224 MB on disk.
| Raw JSON | Meilisearch database size on disk | RAM usage | Virtual memory usage |
| :------- | :-------------------------------- | :-------- | :------------------- |
| 9.1 MB | 224 MB | ≃ 305 MB | 205 GB (memory map) |
This means the database is using **305 MB of RAM and 224 MB of disk space.** Note that [virtual memory](https://www.enterprisestorageforum.com/hardware/virtual-memory/) **refers only to disk space allocated by your computer for Meilisearch—it does not mean that it's actually in use by the database.** See [Memory Usage](#memory-usage) for more details.
These metrics are highly dependent on the machine that is running Meilisearch. Running this test on significantly underpowered machines is likely to give different results.
It is important to note that **there is no reliable way to predict the final size of a database**. This is true for just about any search engine on the market—we're just the only ones saying it out loud.
Database size is affected by a large number of criteria, including settings, relevancy rules, use of facets, the number of different languages present, and more.
# Filter expression reference
Source: https://www.meilisearch.com/docs/learn/filtering_and_sorting/filter_expression_reference
The `filter` search parameter expects a filter expression. Filter expressions are made of attributes, values, and several operators.
`filter` expects a **filter expression** containing one or more **conditions**. A filter expression can be written as a string, array, or mix of both.
## Data types
Filters accept numeric and string values. Empty fields or fields containing an empty array will be ignored.
Filters do not work with [`NaN`](https://en.wikipedia.org/wiki/NaN) and infinite values such as `inf` and `-inf` as they are [not supported by JSON](https://en.wikipedia.org/wiki/JSON#Data_types). It is possible to filter infinite and `NaN` values if you parse them as strings, except when handling [`_geo` fields](/learn/filtering_and_sorting/geosearch#preparing-documents-for-location-based-search).
For best results, enforce homogeneous typing across fields, especially when dealing with large numbers. Meilisearch does not enforce a specific schema when indexing data, but the filtering engine may coerce the type of `value`. This can lead to undefined behavior, such as when big floating-point numbers are coerced into integers.
## Conditions
Conditions are a filter's basic building blocks. They are written in the `attribute OPERATOR value` format, where:
* `attribute` is the attribute of the field you want to filter on
* `OPERATOR` can be `=`, `!=`, `>`, `>=`, `<`, `<=`, `TO`, `EXISTS`, `IN`, `NOT`, `AND`, or `OR`
* `value` is the value the `OPERATOR` should look for in the `attribute`
### Examples
A basic condition requesting movies whose `genres` attribute is equal to `horror`:
```
genres = horror
```
String values containing whitespace must be enclosed in single or double quotes:
```
director = 'Jordan Peele'
director = "Tim Burton"
```
## Filter operators
### Equality (`=`)
The equality operator (`=`) returns all documents containing a specific value for a given attribute:
```
genres = action
```
When operating on strings, `=` is case-insensitive.
The equality operator does not return any results for `null` and empty arrays.
### Inequality (`!=`)
The inequality operator (`!=`) returns all documents not selected by the equality operator. When operating on strings, `!=` is case-insensitive.
The following expression returns all movies without the `action` genre:
```
genres != action
```
### Comparison (`>`, `<`, `>=`, `<=`)
The comparison operators (`>`, `<`, `>=`, `<=`) select documents satisfying a comparison. Comparison operators apply to both numerical and string values.
The expression below returns all documents with a user rating above 85:
```
rating.users > 85
```
String comparisons resolve in lexicographic order: symbols followed by numbers followed by letters in alphabetic order. The expression below returns all documents released after the first day of 2004:
```
release_date > 2004-01-01
```
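Python's default string ordering follows the same ASCII-based rule, which can illustrate how such comparisons resolve (hypothetical values for illustration):

```python
# lexicographic order: symbols sort before numbers, numbers before letters
values = ["beta", "2004-01-01", "#special", "1999-12-31"]
ordered = sorted(values)
# ordered: ["#special", "1999-12-31", "2004-01-01", "beta"]

# dates stored as zero-padded YYYY-MM-DD strings compare correctly
later = "2004-06-15" > "2004-01-01"
```

This is also why the `release_date > 2004-01-01` comparison works on date strings: zero-padded `YYYY-MM-DD` values sort chronologically under lexicographic order.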
### `TO`
`TO` is equivalent to `>= AND <=`. The following expression returns all documents with a rating of 80 or above but below 90:
```
rating.users 80 TO 89
```
### `EXISTS`
The `EXISTS` operator checks for the existence of a field. Fields with empty or `null` values count as existing.
The following expression returns all documents containing the `release_date` field:
```
release_date EXISTS
```
The negated form of the above expression can be written in two equivalent ways:
```
release_date NOT EXISTS
NOT release_date EXISTS
```
#### Vector filters
When using AI-powered search, you may also use `EXISTS` to filter documents containing vector data:
* `_vectors EXISTS`: matches all documents with an embedding
* `_vectors.{embedder_name} EXISTS`: matches all documents with an embedding for the given embedder
* `_vectors.{embedder_name}.userProvided EXISTS`: matches all documents with a user-provided embedding on the given embedder
* `_vectors.{embedder_name}.documentTemplate EXISTS`: matches all documents with an embedding generated from a document template. Excludes user-provided embeddings
* `_vectors.{embedder_name}.regenerate EXISTS`: matches all documents with an embedding scheduled for regeneration
* `_vectors.{embedder_name}.fragments.{fragment_name} EXISTS`: matches all documents with an embedding generated from the given multimodal fragment. Excludes user-provided embeddings
`_vectors` is only compatible with the `EXISTS` operator.
### `IS EMPTY`
The `IS EMPTY` operator selects documents in which the specified attribute exists but contains empty values. The following expression only returns documents with an empty `overview` field:
```
overview IS EMPTY
```
`IS EMPTY` matches the following JSON values:
* `""`
* `[]`
* `{}`
Meilisearch does not treat `null` values as empty. To match `null` fields, use the [`IS NULL`](#is-null) operator.
Use `NOT` to build the negated form of `IS EMPTY`:
```
overview IS NOT EMPTY
NOT overview IS EMPTY
```
### `IS NULL`
The `IS NULL` operator selects documents in which the specified attribute exists but contains a `null` value. The following expression only returns documents with a `null` `overview` field:
```
overview IS NULL
```
Use `NOT` to build the negated form of `IS NULL`:
```
overview IS NOT NULL
NOT overview IS NULL
```
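The distinction between `IS EMPTY` and `IS NULL` maps onto JSON values as follows (a sketch of the matching rules above, not Meilisearch code):

```python
def matches_is_empty(value) -> bool:
    """True for "", [] and {} -- but not for null."""
    return value == "" or value == [] or value == {}

def matches_is_null(value) -> bool:
    """True only for null values."""
    return value is None
```

A document field can match one, the other, or neither, which is why the two operators exist separately.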
### `IN`
`IN` combines equality operators by taking an array of comma-separated values delimited by square brackets. It selects all documents whose chosen field contains at least one of the specified values.
The following expression returns all documents whose `genres` includes either `horror`, `comedy`, or both:
```
genres IN [horror, comedy]
genres = horror OR genres = comedy
```
The negated form of the above expression can be written as:
```
genres NOT IN [horror, comedy]
NOT genres IN [horror, comedy]
```
### `CONTAINS`
`CONTAINS` filters results containing partial matches to the specified string pattern, similar to a [SQL `LIKE`](https://dev.mysql.com/doc/refman/8.4/en/string-comparison-functions.html#operator_like).
The following expression returns all dairy products whose names contain `"kef"`:
```
dairy_products.name CONTAINS kef
```
The negated form of the above expression can be written as:
```
dairy_products.name NOT CONTAINS kef
NOT dairy_products.name CONTAINS kef
```
This is an experimental feature. Use the experimental features endpoint to activate it:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/experimental-features/' \
-H 'Content-Type: application/json' \
--data-binary '{
"containsFilter": true
}'
```
### `STARTS WITH`
`STARTS WITH` filters results whose values start with the specified string pattern.
The following expression returns all dairy products whose names start with `"kef"`:
```
dairy_products.name STARTS WITH kef
```
The negated form of the above expression can be written as:
```
dairy_products.name NOT STARTS WITH kef
NOT dairy_products.name STARTS WITH kef
```
### `NOT`
The negation operator (`NOT`) selects all documents that do not satisfy a condition. It has higher precedence than `AND` and `OR`.
The following expression will return all documents whose `genres` does not contain `horror` and documents with a missing `genres` field:
```
NOT genres = horror
```
## Filter expressions
You can build filter expressions by grouping basic conditions using `AND` and `OR`. Filter expressions can be written as strings, arrays, or a mix of both.
### Filter expression grouping operators
#### `AND`
`AND` connects two conditions and only returns documents that satisfy both of them. `AND` has higher precedence than `OR`.
The following expression returns all documents matching both conditions:
```
genres = horror AND director = 'Jordan Peele'
```
#### `OR`
`OR` connects two conditions and returns results that satisfy at least one of them.
The following expression returns documents matching either condition:
```
genres = horror OR genres = comedy
```
### Creating filter expressions with string operators and parentheses
Meilisearch reads string expressions from left to right. You can use parentheses to ensure expressions are correctly parsed.
For instance, if you want your results to only include `comedy` and `horror` documents released after March 1995, the parentheses in the following query are mandatory:
```
(genres = horror OR genres = comedy) AND release_date > 795484800
```
Failing to add these parentheses will cause the same query to be parsed as:
```
genres = horror OR (genres = comedy AND release_date > 795484800)
```
Translated into English, the above expression will only return comedies released after March 1995 or horror movies regardless of their `release_date`.
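The difference can be mirrored with Python booleans, where `and` also binds tighter than `or` (hypothetical movie values for illustration):

```python
def without_parens(genres: set, release_date: int) -> bool:
    # parsed as: genres = horror OR (genres = comedy AND release_date > 795484800)
    return "horror" in genres or ("comedy" in genres and release_date > 795484800)

def with_parens(genres: set, release_date: int) -> bool:
    # (genres = horror OR genres = comedy) AND release_date > 795484800
    return ("horror" in genres or "comedy" in genres) and release_date > 795484800
```

A horror movie from 1980 (timestamp `315532800`) matches only the unparenthesized version, showing why the parentheses are mandatory.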
When creating an expression with a field name or value identical to a filter operator such as `AND` or `NOT`, you must wrap it in quotation marks: `title = "NOT" OR title = "AND"`.
### Creating filter expressions with arrays
Array expressions establish logical connectives by nesting arrays of strings. **Array filters can have a maximum depth of two.** Expressions with three or more levels of nesting will throw an error.
Outer array elements are connected by an `AND` operator. The following expression returns `horror` movies directed by `Jordan Peele`:
```
["genres = horror", "director = 'Jordan Peele'"]
```
Inner array elements are connected by an `OR` operator. The following expression returns either `horror` or `comedy` films:
```
[["genres = horror", "genres = comedy"]]
```
Inner and outer arrays can be freely combined. The following expression returns both `horror` and `comedy` movies directed by `Jordan Peele`:
```
[["genres = horror", "genres = comedy"], "director = 'Jordan Peele'"]
```
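The mapping between the two syntaxes can be sketched as a small conversion function (illustrative only, not part of any Meilisearch SDK):

```python
def array_filter_to_string(filter_expression: list) -> str:
    """Convert an array filter into the equivalent string expression:
    outer elements join with AND, inner arrays join with OR."""
    def clause(element):
        if isinstance(element, list):
            return "(" + " OR ".join(element) + ")"
        return element
    return " AND ".join(clause(e) for e in filter_expression)
```

The last array example above converts to `(genres = horror OR genres = comedy) AND director = 'Jordan Peele'`.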
### Combining arrays and string operators
You can also create filter expressions that use both array and string syntax.
The following filter is written as a string and only returns movies not directed by `Jordan Peele` that belong to the `comedy` or `horror` genres:
```
"(genres = comedy OR genres = horror) AND director != 'Jordan Peele'"
```
You can write the same filter mixing arrays and strings:
```
[["genres = comedy", "genres = horror"], "NOT director = 'Jordan Peele'"]
```
# Filter search results
Source: https://www.meilisearch.com/docs/learn/filtering_and_sorting/filter_search_results
In this guide you will see how to configure and use Meilisearch filters in a hypothetical movie database.
## Configure index settings
Suppose you have a collection of movies called `movie_ratings` containing the following fields:
```json theme={null}
[
{
"id": 458723,
"title": "Us",
"director": "Jordan Peele",
"release_date": 1552521600,
"genres": [
"Thriller",
"Horror",
"Mystery"
],
"rating": {
"critics": 86,
"users": 73
},
},
…
]
```
If you want to filter results based on an attribute, you must first add it to the `filterableAttributes` list:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/movie_ratings/settings/filterable-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"genres",
"director",
"release_date",
"rating"
]'
```
```javascript JS theme={null}
client.index('movies')
.updateFilterableAttributes([
'director',
'genres'
])
```
```python Python theme={null}
client.index('movies').update_filterable_attributes([
'director',
'genres',
])
```
```php PHP theme={null}
$client->index('movies')->updateFilterableAttributes(['director', 'genres']);
```
```java Java theme={null}
client.index("movies").updateFilterableAttributesSettings(new String[]
{
"genres",
"director"
});
```
```ruby Ruby theme={null}
client.index('movies').update_filterable_attributes([
'director',
'genres'
])
```
```go Go theme={null}
resp, err := client.Index("movies").UpdateFilterableAttributes(&[]interface{}{
"director",
"genres",
})
```
```csharp C# theme={null}
await client.Index("movies").UpdateFilterableAttributesAsync(new [] { "director", "genres" });
```
```rust Rust theme={null}
let task: TaskInfo = client
.index("movies")
.set_filterable_attributes(["director", "genres"])
.await
.unwrap();
```
```swift Swift theme={null}
client.index("movies").updateFilterableAttributes(["genre", "director"]) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movies').updateFilterableAttributes([
'director',
'genres',
]);
```
**This step is mandatory and cannot be done at search time**. Updating `filterableAttributes` requires Meilisearch to re-index all your data, which will take an amount of time proportionate to your dataset size and complexity.
By default, `filterableAttributes` is empty. Filters do not work without first explicitly adding attributes to the `filterableAttributes` list.
## Use `filter` when searching
After updating the [`filterableAttributes` index setting](/reference/api/settings/get-filterableattributes), you can use `filter` to fine-tune your search results.
`filter` is a search parameter you may use at search time. `filter` accepts [filter expressions](/learn/filtering_and_sorting/filter_expression_reference) built using any attributes present in the `filterableAttributes` list.
The following code sample returns `Avengers` movies released after 18 March 1995:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movie_ratings/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "Avengers",
"filter": "release_date > 795484800"
}'
```
```javascript JS theme={null}
client.index('movie_ratings').search('Avengers', {
filter: 'release_date > 795484800'
})
```
```python Python theme={null}
client.index('movie_ratings').search('Avengers', {
'filter': 'release_date > 795484800'
})
```
```php PHP theme={null}
$client->index('movie_ratings')->search('Avengers', [
'filter' => 'release_date > 795484800'
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("Avengers").filter(new String[] {"release_date > \"795484800\""}).build();
client.index("movie_ratings").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('movie_ratings').search('Avengers', { filter: 'release_date > 795484800' })
```
```go Go theme={null}
resp, err := client.Index("movie_ratings").Search("Avengers", &meilisearch.SearchRequest{
Filter: "release_date > \"795484800\"",
})
```
```csharp C# theme={null}
SearchQuery filters = new SearchQuery() { Filter = "release_date > \"795484800\"" };
var movies = await client.Index("movie_ratings").SearchAsync("Avengers", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("movie_ratings")
.search()
.with_query("Avengers")
.with_filter("release_date > 795484800")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "Avengers",
filter: "release_date > 795484800"
)
client.index("movie_ratings").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movie_ratings').search(
'Avengers',
SearchQuery(
filterExpression: Meili.gt(
Meili.attr('release_date'),
DateTime.utc(1995, 3, 18).toMeiliValue(),
),
),
);
```
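Filters compare `release_date` as a plain number, so the cutoff `795484800` is simply the Unix timestamp for 18 March 1995. A quick way to compute such values (a Python sketch; the field name comes from this guide's dataset):
```python
from datetime import datetime, timezone

# 18 March 1995 at midnight UTC as a Unix timestamp
cutoff = int(datetime(1995, 3, 18, tzinfo=timezone.utc).timestamp())
print(cutoff)  # 795484800

# The value can then be interpolated into a filter expression
filter_expression = f"release_date > {cutoff}"
```
Storing dates as timestamps keeps them filterable and sortable with plain numeric comparisons.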
Use dot notation to filter results based on a document's [nested fields](/learn/engine/datatypes). The following query only returns thrillers with good user reviews:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movie_ratings/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "thriller",
"filter": "rating.users >= 90"
}'
```
```javascript JS theme={null}
client.index('movie_ratings').search('thriller', {
filter: 'rating.users >= 90'
})
```
```python Python theme={null}
client.index('movie_ratings').search('thriller', {
'filter': 'rating.users >= 90'
})
```
```php PHP theme={null}
$client->index('movie_ratings')->search('thriller', [
'filter' => 'rating.users >= 90'
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("thriller").filter(new String[] {"rating.users >= 90"}).build();
client.index("movie_ratings").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('movie_ratings').search('thriller', {
filter: 'rating.users >= 90'
})
```
```go Go theme={null}
resp, err := client.Index("movie_ratings").Search("thriller", &meilisearch.SearchRequest{
Filter: "rating.users >= 90",
})
```
```csharp C# theme={null}
var filters = new SearchQuery() { Filter = "rating.users >= 90" };
var movies = await client.Index("movie_ratings").SearchAsync("thriller", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("movie_ratings")
.search()
.with_query("thriller")
.with_filter("rating.users >= 90")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "thriller",
filter: "rating.users >= 90"
)
client.index("movie_ratings").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movie_ratings').search(
'thriller',
SearchQuery(
filterExpression: Meili.gte(
//or Meili.attr('rating.users')
//or 'rating.users'.toMeiliAttribute()
Meili.attrFromParts(['rating', 'users']),
Meili.value(90),
),
),
);
```
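Dot notation simply walks nested objects in a document. A minimal sketch of how an expression like `rating.users >= 90` resolves against a document (the document shape here is hypothetical, based on the query above):
```python
from functools import reduce

# Hypothetical document with a nested `rating` object
document = {
    "title": "Heat",
    "rating": {"users": 92, "critics": 80},
}

def get_nested(doc: dict, path: str):
    """Resolve a dot-separated attribute path such as 'rating.users'."""
    return reduce(lambda obj, key: obj[key], path.split("."), doc)

# Mirrors the filter `rating.users >= 90`
matches = get_nested(document, "rating.users") >= 90
```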
You can also combine multiple conditions. For example, you can limit your search so it only includes `Batman` movies directed by either `Tim Burton` or `Christopher Nolan`:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movie_ratings/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "Batman",
"filter": "release_date > 795484800 AND (director = \"Tim Burton\" OR director = \"Christopher Nolan\")"
}'
```
```javascript JS theme={null}
client.index('movie_ratings').search('Batman', {
filter: 'release_date > 795484800 AND (director = "Tim Burton" OR director = "Christopher Nolan")'
})
```
```python Python theme={null}
client.index('movie_ratings').search('Batman', {
'filter': 'release_date > 795484800 AND (director = "Tim Burton" OR director = "Christopher Nolan")'
})
```
```php PHP theme={null}
$client->index('movie_ratings')->search('Batman', [
'filter' => 'release_date > 795484800 AND (director = "Tim Burton" OR director = "Christopher Nolan")'
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("Batman").filter(new String[] {"release_date > 795484800 AND (director = \"Tim Burton\" OR director = \"Christopher Nolan\")"}).build();
client.index("movie_ratings").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('movie_ratings').search('Batman', {
filter: 'release_date > 795484800 AND (director = "Tim Burton" OR director = "Christopher Nolan")'
})
```
```go Go theme={null}
resp, err := client.Index("movie_ratings").Search("Batman", &meilisearch.SearchRequest{
Filter: "release_date > 795484800 AND (director = \"Tim Burton\" OR director = \"Christopher Nolan\")",
})
```
```csharp C# theme={null}
SearchQuery filters = new SearchQuery() { Filter = "release_date > 795484800 AND (director = \"Tim Burton\" OR director = \"Christopher Nolan\")" };
var movies = await client.Index("movie_ratings").SearchAsync("Batman", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("movie_ratings")
.search()
.with_query("Batman")
.with_filter(r#"release_date > 795484800 AND (director = "Tim Burton" OR director = "Christopher Nolan")"#)
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "Batman",
filter: "release_date > 795484800 AND (director = \"Tim Burton\" OR director = \"Christopher Nolan\")"
)
client.index("movie_ratings").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movie_ratings').search(
'Batman',
SearchQuery(
filterExpression: Meili.and([
Meili.attr('release_date')
.gt(DateTime.utc(1995, 3, 18).toMeiliValue()),
Meili.or([
'director'.toMeiliAttribute().eq('Tim Burton'.toMeiliValue()),
'director'
.toMeiliAttribute()
.eq('Christopher Nolan'.toMeiliValue()),
]),
]),
),
);
```
Here, the parentheses are mandatory: without them, the filter would return movies directed by `Tim Burton` and released after 1995, as well as any film directed by `Christopher Nolan` regardless of its release date. This happens because `AND` takes precedence over `OR`.
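This precedence rule can be checked with plain booleans (a sketch using hypothetical document values):
```python
# A hypothetical pre-1995 movie directed by Christopher Nolan
release_date, director = 700_000_000, "Christopher Nolan"

# With parentheses: the release date constraint applies to both directors
with_parens = release_date > 795484800 and (
    director == "Tim Burton" or director == "Christopher Nolan"
)

# Without parentheses: AND binds tighter than OR, so any Nolan film matches
without_parens = (
    release_date > 795484800 and director == "Tim Burton"
) or director == "Christopher Nolan"

print(with_parens, without_parens)  # False True
```
The two filters disagree on this document, which is exactly why the parentheses matter.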
If you only want recent `Planet of the Apes` movies that weren't directed by `Tim Burton`, you can use this filter:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movie_ratings/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "Planet of the Apes",
"filter": "release_date > 1577884550 AND (NOT director = \"Tim Burton\")"
}'
```
```javascript JS theme={null}
client.index('movie_ratings').search('Planet of the Apes', {
filter: "release_date > 1577884550 AND (NOT director = \"Tim Burton\")"
})
```
```python Python theme={null}
client.index('movie_ratings').search('Planet of the Apes', {
'filter': 'release_date > 1577884550 AND (NOT director = "Tim Burton")'
})
```
```php PHP theme={null}
$client->index('movie_ratings')->search('Planet of the Apes', [
'filter' => 'release_date > 1577884550 AND (NOT director = "Tim Burton")'
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("Planet of the Apes").filter(new String[] {"release_date > 1577884550 AND (NOT director = \"Tim Burton\")"}).build();
client.index("movie_ratings").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('movie_ratings').search('Planet of the Apes', {
filter: "release_date > 1577884550 AND (NOT director = \"Tim Burton\")"
})
```
```go Go theme={null}
resp, err := client.Index("movie_ratings").Search("Planet of the Apes", &meilisearch.SearchRequest{
Filter: "release_date > 1577884550 AND (NOT director = \"Tim Burton\")",
})
```
```csharp C# theme={null}
SearchQuery filters = new SearchQuery() { Filter = "release_date > 1577884550 AND (NOT director = \"Tim Burton\")" };
var movies = await client.Index("movie_ratings").SearchAsync("Planet of the Apes", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("movie_ratings")
.search()
.with_query("Planet of the Apes")
.with_filter(r#"release_date > 1577884550 AND (NOT director = "Tim Burton")"#)
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "Planet of the Apes",
filter: "release_date > 1577884550 AND (NOT director = \"Tim Burton\")"
)
client.index("movie_ratings").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movie_ratings').search(
'Planet of the Apes',
SearchQuery(
filterExpression: Meili.and([
Meili.attr('release_date')
.gt(DateTime.utc(2020, 1, 1, 13, 15, 50).toMeiliValue()),
Meili.not(
Meili.attr('director').eq("Tim Burton".toMeiliValue()),
),
]),
),
);
```
Note that `NOT director = "Tim Burton"` also matches documents that have no `director` field at all. To restrict results to documents that do have the field, combine the negation with `EXISTS`:
```
release_date > 1577884550 AND (NOT director = "Tim Burton" AND director EXISTS)
```
[Synonyms](/learn/relevancy/synonyms) don't apply to filters: if you have `SF` and `San Francisco` set as synonyms, filtering by `SF` and filtering by `San Francisco` will return different results.
# Geosearch
Source: https://www.meilisearch.com/docs/learn/filtering_and_sorting/geosearch
Filter and sort search results based on their geographic location.
Meilisearch allows you to filter and sort results based on their geographic location. This can be useful when you only want results within a specific area or when sorting results based on their distance from a specific location.
## Preparing documents for location-based search
To start filtering documents based on their geographic location, you must make sure they contain a valid `_geo` or `_geojson` field. If you also want to sort documents geographically, they must have a valid `_geo` field.
`_geo` and `_geojson` are reserved fields. If you include one of them in your documents, Meilisearch expects its value to conform to a specific format.
When using JSON and NDJSON, `_geo` must contain an object with two keys, `lat` and `lng`. Each must hold either a floating-point number or a string, indicating latitude and longitude respectively:
```json theme={null}
{
…
"_geo": {
"lat": 0.0,
"lng": "0.0"
}
}
```
`_geojson` must be an object whose contents follow the [GeoJSON specification](https://geojson.org/):
```json theme={null}
{
…
"_geojson": {
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [0.0, 0.0]
}
}
}
```
Meilisearch does not support transmeridian shapes, that is, polygons or lines that cross the 180th meridian. If your document includes a transmeridian shape, split it into two separate shapes grouped as a `MultiPolygon` or `MultiLineString`.
**Meilisearch does not support polygons with holes**. If your polygon consists of an external ring and an inner empty space, Meilisearch ignores the hole and treats the polygon as a solid shape.
### Using `_geo` and `_geojson` together
If your application requires both sorting by distance to a point and filtering by shapes other than a circle or a rectangle, you will need to add both `_geo` and `_geojson` to your documents.
When handling documents with both fields, Meilisearch:
* Ignores `_geojson` values when sorting
* Ignores `_geo` values when filtering with `_geoPolygon`
* Matches both `_geo` and `_geojson` values when filtering with `_geoRadius` and `_geoBoundingBox`
### Examples
Suppose we have a JSON array containing a few restaurants:
```json theme={null}
[
{
"id": 1,
"name": "Nàpiz' Milano",
"address": "Viale Vittorio Veneto, 30, 20124, Milan, Italy",
"type": "pizza",
"rating": 9
},
{
"id": 2,
"name": "Bouillon Pigalle",
"address": "22 Bd de Clichy, 75018 Paris, France",
"type": "french",
"rating": 8
},
{
"id": 3,
"name": "Artico Gelateria Tradizionale",
"address": "Via Dogana, 1, 20123 Milan, Italy",
"type": "ice cream",
"rating": 10
}
]
```
Our restaurant dataset looks like this once we add `_geo` data:
```json theme={null}
[
{
"id": 1,
"name": "Nàpiz' Milano",
"address": "Viale Vittorio Veneto, 30, 20124, Milan, Italy",
"type": "pizza",
"rating": 9,
"_geo": {
"lat": 45.4777599,
"lng": 9.1967508
}
},
{
"id": 2,
"name": "Bouillon Pigalle",
"address": "22 Bd de Clichy, 75018 Paris, France",
"type": "french",
"rating": 8,
"_geo": {
"lat": 48.8826517,
"lng": 2.3352748
}
},
{
"id": 3,
"name": "Artico Gelateria Tradizionale",
"address": "Via Dogana, 1, 20123 Milan, Italy",
"type": "ice cream",
"rating": 10,
"_geo": {
"lat": 45.4632046,
"lng": 9.1719421
}
}
]
```
Trying to index a dataset with one or more documents containing badly formatted `_geo` values will cause Meilisearch to throw an [`invalid_document_geo_field`](/reference/errors/error_codes#invalid_document_geo_field) error. In this case, the update will fail and no documents will be added or modified.
### Using `_geo` with CSV
If your dataset is formatted as CSV, the file header must have a `_geo` column. Each row in the dataset must then contain a column with a comma-separated string indicating latitude and longitude:
```csv theme={null}
"id:number","name:string","address:string","type:string","rating:number","_geo:string"
"1","Nàpiz Milano","Viale Vittorio Veneto, 30, 20124, Milan, Italy","pizzeria",9,"45.4777599,9.1967508"
"2","Bouillon Pigalle","22 Bd de Clichy, 75018 Paris, France","french",8,"48.8826517,2.3352748"
"3","Artico Gelateria Tradizionale","Via Dogana, 1, 20123 Milan, Italy","ice cream",10,"45.4632046,9.1719421"
```
CSV files do not support the `_geojson` attribute.
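Building the `_geo` column value from a pair of coordinates is a matter of joining latitude and longitude with a comma (an illustrative Python sketch):
```python
def geo_csv_value(lat: float, lng: float) -> str:
    """Format coordinates as the comma-separated string a CSV `_geo` column expects."""
    return f"{lat},{lng}"

print(geo_csv_value(45.4777599, 9.1967508))  # 45.4777599,9.1967508
```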
## Filtering results with `_geoRadius`, `_geoBoundingBox`, and `_geoPolygon`
You can use `_geo` and `_geojson` data to filter queries so you only receive results located within a given geographic area.
### Configuration
To filter results based on their location, you must add `_geo` or `_geojson` to the `filterableAttributes` list:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/restaurants/settings/filterable-attributes' \
-H 'Content-type:application/json' \
--data-binary '["_geo"]'
```
```javascript JS theme={null}
client.index('restaurants')
.updateFilterableAttributes([
'_geo'
])
```
```python Python theme={null}
client.index('restaurants').update_filterable_attributes([
'_geo'
])
```
```php PHP theme={null}
$client->index('restaurants')->updateFilterableAttributes([
'_geo'
]);
```
```java Java theme={null}
Settings settings = new Settings();
settings.setFilterableAttributes(new String[] { "_geo" });
client.index("restaurants").updateSettings(settings);
```
```ruby Ruby theme={null}
client.index('restaurants').update_filterable_attributes(['_geo'])
```
```go Go theme={null}
filterableAttributes := []interface{}{
"_geo",
}
client.Index("restaurants").UpdateFilterableAttributes(&filterableAttributes)
```
```csharp C# theme={null}
List<string> attributes = new() { "_geo" };
TaskInfo result = await client.Index("restaurants").UpdateFilterableAttributesAsync(attributes);
```
```rust Rust theme={null}
let task: TaskInfo = client
.index("restaurants")
.set_filterable_attributes(&["_geo"])
.await
.unwrap();
```
```swift Swift theme={null}
client.index("restaurants").updateFilterableAttributes(["_geo"]) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('restaurants').updateFilterableAttributes(['_geo']);
```
Meilisearch will rebuild your index whenever you update `filterableAttributes`. Depending on the size of your dataset, this might take a considerable amount of time.
[You can read more about configuring `filterableAttributes` in our dedicated filtering guide.](/learn/filtering_and_sorting/filter_search_results)
### Usage
Use the [`filter` search parameter](/reference/api/search/search-with-post#body-filter) along with `_geoRadius` and `_geoBoundingBox`. These are special filter rules that ensure Meilisearch only returns results located within a specific geographic area. If you are using GeoJSON for your documents, you may also filter results with `_geoPolygon`.
### `_geoRadius`
`_geoRadius` selects documents within a circular area, defined by the latitude and longitude of its center and a radius in meters. The fourth parameter, `resolution`, is optional:
```
_geoRadius(lat, lng, distance_in_meters, resolution)
```
### `_geoBoundingBox`
`_geoBoundingBox` selects documents within a rectangular area. The first array is the latitude and longitude of the box's top-right corner, the second its bottom-left corner:
```
_geoBoundingBox([LAT, LNG], [LAT, LNG])
```
### `_geoPolygon`
`_geoPolygon` selects documents inside the polygon described by three or more vertices. It only matches documents with a `_geojson` field:
```
_geoPolygon([LAT, LNG], [LAT, LNG], [LAT, LNG], …)
```
### Examples
Using our example dataset, we can search for places to eat near the center of Milan with `_geoRadius`:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/restaurants/search' \
-H 'Content-type:application/json' \
--data-binary '{ "filter": "_geoRadius(45.472735, 9.184019, 2000)" }'
```
```javascript JS theme={null}
client.index('restaurants').search('', {
filter: ['_geoRadius(45.472735, 9.184019, 2000)'],
})
```
```python Python theme={null}
client.index('restaurants').search('', {
'filter': '_geoRadius(45.472735, 9.184019, 2000)'
})
```
```php PHP theme={null}
$client->index('restaurants')->search('', [
'filter' => '_geoRadius(45.472735, 9.184019, 2000)'
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("").filter(new String[] {"_geoRadius(45.472735, 9.184019, 2000)"}).build();
client.index("restaurants").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('restaurants').search('', { filter: '_geoRadius(45.472735, 9.184019, 2000)' })
```
```go Go theme={null}
resp, err := client.Index("restaurants").Search("", &meilisearch.SearchRequest{
Filter: "_geoRadius(45.472735, 9.184019, 2000)",
})
```
```csharp C# theme={null}
SearchQuery filters = new SearchQuery() { Filter = "_geoRadius(45.472735, 9.184019, 2000)" };
var restaurants = await client.Index("restaurants").SearchAsync("", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("restaurants")
.search()
.with_filter("_geoRadius(45.472735, 9.184019, 2000)")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
filter: "_geoRadius(45.472735, 9.184019, 2000)"
)
client.index("restaurants").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('restaurants').search(
'',
SearchQuery(
filterExpression: Meili.geoRadius(
(lat: 45.472735, lng: 9.184019),
2000,
),
),
);
```
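`_geoRadius` keeps documents whose coordinates fall within `distance_in_meters` of the center point. The selection can be reproduced offline with a haversine check (an illustrative sketch, not Meilisearch's internal implementation):
```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in meters."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

center = (45.472735, 9.184019)  # central Milan, as in the query above

# Both Milan restaurants lie within 2 km; Bouillon Pigalle (Paris) does not
napiz = haversine_m(*center, 45.4777599, 9.1967508)
artico = haversine_m(*center, 45.4632046, 9.1719421)
pigalle = haversine_m(*center, 48.8826517, 2.3352748)
```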
We also make a similar query using `_geoBoundingBox`:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/restaurants/search' \
-H 'Content-type:application/json' \
--data-binary '{ "filter": "_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])" }'
```
```javascript JS theme={null}
client.index('restaurants').search('', {
filter: ['_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])'],
})
```
```python Python theme={null}
client.index('restaurants').search('', {
'filter': '_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])'
})
```
```php PHP theme={null}
$client->index('restaurants')->search('', [
'filter' => '_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])'
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("").filter(new String[] {
"_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])"
}).build();
client.index("restaurants").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('restaurants').search('', { filter: ['_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])'] })
```
```go Go theme={null}
client.Index("restaurants").Search("", &meilisearch.SearchRequest{
Filter: "_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])",
})
```
```csharp C# theme={null}
SearchQuery filters = new SearchQuery()
{
Filter = "_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])"
};
var restaurants = await client.Index("restaurants").SearchAsync("", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("restaurants")
.search()
.with_filter("_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
filter: "_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])"
)
client.index("restaurants").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('restaurants').search(
'',
SearchQuery(
filter:
'_geoBoundingBox([45.494181, 9.214024], [45.449484, 9.179175])',
),
);
```
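`_geoBoundingBox` membership boils down to a pair of range checks against the two corners (a sketch that ignores boxes crossing the 180th meridian):
```python
def in_bounding_box(lat, lng, top_right, bottom_left):
    """Check whether a point lies inside the box; corners are (lat, lng) pairs."""
    return (bottom_left[0] <= lat <= top_right[0]
            and bottom_left[1] <= lng <= top_right[1])

top_right, bottom_left = (45.494181, 9.214024), (45.449484, 9.179175)

# Nàpiz' Milano falls inside the box; Artico's longitude is just west of it
print(in_bounding_box(45.4777599, 9.1967508, top_right, bottom_left))  # True
print(in_bounding_box(45.4632046, 9.1719421, top_right, bottom_left))  # False
```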
A similar query using `_geoPolygon` with vertices enclosing central Milan returns both Milan-based restaurants:
```json theme={null}
[
{
"id": 1,
"name": "Nàpiz' Milano",
"address": "Viale Vittorio Veneto, 30, 20124, Milan, Italy",
"type": "pizza",
"rating": 9,
"_geo": {
"lat": 45.4777599,
"lng": 9.1967508
}
},
{
"id": 3,
"name": "Artico Gelateria Tradizionale",
"address": "Via Dogana, 1, 20123 Milan, Italy",
"type": "ice cream",
"rating": 10,
"_geo": {
"lat": 45.4632046,
"lng": 9.1719421
}
}
]
```
It is also possible to combine `_geoRadius`, `_geoBoundingBox`, and `_geoPolygon` with other filters. We can narrow down our previous search so it only includes pizzerias:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/restaurants/search' \
-H 'Content-type:application/json' \
--data-binary '{ "filter": "_geoRadius(45.472735, 9.184019, 2000) AND type = pizza" }'
```
```javascript JS theme={null}
client.index('restaurants').search('', {
filter: ['_geoRadius(45.472735, 9.184019, 2000) AND type = pizza'],
})
```
```python Python theme={null}
client.index('restaurants').search('', {
'filter': '_geoRadius(45.472735, 9.184019, 2000) AND type = pizza'
})
```
```php PHP theme={null}
$client->index('restaurants')->search('', [
'filter' => '_geoRadius(45.472735, 9.184019, 2000) AND type = pizza'
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("").filter(new String[] {"_geoRadius(45.472735, 9.184019, 2000) AND type = pizza"}).build();
client.index("restaurants").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('restaurants').search('', { filter: '_geoRadius(45.472735, 9.184019, 2000) AND type = pizza' })
```
```go Go theme={null}
resp, err := client.Index("restaurants").Search("", &meilisearch.SearchRequest{
Filter: "_geoRadius(45.472735, 9.184019, 2000) AND type = pizza",
})
```
```csharp C# theme={null}
SearchQuery filters = new SearchQuery()
{
Filter = new string[] { "_geoRadius(45.472735, 9.184019, 2000) AND type = pizza" }
};
var restaurants = await client.Index("restaurants").SearchAsync("", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("restaurants")
.search()
.with_filter("_geoRadius(45.472735, 9.184019, 2000) AND type = pizza")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
filter: "_geoRadius(45.472735, 9.184019, 2000) AND type = pizza"
)
client.index("restaurants").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('restaurants').search(
'',
SearchQuery(
filterExpression: Meili.and([
Meili.geoRadius(
(lat: 45.472735, lng: 9.184019),
2000,
),
Meili.attr('type').eq('pizza'.toMeiliValue())
]),
),
);
```
This query only returns `Nàpiz' Milano`:
```json theme={null}
[
{
"id": 1,
"name": "Nàpiz' Milano",
"address": "Viale Vittorio Veneto, 30, 20124, Milan, Italy",
"type": "pizza",
"rating": 9,
"_geo": {
"lat": 45.4777599,
"lng": 9.1967508
}
}
]
```
`_geo`, `_geoDistance`, and `_geoPoint` are not valid filter rules. Trying to use any of them with the `filter` search parameter will result in an [`invalid_search_filter`](/reference/errors/error_codes#invalid_search_filter) error.
## Sorting results with `_geoPoint`
### Configuration
Before using geosearch for sorting, you must add the `_geo` attribute to the [`sortableAttributes` list](/learn/filtering_and_sorting/sort_search_results):
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/restaurants/settings/sortable-attributes' \
-H 'Content-type:application/json' \
--data-binary '["_geo"]'
```
```javascript JS theme={null}
client.index('restaurants').updateSortableAttributes([
'_geo'
])
```
```python Python theme={null}
client.index('restaurants').update_sortable_attributes([
'_geo'
])
```
```php PHP theme={null}
$client->index('restaurants')->updateSortableAttributes([
'_geo'
]);
```
```java Java theme={null}
client.index("restaurants").updateSortableAttributesSettings(new String[] {"_geo"});
```
```ruby Ruby theme={null}
client.index('restaurants').update_sortable_attributes(['_geo'])
```
```go Go theme={null}
sortableAttributes := []string{
"_geo",
}
client.Index("restaurants").UpdateSortableAttributes(&sortableAttributes)
```
```csharp C# theme={null}
List<string> attributes = new() { "_geo" };
TaskInfo result = await client.Index("restaurants").UpdateSortableAttributesAsync(attributes);
```
```rust Rust theme={null}
let task: TaskInfo = client
.index("restaurants")
.set_sortable_attributes(&["_geo"])
.await
.unwrap();
```
```swift Swift theme={null}
client.index("restaurants").updateSortableAttributes(["_geo"]) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('restaurants').updateSortableAttributes(['_geo']);
```
It is not possible to sort documents based on the `_geojson` attribute.
### Usage
`_geoPoint` takes two floating-point numbers indicating a location's latitude and longitude, followed by the sort direction, `asc` or `desc`:
```
_geoPoint(0.0, 0.0):asc
```
### Examples
The `_geoPoint` sorting function can be used like any other sorting rule. We can order documents based on how close they are to the Eiffel Tower:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/restaurants/search' \
-H 'Content-type:application/json' \
--data-binary '{ "sort": ["_geoPoint(48.8561446,2.2978204):asc"] }'
```
```javascript JS theme={null}
client.index('restaurants').search('', {
sort: ['_geoPoint(48.8561446, 2.2978204):asc'],
})
```
```python Python theme={null}
client.index('restaurants').search('', {
'sort': ['_geoPoint(48.8561446,2.2978204):asc']
})
```
```php PHP theme={null}
$client->index('restaurants')->search('', [
'sort' => ['_geoPoint(48.8561446,2.2978204):asc']
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("").sort(new String[] {"_geoPoint(48.8561446,2.2978204):asc"}).build();
client.index("restaurants").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('restaurants').search('', { sort: ['_geoPoint(48.8561446, 2.2978204):asc'] })
```
```go Go theme={null}
resp, err := client.Index("restaurants").Search("", &meilisearch.SearchRequest{
Sort: []string{
"_geoPoint(48.8561446,2.2978204):asc",
},
})
```
```csharp C# theme={null}
SearchQuery filters = new SearchQuery()
{
Sort = new string[] { "_geoPoint(48.8561446,2.2978204):asc" }
};
var restaurants = await client.Index("restaurants").SearchAsync("", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("restaurants")
.search()
.with_sort(&["_geoPoint(48.8561446, 2.2978204):asc"])
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "",
sort: ["_geoPoint(48.8561446, 2.2978204):asc"]
)
client.index("restaurants").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('restaurants').search(
'', SearchQuery(sort: ['_geoPoint(48.8561446, 2.2978204):asc']));
```
With our restaurants dataset, the results look like this:
```json theme={null}
[
{
"id": 2,
"name": "Bouillon Pigalle",
"address": "22 Bd de Clichy, 75018 Paris, France",
"type": "french",
"rating": 8,
"_geo": {
"lat": 48.8826517,
"lng": 2.3352748
}
},
{
"id": 3,
"name": "Artico Gelateria Tradizionale",
"address": "Via Dogana, 1, 20123 Milan, Italy",
"type": "ice cream",
"rating": 10,
"_geo": {
"lat": 45.4632046,
"lng": 9.1719421
}
},
{
"id": 1,
"name": "Nàpiz' Milano",
"address": "Viale Vittorio Veneto, 30, 20124, Milan, Italy",
"type": "pizza",
"rating": 9,
"_geo": {
"lat": 45.4777599,
"lng": 9.1967508
}
}
]
```
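`_geoPoint(...):asc` orders documents by ascending distance to the given point. The ordering above can be sanity-checked offline with the same haversine formula used for radius filtering (an illustrative sketch):
```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in meters."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

eiffel = (48.8561446, 2.2978204)
restaurants = [
    ("Nàpiz' Milano", 45.4777599, 9.1967508),
    ("Bouillon Pigalle", 48.8826517, 2.3352748),
    ("Artico Gelateria Tradizionale", 45.4632046, 9.1719421),
]

# Sort ascending by distance to the Eiffel Tower, mirroring `_geoPoint(...):asc`
by_distance = sorted(restaurants, key=lambda r: haversine_m(*eiffel, r[1], r[2]))
print(by_distance[0][0])  # Bouillon Pigalle is by far the closest
```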
`_geoPoint` also works when used together with other sorting rules. We can sort restaurants based on their proximity to the Eiffel Tower and their rating:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/restaurants/search' \
-H 'Content-type:application/json' \
--data-binary '{
"sort": [
"_geoPoint(48.8561446,2.2978204):asc",
"rating:desc"
]
}'
```
```javascript JS theme={null}
client.index('restaurants').search('', {
sort: ['_geoPoint(48.8561446, 2.2978204):asc', 'rating:desc'],
})
```
```python Python theme={null}
client.index('restaurants').search('', {
'sort': ['_geoPoint(48.8561446,2.2978204):asc', 'rating:desc']
})
```
```php PHP theme={null}
$client->index('restaurants')->search('', [
'sort' => ['_geoPoint(48.8561446,2.2978204):asc', 'rating:desc']
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("").sort(new String[] {
"_geoPoint(48.8561446,2.2978204):asc",
"rating:desc",
}).build();
client.index("restaurants").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('restaurants').search('', { sort: ['_geoPoint(48.8561446, 2.2978204):asc', 'rating:desc'] })
```
```go Go theme={null}
resp, err := client.Index("restaurants").Search("", &meilisearch.SearchRequest{
Sort: []string{
"_geoPoint(48.8561446,2.2978204):asc",
"rating:desc",
},
})
```
```csharp C# theme={null}
SearchQuery filters = new SearchQuery()
{
Sort = new string[] {
"_geoPoint(48.8561446,2.2978204):asc",
"rating:desc"
}
};
var restaurants = await client.Index("restaurants").SearchAsync("", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("restaurants")
.search()
.with_sort(&["_geoPoint(48.8561446, 2.2978204):asc", "rating:desc"])
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "",
sort: ["_geoPoint(48.8561446, 2.2978204):asc", "rating:desc"]
)
client.index("restaurants").search(searchParameters) { (result: Result<SearchResult, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('restaurants').search(
'',
SearchQuery(
sort: ['_geoPoint(48.8561446, 2.2978204):asc', 'rating:desc']));
```
This returns:
```json theme={null}
[
{
"id": 2,
"name": "Bouillon Pigalle",
"address": "22 Bd de Clichy, 75018 Paris, France",
"type": "french",
"rating": 8,
"_geo": {
"lat": 48.8826517,
"lng": 2.3352748
}
},
{
"id": 3,
"name": "Artico Gelateria Tradizionale",
"address": "Via Dogana, 1, 20123 Milan, Italy",
"type": "ice cream",
"rating": 10,
"_geo": {
"lat": 45.4632046,
"lng": 9.1719421
}
},
{
"id": 1,
"name": "Nàpiz' Milano",
"address": "Viale Vittorio Veneto, 30, 20124, Milan, Italy",
"type": "pizza",
"rating": 9,
"_geo": {
"lat": 45.4777599,
"lng": 9.1967508
}
}
]
```
# Search with facets
Source: https://www.meilisearch.com/docs/learn/filtering_and_sorting/search_with_facet_filters
Faceted search interfaces provide users with a quick way to narrow down search results by selecting categories relevant to their query.
In Meilisearch, facets are a specialized type of filter. This guide shows you how to configure facets and use them when searching a database of books.
## Requirements
* a Meilisearch project
* a command-line terminal
## Configure facet index settings
First, create a new index using this books dataset. Documents in this dataset have the following fields:
```json theme={null}
{
"id": 5,
"title": "Hard Times",
"genres": ["Classics","Fiction", "Victorian", "Literature"],
"publisher": "Penguin Classics",
"language": "English",
"author": "Charles Dickens",
"description": "Hard Times is a novel of social […] ",
"format": "Hardcover",
"rating": 3
}
```
Next, add `genres`, `language`, and `rating` to the list of `filterableAttributes`:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/books/settings/filterable-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"genres", "rating", "language"
]'
```
```javascript JS theme={null}
client.index('books').updateFilterableAttributes(['genres', 'rating', 'language'])
```
```python Python theme={null}
client.index('books').update_filterable_attributes([
'genres',
'rating',
'language'
])
```
```php PHP theme={null}
$client->index('books')->updateFilterableAttributes(['genres', 'rating', 'language']);
```
```java Java theme={null}
client.index("movie_ratings").updateFilterableAttributesSettings(new String[] { "genres", "director", "language" });
```
```ruby Ruby theme={null}
client.index('books').update_filterable_attributes(['genres', 'rating', 'language'])
```
```go Go theme={null}
filterableAttributes := []interface{}{
"genres",
"rating",
"language",
}
client.Index("movie_ratings").UpdateFilterableAttributes(&filterableAttributes)
```
```csharp C# theme={null}
List<string> attributes = new() { "genres", "rating", "language" };
TaskInfo result = await client.Index("books").UpdateFilterableAttributesAsync(attributes);
```
```rust Rust theme={null}
let task: TaskInfo = client
.index("movie_ratings")
.set_filterable_attributes(&["genres", "rating", "language"])
.await
.unwrap();
```
```dart Dart theme={null}
await client
.index('books')
.updateFilterableAttributes(['genres', 'rating', 'language']);
```
You have now configured your index to use these attributes as filters.
## Use facets in a search query
Make a search query setting the `facets` search parameter:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/books/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "classic",
"facets": [
"genres", "rating", "language"
]
}'
```
```javascript JS theme={null}
client.index('books').search('classic', { facets: ['genres', 'rating', 'language'] })
```
```python Python theme={null}
client.index('books').search('classic', {
'facets': ['genres', 'rating', 'language']
})
```
```php PHP theme={null}
$client->index('books')->search('classic', [
'facets' => ['genres', 'rating', 'language']
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("classic").facets(new String[]
{
"genres",
"rating",
"language"
}).build();
client.index("books").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('books').search('classic', {
facets: ['genres', 'rating', 'language']
})
```
```go Go theme={null}
resp, err := client.Index("books").Search("classic", &meilisearch.SearchRequest{
Facets: []string{
"genres",
"rating",
"language",
},
})
```
```csharp C# theme={null}
var sq = new SearchQuery
{
Facets = new string[] { "genres", "rating", "language" }
};
await client.Index("books").SearchAsync("classic", sq);
```
```rust Rust theme={null}
let books = client.index("books");
let results: SearchResults<Book> = SearchQuery::new(&books)
.with_query("classic")
.with_facets(Selectors::Some(&["genres", "rating", "language"]))
.execute()
.await
.unwrap();
```
```dart Dart theme={null}
await client
.index('books')
.search('', SearchQuery(facets: ['genres', 'rating', 'language']));
```
The response returns all books matching the query. It also returns two fields you can use to create a faceted search interface, `facetDistribution` and `facetStats`:
```json theme={null}
{
"hits": [
…
],
…
"facetDistribution": {
"genres": {
"Classics": 6,
…
},
"language": {
"English": 6,
"French": 1,
"Spanish": 1
},
"rating": {
"2.5": 1,
…
}
},
"facetStats": {
"rating": {
"min": 2.5,
"max": 4.7
}
}
}
```
`facetDistribution` lists all facets present in your search results, along with the number of documents returned for each facet.
`facetStats` contains the highest and lowest values for all facets containing numeric values.
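These two fields map directly onto common UI elements: counts next to facet checkboxes, and bounds for a range slider. Below is a minimal sketch of consuming them; the `facetOptions` helper is hypothetical and not part of any Meilisearch SDK.

```javascript
// Hypothetical helper: convert a response's facetDistribution into a
// count-sorted list of options for a facet checkbox UI.
function facetOptions(facetDistribution, facetName) {
  const counts = facetDistribution[facetName] ?? {}
  return Object.entries(counts)
    .map(([value, count]) => ({ value, count }))
    .sort((a, b) => b.count - a.count)
}

// facetStats provides the bounds for a numeric range slider directly.
const response = {
  facetDistribution: { language: { English: 6, French: 1, Spanish: 1 } },
  facetStats: { rating: { min: 2.5, max: 4.7 } }
}
console.log(facetOptions(response.facetDistribution, 'language'))
// → [{ value: 'English', count: 6 }, { value: 'French', count: 1 }, { value: 'Spanish', count: 1 }]
```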
### Sorting facet values
By default, all facet values are sorted in ascending alphanumeric order. You can change this using the `sortFacetValuesBy` property of the [`faceting` index settings](/reference/api/settings/get-faceting):
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/books/settings/faceting' \
-H 'Content-Type: application/json' \
--data-binary '{
"sortFacetValuesBy": {
"genres": "count"
}
}'
```
```javascript JS theme={null}
client.index('books').updateFaceting({
sortFacetValuesBy: {
genres: 'count'
}
})
```
```python Python theme={null}
client.index('books').update_faceting_settings({ 'sortFacetValuesBy': { 'genres': 'count' } })
```
```php PHP theme={null}
$client->index('books')->updateFaceting(['sortFacetValuesBy' => ['genres' => 'count']]);
```
```java Java theme={null}
Faceting newFaceting = new Faceting();
HashMap<String, FacetSortValue> facetSortValues = new HashMap<>();
facetSortValues.put("genres", FacetSortValue.COUNT);
newFaceting.setSortFacetValuesBy(facetSortValues);
client.index("books").updateFacetingSettings(newFaceting);
```
```ruby Ruby theme={null}
client.index('books').update_faceting(
sort_facet_values_by: {
genres: 'count'
}
)
```
```go Go theme={null}
client.Index("books").UpdateFaceting(&meilisearch.Faceting{
SortFacetValuesBy: map[string]meilisearch.SortFacetType{
"genres": meilisearch.SortFacetTypeCount,
}
})
```
```csharp C# theme={null}
var newFaceting = new Faceting
{
SortFacetValuesBy = new Dictionary<string, SortFacetValuesByType>
{
["genres"] = SortFacetValuesByType.Count
}
};
await client.Index("books").UpdateFacetingAsync(newFaceting);
```
```rust Rust theme={null}
let mut facet_sort_setting = BTreeMap::new();
facet_sort_setting.insert("genres".to_string(), FacetSortValue::Count);
let faceting = FacetingSettings {
max_values_per_facet: 100,
sort_facet_values_by: Some(facet_sort_setting),
};
let res = client.index("books")
.set_faceting(&faceting)
.await
.unwrap();
```
```dart Dart theme={null}
await client.index('books').updateFaceting(
Faceting(
sortFacetValuesBy: {
'genres': FacetingSortTypes.count,
},
),
);
```
The above code sample sorts the `genres` facet by descending value count.
Repeating the previous query using the new settings will result in a different order in `facetDistribution`:
```json theme={null}
{
…
"facetDistribution": {
"genres": {
"Fiction": 8,
"Literature": 7,
"Classics": 6,
"Novel": 2,
"Horror": 2,
"Fantasy": 2,
"Victorian": 2,
"Vampires": 1,
"Tragedy": 1,
"Satire": 1,
"Romance": 1,
"Historical Fiction": 1,
"Coming-of-Age": 1,
"Comedy": 1
},
…
}
}
```
## Searching facet values
You can also search for facet values with the [facet search endpoint](/reference/api/facet-search/search-in-facets):
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/books/facet-search' \
-H 'Content-Type: application/json' \
--data-binary '{
"facetQuery": "c",
"facetName": "genres"
}'
```
```javascript JS theme={null}
client.index('books').searchForFacetValues({
facetQuery: 'c',
facetName: 'genres'
})
```
```python Python theme={null}
client.index('books').facet_search('genres', 'c')
```
```php PHP theme={null}
$client->index('books')->facetSearch(
(new FacetSearchQuery())
->setFacetQuery('c')
->setFacetName('genres')
);
```
```java Java theme={null}
FacetSearchRequest fsr = FacetSearchRequest.builder().facetName("genres").facetQuery("c").build();
client.index("books").facetSearch(fsr);
```
```ruby Ruby theme={null}
client.index('books').facet_search('genres', 'c')
```
```go Go theme={null}
client.Index("books").FacetSearch(&meilisearch.FacetSearchRequest{
FacetQuery: "c",
FacetName: "genres",
ExhaustiveFacetCount: true,
})
```
```csharp C# theme={null}
var query = new SearchFacetsQuery()
{
FacetQuery = "c",
ExhaustiveFacetCount: true
};
await client.Index("books").FacetSearchAsync("genres", query);
```
```rust Rust theme={null}
let res = client.index("books")
.facet_search("genres")
.with_facet_query("c")
.execute()
.await
.unwrap();
```
```dart Dart theme={null}
await client.index('books').facetSearch(
FacetSearchQuery(
facetQuery: 'c',
facetName: 'genres',
),
);
```
The code samples above search the `genres` facet for values starting with `c`.
The response contains a `facetHits` array listing all matching facet values, together with the number of documents containing each value:
```json theme={null}
{
…
"facetHits": [
{
"value": "Children's Literature",
"count": 1
},
{
"value": "Classics",
"count": 6
},
{
"value": "Comedy",
"count": 2
},
{
"value": "Coming-of-Age",
"count": 1
}
],
"facetQuery": "c",
…
}
```
You can further refine results using the `q`, `filter`, and `matchingStrategy` parameters. [Learn more about them in the API reference.](/reference/api/facet-search/search-in-facets)
# Sort search results
Source: https://www.meilisearch.com/docs/learn/filtering_and_sorting/sort_search_results
By default, Meilisearch sorts results according to their relevancy. You can alter this behavior so users can decide at search time results they want to see first.
By default, Meilisearch focuses on ordering results according to their relevancy. You can alter this sorting behavior so users can decide at search time what type of results they want to see first.
This can be useful in many situations, such as when a user wants to see the cheapest products available in a webshop.
Sorting at search time can be particularly effective when combined with placeholder searches, that is, searches where `q` is empty or omitted.
## Configure Meilisearch for sorting at search time
To allow your users to sort results at search time you must:
1. Decide which attributes you want to use for sorting
2. Add those attributes to the `sortableAttributes` index setting
3. Update Meilisearch's [ranking rules](/learn/relevancy/relevancy) (optional)
Meilisearch sorts strings in lexicographic order based on their byte values. For example, `á`, which has a value of 225, will be sorted after `z`, which has a value of 122.
Uppercase letters are sorted as if they were lowercase. They will still appear uppercase in search results.
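You can check the byte values involved in any language that exposes code points; for example, in JavaScript:

```javascript
// Meilisearch compares strings by byte value, so 'á' (code point 225)
// sorts after 'z' (code point 122), even though it comes earlier
// alphabetically in many languages.
console.log('á'.charCodeAt(0)) // 225
console.log('z'.charCodeAt(0)) // 122
// Case folding means 'Z' is compared as if it were 'z'.
console.log('Z'.toLowerCase().charCodeAt(0)) // 122
```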
### Add attributes to `sortableAttributes`
Meilisearch allows you to sort results based on document fields. Only fields containing numbers, strings, arrays of numeric values, and arrays of string values can be used for sorting.
After you have decided which fields you will allow your users to sort on, you must add their attributes to the [`sortableAttributes` index setting](/reference/api/settings/get-sortableattributes).
If a field has values of different types across documents, Meilisearch will give precedence to numbers over strings. This means documents with numeric field values will be ranked higher than those with string values.
This can lead to unexpected behavior when sorting. For optimal user experience, only sort based on fields containing the same type of value.
#### Example
Suppose you have a collection of books containing the following fields:
```json theme={null}
[
{
"id": 1,
"title": "Solaris",
"author": "Stanislaw Lem",
"genres": [
"science fiction"
],
"rating": {
"critics": 95,
"users": 87
},
"price": 5.00
},
…
]
```
If you are using this dataset in a webshop, you might want to allow your users to sort on `author` and `price`:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/books/settings/sortable-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"author",
"price"
]'
```
```javascript JS theme={null}
client.index('books').updateSortableAttributes([
'author',
'price'
])
```
```python Python theme={null}
client.index('books').update_sortable_attributes([
'author',
'price'
])
```
```php PHP theme={null}
$client->index('books')->updateSortableAttributes([
'author',
'price'
]);
```
```java Java theme={null}
client.index("books").updateSortableAttributesSettings(new String[] {"price", "author"});
```
```ruby Ruby theme={null}
client.index('books').update_sortable_attributes(['author', 'price'])
```
```go Go theme={null}
sortableAttributes := []string{
"author",
"price",
}
client.Index("books").UpdateSortableAttributes(&sortableAttributes)
```
```csharp C# theme={null}
await client.Index("books").UpdateSortableAttributesAsync(new [] { "price", "author" });
```
```rust Rust theme={null}
let sortable_attributes = [
"author",
"price"
];
let task: TaskInfo = client
.index("books")
.set_sortable_attributes(&sortable_attributes)
.await
.unwrap();
```
```swift Swift theme={null}
client.index("books").updateSortableAttributes(["price", "author"]) { (result: Result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('books').updateSortableAttributes(['author', 'price']);
```
### Customize ranking rule order (optional)
When users sort results at search time, [Meilisearch's ranking rules](/learn/relevancy/relevancy) are set up so the top matches emphasize relevant results over sorting order. You might need to alter this behavior depending on your application's needs.
This is the default configuration of Meilisearch's ranking rules:
```json theme={null}
[
"words",
"typo",
"proximity",
"attributeRank",
"sort",
"wordPosition",
"exactness"
]
```
`"sort"` is in fifth place. This means it acts as a tie-breaker rule: Meilisearch will first place results closely matching search terms at the top of the returned documents list and only then will apply the `"sort"` parameters as requested by the user. In other words, by default Meilisearch provides a very relevant sorting.
Placing `"sort"` ranking rule higher in the list will emphasize exhaustive sorting over relevant sorting: your results will more closely follow the sorting order your user chose, but will not be as relevant.
Sorting applies equally to all documents. Meilisearch does not offer native support for promoting, pinning, and boosting specific documents so they are displayed more prominently than other search results. Consult these Meilisearch blog articles for workarounds on [implementing promoted search results with React InstantSearch](https://blog.meilisearch.com/promoted-search-results-with-react-instantsearch) and [document boosting](https://blog.meilisearch.com/document-boosting).
#### Example
If your users care more about finding cheaper books than they care about finding specific matches to their queries, you can place `sort` much higher in the ranking rules:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/books/settings/ranking-rules' \
-H 'Content-Type: application/json' \
--data-binary '[
"words",
"sort",
"typo",
"proximity",
"attributeRank",
"wordPosition",
"exactness"
]'
```
```javascript JS theme={null}
client.index('books').updateRankingRules([
'words',
'sort',
'typo',
'proximity',
'attribute',
'exactness'
])
```
```python Python theme={null}
client.index('books').update_ranking_rules([
'words',
'sort',
'typo',
'proximity',
'attribute',
'exactness'
])
```
```php PHP theme={null}
$client->index('books')->updateRankingRules([
'words',
'sort',
'typo',
'proximity',
'attribute',
'exactness'
]);
```
```java Java theme={null}
Settings settings = new Settings();
settings.setRankingRules(new String[]
{
"words",
"sort",
"typo",
"proximity",
"attribute",
"exactness"
});
client.index("books").updateSettings(settings);
```
```ruby Ruby theme={null}
client.index('books').update_ranking_rules([
'words',
'sort',
'typo',
'proximity',
'attribute',
'exactness'
])
```
```go Go theme={null}
rankingRules := []string{
"words",
"sort",
"typo",
"proximity",
"attribute",
"exactness",
}
client.Index("books").UpdateRankingRules(&rankingRules)
```
```csharp C# theme={null}
await client.Index("books").UpdateRankingRulesAsync(new[]
{
"words",
"sort",
"typo",
"proximity",
"attribute",
"exactness"
});
```
```rust Rust theme={null}
let ranking_rules = [
"words",
"sort",
"typo",
"proximity",
"attribute",
"exactness"
];
let task: TaskInfo = client
.index("books")
.set_ranking_rules(&ranking_rules)
.await
.unwrap();
```
```swift Swift theme={null}
let rankingRules: [String] = [
"words",
"sort",
"typo",
"proximity",
"attribute",
"exactness"
]
client.index("books").updateRankingRules(rankingRules) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('books').updateRankingRules(
['words', 'sort', 'typo', 'proximity', 'attribute', 'exactness']);
```
## Sort results at search time
After configuring `sortableAttributes`, you can use the [`sort` search parameter](/reference/api/search/search-with-post#body-sort) to control the sorting order of your search results.
`sort` expects a list of attributes that have been added to the `sortableAttributes` list.
Attributes must be given as `attribute:sorting_order`. In other words, each attribute must be followed by a colon (`:`) and a sorting order: either ascending (`asc`) or descending (`desc`).
When using the `POST` route, `sort` expects an array of strings:
```json theme={null}
"sort": [
"price:asc",
"author:desc"
]
```
When using the `GET` route, `sort` expects a comma-separated string:
```
sort="price:desc,author:asc"
```
The order of `sort` values matters: the higher an attribute is in the search parameter value, the more Meilisearch will prioritize it over attributes placed lower. In our example, if multiple documents have the same value for `price`, Meilisearch will decide the order between these similarly priced documents based on their `author`.
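Meilisearch applies this multi-attribute sort server-side, but the tie-breaking logic can be illustrated with a plain client-side comparator (a sketch with made-up documents):

```javascript
// Emulating sort: ["price:asc", "author:desc"] — compare by the first
// attribute, and fall back to the next one only on a tie.
const docs = [
  { title: 'A', price: 10, author: 'Butler' },
  { title: 'B', price: 5, author: 'Lem' },
  { title: 'C', price: 10, author: 'Dickens' }
]
docs.sort((a, b) =>
  (a.price - b.price) ||            // price:asc
  b.author.localeCompare(a.author)  // author:desc breaks the tie
)
console.log(docs.map(d => d.title)) // → ['B', 'C', 'A']
```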
### Example
Suppose you are searching for books in a webshop and want to see the cheapest science fiction titles. This query searches for `"science fiction"` books sorted from cheapest to most expensive:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/books/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "science fiction",
"sort": ["price:asc"]
}'
```
```javascript JS theme={null}
client.index('books').search('science fiction', {
sort: ['price:asc'],
})
```
```python Python theme={null}
client.index('books').search('science fiction', {
'sort': ['price:asc']
})
```
```php PHP theme={null}
$client->index('books')->search('science fiction', ['sort' => ['price:asc']]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("science fiction").sort(new String[] {"price:asc"}).build();
client.index("books").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('books').search('science fiction', { sort: ['price:asc'] })
```
```go Go theme={null}
resp, err := client.Index("books").Search("science fiction", &meilisearch.SearchRequest{
Sort: []string{
"price:asc",
},
})
```
```csharp C# theme={null}
var sq = new SearchQuery
{
Sort = new[] { "price:asc" },
};
await client.Index("books").SearchAsync("science fiction", sq);
```
```rust Rust theme={null}
let results: SearchResults<Book> = client
.index("books")
.search()
.with_query("science fiction")
.with_sort(&["price:asc"])
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "science fiction",
sort: ["price:asc"]
)
client.index("books").search(searchParameters) { (result: Result, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client
.index('books')
.search('science fiction', SearchQuery(sort: ['price:asc']));
```
With our example dataset, the results look like this:
```json theme={null}
[
{
"id": 1,
"title": "Solaris",
"author": "Stanislaw Lem",
"genres": [
"science fiction"
],
"rating": {
"critics": 95,
"users": 87
},
"price": 5.00
},
{
"id": 2,
"title": "The Parable of the Sower",
"author": "Octavia E. Butler",
"genres": [
"science fiction"
],
"rating": {
"critics": 90,
"users": 92
},
"price": 10.00
}
]
```
It is common to search books based on an author's name. `sort` can help group results from the same author. This query only returns books matching the query term `"butler"` and groups results according to their authors:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/books/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "butler",
"sort": ["author:desc"]
}'
```
```javascript JS theme={null}
client.index('books').search('butler', {
sort: ['author:desc'],
})
```
```python Python theme={null}
client.index('books').search('butler', {
'sort': ['author:desc']
})
```
```php PHP theme={null}
$client->index('books')->search('butler', ['sort' => ['author:desc']]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("butler").sort(new String[] {"author:desc"}).build();
client.index("books").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('books').search('butler', { sort: ['author:desc'] })
```
```go Go theme={null}
resp, err := client.Index("books").Search("butler", &meilisearch.SearchRequest{
Sort: []string{
"author:desc",
},
})
```
```csharp C# theme={null}
var sq = new SearchQuery
{
Sort = new[] { "author:desc" },
};
await client.Index("books").SearchAsync("butler", sq);
```
```rust Rust theme={null}
let results: SearchResults<Book> = client
.index("books")
.search()
.with_query("butler")
.with_sort(&["author:desc"])
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "butler",
sort: ["author:desc"]
)
client.index("books").search(searchParameters) { (result: Result, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client
.index('books')
.search('butler', SearchQuery(sort: ['author:desc']));
```
```json theme={null}
[
{
"id": 2,
"title": "The Parable of the Sower",
"author": "Octavia E. Butler",
"genres": [
"science fiction"
],
"rating": {
"critics": 90,
"users": 92
},
"price": 10.00
},
{
"id": 5,
"title": "Wild Seed",
"author": "Octavia E. Butler",
"genres": [
"fantasy"
],
"rating": {
"critics": 84,
"users": 80
},
"price": 5.00
},
{
"id": 4,
"title": "Gender Trouble",
"author": "Judith Butler",
"genres": [
"feminism",
"philosophy"
],
"rating": {
"critics": 86,
"users": 73
},
"price": 10.00
}
]
```
### Sort by nested fields
Use dot notation to sort results based on a document's nested fields. The following query sorts returned documents by their user review scores:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/books/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "science fiction",
"sort": ["rating.users:asc"]
}'
```
```javascript JS theme={null}
client.index('books').search('science fiction', {
'sort': ['rating.users:asc'],
})
```
```python Python theme={null}
client.index('books').search('science fiction', {
'sort': ['rating.users:asc']
})
```
```php PHP theme={null}
$client->index('books')->search('science fiction', ['sort' => ['rating.users:asc']]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("science fiction").sort(new String[] {"rating.users:asc"}).build();
client.index("books").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('books').search('science fiction', { sort: ['rating.users:asc'] })
```
```go Go theme={null}
resp, err := client.Index("books").Search("science fiction", &meilisearch.SearchRequest{
Sort: []string{
"rating.users:asc",
},
})
```
```csharp C# theme={null}
SearchQuery sort = new SearchQuery() { Sort = new string[] { "rating.users:asc" }};
await client.Index("books").SearchAsync("science fiction", sort);
```
```rust Rust theme={null}
let results: SearchResults<Book> = client
.index("books")
.search()
.with_query("science fiction")
.with_sort(&["rating.users:asc"])
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "science fiction",
sort: ["rating.users:asc"]
)
client.index("books").search(searchParameters) { (result: Result, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client
.index('books')
.search('science fiction', SearchQuery(sort: ['rating.users:asc']));
```
## Sorting and custom ranking rules
There is a lot of overlap between sorting and configuring [custom ranking rules](/learn/relevancy/custom_ranking_rules), as both can greatly influence which results a user will see first.
Sorting is most useful when you want your users to be able to alter the order of returned results at query time. For example, webshop users might want to order results by price depending on what they are searching and to change whether they see the most expensive or the cheapest products first.
Custom ranking rules, instead, establish a default sorting rule that is enforced in every search. This approach can be useful when you want to promote certain results above all others, regardless of a user's preferences. For example, you might want a webshop to always feature discounted products first, no matter what a user is searching for.
## Example application
Take a look at our demos for examples of how to implement sorting:
* **Ecommerce demo**: [preview](https://ecommerce.meilisearch.com/) • [GitHub repository](https://github.com/meilisearch/ecommerce-demo/)
* **CRM SaaS demo**: [preview](https://saas.meilisearch.com/) • [GitHub repository](https://github.com/meilisearch/saas-demo/)
# Filtering and sorting by date
Source: https://www.meilisearch.com/docs/learn/filtering_and_sorting/working_with_dates
Learn how to index documents with chronological data, and how to sort and filter search results based on time.
In this guide, you will learn about Meilisearch's approach to date and time values, how to prepare your dataset for indexing, and how to chronologically sort and filter search results.
## Preparing your documents
To filter and sort search results chronologically, your documents must have at least one field containing a [UNIX timestamp](https://kb.narrative.io/what-is-unix-time). You may also use a string with a date in a format that can be sorted lexicographically, such as `"2025-01-13"`.
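If your source data stores dates as strings, convert them to timestamps before indexing. For example, in JavaScript (assuming dates are given in UTC):

```javascript
// Date.parse returns milliseconds since the UNIX epoch; the filter
// examples in this guide use seconds, so divide by 1000.
const releaseTimestamp = Math.floor(Date.parse('2018-10-07T00:00:00Z') / 1000)
console.log(releaseTimestamp) // 1538870400
```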
As an example, consider a database of video games. In this dataset, the release year is formatted as a timestamp:
```json theme={null}
[
{
"id": 0,
"title": "Return of the Obra Dinn",
"genre": "adventure",
"release_timestamp": 1538949600
},
{
"id": 1,
"title": "The Excavation of Hob's Barrow",
"genre": "adventure",
"release_timestamp": 1664316000
},
{
"id": 2,
"title": "Bayonetta 2",
"genre": "action",
"release_timestamp": 1411164000
}
]
```
Once all documents in your dataset have a date field, [index your data](/reference/api/documents/add-or-replace-documents) as usual. The example below adds a videogame dataset to a `games` index:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/games/documents' \
-H 'Content-Type: application/json' \
--data-binary @games.json
```
```javascript JS theme={null}
const games = require('./games.json')
client.index('games').addDocuments(games).then((res) => console.log(res))
```
```python Python theme={null}
import json
json_file = open('./games.json', encoding='utf-8')
games = json.load(json_file)
client.index('games').add_documents(games)
```
```php PHP theme={null}
$gamesJson = file_get_contents('games.json');
$games = json_decode($gamesJson);
$client->index('games')->addDocuments($games);
```
```java Java theme={null}
import com.meilisearch.sdk;
import org.json.JSONArray;
import java.nio.file.Files;
import java.nio.file.Path;
Path fileName = Path.of("games.json");
String gamesJson = Files.readString(fileName);
Index index = client.index("games");
index.addDocuments(gamesJson);
```
```ruby Ruby theme={null}
require 'json'
games = JSON.parse(File.read('games.json'))
client.index('games').add_documents(games)
```
```go Go theme={null}
jsonFile, _ := os.Open("games.json")
defer jsonFile.Close()
byteValue, _ := io.ReadAll(jsonFile)
var games []map[string]interface{}
json.Unmarshal(byteValue, &games)
client.Index("games").AddDocuments(games, nil)
```
```csharp C# theme={null}
string jsonString = await File.ReadAllTextAsync("games.json");
var games = JsonSerializer.Deserialize<IEnumerable<Game>>(jsonString, options);
var index = client.Index("games");
await index.AddDocumentsAsync(games);
```
```rust Rust theme={null}
let mut file = File::open("games.json")
.unwrap();
let mut content = String::new();
file
.read_to_string(&mut content)
.unwrap();
let docs: Vec<Game> = serde_json::from_str(&content)
.unwrap();
client
.index("games")
.add_documents(&docs, None)
.await
.unwrap();
```
```swift Swift theme={null}
let path = Bundle.main.url(forResource: "games", withExtension: "json")!
let documents: Data = try Data(contentsOf: path)
client.index("games").addDocuments(documents: documents) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
//import 'dart:io';
//import 'dart:convert';
final json = await File('games.json').readAsString();
await client.index('games').addDocumentsJson(json);
```
## Filtering by date
To filter search results based on their timestamp, add your document's timestamp field to the list of [`filterableAttributes`](/reference/api/settings/update-filterableattributes):
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/games/settings/filterable-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"release_timestamp"
]'
```
```javascript JS theme={null}
client.index('games').updateFilterableAttributes(['release_timestamp'])
```
```python Python theme={null}
client.index('games').update_filterable_attributes(['release_timestamp'])
```
```php PHP theme={null}
$client->index('games')->updateFilterableAttributes(['release_timestamp']);
```
```java Java theme={null}
client.index("movies").updateFilterableAttributesSettings(
new String[] { "release_timestamp" });
```
```ruby Ruby theme={null}
client.index('games').update_filterable_attributes(['release_timestamp'])
```
```go Go theme={null}
filterableAttributes := []interface{}{"release_timestamp"}
client.Index("games").UpdateFilterableAttributes(&filterableAttributes)
```
```csharp C# theme={null}
await client.Index("games").UpdateFilterableAttributesAsync(new string[] { "release_timestamp" });
```
```rust Rust theme={null}
let settings = Settings::new()
.with_filterable_attributes(["release_timestamp"]);
let task: TaskInfo = client
.index("games")
.set_settings(&settings)
.await
.unwrap();
```
```swift Swift theme={null}
client.index("games").updateFilterableAttributes(["release_timestamp"]) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client
.index('games')
.updateFilterableAttributes(['release_timestamp']);
```
Once you have configured `filterableAttributes`, you can filter search results by date. The following query only returns games released between 2018 and 2022:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/games/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "",
"filter": "release_timestamp >= 1514761200 AND release_timestamp < 1672527600"
}'
```
```javascript JS theme={null}
client.index('games').search('', {
filter: 'release_timestamp >= 1514761200 AND release_timestamp < 1672527600'
})
```
```python Python theme={null}
client.index('games').search('', {
'filter': 'release_timestamp >= 1514761200 AND release_timestamp < 1672527600'
})
```
```php PHP theme={null}
$client->index('games')->search('', [
'filter' => ['release_timestamp >= 1514761200 AND release_timestamp < 1672527600']
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("").filter(new String[] {"release_timestamp >= 1514761200 AND release_timestamp < 1672527600"}).build();
client.index("games").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('games').search('', {
filter: 'release_timestamp >= 1514761200 AND release_timestamp < 1672527600'
})
```
```go Go theme={null}
client.Index("games").Search("", &meilisearch.SearchRequest{
Filter: "release_timestamp >= 1514761200 AND release_timestamp < 1672527600",
})
```
```csharp C# theme={null}
var filters = new SearchQuery() { Filter = "release_timestamp >= 1514761200 AND release_timestamp < 1672527600" };
var games = await client.Index("games").SearchAsync("", filters);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("games")
.search()
.with_filter("release_timestamp >= 1514761200 AND release_timestamp < 1672527600")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "",
filter: "release_timestamp >= 1514761200 AND release_timestamp < 1672527600"
)
client.index("games").search(searchParameters) { (result: Result<Searchable<Game>, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('games').search(
'',
SearchQuery(
filterExpression: Meili.and([
Meili.gte(
'release_timestamp'.toMeiliAttribute(),
Meili.value(DateTime(2017, 12, 31, 23, 0)),
),
Meili.lt(
'release_timestamp'.toMeiliAttribute(),
Meili.value(DateTime(2022, 12, 31, 23, 0)),
),
]),
),
);
```
## Sorting by date
To sort search results chronologically, add your document's timestamp field to the list of [`sortableAttributes`](/reference/api/settings/update-sortableattributes):
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/games/settings/sortable-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"release_timestamp"
]'
```
```javascript JS theme={null}
client.index('games').updateSortableAttributes(['release_timestamp'])
```
```python Python theme={null}
client.index('games').update_sortable_attributes(['release_timestamp'])
```
```php PHP theme={null}
$client->index('games')->updateSortableAttributes(['release_timestamp']);
```
```java Java theme={null}
Settings settings = new Settings();
settings.setSortableAttributes(new String[] {"release_timestamp"});
client.index("games").updateSettings(settings);
```
```ruby Ruby theme={null}
client.index('games').update_sortable_attributes(['release_timestamp'])
```
```go Go theme={null}
sortableAttributes := []string{"release_timestamp"}
client.Index("games").UpdateSortableAttributes(&sortableAttributes)
```
```csharp C# theme={null}
await client.Index("games").UpdateSortableAttributesAsync(new string[] { "release_timestamp" });
```
```rust Rust theme={null}
let settings = Settings::new()
.with_sortable_attributes(["release_timestamp"]);
let task: TaskInfo = client
.index("games")
.set_settings(&settings)
.await
.unwrap();
```
```swift Swift theme={null}
client.index("games").updateSortableAttributes(["release_timestamp"]) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client
.index('games')
.updateSortableAttributes(['release_timestamp']);
```
Once you have configured `sortableAttributes`, you can sort your search results based on their timestamp. The following query returns all games sorted from most recent to oldest:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/games/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "",
"sort": ["release_timestamp:desc"]
}'
```
```javascript JS theme={null}
client.index('games').search('', {
sort: ['release_timestamp:desc'],
})
```
```python Python theme={null}
client.index('games').search('', {
'sort': ['release_timestamp:desc']
})
```
```php PHP theme={null}
$client->index('games')->search('', ['sort' => ['release_timestamp:desc']]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("").sort(new String[] {"release_timestamp:desc"}).build();
client.index("games").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('games').search('', sort: ['release_timestamp:desc'])
```
```go Go theme={null}
client.Index("games").Search("", &meilisearch.SearchRequest{
Sort: []string{
"release_timestamp:desc",
},
})
```
```csharp C# theme={null}
SearchQuery sort = new SearchQuery() { Sort = new string[] { "release_timestamp:desc" }};
await client.Index("games").SearchAsync("", sort);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("games")
.search()
.with_sort(["release_timestamp:desc"])
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(
query: "",
sort: ["release_timestamp:desc"]
)
client.index("games").search(searchParameters) { (result: Result<Searchable<Game>, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client
.index('games')
.search('', SearchQuery(sort: ['release_timestamp:desc']));
```
# Getting started with Meilisearch Cloud
Source: https://www.meilisearch.com/docs/learn/getting_started/cloud_quick_start
Learn how to create your first Meilisearch Cloud project.
This tutorial walks you through setting up [Meilisearch Cloud](https://meilisearch.com/cloud), creating a project and an index, adding documents to it, and performing your first search with the default web interface.
You need a Meilisearch Cloud account to follow along. If you don't have one, register for a 14-day free trial account at [https://cloud.meilisearch.com/register](https://cloud.meilisearch.com/register?utm_campaign=oss\&utm_source=docs\&utm_medium=cloud-quick-start).
## Creating a project
To use Meilisearch Cloud, you must first create a project. Projects act as containers for indexes, tasks, billing, and other information related to Meilisearch Cloud.
Click the "New project" button on the top menu. If you have a free trial account and this is your first project, the button will read "Start free trial" instead.
Name your project `meilisearch-quick-start` and select the region closest to you, then click on "Create project".
If you are not using a free trial account, you must also choose a billing plan based on the size of your dataset and number of searches per month.
Creating your project might take a few minutes. Check the project list to follow its status. Once the project is ready, click on its name to go to the project overview page.
## Creating an index and adding documents
After creating your project, you must index the data you want to search. Meilisearch stores and processes data you add to it in indexes. A single project may contain multiple indexes.
First, click on the indexes tab in the project page menu.
This leads you to the index listing. Click on "New index".
Write `movies` in the name field and click on "Create Index".
The final step in creating an index is to add data to it. Choose "File upload".
Meilisearch Cloud will ask you for your dataset. To follow along, use this list of movies. Download the file to your computer, drag and drop it into the indicated area, then click on "Import documents".
Meilisearch Cloud will index your documents. This may take a moment. Click on "See index list" and wait. Once it is done, click on "Settings" to visit the index overview.
## Searching
With all data uploaded and processed, the last step is to run a few test searches to confirm Meilisearch is running as expected.
Click on the project name on the breadcrumb menu to return to the project overview.
Meilisearch Cloud comes with a search preview interface. Click on "Search preview" to access it.
Finally, try searching for a few movies, like "Solaris".
If you can see the results coming in as you type, congratulations: you now know all the basic steps to using Meilisearch Cloud.
## What's next
This tutorial taught you how to use Meilisearch Cloud's interface to create a project, add an index to it, and use the search preview interface.
In most real-life settings, you will be creating your own search interface and retrieving results through Meilisearch's API. To learn how to add documents and search using the command-line or an SDK in your preferred language, check out the [Meilisearch quick start](/learn/self_hosted/getting_started_with_self_hosted_meilisearch).
# Documents
Source: https://www.meilisearch.com/docs/learn/getting_started/documents
Documents are the individual items that make up a dataset. Each document is an object composed of one or more fields.
A document is an object composed of one or more fields. Each field consists of an **attribute** and its associated **value**. Documents function as containers for organizing data and are the basic building blocks of a Meilisearch database. To search for a document, you must first add it to an [index](/learn/getting_started/indexes).
Indexes are independent of one another: if two indexes contain the exact same document, each copy is treated as a separate document. Depending on each [index's settings](/reference/api/settings/list-all-settings), the two copies might even take up different amounts of space.
## Structure
### Important terms
* **Document**: an object which contains data in the form of one or more fields
* **[Field](#fields)**: a set of two data items that are linked together: an attribute and a value
* **Attribute**: the first part of a field. Acts as a name or description for its associated value
* **Value**: the second part of a field, consisting of data of any valid JSON type
* **[Primary Field](#primary-field)**: a special field that is mandatory in all documents. It contains the primary key and document identifier
## Fields
A **field** is a set of two data items linked together: an attribute and a value. Documents are made up of fields.
An **attribute** is a case-sensitive string that functions as a field's name and allows you to store, access, and describe data.
That data is the field's **value**. Every field has a data type dictated by its value. Every value must be a valid [JSON data type](https://www.w3schools.com/js/js_json_datatypes.asp).
If the value is a string, it **[can contain at most 65535 positions](/learn/resources/known_limitations#maximum-number-of-words-per-attribute)**. Words exceeding the 65535 position limit will be ignored.
If a field contains an object, Meilisearch flattens it during indexing using dot notation and brings the object's keys and values to the root level of the document itself. This flattened object is only an intermediary representation—you will get the original structure upon search. You can read more about this in our [dedicated guide](/learn/engine/datatypes#objects).
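As an illustration, the flattening step behaves like this small sketch. The `flatten` helper is illustrative only and is not part of any Meilisearch SDK:

```python theme={null}
def flatten(document: dict, prefix: str = "") -> dict:
    """Flatten nested objects into dot-notation keys, mimicking the
    intermediary representation Meilisearch builds during indexing."""
    flat = {}
    for key, value in document.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the dotted path
            flat.update(flatten(value, prefix=f"{full_key}."))
        else:
            flat[full_key] = value
    return flat

doc = {"id": 1, "patient": {"name": "hernandez", "age": 87}}
print(flatten(doc))
# {'id': 1, 'patient.name': 'hernandez', 'patient.age': 87}
```

Remember that this flattened form is internal to indexing: search responses return the document with its original nested structure.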
With [ranking rules](/learn/relevancy/ranking_rules), you can decide which fields are more relevant than others. For example, you may decide recent movies should be more relevant than older ones. You can also designate certain fields as displayed or searchable.
Some features require Meilisearch to reserve attributes. For example, to use [geosearch functionality](/learn/filtering_and_sorting/geosearch) your documents must include a `_geo` field.
Reserved attributes are always prefixed with an underscore (`_`).
### Displayed and searchable fields
By default, all fields in a document are both displayed and searchable. Displayed fields are contained in each matching document, while searchable fields are searched for matching query words.
You can modify this behavior using the [update settings endpoint](/reference/api/settings/update-all-settings), or the respective update endpoints for [displayed attributes](/reference/api/settings/update-displayedattributes), and [searchable attributes](/reference/api/settings/update-searchableattributes) so that a field is:
* Searchable but not displayed
* Displayed but not searchable
* Neither displayed nor searchable
In the latter case, the field will be completely ignored during search. However, it will still be [stored](/learn/relevancy/displayed_searchable_attributes#data-storing) in the document.
To learn more, refer to our [displayed and searchable attributes guide](/learn/relevancy/displayed_searchable_attributes).
## Primary field
The primary field is a special field that must be present in all documents. Its attribute is the [primary key](/learn/getting_started/primary_key#primary-field) and its value is the [document id](/learn/getting_started/primary_key#document-id). If you try to [index a document](/learn/self_hosted/getting_started_with_self_hosted_meilisearch#add-documents) that's missing a primary key or possessing the wrong primary key for a given index, it will cause an error and no documents will be added.
To learn more, refer to the [primary key explanation](/learn/getting_started/primary_key).
## Upload
By default, Meilisearch limits the size of all payloads—and therefore document uploads—to 100MB. You can [change the payload size limit](/learn/self_hosted/configure_meilisearch_at_launch#payload-limit-size) at launch using the `http-payload-size-limit` option.
Meilisearch uses a lot of RAM when indexing documents. Be aware of your [RAM availability](/learn/resources/faq#what-are-the-recommended-requirements-for-hosting-a-meilisearch-instance) as you increase your batch size as this could cause Meilisearch to crash.
When using the [add new documents endpoint](/reference/api/documents/add-or-update-documents), ensure:
* The payload format is correct. There are no extraneous commas, mismatched brackets, missing quotes, etc.
* All documents are sent in an array, even if there is only one document
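The second point can be sketched with a small helper that wraps a lone document before serializing the payload. The `to_payload` function below is hypothetical and not part of any SDK:

```python theme={null}
import json

def to_payload(documents) -> str:
    """Serialize documents for the add documents endpoint,
    wrapping a single document in an array."""
    if isinstance(documents, dict):
        # A lone document must still be sent inside an array
        documents = [documents]
    return json.dumps(documents)

print(to_payload({"id": 1, "title": "Kung Fu Panda"}))
# [{"id": 1, "title": "Kung Fu Panda"}]
```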
### Dataset format
Meilisearch accepts datasets in the following formats:
* [JSON](#json)
* [NDJSON](#ndjson)
* [CSV](#csv)
#### JSON
Documents represented as JSON objects are key-value pairs enclosed by curly brackets. As such, [any rule that applies to formatting JSON objects](https://www.w3schools.com/js/js_json_objects.asp) also applies to formatting Meilisearch documents. For example, an attribute must be a string, while a value must be a valid [JSON data type](https://www.w3schools.com/js/js_json_datatypes.asp).
Meilisearch will only accept JSON documents when it receives the `application/json` content-type header.
As an example, let's say you are creating an index that contains information about movies. A sample document might look like this:
```json theme={null}
{
"id": 1564,
"title": "Kung Fu Panda",
"genres": "Children's Animation",
"release-year": 2008,
"cast": [
{ "Jack Black": "Po" },
{ "Jackie Chan": "Monkey" }
]
}
```
In the above example:
* `"id"`, `"title"`, `"genres"`, `"release-year"`, and `"cast"` are attributes
* Each attribute is associated with a value, for example, `"Kung Fu Panda"` is the value of `"title"`
* The document contains a field with the primary key attribute and a unique document id as its value: `"id": 1564`
#### NDJSON
NDJSON (also known as JSON Lines) documents consist of individual lines, each of which is valid JSON text delimited by a newline character. Any [rules that apply to formatting NDJSON](https://github.com/ndjson/ndjson-spec) also apply to Meilisearch documents.
Meilisearch will only accept NDJSON documents when it receives the `application/x-ndjson` content-type header.
Compared to JSON, NDJSON has better writing performance and is less CPU and memory intensive. It is easier to validate and, unlike CSV, can handle nested structures.
The above JSON document would look like this in NDJSON:
```json theme={null}
{ "id": 1564, "title": "Kung Fu Panda", "genres": "Children's Animation", "release-year": 2008, "cast": [{ "Jack Black": "Po" }, { "Jackie Chan": "Monkey" }] }
```
#### CSV
CSV files express data as a sequence of values separated by a delimiter character. Meilisearch accepts `string`, `boolean`, and `number` data types for CSV documents. If you don't specify the data type for an attribute, it will default to `string`. Empty fields such as `,,` and `, ,` will be considered `null`.
By default, Meilisearch uses a single comma (`,`) as the delimiter. Use the `csvDelimiter` query parameter with the [add or update documents](/reference/api/documents/add-or-update-documents) or [add or replace documents](/reference/api/documents/add-or-replace-documents) endpoints to set a different character. Any [rules that apply to formatting CSV](https://datatracker.ietf.org/doc/html/rfc4180) also apply to Meilisearch documents.
Meilisearch will only accept CSV documents when it receives the `text/csv` content-type header.
Compared to JSON, CSV has better writing performance and is less CPU and memory intensive.
The above JSON document would look like this in CSV:
```csv theme={null}
"id:number","title:string","genres:string","release-year:number"
"1564","Kung Fu Panda","Children's Animation","2008"
```
Since CSV does not support arrays or nested objects, `cast` cannot be converted to CSV.
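The `attribute:type` header convention can be sketched as follows. This is an illustration of the conversion rules described above, not Meilisearch's actual CSV parser:

```python theme={null}
import csv
import io

def parse_typed_csv(text: str) -> list[dict]:
    """Parse a CSV whose headers use the `attribute:type` convention,
    casting each value to its declared type. Untyped columns default
    to string, and empty fields become null."""
    rows = list(csv.reader(io.StringIO(text)))
    headers = []
    for header in rows[0]:
        name, _, kind = header.partition(":")
        headers.append((name, kind or "string"))
    casts = {"number": float, "boolean": lambda v: v == "true", "string": str}
    documents = []
    for row in rows[1:]:
        doc = {}
        for (name, kind), value in zip(headers, row):
            doc[name] = None if value.strip() == "" else casts[kind](value)
        documents.append(doc)
    return documents

text = '"id:number","title:string"\n"1564","Kung Fu Panda"\n'
print(parse_typed_csv(text))
# [{'id': 1564.0, 'title': 'Kung Fu Panda'}]
```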
### Auto-batching
Auto-batching combines similar operations in the same index into a single batch, then processes them together. This significantly speeds up the indexing process.
Tasks within the same batch share the same values for `startedAt`, `finishedAt`, and `duration`.
If a task fails due to an invalid document, it will be removed from the batch. The rest of the batch will still process normally. If an [`internal`](/reference/errors/overview#errors) error occurs, the whole batch will fail and all tasks within it will share the same `error` object.
#### Auto-batching and task cancellation
If the task you're canceling is part of a batch, Meilisearch interrupts the whole process, discards all progress, and cancels that task. Then, it automatically creates a new batch without the canceled task and immediately starts processing it.
# Indexes
Source: https://www.meilisearch.com/docs/learn/getting_started/indexes
An index is a collection of documents, much like a table in MySQL or a collection in MongoDB.
An index is a group of documents with associated settings. It is comparable to a table in `SQL` or a collection in MongoDB.
An index is defined by a `uid` and contains the following information:
* One [primary key](#primary-key)
* Customizable [settings](#index-settings)
* An arbitrary number of documents
#### Example
Suppose you manage a database that contains information about movies, similar to [IMDb](https://imdb.com/). You would probably want to keep multiple types of documents, such as movies, TV shows, actors, directors, and more. Each of these categories would be represented by an index in Meilisearch.
Using an index's settings, you can customize search behavior for that index. For example, a `movies` index might contain documents with fields like `movie_id`, `title`, `genre`, `overview`, and `release_date`. Using settings, you could make a movie's `title` have a bigger impact on search results than its `overview`, or make the `movie_id` field non-searchable.
One index's settings do not impact other indexes. For example, you could use a different list of synonyms for your `movies` index than for your `costumes` index, even if they're on the same server.
## Index creation
### Implicit index creation
If you try to add documents or settings to an index that does not already exist, Meilisearch will automatically create it for you.
### Explicit index creation
You can explicitly create an index using the [create index endpoint](/reference/api/indexes/create-index). Once created, you can add documents using the [add documents endpoint](/reference/api/documents/add-or-update-documents).
While implicit index creation is more convenient, requiring only a single API request, **explicit index creation is considered safer for production**. This is because implicit index creation bundles multiple actions into a single task. If one action completes successfully while the other fails, the problem can be difficult to diagnose.
## Index UID
The `uid` is the **unique identifier** of an index. It is set when creating the index and must be an integer or string containing only alphanumeric characters `a-z A-Z 0-9`, hyphens `-` and underscores `_`.
```json theme={null}
{
"uid": "movies",
"createdAt": "2019-11-20T09:40:33.711324Z",
"updatedAt": "2019-11-20T10:16:42.761858Z"
}
```
An index's `uid` cannot be changed once the index has been created. You can, however, use the [`/indexes` API route](/reference/api/indexes/update-index) to update its primary key.
## Primary key
Every index has a primary key: a required attribute that must be present in all documents in the index. Each document must have a unique value associated with this attribute.
The primary key serves to identify each document, such that two documents in an index can never be completely identical. If you add two documents with the same value for the primary key, they will be treated as the same document: one will overwrite the other. If you try adding documents, and even a single one is missing the primary key, none of the documents will be stored.
You can set the primary key for an index or let it be inferred by Meilisearch. Read more about [setting the primary key](/learn/getting_started/primary_key#setting-the-primary-key).
[Learn more about the primary field](/learn/getting_started/primary_key)
## Index settings
Index settings can be thought of as a JSON object containing many different options for customizing search behavior.
To change index settings, use the [update settings endpoint](/reference/api/settings/update-all-settings) or any of the child routes.
### Displayed and searchable attributes
By default, every document field is searchable and displayed in response to search queries. However, you can choose to set some fields as non-searchable, non-displayed, or both.
You can update these field attributes using the [update settings endpoint](/reference/api/settings/update-all-settings), or the respective endpoints for [displayed attributes](/reference/api/settings/update-displayedattributes) and [searchable attributes](/reference/api/settings/update-searchableattributes).
[Learn more about displayed and searchable attributes.](/learn/relevancy/displayed_searchable_attributes)
### Distinct attribute
If your dataset contains multiple similar documents, you may want to return only one on search. Suppose you have numerous black jackets in different sizes in your `costumes` index. Setting `costume_name` as the distinct attribute will mean Meilisearch will not return more than one black jacket with the same `costume_name`.
Designate the distinct attribute using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update distinct attribute endpoint](/reference/api/settings/update-distinctattribute). **You can only set one field as the distinct attribute per index.**
[Learn more about distinct attributes.](/learn/relevancy/distinct_attribute)
### Faceting
Facets are a specific use-case of filters in Meilisearch: whether something is a facet or filter depends on your UI and UX design. Like filters, you need to add your facets to [`filterableAttributes`](/reference/api/settings/update-filterableattributes), then make a search query using the [`filter` search parameter](/reference/api/search/search-with-post#body-filter).
By default, Meilisearch returns `100` facet values for each faceted field. You can change this using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update faceting settings endpoint](/reference/api/settings/update-faceting).
[Learn more about faceting.](/learn/filtering_and_sorting/search_with_facet_filters)
### Filterable attributes
Filtering allows you to refine your search based on different categories. For example, you could search for all movies of a certain `genre`: `Science Fiction`, with a `rating` above `8`.
Before filtering on any document attribute, you must add it to `filterableAttributes` using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update filterable attributes endpoint](/reference/api/settings/update-filterableattributes). Then, make a search query using the [`filter` search parameter](/reference/api/search/search-with-post#body-filter).
[Learn more about filtering.](/learn/filtering_and_sorting/filter_search_results)
### Pagination
To protect your database from malicious scraping, Meilisearch only returns up to `1000` results for a search query. You can change this limit using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update pagination settings endpoint](/reference/api/settings/update-pagination).
[Learn more about pagination.](/guides/front_end/pagination)
### Ranking rules
Meilisearch uses ranking rules to sort matching documents so that the most relevant documents appear at the top. All indexes are created with the same built-in ranking rules executed in default order. The order of these rules matters: the first rule has the most impact, and the last rule has the least.
You can alter this order or define custom ranking rules to return certain results first. This can be done using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update ranking rules endpoint](/reference/api/settings/update-rankingrules).
[Learn more about ranking rules.](/learn/relevancy/relevancy)
### Sortable attributes
By default, Meilisearch orders results according to their relevancy. You can alter this sorting behavior to show certain results first.
Add the attributes you'd like to sort by to `sortableAttributes` using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update sortable attributes endpoint](/reference/api/settings/update-sortableattributes). You can then use the [`sort` search parameter](/reference/api/search/search-with-post#body-sort) to sort your results in ascending or descending order.
[Learn more about sorting.](/learn/filtering_and_sorting/sort_search_results)
### Stop words
Your dataset may contain words you want to ignore during search because, for example, they don't add semantic value or occur too frequently (for instance, `the` or `of` in English). You can add these words to the [stop words list](/reference/api/settings/get-stopwords) and Meilisearch will ignore them during search.
Change your index's stop words list using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update stop words endpoint](/reference/api/settings/update-stopwords). In addition to improving relevancy, designating common words as stop words greatly improves performance.
[Learn more about stop words.](/reference/api/settings/get-stopwords)
### Synonyms
Your dataset may contain words with similar meanings. For these, you can define a list of synonyms: words that will be treated as the same or similar for search purposes. Words set as synonyms won't always return the same results due to factors like typos and splitting the query.
Since synonyms are defined for a given index, they won't apply to any other index on the same Meilisearch instance. You can create your list of synonyms using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update synonyms endpoint](/reference/api/settings/update-synonyms).
[Learn more about synonyms.](/learn/relevancy/synonyms)
### Typo tolerance
Typo tolerance is a built-in feature that helps you find relevant results even when your search queries contain spelling mistakes or typos, for example, typing `chickne` instead of `chicken`. This setting allows you to do the following for your index:
* Enable or disable typo tolerance
* Configure the minimum word size for typos
* Disable typos on specific words
* Disable typos on specific document attributes
You can update the typo tolerance settings using the [update settings endpoint](/reference/api/settings/update-all-settings) or the [update typo tolerance endpoint](/reference/api/settings/update-typotolerance).
[Learn more about typo tolerance.](/learn/relevancy/typo_tolerance_settings)
## Swapping indexes
Suppose you have an index in production, `movies`, where your users are currently making search requests. You want to deploy a new version of `movies` with different settings, but updating it normally could cause downtime for your users. This problem can be solved using index swapping.
To use index swapping, you would create a second index, `movies_new`, containing all the changes you want to make to `movies`. Once `movies_new` is ready, you would swap the two indexes.
After the swap, the documents, settings, and task history of `movies` are exchanged with those of `movies_new` **without any downtime for the search clients**. The task history of `enqueued` tasks is not modified.
Once swapped, your users will still be making search requests to the `movies` index but it will contain the data of `movies_new`. You can delete `movies_new` after the swap or keep it in case something goes wrong and you want to swap back.
Swapping indexes is an atomic transaction: **either all indexes are successfully swapped, or none are**.
For more information, see the [swap indexes endpoint](/reference/api/indexes/swap-indexes).
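As a sketch of the request body, each swap operation pairs exactly two index uids. The `swap_payload` helper below is hypothetical; only the payload shape follows the swap indexes endpoint:

```python theme={null}
import json

def swap_payload(*pairs: tuple[str, str]) -> str:
    """Build the JSON body for POST /swap-indexes: an array of objects,
    each pairing exactly two index uids to swap."""
    for pair in pairs:
        if len(pair) != 2:
            raise ValueError("each swap operation must contain exactly two index uids")
    return json.dumps([{"indexes": list(pair)} for pair in pairs])

print(swap_payload(("movies", "movies_new")))
# [{"indexes": ["movies", "movies_new"]}]
```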
# Primary key
Source: https://www.meilisearch.com/docs/learn/getting_started/primary_key
The primary key is a special field that must be present in all documents indexed by Meilisearch.
## Primary field
An [index](/learn/getting_started/indexes) in Meilisearch is a collection of [documents](/learn/getting_started/documents). Documents are composed of fields, each field containing an attribute and a value.
The primary field is a special field that must be present in all documents. Its attribute is the **[primary key](#primary-key-1)** and its value is the **[document id](#document-id)**. It uniquely identifies each document in an index, ensuring that **it is impossible to have two exactly identical documents** present in the same index.
### Example
Suppose we have an index of books. Each document contains a number of fields with data on the book's `author`, `title`, and `price`. More importantly, each document contains a **primary field** consisting of the index's **primary key** `id` and a **unique id**.
```json theme={null}
[
{
"id": 1,
"title": "Diary of a Wimpy Kid: Rodrick Rules",
"author": "Jeff Kinney",
"genres": ["comedy","humor"],
"price": 5.00
},
{
"id": 2,
"title": "Black Leopard, Red Wolf",
"author": "Marlon James",
"genres": ["fantasy","drama"],
"price": 5.00
}
]
```
Aside from the primary key, **documents in the same index are not required to share attributes**. A book in this dataset could be missing the `title` or `genres` attribute and still be successfully indexed by Meilisearch, provided it has the `id` attribute.
### Primary key
The primary key is the attribute of the primary field.
Every index has a primary key, an attribute that must be shared across all documents in that index. If you attempt to add documents to an index and even a single one is missing the primary key, **none of the documents will be stored.**
#### Example
```json theme={null}
{
"id": 1,
"title": "Diary of a Wimpy Kid",
"author": "Jeff Kinney",
"genres": ["comedy","humor"],
"price": 5.00
}
```
Each document in the above index is identified by a primary field containing the primary key `id` and a unique document id value.
### Document id
The document id is the value associated with the primary key. It is part of the primary field and acts as a unique identifier for each document in a given index.
Two documents in an index can have the same values for all attributes except the primary key. If two documents in the same index have the same id, then they are treated as the same document and **the preceding document will be overwritten**.
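This last-write-wins behavior can be sketched as a dictionary keyed on the document id. This is an illustration of the behavior, not how Meilisearch stores documents internally:

```python theme={null}
def add_documents(index: dict, documents: list[dict], primary_key: str = "id") -> dict:
    """Mimic the overwrite behavior: a document sharing an id with an
    earlier one replaces it, since the id uniquely identifies documents."""
    for doc in documents:
        index[doc[primary_key]] = doc
    return index

index = {}
add_documents(index, [
    {"id": 1, "title": "Solaris"},
    {"id": 1, "title": "Solaris (2002)"},  # same id: overwrites the first
])
print(index)
# {1: {'id': 1, 'title': 'Solaris (2002)'}}
```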
Document addition requests in Meilisearch are atomic. This means that **if the primary field value of even a single document in a batch is incorrectly formatted, an error will occur, and Meilisearch will not index documents in that batch.**
#### Example
Good:
```json theme={null}
"id": "_Aabc012_"
```
Bad:
```json theme={null}
"id": "@BI+* ^5h2%"
```
#### Formatting the document id
The document id must be an integer or a string. If the id is a string, it can only contain alphanumeric characters (`a-z`, `A-Z`, `0-9`), hyphens (`-`), and underscores (`_`).
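These constraints translate into a simple check. The `is_valid_document_id` helper is hypothetical, shown only to make the rules concrete:

```python theme={null}
import re

def is_valid_document_id(doc_id) -> bool:
    """A document id must be an integer, or a string limited to
    alphanumeric characters, hyphens, and underscores."""
    if isinstance(doc_id, int):
        return True
    if isinstance(doc_id, str):
        return re.fullmatch(r"[a-zA-Z0-9_-]+", doc_id) is not None
    return False

print(is_valid_document_id("_Aabc012_"))    # True
print(is_valid_document_id("@BI+* ^5h2%"))  # False
```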
## Setting the primary key
You can set the primary key explicitly or let Meilisearch infer it from your dataset. Whatever your choice, an index can have only one primary key at a time, and the primary key cannot be changed while documents are present in the index.
### Setting the primary key on index creation
When creating an index manually, you can explicitly indicate the primary key you want that index to use.
The code below creates an index called `books` and sets `reference_number` as its primary key:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes' \
-H 'Content-Type: application/json' \
--data-binary '{
"uid": "books",
"primaryKey": "reference_number"
}'
```
```javascript JS theme={null}
client.createIndex('books', { primaryKey: 'reference_number' })
```
```python Python theme={null}
client.create_index('books', {'primaryKey': 'reference_number'})
```
```php PHP theme={null}
$client->createIndex('books', ['primaryKey' => 'reference_number']);
```
```java Java theme={null}
client.createIndex("books", "reference_number");
```
```ruby Ruby theme={null}
client.create_index('books', primary_key: 'reference_number')
```
```go Go theme={null}
client.CreateIndex(&meilisearch.IndexConfig{
Uid: "books",
PrimaryKey: "reference_number",
})
```
```csharp C# theme={null}
TaskInfo task = await client.CreateIndexAsync("books", "reference_number");
```
```rust Rust theme={null}
client
.create_index("books", Some("reference_number"))
.await
.unwrap();
```
```swift Swift theme={null}
client.createIndex(uid: "books", primaryKey: "reference_number") { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.createIndex('books', primaryKey: 'reference_number');
```
**Response:**
```json theme={null}
{
"taskUid": 1,
"indexUid": "books",
"status": "enqueued",
"type": "indexCreation",
"enqueuedAt": "2022-09-20T12:06:24.364352Z"
}
```
### Setting the primary key on document addition
When adding documents to an empty index, you can explicitly set the index's primary key as part of the document addition request.
The code below adds a document to the `books` index and sets `reference_number` as that index's primary key:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/books/documents?primaryKey=reference_number' \
-H 'Content-Type: application/json' \
--data-binary '[
{
"reference_number": 287947,
"title": "Diary of a Wimpy Kid",
"author": "Jeff Kinney",
"genres": [
"comedy",
"humor"
],
"price": 5.00
}
]'
```
```javascript JS theme={null}
client.index('books').addDocuments([
{
reference_number: 287947,
title: 'Diary of a Wimpy Kid',
author: 'Jeff Kinney',
genres: ['comedy','humor'],
price: 5.00
}
], { primaryKey: 'reference_number' })
```
```python Python theme={null}
client.index('books').add_documents([{
'reference_number': 287947,
'title': 'Diary of a Wimpy Kid',
'author': 'Jeff Kinney',
'genres': ['comedy', 'humor'],
'price': 5.00
}], 'reference_number')
```
```php PHP theme={null}
$client->index('books')->addDocuments([
[
'reference_number' => 287947,
'title' => 'Diary of a Wimpy Kid',
'author' => 'Jeff Kinney',
'genres' => ['comedy', 'humor'],
'price' => 5.00
]
], 'reference_number');
```
```java Java theme={null}
client.index("books").addDocuments("[{"
+ "\"reference_number\": 2879,"
+ "\"title\": \"Diary of a Wimpy Kid\","
+ "\"author\": \"Jeff Kinney\","
+ "\"genres\": [\"comedy\", \"humor\"],"
+ "\"price\": 5.00"
+ "}]"
, "reference_number");
```
```ruby Ruby theme={null}
client.index('books').add_documents([
{
reference_number: 287947,
title: 'Diary of a Wimpy Kid',
author: 'Jeff Kinney',
genres: ['comedy', 'humor'],
price: 5.00
}
], 'reference_number')
```
```go Go theme={null}
documents := []map[string]interface{}{
{
"reference_number": 287947,
"title": "Diary of a Wimpy Kid",
"author": "Jeff Kinney",
"genres": []string{"comedy", "humor"},
"price": 5.00,
},
}
referenceNumber := "reference_number"
client.Index("books").AddDocuments(documents, &referenceNumber)
```
```csharp C# theme={null}
await index.AddDocumentsAsync(
new[] {
new Book {
ReferenceNumber = 287947,
Title = "Diary of a Wimpy Kid",
Author = "Jeff Kinney",
Genres = new string[] { "comedy", "humor" },
Price = 5.00
}
},
"reference_number");
```
```rust Rust theme={null}
#[derive(Serialize, Deserialize)]
struct Book {
reference_number: String,
title: String,
author: String,
genres: Vec<String>,
price: f64
}
let task: TaskInfo = client
.index("books")
.add_documents(&[
Book {
reference_number: "287947".to_string(),
title: "Diary of a Wimpy Kid".to_string(),
author: "Jeff Kinney".to_string(),
genres: vec!["comedy".to_string(),"humor".to_string()],
price: 5.00
}
], Some("reference_number"))
.await
.unwrap();
```
```swift Swift theme={null}
let documents: Data = """
[
{
"reference_number": 287947,
"title": "Diary of a Wimpy Kid",
"author": "Jeff Kinney",
"genres": ["comedy", "humor"],
"price": 5
}
]
""".data(using: .utf8)!
client.index("books").addDocuments(documents: documents, primaryKey: "reference_number") { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('books').addDocuments([
{
'reference_number': 287947,
'title': 'Diary of a Wimpy Kid',
'author': 'Jeff Kinney',
'genres': ['comedy', 'humor'],
'price': 5.00
}
], primaryKey: 'reference_number');
```
**Response:**
```json theme={null}
{
"taskUid": 1,
"indexUid": "books",
"status": "enqueued",
"type": "documentAdditionOrUpdate",
"enqueuedAt": "2022-09-20T12:08:55.463926Z"
}
```
### Changing your primary key with the update index endpoint
The primary key cannot be changed while documents are present in the index. To change the primary key of an index that already contains documents, you must therefore [delete all documents](/reference/api/documents/delete-all-documents) from that index, [change the primary key](/reference/api/indexes/update-index), then [add them](/reference/api/documents/add-or-replace-documents) again.
The code below updates the primary key to `title`:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/books' \
-H 'Content-Type: application/json' \
--data-binary '{ "primaryKey": "title" }'
```
```javascript JS theme={null}
client.updateIndex('books', {
primaryKey: 'title'
})
```
```python Python theme={null}
client.index('books').update(primary_key='title')
```
```php PHP theme={null}
$client->updateIndex('books', ['primaryKey' => 'title']);
```
```java Java theme={null}
client.updateIndex("books", "title");
```
```ruby Ruby theme={null}
client.index('books').update(primary_key: 'title')
```
```go Go theme={null}
client.Index("books").UpdateIndex(&meilisearch.UpdateIndexRequestParams{
PrimaryKey: "title",
})
```
```csharp C# theme={null}
TaskInfo task = await client.UpdateIndexAsync("books", "title");
```
```rust Rust theme={null}
let task = IndexUpdater::new("books", &client)
.with_primary_key("title")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
client.updateIndex(uid: "movies", primaryKey: "title") { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.updateIndex('books', 'title');
```
**Response:**
```json theme={null}
{
"taskUid": 1,
"indexUid": "books",
"status": "enqueued",
"type": "indexUpdate",
"enqueuedAt": "2022-09-20T12:10:06.444672Z"
}
```
### Meilisearch guesses your primary key
Suppose you add documents to an index without previously setting its primary key. In this case, Meilisearch will automatically look for an attribute ending with the string `id` in a case-insensitive manner (for example, `uid`, `BookId`, `ID`) in your first document and set it as the index's primary key.
If Meilisearch finds [multiple attributes ending with `id`](#index_primary_key_multiple_candidates_found) or [cannot find a suitable attribute](#index_primary_key_no_candidate_found), it will throw an error. In both cases, the document addition process will be interrupted and no documents will be added to your index.
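The inference heuristic can be approximated in a few lines of Python. This is a simplified sketch of the documented behavior, not the engine's actual code:

```python
def guess_primary_key(first_document: dict) -> str:
    """Approximate Meilisearch's primary key inference (sketch only)."""
    # Case-insensitive match on attribute names ending with "id".
    candidates = [key for key in first_document if key.lower().endswith("id")]
    if len(candidates) > 1:
        raise ValueError("index_primary_key_multiple_candidates_found")
    if not candidates:
        raise ValueError("index_primary_key_no_candidate_found")
    return candidates[0]

print(guess_primary_key({"BookId": 1, "title": "Solaris"}))  # BookId
```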
## Primary key errors
This section covers some primary key errors and how to resolve them.
### `index_primary_key_multiple_candidates_found`
This error occurs when you add documents to an index for the first time and Meilisearch finds multiple attributes ending with `id`. It can be resolved by [manually setting the index's primary key](#setting-the-primary-key-on-document-addition).
```json theme={null}
{
"uid": 4,
"indexUid": "books",
"status": "failed",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 5,
"indexedDocuments": 5
},
"error": {
"message": "The primary key inference failed as the engine found 2 fields ending with `id` in their names: 'id' and 'author_id'. Please specify the primary key manually using the `primaryKey` query parameter.",
"code": "index_primary_key_multiple_candidates_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#index-primary-key-multiple-candidates-found"
},
"duration": "PT0.006002S",
"enqueuedAt": "2023-01-17T10:44:42.625574Z",
"startedAt": "2023-01-17T10:44:42.626041Z",
"finishedAt": "2023-01-17T10:44:42.632043Z"
}
```
### `index_primary_key_no_candidate_found`
This error occurs when you add documents to an index for the first time and none of them have an attribute ending with `id`. It can be resolved by [manually setting the index's primary key](#setting-the-primary-key-on-document-addition), or ensuring that all documents you add possess an `id` attribute.
```json theme={null}
{
"uid": 1,
"indexUid": "books",
"status": "failed",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 5,
"indexedDocuments": null
},
"error": {
"message": "The primary key inference failed as the engine did not find any field ending with `id` in its name. Please specify the primary key manually using the `primaryKey` query parameter.",
"code": "index_primary_key_no_candidate_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#index-primary-key-no-candidate-found"
},
"duration": "PT0.006579S",
"enqueuedAt": "2023-01-17T10:19:14.464858Z",
"startedAt": "2023-01-17T10:19:14.465369Z",
"finishedAt": "2023-01-17T10:19:14.471948Z"
}
```
### `invalid_document_id`
This happens when your document id does not have the correct [format](#formatting-the-document-id). The document id can only be of type integer or string, composed of alphanumeric characters `a-z A-Z 0-9`, hyphens `-`, and underscores `_`.
```json theme={null}
{
"uid": 1,
"indexUid": "books",
"status": "failed",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 5,
"indexedDocuments": null
},
"error": {
"message": "Document identifier `1@` is invalid. A document identifier can be of type integer or string, only composed of alphanumeric characters (a-z A-Z 0-9), hyphens (-) and underscores (_).",
"code": "invalid_document_id",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#invalid_document_id"
},
"duration": "PT0.009738S",
"enqueuedAt": "2021-12-30T11:28:59.075065Z",
"startedAt": "2021-12-30T11:28:59.076144Z",
"finishedAt": "2021-12-30T11:28:59.084803Z"
}
```
### `missing_document_id`
This error occurs when your index already has a primary key, but one of the documents you are trying to add is missing this attribute.
```json theme={null}
{
"uid": 1,
"indexUid": "books",
"status": "failed",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 1,
"indexedDocuments": null
},
"error": {
"message": "Document doesn't have a `id` attribute: `{\"title\":\"Solaris\",\"author\":\"Stanislaw Lem\",\"genres\":[\"science fiction\"],\"price\":5.0.",
"code": "missing_document_id",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#missing_document_id"
},
"duration": "PT0.007899S",
"enqueuedAt": "2021-12-30T11:23:52.304689Z",
"startedAt": "2021-12-30T11:23:52.307632Z",
"finishedAt": "2021-12-30T11:23:52.312588Z"
}
```
# Search preview
Source: https://www.meilisearch.com/docs/learn/getting_started/search_preview
Meilisearch comes with a built-in search interface for quick testing during development.
Meilisearch Cloud gives you access to a dedicated search preview interface. This is useful to test search result relevancy when you are tweaking an index's settings.
If you are self-hosting Meilisearch and need a local search interface, access `http://localhost:7700` in your browser. This local preview only allows you to perform plain searches and offers no customization options.
## Accessing and using search preview
Log into your [Meilisearch Cloud](https://cloud.meilisearch.com/login) account, navigate to your project, then click on "Search preview":
Select the index you want to search on using the input on the left-hand side:
Then use the main input to perform plain keyword searches:
When debugging relevancy, you may want to activate the "Ranking score" option. This displays the overall [ranking score](/learn/relevancy/ranking_score) for each result, together with the score for each individual ranking rule:
## Configuring search options
Use the menu on the left-hand side to configure [sorting](/learn/filtering_and_sorting/sort_search_results) and [filtering](/learn/filtering_and_sorting/filter_search_results). These require you to first edit your index's sortable and filterable attributes. You may additionally configure any filterable attributes as facets. In this example, "Genres" is one of the configured facets:
You can also perform [AI-powered searches](/learn/ai_powered_search/getting_started_with_ai_search) if this functionality has been enabled for your project.
Clicking on "Advanced parameters" gives you access to further customization options, including setting which document fields Meilisearch returns and explicitly declaring the search language:
## Exporting search options
You can export the full search query for further testing in other tools and environments. Click on the cloud icon next to "Advanced parameters", then choose to download a JSON file or copy the query to your clipboard:
# What is Meilisearch?
Source: https://www.meilisearch.com/docs/learn/getting_started/what_is_meilisearch
Meilisearch is a search engine featuring a blazing fast RESTful search API, typo tolerance, comprehensive language support, and much more.
Meilisearch is a **RESTful search API**. It aims to be a **ready-to-go solution** for everyone who wants a **fast and relevant search experience** for their end-users ⚡️🔎
## Meilisearch Cloud
[Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=what-is-meilisearch) is the recommended way of using Meilisearch. Using Meilisearch Cloud greatly simplifies installing, maintaining, and updating Meilisearch. [Get started with a 14-day free trial](https://www.meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=what-is-meilisearch).
## Demo
[](https://where2watch.meilisearch.com/?utm_campaign=oss\&utm_source=docs\&utm_medium=what-is-meilisearch\&utm_content=gif)
*Meilisearch helps you find where to watch a movie at [where2watch.meilisearch.com](https://where2watch.meilisearch.com/?utm_campaign=oss\&utm_source=docs\&utm_medium=what-is-meilisearch\&utm_content=link).*
## Features
* **Blazing fast**: Answers in less than 50 milliseconds
* [AI-powered search](/learn/ai_powered_search/getting_started_with_ai_search): Use the power of AI to make search feel human
* **Search as you type**: Results are updated on each keystroke using [prefix-search](/learn/engine/prefix#prefix-search)
* [Typo tolerance](/learn/relevancy/typo_tolerance_settings): Get relevant matches even when queries contain typos and misspellings
* [Comprehensive language support](/learn/resources/language): Optimized support for **Chinese, Japanese, Hebrew, and languages using the Latin alphabet**
* **Returns the whole document**: The entire document is returned upon search
* **Highly customizable search and indexing**: Customize search behavior to better meet your needs
* [Custom ranking](/learn/relevancy/relevancy): Customize the relevancy of the search engine and the ranking of the search results
* [Filtering](/learn/filtering_and_sorting/filter_search_results) and [faceted search](/learn/filtering_and_sorting/search_with_facet_filters): Enhance user search experience with custom filters and build a faceted search interface in a few lines of code
* [Highlighting](/reference/api/search/search-with-post#body-highlight-pre-tag): Highlighted search results in documents
* [Stop words](/reference/api/settings/get-stopwords): Ignore common non-relevant words like `of` or `the`
* [Synonyms](/reference/api/settings/get-synonyms): Configure synonyms to include more relevant content in your search results
* **RESTful API**: Integrate Meilisearch in your technical stack with our plugins and SDKs
* [Search preview](/learn/getting_started/search_preview): Allows you to test your search settings without implementing a front-end
* [API key management](/learn/security/basic_security): Protect your instance with API keys. Set expiration dates and control access to indexes and endpoints so that your data is always safe
* [Multitenancy and tenant tokens](/learn/security/multitenancy_tenant_tokens): Manage complex multi-user applications. Tenant tokens help you decide which documents each one of your users can search
* [Multi-search](/reference/api/search/perform-a-multi-search): Perform multiple search queries on multiple indexes with a single HTTP request
* [Geosearch](/learn/filtering_and_sorting/geosearch): Filter and sort results based on their geographic location
* [Index swapping](/learn/getting_started/indexes#swapping-indexes): Deploy major database updates with zero search downtime
## Philosophy
Our goal is to provide a simple and intuitive experience for both developers and end-users. Ease of use was the primary focus of Meilisearch from its first release, and it continues to drive its development today.
Meilisearch's ease-of-use goes hand-in-hand with ultra relevant search results. Meilisearch **sorts results according to a set of [ranking rules](/learn/relevancy/ranking_rules)**. Our default ranking rules work for most use cases as we developed them by working directly with our users. You can also **configure the [search parameters](/reference/api/search/search-with-post)** to refine your search even further.
Meilisearch should **not be your main data store**. It is a search engine, not a database. Meilisearch should contain only the data you want your users to search through. If you must add data that is irrelevant to search, be sure to [make those fields non-searchable](/learn/relevancy/displayed_searchable_attributes#searchable-fields) to improve relevancy and response time.
Meilisearch provides an intuitive search-as-you-type experience with response times under 50 milliseconds, no matter whether you are developing a site or an app. This helps end-users find what they are looking for quickly and efficiently. To make that happen, we are fully committed to the philosophy of [prefix search](/learn/engine/prefix).
## Give it a try
Instead of showing you examples, why not invite you to test Meilisearch interactively with the **out-of-the-box search preview** we provide?
There's no need to write a single line of front-end code. All you need to do is follow [this guide](/learn/self_hosted/getting_started_with_self_hosted_meilisearch) to give the search engine a try!
# Indexing best practices
Source: https://www.meilisearch.com/docs/learn/indexing/indexing_best_practices
Tips to speed up your documents indexing process.
In this guide, you will find some of the best practices to index your data efficiently and speed up the indexing process.
## Define searchable attributes
Review your list of [searchable attributes](/learn/relevancy/displayed_searchable_attributes#searchable-fields) and ensure it includes only the fields you want to be checked for query word matches. This improves both relevancy and search speed by removing irrelevant data from your database. It will also keep your disk usage to the necessary minimum.
By default, all document fields are searchable. The fewer fields Meilisearch needs to index, the faster the indexing process.
### Review filterable and sortable attributes
Some document fields are necessary for [filtering](/learn/filtering_and_sorting/filter_search_results) and [sorting](/learn/filtering_and_sorting/sort_search_results) results, but they do not need to be *searchable*. Generally, **numeric and boolean fields** fall into this category. Make sure to review your list of searchable attributes and remove any fields that are only used for filtering or sorting.
## Configure your index before adding documents
When creating a new index, first [configure its settings](/reference/api/settings/list-all-settings) and only then add your documents. Whenever you update settings such as [ranking rules](/learn/relevancy/relevancy), Meilisearch will trigger a reindexing of all your documents. This can be a time-consuming process, especially if you have a large dataset. For this reason, it is better to define ranking rules and other settings before indexing your data.
## Optimize document size
Smaller documents are processed faster, so make sure to trim down any unnecessary data from your documents. When a document field is missing from the list of [searchable](/reference/api/settings/get-searchableattributes), [filterable](/reference/api/settings/get-filterableattributes), [sortable](/reference/api/settings/get-sortableattributes), or [displayed](/reference/api/settings/get-displayedattributes) attributes, it might be best to remove it from the document. To go further, consider compressing your data using methods such as `br`, `deflate`, or `gzip`. Consult the [supported encoding formats reference](/reference/api/headers).
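As a sketch of that last tip, the snippet below gzips a document batch and sets the matching `Content-Encoding` header. The request flow itself is omitted and the helper name is illustrative; check the headers reference for the encodings your version supports:

```python
import gzip
import json

def compressed_request(documents: list) -> tuple[dict, bytes]:
    """Build headers and a gzip-compressed body for a document batch."""
    raw = json.dumps(documents).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",  # tells the server the body is gzipped
    }
    return headers, gzip.compress(raw)

headers, body = compressed_request([{"id": 1, "title": "Solaris"}])
```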
## Prefer bigger HTTP payloads
A single large HTTP payload is processed more quickly than multiple smaller payloads. For example, adding the same 100,000 documents in two batches of 50,000 documents will be quicker than adding them in four batches of 25,000 documents. By default, Meilisearch sets the maximum payload size to 100MB, but [you can change this value if necessary](/learn/self_hosted/configure_meilisearch_at_launch#payload-limit-size).
Larger payloads consume more RAM. An instance may crash if it requires more memory than is currently available on the machine.
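One way to apply this advice is to greedily pack documents into the largest batches that stay under the payload limit. The helper below is a sketch under that assumption; the 100MB default mirrors the documented limit, and the byte accounting is approximate:

```python
import json

def batch_documents(documents: list, max_bytes: int = 100 * 1024 * 1024):
    """Greedily pack documents into the largest batches that stay
    under the payload limit (sketch; sizes are approximate)."""
    batches, current, current_size = [], [], 2  # 2 bytes for "[]"
    for doc in documents:
        size = len(json.dumps(doc).encode("utf-8")) + 1  # +1 for the comma
        if current and current_size + size > max_bytes:
            batches.append(current)
            current, current_size = [], 2
        current.append(doc)
        current_size += size
    if current:
        batches.append(current)
    return batches
```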
## Keep Meilisearch up-to-date
Make sure to keep your Meilisearch instance up-to-date to benefit from the latest improvements. You can see [a list of all our engine releases on GitHub](https://github.com/meilisearch/meilisearch/releases?q=prerelease%3Afalse).
For more information on how indexing works under the hood, take a look at [this blog post about indexing best practices](https://blog.meilisearch.com/best-practices-for-faster-indexing/).
## Do not use Meilisearch as your main database
Meilisearch is optimized for information retrieval and was not designed to be your main data container. The more documents you add, the longer indexing and search will take. Only index documents you want to retrieve when searching.
## Create separate indexes for multiple languages
If you have a multilingual dataset, create a separate index for each language.
## Avoid creating too many indexes
Due to the complexities of dynamic virtual address management, having more indexes than necessary can negatively impact performance.
What constitutes too many indexes depends on your specific setup. If you notice significant performance degradation when performing multi-index searches, try to reduce the number of indexes in your instance.
## Remove I/O operation limits
Ensure there is no limit to I/O operations in your machine. The restrictions imposed by cloud providers such as [AWS's Amazon EBS service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#IOcredit) can severely impact indexing performance.
## Consider upgrading to machines with SSDs, more RAM, and multi-threaded processors
If you have followed the previous tips in this guide and are still experiencing slow indexing times, consider upgrading your machine.
Indexing is a memory-intensive and multi-threaded operation. The more memory and processor cores available, the faster Meilisearch will index new documents. When trying to improve indexing speed, using a machine with more processor cores is more effective than increasing RAM.
Due to how Meilisearch works, it is best to avoid HDDs (Hard Disk Drives) as they can easily become performance bottlenecks.
## Enable binary quantization when using AI-powered search
If you are experiencing performance issues when indexing documents for AI-powered search, consider enabling [binary quantization](/reference/api/settings/update-embedders) for your embedders. Binary quantization compresses vectors by representing each dimension with 1-bit values. This reduces the relevancy of semantic search results, but greatly improves performance.
Binary quantization works best with large datasets containing more than 1M documents and using models with more than 1400 dimensions.
**Activating binary quantization is irreversible.** Once enabled, Meilisearch converts all vectors and discards all vector data that does not fit within 1-bit. The only way to recover the vectors' original values is to re-vectorize the whole index in a new embedder.
# Handling multilingual datasets
Source: https://www.meilisearch.com/docs/learn/indexing/multilingual-datasets
This guide covers indexing strategies, language-specific tokenizers, and best practices for aligning document and query tokenization.
When working with datasets that include content in multiple languages, it’s important to ensure that both documents and queries are processed correctly. This guide explains how to index and search multilingual datasets in Meilisearch, highlighting best practices, useful features, and what to avoid.
## Recommended indexing strategy
### Create a separate index for each language (recommended)
If you have a multilingual dataset, the best practice is to create one index per language.
#### Benefits
* Provides natural sharding of your data by language, making it easier to maintain and scale.
* Lets you apply language-specific settings, such as [stop words](/reference/api/settings/get-stopwords), and [separators](/reference/api/settings/get-separatortokens).
* Simplifies the handling of complex languages like Chinese or Japanese, which require specialized tokenizers.
#### Searching across languages
If you want to allow users to search in more than one language at once, you can:
* Run a [multi-search](/reference/api/search/perform-a-multi-search), querying several indexes in parallel.
* Use [federated search](/reference/api/search/perform-a-multi-search), aggregating results from multiple language indexes into a single response.
### Create a single index for multiple languages
In some cases, you may prefer to keep multiple languages in a **single index**. This approach is generally acceptable for proofs of concept or datasets with fewer than \~1M documents.
#### When it works well
* Suitable for languages that use spaces to separate words and share similar tokenization behavior (e.g., English, French, Italian, Spanish, Portuguese).
* Useful when you want a simple setup without maintaining multiple indexes.
#### Limitations
* Languages with compound words (like German) or diacritics that change meaning (like Swedish), as well as non-space-separated writing systems (like Chinese, or Japanese), work better in their own index since they require specialized [tokenizers](/learn/indexing/tokenization).
* Chinese and Japanese documents should not be mixed in the same field, since distinguishing between them automatically is very difficult. Each of these languages works best in its own dedicated index. However, if fields are strictly separated by language (e.g., title\_zh always Chinese, title\_ja always Japanese), it is possible to store them in the same index.
* As the number of documents and languages grows, performance and relevancy can decrease, since queries must run across a larger, mixed dataset.
#### Best practices for the single index approach
* Use language-specific field names with a prefix or suffix (e.g., title\_fr, title\_en, or fr\_title).
* Declare these fields as [localized attributes](/reference/api/settings/get-localized-attributes) so Meilisearch can apply the correct tokenizer to each one.
* This allows you to filter and search by language, even when multiple languages are stored in the same index.
## Language detection and configuration
Accurate language detection is essential for applying the right tokenizer and normalization rules, which directly impact search quality.
By default, Meilisearch automatically detects the language of your documents and queries.
This automatic detection works well in most cases, especially with longer texts. However, results can vary depending on the type of input:
* **Documents**: detection is generally reliable for longer content, but short snippets may produce less accurate results.
* **Queries**: short or partial inputs (such as type-as-you-search) are harder to identify correctly, making explicit configuration more important.
When you explicitly set `localizedAttributes` for documents and `locales` for queries, you **restrict the detection to the languages you’ve declared**.
**Benefits**:
* Meilisearch only chooses between the specified languages (e.g., English vs German).
* Detection is more **reliable and consistent**, reducing mismatches.
For search to work effectively, **queries must be tokenized and normalized in the same way as documents**. If strategies are not aligned, queries may fail to match even when the correct terms exist in the index.
### Aligning document and query tokenization
To keep queries and documents consistent, Meilisearch provides configuration options for both sides. Meilisearch uses the same `locales` configuration concept for both documents and queries:
* In **documents**, `locales` are declared through `localizedAttributes`.
* In **queries**, `locales` are passed as a [search parameter](/reference/api/search/search-with-post#body-locales).
#### Declaring locales for documents
The [`localizedAttributes` setting](/reference/api/settings/get-localized-attributes) allows you to explicitly define which languages are present in your dataset, and in which fields.
For example, if your dataset contains multilingual titles, you can declare which attribute belongs to which language:
```json theme={null}
{
"id": 1,
"title_en": "Danube Steamship Company",
"title_de": "Donaudampfschifffahrtsgesellschaft",
"title_fr": "Compagnie de navigation à vapeur du Danube"
}
```
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/localized-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
{ "attributePatterns": ["*_en"], "locales": ["eng"] },
{ "attributePatterns": ["*_de"], "locales": ["deu"] },
{ "attributePatterns": ["*_fr"], "locales": ["fra"] }
]'
```
#### Specifying locales for queries
When performing searches, you can specify [query locales](/reference/api/search/search-with-post#body-locales) to ensure queries are tokenized with the correct rules.
```javascript theme={null}
client.index('INDEX_NAME').search('schiff', { locales: ['deu'] })
```
This ensures queries are interpreted with the correct tokenizer and normalization rules, avoiding false mismatches.
## Conclusion
Handling multilingual datasets in Meilisearch requires careful planning of both indexing and querying.
By choosing the right indexing strategy, and explicitly configuring languages with `localizedAttributes` and `locales`, you ensure that documents and queries are processed consistently.
# Optimize indexing performance with batch statistics
Source: https://www.meilisearch.com/docs/learn/indexing/optimize_indexing_performance
Learn how to analyze the `progressTrace` to identify and resolve indexing bottlenecks in Meilisearch.
Indexing performance can vary significantly depending on your dataset, index settings, and hardware. The [batch object](/reference/api/async-task-management/list-batches) provides information about the progress of asynchronous indexing operations.
The `progressTrace` field within the batch object offers a detailed breakdown of where time is spent during the indexing process. Use this data to identify bottlenecks and improve indexing speed.
## Understanding the `progressTrace`
`progressTrace` is a hierarchical trace showing each phase of indexing and how long it took.
Each entry follows the structure:
```json theme={null}
"processing tasks > indexing > extracting word proximity": "33.71s"
```
This means:
* The step occurred during **indexing**.
* The subtask was **extracting word proximity**.
* It took **33.71 seconds**.
Focus on the **longest-running steps** and investigate which index settings or data characteristics influence them.
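The longest-running steps can be spotted programmatically. The sketch below is a minimal, illustrative Python helper (not part of any Meilisearch SDK) that ranks `progressTrace` entries by duration, assuming the trace has already been retrieved as a dictionary mapping step paths to duration strings:

```python theme={null}
# Sketch: rank progressTrace entries by duration to spot bottlenecks.
# Assumes the trace is a dict of "step path" -> duration strings such
# as "33.71s" or "450ms", as shown in the batch object above.

def parse_duration(value: str) -> float:
    """Convert a duration string like '33.71s' or '450ms' to seconds."""
    value = value.strip()
    if value.endswith("ms"):
        return float(value[:-2]) / 1000
    if value.endswith("s"):
        return float(value[:-1])
    raise ValueError(f"unrecognized duration: {value!r}")

def slowest_steps(progress_trace: dict, top: int = 3) -> list:
    """Return the `top` longest-running steps as (path, seconds) pairs."""
    timed = [(path, parse_duration(d)) for path, d in progress_trace.items()]
    return sorted(timed, key=lambda item: item[1], reverse=True)[:top]

trace = {
    "processing tasks > indexing > extracting word proximity": "33.71s",
    "processing tasks > indexing > extracting words": "12.02s",
    "processing tasks > indexing > waiting for database writes": "450ms",
}
for path, seconds in slowest_steps(trace):
    print(f"{seconds:8.2f}s  {path}")
```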
## Key phases and how to optimize them
### `computing document changes` and `extracting documents`
| Description | Optimization |
| --------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
| Meilisearch compares incoming documents to existing ones. | No direct optimization possible. Process duration scales with the number and size of incoming documents. |
### `extracting facets` and `merging facet caches`
| Description | Optimization |
| ------------------------------------------ | -------------------------------------------------------------------------------------------------------------- |
| Extracts and merges filterable attributes. | Keep the number of [**filterable attributes**](/reference/api/settings/get-filterableattributes) to a minimum. |
### `extracting words` and `merging word caches`
| Description | Optimization |
| --------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Tokenizes text and builds the inverted index. | Ensure the [searchable attributes](/reference/api/settings/get-searchableattributes) list only includes the fields you want to be checked for query word matches. |
### `extracting word proximity` and `merging word proximity`
| Description | Optimization |
| -------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| Builds data structures for phrase and attribute ranking. | Lower the precision of this operation by setting [proximity precision](/reference/api/settings/update-proximityprecision) to `byAttribute` |
### `waiting for database writes`
| Description | Optimization |
| -------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| Time spent writing data to disk. | No direct optimization possible. Either the disk is too slow or you are writing too much data in a single operation. Avoid HDDs (hard disk drives). |
### `waiting for extractors`
| Description | Optimization |
| -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Time spent waiting for CPU-bound extraction. | No direct optimization possible. Indicates a CPU bottleneck. Use more cores or scale horizontally with [sharding](/learn/multi_search/implement_sharding). |
### `post processing facets > strings bulk` / `numbers bulk`
| Description | Optimization |
| ----------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Processes equality or comparison filters. | - Disable unused [**filter features**](/reference/api/settings/get-filterableattributes), such as comparison operators on string values. - Reduce the number of [**sortable attributes**](/reference/api/settings/get-sortableattributes). |
### `post processing facets > facet search`
| Description | Optimization |
| ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ |
| Builds structures for the [facet search API](/reference/api/facet-search/search-in-facets). | If you don’t use the facet search API, [disable it](/reference/api/settings/update-facetsearch). |
### Embeddings
| Trace key | Description | Optimization |
| -------------------------------- | --------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `writing embeddings to database` | Time spent saving vector embeddings. | - Use embedding vectors with fewer dimensions. - Consider enabling [binary quantization](/reference/api/settings/update-embedders). |
| `extracting embeddings` | Time spent extracting embeddings from embedding providers' responses. | Reduce the amount of data sent to the embeddings provider: - [Include fewer attributes in `documentTemplate`](/learn/ai_powered_search/document_template_best_practices). - [Reduce the maximum size of the document template](/reference/api/settings/update-embedders). - [Disable embedding regeneration on document update](/reference/api/documents/add-or-update-documents). - If using a third-party service like OpenAI, upgrade your account to a higher tier. |
### `post processing words > word prefix *`
| Description | Optimization |
| ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Builds prefix data for autocomplete. Allows matching documents that begin with a specific query term, instead of only exact matches. | Disable [**prefix search**](/reference/api/settings/get-prefix-search) (`prefixSearch: disabled`). *This can severely impact search result relevancy.* |
### `post processing words > word fst`
| Description | Optimization |
| ---------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Builds the word FST (finite state transducer). | No direct action possible, as the FST size reflects the number of distinct words in the database. Using documents with fewer searchable words may improve operation speed. |
## Example analysis
If you see:
```json theme={null}
"processing tasks > indexing > post processing facets > facet search": "1763.06s"
```
[Facet searching](/learn/filtering_and_sorting/search_with_facet_filters#searching-facet-values) is taking significant indexing time. If your application doesn’t use facet search, disable the feature:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/INDEX_UID/settings/facet-search' \
-H 'Content-Type: application/json' \
--data-binary 'false'
```
```javascript JS theme={null}
client.index('INDEX_NAME').updateFacetSearch(false);
```
```python Python theme={null}
client.index('books').update_facet_search_settings(False)
```
```php PHP theme={null}
$client->index('INDEX_NAME')->updateFacetSearch(false);
```
```ruby Ruby theme={null}
client.index('INDEX_UID').update_facet_search_setting(false)
```
```go Go theme={null}
client.Index("books").UpdateFacetSearch(false)
```
```rust Rust theme={null}
let task: TaskInfo = client
.index(INDEX_UID)
.set_facet_search(false)
.await
.unwrap();
```
## Learn more
* [Indexing best practices](/learn/indexing/indexing_best_practices)
* [Impact of RAM and multi-threading on indexing performance](/learn/indexing/ram_multithreading_performance)
* [Configuring index settings](/learn/configuration/configuring_index_settings)
# Impact of RAM and multi-threading on indexing performance
Source: https://www.meilisearch.com/docs/learn/indexing/ram_multithreading_performance
Adding new documents to a Meilisearch index is a multi-threaded and memory-intensive operation. Consult this article for more information on indexing performance.
Adding new documents to an index is a multi-threaded and memory-intensive operation. Meilisearch's indexes are at the core of what makes our search engine fast, relevant, and reliable. This article explains some of the details regarding RAM consumption and multi-threading.
## RAM
By default, our indexer uses the `sysinfo` Rust library to calculate a machine's total memory size. Meilisearch then adapts its behavior so indexing uses a maximum of two-thirds of available resources. Alternatively, you can use the [`--max-indexing-memory`](/learn/self_hosted/configure_meilisearch_at_launch#max-indexing-memory) instance option to manually control the maximum amount of RAM Meilisearch can consume.
It is important to prevent Meilisearch from using all available memory during indexing. If that happens, there are two negative consequences:
1. Meilisearch may be killed by the OS for over-consuming RAM
2. Search performance may decrease while the indexer is processing an update
Memory overconsumption can still happen in two cases:
1. When letting Meilisearch automatically set the maximum amount of memory used during indexing, `sysinfo` may not be able to calculate the amount of available RAM for certain OSes. Meilisearch still makes an educated estimate and adapts its behavior based on that, but crashes may still happen in this case. [Follow this link for an exhaustive list of OSes supported by `sysinfo`](https://docs.rs/sysinfo/0.20.0/sysinfo/#supported-oses)
2. Lower-end machines might struggle when processing huge datasets. Splitting your data payload into smaller batches can help in this case. [For more information, consult the section below](#memory-crashes)
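Splitting a payload into smaller batches can be sketched as follows. The batch size of 10,000 is an illustrative assumption, not a fixed recommendation; tune it to your dataset and machine:

```python theme={null}
# Sketch: split a large document payload into smaller batches before
# sending it to Meilisearch, reducing peak memory use during indexing.

def batched(documents: list, batch_size: int = 10_000):
    """Yield successive slices of `documents` of at most `batch_size` items."""
    for start in range(0, len(documents), batch_size):
        yield documents[start:start + batch_size]

# Example payload: each batch would be sent as a separate
# "add documents" request instead of one large payload.
documents = [{"id": i, "title": f"Document {i}"} for i in range(25_000)]
batches = list(batched(documents))
print(len(batches))      # number of requests to send
print(len(batches[-1]))  # the last batch may be smaller
```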
## Multi-threading
In machines with multi-core processors, the indexer avoids using more than half of the available processing units. For example, if your machine has twelve cores, the indexer will try to use six of them at most. This ensures Meilisearch is always ready to perform searches, even while you are updating an index.
You can override Meilisearch's default threading limit by using the [`--max-indexing-threads`](/learn/self_hosted/configure_meilisearch_at_launch#max-indexing-threads) instance option. Allowing Meilisearch to use all processor cores for indexing might negatively impact your users' search experience.
Multi-threading is unfortunately not possible on machines with only one processor core.
## Memory crashes
In some cases, the OS will interrupt Meilisearch and stop all its processes. Most of these crashes happen during indexing and are a result of a machine running out of RAM. This means your computer does not have enough memory to process your dataset.
Meilisearch is aware of this issue and actively trying to resolve it. If you are struggling with memory-related crashes, consider:
* Adding new documents in smaller batches
* Increasing your machine's RAM
* [Following indexing best practices](/learn/indexing/indexing_best_practices)
# Rename an index
Source: https://www.meilisearch.com/docs/learn/indexing/rename_an_index
Use the PATCH endpoint of the /indexes route to rename an index
This guide shows you how to change the name of an index.
## Requirements
* A Meilisearch project with at least one index
* A command-line terminal
## Choose the target index and its new name
Decide which index you want to rename and keep note of its `uid`. This guide changes the name of an index called `INDEX_A`.
Also choose the new name you wish to assign the index. This guide uses `INDEX_B` for the new name of the index.
## Query the `/indexes/{index_uid}` route
Send a `PATCH` request targeting the index you want to rename:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/INDEX_A' \
-H 'Content-Type: application/json' \
--data-binary '{ "uid": "INDEX_B" }'
```
```python Python theme={null}
client.index("INDEX_A").update(new_uid="INDEX_B")
```
```java Java theme={null}
client.updateIndex("indexA", null, "indexB");
```
```ruby Ruby theme={null}
client.index('indexA').update(uid: 'indexB')
```
Replace `INDEX_A` with the current name of your index, and `INDEX_B` with its new name.
# Tokenization
Source: https://www.meilisearch.com/docs/learn/indexing/tokenization
Tokenization is the process of taking a sentence or phrase and splitting it into smaller units of language. It is a crucial procedure when indexing documents.
**Tokenization** is the act of taking a sentence or phrase and splitting it into smaller units of language, called tokens. It is the first step of document indexing in the Meilisearch engine, and is a critical factor in the quality of search results.
Breaking sentences into smaller chunks requires understanding where one word ends and another begins, making tokenization a highly complex and language-dependent task. Meilisearch's solution to this problem is a **modular tokenizer** that follows different processes, called **pipelines**, based on the language it detects.
This allows Meilisearch to function in several different languages with zero setup.
## Deep dive: The Meilisearch tokenizer
When you add documents to a Meilisearch index, the tokenization process is handled by an abstract interface called the tokenizer. The tokenizer is responsible for splitting each field by writing system (for example, Latin alphabet, Chinese hanzi). It then applies the corresponding pipeline to each part of each document field.
We can break down the tokenization process like so:
1. Crawl the document(s), splitting each field by script
2. Go back over the documents part-by-part, running the corresponding tokenization pipeline, if it exists
Pipelines include many language-specific operations. Currently, we have a number of pipelines, including a default pipeline for languages that use whitespace to separate words, and dedicated pipelines for Chinese, Japanese, Hebrew, Thai, and Khmer.
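The two-phase process above can be illustrated with a deliberately simplified Python sketch. It is not how charabia actually works (real script detection and CJK segmentation are far more sophisticated); it only shows the split-by-script-then-pipeline idea:

```python theme={null}
# Simplified sketch of the tokenizer's phases: split a field into runs
# that share a writing system, then apply a per-script pipeline (here,
# just a whitespace split for Latin runs).

def script_of(char: str) -> str:
    """Crude script detection: CJK ideographs vs everything else."""
    return "Han" if "\u4e00" <= char <= "\u9fff" else "Latin"

def split_by_script(text: str) -> list:
    """Group consecutive characters belonging to the same script."""
    runs = []
    for char in text:
        script = script_of(char)
        if runs and runs[-1][0] == script:
            runs[-1] = (script, runs[-1][1] + char)
        else:
            runs.append((script, char))
    return runs

def tokenize(text: str) -> list:
    """Apply a per-script pipeline to each run."""
    tokens = []
    for script, run in split_by_script(text):
        if script == "Latin":
            tokens.extend(run.split())
        else:
            tokens.append(run)  # a real pipeline would segment CJK text
    return tokens

print(tokenize("hello 世界 search"))
```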
For more details, check out the [tokenizer contribution guide](https://github.com/meilisearch/charabia).
# Implement sharding with remote federated search
Source: https://www.meilisearch.com/docs/learn/multi_search/implement_sharding
This guide walks you through implementing a sharding strategy by activating the `/network` route, configuring the network object, and performing remote federated searches.
Sharding is the process of splitting an index containing many documents into multiple smaller indexes, often called shards. This horizontal scaling technique is useful when handling large databases. In Meilisearch, the best way to implement a sharding strategy is to use remote federated search.
This guide walks you through activating the `/network` route, configuring the network object, and performing remote federated searches.
Sharding is an Enterprise Edition feature. You are free to use it for evaluation purposes. Please [reach out to us](mailto:sales@meilisearch.com) before using it in production.
## Configuring multiple instances
To minimize issues and limit unexpected behavior, instance, network, and index configuration should be identical for all shards. This guide describes the individual steps you must take on a single instance and assumes you will replicate them across all instances.
## Prerequisites
* Multiple Meilisearch projects (instances) running Meilisearch >=v1.19
## Activate the `/network` endpoint
### Meilisearch Cloud
If you are using Meilisearch Cloud, contact support to enable this feature in your projects.
### Self-hosting
Use the `/experimental-features` route to enable `network`:
```sh theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/experimental-features/' \
-H 'Content-Type: application/json' \
--data-binary '{
"network": true
}'
```
Meilisearch should respond immediately, confirming the route is now accessible. Repeat this process for all instances.
## Configuring the network object
Next, you must configure the network object. It consists of the following fields:
* `remotes`: defines a list with the required information to access each remote instance
* `self`: specifies which of the configured `remotes` corresponds to the current instance
* `sharding`: whether to enable sharding across the configured remotes
### Setting up the list of remotes
Use the `/network` route to configure the `remotes` field of the network object. `remotes` should be an object containing one or more objects. Each one of the nested objects should consist of the name of each instance, associated with its URL and an API key with search permission:
```sh theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/network' \
-H 'Content-Type: application/json' \
--data-binary '{
"remotes": {
"REMOTE_NAME_1": {
"url": "INSTANCE_URL_1",
"searchApiKey": "SEARCH_API_KEY_1"
},
"REMOTE_NAME_2": {
"url": "INSTANCE_URL_2",
"searchApiKey": "SEARCH_API_KEY_2"
},
"REMOTE_NAME_3": {
"url": "INSTANCE_URL_3",
"searchApiKey": "SEARCH_API_KEY_3"
},
…
}
}'
```
Configure the entire set of remote instances in your sharded database, making sure to send the same remotes to each instance.
### Specify the name of the current instance
Now that all instances share the same list of remotes, set the `self` field to specify which of the remotes corresponds to the current instance:
```sh theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/network' \
-H 'Content-Type: application/json' \
--data-binary '{
"self": "REMOTE_NAME_1"
}'
```
Meilisearch processes searches targeting the remote that corresponds to `self` locally, instead of making a remote request.
### Enabling sharding
Finally, enable automatic sharding of documents on all instances:
```sh theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/network' \
-H 'Content-Type: application/json' \
--data-binary '{
"sharding": true
}'
```
### Adding or removing an instance
Changing the topology of the network involves moving some documents from one instance to another, depending on your hashing scheme.
As Meilisearch does not provide atomicity across multiple instances, you will need to either:
1. accept search downtime while migrating documents
2. accept some documents will not appear in search results during the migration
3. accept some duplicate documents may appear in search results during the migration
#### Reducing downtime
If your disk space allows, you can reduce the downtime by applying the following algorithm:
1. Create a new temporary index in each remote instance
2. Compute the new instance for each document
3. Send the documents to the temporary index of their new instance
4. Once Meilisearch has copied all documents to their destination instance, swap the temporary index with the previously used index
5. Delete the temporary index after the swap
6. Update network configuration and search queries across all instances
## Create indexes
Create the same empty indexes with the same settings on all instances.
Keeping the settings and indexes in sync is important to avoid errors and unexpected behavior, though not strictly required.
## Add documents
Pick a single instance to send all your documents to. Documents will be replicated to the other instances.
Each instance will index the documents it is responsible for and ignore the others.
You *may* send the same document to multiple instances: the task will be replicated to all instances, and only the instance responsible for the document will index it.
Similarly, you may send any future versions of any document to the instance you picked, and only the correct instance will process that document.
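This behavior can be illustrated with a toy routing function. Meilisearch's actual hashing scheme is internal and may differ; this sketch only shows why sending a document to any instance is safe: a deterministic hash of the primary key always selects the same responsible remote, no matter where the hash is computed.

```python theme={null}
# Illustrative sketch of routing documents to shards by hashing their
# primary key. Meilisearch handles this internally once sharding is
# enabled; the remote names below reuse the placeholders from this guide.
import hashlib

REMOTES = ["REMOTE_NAME_1", "REMOTE_NAME_2", "REMOTE_NAME_3"]

def responsible_remote(primary_key: str, remotes: list) -> str:
    """Deterministically map a document's primary key to one remote."""
    digest = hashlib.sha256(primary_key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(remotes)
    return remotes[index]

# The mapping is stable: every instance computes the same answer,
# so only one instance ever considers itself responsible for "doc-42".
print(responsible_remote("doc-42", REMOTES))
```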
### Updating index settings
Changing settings in a sharded database is not fundamentally different from changing settings on a single Meilisearch instance. If the update enables a feature, such as setting filterable attributes, wait until all changes have been processed before using the `filter` search parameter in a query. Likewise, if an update disables a feature, first remove it from your search requests, then update your settings.
## Perform a search
Send your federated search request containing one query per instance:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/multi-search' \
-H 'Content-Type: application/json' \
--data-binary '{
"federation": {},
"queries": [
{
"indexUid": "movies",
"q": "batman",
"federationOptions": {
"remote": "ms-00"
}
},
{
"indexUid": "movies",
"q": "batman",
"federationOptions": {
"remote": "ms-01"
}
}
  ]
}'
```
If all instances share the same network configuration, you can send the search request to any instance. Having `"remote": "ms-00"` appear in the list of queries on the instance of that name will not cause an actual proxy search thanks to `network.self`.
# Differences between multi-search and federated search
Source: https://www.meilisearch.com/docs/learn/multi_search/multi_search_vs_federated_search
This article defines multi-search and federated search and then describes the different uses of each.
This article defines multi-search and federated search and then describes the different uses of each.
## What is multi-search?
Multi-search, also called multi-index search, is a search operation that makes multiple queries at the same time. These queries may target different indexes. Meilisearch then returns a separate list of results for each query. Use the `/multi-search` route to perform multi-searches.
Multi-search favors discovery scenarios, where users might not have a clear idea of what they need and searches might have many valid results.
## What is federated search?
Federated search is a type of multi-index search. This operation also makes multiple search requests at the same time, but returns a single list with the most relevant results from all queries. Use the `/multi-search` route and specify a non-null value for `federation` to perform a federated search.
Federated search favors scenarios where users have a clear idea of what they need and expect a single best top result.
## Use cases
Because multi-search groups results by query, it is often useful when the origin and type of document contain information relevant to your users. For example, a person searching for `shygirl` in a music streaming application is likely to appreciate seeing separate results for matching artists, albums, and individual tracks.
Federated search is a better approach when the source of the information is not relevant to your users. For example, a person searching for a client's email in a CRM application is unlikely to care whether this email comes from chat logs, support tickets, or other data sources.
# Using multi-search to perform a federated search
Source: https://www.meilisearch.com/docs/learn/multi_search/performing_federated_search
In this tutorial you will see how to perform a query searching multiple indexes at the same time to obtain a single list of results.
Meilisearch allows you to make multiple search requests at the same time with the `/multi-search` endpoint. A federated search is a multi-search that returns results from multiple queries in a single list.
In this tutorial you will see how to create separate indexes containing different types of data from a CRM application. You will then perform a query searching all these indexes at the same time to obtain a single list of results.
## Requirements
* A running Meilisearch project
* A command-line console
## Create three indexes
Download the following datasets: `crm-chats.json`, `crm-profiles.json`, and `crm-tickets.json` containing data from a fictional CRM application.
Add the datasets to Meilisearch and create three separate indexes, `profiles`, `chats`, and `tickets`:
```sh theme={null}
curl -X POST 'MEILISEARCH_URL/indexes/profiles/documents' -H 'Content-Type: application/json' --data-binary @crm-profiles.json &&
curl -X POST 'MEILISEARCH_URL/indexes/chats/documents' -H 'Content-Type: application/json' --data-binary @crm-chats.json &&
curl -X POST 'MEILISEARCH_URL/indexes/tickets/documents' -H 'Content-Type: application/json' --data-binary @crm-tickets.json
```
[Use the tasks endpoint](/learn/async/working_with_tasks) to check the indexing status. Once Meilisearch has successfully indexed all three datasets, you are ready to perform a federated search.
## Perform a federated search
When you are looking for Natasha Nguyen's email address in your CRM application, you may not know whether you will find it in a chat log, among the existing customer profiles, or in a recent support ticket. In this situation, you can use federated search to search across all possible sources and receive a single list of results.
Use the `/multi-search` endpoint with the `federation` parameter to query the three indexes simultaneously:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/multi-search' \
-H 'Content-Type: application/json' \
--data-binary '{
"federation": {},
"queries": [
{
"indexUid": "chats",
"q": "natasha"
},
{
"indexUid": "profiles",
"q": "natasha"
},
{
"indexUid": "tickets",
"q": "natasha"
}
]
}'
```
Meilisearch should respond with a single list of search results:
```json theme={null}
{
"hits": [
{
"id": 0,
"client_name": "Natasha Nguyen",
"message": "My email is natasha.nguyen@example.com",
"time": 1727349362,
"_federation": {
"indexUid": "chats",
"queriesPosition": 0
}
},
…
],
"processingTimeMs": 0,
"limit": 20,
"offset": 0,
"estimatedTotalHits": 3,
"semanticHitCount": 0
}
```
## Promote results from a specific index
Since this is a CRM application, users have profiles with their preferred contact information. If you want to search for Riccardo Rotondo's preferred email, you can boost documents in the `profiles` index.
Use the `weight` property of the `federation` parameter to boost results coming from a specific query:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/multi-search' \
-H 'Content-Type: application/json' \
--data-binary '{
"federation": {},
"queries": [
{
"indexUid": "chats",
"q": "rotondo"
},
{
"indexUid": "profiles",
"q": "rotondo",
"federationOptions": { "weight": 1.2 }
},
{
"indexUid": "tickets",
"q": "rotondo"
}
]
}'
```
This request will lead to results from the query targeting `profiles` ranking higher than documents from other queries:
```json theme={null}
{
"hits": [
{
"id": 1,
"name": "Riccardo Rotondo",
"email": "riccardo.rotondo@example.com",
"_federation": {
"indexUid": "profiles",
"queriesPosition": 1
}
},
…
],
"processingTimeMs": 0,
"limit": 20,
"offset": 0,
"estimatedTotalHits": 3,
"semanticHitCount": 0
}
```
## Conclusion
You have created three indexes, then performed a federated multi-index search to receive all results in a single list. You then used `weight` to boost results from the index most likely to contain the information you wanted.
# Performing personalized search queries
Source: https://www.meilisearch.com/docs/learn/personalization/making_personalized_search_queries
Search personalization uses context about the person performing the search to provide results more relevant to that specific user. This article guides you through configuring and performing personalized search queries.
## Requirements
* A Meilisearch project
* Self-hosted Meilisearch users: a Cohere API key
## Activate personalized search
### Cloud users
Open a support ticket requesting Meilisearch to activate search personalization for your project.
### Self-hosted users
Relaunch your instance using the search personalization instance option:
```sh theme={null}
meilisearch --experimental-personalization-api-key="COHERE_API_KEY"
```
## Generating user context
Search personalization requires a description of the user performing the search. Meilisearch does not currently provide automated generation of user context.
You’ll need to **dynamically generate a plain-text user description** for each search request. This should summarize relevant traits, such as:
* Category preferences, like brand or size
* Price sensitivity, like budget-conscious
* Possible use cases, such as fitness and sport
* Other assorted information, such as general interests or location
The re-ranking model is optimized to favor positive signals. For best results, focus on affirmatively stated preferences, behaviors, and affinities, such as "likes the color red" and "prefers cheaper brands" over "dislikes blue" and "is not interested in luxury brands".
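A context-building step might look like the following sketch. The trait names (`preferred_brands`, `budget`, `interests`) and phrasing are illustrative assumptions; the only requirement is a plain-text description emphasizing affirmative signals:

```python theme={null}
# Sketch: assemble a plain-text user context from stored user traits.
# The resulting string is what you would pass as `userContext` in the
# `personalize` search parameter.

def build_user_context(traits: dict) -> str:
    """Join affirmatively stated trait phrases into one description."""
    parts = []
    if traits.get("preferred_brands"):
        parts.append("prefers brands like " + ", ".join(traits["preferred_brands"]))
    if traits.get("budget"):
        parts.append(f"is {traits['budget']}")
    if traits.get("interests"):
        parts.append("is interested in " + " and ".join(traits["interests"]))
    return "The user " + "; ".join(parts) + "." if parts else "No known preferences."

context = build_user_context({
    "preferred_brands": ["Keychron", "Logitech"],
    "budget": "budget-conscious",
    "interests": ["remote work", "mechanical keyboards"],
})
print(context)
```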
## Perform a personalized search
Once search personalization is active and you have a pipeline in place to generate user profiles, you are ready to perform personalized searches.
Submit a search query and include the `personalize` search parameter. `personalize` must be an object with a single field, `userContext`. Use the description you generated in the previous step as the value for `userContext`:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "wireless keyboard",
"personalize": {
"userContext": "The user prefers compact mechanical keyboards from Keychron or Logitech, with a mid-range budget and quiet keys for remote work."
}
}'
```
# What is search personalization?
Source: https://www.meilisearch.com/docs/learn/personalization/search_personalization
Search personalization lets you boost search results based on user profiles, making results tailored to their behavior.
Search personalization uses AI technology to re-rank search results at query time based on the user context you provide.
## Why use search personalization?
Not everyone searches the same way. Personalizing search results allows you to adapt relevance to each user’s preferences, behavior, or intent.
For example, in an e-commerce site, someone who often shops for sportswear might see sneakers and activewear ranked higher when searching for “shoes”. A user interested in luxury fashion might see designer heels or leather boots first instead.
## How does search personalization work?
1. First generate a plain-text description of the user: `"The user prefers genres like Documentary, Music, Drama"`
2. When the user performs a search, you submit their description together with their search request
3. Meilisearch retrieves documents based on the user's query as usual
4. Finally, the re-ranking model reorders results based on the user context you provided in the first step
## How to enable search personalization in Meilisearch?
Search personalization is an experimental feature.
If you are a Meilisearch Cloud user, contact support to activate it for your projects.
If you are self-hosting Meilisearch, relaunch it using the [search personalization instance option](/learn/self_hosted/configure_meilisearch_at_launch#search-personalization).
Consult the [search personalization guide](/learn/personalization/making_personalized_search_queries) for more information on how to implement it in your application.
# Attribute ranking order
Source: https://www.meilisearch.com/docs/learn/relevancy/attribute_ranking_order
This article explains how the order of attributes in the `searchableAttributes` array impacts search result relevancy.
In most datasets, some fields are more relevant to search than others. A `title`, for example, might be more meaningful to a movie search than its `overview` or its `release_date`.
When `searchableAttributes` is set to its default value, `["*"]`, all fields carry the same weight.
If you manually configure [the searchable attributes list](/learn/relevancy/displayed_searchable_attributes#the-searchableattributes-list), attributes that appear early in the array are more important when calculating search result relevancy.
## Example
```json theme={null}
[
"title",
"overview",
"release_date"
]
```
With the above attribute ranking order, matching words found in the `title` field would have a higher impact on relevancy than the same words found in `overview` or `release_date`. If you searched for "1984", for example, results like Michael Radford's film "1984" would be ranked higher than movies released in the year 1984.
## Attribute ranking order and nested objects
By default, nested fields share the same weight as their parent attribute. Use dot notation to set different weights for attributes in nested objects:
```json theme={null}
[
"title",
"review.critic",
"overview",
"review.user"
]
```
With the above ranking order, `review.critic` becomes more important than its sibling `review.user` when calculating a document's ranking score.
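A quick way to see which weight applies to a nested field is to flatten documents into dot-notation paths. A minimal Python sketch (illustrative only, not how Meilisearch stores documents; the `movie` document is a made-up example):

```python
# Sketch: flattening a nested document into dot-notation paths, the same
# notation used to assign per-field weights in `searchableAttributes`.
def flatten(doc, prefix=""):
    """Yield (dot.path, value) pairs for every leaf field in a document."""
    for key, value in doc.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            yield from flatten(value, path)
        else:
            yield path, value

movie = {"title": "Dune", "review": {"critic": "superb", "user": "fun"}}
print(dict(flatten(movie)))
# Paths like "review.critic" and "review.user" can then be ranked independently.
```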
The `attributeRank` and `wordPosition` rules' positions in [`rankingRules`](/learn/relevancy/ranking_rules) determine how the results are sorted. This means that **if `attributeRank` is at the bottom of the ranking rules list, it will have almost no impact on your search results.**
The legacy `attribute` rule combines both `attributeRank` and `wordPosition`. If you use `attribute`, its position determines the impact of both attribute ranking order and position within attributes.
# Custom ranking rules
Source: https://www.meilisearch.com/docs/learn/relevancy/custom_ranking_rules
Custom ranking rules promote certain documents over other search results that are otherwise equally relevant.
There are two types of ranking rules in Meilisearch: [built-in ranking rules](/learn/relevancy/ranking_rules) and custom ranking rules. This article describes the main aspects of using and configuring custom ranking rules.
## Ascending and descending sorting rules
Meilisearch supports two types of custom rules: one for ascending sort and one for descending sort.
To add a custom ranking rule, you have to communicate the attribute name followed by a colon (`:`) and either `asc` for ascending order or `desc` for descending order.
* To apply an **ascending sort** (results sorted by increasing value of the attribute): `attribute_name:asc`
* To apply a **descending sort** (results sorted by decreasing value of the attribute): `attribute_name:desc`
**The attribute must have either a numeric or a string value** in all of the documents contained in that index.
You can add this rule to the existing list of ranking rules using the [update settings endpoint](/reference/api/settings/update-all-settings) or [update ranking rules endpoint](/reference/api/settings/update-rankingrules).
## How to use custom ranking rules
Custom ranking rules sort results in lexicographical order. For example, in a descending sort, `Ryu` will rank higher than `Elena`, which in turn ranks higher than `11`, since digits sort before letters.
Since this operation does not take into consideration document relevancy, in the majority of cases you should place custom ranking rules after the built-in ranking rules. This ensures that results are first sorted by relevancy, and the lexicographical sorting takes place only when two or more documents share the same ranking score.
Setting a custom ranking rule at a high position may result in a degraded search experience, since users will see documents in alphanumerical order instead of sorted by relevance.
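Python's built-in string ordering follows the same lexicographical rules and can be used to preview how attribute values will sort:

```python
# Sketch: lexicographical (string) ordering as applied by asc/desc
# custom ranking rules. Digits sort before letters.
values = ["Elena", "Ryu", "11"]

ascending = sorted(values)
descending = sorted(values, reverse=True)

print(ascending)   # ['11', 'Elena', 'Ryu']
print(descending)  # ['Ryu', 'Elena', '11']
```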
## Example
Suppose you have a movie dataset. The documents contain the fields `release_date` with a timestamp as value, and `movie_ranking`, an integer that represents its ranking.
The following example creates a rule that makes older movies more relevant than recent ones. A movie released in 1999 will appear before a movie released in 2020.
```
release_date:asc
```
The following example will create a rule that makes movies with a good rank more relevant than movies with a lower rank. Movies with a higher ranking will appear first.
```
movie_ranking:desc
```
The following array includes all built-in ranking rules and places the custom rules at the bottom of the processing order:
```json theme={null}
[
"words",
"typo",
"proximity",
"attributeRank",
"sort",
"wordPosition",
"exactness",
"release_date:asc",
"movie_ranking:desc"
]
```
## Sorting at search time and custom ranking rules
Meilisearch allows users to define [sorting order at query time](/learn/filtering_and_sorting/sort_search_results) by using the [`sort` search parameter](/reference/api/search/search-with-post#body-sort). There is some overlap between sorting and custom ranking rules, but the two do have different uses.
In general, `sort` will be most useful when you want to allow users to define what type of results they want to see first. A good use-case for `sort` is creating a webshop interface where customers can sort products by descending or ascending product price.
Custom ranking rules, instead, are always active once configured and are useful when you want to promote certain types of results. A good use-case for custom ranking rules is ensuring discounted products in a webshop always feature among the top results.
Meilisearch does not offer native support for promoting, pinning, and boosting specific documents so they are displayed more prominently than other search results. Consult these Meilisearch blog articles for workarounds on [implementing promoted search results with React InstantSearch](https://blog.meilisearch.com/promoted-search-results-with-react-instantsearch) and [document boosting](https://blog.meilisearch.com/document-boosting).
# Displayed and searchable attributes
Source: https://www.meilisearch.com/docs/learn/relevancy/displayed_searchable_attributes
Displayed and searchable attributes define what data Meilisearch returns after a successful query and which fields Meilisearch takes into account when searching. Knowing how to configure them can help improve your application's performance.
By default, whenever a document is added to Meilisearch, all new attributes found in it are automatically added to two lists:
* [`displayedAttributes`](/learn/relevancy/displayed_searchable_attributes#displayed-fields): Attributes whose values are displayed in returned documents
* [`searchableAttributes`](/learn/relevancy/displayed_searchable_attributes#the-searchableattributes-list): Attributes whose values are searched for matching query words
By default, every field in a document is **displayed** and **searchable**. These properties can be modified in the [settings](/reference/api/settings/list-all-settings).
## Displayed fields
The fields whose attributes are added to the [`displayedAttributes` list](/reference/api/settings/get-displayedattributes) are **displayed in each matching document**.
Documents returned upon search contain only displayed fields. If a field attribute is not in the displayed-attribute list, the field won't be added to the returned documents.
**By default, all field attributes are set as displayed**.
### Example
Suppose you manage a database that contains information about movies. By adding the following settings, documents returned upon search will contain the fields `title`, `overview`, `genres`, and `release_date`.
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/movies/settings/displayed-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"title",
"overview",
"genres",
"release_date"
]'
```
```javascript JS theme={null}
client.index('movies').updateDisplayedAttributes([
'title',
'overview',
'genres',
'release_date',
]
)
```
```python Python theme={null}
client.index('movies').update_displayed_attributes([
'title',
'overview',
'genres',
'release_date'
])
```
```php PHP theme={null}
$client->index('movies')->updateDisplayedAttributes([
'title',
'overview',
'genres',
'release_date'
]);
```
```java Java theme={null}
String[] attributes = {"title", "overview", "genres", "release_date"};
client.index("movies").updateDisplayedAttributesSettings(attributes);
```
```ruby Ruby theme={null}
client.index('movies').update_settings({
displayed_attributes: [
'title',
'overview',
'genres',
'release_date'
]
})
```
```go Go theme={null}
displayedAttributes := []string{
"title",
"overview",
"genres",
"release_date",
}
client.Index("movies").UpdateDisplayedAttributes(&displayedAttributes)
```
```csharp C# theme={null}
await client.Index("movies").UpdateDisplayedAttributesAsync(new[]
{
"title",
"overview",
"genres",
"release_date"
});
```
```rust Rust theme={null}
let displayed_attributes = [
"title",
"overview",
"genres",
"release_date"
];
let task: TaskInfo = client
.index("movies")
.set_displayed_attributes(&displayed_attributes)
.await
.unwrap();
```
```swift Swift theme={null}
let displayedAttributes: [String] = [
"title",
"overview",
"genres",
"release_date"
]
client.index("movies").updateDisplayedAttributes(displayedAttributes) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movies').updateDisplayedAttributes([
'title',
'overview',
'genres',
'release_date',
]);
```
## Searchable fields
A field can either be **searchable** or **non-searchable**.
When you perform a search, all searchable fields are checked for matching query words and used to assess document relevancy, while non-searchable fields are ignored entirely. **By default, all fields are searchable.**
Non-searchable fields are most useful for internal information that's not relevant to the search experience, such as URLs, sales numbers, or ratings used exclusively for sorting results.
Even if you make a field non-searchable, it will remain [stored in the database](#data-storing) and can be made searchable again at a later time.
### The `searchableAttributes` list
Meilisearch uses an ordered list to determine which attributes are searchable. The order in which attributes appear in this list also determines their [impact on relevancy](/learn/relevancy/attribute_ranking_order), from most impactful to least.
In other words, the `searchableAttributes` list serves two purposes:
1. It designates the fields that are searchable
2. It dictates the [attribute ranking order](/learn/relevancy/attribute_ranking_order)
There are two possible modes for the `searchableAttributes` list.
#### Default: Automatic
**By default, all attributes are automatically added to the `searchableAttributes` list in their order of appearance.** This means that the initial order will be based on the order of attributes in the first document indexed, with each new attribute found in subsequent documents added at the end of this list.
This default behavior is indicated by a `searchableAttributes` value of `["*"]`. To verify the current value of your `searchableAttributes` list, use the [get searchable attributes endpoint](/reference/api/settings/get-searchableattributes).
If you'd like to restore your searchable attributes list to this default behavior, [set `searchableAttributes` to an empty array `[]`](/reference/api/settings/update-searchableattributes) or use the [reset searchable attributes endpoint](/reference/api/settings/delete-searchableattributes).
#### Manual
You may want to make some attributes non-searchable, or change the [attribute ranking order](/learn/relevancy/attribute_ranking_order) after documents have been indexed. To do so, place the attributes in the desired order and send the updated list using the [update searchable attributes endpoint](/reference/api/settings/update-searchableattributes).
After manually updating the `searchableAttributes` list, **subsequent new attributes will no longer be automatically added** unless the settings are [reset](/reference/api/settings/delete-searchableattributes).
Due to an implementation bug, manually updating `searchableAttributes` will change the displayed order of document fields in the JSON response. This behavior is inconsistent and will be fixed in a future release.
#### Example
Suppose that you manage a database of movies with the following fields: `id`, `overview`, `genres`, `title`, `release_date`. These fields all contain useful information. However, **some are more useful to search than others**. To make the `id` and `release_date` fields non-searchable and re-order the remaining fields by importance, you might update the searchable attributes list in the following way.
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/movies/settings/searchable-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"title",
"overview",
"genres"
]'
```
```javascript JS theme={null}
client.index('movies').updateSearchableAttributes([
'title',
'overview',
'genres',
]
)
```
```python Python theme={null}
client.index('movies').update_searchable_attributes([
'title',
'overview',
'genres'
])
```
```php PHP theme={null}
$client->index('movies')->updateSearchableAttributes([
'title',
'overview',
'genres'
]);
```
```java Java theme={null}
String[] attributes = {"title", "overview", "genres"};
client.index("movies").updateSearchableAttributesSettings(attributes);
```
```ruby Ruby theme={null}
client.index('movies').update_searchable_attributes([
'title',
'overview',
'genres'
])
```
```go Go theme={null}
searchableAttributes := []string{
"title",
"overview",
"genres",
}
client.Index("movies").UpdateSearchableAttributes(&searchableAttributes)
```
```csharp C# theme={null}
await client.Index("movies").UpdateSearchableAttributesAsync(new[]
{
"title",
"overview",
"genres"
});
```
```rust Rust theme={null}
let searchable_attributes = [
"title",
"overview",
"genres"
];
let task: TaskInfo = client
.index("movies")
.set_searchable_attributes(&searchable_attributes)
.await
.unwrap();
```
```swift Swift theme={null}
let searchableAttributes: [String] = [
"title",
"overview",
"genres"
]
client.index("movies").updateSearchableAttributes(searchableAttributes) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client
.index('movies')
.updateSearchableAttributes(['title', 'overview', 'genres']);
```
### Customizing attributes to search on at search time
By default, all queries search through all attributes in the `searchableAttributes` list. Use [the `attributesToSearchOn` search parameter](/reference/api/search/search-with-post#body-attributes-to-search-on) to restrict specific queries to a subset of your index's `searchableAttributes`.
## Data storing
All fields are stored in the database. **This behavior cannot be changed**.
Thus, even if a field is missing from both the `displayedAttributes` list and the `searchableAttributes` list, **it is still stored in the database** and can be added to either or both lists at any time.
# Distinct attribute
Source: https://www.meilisearch.com/docs/learn/relevancy/distinct_attribute
The distinct attribute is a field that prevents Meilisearch from returning several similar documents. It is often used in e-commerce datasets where many documents are variations of the same item.
The distinct attribute is a special, user-designated field. It is most commonly used to prevent Meilisearch from returning a set of several similar documents, instead forcing it to return only one.
You may set a distinct attribute in two ways: using the `distinctAttribute` index setting during configuration, or the `distinct` search parameter at search time.
## Setting a distinct attribute during configuration
`distinctAttribute` is an index setting that configures a default distinct attribute Meilisearch applies to all searches and facet retrievals in that index.
There can be only one `distinctAttribute` per index. Trying to set multiple fields as a `distinctAttribute` will return an error.
The value of a field configured as a distinct attribute will always be unique among returned documents. This means **there will never be more than one occurrence of the same value** in the distinct attribute field among the returned documents.
When multiple documents have the same value for the distinct attribute, Meilisearch returns only the highest-ranked result after applying [ranking rules](/learn/relevancy/ranking_rules). If two or more documents are equivalent in terms of ranking, Meilisearch returns the first result according to its `internal_id`.
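Conceptually, this deduplication is a first-wins pass over hits already sorted by the ranking rules. A simplified Python model (not Meilisearch's actual implementation; the documents are made-up examples):

```python
# Sketch: keep only the highest-ranked hit per distinct-attribute value.
# Hits are assumed to be pre-sorted by the ranking rules, best first.
def deduplicate(hits, distinct_field):
    seen = set()
    result = []
    for hit in hits:
        key = hit[distinct_field]
        if key not in seen:
            seen.add(key)
            result.append(hit)
    return result

hits = [
    {"id": 1, "color": "brown", "product_id": "123456"},
    {"id": 2, "color": "black", "product_id": "123456"},
    {"id": 4, "color": "red", "product_id": "654321"},
]
deduped = deduplicate(hits, "product_id")
print([h["id"] for h in deduped])  # one hit per product_id
```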
## Example
Suppose you have an e-commerce dataset. For an index that contains information about jackets, you may have several identical items with minor variations such as color or size.
As shown below, this dataset contains three documents representing different versions of a Lee jeans leather jacket. One of the jackets is brown, one is black, and the last one is blue.
```json theme={null}
[
{
"id": 1,
"description": "Leather jacket",
"brand": "Lee jeans",
"color": "brown",
"product_id": "123456"
},
{
"id": 2,
"description": "Leather jacket",
"brand": "Lee jeans",
"color": "black",
"product_id": "123456"
},
{
"id": 3,
"description": "Leather jacket",
"brand": "Lee jeans",
"color": "blue",
"product_id": "123456"
}
]
```
By default, a search for `lee leather jacket` would return all three documents. This might not be desired, since displaying nearly identical variations of the same item can make results appear cluttered.
In this case, you may want to return only one document with the `product_id` corresponding to this Lee jeans leather jacket. To do so, you could set `product_id` as the `distinctAttribute`.
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/jackets/settings/distinct-attribute' \
-H 'Content-Type: application/json' \
--data-binary '"product_id"'
```
```javascript JS theme={null}
client.index('jackets').updateDistinctAttribute('product_id')
```
```python Python theme={null}
client.index('jackets').update_distinct_attribute('product_id')
```
```php PHP theme={null}
$client->index('jackets')->updateDistinctAttribute('product_id');
```
```java Java theme={null}
client.index("jackets").updateDistinctAttributeSettings("product_id");
```
```ruby Ruby theme={null}
client.index('jackets').update_distinct_attribute('product_id')
```
```go Go theme={null}
client.Index("jackets").UpdateDistinctAttribute("product_id")
```
```csharp C# theme={null}
await client.Index("jackets").UpdateDistinctAttributeAsync("product_id");
```
```rust Rust theme={null}
let task: TaskInfo = client
.index("jackets")
.set_distinct_attribute("product_id")
.await
.unwrap();
```
```swift Swift theme={null}
client.index("jackets").updateDistinctAttribute("product_id") { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('jackets').updateDistinctAttribute('product_id');
```
By setting `distinctAttribute` to `product_id`, search requests **will never return more than one document with the same `product_id`**.
After setting the distinct attribute as shown above, querying for `lee leather jacket` would only return the first document found. The response would look like this:
```json theme={null}
{
"hits": [
{
"id": 1,
"description": "Leather jacket",
"brand": "Lee jeans",
"color": "brown",
"product_id": "123456"
}
],
"offset": 0,
"limit": 20,
"estimatedTotalHits": 1,
"processingTimeMs": 0,
"query": "lee leather jacket"
}
```
For more in-depth information on distinct attribute, consult the [API reference](/reference/api/settings/get-distinctattribute).
## Setting a distinct attribute at search time
`distinct` is a search parameter you may add to any search query. It allows you to selectively use distinct attributes depending on the context. `distinct` takes precedence over `distinctAttribute`.
To use an attribute with `distinct`, first add it to the `filterableAttributes` list:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/products/settings/filterable-attributes' \
-H 'Content-Type: application/json' \
--data-binary '[
"product_id",
"sku",
"url"
]'
```
```javascript JS theme={null}
client.index('products').updateFilterableAttributes(['product_id', 'sku', 'url'])
```
```python Python theme={null}
client.index('products').update_filterable_attributes(['product_id', 'sku', 'url'])
```
```php PHP theme={null}
$client->index('products')->updateFilterableAttributes(['product_id', 'sku', 'url']);
```
```java Java theme={null}
Settings settings = new Settings();
settings.setFilterableAttributes(new String[] { "product_id", "sku", "url" });
client.index("products").updateSettings(settings);
```
```ruby Ruby theme={null}
client.index('products').update_filterable_attributes([
'product_id',
'sku',
'url'
])
```
```go Go theme={null}
filterableAttributes := []interface{}{
"product_id",
"sku",
"url",
}
client.Index("products").UpdateFilterableAttributes(&filterableAttributes)
```
```csharp C# theme={null}
List<string> attributes = new() { "product_id", "sku", "url" };
TaskInfo result = await client.Index("products").UpdateFilterableAttributesAsync(attributes);
```
```rust Rust theme={null}
let task: TaskInfo = client
.index("products")
.settings()
.set_filterable_attributes(["product_id", "sku", "url"])
.execute()
.await
.unwrap();
```
Then use `distinct` in a search query, specifying one of the configured attributes:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/products/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "white shirt",
"distinct": "sku"
}'
```
```javascript JS theme={null}
client.index('products').search('white shirt', { distinct: 'sku' })
```
```python Python theme={null}
client.index('products').search('white shirt', { 'distinct': 'sku' })
```
```php PHP theme={null}
$client->index('products')->search('white shirt', [
'distinct' => 'sku'
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("white shirt").distinct("sku").build();
client.index("products").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('products').search('white shirt', {
distinct: 'sku'
})
```
```go Go theme={null}
client.Index("products").Search("white shirt", &meilisearch.SearchRequest{
Distinct: "sku",
})
```
```csharp C# theme={null}
var query = new SearchQuery()
{
    Distinct = "sku"
};
await client.Index("products").SearchAsync("white shirt", query);
```
```rust Rust theme={null}
let res = client
.index("products")
.search()
.with_query("white shirt")
.with_distinct("sku")
.execute()
.await
.unwrap();
```
# Built-in ranking rules
Source: https://www.meilisearch.com/docs/learn/relevancy/ranking_rules
Built-in ranking rules are the core of Meilisearch's relevancy calculations.
There are two types of ranking rules in Meilisearch: built-in ranking rules and [custom ranking rules](/learn/relevancy/custom_ranking_rules). This article describes the main aspects of using and configuring built-in ranking rules.
Built-in ranking rules are the core of Meilisearch's relevancy calculations.
## List of built-in ranking rules
Meilisearch contains seven built-in ranking rules in the following order:
```json theme={null}
[
"words",
"typo",
"proximity",
"attributeRank",
"sort",
"wordPosition",
"exactness"
]
```
Depending on your needs, you might want to change this order. To do so, use the [update settings endpoint](/reference/api/settings/update-all-settings) or [update ranking rules endpoint](/reference/api/settings/update-rankingrules).
## 1. Words
Results are sorted by **decreasing number of matched query terms**. Returns documents that contain all query terms first.
To ensure optimal relevancy, **Meilisearch always sorts results as if the `words` ranking rule were present** with a higher priority than the attribute, exactness, typo, and proximity ranking rules. This happens even if `words` has been removed or given a lower priority.
The `words` rule works from right to left. Therefore, the order of the query string impacts the order of results.
For example, if someone were to search `batman dark knight`, the `words` rule would rank documents containing all three terms first, documents containing only `batman` and `dark` second, and documents containing only `batman` third.
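A simplified model of this behavior: count how many query terms each document contains and sort by that count, descending. The documents below are made-up examples, not Meilisearch internals:

```python
# Sketch: rank documents by decreasing number of matched query terms,
# as the `words` rule does (simplified: exact whole-word matching only).
def matched_terms(doc_text, query_terms):
    words = set(doc_text.lower().split())
    return sum(term in words for term in query_terms)

query = ["batman", "dark", "knight"]
docs = [
    "batman returns",
    "the dark knight batman",
    "batman and the dark city",
]
ranked = sorted(docs, key=lambda d: matched_terms(d, query), reverse=True)
print(ranked[0])  # the document matching all three terms comes first
```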
## 2. Typo
Results are sorted by **increasing number of typos**. Returns documents that match query terms with fewer typos first.
## 3. Proximity
Results are sorted by **increasing distance between matched query terms**. Returns documents where query terms occur close together and in the same order as the query string first.
[It is possible to lower the precision of this ranking rule.](/reference/api/settings/update-proximityprecision) This may significantly improve indexing performance. In a minority of use cases, lowering precision may also lead to lower search relevancy for queries using multiple search terms.
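Conceptually, the rule minimizes the smallest distance between the positions of matched terms. A simplified Python sketch (token indices stand in for Meilisearch's internal word positions; the sentence is a made-up example):

```python
# Sketch: smallest distance between two matched terms in a token list,
# the quantity the `proximity` rule favors minimizing (simplified model).
def min_distance(tokens, term_a, term_b):
    """Return the smallest positional gap between two terms, or None."""
    best = None
    positions_a = [i for i, t in enumerate(tokens) if t == term_a]
    positions_b = [i for i, t in enumerate(tokens) if t == term_b]
    for i in positions_a:
        for j in positions_b:
            d = abs(i - j)
            if best is None or d < best:
                best = d
    return best

tokens = "new tales of the creature from the black lagoon".split()
print(min_distance(tokens, "creature", "lagoon"))  # 4
```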
## The legacy `attribute` rule
`attribute` is an older built-in ranking rule equivalent to using both `attributeRank` and `wordPosition` together. When you use `attribute`, Meilisearch first sorts results by the attribute ranking order, then uses the position within attributes as a tiebreaker.
You cannot use `attribute` together with `attributeRank` or `wordPosition`. If you try to configure ranking rules with both, Meilisearch will return an error.
For most use cases, we recommend using `attributeRank` and `wordPosition` separately. This gives you more control over result ordering by allowing you to place other ranking rules (such as `sort` or custom ranking rules) between them.
## 4. Attribute rank
Results are sorted according to the **[attribute ranking order](/learn/relevancy/attribute_ranking_order)**. Returns documents that contain query terms in more important attributes first.
This rule evaluates only the attribute ranking order and does not consider the position of matched words within attributes.
## 5. Sort
Results are sorted **according to parameters decided at query time**. When the `sort` ranking rule is in a higher position, sorting is exhaustive: results will be less relevant but follow the user-defined sorting order more closely. When `sort` is in a lower position, sorting is relevant: results will be very relevant but might not always follow the order defined by the user.
Unlike other ranking rules, `sort` is only active for queries containing the [`sort` search parameter](/reference/api/search/search-with-post#body-sort). If a search request does not contain `sort`, or if its value is invalid, this rule is ignored.
## 6. Word position
Results are sorted by the **position of query terms within the attributes**. Returns documents that contain query terms closer to the beginning of an attribute first.
This rule evaluates only the position of matched words within attributes and does not consider the attribute ranking order.
## 7. Exactness
Results are sorted by **the similarity of the matched words with the query words**. Returns documents that contain exactly the same terms as the ones queried first.
## Examples
### Typo
For the query `vogli`, matched words are ranked by typo count:
* `vogli`: 0 typos
* `volli`: 1 typo
The `typo` rule sorts the results by increasing number of typos on matched query words.
### Proximity
`Creature` is listed before `Mississippi Grind` because of the `proximity` rule. The smallest **distance** between the matching words in `Creature` is smaller than the smallest **distance** between the matching words in `Mississippi Grind`.
The `proximity` rule sorts the results by increasing distance between matched query terms.
### Attribute rank
`If It's Tuesday, This Must Be Belgium` is the first document because the matched word `Belgium` is found in the `title` attribute rather than in `overview`.
The `attributeRank` rule sorts the results by [attribute importance](/learn/relevancy/attribute_ranking_order).
### Exactness
`Knight Moves` is displayed before `Knights of Badassdom` because `Knight` exactly matches the search query, whereas `Knights` differs from it by one letter.
# Ranking score
Source: https://www.meilisearch.com/docs/learn/relevancy/ranking_score
This article explains the `_rankingScore` field and details which index settings influence it.
When using the [`showRankingScore` search parameter](/reference/api/search/search-with-post#body-show-ranking-score), Meilisearch adds a global ranking score field, `_rankingScore`, to each document. The `_rankingScore` is between `0.0` and `1.0`. The higher the ranking score, the more relevant the document.
Ranking rules sort documents either by relevancy (`words`, `typo`, `proximity`, `exactness`, `attributeRank`, `wordPosition`) or by the value of a field (`sort`). Since `sort` doesn't rank documents by relevancy, it does not influence the `_rankingScore`.
A document's ranking score does not change based on the scores of other documents in the same index.
For example, if a document A has a score of `0.5` for a query term, this value remains constant no matter the score of documents B, C, or D.
The table below details all the index settings that can influence the `_rankingScore`. **Unlisted settings do not influence the ranking score.**
| Index setting | Influences if | Rationale |
| :--------------------- | :--------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `searchableAttributes` | The `attributeRank` ranking rule is used | The `attributeRank` ranking rule rates the document depending on the attribute in which the query terms show up. The order is determined by `searchableAttributes` |
| `searchableAttributes` | The `wordPosition` ranking rule is used | The `wordPosition` ranking rule rates the document based on the position of query terms within attributes |
| `rankingRules` | Always | The score is computed by computing the subscore of each ranking rule with a weight that depends on their order |
| `stopWords` | Always | Stop words influence the `words` ranking rule, which is almost always used |
| `synonyms` | Always | Synonyms influence the `words` ranking rule, which is almost always used |
| `typoTolerance` | The `typo` ranking rule is used | Used to compute the maximum number of typos for a query |
# Relevancy
Source: https://www.meilisearch.com/docs/learn/relevancy/relevancy
Relevancy refers to the accuracy of search results. If search results tend to be appropriate for a given query, then they can be considered relevant.
**Relevancy** refers to the accuracy and effectiveness of search results. If search results are almost always appropriate, then they can be considered relevant, and vice versa.
Meilisearch has a number of features for fine-tuning the relevancy of search results. The most important tool among them is **ranking rules**. There are two types of ranking rules: [built-in ranking rules](/learn/relevancy/ranking_rules) and custom ranking rules.
## Behavior
Each index possesses a list of ranking rules stored as an array in the [settings object](/reference/api/settings/list-all-settings). This array is fully customizable, meaning you can delete existing rules, add new ones, and reorder them as needed.
Meilisearch uses a [bucket sort](https://en.wikipedia.org/wiki/Bucket_sort) algorithm to rank documents whenever a search query is made. The first ranking rule applies to all documents, while each subsequent rule is only applied to documents considered equal under the previous rule as a tiebreaker.
**The order in which ranking rules are applied matters.** The first rule in the array has the most impact, and the last rule has the least. Our default configuration meets most standard needs, but [you can change it](/reference/api/settings/update-rankingrules).
Deleting a rule means that Meilisearch will no longer sort results based on that rule. For example, **if you delete the [typo ranking rule](/learn/relevancy/ranking_rules#2-typo), documents with typos will still be considered during search**, but they will no longer be sorted by increasing number of typos.
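The tiebreaker behavior described above can be sketched in a few lines of Python: sorting by a tuple of subscores applies each rule only where all earlier rules are tied. The documents and subscores below are hypothetical; this illustrates the bucket-sort concept, not Meilisearch's internal implementation.

```python
# Illustration of bucket-sort ranking: each ranking rule only breaks ties
# left by the rules before it. Subscores are hypothetical; "words" and
# "proximity" are better when higher, "typo" is better when lower.
documents = [
    {"id": 1, "words": 3, "typo": 1, "proximity": 2},
    {"id": 2, "words": 3, "typo": 0, "proximity": 5},
    {"id": 3, "words": 2, "typo": 0, "proximity": 1},
]

# Sorting by the tuple of subscores in rule order means rule N is only
# consulted when documents are tied on rules 1..N-1.
def sort_key(doc):
    return (-doc["words"], doc["typo"], -doc["proximity"])

ranked = sorted(documents, key=sort_key)
print([d["id"] for d in ranked])  # [2, 1, 3]
```

Documents `1` and `2` tie on `words`, so `typo` breaks the tie; document `3` never reaches the later rules.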
# Synonyms
Source: https://www.meilisearch.com/docs/learn/relevancy/synonyms
Use Meilisearch synonyms to indicate sets of query terms which should be considered equivalent during search.
If multiple words have an equivalent meaning in your dataset, you can [create a list of synonyms](/reference/api/settings/update-synonyms). This will make your search results more relevant.
Words set as synonyms won't always return the same results. With the default settings, the `movies` dataset should return 547 results for `great` and 66 for `fantastic`. Let's set them as synonyms:
```bash cURL theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/movies/settings/synonyms' \
-H 'Content-Type: application/json' \
--data-binary '{
"great": ["fantastic"], "fantastic": ["great"]
}'
```
```javascript JS theme={null}
client.index('movies').updateSynonyms({
'great': ['fantastic'],
'fantastic': ['great']
})
```
```python Python theme={null}
client.index('movies').update_synonyms({
'great': ['fantastic'],
'fantastic': ['great']
})
```
```php PHP theme={null}
$client->index('movies')->updateSynonyms([
'great' => ['fantastic'],
'fantastic' => ['great'],
]);
```
```java Java theme={null}
HashMap<String, String[]> synonyms = new HashMap<>();
synonyms.put("great", new String[] {"fantastic"});
synonyms.put("fantastic", new String[] {"great"});
client.index("movies").updateSynonymsSettings(synonyms);
```
```ruby Ruby theme={null}
client.index('movies').update_synonyms({
great: ['fantastic'],
fantastic: ['great']
})
```
```go Go theme={null}
synonyms := map[string][]string{
"great": []string{"fantastic"},
"fantastic": []string{"great"},
}
client.Index("movies").UpdateSynonyms(&synonyms)
```
```csharp C# theme={null}
var synonyms = new Dictionary<string, IEnumerable<string>>
{
{ "great", new string[] { "fantastic" } },
{ "fantastic", new string[] { "great" } }
};
await client.Index("movies").UpdateSynonymsAsync(synonyms);
```
```rust Rust theme={null}
let mut synonyms = std::collections::HashMap::new();
synonyms.insert(String::from("great"), vec![String::from("fantastic")]);
synonyms.insert(String::from("fantastic"), vec![String::from("great")]);
let task: TaskInfo = client
.index("movies")
.set_synonyms(&synonyms)
.await
.unwrap();
```
```swift Swift theme={null}
let synonyms: [String: [String]] = [
"great": ["fantastic"],
"fantastic": ["great"]
]
client.index("movies").updateSynonyms(synonyms) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movies').updateSynonyms({
'great': ['fantastic'],
'fantastic': ['great'],
});
```
With the new settings, searching for `great` returns 595 results and `fantastic` returns 423 results. This is due to various factors like [typos](/learn/relevancy/typo_tolerance_settings#minwordsizefortypos) and [splitting the query](/learn/engine/concat#split-queries) to find relevant documents. The search for `great` will allow only one typo (for example, `create`) and take into account all variations of `great` (for instance, `greatest`) along with `fantastic`.
The number of search results may vary depending on changes to the `movies` dataset.
## Normalization
All synonyms are **lowercased** and **de-unicoded** during the indexing process.
### Example
Consider a situation where `Résumé` and `CV` are set as synonyms.
```json theme={null}
{
"Résumé": [
"CV"
],
"CV": [
"Résumé"
]
}
```
A search for `cv` would return any documents containing `cv` or `CV`, in addition to any that contain `Résumé`, `resumé`, `resume`, etc., unaffected by case or accent marks.
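The normalization applied to synonyms can be approximated with Python's standard library. This is a sketch of the concept, not the exact transliteration Meilisearch performs:

```python
import unicodedata

def normalize(term: str) -> str:
    """Lowercase and strip diacritics, roughly mirroring how synonyms
    are lowercased and de-unicoded at indexing time."""
    decomposed = unicodedata.normalize("NFKD", term)
    stripped = decomposed.encode("ascii", "ignore").decode("ascii")
    return stripped.lower()

print(normalize("Résumé"))  # resume
print(normalize("CV"))      # cv
```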
## One-way association
Use this when you want one word to be synonymous with another, but not the other way around.
```
phone => iphone
```
A search for `phone` will return documents containing `iphone` as if they contained the word `phone`.
However, if you search for `iphone`, documents containing `phone` will be ranked lower in the results due to [the typo rule](/learn/relevancy/ranking_rules).
### Example
To create a one-way synonym list, this is the JSON syntax that should be [added to the settings](/reference/api/settings/update-synonyms).
```json theme={null}
{
"phone": [
"iphone"
]
}
```
## Relevancy
**The exact search query will always take precedence over its synonyms.** The `exactness` ranking rule favors exact words over synonyms when ranking search results.
Take the following set of search results:
```json theme={null}
[
{
"id": 0,
"title": "Ghouls 'n Ghosts"
},
{
"id": 1,
"title": "Phoenix Wright: Spirit of Justice"
}
]
```
If you configure `ghost` as a synonym of `spirit`, queries searching for `spirit` will return document `1` before document `0`.
## Mutual association
When you associate two or more words with each other, they are considered the same in both directions.
```
shoe <=> boot <=> slipper <=> sneakers
```
When a search is done with one of these words, all synonyms will be considered as the same word and will appear in the search results.
### Example
To create a mutual association between four words, this is the JSON syntax that should be [added to the settings](/reference/api/settings/update-synonyms).
```json theme={null}
{
"shoe": [
"boot",
"slipper",
"sneakers"
],
"boot": [
"shoe",
"slipper",
"sneakers"
],
"slipper": [
"shoe",
"boot",
"sneakers"
],
"sneakers": [
"shoe",
"boot",
"slipper"
]
}
```
## Multi-word synonyms
Meilisearch treats multi-word synonyms as [phrases](/reference/api/search/search-with-post#body-q).
### Example
Suppose you set `San Francisco` and `SF` as synonyms with a [mutual association](#mutual-association):
```json theme={null}
{
"san francisco": [
"sf"
],
"sf": [
"san francisco"
]
}
```
If you input `SF` as a search query, Meilisearch will also return results containing the phrase `San Francisco`. However, depending on the ranking rules, they might be considered less [relevant](/learn/relevancy/relevancy) than those containing `SF`. The reverse is also true: if your query is `San Francisco`, documents containing `San Francisco` may rank higher than those containing `SF`.
## Maximum number of synonyms per term
A single term may have up to 50 synonyms. Meilisearch silently ignores any synonyms beyond this limit. For example, if you configure 51 synonyms for `book`, Meilisearch will only return results containing the term itself and the first 50 synonyms.
If any synonyms for a term contain more than one word, the sum of all words across all synonyms for that term cannot exceed 100. Meilisearch silently ignores any synonyms beyond this limit. For example, if you configure 40 multi-word synonyms for `computer` in your application, taken together these synonyms must not contain more than 100 words.
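The two limits above can be modeled in a few lines of Python. This is only an illustration of the budget; the exact order in which Meilisearch drops excess synonyms is an assumption here:

```python
MAX_SYNONYMS_PER_TERM = 50
MAX_TOTAL_WORDS = 100

def effective_synonyms(synonyms: list) -> list:
    """Keep at most 50 synonyms for a term, and stop once the cumulative
    word count across the kept synonyms would exceed 100 words."""
    kept, word_count = [], 0
    for synonym in synonyms[:MAX_SYNONYMS_PER_TERM]:
        words = len(synonym.split())
        if word_count + words > MAX_TOTAL_WORDS:
            break
        kept.append(synonym)
        word_count += words
    return kept

# 60 single-word synonyms: only the first 50 survive the per-term limit.
print(len(effective_synonyms([f"synonym{i}" for i in range(60)])))  # 50
# 60 three-word synonyms: the 100-word budget cuts the list off earlier.
print(len(effective_synonyms(["a b c"] * 60)))  # 33
```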
# Typo tolerance calculations
Source: https://www.meilisearch.com/docs/learn/relevancy/typo_tolerance_calculations
Typo tolerance helps users find relevant results even when their search queries contain spelling mistakes or typos.
Typo tolerance helps users find relevant results even when their search queries contain spelling mistakes or typos, for example, typing `phnoe` instead of `phone`. You can [configure the typo tolerance feature for each index](/reference/api/settings/update-typotolerance).
Meilisearch uses a prefix [Levenshtein algorithm](https://en.wikipedia.org/wiki/Levenshtein_distance) to determine if a word in a document could be a possible match for a query term.
The [number of typos referenced above](/learn/relevancy/typo_tolerance_settings#minwordsizefortypos) is roughly equivalent to Levenshtein distance. The Levenshtein distance between two words *M* and *P* can be thought of as "the minimum cost of transforming *M* into *P*" by performing the following elementary operations on *M*:
* substitution of a character (for example, `kitten` → `sitten`)
* insertion of a character (for example, `siting` → `sitting`)
* deletion of a character (for example, `saturday` → `satuday`)
By default, Meilisearch uses the following rules for matching documents. Note that these rules are **by word** and not for the whole query string.
* If the query word is between `1` and `4` characters, **no typo** is allowed. Only documents that contain words that **start with** or are the **same length** as this query word are considered valid
* If the query word is between `5` and `8` characters, **one typo** is allowed. Documents that contain words that match with **one typo** are retained for the next steps.
* If the query word contains more than `8` characters, we accept a maximum of **two typos**
This means that `saturday`, which is `8` characters long, uses the second rule and matches every document containing **one typo**. For example:
* `saturday` is accepted because it is the same word
* `satuday` is accepted because it contains **one typo**
* `sutuday` is not accepted because it contains **two typos**
* `caturday` is not accepted because it contains **two typos** (as explained [above](/learn/relevancy/typo_tolerance_settings#minwordsizefortypos), a typo on the first letter of a word is treated as two typos)
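The per-word rules above can be sketched with a standard Levenshtein implementation. Note that this plain version counts a first-letter typo as one edit, whereas Meilisearch counts it as two; it is a simplified sketch, not the engine's prefix-aware algorithm:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance: substitutions,
    insertions, and deletions each cost 1."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def max_typos(word: str) -> int:
    """Default typo budget by query-word length."""
    if len(word) <= 4:
        return 0
    if len(word) <= 8:
        return 1
    return 2

print(levenshtein("saturday", "satuday"))  # 1: within the budget of 1
print(levenshtein("saturday", "sutuday"))  # 2: exceeds the budget of 1
```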
## Impact of typo tolerance on the `typo` ranking rule
The [`typo` ranking rule](/learn/relevancy/ranking_rules#2-typo) sorts search results by increasing number of typos on matched query words. Documents with 0 typos will rank highest, followed by those with 1 and then 2 typos.
The presence or absence of the `typo` ranking rule has no impact on the typo tolerance setting. However, **[disabling the typo tolerance setting](/learn/relevancy/typo_tolerance_settings#enabled) effectively also disables the `typo` ranking rule.** This is because all returned documents will contain `0` typos.
To summarize:
* Typo tolerance affects how lenient Meilisearch is when matching documents
* The `typo` ranking rule affects how Meilisearch sorts its results
* Disabling typo tolerance also disables `typo`
# Typo tolerance settings
Source: https://www.meilisearch.com/docs/learn/relevancy/typo_tolerance_settings
This article describes each of the typo tolerance settings.
Typo tolerance helps users find relevant results even when their search queries contain spelling mistakes or typos, for example, typing `phnoe` instead of `phone`. You can [configure the typo tolerance feature for each index](/reference/api/settings/update-typotolerance).
## `enabled`
Typo tolerance is enabled by default, but you can disable it if needed:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/movies/settings/typo-tolerance' \
-H 'Content-Type: application/json' \
--data-binary '{ "enabled": false }'
```
```javascript JS theme={null}
client.index('movies').updateTypoTolerance({
enabled: false
})
```
```python Python theme={null}
client.index('movies').update_typo_tolerance({
'enabled': False
})
```
```php PHP theme={null}
$client->index('movies')->updateTypoTolerance([
'enabled' => false
]);
```
```java Java theme={null}
TypoTolerance typoTolerance = new TypoTolerance();
typoTolerance.setEnabled(false);
client.index("movies").updateTypoToleranceSettings(typoTolerance);
```
```ruby Ruby theme={null}
client.index('movies').update_typo_tolerance({ enabled: false })
```
```go Go theme={null}
client.Index("movies").UpdateTypoTolerance(&meilisearch.TypoTolerance{
Enabled: false,
})
```
```csharp C# theme={null}
var typoTolerance = new TypoTolerance {
Enabled = false
};
await client.Index("movies").UpdateTypoToleranceAsync(typoTolerance);
```
```rust Rust theme={null}
let typo_tolerance = TypoToleranceSettings {
enabled: Some(false),
disable_on_attributes: None,
disable_on_words: None,
min_word_size_for_typos: None,
};
let task: TaskInfo = client
.index("movies")
.set_typo_tolerance(&typo_tolerance)
.await
.unwrap();
```
```dart Dart theme={null}
final toUpdate = TypoTolerance(enabled: false);
await client.index('movies').updateTypoTolerance(toUpdate);
```
With typo tolerance disabled, Meilisearch no longer considers words that are a few characters off from your query terms as matches. For example, a query for `phnoe` will no longer return a document containing the word `phone`.
**In most cases, keeping typo tolerance enabled results in a better search experience.** Massive or multilingual datasets may be exceptions, as typo tolerance can cause false-positive matches in these cases.
## `minWordSizeForTypos`
By default, Meilisearch accepts one typo for query terms containing five or more characters, and up to two typos if the term is at least nine characters long.
If your dataset contains `seven`, searching for `sevem` or `sevan` will match `seven`. But `tow` won't match `two`, as it contains fewer than `5` characters.
You can override these default settings using the `minWordSizeForTypos` object. The code sample below sets the minimum word size for one typo to `4` and the minimum word size for two typos to `10`.
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/movies/settings/typo-tolerance' \
-H 'Content-Type: application/json' \
--data-binary '{
"minWordSizeForTypos": {
"oneTypo": 4,
"twoTypos": 10
}
}'
```
```javascript JS theme={null}
client.index('movies').updateTypoTolerance({
minWordSizeForTypos: {
oneTypo: 4,
twoTypos: 10
}
})
```
```python Python theme={null}
client.index('movies').update_typo_tolerance({
'minWordSizeForTypos': {
'oneTypo': 4,
'twoTypos': 10
}
})
```
```php PHP theme={null}
$client->index('movies')->updateTypoTolerance([
'minWordSizeForTypos' => [
'oneTypo' => 4,
'twoTypos' => 10
]
]);
```
```java Java theme={null}
TypoTolerance typoTolerance = new TypoTolerance();
HashMap<String, Integer> minWordSizeTypos =
new HashMap<String, Integer>() {
{
put("oneTypo", 4);
put("twoTypos", 10);
}
};
typoTolerance.setMinWordSizeForTypos(minWordSizeTypos);
client.index("movies").updateTypoToleranceSettings(typoTolerance);
```
```ruby Ruby theme={null}
client.index('movies').update_typo_tolerance({
min_word_size_for_typos: {
one_typo: 4,
two_typos: 10
}
})
```
```go Go theme={null}
client.Index("movies").UpdateTypoTolerance(&meilisearch.TypoTolerance{
MinWordSizeForTypos: meilisearch.MinWordSizeForTypos{
OneTypo: 4,
TwoTypos: 10,
},
})
```
```csharp C# theme={null}
var typoTolerance = new TypoTolerance {
MinWordSizeTypos = new TypoTolerance.TypoSize {
OneTypo = 4,
TwoTypos = 10
}
};
await client.Index("movies").UpdateTypoToleranceAsync(typoTolerance);
```
```rust Rust theme={null}
let min_word_size_for_typos = MinWordSizeForTypos {
one_typo: Some(4),
two_typos: Some(10)
};
let typo_tolerance = TypoToleranceSettings {
enabled: Some(true),
disable_on_attributes: None,
disable_on_words: None,
min_word_size_for_typos: Some(min_word_size_for_typos),
};
let task: TaskInfo = client
.index("movies")
.set_typo_tolerance(&typo_tolerance)
.await
.unwrap();
```
```dart Dart theme={null}
final toUpdate = TypoTolerance(
minWordSizeForTypos: MinWordSizeForTypos(
oneTypo: 4,
twoTypos: 10,
),
);
await client.index('movies').updateTypoTolerance(toUpdate);
```
When updating the `minWordSizeForTypos` object, keep in mind that:
* `oneTypo` must be greater than or equal to 0 and less than or equal to `twoTypos`
* `twoTypos` must be greater than or equal to `oneTypo` and less than or equal to `255`
To put it another way: `0 ≤ oneTypo ≤ twoTypos ≤ 255`.
We recommend keeping the value of `oneTypo` between `2` and `8` and the value of `twoTypos` between `4` and `14`. If either value is too low, you may get a large number of false-positive results. On the other hand, if both values are set too high, many search queries may not benefit from typo tolerance.
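A quick client-side check of this constraint can catch invalid values before an update is sent. This is a convenience sketch, not part of any official client:

```python
def valid_min_word_size(one_typo: int, two_typos: int) -> bool:
    """Check the documented constraint 0 <= oneTypo <= twoTypos <= 255
    before sending a minWordSizeForTypos update."""
    return 0 <= one_typo <= two_typos <= 255

print(valid_min_word_size(4, 10))  # True
print(valid_min_word_size(10, 4))  # False: twoTypos must not be below oneTypo
```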
**Typo on the first character**\
Meilisearch considers a typo on a query's first character as two typos.
**Concatenation**\
When considering possible candidates for typo tolerance, Meilisearch will concatenate multiple search terms separated by a [space separator](/learn/engine/datatypes#string). This is treated as one typo. For example, a search for `any way` would match documents containing `anyway`.
For more about typo calculations, [see below](/learn/relevancy/typo_tolerance_calculations).
## `disableOnWords`
You can disable typo tolerance for a list of query terms by adding them to `disableOnWords`. `disableOnWords` is case insensitive.
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/movies/settings/typo-tolerance' \
-H 'Content-Type: application/json' \
--data-binary '{
"disableOnWords": [
"shrek"
]
}'
```
```javascript JS theme={null}
client.index('movies').updateTypoTolerance({
disableOnWords: ['shrek']
})
```
```python Python theme={null}
client.index('movies').update_typo_tolerance({
'disableOnWords': ['shrek']
})
```
```php PHP theme={null}
$client->index('movies')->updateTypoTolerance([
'disableOnWords' => ['shrek']
]);
```
```java Java theme={null}
TypoTolerance typoTolerance = new TypoTolerance();
typoTolerance.setDisableOnWords(new String[] {"shrek"});
client.index("movies").updateTypoToleranceSettings(typoTolerance);
```
```ruby Ruby theme={null}
client.index('movies').update_typo_tolerance({ disable_on_words: ['shrek'] })
```
```go Go theme={null}
client.Index("movies").UpdateTypoTolerance(&meilisearch.TypoTolerance{
DisableOnWords: []string{"shrek"},
})
```
```csharp C# theme={null}
var typoTolerance = new TypoTolerance {
DisableOnWords = new string[] { "shrek" }
};
await client.Index("movies").UpdateTypoToleranceAsync(typoTolerance);
```
```rust Rust theme={null}
let typo_tolerance = TypoToleranceSettings {
enabled: Some(true),
disable_on_attributes: None,
disable_on_words: Some(vec!["shrek".to_string()]),
min_word_size_for_typos: None,
};
let task: TaskInfo = client
.index("movies")
.set_typo_tolerance(&typo_tolerance)
.await
.unwrap();
```
```dart Dart theme={null}
final toUpdate = TypoTolerance(
disableOnWords: ['shrek'],
);
await client.index('movies').updateTypoTolerance(toUpdate);
```
With the above settings, Meilisearch won't apply typo tolerance to the query term `Shrek` or `shrek` when matching documents at search time.
## `disableOnAttributes`
You can disable typo tolerance for a specific [document attribute](/learn/getting_started/documents) by adding it to `disableOnAttributes`. The code sample below disables typo tolerance for `title`:
```bash cURL theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/movies/settings/typo-tolerance' \
-H 'Content-Type: application/json' \
--data-binary '{ "disableOnAttributes": ["title"] }'
```
```javascript JS theme={null}
client.index('movies').updateTypoTolerance({
disableOnAttributes: ['title']
})
```
```python Python theme={null}
client.index('movies').update_typo_tolerance({
'disableOnAttributes': ['title']
})
```
```php PHP theme={null}
$client->index('movies')->updateTypoTolerance([
'disableOnAttributes' => ['title']
]);
```
```java Java theme={null}
TypoTolerance typoTolerance = new TypoTolerance();
typoTolerance.setDisableOnAttributes(new String[] {"title"});
client.index("movies").updateTypoToleranceSettings(typoTolerance);
```
```ruby Ruby theme={null}
client.index('movies').update_typo_tolerance({ disable_on_attributes: ['title'] })
```
```go Go theme={null}
client.Index("movies").UpdateTypoTolerance(&meilisearch.TypoTolerance{
DisableOnAttributes: []string{"title"},
})
```
```csharp C# theme={null}
var typoTolerance = new TypoTolerance {
DisableOnAttributes = new string[] { "title" }
};
await client.Index("movies").UpdateTypoToleranceAsync(typoTolerance);
```
```rust Rust theme={null}
let typo_tolerance = TypoToleranceSettings {
enabled: Some(true),
disable_on_attributes: Some(vec!["title".to_string()]),
disable_on_words: None,
min_word_size_for_typos: None,
};
let task: TaskInfo = client
.index("movies")
.set_typo_tolerance(&typo_tolerance)
.await
.unwrap();
```
```dart Dart theme={null}
final toUpdate = TypoTolerance(
disableOnAttributes: ['title'],
);
await client.index('movies').updateTypoTolerance(toUpdate);
```
With the above settings, matches in the `title` attribute will not tolerate any typos. For example, a search for `beautiful` (9 characters) will not match the movie "Biutiful" starring Javier Bardem. With the default settings, this would be a match.
## `disableOnNumbers`
By default, typo tolerance applies to numerical values. This may lead to false positives, such as a search for `2024` matching documents containing `2025` or `2004`.
You can disable typo tolerance for all numeric values across all indexes and search requests by setting `disableOnNumbers` to `true`. Queries containing numbers will then only return exact matches. Besides reducing the number of false positives, disabling typo tolerance on numbers may also improve indexing performance.
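Following the pattern of the other settings samples on this page, the update payload is a single boolean field. The client call below is commented out because it requires a running instance, and we have not verified which official client versions support this field:

```python
# Typo-tolerance settings payload (client support for this field assumed).
settings = {"disableOnNumbers": True}

# client.index("movies").update_typo_tolerance(settings)  # needs a live Meilisearch
print(settings)
```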
# Comparison to alternatives
Source: https://www.meilisearch.com/docs/learn/resources/comparison_to_alternatives
Deciding on a search engine for your project is an important but difficult task. This article describes the differences between Meilisearch and other search engines.
There are many search engines on the web, both open-source and otherwise. Deciding which search solution is the best fit for your project is very important, but also difficult. In this article, we'll go over the differences between Meilisearch and other search engines:
* In the [comparison table](#comparison-table), we present a general overview of the differences between Meilisearch and other search engines
* In the [approach comparison](#approach-comparison), we focus on how Meilisearch measures up against [Elasticsearch](#meilisearch-vs-elasticsearch) and [Algolia](#meilisearch-vs-algolia), currently two of the biggest solutions on the market
* Finally, we end this article with [a quick look at the broader search engine landscape](#a-quick-look-at-the-search-engine-landscape)
Please be advised that many of the search products described below are constantly evolving—just like Meilisearch. These are only our own impressions, and may not reflect recent changes. If something appears inaccurate, please don't hesitate to open an [issue or pull request](https://github.com/meilisearch/documentation).
## Comparison table
### General overview
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| --------------------- | :--------------------------------------------------------------------------------------------------: | :------------: | :------------------------------------------------------------------------------: | :--------------------------------------------------------------------------: |
| Source code licensing | [MIT](https://choosealicense.com/licenses/mit/) (Fully open-source) | Closed-source | [GPL-3](https://choosealicense.com/licenses/gpl-3.0/) (Fully open-source) | [AGPLv3](https://choosealicense.com/licenses/agpl-3.0/) (open-source) |
| Built with | Rust [Check out why we believe in Rust](https://www.abetterinternet.org/docs/memory-safety/). | C++ | C++ | Java |
| Data storage | Disk with Memory Mapping -- Not limited by RAM | Limited by RAM | Limited by RAM | Disk with RAM cache |
### Features
#### Integrations and SDKs
Note: we are only listing libraries officially supported by the internal teams of each different search engine.
Can't find a client you'd like us to support? [Submit your idea here](https://github.com/orgs/meilisearch/discussions)
| SDK | Meilisearch | Algolia | Typesense | Elasticsearch | |
| ------------------------------------------------------------------------------------------------------------- | :---------: | :-----: | :-----------: | :---------------------------------------: | - |
| REST API | ✅ | ✅ | ✅ | ✅ | |
| [JavaScript client](https://github.com/meilisearch/meilisearch-js) | ✅ | ✅ | ✅ | ✅ | |
| [PHP client](https://github.com/meilisearch/meilisearch-php) | ✅ | ✅ | ✅ | ✅ | |
| [Python client](https://github.com/meilisearch/meilisearch-python) | ✅ | ✅ | ✅ | ✅ | |
| [Ruby client](https://github.com/meilisearch/meilisearch-ruby) | ✅ | ✅ | ✅ | ✅ | |
| [Java client](https://github.com/meilisearch/meilisearch-java) | ✅ | ✅ | ✅ | ✅ | |
| [Swift client](https://github.com/meilisearch/meilisearch-swift) | ✅ | ✅ | ✅ | ❌ | |
| [.NET client](https://github.com/meilisearch/meilisearch-dotnet) | ✅ | ✅ | ✅ | ✅ | |
| [Rust client](https://github.com/meilisearch/meilisearch-rust) | ✅ | ❌ | 🔶 WIP | ✅ | |
| [Go client](https://github.com/meilisearch/meilisearch-go) | ✅ | ✅ | ✅ | ✅ | |
| [Dart client](https://github.com/meilisearch/meilisearch-dart) | ✅ | ✅ | ✅ | ❌ | |
| [Symfony](https://github.com/meilisearch/meilisearch-symfony) | ✅ | ✅ | ✅ | ❌ | |
| Django | ❌ | ✅ | ❌ | ❌ | |
| [Rails](https://github.com/meilisearch/meilisearch-rails) | ✅ | ✅ | 🔶 WIP | ✅ | |
| [Official Laravel Scout Support](https://github.com/laravel/scout) | ✅ | ✅ | ✅ | ❌ Available as a standalone module | |
| [Instantsearch](https://github.com/meilisearch/meilisearch-js-plugins/tree/main/packages/instant-meilisearch) | ✅ | ✅ | ✅ | ✅ | |
| [Autocomplete](https://github.com/meilisearch/meilisearch-js-plugins/tree/main/packages/autocomplete-client) | ✅ | ✅ | ✅ | ✅ | |
| [Strapi](https://github.com/meilisearch/strapi-plugin-meilisearch) | ✅ | ✅ | ❌ | ❌ | |
| [Firebase](https://github.com/meilisearch/firestore-meilisearch) | ✅ | ✅ | ✅ | ❌ | |
#### Configuration
##### Document schema
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| ------------------------------- | :-----------------------: | :-----: | :------------------------------------------------------------------: | :---------------------: |
| Schemaless | ✅ | ✅ | 🔶 `id` field is required and must be a string | ✅ |
| Nested field support | ✅ | ✅ | ✅ | ✅ |
| Nested document querying | ❌ | ❌ | ❌ | ✅ |
| Automatic document ID detection | ✅ | ❌ | ❌ | ❌ |
| Native document formats | `JSON`, `NDJSON`, `CSV` | `JSON` | `NDJSON` | `JSON`, `NDJSON`, `CSV` |
| Compression Support | Gzip, Deflate, and Brotli | Gzip | ❌ Reads payload as JSON which can lead to document corruption | Gzip |
##### Relevancy
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| ---------------------------- | :---------: | :-----: | :------------------------------------------------------------------------------: | :---------------------------------------------: |
| Typo tolerant | ✅ | ✅ | ✅ | 🔶 Needs to be specified by fuzzy queries |
| Orderable ranking rules | ✅ | ✅ | 🔶 Field weight can be changed, but ranking rules order cannot be changed. | ❌ |
| Custom ranking rules | ✅ | ✅ | ✅ | 🔶 Function score query |
| Query field weights | ✅ | ✅ | ✅ | ✅ |
| Synonyms | ✅ | ✅ | ✅ | ✅ |
| Stop words | ✅ | ✅ | ❌ | ✅ |
| Automatic language detection | ✅ | ✅ | ❌ | ❌ |
| All language supports | ✅ | ✅ | ✅ | ✅ |
| Ranking Score Details | ✅ | ✅ | ❌ | ✅ |
##### Security
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| ------------------------------------ | :-------------------------------------------------------------------------: | :-----: | :-------: | :-----------------: |
| API Key Management | ✅ | ✅ | ✅ | ✅ |
| Tenant tokens & multi-tenant indexes | ✅ [Multitenancy support](/learn/security/multitenancy_tenant_tokens) | ✅ | ✅ | ✅ Role-based |
##### Search
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| ---------------------------------------------------------------------------- | :--------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------: | :-----------: |
| Placeholder search | ✅ | ✅ | ✅ | ✅ |
| Multi-index search | ✅ | ✅ | ✅ | ✅ |
| Federated search | ✅ | ❌ | ❌ | ✅ |
| Exact phrase search | ✅ | ✅ | ✅ | ✅ |
| Geo search | ✅ | ✅ | ✅ | ✅ |
| Sort by | ✅ | 🔶 Limited to one `sort_by` rule per index. Indexes may have to be duplicated for each sort field and sort order | ✅ Up to 3 sort fields per search query | ✅ |
| Filtering | ✅ Support complex filter queries with an SQL-like syntax. | 🔶 Does not support `OR` operation across multiple fields | ✅ | ✅ |
| Faceted search | ✅ | ✅ | ✅ Faceted fields must be searchable Faceting can take several seconds when >10 million facet values must be returned | ✅ |
| Distinct attributes (de-duplicate documents by a field value) | ✅ | ✅ | ✅ | ✅ |
| Grouping (bucket documents by field values) | ❌ | ✅ | ✅ | ✅ |
##### AI-powered search
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| --------------------- | :--------------------------------------------------------------: | :--------------------------: | :--------------------------------: | :-------------------------------------------------------------------------------------------------------------------: |
| Semantic Search | ✅ | 🔶 Under Premium plan | ✅ | ✅ |
| Hybrid Search | ✅ | 🔶 Under Premium plan | ✅ | ✅ |
| Embedding Generation | ✅ OpenAI HuggingFace REST embedders | Undisclosed | OpenAI GCP Vertex AI | ✅ ELSER E5 Cohere OpenAI Azure Google AI Studio Hugging Face |
| Prompt Templates | ✅ | Undisclosed | ❌ | ❌ |
| Vector Store | ✅ | Undisclosed | ✅ | ✅ |
| Langchain Integration | ✅ | ❌ | ✅ | ✅ |
| GPU support | ✅ CUDA | Undisclosed | ✅ CUDA | ❌ |
##### Visualize
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| --------------------------------------------------------------- | :-----------------------------------------------------------------------------------: | :---------------------: | :---------------------: | :--------------------: |
| [Mini Dashboard](https://github.com/meilisearch/mini-dashboard) | ✅ | 🔶 Cloud product | 🔶 Cloud product | ✅ |
| Search Analytics | ✅ [Cloud product](https://www.meilisearch.com/cloud) | ✅ Cloud Product | ❌ | ✅ Cloud Product |
| Monitoring Dashboard | ✅ [Cloud product](https://www.meilisearch.com/docs/learn/analytics/monitoring) | ✅ Cloud Product | ✅ Cloud Product | ✅ Cloud Product |
#### Deployment
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| ------------------------------ | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----: | :-----------------------------------------------------: | :---------------------------------------------------------------------: |
| Self-hosted | ✅ | ❌ | ✅ | ✅ |
| Platform Support | ARM x86 x64 | n/a | 🔶 ARM (requires Docker on macOS) x86 x64 | ARM x86 x64 |
| Official 1-click deploy | ✅ [DigitalOcean](https://marketplace.digitalocean.com/apps/meilisearch) [Platform.sh](https://console.platform.sh/projects/create-project?template=https://raw.githubusercontent.com/platformsh/template-builder/master/templates/meilisearch/.platform.template.yaml) [Azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fcmaneu%2Fmeilisearch-on-azure%2Fmain%2Fmain.json) [Railway](https://railway.app/new/template/TXxa09?referralCode=YltNo3) [Koyeb](https://app.koyeb.com/deploy?type=docker\&image=getmeili/meilisearch\&name=meilisearch-on-koyeb\&ports=7700;http;/\&env%5BMEILI_MASTER_KEY%5D=REPLACE_ME_WITH_A_STRONG_KEY) | ❌ | 🔶 Only for the cloud-hosted solution | ❌ |
| Official cloud-hosted solution | [Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=comparison-table) | ✅ | ✅ | ✅ |
| High availability | Available with [Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=comparison-table) | ✅ | ✅ | ✅ |
| Run-time dependencies | None | N/A | None | None |
| Backward compatibility | ✅ | N/A | ✅ | ✅ |
| Upgrade path | Documents are automatically reindexed on upgrade | N/A | Documents are automatically reindexed on upgrade | Documents are automatically reindexed on upgrade, up to 1 major version |
### Limits
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| ------------------------- | :-----------: | :---------------------------------------------------: | :----------------: | :-------------------------: |
| Maximum number of indexes | No limitation | 1000, increasing limit possible by contacting support | No limitation | No limitation |
| Maximum index size | 80TiB | 128GB | Constrained by RAM | No limitation |
| Maximum document size | No limitation | 100KB, configurable | No limitation | 100KB default, configurable |
### Community
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| ------------------------------------------ | :---------: | :-----: | :-------: | :-----------: |
| GitHub stars of the main project | 42K | N/A | 17K | 66K |
| Number of contributors on the main project | 179 | N/A | 38 | 1,900 |
| Public Discord/Slack community size | 2,100 | N/A | 2,000 | 16K |
### Support
| | Meilisearch | Algolia | Typesense | Elasticsearch |
| --------------------- | :-------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------: | :--------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------: |
| Status page | ✅ | ✅ | ✅ | ✅ |
| Free support channels | Instant messaging / chatbox (2-3h delay), emails, public Discord community, GitHub issues & discussions | Instant messaging / chatbox, public community forum | Instant messaging/chatbox (24h-48h delay), public Slack community, GitHub issues. | Public Slack community, public community forum, GitHub issues |
| Paid support channels | Slack Channel, emails, personalized support — whatever you need, we’ll be there! | Emails | Emails, phone, private Slack | Web support, emails, phone |
## Approach comparison
### Meilisearch vs Elasticsearch
Elasticsearch is designed as a backend search engine. Although not originally intended for this purpose, it is commonly used to build search bars for end-users.
Elasticsearch can handle searching through massive amounts of data and performing text analysis. To make it effective for end-user search, however, you need to spend time understanding how Elasticsearch works internally so you can customize and tailor it to your needs.
Unlike Elasticsearch, which is a general search engine designed for large amounts of log data (for example, back-facing search), Meilisearch is intended to deliver performant instant-search experiences aimed at end-users (for example, front-facing search).
Elasticsearch can sometimes be too slow if you want to provide a full instant search experience. Most of the time, it is significantly slower in returning search results compared to Meilisearch.
Meilisearch is a perfect choice if you need a simple and easy tool to deploy a typo-tolerant search bar. It provides prefix searching capability, makes search intuitive for users, and returns results instantly with excellent relevance out of the box.
For a more detailed analysis of how it compares with Meilisearch, refer to our [blog post on Elasticsearch](https://blog.meilisearch.com/meilisearch-vs-elasticsearch/?utm_campaign=oss\&utm_source=docs\&utm_medium=comparison).
### Meilisearch vs Algolia
Meilisearch was inspired by Algolia's product and the algorithms behind it. We studied most of the algorithms and data structures described in their blog posts in order to implement our own. Meilisearch is thus a new search engine built on the work of Algolia and on recent research papers.
Meilisearch provides similar features and reaches the same level of relevance just as quickly as its competitor.
If you are a current Algolia user considering a switch to Meilisearch, you may be interested in our [migration guide](/learn/update_and_migration/algolia_migration).
#### Key similarities
Some of the most significant similarities between Algolia and Meilisearch are:
* [Features](/learn/getting_started/what_is_meilisearch#features) such as search-as-you-type, typo tolerance, faceting, etc.
* Fast results targeting an instant search experience (answers \< 50 milliseconds)
* Schemaless indexing
* Support for all JSON data types
* Asynchronous API
* Similar query response
#### Key differences
Contrary to Algolia, Meilisearch is open-source and can be forked or self-hosted.
Additionally, Meilisearch is written in Rust, a modern systems-level programming language. Rust provides speed, portability, and flexibility, which makes the deployment of our search engine inside virtual machines, containers, or even [Lambda@Edge](https://aws.amazon.com/lambda/edge/) a seamless operation.
#### Pricing
The [pricing model for Algolia](https://www.algolia.com/pricing/) is based on the number of records kept and the number of API operations performed. It can be prohibitively expensive for small and medium-sized businesses.
Meilisearch is an **open-source** search engine available via [Meilisearch Cloud](https://meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=comparison) or self-hosted. Unlike Algolia, [Meilisearch pricing](https://www.meilisearch.com/pricing?utm_campaign=oss\&utm_source=docs\&utm_medium=comparison) is based on the number of documents stored and the number of search operations performed. However, Meilisearch offers a more generous free tier that allows more documents to be stored as well as fairer pricing for search usage. Meilisearch also offers a Pro tier for larger use cases to allow for more predictable pricing.
## A quick look at the search engine landscape
### Open source
#### Lucene
Apache Lucene is a free and open-source search library used for indexing and searching full-text documents. It was created in 1999 by Doug Cutting, who had previously written search engines at Xerox's Palo Alto Research Center (PARC) and Apple. Written in Java, Lucene was developed to build web search applications such as Google and DuckDuckGo, the latter of which still uses Lucene for certain types of searches.
Lucene has since been divided into several projects:
* **Lucene itself**: the full-text search library.
* **Solr**: an enterprise search server with a powerful REST API.
* **Nutch**: an extensible and scalable web crawler relying on Apache Hadoop.
Since Lucene is the technology behind many open-source and closed-source search engines, it is considered the reference search library.
#### Sonic
Sonic is a lightweight and schema-less search index server written in Rust. Sonic cannot be considered an out-of-the-box solution, and unlike Meilisearch, it does not ensure relevancy ranking. Instead of storing documents, it consists of an inverted index with a Levenshtein automaton. This means any application querying Sonic has to retrieve the search results from an external database using the returned IDs and then apply its own relevancy ranking.
Its ability to run on a few MBs of RAM makes it a minimalist and resource-efficient alternative to database tools that can be too heavyweight to scale.
#### Typesense
Like Meilisearch, Typesense is a lightweight open-source search engine optimized for speed. To better understand how it compares with Meilisearch, refer to our [blog post on Typesense](https://blog.meilisearch.com/meilisearch-vs-typesense/?utm_campaign=oss\&utm_source=docs\&utm_medium=comparison).
#### Lucene derivatives
#### Lucene-Solr
Solr is a subproject of Apache Lucene, created in 2004 by Yonik Seeley, and is today one of the most widely used search engines available worldwide. Solr is a search platform, written in Java, and built on top of Lucene. In other words, Solr is an HTTP wrapper around Lucene's Java API, meaning you can leverage all the features of Lucene through it. In addition, the Solr server is combined with SolrCloud, which provides distributed indexing and searching capabilities, ensuring high availability and scalability. Data is sharded and automatically replicated.
Furthermore, Solr is not only a search engine; it is often used as a document-structured NoSQL database. Documents are stored in collections, which can be comparable to tables in a relational database.
Due to its extensible plugin architecture and customizable features, Solr is a search engine with an endless number of use cases, though it is especially popular for enterprise search since it can index and search documents and email attachments.
#### Bleve & Tantivy
Bleve and Tantivy are search engine projects, written in Golang and Rust respectively, inspired by Apache Lucene and its algorithms (for example, tf-idf, short for term frequency-inverse document frequency). Like Lucene, both are libraries that can be used in any search project; however, they are not ready-to-use APIs.
### Source available
#### Elasticsearch
Elasticsearch is a search engine based on the Lucene library and is most popular for full-text search. It provides a REST API accessed by JSON over HTTP. One of its key options, called index sharding, gives you the ability to divide indexes into physical spaces in order to increase performance and ensure high availability. Both Lucene and Elasticsearch have been designed for processing high-volume data streams, analyzing logs, and running complex queries. You can perform operations and analysis (for example, calculate the average age of all users named "Thomas") on documents that match a specified query.
Today, Lucene and Elasticsearch are dominant players in the search engine landscape. They are both solid solutions for many different search use cases, as well as for building your own recommendation engine. They are good general-purpose products, but they need to be configured properly to achieve results similar to those of Meilisearch or Algolia.
### Closed source
#### Algolia
Algolia is a company providing a search engine on a SaaS model. Its software is closed source. In its early stages, Algolia offered mobile search engines that could be embedded in apps, facing the challenge of implementing the search algorithms from scratch. From the very beginning, the decision was made to build a search engine directly dedicated to the end-users, specifically, implementing search within mobile apps or websites.
Over the past few years, Algolia has successfully demonstrated how critical typo tolerance is to improving the user experience, as well as its impact on reducing bounce rates and increasing conversions.
Apart from Algolia, a wide choice of SaaS products is available on the search engine market. Most of them use Elasticsearch and fine-tune its settings to offer a custom, personalized solution.
#### Swiftype
Swiftype is a search service provider specialized in website search and analytics. Swiftype was founded in 2012 by Matt Riley and Quin Hoxie, and is now owned by Elastic since November 2017. It is an end-to-end solution built on top of Elasticsearch, meaning it has the ability to leverage the Elastic Stack.
#### Doofinder
Doofinder is a paid on-site search service that is developed to integrate into any website with very little configuration. Doofinder is used by online stores to increase their sales, aiming to facilitate the purchase process.
## Conclusions
Each search solution best fits the constraints of a particular use case. Since each type of search engine offers a unique set of features, comparing their performance would be neither easy nor relevant. For instance, it wouldn't be fair to compare the speed of Elasticsearch and Algolia over a product-based database. The same goes for a very large full-text database.
We cannot, therefore, compare ourselves with Lucene-based or other search engines targeted at specific tasks.
In the particular use case we cover, the most similar solution to Meilisearch is Algolia.
While Algolia offers the most advanced and powerful search features, this power comes at a steep price. Moreover, their service is marketed to big companies.
Meilisearch is dedicated to all types of developers. Our goal is to deliver a developer-friendly tool, easy to install, and to deploy. Because providing an out-of-the-box awesome search experience for the end-users matters to us, we want to give everyone access to the best search experiences out there with minimum effort and without requiring any financial resources.
Usually, when developers look for a search tool to integrate into their application, they go for Elasticsearch or less effective choices. Even though Elasticsearch is not best suited for this use case, it remains a great source-available solution. However, it requires technical know-how to use its advanced features, and hence more time to customize it to your business needs.
We aim to become the default solution for developers.
# Contributing to our documentation
Source: https://www.meilisearch.com/docs/learn/resources/contributing_docs
The Meilisearch documentation is open-source. Learn how to help make it even better.
This documentation website is hosted in a [public GitHub repository](https://github.com/meilisearch/documentation). It is built with [Next.js](https://nextjs.org), written in [MDX](https://mdxjs.com), and deployed on [Vercel](https://www.vercel.com).
## Our documentation philosophy
Our documentation aims to be:
* **Efficient**: we don't want to waste anyone's time
* **Accessible**: reading the texts here shouldn't require native English or a computer science degree
* **Thorough**: the documentation website should contain all information anyone needs to use Meilisearch
* **Open source**: this is a resource by Meilisearch users, for Meilisearch users
## How to contribute?
Both options below require a [GitHub account](https://github.com/signup). Create one if you don't have one yet.
The two most common ways to contribute are:
1. **Opening an [issue](https://github.com/meilisearch/documentation/issues/new)** — to report a problem, request an improvement, or suggest new content
2. **Updating the documentation content by opening a Pull Request (PR)** — either by creating the PR from your local text editor, or directly from GitHub. Before opening a PR, check our [open issues](https://github.com/meilisearch/documentation/issues) to see if one already exists for your change. In most cases, it's a good idea to [open an issue](https://github.com/meilisearch/documentation/issues/new) first so you can coordinate with the maintainers.
### Creating a PR from your local editor
To edit the docs in your preferred editor and open a PR, follow the detailed instructions in our [CONTRIBUTING.md](https://github.com/meilisearch/documentation/blob/main/CONTRIBUTING.md).
### Editing content directly on GitHub
The simplest way to update the docs is to use the "Edit this page" link at the bottom left of every page. Follow these steps:
1. Go to the documentation page you'd like to edit, scroll down, and click **"Edit this page"** at the bottom left of the screen. This will take you to GitHub
2. You may be prompted to [fork the repository](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo)
3. Use GitHub's text editor to update the page
4. Scroll down until you reach the box named **"Propose changes"**
5. Fill in the first field with a short, descriptive title (e.g. "Fix typo in search API reference")
6. Use the second field to add a brief explanation of your changes
7. Click **"Propose changes"**. You should see a "Comparing changes" page
8. Check that the base repository is `meilisearch/documentation` and the base branch is `main`
9. Click **"Create pull request"**
10. A documentation maintainer will review your PR shortly. If everything looks good, your changes will be merged and published. You're now a Meilisearch contributor! 🚀
## How we review contributions
### How we review issues
When **reviewing issues**, we consider a few criteria:
1. Is this task a priority for the documentation maintainers?
2. Is the documentation website the best place for this information? Sometimes an idea might work better on our blog than the docs, or it might be more effective to link to an external resource than write and maintain it ourselves
3. If it's a bug report, can we reproduce the error?
If users show interest in an issue by upvoting it or reporting similar problems, it is more likely the documentation team will dedicate resources to that task.
### How we review PRs
For **reviewing contributor PRs**, we start by making sure the PR is up to our **quality standard**.
We ask the following questions:
1. Is the information **accurate**?
2. Is it **easy to understand**?
3. Do the code samples run without errors? Do they help users understand what we are explaining?
4. Is the English **clear and concise**? Can a non-native speaker understand it?
5. Is the grammar perfect? Are there any typos?
6. Can we shorten text **without losing any important information**?
7. Do the suggested changes require updating other pages in the documentation website?
8. In the case of new content, is the article in the right place? Should other articles in the documentation link to it?
Nothing makes us happier than a thoughtful and helpful PR. Your PRs often save us time and effort, and they make the documentation **even stronger**.
Our only major requirement for PR contributions is that the author responds to communication requests within a reasonable time frame.
Once you've opened a PR in this repository, one of our team members will stop by shortly to review it. If your PR is approved, nothing further is required from you. However, **if in seven days you have not responded to a request for further changes or more information, we will consider the PR abandoned and close it**.
If this happens to you and you think there has been some mistake, please let us know and we will try to rectify the situation.
## Contributing to Meilisearch
There are many ways to contribute to Meilisearch directly as well, such as:
* Contributing to the [main engine](https://github.com/meilisearch/meilisearch/blob/main/CONTRIBUTING.md)
* Contributing to [our integrations](https://github.com/meilisearch/integration-guides)
* [Creating an integration](https://github.com/meilisearch/integration-guides/blob/main/resources/build-integration.md)
* Sharing your feedback and use cases on our [GitHub Discussions](https://github.com/orgs/meilisearch/discussions)
* Creating written or video content (tutorials, blog posts, etc.)
There are also many valuable ways of supporting the above repositories:
* Giving feedback
* Suggesting features
* Creating tests
* Fixing bugs
* Adding content
* Developing features
# Experimental features overview
Source: https://www.meilisearch.com/docs/learn/resources/experimental_features_overview
This article covers how to activate and configure Meilisearch experimental features.
Meilisearch periodically introduces new experimental features. Experimental features are not always ready for production, but offer functionality that might benefit some users.
An experimental feature's API can change significantly and become incompatible between releases. Keep this in mind when using experimental features in a production environment.
Meilisearch makes experimental features available expecting they will become stable in a future release, but this is not guaranteed.
## Activating experimental features
Experimental features fall into two groups based on how they are activated or deactivated:
1. Those that are activated at launch with a command-line flag or environment variable
2. Those that are activated with the [`/experimental-features` API route](/reference/api/experimental-features/list-experimental-features)
### Activating experimental features at launch
Some experimental features can be [activated at launch](/learn/self_hosted/configure_meilisearch_at_launch), for example with a command-line flag:
```sh theme={null}
./meilisearch --experimental-enable-metrics
```
Flags and environment variables for experimental features are not included in the [regular configuration options list](/learn/self_hosted/configure_meilisearch_at_launch#all-instance-options). Instead, consult the specific documentation page for the feature you are interested in, which can be found in the experimental section.
Command-line flags for experimental features are always prefixed with `--experimental`. Environment variables for experimental features are always prefixed with `MEILI_EXPERIMENTAL`.
Activating or deactivating experimental features this way requires you to relaunch Meilisearch.
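For instance, the metrics feature flag shown above could also be enabled through its environment variable equivalent. This is a sketch assuming the standard `MEILI_EXPERIMENTAL` naming convention described above:

```sh theme={null}
# Equivalent to launching with --experimental-enable-metrics
MEILI_EXPERIMENTAL_ENABLE_METRICS=true ./meilisearch
```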
### Activating experimental features during runtime
Some experimental features can be activated via an HTTP call using the [`/experimental-features` API route](/reference/api/experimental-features/list-experimental-features):
```bash cURL theme={null}
curl \
  -X PATCH 'MEILISEARCH_URL/experimental-features/' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "metrics": true
  }'
```
```python Python theme={null}
client.update_experimental_features({"metrics": True})
```
```ruby Ruby theme={null}
client.update_experimental_features(metrics: true)
```
```go Go theme={null}
client.ExperimentalFeatures().SetMetrics(true).Update()
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("apiKey"));
let mut features = ExperimentalFeatures::new(&client);
features.set_metrics(true);
let res = features.update().await.unwrap();
```
Activating or deactivating experimental features this way does not require you to relaunch Meilisearch.
## Current experimental features
| Name | Description | How to configure |
| --------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | ------------------------------------------- |
| [Limit task batch size](/learn/self_hosted/configure_meilisearch_at_launch) | Limits number of tasks processed in a single batch | CLI flag or environment variable |
| [Log customization](/reference/api/logs) | Customize log output and set up log streams | CLI flag or environment variable, API route |
| [Metrics API](/reference/api/metrics) | Exposes Prometheus-compatible analytics data | CLI flag or environment variable, API route |
| [Reduce indexing memory usage](/learn/self_hosted/configure_meilisearch_at_launch) | Optimizes indexing performance | CLI flag or environment variable |
| [Replication parameters](/learn/self_hosted/configure_meilisearch_at_launch) | Alters task processing for clustering compatibility | CLI flag or environment variable |
| [Search queue size](/learn/self_hosted/configure_meilisearch_at_launch) | Configure maximum number of concurrent search requests | CLI flag or environment variable |
| [`CONTAINS` filter operator](/learn/filtering_and_sorting/filter_expression_reference#contains) | Enables usage of `CONTAINS` with the `filter` search parameter | API route |
| [Edit documents with function](/reference/api/documents/edit-documents-by-function) | Use a RHAI function to edit documents directly in the Meilisearch database | API route |
| [`/network` route](/reference/api/network/get-network) | Enable `/network` route | API route |
| [Dumpless upgrade](/learn/self_hosted/configure_meilisearch_at_launch#dumpless-upgrade) | Upgrade Meilisearch without generating a dump | API route |
| [Composite embedders](/reference/api/settings/get-embedders) | Enable composite embedders | API route |
| [Search query embedding cache](/learn/self_hosted/configure_meilisearch_at_launch#search-query-embedding-cache) | Enable a cache for search query embeddings | CLI flag or environment variable |
| [Uncompressed snapshots](/learn/self_hosted/configure_meilisearch_at_launch#uncompressed-snapshots) | Disable snapshot compaction | CLI flag or environment variable |
| [Maximum batch payload size](/learn/self_hosted/configure_meilisearch_at_launch#maximum-batch-payload-size) | Limit batch payload size | CLI flag or environment variable |
| [Multimodal search](/reference/api/settings/list-all-settings) | Enable multimodal search | API route |
| [Disable new indexer](/learn/self_hosted/configure_meilisearch_at_launch) | Use previous settings indexer | CLI flag or environment variable |
| [Experimental vector store](/reference/api/settings/list-all-settings) | Enables index setting to use experimental vector store | API route |
| [Search personalization](/learn/personalization/making_personalized_search_queries) | Enables search personalization | CLI flag or environment variable |
# FAQ
Source: https://www.meilisearch.com/docs/learn/resources/faq
Frequently asked questions
## I have never used a search engine before. Can I use Meilisearch anyway?
Of course! No knowledge of Elasticsearch or Solr is required to use Meilisearch.
Meilisearch is really **easy to use** and thus accessible to all kinds of developers.
[Take a quick tour](/learn/self_hosted/getting_started_with_self_hosted_meilisearch) to learn the basics of Meilisearch!
We also provide a lot of tools, including [SDKs](/learn/resources/sdks), to help you easily integrate Meilisearch into your project. We're adding new tools every day!
Plus, you can [contact us](https://discord.meilisearch.com) if you need any help.
## How do I know if Meilisearch fits my use case?
Since Meilisearch is an open-source and easy-to-use tool, you can give it a try using your data. Follow this [guide](/learn/self_hosted/getting_started_with_self_hosted_meilisearch) to get a quick start!
We have also published a [comparison between Meilisearch and other search engines](/learn/resources/comparison_to_alternatives) to provide an overview of Meilisearch alternatives.
## I am trying to add my documents but I keep receiving a `400 - Bad Request` response
The `400 - Bad Request` response often means that your data is not in an expected format: you might have extraneous commas, mismatched brackets, missing quotes, etc. The Meilisearch API accepts JSON, CSV, and NDJSON formats.
When [adding or replacing documents](/reference/api/documents/add-or-replace-documents), you must enclose them in an array even if there is only one new document.
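As a sketch, serializing a single hypothetical document for upload would look like this. The document fields are invented for illustration; the enclosing array is what matters:

```python theme={null}
import json

# A single hypothetical document
document = {"id": 1, "title": "Carol"}

# Enclose it in an array, even though there is only one document
payload = json.dumps([document])

print(payload)  # [{"id": 1, "title": "Carol"}]
```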
## I have uploaded my documents, but I get no result when I search in my index
Your document upload probably failed. To understand why, please check the status of the document addition task using the returned [`taskUid`](/reference/api/async-task-management/get-task). If the task failed, the response should contain an `error` object.
Here is an example of a failed task:
```json theme={null}
{
  "uid": 1,
  "indexUid": "movies",
  "status": "failed",
  "type": "documentAdditionOrUpdate",
  "canceledBy": null,
  "details": {
    "receivedDocuments": 67493,
    "indexedDocuments": 0
  },
  "error": {
    "message": "Document does not have a `:primaryKey` attribute: `:documentRepresentation`.",
    "code": "missing_document_id",
    "type": "invalid_request",
    "link": "https://docs.meilisearch.com/errors#missing-document-id"
  },
  "duration": "PT1S",
  "enqueuedAt": "2021-08-10T14:29:17.000000Z",
  "startedAt": "2021-08-10T14:29:18.000000Z",
  "finishedAt": "2021-08-10T14:29:19.000000Z"
}
```
Check your error message for more information.
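As a sketch, inspecting a task response for failure might look like this in Python. The `task` dictionary stands in for the parsed API response, with hypothetical values:

```python theme={null}
# Hypothetical response body from the get-task endpoint, already parsed
task = {
    "uid": 1,
    "status": "failed",
    "error": {
        "message": "Document does not have a primary key attribute.",
        "code": "missing_document_id",
        "link": "https://docs.meilisearch.com/errors#missing-document-id",
    },
}

if task["status"] == "failed":
    # The error object explains why the document addition failed
    print(f"Task {task['uid']} failed: {task['error']['message']}")
```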
## Is killing a Meilisearch process safe?
Killing Meilisearch is **safe**, even in the middle of a process (for example, adding a batch of documents). When you restart the server, it will restart the task from the beginning.
More information in the [asynchronous operations guide](/learn/async/asynchronous_operations).
## Do you provide a public roadmap for Meilisearch and its integration tools?
Yes, we maintain a [public roadmap](https://roadmap.meilisearch.com/), where you can see our ongoing projects and what we plan to do in the future.
## What are the recommended requirements for hosting a Meilisearch instance?
**The short answer:**
The recommended requirements for hosting a Meilisearch instance will depend on many factors, such as the number of documents, the size of those documents, the number of filters/sorts you will need, and more. For a quick estimate to start with, try to use a machine that has at least ten times the disk space of your dataset.
**The long answer:**
Indexing documents is a complex process, making it difficult to accurately estimate the size and memory use of a Meilisearch database. There are a few aspects to keep in mind when optimizing your instance.
### Memory usage
There are two things that can cause your memory usage (RAM) to spike:
1. Adding documents
2. Updating index settings (if the index contains documents)
To reduce memory use and indexing time, follow this best practice: **always update index settings before adding your documents**. This avoids unnecessary double-indexing.
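The recommended order can be sketched as a small helper. The `update_settings`/`add_documents` method names mirror Meilisearch's SDKs but are assumptions here; any client object exposing equivalent methods would work:

```python theme={null}
def index_dataset(index, settings, documents):
    """Configure an index before filling it, so documents are indexed only once."""
    index.update_settings(settings)  # 1. settings first...
    index.add_documents(documents)   # 2. ...then documents

# A minimal stand-in client that records the call order
class RecordingIndex:
    def __init__(self):
        self.calls = []
    def update_settings(self, settings):
        self.calls.append("settings")
    def add_documents(self, documents):
        self.calls.append("documents")

index = RecordingIndex()
index_dataset(index, {"filterableAttributes": ["genre"]}, [{"id": 1}])
print(index.calls)  # ['settings', 'documents']
```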
### Disk usage
The following factors have a great impact on the size of your database (in no particular order):
* The number of documents
* The size of documents
* The number of searchable fields
* The number of filterable fields
* The size of each update
* The number of different words present in the dataset
Beware of heavily multilingual datasets and datasets with many unique words, such as IDs or URLs, as they can slow search speed and greatly increase database size. If you do have ID or URL fields, [make them non-searchable](/reference/api/settings/update-searchableattributes) unless they are useful as search criteria.
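For example, an index whose documents carry an ID and a URL field could restrict its searchable fields to the textual ones. The index name and attribute names below are hypothetical:

```sh theme={null}
curl \
  -X PUT 'MEILISEARCH_URL/indexes/products/settings/searchable-attributes' \
  -H 'Content-Type: application/json' \
  --data-binary '["title", "description"]'
```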
### Search speed
Because Meilisearch uses a [memory map](/learn/engine/storage#lmdb), **search speed is based on the ratio between RAM and database size**. In other words:
* A big database + a small amount of RAM => slow search
* A small database + tons of RAM => lightning fast search
Meilisearch also uses disk space as [virtual memory](/learn/engine/storage#memory-usage). This disk space does not correspond to database size; rather, it provides speed and flexibility to the engine by allowing it to go over the limits of physical RAM.
At this time, the number of CPU cores has no direct impact on index or search speed. However, **the more cores you provide to the engine, the more search queries it will be able to process at the same time**.
#### Speeding up Meilisearch
Meilisearch is designed to be fast (≤50ms response time), so speeding it up is rarely necessary. However, if you find that your Meilisearch instance is querying slowly, there are two primary methods to improve search performance:
1. Increase the amount of RAM (or virtual memory)
2. Reduce the size of the database
In general, we recommend the former. However, if you need to reduce the size of your database for any reason, keep in mind that:
* **More relevancy rules => a larger database**
* The proximity [ranking rule](/learn/relevancy/ranking_rules) alone can be responsible for almost 80% of database size
* Adding many attributes to [`filterableAttributes`](/reference/api/settings/get-filterableattributes) also consumes a large amount of disk space
* Multilingual datasets are costly: if possible, split your dataset into one index per language
* [Stop words](/reference/api/settings/get-stopwords) are essential to reducing database size
* Not all attributes need to be [searchable](/learn/relevancy/displayed_searchable_attributes#searchable-fields). Avoid indexing unique IDs.
## Why does Meilisearch send data to Segment? Does Meilisearch track its users?
**Meilisearch will never track or identify individual users**. That being said, we do use Segment to collect anonymous data about user trends, feature usage, and bugs.
You can read more about what metrics we collect, why we collect them, and how to disable it on our [telemetry page](/learn/resources/telemetry). Issues of transparency and privacy are very important to us, so if you feel we are lacking in this area please [open an issue](https://github.com/meilisearch/documentation/issues/new/choose) or send an email to our dedicated email address: [privacy@meilisearch.com](mailto:privacy@meilisearch.com).
# Known limitations
Source: https://www.meilisearch.com/docs/learn/resources/known_limitations
Meilisearch has a number of known limitations. These are hard limits you cannot change and should take into account when designing your application.
Meilisearch has a number of known limitations. Some of these limitations are the result of intentional design trade-offs, while others can be attributed to [LMDB](/learn/engine/storage), the key-value store that Meilisearch uses under the hood.
This article covers hard limits that cannot be altered. Meilisearch also has some default limits that *can* be changed, such as a [default payload limit of 100MB](/learn/self_hosted/configure_meilisearch_at_launch#payload-limit-size) and a [default search limit of 20 hits](/reference/api/search/search-with-post#body-limit).
## Maximum Meilisearch Cloud upload size
**Limitation:** The maximum file upload size when using the Meilisearch Cloud interface is 20MB.
**Explanation:** Handling large files may result in degraded user experience and performance issues. To add datasets larger than 20MB to a Meilisearch Cloud project, use the [add documents endpoint](/reference/api/documents/add-or-replace-documents) or [`meilisearch-importer`](https://github.com/meilisearch/meilisearch-importer).
## Maximum number of query words
**Limitation:** The maximum number of terms taken into account for each [search query](/reference/api/search/search-with-post#body-q) is 10. If a search query includes more than 10 words, all words after the 10th will be ignored.
**Explanation:** Queries with many search terms can lead to long response times. This goes against our goal of providing a fast search-as-you-type experience.
## Maximum number of words per attribute
**Limitation:** Meilisearch can index a maximum of 65,535 positions per attribute. Any words exceeding the 65,535 position limit will be silently ignored.
**Explanation:** This limit is enforced for relevancy reasons. The more words there are in a given attribute, the less relevant the search queries will be.
### Example
Suppose an attribute contains three similar values: `Hello World`, `Hello, World`, and `Hello - World`. Due to how our tokenizer works, each one of them will be processed differently and take up a different number of "positions" in our internal database.
If the attribute value is `Hello World`:
* `Hello` takes the position `0` of the attribute
* `World` takes the position `1` of the attribute
If the attribute value is `Hello, World`:
* `Hello` takes the position `0` of the attribute
* `,` takes the position `8` of the attribute
* `World` takes the position `9` of the attribute
`,` takes 8 positions as it is a hard separator. You can read more about word separators in our [article about data types](/learn/engine/datatypes#string).
If the attribute value is `Hello - World`:
* `Hello` takes the position `0` of the attribute
* `-` takes the position `1` of the attribute
* `World` takes the position `2` of the attribute
`-` takes 1 position as it is a soft separator.
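The position arithmetic above can be sketched in a few lines. This is an illustrative model only, not Meilisearch's actual tokenizer (Charabia): it assumes pre-tokenized input and hard-codes the weights from these examples, with hard separators advancing the counter by 8 positions and soft separators by 1.

```python theme={null}
# Illustrative model of position counting, not the real tokenizer.
HARD_SEPARATORS = {",", ".", ";"}
SOFT_SEPARATORS = {"-", "_", "'"}

def token_positions(tokens):
    """Map each token to the position it occupies in the attribute."""
    positions = {}
    pos = 0
    for token in tokens:
        if token in HARD_SEPARATORS:
            positions[token] = pos + 7  # a hard separator lands 8 positions further
            pos += 8
        else:
            # words and soft separators each take a single position
            positions[token] = pos
            pos += 1
    return positions
```

Running this model on the three examples reproduces the positions listed above: `Hello, World` ends with `World` at position 9, while `Hello - World` ends with `World` at position 2.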
## Maximum number of attributes per document
**Limitation:** Meilisearch can index a maximum of **65,536 attributes per document**. If a document contains more than 65,536 attributes, an error will be thrown.
**Explanation:** This limit is enforced for performance and storage reasons. Overly large internal data structures—resulting from documents with too many fields—lead to overly large databases on disk, and slower search performance.
## Maximum number of documents in an index
**Limitation:** An index can contain no more than 4,294,967,296 documents.
**Explanation:** This is the largest possible value for a 32-bit unsigned integer. Since Meilisearch's engine uses unsigned integers to identify documents internally, this is the maximum number of documents that can be stored in an index.
## Maximum number of concurrent search requests
**Limitation:** Meilisearch handles a maximum of 1000 concurrent search requests.
**Explanation:** This limit exists to prevent Meilisearch from queueing an unlimited number of requests and potentially consuming an unbounded amount of memory. If Meilisearch receives a new request when the queue is already full, it drops a random search request and returns a 503 `too_many_search_requests` error with a `Retry-After` header set to 10 seconds. Configure this limit with [`--experimental-search-queue-size`](/learn/self_hosted/configure_meilisearch_at_launch).
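A client can cooperate with this behavior by honoring the `Retry-After` header before retrying. The sketch below is illustrative and not part of any official SDK; the `send` and `sleep` parameters are injectable stand-ins for an HTTP call and a delay, so the retry logic stays self-contained.

```python theme={null}
import time

def search_with_backoff(send, max_attempts=3, sleep=time.sleep):
    """Retry a search while Meilisearch's queue is full.

    `send` performs one request and returns a (status, headers, body)
    tuple; `sleep` is injectable so the logic can run without waiting.
    """
    for _ in range(max_attempts):
        status, headers, body = send()
        if status != 503:
            return body
        # Honor the Retry-After header Meilisearch sets when overloaded
        sleep(int(headers.get("Retry-After", 10)))
    raise RuntimeError("search queue still full after retries")
```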
## Length of primary key values
**Limitation:** Primary key values are limited to 511 bytes.
**Explanation:** Meilisearch stores primary key values as LMDB keys, a data type whose size is limited to 511 bytes. If a primary key value exceeds 511 bytes, the task containing these documents will fail.
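Because the limit applies to encoded bytes rather than characters, a hypothetical pre-flight check before sending documents might look like this (the helper name is illustrative):

```python theme={null}
# LMDB keys cap primary key values at 511 bytes, counted after
# encoding -- multi-byte characters reach the limit sooner.
MAX_PRIMARY_KEY_BYTES = 511

def primary_key_fits(value: str) -> bool:
    return len(value.encode("utf-8")) <= MAX_PRIMARY_KEY_BYTES
```

Note that 256 two-byte characters (512 bytes) already exceed the limit, even though the string is only 256 characters long.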
## Length of individual `filterableAttributes` values
**Limitation:** Individual `filterableAttributes` values are limited to 468 bytes.
**Explanation:** Meilisearch stores `filterableAttributes` values as keys in LMDB, a data type whose size is limited to 511 bytes, to which Meilisearch adds a margin of 44 bytes. Note that this only applies to individual values—for example, a `genres` attribute can contain any number of values such as `horror`, `comedy`, or `cyberpunk` as long as each one of them is smaller than 468 bytes.
## Maximum filter depth
**Limitation:** Searches using the [`filter` search parameter](/reference/api/search/search-with-post#body-filter) are limited to a maximum filtering depth of 2000.
**Explanation:** Mixing and alternating `AND` and `OR` operators in filters creates nested logic structures. Excessive nesting can lead to stack overflow.
### Example
The following filter is composed of a number of filter expressions. Since these statements are all chained with `OR` operators, there is no nesting:
```sql theme={null}
genre = "romance" OR genre = "horror" OR genre = "adventure"
```
Replacing `OR` with `AND` does not change the filter structure. The following filter's nesting level remains 1:
```sql theme={null}
genre = "romance" AND genre = "horror" AND genre = "adventure"
```
Nesting only occurs when alternating `AND` and `OR` operators. The following example fetches documents that either belong only to user `1`, or belong to users `2` and `3`:
```sql theme={null}
# AND is nested inside OR, creating a second level of nesting
user = 1 OR user = 2 AND user = 3
```
Adding parentheses can help visualize nesting depth:
```sql theme={null}
# Depth 2
user = 1 OR (user = 2 AND user = 3)
# Depth 4
user = 1 OR (user = 2 AND (user = 3 OR (user = 4 AND user = 5)))
# Though this filter is longer, its nesting depth is still 2
user = 1 OR (user = 2 AND user = 3) OR (user = 4 AND user = 5) OR user = 6
```
## Size of integer fields
**Limitation:** Meilisearch can only exactly represent integers between -2⁵³ and 2⁵³.
**Explanation:** Meilisearch stores numeric values as double-precision floating-point numbers. This allows for greater precision and increases the range of magnitudes that Meilisearch can represent, but leads to inaccuracies in [values beyond certain thresholds](https://en.wikipedia.org/wiki/Double-precision_floating-point_format#Precision_limitations_on_integer_values).
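The threshold is easy to verify in any language that uses IEEE 754 doubles. In Python, for example:

```python theme={null}
# 2**53 is the last point where every integer is still exactly
# representable as a double; above it, the spacing between
# representable values grows to 2.
LIMIT = 2**53

assert float(LIMIT) == LIMIT             # 2^53 itself is representable
assert float(LIMIT + 1) == float(LIMIT)  # 2^53 + 1 rounds back to 2^53
assert float(LIMIT + 2) == LIMIT + 2     # the next representable integer
```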
## Maximum number of results per search
**Limitation:** By default, Meilisearch returns up to 1000 documents per search.
**Explanation:** Meilisearch limits the maximum number of returned search results to protect your database from malicious scraping. You may change this with the `maxTotalHits` property of the [pagination index settings](/reference/api/settings/update-pagination). `maxTotalHits` only applies to the [search route](/reference/api/search/search-with-post) and has no effect on the [get documents with POST](/reference/api/documents/list-documents-with-post) and [get documents with GET](/reference/api/documents/list-documents-with-get) endpoints.
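A `maxTotalHits` update can be sent with any HTTP client or official SDK. Below is a standard-library sketch of the underlying request; the URL, the `movies` index name, and the key are placeholders, not values from this documentation.

```python theme={null}
import json
import urllib.request

# Build a PATCH request against the pagination settings sub-route.
req = urllib.request.Request(
    "http://localhost:7700/indexes/movies/settings/pagination",
    data=json.dumps({"maxTotalHits": 2000}).encode("utf-8"),
    headers={
        "Authorization": "Bearer <MASTER_KEY>",  # placeholder key
        "Content-Type": "application/json",
    },
    method="PATCH",
)
# urllib.request.urlopen(req)  # uncomment to send against a running instance
```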
## Large datasets and internal errors
**Limitation:** Meilisearch might throw an internal error when indexing large batches of documents.
**Explanation:** Indexing a large batch of documents, such as a JSON file over 3.5GB in size, can result in Meilisearch opening too many file descriptors. Depending on your machine, this might reach your system's default resource usage limits and trigger an internal error. Use [`ulimit`](https://www.ibm.com/docs/en/aix/7.1?topic=u-ulimit-command) or a similar tool to increase resource consumption limits before running Meilisearch. For example, call `ulimit -Sn 3000` in a UNIX environment to raise the number of allowed open file descriptors to 3000.
## Maximum database size
**Limitation:** Meilisearch supports a maximum index size of around 80TiB on Linux environments. For performance reasons, Meilisearch recommends keeping indexes under 2TiB.
**Explanation:** Meilisearch can accommodate indexes of any size as long as the combined size of active databases is below the maximum virtual address space the OS devotes to a single process. On 64-bit Linux, this limit is approximately 80TiB.
## Maximum task database size
**Limitation:** Meilisearch supports a maximum task database size of 10GiB.
**Explanation:** Depending on your setup, 10GiB should correspond to 5M to 15M tasks. Once the task database contains over 1M entries (roughly 1GiB on average), Meilisearch tries to automatically delete finished tasks while continuing to enqueue new tasks as usual. This ensures the task database does not use an excessive amount of resources. If your database reaches the 10GiB limit, Meilisearch will log a warning indicating the engine is not working properly and refuse to enqueue new tasks.
## Maximum number of indexes in an instance
**Limitation:** Meilisearch can accommodate an arbitrary number of indexes as long as their size does not exceed 2TiB. When dealing with larger indexes, Meilisearch can accommodate up to 20 indexes as long as their combined size does not exceed the OS's virtual address space limit.
**Explanation:** While Meilisearch supports an arbitrary number of indexes under 2TiB, accessing hundreds of different databases in short periods of time might lead to decreased performance and should be avoided when possible.
## Facet search limitation
**Limitation:** When [searching for facet values](/reference/api/facet-search/search-in-facets), Meilisearch returns a maximum of 100 facets.
**Explanation:** The limit on the maximum number of returned facets offers a good balance between usability and comprehensive results. Facet search allows users to filter a large list of facets so they may quickly find categories relevant to their query. This is different from searching through an index of documents. Faceting index settings such as the `maxValuesPerFacet` limit do not impact facet search and only affect queries searching through documents.
# Language
Source: https://www.meilisearch.com/docs/learn/resources/language
Meilisearch is compatible with datasets in any language. Additionally, it features optimized support for languages using whitespace to separate words, as well as Chinese, Hebrew, Japanese, Khmer, Korean, Swedish, and Thai.
Meilisearch is multilingual, featuring optimized support for:
* Any language that uses whitespace to separate words
* Chinese
* Hebrew
* Japanese
* Khmer
* Korean
* Swedish
* Thai
We aim to provide global language support, and your feedback helps us move closer to that goal. If you notice inconsistencies in your search results or the way your documents are processed, please [open an issue in the Meilisearch repository](https://github.com/meilisearch/meilisearch/issues/new/choose).
[Read more about our tokenizer](/learn/indexing/tokenization)
## Improving our language support
While we have employees from all over the world at Meilisearch, we don't speak every language. We rely almost entirely on feedback from external contributors to understand how our engine is performing across different languages.
If you'd like to request optimized support for a language, please upvote the related [discussion in our product repository](https://github.com/meilisearch/product/discussions?discussions_q=label%3Ascope%3Atokenizer+) or [open a new one](https://github.com/meilisearch/product/discussions/new?category=feedback-feature-proposal) if it doesn't exist.
If you'd like to help by developing a tokenizer pipeline yourself: first of all, thank you! We recommend that you take a look at the [tokenizer contribution guide](https://github.com/meilisearch/charabia/blob/main/CONTRIBUTING.md) before making a PR.
## FAQ
### What do you mean when you say Meilisearch offers *optimized* support for a language?
Optimized support for a language means Meilisearch has implemented internal processes specifically tailored to parsing that language, leading to more relevant results.
### My language does not use whitespace to separate words. Can I still use Meilisearch?
Yes, but search results might be less relevant than in one of the fully optimized languages.
### My language does not use the Roman alphabet. Can I still use Meilisearch?
Yes—our users work with many different alphabets and writing systems, such as Cyrillic, Thai, and Japanese.
### Does Meilisearch plan to support additional languages in the future?
Yes, we definitely do. The more feedback we get from native speakers, the easier it is for us to understand how to improve performance for those languages. Similarly, the more requests we get to improve support for a specific language, the more likely we are to devote resources to that project.
# Official SDKs and libraries
Source: https://www.meilisearch.com/docs/learn/resources/sdks
Meilisearch SDKs are available in many popular programming languages and frameworks. Consult this page for a full list of officially supported libraries.
Our team and community have worked hard to bring Meilisearch to almost all popular web development languages, frameworks, and deployment options.
New integrations are constantly in development. If you'd like to contribute, [see below](/learn/resources/sdks#contributing).
## SDKs
You can use Meilisearch API wrappers in your favorite language. These libraries support all API routes.
* [.NET](https://github.com/meilisearch/meilisearch-dotnet)
* [Dart](https://github.com/meilisearch/meilisearch-dart)
* [Golang](https://github.com/meilisearch/meilisearch-go)
* [Java](https://github.com/meilisearch/meilisearch-java)
* [JavaScript](https://github.com/meilisearch/meilisearch-js)
* [PHP](https://github.com/meilisearch/meilisearch-php)
* [Python](https://github.com/meilisearch/meilisearch-python)
* [Ruby](https://github.com/meilisearch/meilisearch-ruby)
* [Rust](https://github.com/meilisearch/meilisearch-rust)
* [Swift](https://github.com/meilisearch/meilisearch-swift)
## Framework integrations
* Laravel: the official [Laravel-Scout](https://github.com/laravel/scout) package supports Meilisearch.
* [Ruby on Rails](https://github.com/meilisearch/meilisearch-rails)
* [Symfony](https://github.com/meilisearch/meilisearch-symfony)
## Front-end tools
* [React](https://github.com/meilisearch/meilisearch-react)
* [Vue](https://github.com/meilisearch/meilisearch-vue)
* [Instant Meilisearch](https://github.com/meilisearch/meilisearch-js-plugins/tree/main/packages/instant-meilisearch)
* [Autocomplete client](https://github.com/meilisearch/meilisearch-js-plugins/tree/main/packages/autocomplete-client)
## DevOps tools
* [meilisearch-kubernetes](https://github.com/meilisearch/meilisearch-kubernetes)
## Platform plugins
* [Strapi plugin](https://github.com/meilisearch/strapi-plugin-meilisearch/)
* [Firebase](https://github.com/meilisearch/firestore-meilisearch)
## AI Assistant tools
* [meilisearch-mcp](https://github.com/meilisearch/meilisearch-mcp): Model Context Protocol server for integrating Meilisearch with AI assistants and tools
* Guide: [Model Context Protocol integration](/guides/ai/mcp)
## Other tools
* [docs-scraper](https://github.com/meilisearch/docs-scraper): a scraper tool to automatically read the content of your documentation and store it into Meilisearch.
## Contributing
If you want to build a new integration for Meilisearch, you are more than welcome to, and we would be happy to help!
We are proud that some of our libraries were developed and are still maintained by external contributors! ♥️
We recommend following [these guidelines](https://github.com/meilisearch/integrations-guides) to make it easier to integrate your work.
# Telemetry
Source: https://www.meilisearch.com/docs/learn/resources/telemetry
Meilisearch collects anonymized data from users in order to improve our product. Consult this page for an exhaustive list of collected data and instructions on how to deactivate telemetry.
Meilisearch collects anonymized data from users in order to improve our product. This can be [deactivated at any time](#how-to-disable-data-collection), and any data that has already been collected can be [deleted on request](#how-to-delete-all-collected-data).
## What tools do we use to collect and visualize data?
We use [Segment](https://segment.com/), a platform for data collection and management, to collect usage data. We then feed that data into [Amplitude](https://amplitude.com/), a tool for graphing and highlighting data, so that we can build visualizations according to our needs.
## What kind of data do we collect?
Our data collection is focused on the following categories:
* **System** metrics, such as the technical specs of the device running Meilisearch, the software version, and the OS
* **Performance** metrics, such as the success rate of search requests and the average latency
* **Usage** metrics, aimed at evaluating our newest features. These change with each new version
See below for the [complete list of metrics we currently collect](#exhaustive-list-of-all-collected-data).
**We will never:**
* Identify or track users
* Collect personal information such as IP addresses, email addresses, or website URLs
* Store data from documents added to a Meilisearch instance
## Why collect telemetry data?
We collect telemetry data for only two reasons: so that we can improve our product, and so that we can continue working on this project full-time.
In order to create a better product, we need reliable quantitative information. The data we collect helps us fix bugs, evaluate the success of features, and better understand our users' needs.
We also need to prove that people are actually using Meilisearch. Usage metrics help us justify our existence to investors so that we can keep this project alive.
## Why should you trust us?
**Don't trust us—hold us accountable.** We feel that it is understandable, and in fact wise, to be distrustful of tech companies when it comes to your private data. That is why we attempt to maintain [complete transparency about our data collection](#exhaustive-list-of-all-collected-data), provide an [opt-out](#how-to-disable-data-collection), and enable users to [request the deletion of all their collected data](#how-to-delete-all-collected-data) at any time. In the absence of global data protection laws, we believe that this is the only ethical way to approach data collection.
No company is perfect. If you ever feel that we are being anything less than 100% transparent or collecting data that is infringing on your personal privacy, please let us know by emailing our dedicated account: [privacy@meilisearch.com](mailto:privacy@meilisearch.com). Similarly, if you discover a data rights initiative or data protection tool that you think is relevant to us, please share it. We are passionate about this subject and take it very seriously.
## How to disable data collection
Data collection can be disabled at any time by setting a command-line option or environment variable, then restarting the Meilisearch instance.
```bash theme={null}
meilisearch --no-analytics
```
```bash theme={null}
export MEILI_NO_ANALYTICS=true
meilisearch
```
```bash theme={null}
# First, open /etc/systemd/system/meilisearch.service with a text editor:
nano /etc/systemd/system/meilisearch.service
# Then add --no-analytics at the end of the command in ExecStart
# Don't forget to save and quit!
# Finally, run the following two commands:
systemctl daemon-reload
systemctl restart meilisearch
```
For more information about configuring Meilisearch, read our [configuration reference](/learn/self_hosted/configure_meilisearch_at_launch).
## How to delete all collected data
We, the Meilisearch team, provide an email address so that users can request the complete removal of their data from all of our tools.
To do so, send an email to [privacy@meilisearch.com](mailto:privacy@meilisearch.com) containing the unique identifier generated for your Meilisearch installation (`Instance UID` when launching Meilisearch). Any questions regarding the management of the data we collect can also be sent to this email address.
## Exhaustive list of all collected data
Whenever an event is triggered that collects some piece of data, Meilisearch does not send it immediately. Instead, it bundles it with other data in a batch of up to `500kb`. Batches are sent either every hour, or after reaching `500kb`—whichever occurs first. This is done in order to improve performance and reduce network traffic.
This list is liable to change with every new version of Meilisearch. It's not because we're trying to be sneaky! It's because when we add new features we need to collect additional data points to see how they perform.
| Metric name | Description | Example |
| -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- |
| `context.app.version` | Meilisearch version number | 1.3.0 |
| `infos.env` | Value of `--env`/`MEILI_ENV` | production |
| `infos.db_path` | `true` if `--db-path`/`MEILI_DB_PATH` is specified | true |
| `infos.import_dump` | `true` if `--import-dump` is specified | true |
| `infos.dump_dir` | `true` if `--dump-dir`/`MEILI_DUMP_DIR` is specified | true |
| `infos.ignore_missing_dump` | `true` if `--ignore-missing-dump` is activated | true |
| `infos.ignore_dump_if_db_exists` | `true` if `--ignore-dump-if-db-exists` is activated | true |
| `infos.import_snapshot` | `true` if `--import-snapshot` is specified | true |
| `infos.schedule_snapshot` | Value of `--schedule_snapshot`/`MEILI_SCHEDULE_SNAPSHOT` if set, otherwise `None` | 86400 |
| `infos.snapshot_dir` | `true` if `--snapshot-dir`/`MEILI_SNAPSHOT_DIR` is specified | true |
| `infos.ignore_missing_snapshot` | `true` if `--ignore-missing-snapshot` is activated | true |
| `infos.ignore_snapshot_if_db_exists` | `true` if `--ignore-snapshot-if-db-exists` is activated | true |
| `infos.http_addr` | `true` if `--http-addr`/`MEILI_HTTP_ADDR` is specified | true |
| `infos.http_payload_size_limit` | Value of `--http-payload-size-limit`/`MEILI_HTTP_PAYLOAD_SIZE_LIMIT` in bytes | 336042103 |
| `infos.log_level` | Value of `--log-level`/`MEILI_LOG_LEVEL` | debug |
| `infos.max_indexing_memory` | Value of `--max-indexing-memory`/`MEILI_MAX_INDEXING_MEMORY` in bytes | 336042103 |
| `infos.max_indexing_threads` | Value of `--max-indexing-threads`/`MEILI_MAX_INDEXING_THREADS` as an integer | 4 |
| `infos.ssl_auth_path` | `true` if `--ssl-auth-path`/`MEILI_SSL_AUTH_PATH` is specified | false |
| `infos.ssl_cert_path` | `true` if `--ssl-cert-path`/`MEILI_SSL_CERT_PATH` is specified | false |
| `infos.ssl_key_path` | `true` if `--ssl-key-path`/`MEILI_SSL_KEY_PATH` is specified | false |
| `infos.ssl_ocsp_path` | `true` if `--ssl-ocsp-path`/`MEILI_SSL_OCSP_PATH` is specified | false |
| `infos.ssl_require_auth` | Value of `--ssl-require-auth`/`MEILI_SSL_REQUIRE_AUTH` as a boolean | false |
| `infos.ssl_resumption` | `true` if `--ssl-resumption`/`MEILI_SSL_RESUMPTION` is specified | false |
| `infos.ssl_tickets` | `true` if `--ssl-tickets`/`MEILI_SSL_TICKETS` is specified | false |
| `system.distribution` | Distribution on which Meilisearch is launched | Arch Linux |
| `system.kernel_version` | Kernel version on which Meilisearch is launched | 5.14.10 |
| `system.cores` | Number of cores | 24 |
| `system.ram_size` | Total RAM capacity. Expressed in `KB` | 16777216 |
| `system.disk_size` | Total capacity of the largest disk. Expressed in `Bytes` | 1048576000 |
| `system.server_provider` | Value of `MEILI_SERVER_PROVIDER` environment variable | AWS |
| `stats.database_size` | Database size. Expressed in `Bytes` | 2621440 |
| `stats.indexes_number` | Number of indexes | 2 |
| `start_since_days` | Number of days since instance was launched | 365 |
| `user_agent` | User-agent header encountered during API calls | \["Meilisearch Ruby (2.1)", "Ruby (3.0)"] |
| `requests.99th_response_time` | Highest latency from among the fastest 99% of successful search requests | 57ms |
| `requests.total_succeeded` | Total number of successful requests | 3456 |
| `requests.total_failed` | Total number of failed requests | 24 |
| `requests.total_received` | Total number of received search requests | 3480 |
| `requests.total_degraded` | Total number of searches canceled after reaching search time cut-off | 100 |
| `requests.total_used_negative_operator` | Number of searches using either a negative word or a negative phrase operator | 173 |
| `sort.with_geoPoint` | `true` if the sort rule `_geoPoint` is specified | true |
| `sort.avg_criteria_number` | Average number of sort criteria among all search requests containing the `sort` parameter | 2 |
| `filter.with_geoBoundingBox` | `true` if the filter rule `_geoBoundingBox` is specified | false |
| `filter.with_geoRadius` | `true` if the filter rule `_geoRadius` is specified | false |
| `filter.most_used_syntax` | Most used filter syntax among all search requests containing the `filter` parameter | string |
| `filter.on_vectors` | `true` if the filter rule includes `_vector` | false |
| `q.max_terms_number` | Highest number of terms given for the `q` parameter | 5 |
| `pagination.max_limit` | Highest value given for the `limit` parameter | 60 |
| `pagination.max_offset` | Highest value given for the `offset` parameter | 1000 |
| `formatting.max_attributes_to_retrieve` | Maximum number of attributes to retrieve | 100 |
| `formatting.max_attributes_to_highlight` | Maximum number of attributes to highlight | 100 |
| `formatting.highlight_pre_tag` | `true` if `highlightPreTag` is specified | false |
| `formatting.highlight_post_tag` | `true` if `highlightPostTag` is specified | false |
| `formatting.max_attributes_to_crop` | Maximum number of attributes to crop | 100 |
| `formatting.crop_length` | `true` if `cropLength` is specified | false |
| `formatting.crop_marker` | `true` if `cropMarker` is specified | false |
| `formatting.show_matches_position` | `true` if `showMatchesPosition` is used in this batch | false |
| `facets.avg_facets_number` | Average number of facets | 10 |
| `primary_key` | Name of primary key when explicitly set. Otherwise `null` | id |
| `payload_type` | All values encountered in the `Content-Type` header, including invalid ones | \["application/json", "text/plain", "application/x-ndjson"] |
| `index_creation` | `true` if a document addition or update request triggered index creation | true |
| `ranking_rules.words_position` | Position of the `words` ranking rule if any, otherwise `null` | 1 |
| `ranking_rules.typo_position` | Position of the `typo` ranking rule if any, otherwise `null` | 2 |
| `ranking_rules.proximity_position` | Position of the `proximity` ranking rule if any, otherwise `null` | 3 |
| `ranking_rules.attribute_position` | Position of the `attribute` ranking rule if any, otherwise `null` | 4 |
| `ranking_rules.attribute_rank_position` | Position of the `attributeRank` ranking rule if any, otherwise `null` | 5 |
| `ranking_rules.attribute_position_position` | Position of the `wordPosition` ranking rule if any, otherwise `null` | 6 |
| `ranking_rules.sort_position` | Position of the `sort` ranking rule | 7 |
| `ranking_rules.exactness_position` | Position of the `exactness` ranking rule if any, otherwise `null` | 8 |
| `ranking_rules.values` | A string representing the ranking rules without the custom asc-desc rules | "words, typo, attributeRank, sort, wordPosition, exactness" |
| `sortable_attributes.total` | Number of sortable attributes | 3 |
| `sortable_attributes.has_geo` | `true` if `_geo` is set as a sortable attribute | true |
| `filterable_attributes.total` | Number of filterable attributes | 3 |
| `filterable_attributes.has_geo` | `true` if `_geo` is set as a filterable attribute | false |
| `filterable_attributes.has_patterns` | `true` if `filterableAttributes` uses `attributePatterns` | true |
| `searchable_attributes.total` | Number of searchable attributes | 4 |
| `searchable_attributes.with_wildcard` | `true` if `*` is specified as a searchable attribute | false |
| `per_task_uid` | `true` if `uids` is used to fetch a particular task resource | true |
| `filtered_by_uid` | `true` if tasks are filtered by the `uids` query parameter | false |
| `filtered_by_index_uid` | `true` if tasks are filtered by the `indexUids` query parameter | false |
| `filtered_by_type` | `true` if tasks are filtered by the `types` query parameter | false |
| `filtered_by_status` | `true` if tasks are filtered by the `statuses` query parameter | false |
| `filtered_by_canceled_by` | `true` if tasks are filtered by the `canceledBy` query parameter | false |
| `filtered_by_before_enqueued_at` | `true` if tasks are filtered by the `beforeEnqueuedAt` query parameter | false |
| `filtered_by_after_enqueued_at` | `true` if tasks are filtered by the `afterEnqueuedAt` query parameter | false |
| `filtered_by_before_started_at` | `true` if tasks are filtered by the `beforeStartedAt` query parameter | false |
| `filtered_by_after_started_at` | `true` if tasks are filtered by the `afterStartedAt` query parameter | false |
| `filtered_by_before_finished_at` | `true` if tasks are filtered by the `beforeFinishedAt` query parameter | false |
| `filtered_by_after_finished_at` | `true` if tasks are filtered by the `afterFinishedAt` query parameter | false |
| `typo_tolerance.enabled` | `true` if typo tolerance is enabled | true |
| `typo_tolerance.disable_on_attributes` | `true` if at least one value is defined for `disableOnAttributes` | false |
| `typo_tolerance.disable_on_words` | `true` if at least one value is defined for `disableOnWords` | false |
| `typo_tolerance.min_word_size_for_typos.one_typo` | The defined value for the `minWordSizeForTypos.oneTypo` parameter | 5 |
| `typo_tolerance.min_word_size_for_typos.two_typos` | The defined value for the `minWordSizeForTypos.twoTypos` parameter | 9 |
| `pagination.max_total_hits` | The defined value for the `pagination.maxTotalHits` property | 1000 |
| `faceting.max_values_per_facet` | The defined value for the `faceting.maxValuesPerFacet` property | 100 |
| `distinct_attribute.set` | `true` if a field name is specified | false |
| `distinct` | `true` if a distinct was specified in an aggregated list of requests | true |
| `proximity_precision.set` | `true` if the setting has been manually set | false |
| `proximity_precision.value` | `byWord` or `byAttribute` | byWord |
| `facet_search.set` | `true` if the user changed `facetSearch` | true |
| `facet_search.value` | `facetSearch` value set by the user | true |
| `prefix_search.set` | `true` if the user changed `prefixSearch` | true |
| `prefix_search.value` | `prefixSearch` value set by the user | indexingTime |
| `displayed_attributes.total` | Number of displayed attributes | 3 |
| `displayed_attributes.with_wildcard` | `true` if `*` is specified as a displayed attribute | false |
| `stop_words.total` | Number of stop words | 3 |
| `separator_tokens.total` | Number of separator tokens | 3 |
| `non_separator_tokens.total` | Number of non-separator tokens | 3 |
| `dictionary.total` | Number of words in the dictionary | 3 |
| `synonyms.total` | Number of synonyms | 3 |
| `per_index_uid` | `true` if the `uid` is used to fetch an index stat resource | false |
| `searches.avg_search_count` | The average number of search queries received per call for the aggregated event | 4.2 |
| `searches.total_search_count` | The total number of search queries received for the aggregated event | 16023 |
| `indexes.avg_distinct_index_count` | The average number of queried indexes received per call for the aggregated event | 1.2 |
| `indexes.total_distinct_index_count` | The total number of distinct index queries for the aggregated event | 6023 |
| `indexes.total_single_index` | The total number of calls when only one index is queried | 2007 |
| `matching_strategy.most_used_strategy` | Most used word matching strategy | last |
| `infos.with_configuration_file` | `true` if the instance is launched with a configuration file | false |
| `infos.experimental_composite_embedders` | `true` if the `compositeEmbedders` feature is set to `true` for this instance | false |
| `infos.experimental_contains_filter` | `true` if the `containsFilter` experimental feature is enabled | false |
| `infos.experimental_edit_documents_by_function` | `true` if the `editDocumentsByFunction` experimental feature is enabled | false |
| `infos.experimental_enable_metrics` | `true` if `--experimental-enable-metrics` is specified at launch | false |
| `infos.experimental_embedding_cache_entries` | Size of configured embedding cache | 100 |
| `infos.experimental_multimodal` | `true` when multimodal search feature is enabled | true |
| `infos.experimental_no_edition_2024_for_settings` | `true` if the instance disabled the new indexer | false |
| `infos.experimental_replication_parameters` | `true` if `--experimental-replication-parameters` is specified at launch | false |
| `infos.experimental_reduce_indexing_memory_usage` | `true` if `--experimental-reduce-indexing-memory-usage` is specified at launch | false |
| `infos.experimental_logs_mode` | `human` or `json` depending on the value specified | human |
| `infos.experimental_enable_logs_route` | `true` if `--experimental-enable-logs-route` is specified at launch | false |
| `infos.gpu_enabled` | `true` if Meilisearch was compiled with CUDA support | false |
| `swap_operation_number` | Number of swap operations | 2 |
| `pagination.most_used_navigation` | Most used search results navigation | estimated |
| `per_document_id` | `true` if the `DELETE /indexes/:indexUid/documents/:documentUid` endpoint was used | false |
| `per_filter` | `true` if `POST /indexes/:indexUid/documents/fetch`, `GET /indexes/:indexUid/documents/`, or `POST /indexes/:indexUid/documents/delete` endpoints were used | false |
| `clear_all` | `true` if `DELETE /indexes/:indexUid/documents` endpoint was used | false |
| `per_batch` | `true` if the `POST /indexes/:indexUid/documents/delete-batch` endpoint was used | false |
| `facets.total_distinct_facet_count` | Total number of distinct facets queried for the aggregated event | false |
| `facets.additional_search_parameters_provided` | `true` if additional search parameters were provided for the aggregated event | false |
| `faceting.sort_facet_values_by_star_count` | `true` if all fields are set to be sorted by count | false |
| `faceting.sort_facet_values_by_total` | The number of different values that were set | 10 |
| `scoring.show_ranking_score` | `true` if `showRankingScore` used in the aggregated event | true |
| `scoring.show_ranking_score_details` | `true` if `showRankingScoreDetails` was used in the aggregated event | true |
| `scoring.ranking_score_threshold` | `true` if `rankingScoreThreshold` was specified in an aggregated list of requests | true |
| `attributes_to_search_on.total_number_of_uses` | Total number of queries where `attributesToSearchOn` is set | 5 |
| `vector.max_vector_size` | Highest number of dimensions given for the `vector` parameter in this batch | 1536 |
| `vector.retrieve_vectors` | `true` if the `retrieveVectors` parameter has been used in this batch | false |
| `hybrid.enabled` | `true` if hybrid search has been used in the aggregated event | true |
| `hybrid.semantic_ratio` | `true` if `semanticRatio` was used in this batch, otherwise `false` | false |
| `hybrid.total_media` | Aggregated number of search requests where `media` is not `null` | 42 |
| `embedders.total` | Numbers of defined embedders | 2 |
| `embedders.sources` | An array representing the different provided sources | ["huggingFace", "userProvided"] |
| `embedders.document_template_used` | A boolean indicating if one of the provided embedders has a custom template defined | true |
| `embedders.document_template_max_bytes` | The largest `documentTemplateMaxBytes` value across all embedders | 400 |
| `embedders.binary_quantization_used` | `true` if the user updated the binary quantized field of the embedded settings | false |
| `infos.task_queue_webhook` | `true` if the instance is launched with a task queue webhook | false |
| `infos.experimental_search_queue_size` | Size of the search queue | 750 |
| `infos.experimental_dumpless_upgrade` | `true` if the instance is launched with the `--experimental-dumpless-upgrade` parameter | true |
| `locales` | List of locales used with `/search` and `/settings` routes | ["fra", "eng"] |
| `federation.use_federation` | `true` when at least one multi-search request contains a top-level federation object | false |
| `network_has_self` | `true` if the network object has a non-null self field | true |
| `network_size` | Number of declared remotes | 0 |
| `network` | `true` when the network experimental feature is enabled | true |
| `experimental_network` | `true` when the network experimental feature is enabled | true |
| `remotes.total_distinct_remote_count` | Sum of the number of distinct remotes appearing in each search request of the aggregate | 48 |
| `remotes.avg_distinct_remote_count` | Average number of distinct remotes appearing in a search request of the aggregate | 2.33 |
| `multimodal` | `true` when multimodal search is enabled via the `/experimental-features` route | true |
| `export.total_received` | Number of exports received in this batch | `152` |
| `export.has_api_key` | Number of exports with an API Key set | `89` |
| `export.avg_index_patterns` | Average number of index patterns set per export | `3.2` |
| `export.avg_patterns_with_filter` | Average number of index patterns with filters per export | `1.7` |
| `export.avg_payload_size` | Average payload size per export | `512` |
| `webhooks_created` | Number of webhooks created in an instance | `2` |
| `webhooks.updated` | Number of times all webhooks in an instance have been updated | `5` |
| `with_vector_filter` | `true` when a document fetch request used a vector filter | `false` |
# Versioning policy
Source: https://www.meilisearch.com/docs/learn/resources/versioning
This article describes the system behind Meilisearch's SDK and engine version numbering and compatibility.
This article describes the system behind Meilisearch's version numbering, compatibility between Meilisearch versions, and how Meilisearch version numbers relate to SDK and documentation versions.
## Engine versioning
Release versions follow the MAJOR.MINOR.PATCH format and adhere to the [Semantic Versioning 2.0.0 convention](https://semver.org/#semantic-versioning-200).
* MAJOR versions contain changes that break compatibility between releases
* MINOR versions introduce new features that are backwards compatible
* PATCH versions only contain high-priority bug fixes and security updates
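The bullet points above follow standard SemVer semantics, which can be sketched as a small compatibility check. This is only an illustration of the convention, not an official Meilisearch tool:

```python
def is_compatible(current: str, required: str) -> bool:
    """Under SemVer, `current` can stand in for `required` only when the
    MAJOR component matches and `current` is at least as recent."""
    cur = tuple(int(part) for part in current.split("."))
    req = tuple(int(part) for part in required.split("."))
    return cur[0] == req[0] and cur >= req
```

For example, `is_compatible("1.8.2", "1.6.0")` holds because only MINOR and PATCH differ, while a MAJOR bump such as `2.0.0` against `1.6.0` signals a breaking change.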
### Release schedule
Meilisearch releases new versions between four and six times a year. This number does not include PATCH releases.
### Support for previous versions
Meilisearch only maintains the latest engine release. Currently, there are no EOL (End of Life) or LTS (Long-Term Support) policies.
Consult the [engine versioning policy](https://github.com/meilisearch/engine-team/blob/main/resources/versioning-policy.md) for more information.
## SDK versioning
Meilisearch version numbers have no relationship to SDK version numbers. SDKs follow their own release schedules and must address issues beyond compatibility with Meilisearch.
When using an SDK, always consult its repository README, release description, and any dedicated documentation to determine which Meilisearch versions and features it supports.
## Documentation versioning
This Meilisearch documentation website follows the latest Meilisearch version. We do not maintain documentation for past releases.
# Securing your project
Source: https://www.meilisearch.com/docs/learn/security/basic_security
This tutorial will show you how to secure your Meilisearch project.
This tutorial will show you how to secure your Meilisearch project. You will see how to manage your master key and how to safely send requests to the Meilisearch API using an API key.
## Creating the master key
The master key is the first and most important step to secure your Meilisearch project.
### Creating the master key in Meilisearch Cloud
Meilisearch Cloud automatically generates a master key for each project. This means Meilisearch Cloud projects are secure by default.
You can view your master key by visiting your project settings, then clicking "API Keys" on the sidebar:
### Creating the master key in a self-hosted instance
To protect your self-hosted instance, relaunch it using the `--master-key` command-line option or the `MEILI_MASTER_KEY` environment variable:
```sh theme={null}
./meilisearch --master-key="MASTER_KEY"
```
UNIX:
```sh theme={null}
export MEILI_MASTER_KEY="MASTER_KEY"
./meilisearch
```
Windows:
```sh theme={null}
set MEILI_MASTER_KEY="MASTER_KEY"
./meilisearch
```
The master key must be at least 16 bytes long and composed of valid UTF-8 characters. Use one of the following tools to generate a secure master key:
* [`uuidgen`](https://www.digitalocean.com/community/tutorials/workflow-command-line-basics-generating-uuids)
* [`openssl rand`](https://www.openssl.org/docs/man1.0.2/man1/rand.html)
* [`shasum`](https://www.commandlinux.com/man-page/man1/shasum.1.html)
* [randomkeygen.com](https://randomkeygen.com/)
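As an illustration of the requirement, Python's standard `secrets` module can also produce a suitable key; any of the tools listed above works equally well:

```python
import secrets

# 24 random bytes, URL-safe base64 encoded, yield a 32-character ASCII
# string: 32 bytes of valid UTF-8, comfortably above the 16-byte minimum.
master_key = secrets.token_urlsafe(24)
print(master_key)
```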
Meilisearch will launch as usual. The startup log should include a message informing you that the instance is protected:
```
A master key has been set. Requests to Meilisearch won't be authorized unless you provide an authentication key.
```
If you supplied an insecure key, Meilisearch will display a warning and suggest you relaunch your instance with an autogenerated alternative:
```
We generated a new secure master key for you (you can safely use this token):
>> --master-key E8H-DDQUGhZhFWhTq263Ohd80UErhFmLIFnlQK81oeQ <<
Restart Meilisearch with the argument above to use this new and secure master key.
```
## Obtaining API keys
When your project is protected, Meilisearch automatically generates four API keys: `Default Search API Key`, `Default Admin API Key`, `Default Read-Only Admin API Key`, and `Default Chat API Key`. API keys are authorization tokens designed to safely communicate with the Meilisearch API.
### Obtaining API keys in Meilisearch Cloud
Find your API keys by visiting your project settings, then clicking "API Keys" on the sidebar:
### Obtaining API keys in a self-hosted instance
Use your master key to query the `/keys` endpoint to view all API keys in your instance:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/keys' \
-H 'Authorization: Bearer MASTER_KEY'
```
```javascript JS theme={null}
const client = new MeiliSearch({ host: 'MEILISEARCH_URL', apiKey: 'masterKey' })
client.getKeys()
```
```python Python theme={null}
client = Client('MEILISEARCH_URL', 'masterKey')
client.get_keys()
```
```php PHP theme={null}
$client = new Client('MEILISEARCH_URL', 'masterKey');
$client->getKeys();
```
```java Java theme={null}
Client client = new Client(new Config("MEILISEARCH_URL", "masterKey"));
client.getKeys();
```
```ruby Ruby theme={null}
client = MeiliSearch::Client.new('MEILISEARCH_URL', 'masterKey')
client.keys
```
```go Go theme={null}
client := meilisearch.New("MEILISEARCH_URL", meilisearch.WithAPIKey("masterKey"))
client.GetKeys(nil);
```
```csharp C# theme={null}
MeilisearchClient client = new MeilisearchClient("MEILISEARCH_URL", "masterKey");
var keys = await client.GetKeysAsync();
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("MASTER_KEY"));
let keys = client.get_keys().await.unwrap();
```
```swift Swift theme={null}
client = try MeiliSearch(host: "MEILISEARCH_URL", apiKey: "masterKey")
client.getKeys { result in
switch result {
case .success(let keys):
print(keys)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
var client = MeiliSearchClient('MEILISEARCH_URL', 'masterKey');
await client.getKeys();
```
Only use the master key to manage API keys. Never use the master key to perform searches or other common operations.
Meilisearch's response will include at least the default API keys:
```json theme={null}
{
"results": [
{
"name": "Default Search API Key",
"description": "Use it to search from the frontend",
"key": "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33",
"uid": "74c9c733-3368-4738-bbe5-1d18a5fecb37",
"actions": [
"search"
],
"indexes": [
"*"
],
"expiresAt": null,
"createdAt": "2024-01-25T16:19:53.949636Z",
"updatedAt": "2024-01-25T16:19:53.949636Z"
},
{
"name": "Default Admin API Key",
"description": "Use it for anything that is not a search operation. Caution! Do not expose it on a public frontend",
"key": "62cdb7020ff920e5aa642c3d4066950dd1f01f4d",
"uid": "20f7e4c4-612c-4dd1-b783-7934cc038213",
"actions": [
"*"
],
"indexes": [
"*"
],
"expiresAt": null,
"createdAt": "2024-01-25T16:19:53.94816Z",
"updatedAt": "2024-01-25T16:19:53.94816Z"
},
{
"name": "Default Read-Only Admin API Key",
"description": "Use it to read information across the whole database. Caution! Do not expose this key on a public frontend",
"key": "9e32fb64e3569a749b0b87900d1026074e798743",
"uid": "7dc1ec09-94fb-49b5-b77b-03ce75af89a0",
"actions": [
"*.get",
"keys.get"
],
"indexes": [
"*"
],
"expiresAt": null,
"createdAt": "2024-01-25T16:19:53.94716Z",
"updatedAt": "2024-01-25T16:19:53.94716Z"
},
{
"name": "Default Chat API Key",
"description": "Use it to chat and search from the frontend",
"key": "0acaa4f3d57517e4b4d7c0052b02772620bd375a",
"uid": "d4e13ace-2a00-428c-90d1-b1c99eec98bd",
"actions": [
"chatCompletions",
"search"
],
"indexes": [
"*"
],
"expiresAt": null,
"createdAt": "2024-01-25T16:19:53.94606Z",
"updatedAt": "2024-01-25T16:19:53.94606Z"
}
],
…
}
```
## Sending secure API requests to Meilisearch
Now that you have your API keys, you can safely query the Meilisearch API. Add API keys to requests using an `Authorization` bearer token header.
Use the `Default Admin API Key` to perform sensitive operations, such as creating a new index:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer DEFAULT_ADMIN_API_KEY' \
--data-binary '{
"uid": "medical_records",
"primaryKey": "id"
}'
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("DEFAULT_ADMIN_API_KEY"));
let task = client
.create_index("medical_records", Some("id"))
.await
.unwrap();
```
Then use the `Default Search API Key` to perform search operations in the index you just created:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/medical_records/search' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer DEFAULT_SEARCH_API_KEY' \
--data-binary '{ "q": "appointments" }'
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("DEFAULT_SEARCH_API_KEY"));
let index = client.index("medical_records");
index
.search()
.with_query("appointments")
.execute::<MedicalRecord>() // MedicalRecord: your document struct deriving serde::Deserialize
.await
.unwrap();
```
### Admin API keys
Meilisearch provides two admin API keys for managing your instance:
* The `Default Admin API Key` grants full access to all Meilisearch operations except API key management. Use it to configure index settings, add documents, and perform other administrative tasks.
* The `Default Read-Only Admin API Key` allows read-only access to the whole database. Use it when you need to retrieve information from your Meilisearch instance without being able to modify it.
Do not expose admin API keys on a public frontend.
### Chat API key
The `Default Chat API Key` is designed for frontend usage with [conversational search](/learn/chat/getting_started_with_chat). It has access to both `search` and `chatCompletions` actions, allowing users to both perform searches and interact with the chat completions feature.
## Conclusion
You have successfully secured Meilisearch by configuring a master key. You then saw how to access the Meilisearch API by adding an API key to your request's authorization header.
# Differences between the master key and API keys
Source: https://www.meilisearch.com/docs/learn/security/differences_master_api_keys
This article explains the main usage differences between the two types of security keys in Meilisearch: master key and API keys.
## Master key
The master key grants full control over an instance and is the only key with access to endpoints for creating and deleting API keys by default. Since the master key is not an API key, it cannot be configured and listed through the `/keys` endpoints.
**Use the master key to create, update, and delete API keys. Do not use it for other operations.**
Consult the [basic security tutorial](/learn/security/basic_security) to learn more about correctly handling your master key.
Exposing the master key can give malicious users complete control over your Meilisearch project. To minimize risks, **only use the master key when managing API keys**.
## API keys
API keys grant access to a specific set of indexes, routes, and endpoints. You can also configure them to expire after a certain date. Use the [`/keys` route](/reference/api/keys/list-api-keys) to create, configure, and delete API keys.
**Use API keys for all API operations except API key management.** This includes search, configuring index settings, managing indexes, and adding and updating documents.
In many cases, the default API keys are all you need to safely manage your Meilisearch project:
* Use the `Default Search API Key` for search operations from the frontend
* Use the `Default Admin API Key` to configure index settings, add documents, and other operations. Do not expose it on a public frontend
* Use the `Default Read-Only Admin API Key` for read-only access to all indexes, documents, and settings. Do not expose it on a public frontend
* Use the `Default Chat API Key` for [conversational search](/learn/chat/getting_started_with_chat). It can be safely used from the frontend
# Generate a tenant token without a library
Source: https://www.meilisearch.com/docs/learn/security/generate_tenant_token_scratch
This guide shows you the main steps when creating tenant tokens without using any libraries.
Generating tenant tokens without a library is possible, but not recommended. This guide summarizes the necessary steps.
The full process requires you to create a token header, prepare the data payload with at least one set of search rules, and then sign the token with an API key.
## Prepare token header
The token header must specify a `JWT` type and an encryption algorithm. Supported tenant token encryption algorithms are `HS256`, `HS384`, and `HS512`.
```json theme={null}
{
"alg": "HS256",
"typ": "JWT"
}
```
## Build token payload
First, create a set of search rules:
```json theme={null}
{
"INDEX_NAME": {
"filter": "ATTRIBUTE = VALUE"
}
}
```
Next, find your default search API key. Query the [get API keys endpoint](/reference/api/keys/get-api-key) and inspect the `uid` field to obtain your API key's UID:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/keys' \
-H 'Authorization: Bearer MASTER_KEY'
```
```javascript JS theme={null}
const client = new MeiliSearch({ host: 'MEILISEARCH_URL', apiKey: 'masterKey' })
client.getKeys()
```
```python Python theme={null}
client = Client('MEILISEARCH_URL', 'masterKey')
client.get_keys()
```
```php PHP theme={null}
$client = new Client('MEILISEARCH_URL', 'masterKey');
$client->getKeys();
```
```java Java theme={null}
Client client = new Client(new Config("MEILISEARCH_URL", "masterKey"));
client.getKeys();
```
```ruby Ruby theme={null}
client = MeiliSearch::Client.new('MEILISEARCH_URL', 'masterKey')
client.keys
```
```go Go theme={null}
client := meilisearch.New("MEILISEARCH_URL", meilisearch.WithAPIKey("masterKey"))
client.GetKeys(nil);
```
```csharp C# theme={null}
MeilisearchClient client = new MeilisearchClient("MEILISEARCH_URL", "masterKey");
var keys = await client.GetKeysAsync();
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("MASTER_KEY"));
let keys = client.get_keys().await.unwrap();
```
```swift Swift theme={null}
client = try MeiliSearch(host: "MEILISEARCH_URL", apiKey: "masterKey")
client.getKeys { result in
switch result {
case .success(let keys):
print(keys)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
var client = MeiliSearchClient('MEILISEARCH_URL', 'masterKey');
await client.getKeys();
```
For maximum security, you should also set an expiry date for your tenant tokens. The following Node.js example configures the token to expire 20 minutes after its creation:
```js theme={null}
Math.floor(Date.now() / 1000) + 20 * 60
```
Lastly, assemble all parts of the payload in a single object:
```json theme={null}
{
"exp": UNIX_TIMESTAMP,
"apiKeyUid": "API_KEY_UID",
"searchRules": {
"INDEX_NAME": {
"filter": "ATTRIBUTE = VALUE"
}
}
}
```
Consult the [token payload reference](/learn/security/tenant_token_reference) for more information on the requirements for each payload field.
## Encode header and payload
You must then base64url-encode the header and the payload, join them with a `.` separator, sign the resulting string with your API key using the chosen algorithm, and append the encoded signature to produce the final token.
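Putting the steps together, here is a minimal sketch using only the Python standard library. It assumes the `HS256` algorithm and the payload fields described above; the argument values are placeholders for your own API key, its UID, and your search rules:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_tenant_token(api_key: str, api_key_uid: str,
                      search_rules: dict, expires_at: int) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {
        "exp": expires_at,
        "apiKeyUid": api_key_uid,
        "searchRules": search_rules,
    }
    # Encode header and payload, then join them with a "." separator
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    # Sign the concatenation with the API key and append the signature
    signature = hmac.new(api_key.encode(), signing_input.encode(),
                         hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)
```

In production, prefer a maintained JWT library over hand-rolled signing.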
## Make a search request using a tenant token
After signing the token, you can use it to make search queries in the same way you would use an API key.
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/patient_medical_records/search' \
-H 'Authorization: Bearer TENANT_TOKEN'
```
# Multitenancy and tenant tokens
Source: https://www.meilisearch.com/docs/learn/security/generate_tenant_token_sdk
This guide shows you the main steps when creating tenant tokens using Meilisearch's official SDKs.
There are two steps to use tenant tokens with an official SDK: generating the tenant token, and making a search request using that token.
## Requirements
* a working Meilisearch project
* an application supporting authenticated users
* one of Meilisearch's official SDKs installed
## Generate a tenant token with an official SDK
First, import the SDK. Then create a set of [search rules](/learn/security/tenant_token_reference#search-rules):
```json theme={null}
{
"patient_medical_records": {
"filter": "user_id = 1"
}
}
```
Search rules must be an object where each key corresponds to an index in your instance. You may configure any number of filters for each index.
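For instance, a hypothetical setup with a second `appointments` index could scope both indexes to the same user in a single set of rules:

```json theme={null}
{
  "patient_medical_records": {
    "filter": "user_id = 1"
  },
  "appointments": {
    "filter": "user_id = 1"
  }
}
```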
Next, find your default search API key. Query the [get API keys endpoint](/reference/api/keys/get-api-key) and inspect the `uid` field to obtain your API key's UID:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/keys' \
-H 'Authorization: Bearer MASTER_KEY'
```
```javascript JS theme={null}
const client = new MeiliSearch({ host: 'MEILISEARCH_URL', apiKey: 'masterKey' })
client.getKeys()
```
```python Python theme={null}
client = Client('MEILISEARCH_URL', 'masterKey')
client.get_keys()
```
```php PHP theme={null}
$client = new Client('MEILISEARCH_URL', 'masterKey');
$client->getKeys();
```
```java Java theme={null}
Client client = new Client(new Config("MEILISEARCH_URL", "masterKey"));
client.getKeys();
```
```ruby Ruby theme={null}
client = MeiliSearch::Client.new('MEILISEARCH_URL', 'masterKey')
client.keys
```
```go Go theme={null}
client := meilisearch.New("MEILISEARCH_URL", meilisearch.WithAPIKey("masterKey"))
client.GetKeys(nil);
```
```csharp C# theme={null}
MeilisearchClient client = new MeilisearchClient("MEILISEARCH_URL", "masterKey");
var keys = await client.GetKeysAsync();
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("MASTER_KEY"));
let keys = client.get_keys().await.unwrap();
```
```swift Swift theme={null}
client = try MeiliSearch(host: "MEILISEARCH_URL", apiKey: "masterKey")
client.getKeys { result in
switch result {
case .success(let keys):
print(keys)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
var client = MeiliSearchClient('MEILISEARCH_URL', 'masterKey');
await client.getKeys();
```
For maximum security, you should also define an expiry date for tenant tokens.
Finally, send this data to your chosen SDK's tenant token generator:
```javascript JS theme={null}
import { generateTenantToken } from 'meilisearch/token'
const searchRules = {
patient_medical_records: {
filter: 'user_id = 1'
}
}
const apiKey = 'B5KdX2MY2jV6EXfUs6scSfmC...'
const apiKeyUid = '85c3c2f9-bdd6-41f1-abd8-11fcf80e0f76'
const expiresAt = new Date('2025-12-20') // optional
const token = await generateTenantToken({ apiKey, apiKeyUid, searchRules, expiresAt })
```
```python Python theme={null}
uid = '85c3c2f9-bdd6-41f1-abd8-11fcf80e0f76'
api_key = 'B5KdX2MY2jV6EXfUs6scSfmC...'
expires_at = datetime(2025, 12, 20)
search_rules = {
'patient_medical_records': {
'filter': 'user_id = 1'
}
}
token = client.generate_tenant_token(api_key_uid=uid, search_rules=search_rules, api_key=api_key, expires_at=expires_at)
```
```php PHP theme={null}
$apiKeyUid = '85c3c2f9-bdd6-41f1-abd8-11fcf80e0f76';
$searchRules = (object) [
'patient_medical_records' => (object) [
'filter' => 'user_id = 1',
]
];
$options = [
'apiKey' => 'B5KdX2MY2jV6EXfUs6scSfmC...',
'expiresAt' => new DateTime('2025-12-20'),
];
$token = $client->generateTenantToken($apiKeyUid, $searchRules, $options);
```
```java Java theme={null}
Map<String, String> filters = new HashMap<>();
filters.put("filter", "user_id = 1");
Map<String, Object> searchRules = new HashMap<>();
searchRules.put("patient_medical_records", filters);
Date expiresAt = new SimpleDateFormat("yyyy-MM-dd").parse("2025-12-20");
TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
TenantTokenOptions options = new TenantTokenOptions();
options.setApiKey("B5KdX2MY2jV6EXfUs6scSfmC...");
options.setExpiresAt(expiresAt);
String token = client.generateTenantToken("85c3c2f9-bdd6-41f1-abd8-11fcf80e0f76", searchRules, options);
```
```ruby Ruby theme={null}
uid = '85c3c2f9-bdd6-41f1-abd8-11fcf80e0f76'
api_key = 'B5KdX2MY2jV6EXfUs6scSfmC...'
expires_at = Time.new(2025, 12, 20).utc
search_rules = {
'patient_medical_records' => {
'filter' => 'user_id = 1'
}
}
token = client.generate_tenant_token(uid, search_rules, api_key: api_key, expires_at: expires_at)
```
```go Go theme={null}
searchRules := map[string]interface{}{
"patient_medical_records": map[string]string{
"filter": "user_id = 1",
},
}
options := &meilisearch.TenantTokenOptions{
APIKey: "B5KdX2MY2jV6EXfUs6scSfmC...",
ExpiresAt: time.Date(2025, time.December, 20, 0, 0, 0, 0, time.UTC),
}
token, err := client.GenerateTenantToken(searchRules, options);
```
```csharp C# theme={null}
var apiKey = "B5KdX2MY2jV6EXfUs6scSfmC...";
var expiresAt = new DateTime(2025, 12, 20);
var searchRules = new TenantTokenRules(new Dictionary<string, object> {
{ "patient_medical_records", new Dictionary<string, string> { { "filter", "user_id = 1" } } }
});
token = client.GenerateTenantToken(
searchRules,
apiKey: apiKey, // optional
expiresAt: expiresAt // optional
);
```
```rust Rust theme={null}
let api_key = "B5KdX2MY2jV6EXfUs6scSfmC...";
let api_key_uid = "6062abda-a5aa-4414-ac91-ecd7944c0f8d";
let expires_at = time::macros::datetime!(2025 - 12 - 20 00:00:00 UTC);
let search_rules = json!({ "patient_medical_records": { "filter": "user_id = 1" } });
let token = client
.generate_tenant_token(api_key_uid, search_rules, api_key, expires_at)
.unwrap();
```
```swift Swift theme={null}
let apiKey = "B5KdX2MY2jV6EXfUs6scSfmC..."
let expiresAt = Date.distantFuture
let searchRules = SearchRulesGroup(SearchRules("patient_medical_records", filter: "user_id = 1"))
client.generateTenantToken(
searchRules,
apiKey: apiKey, // optional
expiresAt: expiresAt // optional
) { (result: Result<String, Swift.Error>) in
switch result {
case .success(let token):
print(token)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
final uid = '85c3c2f9-bdd6-41f1-abd8-11fcf80e0f76';
final apiKey = 'B5KdX2MY2jV6EXfUs6scSfmC...';
final expiresAt = DateTime.utc(2025, 12, 20);
final searchRules = {
'patient_medical_records': {
'filter': 'user_id = 1'
}
};
final token = client.generateTenantToken(
uid,
searchRules,
apiKey: apiKey, // optional
expiresAt: expiresAt // optional
);
```
The SDK will return a valid tenant token.
## Make a search request using a tenant token
After creating a token, you must send it to your application's front end. Exactly how to do that depends on your specific setup.
Once the tenant token is available, use it to authenticate search requests as if it were an API key:
```javascript JS theme={null}
const frontEndClient = new MeiliSearch({ host: 'MEILISEARCH_URL', apiKey: token })
frontEndClient.index('patient_medical_records').search('blood test')
```
```python Python theme={null}
front_end_client = Client('MEILISEARCH_URL', token)
front_end_client.index('patient_medical_records').search('blood test')
```
```php PHP theme={null}
$frontEndClient = new Client('MEILISEARCH_URL', $token);
$frontEndClient->index('patient_medical_records')->search('blood test');
```
```java Java theme={null}
Client frontEndClient = new Client(new Config("MEILISEARCH_URL", token));
frontEndClient.index("patient_medical_records").search("blood test");
```
```ruby Ruby theme={null}
front_end_client = MeiliSearch::Client.new('MEILISEARCH_URL', token)
front_end_client.index('patient_medical_records').search('blood test')
```
```go Go theme={null}
client := meilisearch.New("MEILISEARCH_URL", meilisearch.WithAPIKey(token))
client.Index("patient_medical_records").Search("blood test", &meilisearch.SearchRequest{});
```
```csharp C# theme={null}
frontEndClient = new MeilisearchClient("MEILISEARCH_URL", token);
var searchResult = await frontEndClient.Index("patient_medical_records").SearchAsync("blood test");
```
```rust Rust theme={null}
let front_end_client = Client::new("MEILISEARCH_URL", Some(token));
let results: SearchResults<MedicalRecord> = front_end_client // MedicalRecord: your document struct
.index("patient_medical_records")
.search()
.with_query("blood test")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let frontEndClient = try MeiliSearch(host: "MEILISEARCH_URL", apiKey: token)
let parameters = SearchParameters(query: "blood test")
// MedicalRecord is a placeholder for your own Decodable document type
frontEndClient.index("patient_medical_records")
.search(parameters) { (result: Result<Searchable<MedicalRecord>, Swift.Error>) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
final frontEndClient = MeiliSearchClient('MEILISEARCH_URL', token);
await frontEndClient.index('patient_medical_records').search('blood test');
```
Applications may use tenant tokens and API keys interchangeably when searching. For example, the same application might use a default search API key for queries on public indexes and a tenant token for logged-in users searching on private data.
# Generate tenant tokens without a Meilisearch SDK
Source: https://www.meilisearch.com/docs/learn/security/generate_tenant_token_third_party
This guide shows you the main steps when creating tenant tokens without using Meilisearch's official SDKs.
This guide shows you the main steps when creating tenant tokens using [`node-jsonwebtoken`](https://www.npmjs.com/package/jsonwebtoken), a third-party library.
## Requirements
* a working Meilisearch project
* a JavaScript application supporting authenticated users
* `jsonwebtoken` v9.0
## Generate a tenant token with `jsonwebtoken`
### Build the tenant token payload
First, create a set of search rules:
```json theme={null}
{
"INDEX_NAME": {
"filter": "ATTRIBUTE = VALUE"
}
}
```
Next, find your default search API key. Query the [get API keys endpoint](/reference/api/keys/get-api-key) and inspect the `uid` field to obtain your API key's UID:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/keys' \
-H 'Authorization: Bearer MASTER_KEY'
```
```javascript JS theme={null}
const client = new MeiliSearch({ host: 'MEILISEARCH_URL', apiKey: 'masterKey' })
client.getKeys()
```
```python Python theme={null}
client = Client('MEILISEARCH_URL', 'masterKey')
client.get_keys()
```
```php PHP theme={null}
$client = new Client('MEILISEARCH_URL', 'masterKey');
$client->getKeys();
```
```java Java theme={null}
Client client = new Client(new Config("MEILISEARCH_URL", "masterKey"));
client.getKeys();
```
```ruby Ruby theme={null}
client = MeiliSearch::Client.new('MEILISEARCH_URL', 'masterKey')
client.keys
```
```go Go theme={null}
client := meilisearch.New("MEILISEARCH_URL", meilisearch.WithAPIKey("masterKey"))
client.GetKeys(nil)
```
```csharp C# theme={null}
MeilisearchClient client = new MeilisearchClient("MEILISEARCH_URL", "masterKey");
var keys = await client.GetKeysAsync();
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("MASTER_KEY"));
let keys = client
    .get_keys()
    .await
    .unwrap();
```
```swift Swift theme={null}
let client = try! MeiliSearch(host: "MEILISEARCH_URL", apiKey: "masterKey")
client.getKeys { result in
switch result {
case .success(let keys):
print(keys)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
var client = MeiliSearchClient('MEILISEARCH_URL', 'masterKey');
await client.getKeys();
```
For maximum security, you should also set an expiry date for your tenant tokens. The following example configures the token to expire 20 minutes after its creation:
```js theme={null}
parseInt(Date.now() / 1000) + 20 * 60
```
### Create tenant token
First, include `jsonwebtoken` in your application. Next, assemble the token payload and pass it to `jsonwebtoken`'s `sign` method:
```js theme={null}
const jwt = require('jsonwebtoken');
const apiKey = 'API_KEY';
const apiKeyUid = 'API_KEY_UID';
const currentUserID = 'USER_ID';
const expiryDate = parseInt(Date.now() / 1000) + 20 * 60; // 20 minutes
const tokenPayload = {
searchRules: {
'INDEX_NAME': {
'filter': `user_id = ${currentUserID}`
}
},
apiKeyUid: apiKeyUid,
exp: expiryDate
};
const token = jwt.sign(tokenPayload, apiKey, {algorithm: 'HS256'});
```
`sign` requires the payload, a Meilisearch API key, and an encryption algorithm. Meilisearch supports the following encryption algorithms: `HS256`, `HS384`, and `HS512`.
Your tenant token is now ready to use.
Though this example used `jsonwebtoken`, a Node.js package, you may use any JWT-compatible library in whatever language you are comfortable with.
## Make a search request using a tenant token
After signing the token, you can use it to make search queries in the same way you would use an API key.
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/patient_medical_records/search' \
-H 'Authorization: Bearer TENANT_TOKEN'
```
# Multitenancy and tenant tokens
Source: https://www.meilisearch.com/docs/learn/security/multitenancy_tenant_tokens
In this article you'll read what multitenancy is and how tenant tokens help you manage complex applications and sensitive data.
In this article you'll read what multitenancy is and how tenant tokens help you manage complex applications and sensitive data.
## What is multitenancy?
In software development, multitenancy means that multiple users or tenants share the same computing resources with different levels of access to system-wide data. Proper multitenancy is crucial in cloud computing services such as [DigitalOcean's Droplets](https://www.digitalocean.com/products/droplets) and [Amazon's AWS](https://aws.amazon.com/).
If your Meilisearch application stores sensitive data belonging to multiple users in the same index, you are managing a multi-tenant index. In this context, it is very important to make sure users can only search through their own documents. This can be accomplished with **tenant tokens**.
## What is a tenant token?
Tenant tokens are small packages of encrypted data presenting proof a user can access a certain index. They contain not only security credentials, but also instructions on which documents within that index the user is allowed to see. **Tenant tokens only give access to the search endpoints.** They are meant to be short-lived, so Meilisearch does not store nor keep track of generated tokens.
## What is the difference between tenant tokens and API keys?
API keys give general access to specific actions in an index. An API key with search permissions for a given index can access all information in that index.
Tenant tokens add another layer of control over API keys. They can restrict which information a specific user has access to in an index. If you store private data from multiple customers in a single index, tenant tokens allow you to prevent one user from accessing another's data.
## How to integrate tenant tokens with an application?
Tenant tokens do not require any specific Meilisearch configuration. You can use them exactly the same way as you would use any API key with search permissions.
You must generate tokens in your application. The quickest method to generate tenant tokens is [using an official SDK](/learn/security/generate_tenant_token_sdk). It is also possible to [generate a token with a third-party library](/learn/security/generate_tenant_token_third_party).
## Sample application
Meilisearch developed an in-app search demo using multi-tenancy in a SaaS CRM. It only allows authenticated users to search through contacts, companies, and deals belonging to their organization.
Check out this [sample application](https://saas.meilisearch.com/?utm_source=docs). Its code is publicly available in a dedicated [GitHub repository](https://github.com/meilisearch/saas-demo/).
You can also use tenant tokens in role-based access control (RBAC) systems. Consult [How to implement RBAC with Meilisearch](https://blog.meilisearch.com/role-based-access-guide/) on Meilisearch's official blog for more information.
# Protected and unprotected Meilisearch projects
Source: https://www.meilisearch.com/docs/learn/security/protected_unprotected
This article explains the differences between protected and unprotected Meilisearch projects and instances.
This article explains the differences between protected and unprotected Meilisearch projects and instances.
## Protected projects
In protected projects, all Meilisearch API routes and endpoints can only be accessed by requests bearing an API key. The only exception to this rule is the `/health` endpoint, which may still be queried with unauthorized requests.
**Meilisearch Cloud projects are protected by default**. Self-hosted instances are only protected if you launch them with a master key.
Consult the [basic security tutorial](/learn/security/basic_security) for instructions on how to communicate with protected projects.
## Unprotected projects
In unprotected projects and self-hosted instances, any user may access any API endpoint. Never leave a publicly accessible instance unprotected. Only use unprotected instances in safe development environments.
Meilisearch Cloud projects are always protected. Meilisearch self-hosted instances are unprotected by default.
# Resetting the master key
Source: https://www.meilisearch.com/docs/learn/security/resetting_master_key
This guide shows you how to reset the master key in Meilisearch Cloud and self-hosted instances.
This guide shows you how to manage the master key in Meilisearch Cloud and self-hosted instances. Resetting the master key may be necessary if an unauthorized party obtains access to your master key.
## Resetting the master key in Meilisearch Cloud
Meilisearch Cloud does not give users control over the master key. If you need to change your master key, contact support through the Cloud interface or on the official [Meilisearch Discord server](https://discord.meilisearch.com).
Resetting the master key automatically invalidates all API keys. Meilisearch Cloud will generate new default API keys automatically.
## Resetting the master key in self-hosted instances
To reset your master key in a self-hosted instance, relaunch your instance and pass a new value to `--master-key` or `MEILI_MASTER_KEY`.
Resetting the master key automatically invalidates all API keys. Meilisearch will generate new default API keys automatically.
# Tenant token payload reference
Source: https://www.meilisearch.com/docs/learn/security/tenant_token_reference
Meilisearch's tenant tokens are JSON web tokens (JWTs). Their payload is made of three elements: search rules, an API key UID, and an optional expiration date.
Meilisearch's tenant tokens are JSON web tokens (JWTs). Their payload is made of three elements: [search rules](#search-rules), an [API key UID](#api-key-uid), and an optional [expiration date](#expiry-date).
## Example payload
```json theme={null}
{
"exp": 1646756934,
"apiKeyUid": "at5cd97d-5a4b-4226-a868-2d0eb6d197ab",
"searchRules": {
"INDEX_NAME": {
"filter": "attribute = value"
}
}
}
```
## Search rules
The search rules object is a set of instructions defining the search parameters Meilisearch will enforce in every query made with a specific tenant token.
### Search rules object
`searchRules` must be a JSON object. Each key must correspond to one or more indexes:
```json theme={null}
{
"searchRules": {
"*": {},
"INDEX_*": {},
"INDEX_NAME_A": {}
}
}
```
Each search rule object may contain a single `filter` key. This `filter`'s value must be a [filter expression](/learn/filtering_and_sorting/filter_expression_reference):
```json theme={null}
{
"*": {
"filter": "attribute_A = value_X AND attribute_B = value_Y"
}
}
```
Meilisearch applies the filter to all searches made with that tenant token. A token only has access to the indexes present in the `searchRules` object.
A token may contain rules for any number of indexes. **Specific rulesets take precedence and overwrite `*` rules.**
Because tenant tokens are generated in your application, Meilisearch cannot check if search rule filters are valid. Invalid search rules throw errors when searching.
Consult the search API reference for [more information on Meilisearch filter syntax](/reference/api/search/search-with-post#body-filter).
The search rule may also be an empty object. In this case, the tenant token will have access to all documents in an index:
```json theme={null}
{
"INDEX_NAME": {}
}
```
### Examples
#### Single filter
In this example, the user will only receive `medical_records` documents whose `user_id` equals `1`:
```json theme={null}
{
"medical_records": {
"filter": "user_id = 1"
}
}
```
#### Multiple filters
In this example, the user will only receive `medical_records` documents whose `user_id` equals `1` and whose `published` field equals `true`:
```json theme={null}
{
"medical_records": {
"filter": "user_id = 1 AND published = true"
}
}
```
#### Give access to all documents in an index
In this example, the user has access to all documents in `medical_records`:
```json theme={null}
{
"medical_records": {}
}
```
#### Target multiple indexes with a partial wildcard
In this example, the user will receive documents from any index starting with `medical`. This includes indexes such as `medical_records` and `medical_patents`:
```json theme={null}
{
"medical*": {
"filter": "user_id = 1"
}
}
```
#### Target all indexes with a wildcard
In this example, the user will receive documents from any index in the whole instance:
```json theme={null}
{
"*": {
"filter": "user_id = 1"
}
}
```
#### Target multiple indexes manually
In this example, the user has access to documents with `user_id = 1` for all indexes, except one. When querying `medical_records`, the user will only have access to published documents:
```json theme={null}
{
"*": {
"filter": "user_id = 1"
},
"medical_records": {
"filter": "user_id = 1 AND published = true"
}
}
```
## API key UID
Tenant token payloads must include an API key UID to validate requests. The UID is an alphanumeric string identifying an API key:
```json theme={null}
{
"apiKeyUid": "at5cd97d-5a4b-4226-a868-2d0eb6d197ab"
}
```
Query the [get one API key endpoint](/reference/api/keys/get-api-key) to obtain an API key's UID.
The UID must indicate an API key with access to [the search action](/reference/api/keys/create-api-key#body-actions). A token has access to the same indexes and routes as the API key used to generate it.
Since a master key is not an API key, **you cannot use a master key to create a tenant token**. Avoid exposing API keys and **always generate tokens on your application's back end**.
If an API key expires, any tenant tokens created with it will become invalid. The same applies if the API key is deleted or regenerated due to a changed master key.
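Because a tenant token is a standard JWT, your back end can inspect its payload without verifying the signature, for example to avoid sending out tokens that are about to expire. A minimal Node.js sketch; the `decodePayload` and `isUsable` helpers are illustrative, not part of any SDK:

```js theme={null}
// Decode a JWT payload without verifying its signature (inspection only)
function decodePayload(token) {
  const [, payload] = token.split('.');
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

// Reject tokens that are expired or about to expire
function isUsable(token, marginSeconds = 60) {
  const { exp } = decodePayload(token);
  if (exp == null) return true; // no expiry date set
  return exp > Date.now() / 1000 + marginSeconds;
}
```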
## Expiry date
The expiry date must be a UNIX timestamp or `null`:
```json theme={null}
{
"exp": 1646756934
}
```
A token's expiration date cannot exceed its parent API key's expiration date.
Setting a token expiry date is optional, but highly recommended. Tokens without an expiry date remain valid indefinitely and may be a security liability.
The only way to revoke a token without an expiry date is to [delete](/reference/api/keys/delete-api-key) its parent API key.
Changing an instance's master key forces Meilisearch to regenerate all API keys and will also render all existing tenant tokens invalid.
# Configure Meilisearch at launch
Source: https://www.meilisearch.com/docs/learn/self_hosted/configure_meilisearch_at_launch
Configure Meilisearch at launch with command-line options, environment variables, or a configuration file.
When self-hosting Meilisearch, you can configure your instance at launch with **command-line options**, **environment variables**, or a **configuration file**.
These startup options affect your entire Meilisearch instance, not just a single index. For settings that affect search within a single index, see [index settings](/reference/api/settings/list-all-settings).
## Command-line options and flags
Pass **command-line options** and their respective values when launching a Meilisearch instance.
```bash theme={null}
./meilisearch --db-path ./meilifiles --http-addr 'localhost:7700'
```
In the previous example, `./meilisearch` is the command that launches a Meilisearch instance, while `--db-path` and `--http-addr` are options that modify this instance's behavior.
Meilisearch also has a number of **command-line flags.** Unlike command-line options, **flags don't take values**. If a flag is given, it is activated and changes Meilisearch's default behavior.
```bash theme={null}
./meilisearch --no-analytics
```
The above flag disables analytics for the Meilisearch instance and does not accept a value.
**Both command-line options and command-line flags take precedence over environment variables.** All command-line options and flags are prepended with `--`.
## Environment variables
To configure a Meilisearch instance using environment variables, set the environment variable prior to launching the instance. If you are unsure how to do this, read more about [setting and listing environment variables](https://linuxize.com/post/how-to-set-and-list-environment-variables-in-linux/), or [use a command-line option](#command-line-options-and-flags) instead.
UNIX:
```sh theme={null}
export MEILI_DB_PATH=./meilifiles
export MEILI_HTTP_ADDR=localhost:7700
./meilisearch
```
Windows:
```sh theme={null}
set MEILI_DB_PATH=./meilifiles
set MEILI_HTTP_ADDR=127.0.0.1:7700
./meilisearch
```
In the previous example, `./meilisearch` is the command that launches a Meilisearch instance, while `MEILI_DB_PATH` and `MEILI_HTTP_ADDR` are environment variables that modify this instance's behavior.
Environment variables for command-line flags accept `n`, `no`, `f`, `false`, `off`, and `0` as `false`. An absent environment variable is also considered `false`. Any other value is considered `true`.
Environment variables are always identical to the corresponding command-line option, but prefixed with `MEILI_`, written in all uppercase, and with hyphens replaced by underscores.
## Configuration file
Meilisearch accepts a configuration file in the `.toml` format as an alternative to command-line options and environment variables. Configuration files can be easily shared and versioned, and allow you to define multiple options.
**When used simultaneously, environment variables override the configuration file, and command-line options override environment variables.**
You can download a default configuration file using the following command:
```sh theme={null}
curl https://raw.githubusercontent.com/meilisearch/meilisearch/latest/config.toml > config.toml
```
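As an illustration, a minimal configuration file equivalent to the command-line example at the start of this page might look like this (option names are the snake case form of the corresponding CLI options):

```toml theme={null}
db_path = "./meilifiles"
http_addr = "localhost:7700"
```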
By default, Meilisearch will look for a `config.toml` file in the working directory. If it is present, it will be used as the configuration file. You can verify this when you launch Meilisearch:
```
888b d888 d8b 888 d8b 888
8888b d8888 Y8P 888 Y8P 888
88888b.d88888 888 888
888Y88888P888 .d88b. 888 888 888 .d8888b .d88b. 8888b. 888d888 .d8888b 88888b.
888 Y888P 888 d8P Y8b 888 888 888 88K d8P Y8b "88b 888P" d88P" 888 "88b
888 Y8P 888 88888888 888 888 888 "Y8888b. 88888888 .d888888 888 888 888 888
888 " 888 Y8b. 888 888 888 X88 Y8b. 888 888 888 Y88b. 888 888
888 888 "Y8888 888 888 888 88888P' "Y8888 "Y888888 888 "Y8888P 888 888
Config file path: "./config.toml"
```
If the `Config file path` is anything other than `"none"`, it means that a configuration file was successfully located and used to start Meilisearch.
You can override the default location of the configuration file using the `MEILI_CONFIG_FILE_PATH` environment variable or the `--config-file-path` CLI option:
```sh theme={null}
./meilisearch --config-file-path="./config.toml"
```
UNIX:
```sh theme={null}
export MEILI_CONFIG_FILE_PATH="./config.toml"
./meilisearch
```
Windows:
```sh theme={null}
set MEILI_CONFIG_FILE_PATH="./config.toml"
./meilisearch
```
### Configuration file formatting
You can configure any environment variable or CLI option using a configuration file. In configuration files, options must be written in [snake case](https://en.wikipedia.org/wiki/Snake_case). For example, `--import-dump` would be written as `import_dump`.
```toml theme={null}
import_dump = "./example.dump"
```
Specifying the `config_file_path` option within the configuration file will throw an error. This is the only configuration option that cannot be set within a configuration file.
## Configuring cloud-hosted instances
To configure Meilisearch with command-line options in a cloud-hosted instance, edit its [service file](/guides/running_production#step-4-run-meilisearch-as-a-service). The default location of the service file is `/etc/systemd/system/meilisearch.service`.
To configure Meilisearch with environment variables in a cloud-hosted instance, modify Meilisearch's `env` file. Its default location is `/var/opt/meilisearch/env`.
After editing your configuration options, relaunch the Meilisearch service:
```sh theme={null}
systemctl restart meilisearch
```
[Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=instance-options) offers an optimal pre-configured environment. You do not need to use any of the configuration options listed in this page when hosting your project on Meilisearch Cloud.
## All instance options
### Configuration file path
**Environment variable**: `MEILI_CONFIG_FILE_PATH`
**CLI option**: `--config-file-path`
**Default value**: `./config.toml`
**Expected value**: a filepath
Designates the location of the configuration file to load at launch.
Specifying this option in the configuration file itself will throw an error (assuming Meilisearch is able to find your configuration file).
### Database path
**Environment variable**: `MEILI_DB_PATH`
**CLI option**: `--db-path`
**Default value**: `"data.ms/"`
**Expected value**: a filepath
Designates the location where database files will be created and retrieved.
### Environment
**Environment variable**: `MEILI_ENV`
**CLI option**: `--env`
**Default value**: `development`
**Expected value**: `production` or `development`
Configures the instance's environment. Value must be either `production` or `development`.
`production`:
* Setting a [master key](/learn/security/basic_security) of at least 16 bytes is **mandatory**. If no master key is provided or if it is under 16 bytes, Meilisearch will throw an error, refuse to launch, and suggest a secure autogenerated master key
* The [search preview interface](/learn/getting_started/search_preview) is disabled
`development`:
* Setting a [master key](/learn/security/basic_security) is **optional**. If no master key is provided or if it is under 16 bytes, Meilisearch will suggest a secure autogenerated master key
* Search preview is enabled
When the server environment is set to `development`, providing a master key is not mandatory. This is useful when debugging and prototyping, but dangerous otherwise since API routes are unprotected.
### HTTP address & port binding
**Environment variable**: `MEILI_HTTP_ADDR`
**CLI option**: `--http-addr`
**Default value**: `"localhost:7700"`
**Expected value**: an HTTP address and port
Sets the HTTP address and port Meilisearch will use.
### Master key
**Environment variable**: `MEILI_MASTER_KEY`
**CLI option**: `--master-key`
**Default value**: `None`
**Expected value**: a UTF-8 string of at least 16 bytes
Sets the instance's master key, automatically protecting all routes except [`GET /health`](/reference/api/health/get-health). This means you will need a valid API key to access all other endpoints.
When `--env` is set to `production`, providing a master key is mandatory. If none is given, or it is under 16 bytes, Meilisearch will throw an error and refuse to launch.
When `--env` is set to `development`, providing a master key is optional. If none is given, all routes will be unprotected and publicly accessible.
If you do not supply a master key in `production` or `development` environments or it is under 16 bytes, Meilisearch will suggest a secure autogenerated master key you can use when restarting your instance.
[Learn more about Meilisearch's use of security keys.](/learn/security/basic_security)
### Disable analytics
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_NO_ANALYTICS`
**CLI option**: `--no-analytics`
Deactivates Meilisearch's built-in telemetry when provided.
Meilisearch automatically collects data from all instances that do not opt out using this flag. All gathered data is used solely for the purpose of improving Meilisearch, and can be [deleted at any time](/learn/resources/telemetry#how-to-delete-all-collected-data).
[Read more about our policy on data collection](/learn/resources/telemetry), or take a look at [the comprehensive list of all data points we collect](/learn/resources/telemetry#exhaustive-list-of-all-collected-data).
### Dumpless upgrade
**Environment variable**: `MEILI_EXPERIMENTAL_DUMPLESS_UPGRADE`
**CLI option**: `--experimental-dumpless-upgrade`
**Default value**: None
**Expected value**: None
Migrates the database to a new Meilisearch version after you have manually updated the binary.
[Learn more about updating Meilisearch to a new release](/learn/update_and_migration/updating).
#### Create a snapshot before a dumpless upgrade
Take a snapshot of your instance before performing a dumpless upgrade.
Dumpless upgrades are not currently atomic. It is possible for some processes to fail while Meilisearch still finalizes the upgrade. This may result in a corrupted database and data loss.
### Dump directory
**Environment variable**: `MEILI_DUMP_DIR`
**CLI option**: `--dump-dir`
**Default value**: `dumps/`
**Expected value**: a filepath pointing to a valid directory
Sets the directory where Meilisearch will create dump files.
[Learn more about creating dumps](/reference/api/backups/create-dump).
### Import dump
**Environment variable**: `MEILI_IMPORT_DUMP`
**CLI option**: `--import-dump`
**Default value**: none
**Expected value**: a filepath pointing to a `.dump` file
Imports the dump file located at the specified path. Path must point to a `.dump` file. If a database already exists, Meilisearch will throw an error and abort launch.
Meilisearch will only launch once the dump data has been fully indexed. The time this takes depends on the size of the dump file.
### Ignore missing dump
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_IGNORE_MISSING_DUMP`
**CLI option**: `--ignore-missing-dump`
Prevents Meilisearch from throwing an error when `--import-dump` does not point to a valid dump file. Instead, Meilisearch will start normally without importing any dump.
This option will trigger an error if `--import-dump` is not defined.
### Ignore dump if DB exists
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_IGNORE_DUMP_IF_DB_EXISTS`
**CLI option**: `--ignore-dump-if-db-exists`
Prevents a Meilisearch instance with an existing database from throwing an error when using `--import-dump`. Instead, the dump will be ignored and Meilisearch will launch using the existing database.
This option will trigger an error if `--import-dump` is not defined.
### Log level
**Environment variable**: `MEILI_LOG_LEVEL`
**CLI option**: `--log-level`
**Default value**: `'INFO'`
**Expected value**: one of `ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE`, or `OFF`
Defines how much detail should be present in Meilisearch's logs.
Meilisearch currently supports five log levels, listed in order of increasing verbosity, plus `'OFF'` to disable logging entirely:
* `'ERROR'`: only log unexpected events indicating Meilisearch is not functioning as expected
* `'WARN'`: log all unexpected events, regardless of their severity
* `'INFO'`: log all events. This is the default value of `--log-level`
* `'DEBUG'`: log all events and include detailed information on Meilisearch's internal processes. Useful when diagnosing issues and debugging
* `'TRACE'`: log all events and include even more detailed information on Meilisearch's internal processes. We do not advise using this level as it is extremely verbose. Use `'DEBUG'` before considering `'TRACE'`.
* `'OFF'`: disable logging
### Customize log output
**Environment variable**: `MEILI_EXPERIMENTAL_LOGS_MODE`
**CLI option**: `--experimental-logs-mode`
**Default value**: `'human'`
**Expected value**: one of `human` or `json`
Defines whether logs should output a human-readable text or JSON data.
### Max indexing memory
**Environment variable**: `MEILI_MAX_INDEXING_MEMORY`
**CLI option**: `--max-indexing-memory`
**Default value**: 2/3 of the available RAM
**Expected value**: an integer (`104857600`) or a human readable size (`'100Mb'`)
Sets the maximum amount of RAM Meilisearch can use when indexing. By default, Meilisearch uses no more than two thirds of available memory.
The value must either be given in bytes or explicitly state a base unit: `107374182400`, `'107.7Gb'`, or `'107374 Mb'`.
It is possible that Meilisearch goes over the exact RAM limit during indexing. In most contexts and machines, this should be a negligible amount with little to no impact on stability and performance.
Setting `--max-indexing-memory` to a value bigger than or equal to your machine's total memory is likely to cause your instance to crash.
### Reduce indexing memory usage
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_EXPERIMENTAL_REDUCE_INDEXING_MEMORY_USAGE`
**CLI option**: `--experimental-reduce-indexing-memory-usage`
**Default value**: `None`
Enables `MDB_WRITEMAP`, an LMDB option. Activating this option may reduce RAM usage in some UNIX and UNIX-like setups. However, it may also negatively impact write speeds and overall performance.
### Max indexing threads
**Environment variable**: `MEILI_MAX_INDEXING_THREADS`
**CLI option**: `--max-indexing-threads`
**Default value**: half of the available threads
**Expected value**: an integer
Sets the maximum number of threads Meilisearch can use during indexing. By default, the indexer avoids using more than half of a machine's total processing units. This ensures Meilisearch is always ready to perform searches, even while you are updating an index.
If `--max-indexing-threads` is higher than the real number of cores available in the machine, Meilisearch uses the maximum number of available cores.
In single-core machines, Meilisearch has no choice but to use the only core available for indexing. This may lead to a degraded search experience during indexing.
Avoid setting `--max-indexing-threads` to the total of your machine's processor cores. Though doing so might speed up indexing, it is likely to severely impact search experience.
### Payload limit size
**Environment variable**: `MEILI_HTTP_PAYLOAD_SIZE_LIMIT`
**CLI option**: `--http-payload-size-limit`
**Default value**: `104857600` (\~100MB)
**Expected value**: an integer
Sets the maximum size of [accepted payloads](/learn/getting_started/documents#dataset-format). The value must be given in bytes or explicitly state a base unit. For example, the default value can be written as `104857600` or `'100Mb'`.
### Search queue size
**Environment variable**: `MEILI_EXPERIMENTAL_SEARCH_QUEUE_SIZE`
**CLI option**: `--experimental-search-queue-size`
**Default value**: `1000`
**Expected value**: an integer
Configures the maximum number of simultaneous search requests. By default, Meilisearch queues up to 1000 search requests at any given moment. This limit exists to prevent Meilisearch from consuming an unbounded amount of RAM.
### Search query embedding cache
**Environment variable**: `MEILI_EXPERIMENTAL_EMBEDDING_CACHE_ENTRIES`
**CLI option**: `--experimental-embedding-cache-entries`
**Default value**: `0`
**Expected value**: an integer
Sets the size of the search query embedding cache. By default, Meilisearch generates an embedding for every new search query. When this option is set to an integer bigger than 0, Meilisearch returns a previously generated embedding if it recently performed the same query.
The least recently used entries are evicted first. Embedders with the same configuration share the same cache, even if they were declared in distinct indexes.
### Schedule snapshot creation
**Environment variable**: `MEILI_SCHEDULE_SNAPSHOT`
**CLI option**: `--schedule-snapshot`
**Default value**: disabled if not present, `86400` if present without a value
**Expected value**: `None` or an integer
Activates scheduled snapshots. Snapshots are disabled by default.
It is possible to use `--schedule-snapshot` without a value. If `--schedule-snapshot` is present when launching an instance but has not been assigned a value, Meilisearch takes a new snapshot every 24 hours.
For more control over snapshot scheduling, pass an integer representing the interval in seconds between each snapshot. When `--schedule-snapshot=3600`, Meilisearch takes a new snapshot every hour.
When using the configuration file, it is also possible to explicitly pass a boolean value to `schedule_snapshot`. Meilisearch takes a new snapshot every 24 hours when `schedule_snapshot=true`, and takes no snapshots when `schedule_snapshot=false`.
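For example, the schedules described above could be expressed in a configuration file as follows (a sketch; use only one of the two assignments):

```toml theme={null}
# take a new snapshot every 24 hours
schedule_snapshot = true

# alternatively, take a new snapshot every hour (interval in seconds)
# schedule_snapshot = 3600
```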
[Learn more about snapshots](/learn/data_backup/snapshots).
### Snapshot destination
**Environment variable**: `MEILI_SNAPSHOT_DIR`
**CLI option**: `--snapshot-dir`
**Default value**: `snapshots/`
**Expected value**: a filepath pointing to a valid directory
Sets the directory where Meilisearch will store snapshots.
### Uncompressed snapshots
**Environment variable**: `MEILI_EXPERIMENTAL_NO_SNAPSHOT_COMPACTION`
**CLI option**: `--experimental-no-snapshot-compaction`
Disables snapshot compaction. This may significantly speed up snapshot creation at the cost of larger snapshot files.
### Import snapshot
**Environment variable**: `MEILI_IMPORT_SNAPSHOT`
**CLI option**: `--import-snapshot`
**Default value**: `None`
**Expected value**: a filepath pointing to a snapshot file
Launches Meilisearch after importing a previously-generated snapshot at the given filepath.
This command will throw an error if:
* A database already exists
* No valid snapshot can be found in the specified path
This behavior can be modified with the [`--ignore-snapshot-if-db-exists`](#ignore-snapshot-if-db-exists) and [`--ignore-missing-snapshot`](#ignore-missing-snapshot) options, respectively.
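For instance, a sketch of restoring a snapshot while tolerating an existing database. It assumes the snapshot lives at the default location with the default file name `data.ms.snapshot`:

```bash theme={null}
# Import a snapshot; if a database already exists, ignore
# the snapshot and launch with the existing database instead
./meilisearch \
  --import-snapshot snapshots/data.ms.snapshot \
  --ignore-snapshot-if-db-exists
```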
### Ignore missing snapshot
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_IGNORE_MISSING_SNAPSHOT`
**CLI option**: `--ignore-missing-snapshot`
Prevents a Meilisearch instance from throwing an error when [`--import-snapshot`](#import-snapshot) does not point to a valid snapshot file.
This command will throw an error if `--import-snapshot` is not defined.
### Ignore snapshot if DB exists
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_IGNORE_SNAPSHOT_IF_DB_EXISTS`
**CLI option**: `--ignore-snapshot-if-db-exists`
Prevents a Meilisearch instance with an existing database from throwing an error when using `--import-snapshot`. Instead, the snapshot will be ignored and Meilisearch will launch using the existing database.
This command will throw an error if `--import-snapshot` is not defined.
### Task webhook URL
**Environment variable**: `MEILI_TASK_WEBHOOK_URL`
**CLI option**: `--task-webhook-url`
**Default value**: `None`
**Expected value**: a URL string
Notifies the configured URL whenever Meilisearch [finishes processing a task](/learn/async/asynchronous_operations#task-status) or batch of tasks. Meilisearch uses the URL as given, retaining any specified query parameters.
The webhook payload contains the list of finished tasks in [ndjson](https://github.com/ndjson/ndjson-spec). For more information, [consult the dedicated task webhook guide](/learn/async/task_webhook).
The task webhook option requires having access to a command-line interface. If you are using Meilisearch Cloud, use the [`/webhooks` API route](/reference/api/webhooks/list-webhooks) instead.
### Task webhook authorization header
**Environment variable**: `MEILI_TASK_WEBHOOK_AUTHORIZATION_HEADER`
**CLI option**: `--task-webhook-authorization-header`
**Default value**: `None`
**Expected value**: an authentication token string
Includes an authentication token in the authorization header when notifying the [webhook URL](#task-webhook-url).
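The two webhook options are typically set together at launch. A sketch, where the endpoint URL and token are placeholders:

```bash theme={null}
./meilisearch \
  --task-webhook-url 'https://example.com/meilisearch-webhook' \
  --task-webhook-authorization-header 'Bearer aSampleToken'
```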
### Maximum number of batched tasks
**Environment variable**: `MEILI_EXPERIMENTAL_MAX_NUMBER_OF_BATCHED_TASKS`
**CLI option**: `--experimental-max-number-of-batched-tasks`
**Default value**: `None`
**Expected value**: an integer
Limits the number of tasks Meilisearch performs in a single batch. This may improve stability in systems handling a large queue of resource-intensive tasks.
### Maximum batch payload size
**Environment variable**: `MEILI_EXPERIMENTAL_LIMIT_BATCHED_TASKS_TOTAL_SIZE`
**CLI option**: `--experimental-limit-batched-tasks-total-size`
**Default value**: Half of total available memory, up to a maximum of 10 GiB
**Expected value**: an integer
Sets a maximum payload size for batches in bytes. Smaller batches are less efficient, but consume less RAM and reduce immediate latency.
### Replication parameters
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_EXPERIMENTAL_REPLICATION_PARAMETERS`
**CLI option**: `--experimental-replication-parameters`
**Default value**: `None`
Helps run Meilisearch in cluster environments. It does this by modifying task handling in three ways:
* Disables task auto-deletion
* Allows you to manually set task uids by adding a custom `TaskId` header to your API requests
* Allows you to dry register tasks by specifying a `DryRun: true` header in your request
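A sketch of how these headers might be used once the option is enabled; the index name, task uid, and document are placeholders:

```bash theme={null}
# Launch with replication parameters enabled
./meilisearch --experimental-replication-parameters
# Register a task with a manually chosen uid, as a dry run
curl \
  -X POST 'http://localhost:7700/indexes/movies/documents' \
  -H 'Content-Type: application/json' \
  -H 'TaskId: 42' \
  -H 'DryRun: true' \
  --data-binary '[{"id": 1, "title": "Example"}]'
```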
### Disable new indexer
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_EXPERIMENTAL_NO_EDITION_2024_FOR_SETTINGS`
**CLI option**: `--experimental-no-edition-2024-for-settings`
**Default value**: `None`
Falls back to the previous settings indexer.
### Search personalization
**Environment variable**: `MEILI_EXPERIMENTAL_PERSONALIZATION_API_KEY`
**CLI option**: `--experimental-personalization-api-key`
**Default value**: `None`
**Expected value**: a Cohere API key
Enables search personalization. Must be a valid Cohere API key in string format.
### S3 options
#### Bucket URL
**Environment variable**: `MEILI_S3_BUCKET_URL`
**CLI option**: `--s3-bucket-url`
**Default value**: `None`
The URL for your S3 bucket. The URL must follow the format `https://s3.REGION.amazonaws.com`.
#### Bucket region
**Environment variable**: `MEILI_S3_BUCKET_REGION`
**CLI option**: `--s3-bucket-region`
**Default value**: `None`
The region of your S3 bucket. Must be a valid AWS region, such as `us-east-1`.
#### Bucket name
**Environment variable**: `MEILI_S3_BUCKET_NAME`
**CLI option**: `--s3-bucket-name`
**Default value**: `None`
The name of your S3 bucket.
#### Snapshot prefix
**Environment variable**: `MEILI_S3_SNAPSHOT_PREFIX`
**CLI option**: `--s3-snapshot-prefix`
**Default value**: `None`
The path leading to the [snapshot directory](#snapshot-destination) in your S3 bucket. Use forward slashes (`/`) as path separators.
#### Access key
**Environment variable**: `MEILI_S3_ACCESS_KEY`
**CLI option**: `--s3-access-key`
**Default value**: `None`
Your S3 bucket's access key.
#### Secret key
**Environment variable**: `MEILI_S3_SECRET_KEY`
**CLI option**: `--s3-secret-key`
**Default value**: `None`
Your S3 bucket's secret key.
#### Maximum parallel in-flight requests
**Environment variable**: `MEILI_EXPERIMENTAL_S3_MAX_IN_FLIGHT_PARTS`
**CLI option**: `--experimental-s3-max-in-flight-parts`
**Default value**: `10`
The maximum number of in-flight multipart requests Meilisearch should send to S3 in parallel.
#### Compression level
**Environment variable**: `MEILI_EXPERIMENTAL_S3_COMPRESSION_LEVEL`
**CLI option**: `--experimental-s3-compression-level`
**Default value**: `0`
The compression level to use for the snapshot tarball. Defaults to 0, no compression.
#### Signature duration
**Environment variable**: `MEILI_EXPERIMENTAL_S3_SIGNATURE_DURATION_SECONDS`
**CLI option**: `--experimental-s3-signature-duration-seconds`
**Default value**: `28800`
The maximum duration, in seconds, that processing a snapshot can take. The default of `28800` seconds corresponds to 8 hours.
#### Multipart section size
**Environment variable**: `MEILI_EXPERIMENTAL_S3_MULTIPART_PART_SIZE`
**CLI option**: `--experimental-s3-multipart-part-size`
**Default value**: `None`
The size of each multipart section. Must be greater than 10MiB and less than 8GiB. Defaults to 375MiB, which enables databases of up to 3.5TiB.
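Putting the main S3 options together, a launch command might look like this. The bucket name, prefix, and credential environment variables are placeholders:

```bash theme={null}
./meilisearch \
  --s3-bucket-url 'https://s3.us-east-1.amazonaws.com' \
  --s3-bucket-region 'us-east-1' \
  --s3-bucket-name 'my-meili-bucket' \
  --s3-snapshot-prefix 'snapshots/production' \
  --s3-access-key "$AWS_ACCESS_KEY_ID" \
  --s3-secret-key "$AWS_SECRET_ACCESS_KEY"
```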
### SSL options
#### SSL authentication path
**Environment variable**: `MEILI_SSL_AUTH_PATH`
**CLI option**: `--ssl-auth-path`
**Default value**: `None`
**Expected value**: a filepath
Enables client authentication in the specified path.
#### SSL certificates path
**Environment variable**: `MEILI_SSL_CERT_PATH`
**CLI option**: `--ssl-cert-path`
**Default value**: `None`
**Expected value**: a filepath pointing to a valid SSL certificate
Sets the server's SSL certificates.
Value must be a path to PEM-formatted certificates. The first certificate should certify the KEYFILE supplied by `--ssl-key-path`. The last certificate should be a root CA.
#### SSL key path
**Environment variable**: `MEILI_SSL_KEY_PATH`
**CLI option**: `--ssl-key-path`
**Default value**: `None`
**Expected value**: a filepath pointing to a valid SSL key file
Sets the server's SSL key files.
Value must be a path to an RSA private key or PKCS8-encoded private key, both in PEM format.
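A minimal HTTPS launch combines the certificate and key options. A sketch, assuming PEM files generated elsewhere:

```bash theme={null}
./meilisearch \
  --ssl-cert-path ./certs/cert.pem \
  --ssl-key-path ./certs/key.pem
```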
#### SSL OCSP path
**Environment variable**: `MEILI_SSL_OCSP_PATH`
**CLI option**: `--ssl-ocsp-path`
**Default value**: `None`
**Expected value**: a filepath pointing to a valid OCSP certificate
Sets the server's OCSP file. *Optional*
Reads a DER-encoded OCSP response from the specified file and staples it to the certificate.
#### SSL require auth
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_SSL_REQUIRE_AUTH`
**CLI option**: `--ssl-require-auth`
**Default value**: `None`
Makes SSL authentication mandatory.
Sends a fatal alert if the client does not complete client authentication.
#### SSL resumption
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_SSL_RESUMPTION`
**CLI option**: `--ssl-resumption`
**Default value**: `None`
Activates SSL session resumption.
#### SSL tickets
🚩 This option does not take any values. Assigning a value will throw an error. 🚩
**Environment variable**: `MEILI_SSL_TICKETS`
**CLI option**: `--ssl-tickets`
**Default value**: `None`
Activates SSL tickets.
# Enterprise and Community editions
Source: https://www.meilisearch.com/docs/learn/self_hosted/enterprise_edition
Self-hosted users can choose between the Community Edition and the Enterprise Edition. The Community Edition is free under the MIT license, while the Enterprise Edition offers advanced features under a BUSL license.
## What is the Meilisearch Community Edition?
The Meilisearch Community Edition (CE) is a free version of Meilisearch. It offers all essential Meilisearch features, such as full-text search and AI-powered search, under an MIT license.
## What is the Meilisearch Enterprise Edition?
The Enterprise Edition (EE) is a version of Meilisearch with advanced features. It is available under a BUSL license and cannot be freely used in production. EE is the Meilisearch version that powers Meilisearch Cloud.
The only feature exclusive to the Enterprise Edition is [sharding](/learn/multi_search/implement_sharding).
## When should you use each edition?
In most cases, using Meilisearch Cloud is the recommended way of integrating Meilisearch with your application.
Use the Meilisearch Community Edition when you want to host Meilisearch independently.
Meilisearch makes the Enterprise Edition binaries available for testing EE-only features before committing to a Meilisearch Cloud plan. If you want to self-host the Enterprise Edition in a production environment, [contact the sales team](mailto:sales@meilisearch.com).
# Getting started with self-hosted Meilisearch
Source: https://www.meilisearch.com/docs/learn/self_hosted/getting_started_with_self_hosted_meilisearch
Learn how to install Meilisearch, index a dataset, and perform your first search.
This quick start walks you through installing Meilisearch, adding documents, and performing your first search.
To follow this tutorial you need:
* A [command line terminal](https://www.learnenough.com/command-line-tutorial#sec-running_a_terminal)
* [cURL](https://curl.se)
Using Meilisearch Cloud? Check out the dedicated guide, [Getting started with Meilisearch Cloud](/learn/getting_started/cloud_quick_start).
## Setup and installation
First, you need to download and install Meilisearch. This command installs the latest Meilisearch version on your local machine:
```bash theme={null}
# Install Meilisearch
curl -L https://install.meilisearch.com | sh
```
The rest of this guide assumes you are using Meilisearch locally, but you may also use Meilisearch over a cloud service such as [Meilisearch Cloud](https://www.meilisearch.com/cloud).
Learn more about other installation options in the [installation guide](/learn/self_hosted/install_meilisearch_locally).
### Running Meilisearch
Next, launch Meilisearch by running the following command in your terminal:
```bash theme={null}
# Launch Meilisearch
./meilisearch --master-key="aSampleMasterKey"
```
This tutorial uses `aSampleMasterKey` as a master key, but you may change it to any alphanumeric string with 16 or more bytes. In most cases, one character corresponds to one byte.
You should see something like this in response:
```
888b d888 d8b 888 d8b 888
8888b d8888 Y8P 888 Y8P 888
88888b.d88888 888 888
888Y88888P888 .d88b. 888 888 888 .d8888b .d88b. 8888b. 888d888 .d8888b 88888b.
888 Y888P 888 d8P Y8b 888 888 888 88K d8P Y8b "88b 888P" d88P" 888 "88b
888 Y8P 888 88888888 888 888 888 "Y8888b. 88888888 .d888888 888 888 888 888
888 " 888 Y8b. 888 888 888 X88 Y8b. 888 888 888 Y88b. 888 888
888 888 "Y8888 888 888 888 88888P' "Y8888 "Y888888 888 "Y8888P 888 888
Database path: "./data.ms"
Server listening on: "localhost:7700"
```
You now have a Meilisearch instance running in your terminal window. Keep this window open for the rest of this tutorial.
The above command uses the `--master-key` configuration option to secure Meilisearch. Setting a master key is optional but strongly recommended in development environments. Master keys are mandatory in production environments.
To learn more about securing Meilisearch, refer to the [security tutorial](/learn/security/basic_security).
## Add documents
In this quick start, you will search through a collection of movies.
To follow along, first download the `movies.json` dataset file. Then, move the downloaded file into your working directory.
Meilisearch accepts data in JSON, NDJSON, and CSV formats.
Open a new terminal window and run the following command:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movies/documents?primaryKey=id' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer aSampleMasterKey' \
--data-binary @movies.json
```
```javascript JS theme={null}
// With npm:
// npm install meilisearch
// Or with pnpm:
// pnpm add meilisearch
// In your .js file:
// With the `require` syntax:
const { MeiliSearch } = require('meilisearch')
const movies = require('./movies.json')
// With the `import` syntax:
import { MeiliSearch } from 'meilisearch'
import movies from './movies.json'
const client = new MeiliSearch({
host: 'MEILISEARCH_URL',
apiKey: 'aSampleMasterKey'
})
client.index('movies').addDocuments(movies)
.then((res) => console.log(res))
```
```python Python theme={null}
# In the command line:
# pip3 install meilisearch
# In your .py file:
import meilisearch
import json
client = meilisearch.Client('MEILISEARCH_URL', 'aSampleMasterKey')
json_file = open('movies.json', encoding='utf-8')
movies = json.load(json_file)
client.index('movies').add_documents(movies)
```
```php PHP theme={null}
/**
* Using `meilisearch-php` with the Guzzle HTTP client, in the command line:
* composer require meilisearch/meilisearch-php \
* guzzlehttp/guzzle \
* http-interop/http-factory-guzzle:^1.0
*/
/**
* In your PHP file:
*/
require_once __DIR__ . '/vendor/autoload.php';

use Meilisearch\Client;

$client = new Client('MEILISEARCH_URL', 'aSampleMasterKey');

$movies_json = file_get_contents('movies.json');
$movies = json_decode($movies_json);

$client->index('movies')->addDocuments($movies);
```
```java Java theme={null}
// For Maven:
// Add the following code to the `<dependencies>` section of your project:
//
// <dependency>
//   <groupId>com.meilisearch.sdk</groupId>
//   <artifactId>meilisearch-java</artifactId>
//   <version>0.20.0</version>
//   <type>pom</type>
// </dependency>
// For Gradle
// Add the following line to the `dependencies` section of your `build.gradle`:
//
// implementation 'com.meilisearch.sdk:meilisearch-java:0.20.0'
// In your .java file:
import com.meilisearch.sdk.Client;
import com.meilisearch.sdk.Config;
import com.meilisearch.sdk.Index;
import java.nio.file.Files;
import java.nio.file.Path;
Path fileName = Path.of("movies.json");
String moviesJson = Files.readString(fileName);
Client client = new Client(new Config("MEILISEARCH_URL", "aSampleMasterKey"));
Index index = client.index("movies");
index.addDocuments(moviesJson);
```
```ruby Ruby theme={null}
# In the command line:
# bundle add meilisearch
# In your .rb file:
require 'json'
require 'meilisearch'
client = MeiliSearch::Client.new('MEILISEARCH_URL', 'aSampleMasterKey')
movies_json = File.read('movies.json')
movies = JSON.parse(movies_json)
client.index('movies').add_documents(movies)
```
```go Go theme={null}
// In the command line:
// go get -u github.com/meilisearch/meilisearch-go
// In your .go file:
package main
import (
"os"
"encoding/json"
"io"
"github.com/meilisearch/meilisearch-go"
)
func main() {
client := meilisearch.New("MEILISEARCH_URL", meilisearch.WithAPIKey("aSampleMasterKey"))
jsonFile, _ := os.Open("movies.json")
defer jsonFile.Close()
byteValue, _ := io.ReadAll(jsonFile)
var movies []map[string]interface{}
json.Unmarshal(byteValue, &movies)
_, err := client.Index("movies").AddDocuments(movies, nil)
if err != nil {
panic(err)
}
}
```
```csharp C# theme={null}
// In the command line:
// dotnet add package Meilisearch
// In your .cs file:
using System.IO;
using System.Text.Json;
using Meilisearch;
using System.Threading.Tasks;
using System.Collections.Generic;
namespace Meilisearch_demo
{
public class Movie
{
public string Id { get; set; }
public string Title { get; set; }
public string Poster { get; set; }
public string Overview { get; set; }
public IEnumerable<string> Genres { get; set; }
}
internal class Program
{
static async Task Main(string[] args)
{
MeilisearchClient client = new MeilisearchClient("MEILISEARCH_URL", "aSampleMasterKey");
var options = new JsonSerializerOptions
{
PropertyNameCaseInsensitive = true
};
string jsonString = await File.ReadAllTextAsync("movies.json");
var movies = JsonSerializer.Deserialize<IEnumerable<Movie>>(jsonString, options);
var index = client.Index("movies");
await index.AddDocumentsAsync(movies);
}
}
}
```
```rust Rust theme={null}
// In your .toml file:
[dependencies]
meilisearch-sdk = "0.32.0"
# futures: because we want to block on futures
futures = "0.3"
# serde: required if you are going to use documents
serde = { version="1.0", features = ["derive"] }
# serde_json: required in some parts of this guide
serde_json = "1.0"
// In your .rs file:
// Documents in the Rust library are strongly typed
#[derive(Serialize, Deserialize)]
struct Movie {
id: i64,
title: String,
poster: String,
overview: String,
release_date: i64,
genres: Vec<String>
}
// You will often need this `Movie` struct in other parts of this documentation. (you will have to change it a bit sometimes)
// You can also use schemaless values, by putting a `serde_json::Value` inside your own struct like this:
#[derive(Serialize, Deserialize)]
struct Movie {
id: i64,
#[serde(flatten)]
value: serde_json::Value,
}
// Then, add documents into the index:
use meilisearch_sdk::{
indexes::*,
client::*,
search::*,
settings::*
};
use serde::{Serialize, Deserialize};
use std::{io::prelude::*, fs::File};
use futures::executor::block_on;
fn main() { block_on(async move {
let client = Client::new("MEILISEARCH_URL", Some("aSampleMasterKey"));
// Reading and parsing the file
let mut file = File::open("movies.json")
.unwrap();
let mut content = String::new();
file
.read_to_string(&mut content)
.unwrap();
let movies_docs: Vec<Movie> = serde_json::from_str(&content)
.unwrap();
// Adding documents
client
.index("movies")
.add_documents(&movies_docs, None)
.await
.unwrap();
})}
```
```swift Swift theme={null}
// Add this to your `Package.swift`
dependencies: [
.package(url: "https://github.com/meilisearch/meilisearch-swift.git", from: "0.17.0")
]
// In your .swift file:
let path = Bundle.main.url(forResource: "movies", withExtension: "json")!
let documents: Data = try Data(contentsOf: path)
let client = try MeiliSearch(host: "MEILISEARCH_URL", apiKey: "aSampleMasterKey")
client.index("movies").addDocuments(documents: documents) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
// In the command line:
// dart pub add meilisearch
// In your .dart file:
import 'package:meilisearch/meilisearch.dart';
import 'dart:io';
import 'dart:convert';
var client = MeiliSearchClient('MEILISEARCH_URL', 'aSampleMasterKey');
final json = await File('movies.json').readAsString();
await client.index('movies').addDocumentsJson(json);
```
Meilisearch stores data in the form of discrete records, called [documents](/learn/getting_started/documents). Each document is an object composed of multiple fields, which are pairs of one attribute and one value:
```json theme={null}
{
"attribute": "value"
}
```
Documents are grouped into collections, called [indexes](/learn/getting_started/indexes).
The previous command added documents from `movies.json` to a new index called `movies`. It also set `id` as the primary key.
Every index must have a [primary key](/learn/getting_started/primary_key#primary-field), an attribute shared across all documents in that index. If you try adding documents to an index and even a single one is missing the primary key, none of the documents will be stored.
If you do not explicitly set the primary key, Meilisearch [infers](/learn/getting_started/primary_key#meilisearch-guesses-your-primary-key) it from your dataset.
After adding documents, you should receive a response like this:
```json theme={null}
{
"taskUid": 0,
"indexUid": "movies",
"status": "enqueued",
"type": "documentAdditionOrUpdate",
"enqueuedAt": "2021-08-11T09:25:53.000000Z"
}
```
Use the returned `taskUid` to [check the status](/reference/api/async-task-management/get-task) of your documents:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/tasks/0' \
-H 'Authorization: Bearer aSampleMasterKey'
```
```javascript JS theme={null}
client.tasks.getTask(0)
```
```python Python theme={null}
client.get_task(0)
```
```php PHP theme={null}
$client->getTask(0);
```
```java Java theme={null}
client.getTask(0);
```
```ruby Ruby theme={null}
client.task(0)
```
```go Go theme={null}
client.GetTask(0)
```
```csharp C# theme={null}
TaskInfo task = await client.GetTaskAsync(0);
```
```rust Rust theme={null}
client
.get_task(0)
.await
.unwrap();
```
```swift Swift theme={null}
client.getTask(taskUid: 0) { (result) in
switch result {
case .success(let task):
print(task)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.getTask(0);
```
Most database operations in Meilisearch are [asynchronous](/learn/async/asynchronous_operations). Rather than being processed instantly, **API requests are added to a queue and processed one at a time**.
If the document addition is successful, the response should look like this:
```json theme={null}
{
"uid": 0,
"indexUid": "movies",
"status": "succeeded",
"type": "documentAdditionOrUpdate",
"canceledBy": null,
"details": {
"receivedDocuments": 19547,
"indexedDocuments": 19547
},
"error": null,
"duration": "PT0.030750S",
"enqueuedAt": "2021-12-20T12:39:18.349288Z",
"startedAt": "2021-12-20T12:39:18.352490Z",
"finishedAt": "2021-12-20T12:39:18.380038Z"
}
```
If `status` is `enqueued` or `processing`, all you have to do is wait a short time and check again. Proceed to the next step once the task `status` has changed to `succeeded`.
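If you are scripting this check, a small polling loop can wait for the task to finish. A sketch, assuming a local instance and the `jq` JSON processor:

```bash theme={null}
# Poll task 0 until its status is no longer enqueued or processing
until [ "$(curl -s 'http://localhost:7700/tasks/0' \
  -H 'Authorization: Bearer aSampleMasterKey' | jq -r '.status')" = "succeeded" ]; do
  sleep 1
done
```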
## Search
Now that you have Meilisearch set up, you can start searching!
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movies/search' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer aSampleMasterKey' \
--data-binary '{ "q": "botman" }'
```
```javascript JS theme={null}
client.index('movies').search('botman').then((res) => console.log(res))
```
```python Python theme={null}
client.index('movies').search('botman')
```
```php PHP theme={null}
$client->index('movies')->search('botman');
```
```java Java theme={null}
client.index("movies").search("botman");
```
```ruby Ruby theme={null}
client.index('movies').search('botman')
```
```go Go theme={null}
client.Index("movies").Search("botman", &meilisearch.SearchRequest{})
```
```csharp C# theme={null}
MeilisearchClient client = new MeilisearchClient("MEILISEARCH_URL", "aSampleMasterKey");
var index = client.Index("movies");
var movies = await index.SearchAsync<Movie>("botman");
foreach (var movie in movies.Hits)
{
Console.WriteLine(movie.Title);
}
```
```rust Rust theme={null}
// You can build a `SearchQuery` and execute it later
// (`movies` is the index the documents were added to):
let movies = client.index("movies");
let query: SearchQuery = SearchQuery::new(&movies)
.with_query("botman")
.build();
let results: SearchResults<Movie> = client
.index("movies")
.execute_query(&query)
.await
.unwrap();
// You can build a `SearchQuery` and execute it directly:
let results: SearchResults<Movie> = SearchQuery::new(&movies)
.with_query("botman")
.execute()
.await
.unwrap();
// You can search in an index directly:
let results: SearchResults<Movie> = client
.index("movies")
.search()
.with_query("botman")
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
client.index("movies").search(SearchParameters(query: "botman")) { (result) in
switch result {
case .success(let searchResult):
print(searchResult)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.index('movies').search('botman');
```
This tutorial queries Meilisearch with the master key. In production environments, this is a security risk. Prefer using API keys to access Meilisearch's API in any public-facing application.
In the above code sample, the parameter `q` represents the search query. This query instructs Meilisearch to search for `botman` in the documents you added in [the previous step](#add-documents):
```json theme={null}
{
"hits": [
{
"id": 29751,
"title": "Batman Unmasked: The Psychology of the Dark Knight",
"poster": "https://image.tmdb.org/t/p/w1280/jjHu128XLARc2k4cJrblAvZe0HE.jpg",
"overview": "Delve into the world of Batman and the vigilante justice tha",
"release_date": "2008-07-15"
},
{
"id": 471474,
"title": "Batman: Gotham by Gaslight",
"poster": "https://image.tmdb.org/t/p/w1280/7souLi5zqQCnpZVghaXv0Wowi0y.jpg",
"overview": "ve Victorian Age Gotham City, Batman begins his war on crime",
"release_date": "2018-01-12"
},
…
],
"estimatedTotalHits": 66,
"query": "botman",
"limit": 20,
"offset": 0,
"processingTimeMs": 12
}
```
By default, Meilisearch only returns the first 20 results for a search query. You can change this using the [`limit` parameter](/reference/api/search/search-with-post#body-limit).
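For example, to retrieve only the first five results, add `limit` to the request body:

```bash theme={null}
curl \
  -X POST 'MEILISEARCH_URL/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer aSampleMasterKey' \
  --data-binary '{ "q": "botman", "limit": 5 }'
```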
## What's next?
You now know how to install Meilisearch, create an index, add documents, check the status of an asynchronous task, and make a search request.
If you'd like to search through the documents you just added using a clean browser interface rather than the terminal, you can do so with [our built-in search preview](/learn/getting_started/search_preview). You can also [learn how to quickly build a front-end interface](/guides/front_end/front_end_integration) of your own.
For a more advanced approach, consult the [API reference](/reference/api/requests).
# Install Meilisearch locally
Source: https://www.meilisearch.com/docs/learn/self_hosted/install_meilisearch_locally
Use Meilisearch with either Meilisearch Cloud, another cloud service, or install it locally.
## Meilisearch Cloud
[Meilisearch Cloud](https://www.meilisearch.com/cloud) simplifies installing, maintaining, and updating Meilisearch. [Get started with a 14-day free trial](https://cloud.meilisearch.com/register).
Take a look at the [Meilisearch Cloud tutorial](/learn/getting_started/cloud_quick_start) for more information on setting up and using Meilisearch's cloud service.
## Local installation
Download the **latest stable release** of Meilisearch with **cURL**.
Launch Meilisearch to start the server.
```bash theme={null}
# Install Meilisearch
curl -L https://install.meilisearch.com | sh
# Launch Meilisearch
./meilisearch
```
Download the **latest stable release** of Meilisearch with **[Homebrew](https://brew.sh/)**, a package manager for macOS.
Launch Meilisearch to start the server.
```bash theme={null}
# Update brew and install Meilisearch
brew update && brew install meilisearch
# Launch Meilisearch
meilisearch
```
When using **Docker**, you can run [any tag available in our official Docker image](https://hub.docker.com/r/getmeili/meilisearch/tags).
These commands launch the **latest stable release** of Meilisearch.
```bash theme={null}
# Fetch the latest version of Meilisearch image from DockerHub
docker pull getmeili/meilisearch:v1.16
# Launch Meilisearch in development mode with a master key
docker run -it --rm \
-p 7700:7700 \
-e MEILI_ENV='development' \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:v1.16
# Use ${pwd} instead of $(pwd) in PowerShell
```
You can learn more about [using Meilisearch with Docker in our dedicated guide](/guides/docker).
Download the **latest stable release** of Meilisearch with **APT**.
Launch Meilisearch to start the server.
```bash theme={null}
# Add Meilisearch package
echo "deb [trusted=yes] https://apt.fury.io/meilisearch/ /" | sudo tee /etc/apt/sources.list.d/fury.list
# Update APT and install Meilisearch
sudo apt update && sudo apt install meilisearch
# Launch Meilisearch
meilisearch
```
Meilisearch is written in Rust. To compile it, [install the Rust toolchain](https://www.rust-lang.org/tools/install).
Once the Rust toolchain is installed, clone the repository to your local system and make it your working directory.
```bash theme={null}
git clone https://github.com/meilisearch/meilisearch
cd meilisearch
```
Choose the release you want to use. You can find the full list [here](https://github.com/meilisearch/meilisearch/releases).
In the cloned repository, run the following command to access the most recent version of Meilisearch:
```bash theme={null}
git checkout latest
```
Finally, update the Rust toolchain, compile the project, and execute the binary.
```bash theme={null}
# Update the Rust toolchain to the latest version
rustup update
# Compile the project
cargo build --release
# Execute the binary
./target/release/meilisearch
```
To install Meilisearch on Windows, you can:
* Use Docker (see "Docker" tab above)
* Download the latest binary (see "Direct download" tab above)
* Use the installation script (see "cURL" tab above) if you have installed [Cygwin](https://www.cygwin.com/), [WSL](https://learn.microsoft.com/en-us/windows/wsl/), or equivalent
* Compile from source (see "Source" tab above)
To learn more about the Windows command prompt, follow this [introductory guide](https://www.makeuseof.com/tag/a-beginners-guide-to-the-windows-command-line/).
If none of the other installation options work for you, you can always download the Meilisearch binary directly on GitHub.
Go to the [latest Meilisearch release](https://github.com/meilisearch/meilisearch/releases/latest), scroll down to "Assets", and select the binary corresponding to your operating system.
```bash theme={null}
# Rename binary to meilisearch. Replace {meilisearch_os} with the name of the downloaded binary
mv {meilisearch_os} meilisearch
# Give the binary execute permission
chmod +x meilisearch
# Launch Meilisearch
./meilisearch
```
## Installing older versions of Meilisearch
We discourage the use of older Meilisearch versions. Before installing an older version, please [contact support](https://discord.meilisearch.com) to check if the latest version might work as well.
Download the binary of a specific version under "Assets" on our [GitHub changelog](https://github.com/meilisearch/meilisearch/releases).
```bash theme={null}
# Replace {meilisearch_version} and {meilisearch_os} with the specific version and OS you want to download
# For example, if you want to download v1.0 on macOS,
# replace {meilisearch_version} and {meilisearch_os} with v1.0 and meilisearch-macos-amd64 respectively
curl -OL https://github.com/meilisearch/meilisearch/releases/download/{meilisearch_version}/{meilisearch_os}
# Rename binary to meilisearch. Replace {meilisearch_os} with the name of the downloaded binary
mv {meilisearch_os} meilisearch
# Give the binary execute permission
chmod +x meilisearch
# Launch Meilisearch
./meilisearch
```
When using **Docker**, you can run [any tag available in our official Docker image](https://hub.docker.com/r/getmeili/meilisearch/tags).
```bash theme={null}
# Fetch specific version of Meilisearch image from DockerHub. Replace vX.Y.Z with the version you want to use
docker pull getmeili/meilisearch:vX.Y.Z
# Launch Meilisearch in development mode with a master key
docker run -it --rm \
-p 7700:7700 \
-e MEILI_ENV='development' \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:vX.Y.Z
# Use ${pwd} instead of $(pwd) in PowerShell
```
Learn more about [using Meilisearch with Docker in our dedicated guide](/guides/docker).
Meilisearch is written in Rust. To compile it, [install the Rust toolchain](https://www.rust-lang.org/tools/install).
Once the Rust toolchain is installed, clone the repository on your local system and change it to your working directory.
```bash theme={null}
git clone https://github.com/meilisearch/meilisearch
cd meilisearch
```
Choose the release you want to use. You can find the full list [here](https://github.com/meilisearch/meilisearch/releases).
In the cloned repository, run the following command to access a specific version of Meilisearch:
```bash theme={null}
# Replace vX.Y.Z with the specific version you want to use
git checkout vX.Y.Z
```
Finally, update the Rust toolchain, compile the project, and execute the binary.
```bash theme={null}
# Update the Rust toolchain to the latest version
rustup update
# Compile the project
cargo build --release
# Execute the binary
./target/release/meilisearch
```
Download the binary of a specific version under "Assets" on our [GitHub changelog](https://github.com/meilisearch/meilisearch/releases).
```bash theme={null}
# Rename binary to meilisearch. Replace {meilisearch_os} with the name of the downloaded binary
mv {meilisearch_os} meilisearch
# Give the binary execute permission
chmod +x meilisearch
# Launch Meilisearch
./meilisearch
```
# Supported operating systems
Source: https://www.meilisearch.com/docs/learn/self_hosted/supported_os
Meilisearch officially supports Windows, macOS, and many Linux distributions.
Meilisearch officially supports Windows, macOS, and many Linux distributions. Consult the [installation guide](/learn/self_hosted/install_meilisearch_locally) for more instructions.
Meilisearch binaries might still run in unsupported environments, though without official support.
Use [Meilisearch Cloud](https://www.meilisearch.com/cloud) to integrate Meilisearch with applications hosted in unsupported operating systems.
## Linux
The Meilisearch binary works on all Linux distributions with `amd64/x86_64` or `aarch64/arm64` architecture using glibc 2.35 and later. You can check your glibc version using:
```
ldd --version
```
## macOS
The Meilisearch binary works with macOS 14 Sonoma and later with `amd64` or `arm64` architecture. Older macOS versions might be compatible with Meilisearch but are not officially supported.
## Windows
The Meilisearch binary works on Windows Server 2022 and later.
It is likely the Meilisearch binary also works on Windows 10 and later. However, due to the differences between desktop Windows and Windows Server, Meilisearch does not officially support desktop versions of Windows.
## Troubleshooting
If the provided [binaries](https://github.com/meilisearch/meilisearch/releases) do not work on your operating system, try building Meilisearch [from source](/learn/self_hosted/install_meilisearch_locally#local-installation). If compilation fails, Meilisearch is not compatible with your machine.
# Meilisearch Cloud teams
Source: https://www.meilisearch.com/docs/learn/teams/teams
Meilisearch Cloud teams facilitate collaboration between project stakeholders with different skillsets and responsibilities.
Meilisearch Cloud teams are groups of users who all have access to a specific set of projects. This feature is designed to facilitate collaboration between project stakeholders with different skillsets and responsibilities.
When you open a new account, Meilisearch Cloud automatically creates a default team. A team may have any number of team members.
## Team roles and permissions
There are two types of team members in Meilisearch Cloud teams: owners and regular team members.
Team owners have full control over a project's administrative details. Only team owners may change a project's billing plan or update its billing information. Additionally, only team owners may rename a team, add and remove members from a team, or transfer team ownership.
A team may only have one owner.
## Multiple teams in the same account
If you are responsible for different applications belonging to multiple organizations, it might be useful to create separate teams. There is no limit to the number of teams a single user may create.
It is not possible to delete a team once you have created it. However, Meilisearch Cloud billing is based on projects and there are no costs associated with creating multiple teams.
# Migrating from Algolia to Meilisearch
Source: https://www.meilisearch.com/docs/learn/update_and_migration/algolia_migration
This guide will take you step-by-step through the creation of a Node.js script to upload data indexed by Algolia to Meilisearch.
This page aims to help current users of Algolia make the transition to Meilisearch.
For a high-level comparison of the two search companies and their products, see [our analysis of the search market](/learn/resources/comparison_to_alternatives#meilisearch-vs-algolia).
## Overview
This guide will take you step-by-step through the creation of a [Node.js](https://nodejs.org/en/) script to upload Algolia index data to Meilisearch. [You can also skip directly to the finished script](#finished-script).
The migration process consists of three steps:
1. [Export your data stored in Algolia](#export-your-algolia-data)
2. [Import your data into Meilisearch](#import-your-data-into-meilisearch)
3. [Configure your Meilisearch index settings (optional)](#configure-your-index-settings)
To help with the transition, we have also included a comparison of Meilisearch and Algolia's [API methods](#api-methods) and [front-end components](#front-end-components).
Before continuing, make sure you have both Meilisearch and Node.js installed and have access to a command-line terminal. If you're unsure how to install Meilisearch, see our [quick start](/learn/self_hosted/getting_started_with_self_hosted_meilisearch).
This guide was tested with the following package versions:
* [`node.js`](https://nodejs.org/en/): `16.16` or later
* [`algoliasearch`](https://www.npmjs.com/package/algoliasearch): `4.13`
* [`meilisearch-js`](https://www.npmjs.com/package/meilisearch): compatible with Meilisearch v1.0 and above
* [`meilisearch`](https://github.com/meilisearch/meilisearch): v1.0 or later
## Export your Algolia data
### Initialize project
Start by creating a directory `algolia-meilisearch-migration` and generating a `package.json` file with `npm`:
```bash theme={null}
mkdir algolia-meilisearch-migration
cd algolia-meilisearch-migration
npm init -y
```
This will set up the environment we need to install dependencies.
Next, create a `script.js` file:
```bash theme={null}
touch script.js
```
This file will contain our migration script.
### Install dependencies
To get started, you'll need two different packages. The first is `algoliasearch`, the JavaScript client for the Algolia API, and the second is `meilisearch`, the JavaScript client for the Meilisearch API.
```bash theme={null}
npm install algoliasearch@4.13 meilisearch
```
### Create Algolia client
You'll need your **Application ID** and **Admin API Key** to start the Algolia client. Both can be found in your [Algolia account](https://www.algolia.com/account/api-keys).
Paste the below code in `script.js`:
```js theme={null}
const algoliaSearch = require("algoliasearch");
const algoliaClient = algoliaSearch(
"APPLICATION_ID",
"ADMIN_API_KEY"
);
const algoliaIndex = algoliaClient.initIndex("INDEX_NAME");
```
Replace `APPLICATION_ID` and `ADMIN_API_KEY` with your Algolia application ID and admin API key respectively.
Replace `INDEX_NAME` with the name of the Algolia index you would like to migrate to Meilisearch.
### Fetch data from Algolia
To fetch all Algolia index data at once, use Algolia's [`browseObjects`](https://www.algolia.com/doc/api-reference/api-methods/browse/) method.
```js theme={null}
let records = [];
await algoliaIndex.browseObjects({
batch: (hits) => {
records = records.concat(hits);
}
});
```
The `batch` callback method is invoked on each batch of hits, and its contents are concatenated into the `records` array. We will use `records` again later in the upload process. Note that `await` requires an async context; the [finished script](#finished-script) wraps everything in an async IIFE.
## Import your data into Meilisearch
### Create Meilisearch client
Create a Meilisearch client by passing the host URL and API key of your Meilisearch instance. The easiest option is to use the automatically generated [admin API key](/learn/security/basic_security).
```js theme={null}
const { MeiliSearch } = require("meilisearch");
const meiliClient = new MeiliSearch({
host: "MEILI_HOST",
apiKey: "MEILI_API_KEY",
});
const meiliIndex = meiliClient.index("MEILI_INDEX_NAME");
```
Replace `MEILI_HOST`,`MEILI_API_KEY`, and `MEILI_INDEX_NAME` with your Meilisearch host URL, Meilisearch API key, and the index name where you would like to add documents. Meilisearch will create the index if it doesn't already exist.
### Upload data to Meilisearch
Next, use the Meilisearch JavaScript method [`addDocumentsInBatches`](https://github.com/meilisearch/meilisearch-js#documents-) to upload all your records in batches of 100,000.
```js theme={null}
const BATCH_SIZE = 100000;
await meiliIndex.addDocumentsInBatches(records, BATCH_SIZE);
```
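Under the hood, batching simply splits `records` into consecutive slices before sending each one as a separate documents task. A minimal sketch of that chunking logic (a plain helper written for illustration, not part of the `meilisearch` client):

```javascript
// Hypothetical helper illustrating how an array of records is split into batches
function chunk(records, batchSize) {
  const batches = [];
  for (let i = 0; i < records.length; i += batchSize) {
    batches.push(records.slice(i, i + batchSize));
  }
  return batches;
}

// 250,000 records at a batch size of 100,000 yields three batches
const batches = chunk(
  Array.from({ length: 250000 }, (_, i) => ({ id: i })),
  100000
);
console.log(batches.length); // 3
console.log(batches[2].length); // 50000
```

Smaller batches reduce peak memory usage on both sides at the cost of more HTTP requests.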
That's all! When you're ready to run the script, enter the below command:
```bash theme={null}
node script.js
```
### Finished script
```js theme={null}
const algoliaSearch = require("algoliasearch");
const { MeiliSearch } = require("meilisearch");
const BATCH_SIZE = 100000;
(async () => {
const algoliaClient = algoliaSearch("APPLICATION_ID", "ADMIN_API_KEY");
const algoliaIndex = algoliaClient.initIndex("INDEX_NAME");
let records = [];
await algoliaIndex.browseObjects({
batch: (hits) => {
records = records.concat(hits);
}
});
const meiliClient = new MeiliSearch({
host: "MEILI_HOST",
apiKey: "MEILI_API_KEY",
});
const meiliIndex = meiliClient.index("MEILI_INDEX_NAME");
await meiliIndex.addDocumentsInBatches(records, BATCH_SIZE);
})();
```
## Configure your index settings
Meilisearch's default settings are designed to deliver a fast and relevant search experience that works for most use cases.
To customize your index settings, we recommend following [this guide](/learn/getting_started/indexes#index-settings). To learn more about the differences between settings in Algolia and Meilisearch, read on.
### Index settings vs. search parameters
One of the key usage differences between Algolia and Meilisearch is how they approach index settings and search parameters.
**In Algolia,** [API parameters](https://www.algolia.com/doc/api-reference/api-parameters/) are a flexible category that includes both index settings and search parameters. Many API parameters can be used both at indexing time, to set default behavior, and at search time, to override that behavior.
**In Meilisearch,** [index settings](/reference/api/settings/list-all-settings) and [search parameters](/reference/api/search/search-with-post) are two distinct categories. Settings affect all searches on an index, while parameters affect the results of a single search.
Some Meilisearch parameters require index settings to be configured beforehand. For example, you must first configure the index setting `sortableAttributes` to use the search parameter `sort`. However, unlike in Algolia, an index setting can never be used as a parameter and vice versa.
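As an illustration of that dependency (the attribute names here are invented), `sortableAttributes` lives in the index settings and is configured once, while `sort` is sent with each individual search, and every attribute it references must already be declared in the setting:

```javascript
// Index setting, configured once on the index (hypothetical attributes)
const settings = { sortableAttributes: ["price", "release_date"] };

// Search parameter, sent with an individual query
const searchParams = { q: "wireless headphones", sort: ["price:asc"] };

// Every attribute referenced by `sort` must appear in `sortableAttributes`
const sortedAttributes = searchParams.sort.map((rule) => rule.split(":")[0]);
console.log(
  sortedAttributes.every((a) => settings.sortableAttributes.includes(a))
); // true
```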
### Settings and parameters comparison
The below table compares Algolia's **API parameters** with the equivalent Meilisearch **setting** or **search parameter**.
| Algolia | Meilisearch |
| :---------------------------------- | :------------------------------------------------------------------------------- |
| `query` | `q` |
| `attributesToRetrieve` | `attributesToRetrieve` |
| `filters` | `filter` |
| `facets` | `facetDistribution` |
| `attributesToHighlight` | `attributesToHighlight` |
| `offset` | `offset` |
| `length` | `limit` |
| `typoTolerance` | `typoTolerance` |
| `snippetEllipsisText` | `cropMarker` |
| `searchableAttributes` | `searchableAttributes` |
| `attributesForFaceting` | `filterableAttributes` |
| `unretrievableAttributes` | No direct equivalent; achieved by removing attributes from `displayedAttributes` |
| `attributesToRetrieve` | `displayedAttributes` |
| `attributeForDistinct` | `distinctAttribute` |
| `ranking` | `rankingRules` |
| `customRanking` | Integrated within `rankingRules` |
| `removeStopWords` | `stopWords` |
| `synonyms` | `synonyms` |
| Sorting (using replicas)            | `sortableAttributes` (no replicas required)                                       |
| `removeWordsIfNoResults` | Automatically supported, but not customizable |
| `disableTypoToleranceOnAttributes` | `typoTolerance.disableOnAttributes` |
| `separatorsToIndex`                 | Not supported                                                                     |
| `disablePrefixOnAttributes`         | Not supported                                                                     |
| `relevancyStrictness`               | Not supported                                                                     |
| `maxValuesPerFacet` | `maxValuesPerFacet` |
| `sortFacetValuesBy` | `sortFacetValuesBy` |
| `restrictHighlightAndSnippetArrays` | Not supported                                                                     |
## API methods
This section compares Algolia and Meilisearch's respective API methods, using JavaScript for reference.
| Method | Algolia | Meilisearch |
| :-------------------- | :---------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------- |
| Index Instantiation | `client.initIndex()` Here, client is an Algolia instance. | `client.index()` Here, client is a Meilisearch instance. |
| Create Index | Algolia automatically creates an index the first time you add a record or settings. | The same applies to Meilisearch, but users can also create an index explicitly: `client.createIndex(string indexName)` |
| Get All Indexes | `client.listIndices()` | `client.getIndexes()` |
| Get Single Index | No method available | `client.getIndex(string indexName)` |
| Delete Index | `index.delete()` | `client.deleteIndex(string indexName)` |
| Get Index Settings | `index.getSettings()` | `index().getSettings()` |
| Update Index Settings | `index.setSettings(object settings)` | `index().updateSettings(object settings)` |
| Search Method | `index.search(string query, { searchParameters, requestOptions })` | `index.search(string query, object searchParameters)` |
| Add Object | `index.saveObjects(array objects)` | `index.addDocuments(array objects)` |
| Partial Update Object | `index.partialUpdateObjects(array objects)` | `index.updateDocuments(array objects)` |
| Delete All Objects | `index.deleteObjects(array objectIDs)` | `index.deleteAllDocuments()` |
| Delete One Object | `index.deleteObject(string objectID)` | `index.deleteDocument(string id)` |
| Get All Objects | `index.getObjects(array objectIDs)` | `index.getDocuments(object params)` |
| Get Single Object | `index.getObject(str objectID)` | `index.getDocument(string id)` |
| Get API Keys | `client.listApiKeys()` | `client.getKeys()` |
| Get API Key Info | `client.getApiKey(string apiKey)` | `client.getKey(string apiKey)` |
| Create API Key | `client.addApiKey(array acl)` | `client.createKey(object configuration)` |
| Update API Key | `client.updateApiKey(string apiKey, object configuration)` | `client.updateKey(string apiKey, object configuration)` |
| Delete API Key | `client.deleteApiKey(string apiKey)` | `client.deleteKey(string apiKey)` |
## Front-end components
[InstantSearch](https://github.com/algolia/instantsearch.js) is a collection of open-source tools maintained by Algolia and used to generate front-end search UI components. To use InstantSearch with Meilisearch, you must use [Instant Meilisearch](https://github.com/meilisearch/meilisearch-js-plugins/tree/main/packages/instant-meilisearch).
Instant Meilisearch is a plugin connecting your Meilisearch instance with InstantSearch, giving you access to many of the same front-end components as Algolia users. You can find an up-to-date list of [the components supported by Instant Meilisearch](https://github.com/meilisearch/meilisearch-js-plugins/tree/main/packages/instant-meilisearch#-api-resources) in the GitHub project's README.
# Migrating to Meilisearch Cloud
Source: https://www.meilisearch.com/docs/learn/update_and_migration/migrating_cloud
Meilisearch Cloud is the recommended way of using Meilisearch. This guide walks you through migrating Meilisearch from a self-hosted installation to Meilisearch Cloud.
Meilisearch Cloud is the recommended way of using Meilisearch. This guide walks you through migrating Meilisearch from a self-hosted installation to Meilisearch Cloud.
## Requirements
To follow this guide you need:
* A running Meilisearch instance
* A command-line terminal
* A Meilisearch Cloud account
## Export a dump from your self-hosted installation
To migrate Meilisearch, you must first [export a dump](/learn/data_backup/dumps). A dump is a compressed file containing all your indexes, documents, and settings.
To export a dump, make sure your self-hosted Meilisearch instance is running. Then, open your terminal and run the following command, replacing `MEILISEARCH_URL` with your instance's address:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/dumps'
```
```javascript JS theme={null}
client.createDump()
```
```python Python theme={null}
client.create_dump()
```
```php PHP theme={null}
$client->createDump();
```
```java Java theme={null}
client.createDump();
```
```ruby Ruby theme={null}
client.create_dump
```
```go Go theme={null}
resp, err := client.CreateDump()
```
```csharp C# theme={null}
await client.CreateDumpAsync();
```
```rust Rust theme={null}
client
.create_dump()
.await
.unwrap();
```
```swift Swift theme={null}
client.createDump { result in
switch result {
case .success(let dumpStatus):
print(dumpStatus)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
await client.createDump();
```
Meilisearch will return a summarized task object and begin creating the dump. [Use the returned object's `taskUid` to monitor its progress.](/learn/async/asynchronous_operations)
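The monitoring loop can be sketched as follows. `getTask` here is a stand-in for however you fetch a task in your stack, for example wrapping `client.getTask(taskUid)` from `meilisearch-js` or an HTTP call to `GET /tasks/:taskUid`:

```javascript
// Minimal sketch: poll a task until it reaches a final status.
// `getTask` is an injected async function returning the task object.
async function waitForTask(getTask, taskUid, intervalMs = 500, timeoutMs = 60000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const task = await getTask(taskUid);
    if (["succeeded", "failed", "canceled"].includes(task.status)) {
      return task;
    }
    // Still "enqueued" or "processing": wait before polling again
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`task ${taskUid} still processing after ${timeoutMs}ms`);
}
```

When the resolved task's `status` is `"succeeded"`, the dump file is ready to download.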
Once the task has been completed, you can find the dump in your project's dump directory. By default, this is `/dumps`.
Instance configuration options and experimental features that can only be activated at launch are not included in dumps.
Once you have successfully migrated your data to Meilisearch Cloud, use the project overview interface to reactivate available options. Not all instance options are supported in the Cloud.
## Create a Meilisearch Cloud project and import dump
Navigate to Meilisearch Cloud in your browser and log in. If you don't have a Meilisearch Cloud account yet, [create one for free](https://cloud.meilisearch.com/register?utm_campaign=oss\&utm_source=docs\&utm_medium=migration-guide).
You can only import dumps into new Meilisearch Cloud projects. If this is your first time using Meilisearch Cloud, create a new project by clicking on the "Create a project" button. Otherwise, click on the "New project" button.
Fill in your project name, choose a server location, and select your plan. Then, click on the "Import .dump" button and select the dump file you generated in the previous step.
Meilisearch will start creating a new project and importing your data. This might take a few moments depending on the size of your dataset. Monitor the project creation status in the project overview page.
Meilisearch Cloud automatically generates a new master key during project creation. If you are using [security keys](/learn/security/basic_security), update your application so it uses the newly created Meilisearch Cloud API keys.
## Search preview
Once your project is ready, click on it to enter the project overview. From there, click on "Search preview" in the top bar menu. This will bring you to the search preview interface. Run a few test searches to ensure all data was migrated successfully.
Congratulations, you have now migrated to Meilisearch Cloud, the recommended way to use Meilisearch. If you encountered any problems during this process, reach out to our support team on [Discord](https://discord.meilisearch.com).
# Update to the latest Meilisearch version
Source: https://www.meilisearch.com/docs/learn/update_and_migration/updating
Learn how to migrate to the latest Meilisearch release.
Currently, Meilisearch databases are only compatible with the version of Meilisearch used to create them. The following guide will walk you through using a [dump](/learn/data_backup/dumps) to migrate an existing database from an older version of Meilisearch to the most recent one.
If you're updating your Meilisearch instance on cloud platforms like DigitalOcean or AWS, ensure that you can connect to your cloud instance via SSH. Depending on the user you are connecting with (root, admin, etc.), you may need to prefix some commands with `sudo`.
If migrating to the latest version of Meilisearch will cause you to skip multiple versions, this may require changes to your codebase. [Refer to our version-specific update warnings for more details](#version-specific-warnings).
## Updating Meilisearch Cloud
Log into your Meilisearch Cloud account and navigate to the project you want to update.
Click on the project you want to update. Look for the "General settings" section at the top of the page.
Whenever a new version of Meilisearch is available, you will see an update button next to the "Meilisearch version" field.
To update to the latest Meilisearch release, click the "Update to vX.Y.Z" button.
This will open a pop-up with more information about the update process. Read it, then click on "Update". The "Status" of your project will change from "running" to "updating".
Once the project has been successfully updated, you will receive an email confirming the update and "Status" will change back to "running".
## Updating a self-hosted Meilisearch instance
You may update a self-hosted instance in one of two ways: with or without a dump.
### Dumpless upgrade
Dumpless upgrades are available when upgrading from Meilisearch >=v1.12 to Meilisearch >=v1.13.
#### Step 1: Make a backup
Dumpless upgrades are an experimental feature. On rare occasions, an upgrade may partially fail and result in a corrupted database. To prevent data loss, create a snapshot of your instance:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/snapshots'
```
```javascript JS theme={null}
client.createSnapshot()
```
```python Python theme={null}
client.create_snapshot()
```
```php PHP theme={null}
$client->createSnapshot();
```
```java Java theme={null}
client.createSnapshot();
```
```ruby Ruby theme={null}
client.create_snapshot
```
```go Go theme={null}
client.CreateSnapshot()
```
```csharp C# theme={null}
await client.CreateSnapshotAsync();
```
```rust Rust theme={null}
client
.create_snapshot()
.await
.unwrap();
```
```swift Swift theme={null}
let task = try await self.client.createSnapshot()
```
Meilisearch will respond with a summarized task object. Use its `taskUid` to monitor the snapshot creation status. Once the task is completed, proceed to the next step.
#### Step 2: Stop the Meilisearch instance
Next, stop your Meilisearch instance.
If you're running Meilisearch locally, stop the program by pressing `Ctrl + c`.
If you're running Meilisearch as a `systemctl` service, connect via SSH to your cloud instance and execute the following command to stop Meilisearch:
```bash theme={null}
systemctl stop meilisearch
```
You may need to prefix the above command with `sudo` if you are not connected as root.
#### Step 3: Install the new Meilisearch binary
Install the latest version of Meilisearch using:
```bash theme={null}
curl -L https://install.meilisearch.com | sh
```
```sh theme={null}
# replace MEILISEARCH_VERSION with the version of your choice. Use the format: `vX.X.X`
curl "https://github.com/meilisearch/meilisearch/releases/download/MEILISEARCH_VERSION/meilisearch-linux-amd64" --output meilisearch --location --show-error
```
Give execute permission to the Meilisearch binary:
```
chmod +x meilisearch
```
For **cloud platforms**, move the new Meilisearch binary to the `/usr/bin` directory:
```
mv meilisearch /usr/bin/meilisearch
```
#### Step 4: Relaunch Meilisearch
Execute the command below to relaunch Meilisearch and begin the upgrade:
```bash theme={null}
./meilisearch --experimental-dumpless-upgrade
```
```sh theme={null}
meilisearch --experimental-dumpless-upgrade
```
Meilisearch should launch normally and immediately create a new `UpgradeDatabase` task. This task is processed immediately and cannot be canceled. You may follow its progress by using the `GET /tasks?types=UpgradeDatabase` endpoint to obtain its `taskUid`, then querying `GET /tasks/TASK_UID`.
While the task is processing, you may continue making search queries. You may also enqueue new tasks. Meilisearch will only process new tasks once `UpgradeDatabase` is completed.
#### Rolling back an update
If the upgrade is taking too long, or if after the upgrade is completed its task status is set to `failed`, you can cancel the upgrade task.
Cancelling the update task automatically rolls back your database to its state before the upgrade began.
After launching Meilisearch with `--experimental-dumpless-upgrade` flag:
1. Cancel the `upgradeDatabase` task
2. If you cancelled the update before it failed, skip to the next step. If the update failed, relaunch Meilisearch using the binary of the version you were upgrading to
3. Wait for Meilisearch to process your cancellation request
4. Replace the new binary with the binary of the previous version
5. Relaunch Meilisearch
If you are upgrading Meilisearch to \<= v1.14, you must instead [restart your instance from the snapshot](/learn/data_backup/snapshots#starting-from-a-snapshot) you generated during step 1. You may then retry the upgrade, or upgrade using a dump. You are also welcome to open an issue on the [Meilisearch repository](https://github.com/meilisearch/meilisearch).
### Using a dump
#### Step 1: Export data
##### Verify your database version
First, verify the version of Meilisearch that's compatible with your database using the get version endpoint:
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/version' \
-H 'Authorization: Bearer API_KEY'
```
The response should look something like this:
```json theme={null}
{
"commitSha": "stringOfLettersAndNumbers",
"commitDate": "YYYY-MM-DDTimestamp",
"pkgVersion": "x.y.z"
}
```
Proceed to [creating the dump](/reference/api/backups/create-dump).
##### Create the dump
Before creating your dump, make sure that your [dump directory](/learn/self_hosted/configure_meilisearch_at_launch#dump-directory) is somewhere accessible. By default, dumps are created in a folder called `dumps` at the root of your Meilisearch directory.
**Cloud platforms** like DigitalOcean and AWS are configured to store dumps in the `/var/opt/meilisearch/dumps` directory.
If you're unsure where your Meilisearch directory is located, try this:
```bash theme={null}
which meilisearch
```
It should return something like this:
```bash theme={null}
/absolute/path/to/your/meilisearch/directory
```
```bash theme={null}
where meilisearch
```
It should return something like this:
```bash theme={null}
/absolute/path/to/your/meilisearch/directory
```
```bash theme={null}
(Get-Command meilisearch).Path
```
It should return something like this:
```bash theme={null}
/absolute/path/to/your/meilisearch/directory
```
You can then create a dump of your database using the [create a dump endpoint](/reference/api/backups/create-dump):
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/dumps' \
-H 'Authorization: Bearer API_KEY'
```
The server should return a response that looks like this:
```json theme={null}
{
"taskUid": 1,
"indexUid": null,
"status": "enqueued",
"type": "dumpCreation",
"enqueuedAt": "2022-06-21T16:10:29.217688Z"
}
```
Use the `taskUid` to [track the status](/reference/api/async-task-management/get-task) of your dump. Keep in mind that the process can take some time to complete.
Once the `dumpCreation` task shows `"status": "succeeded"`, you're ready to move on.
#### Step 2: Prepare for migration
##### Stop the Meilisearch instance
Stop your Meilisearch instance.
If you're running Meilisearch locally, you can stop the program with `Ctrl + c`.
If you're running Meilisearch as a `systemctl` service, connect via SSH to your cloud instance and execute the following command to stop Meilisearch:
```bash theme={null}
systemctl stop meilisearch
```
You may need to prefix the above command with `sudo` if you are not connected as root.
##### Create a backup
Instead of deleting `data.ms`, we suggest creating a backup in case something goes wrong. Unless you chose [another location](/learn/self_hosted/configure_meilisearch_at_launch#database-path), `data.ms` should be in the same directory as the Meilisearch binary.
On **cloud platforms**, you will find the `data.ms` folder at `/var/lib/meilisearch/data.ms`.
Move the binary of the current Meilisearch installation and database to the `/tmp` folder:
```
mv /path/to/your/meilisearch/directory/meilisearch/data.ms /tmp/
mv /path/to/your/meilisearch/directory/meilisearch /tmp/
```
```
mv /usr/bin/meilisearch /tmp/
mv /var/lib/meilisearch/data.ms /tmp/
```
##### Install the desired version of Meilisearch
Install the latest version of Meilisearch using:
```bash theme={null}
curl -L https://install.meilisearch.com | sh
```
```sh theme={null}
# replace {meilisearch_version} with the version of your choice. Use the format: `vX.X.X`
curl "https://github.com/meilisearch/meilisearch/releases/download/{meilisearch_version}/meilisearch-linux-amd64" --output meilisearch --location --show-error
```
Give execute permission to the Meilisearch binary:
```
chmod +x meilisearch
```
For **cloud platforms**, move the new Meilisearch binary to the `/usr/bin` directory:
```
mv meilisearch /usr/bin/meilisearch
```
#### Step 3: Import data
##### Launch Meilisearch and import the dump
Execute the command below to import the dump at launch:
```bash theme={null}
# replace {dump_uid.dump} with the name of your dump file
./meilisearch --import-dump dumps/{dump_uid.dump} --master-key="MASTER_KEY"
# Or, if you chose another location for data files and dumps before the update, also add the same parameters
./meilisearch --import-dump dumps/{dump_uid.dump} --master-key="MASTER_KEY" --db-path PATH_TO_DB_DIR/data.ms --dump-dir PATH_TO_DUMP_DIR/dumps
```
```sh theme={null}
# replace {dump_uid.dump} with the name of your dump file
meilisearch --db-path /var/lib/meilisearch/data.ms --import-dump "/var/opt/meilisearch/dumps/{dump_uid.dump}"
```
Importing a dump requires indexing all the documents it contains. Depending on the size of your dataset, this process can take a long time and cause a spike in memory usage.
##### Restart Meilisearch as a service
If you're running a **cloud instance**, press `Ctrl`+`C` to stop Meilisearch once your dump has been correctly imported. Next, execute the following command to configure Meilisearch and restart it as a service:
```
meilisearch-setup
```
If required, set `displayedAttributes` back to its previous value using the [update displayed attributes endpoint](/reference/api/settings/update-displayedattributes).
### Conclusion
Now that your updated Meilisearch instance is up and running, verify that the dump import was successful and no data was lost.
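One concrete check is to compare the `/stats` response captured on the old instance with the one from the updated instance. A minimal sketch, assuming the standard stats shape (`indexes` keyed by index uid, each entry carrying a `numberOfDocuments` field; the sample index name and counts below are invented):

```javascript
// Sketch: list indexes whose document counts differ between two /stats responses
function findCountMismatches(before, after) {
  const mismatches = [];
  for (const [uid, stats] of Object.entries(before.indexes)) {
    const migratedCount = after.indexes[uid]?.numberOfDocuments ?? 0;
    if (migratedCount !== stats.numberOfDocuments) {
      mismatches.push({ uid, expected: stats.numberOfDocuments, got: migratedCount });
    }
  }
  return mismatches;
}

// Hypothetical stats snapshots taken before and after the migration
const before = { indexes: { movies: { numberOfDocuments: 31944 } } };
const after = { indexes: { movies: { numberOfDocuments: 31944 } } };
console.log(findCountMismatches(before, after)); // []
```

An empty result means every index kept its document count; any entries it returns point at indexes worth re-importing.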
If everything looks good, then congratulations! You successfully migrated your database to the latest version of Meilisearch. Be sure to check out the [changelogs](https://github.com/meilisearch/MeiliSearch/releases).
If something went wrong, you can always roll back to the previous version. Feel free to [reach out for help](https://discord.meilisearch.com) if the problem continues. If you successfully migrated your database but are having problems with your codebase, be sure to check out our [version-specific warnings](#version-specific-warnings).
#### Delete backup files or rollback (*optional*)
Delete the Meilisearch binary and `data.ms` folder created by the previous steps. Next, move the backup files back to their previous location using:
```bash theme={null}
mv /tmp/meilisearch /path/to/your/meilisearch/directory/meilisearch
mv /tmp/data.ms /path/to/your/meilisearch/directory/meilisearch/data.ms
```
```bash theme={null}
mv /tmp/meilisearch /usr/bin/meilisearch
mv /tmp/data.ms /var/lib/meilisearch/data.ms
```
For **cloud platforms**, run the configuration script at the root of your Meilisearch directory:
```bash theme={null}
meilisearch-setup
```
If all went well, you can delete the backup files using:
```bash theme={null}
rm -r /tmp/meilisearch
rm -r /tmp/data.ms
```
You can also delete the dump file if desired:
```bash theme={null}
rm /path/to/your/meilisearch/directory/meilisearch/dumps/{dump_uid.dump}
```
```bash theme={null}
rm /var/opt/meilisearch/dumps/{dump_uid.dump}
```
## Version-specific warnings
After migrating to the most recent version of Meilisearch, your codebase may require some changes. For version-specific changes and full changelogs, see the [releases tab on GitHub](https://github.com/meilisearch/meilisearch/releases).
# Authorization
Source: https://www.meilisearch.com/docs/reference/api/authorization
How to authenticate with the Meilisearch API using API keys and the Authorization header.
If you are new to Meilisearch, check out the [getting started guide](/learn/self_hosted/getting_started_with_self_hosted_meilisearch).
By [providing Meilisearch with a master key at launch](/learn/security/basic_security), you protect your instance from unauthorized requests. The provided master key must be at least 16 bytes. From then on, you must include the `Authorization` header along with a valid API key to access protected routes (all routes except [`/health`](/reference/api/health)).
```bash cURL theme={null}
curl \
-X GET 'MEILISEARCH_URL/keys' \
-H 'Authorization: Bearer MASTER_KEY'
```
```javascript JS theme={null}
const client = new MeiliSearch({ host: 'MEILISEARCH_URL', apiKey: 'masterKey' })
client.getKeys()
```
```python Python theme={null}
client = Client('MEILISEARCH_URL', 'masterKey')
client.get_keys()
```
```php PHP theme={null}
$client = new Client('MEILISEARCH_URL', 'masterKey');
$client->getKeys();
```
```java Java theme={null}
Client client = new Client(new Config("MEILISEARCH_URL", "masterKey"));
client.getKeys();
```
```ruby Ruby theme={null}
client = MeiliSearch::Client.new('MEILISEARCH_URL', 'masterKey')
client.keys
```
```go Go theme={null}
client := meilisearch.New("MEILISEARCH_URL", meilisearch.WithAPIKey("masterKey"))
client.GetKeys(nil);
```
```csharp C# theme={null}
MeilisearchClient client = new MeilisearchClient("MEILISEARCH_URL", "masterKey");
var keys = await client.GetKeysAsync();
```
```rust Rust theme={null}
let client = Client::new("MEILISEARCH_URL", Some("MASTER_KEY"));
let keys = client.get_keys().await.unwrap();
```
```swift Swift theme={null}
client = try MeiliSearch(host: "MEILISEARCH_URL", apiKey: "masterKey")
client.getKeys { result in
switch result {
case .success(let keys):
print(keys)
case .failure(let error):
print(error)
}
}
```
```dart Dart theme={null}
var client = MeiliSearchClient('MEILISEARCH_URL', 'masterKey');
await client.getKeys();
```
The [`/keys`](/reference/api/keys) route can only be accessed using the master key. For security reasons, we recommend using regular API keys for all other routes.
[To learn more about keys and security, refer to our security tutorial.](/learn/security/basic_security)
# Headers
Source: https://www.meilisearch.com/docs/reference/api/headers
Content-Type, Content-Encoding, Accept-Encoding, and Meili-Include-Metadata headers for the Meilisearch API.
## Content type
Any API request with a payload (`--data-binary`) requires a `Content-Type` header. Content type headers indicate the media type of the resource, helping the client process the response body correctly.
Meilisearch currently supports the following formats:
* `Content-Type: application/json` for JSON
* `Content-Type: application/x-ndjson` for NDJSON
* `Content-Type: text/csv` for CSV
Only the [add documents](/reference/api/documents#add-or-replace-documents) and [update documents](/reference/api/documents#add-or-update-documents) endpoints accept NDJSON and CSV. For all others, use `Content-Type: application/json`.
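The difference between the two payload formats can be sketched in Python (document fields here are hypothetical):

```python theme={null}
import json

# Documents to index (hypothetical fields)
documents = [
    {"id": 1, "title": "Carol"},
    {"id": 2, "title": "Wonder Woman"},
]

# JSON payload: a single array, sent with Content-Type: application/json
json_payload = json.dumps(documents)

# NDJSON payload: one JSON object per line,
# sent with Content-Type: application/x-ndjson
ndjson_payload = "\n".join(json.dumps(doc) for doc in documents)
```

NDJSON can be streamed and parsed line by line, which is why it is only relevant for the document ingestion endpoints.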
## Content encoding
The `Content-Encoding` header indicates that the payload is compressed with a given algorithm. Compression improves transfer speed and reduces bandwidth consumption by sending and receiving smaller payloads. The `Accept-Encoding` header, in turn, indicates which compression algorithms the client understands.
Meilisearch supports the following compression methods:
* `br`: uses the [Brotli](https://en.wikipedia.org/wiki/Brotli) algorithm
* `deflate`: uses the [zlib](https://en.wikipedia.org/wiki/Zlib) structure with the [deflate](https://en.wikipedia.org/wiki/DEFLATE) compression algorithm
* `gzip`: uses the [gzip](https://en.wikipedia.org/wiki/Gzip) algorithm
### Request compression
The code sample below uses the `Content-Encoding: gzip` header, indicating that the request body is compressed using the `gzip` algorithm:
```bash theme={null}
cat ~/movies.json | gzip | curl -X POST 'MEILISEARCH_URL/indexes/movies/documents' --data-binary @- -H 'Content-Type: application/json' -H 'Content-Encoding: gzip'
```
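The same body can be prepared in Python. This is a minimal sketch that only builds the compressed payload; sending it to your instance with the headers shown above is left out:

```python theme={null}
import gzip
import json

documents = [{"id": 1, "title": "Carol"}]
payload = json.dumps(documents).encode("utf-8")

# Compress the body; send it with the headers:
#   Content-Type: application/json
#   Content-Encoding: gzip
compressed = gzip.compress(payload)

# Meilisearch decompresses the body before parsing it as JSON
assert gzip.decompress(compressed) == payload
```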
### Response compression
Meilisearch compresses a response if the request contains the `Accept-Encoding` header. The code sample below uses the `gzip` algorithm:
```bash theme={null}
curl -sH 'Accept-encoding: gzip' 'MEILISEARCH_URL/indexes/movies/search' | gzip -dc
```
## Search metadata
You may use an optional `Meili-Include-Metadata` header when performing search and multi-search requests:
```bash theme={null}
curl -X POST 'http://localhost:7700/indexes/INDEX_NAME/search' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer MEILISEARCH_API_KEY' \
-H 'Meili-Include-Metadata: true' \
-d '{"q": ""}'
```
Meilisearch Cloud includes this header by default.
Responses will include a `metadata` object:
```json theme={null}
{
"hits": [ … ],
"metadata": {
"queryUid": "0199a41a-8186-70b3-b6e1-90e8cb582f35",
"indexUid": "INDEX_NAME",
"primaryKey": "INDEX_PRIMARY_KEY"
}
}
```
`metadata` contains the following fields:
| Field | Type | Description |
| :----------: | :-----: | :--------------------------------------------------------: |
| `queryUid` | UUID v7 | Unique identifier for the query |
| `indexUid` | String | Index identifier |
| `primaryKey` | String | Primary key field name, if index has a primary key |
| `remote` | String | Remote instance name, if request targets a remote instance |
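If you parse the response into a dictionary, the metadata fields can be read as follows (values are illustrative, mirroring the example above):

```python theme={null}
# Shape of a search response when Meili-Include-Metadata is set
response = {
    "hits": [],
    "metadata": {
        "queryUid": "0199a41a-8186-70b3-b6e1-90e8cb582f35",
        "indexUid": "movies",
        "primaryKey": "id",
    },
}

metadata = response.get("metadata", {})
# `remote` is only present when the request targets a remote instance
remote = metadata.get("remote")
```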
A search refers to a single HTTP search request, and a query is the combination of a `q` parameter and an `indexUid`. In the context of multi-search, a single request carries several queries, so one search request may be associated with multiple `queryUid` values.
# Create index
Source: https://www.meilisearch.com/docs/reference/api/indexes/create-index
assets/open-api/meilisearch-openapi-mintlify.json post /indexes
Create a new index with an optional [primary key](https://www.meilisearch.com/docs/learn/getting_started/primary_key).
If no primary key is provided, Meilisearch will [infer one](https://www.meilisearch.com/docs/learn/getting_started/primary_key#meilisearch-guesses-your-primary-key) from the first batch of documents.
# Get index
Source: https://www.meilisearch.com/docs/reference/api/indexes/get-index
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}
Retrieve the metadata of a single index: its uid, [primary key](https://www.meilisearch.com/docs/learn/getting_started/primary_key), and creation/update timestamps.
# List all indexes
Source: https://www.meilisearch.com/docs/reference/api/indexes/list-all-indexes
assets/open-api/meilisearch-openapi-mintlify.json get /indexes
Returns a paginated list of indexes. Use the `offset` and `limit` query parameters to page through results.
# OpenAPI specifications
Source: https://www.meilisearch.com/docs/reference/api/openapi
Meilisearch OpenAPI specifications and where to find them.
You can download the OpenAPI specification for the latest Meilisearch version.
For a specific Meilisearch version, get the specification from the [Meilisearch releases on GitHub](https://github.com/meilisearch/meilisearch/releases). Each release includes `meilisearch-openapi.json` in its assets.
# Pagination
Source: https://www.meilisearch.com/docs/reference/api/pagination
How Meilisearch paginates GET routes and the structure of paginated responses.
Meilisearch paginates all GET routes that return multiple resources, for example, GET `/indexes`, GET `/documents`, and GET `/keys`. This allows you to work with manageable chunks of data. These routes return 20 results per page by default; you can change this with the `limit` query parameter and move between pages with `offset`.
All paginated responses contain the following fields:
| Name | Type | Description |
| :----------- | :------ | :--------------------------- |
| **`offset`** | Integer | Number of resources skipped |
| **`limit`** | Integer | Number of resources returned |
| **`total`** | Integer | Total number of resources |
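As a sketch, translating a 1-based page number into these parameters looks like this (the helper names are illustrative, not part of any SDK):

```python theme={null}
import math

def page_params(page: int, limit: int = 20) -> dict:
    """Build `offset`/`limit` query parameters for a 1-based page number."""
    return {"offset": (page - 1) * limit, "limit": limit}

def page_count(total: int, limit: int = 20) -> int:
    """Number of pages needed to cover `total` resources."""
    return math.ceil(total / limit)
```

For example, `page_params(3)` yields `{"offset": 40, "limit": 20}`, and with `total` of 45 resources, `page_count(45)` yields 3.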
## `/tasks` endpoint
Since the `/tasks` endpoint uses a different type of pagination, the response contains different fields. You can read more about it in the [tasks API reference](/reference/api/async-task-management/list-tasks).
# Requests
Source: https://www.meilisearch.com/docs/reference/api/requests
Parameters, requests & response bodies, and data types for the Meilisearch API.
## Parameters
Parameters are options you can pass to an API endpoint to modify its response. There are three main types of parameters in Meilisearch's API: request body parameters, path parameters, and query parameters.
### Request body parameters
These parameters are mandatory parts of POST, PUT, and PATCH requests. They accept a wide variety of values and data types depending on the resource you're modifying. You must add these parameters to your request's data payload.
### Path parameters
These are parameters you pass to the API in the endpoint's path. They are used to identify a resource uniquely. You can have multiple path parameters, for example, `/indexes/{index_uid}/documents/{document_id}`.
If an endpoint does not take any path parameters, this section is not present in that endpoint's documentation.
### Query parameters
These optional parameters are a sequence of key-value pairs and appear after the question mark (`?`) in the endpoint. You can list multiple query parameters by separating them with an ampersand (`&`). The order of query parameters does not matter. They are mostly used with GET endpoints.
If an endpoint does not take any query parameters, this section is not present in that endpoint's documentation.
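For example, a query string for a paginated GET request can be built with Python's standard library (`MEILISEARCH_URL` is a placeholder, as elsewhere in these docs):

```python theme={null}
from urllib.parse import urlencode

# Two query parameters, joined with `&` after the `?`
params = {"limit": 50, "offset": 100}
query_string = urlencode(params)
url = f"MEILISEARCH_URL/indexes?{query_string}"
```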
## Request body
The request body is data sent to the API. It is used with PUT, POST, and PATCH methods to create or update a resource. You must provide request bodies in JSON.
## Response body
Meilisearch is an **asynchronous API**. This means that in response to most write requests, you will receive a summarized version of the `task` object:
```json theme={null}
{
"taskUid": 1,
"indexUid": "movies",
"status": "enqueued",
"type": "indexUpdate",
"enqueuedAt": "2021-08-11T09:25:53.000000Z"
}
```
You can use this `taskUid` to get more details on [the status of the task](/reference/api/tasks#get-one-task).
See more information about [asynchronous operations](/learn/async/asynchronous_operations).
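Because write operations are asynchronous, clients typically poll the task until it reaches a terminal status. A minimal sketch, assuming a `get_task` callable that wraps the get-task endpoint:

```python theme={null}
import time

def wait_for_task(get_task, task_uid, interval=0.5, timeout=30):
    """Poll a task until it reaches a terminal status.

    `get_task` is any callable returning the task object for a uid,
    e.g. a thin wrapper around GET /tasks/{task_uid}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = get_task(task_uid)
        if task["status"] in ("succeeded", "failed", "canceled"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_uid} still pending after {timeout}s")
```

Official SDKs ship an equivalent helper (often called `waitForTask`), so you rarely need to write this loop yourself.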
## Data types
The Meilisearch API supports [JSON data types](https://www.w3schools.com/js/js_json_datatypes.asp).
# Cancel tasks
Source: https://www.meilisearch.com/docs/reference/api/async-task-management/cancel-tasks
assets/open-api/meilisearch-openapi-mintlify.json post /tasks/cancel
Cancel enqueued and/or processing [tasks](https://www.meilisearch.com/docs/learn/async/asynchronous_operations). You must provide at least one filter (e.g. `uids`, `indexUids`, `statuses`) to specify which tasks to cancel.
# Delete tasks
Source: https://www.meilisearch.com/docs/reference/api/async-task-management/delete-tasks
assets/open-api/meilisearch-openapi-mintlify.json delete /tasks
Permanently delete [tasks](https://www.meilisearch.com/docs/learn/async/asynchronous_operations) matching the given filters. You must provide at least one filter (e.g. `uids`, `indexUids`, `statuses`) to specify which tasks to delete.
# Get batch
Source: https://www.meilisearch.com/docs/reference/api/async-task-management/get-batch
assets/open-api/meilisearch-openapi-mintlify.json get /batches/{batch_id}
Meilisearch groups compatible tasks ([asynchronous operations](https://www.meilisearch.com/docs/learn/async/asynchronous_operations)) into batches for efficient processing.
For example, multiple document additions to the same index may be batched together. Retrieve a single batch by its unique identifier to monitor its progress and performance.
# Get task
Source: https://www.meilisearch.com/docs/reference/api/async-task-management/get-task
assets/open-api/meilisearch-openapi-mintlify.json get /tasks/{task_id}
Retrieve a single [task](https://www.meilisearch.com/docs/learn/async/asynchronous_operations) by its uid.
# Get task's documents
Source: https://www.meilisearch.com/docs/reference/api/async-task-management/get-tasks-documents
assets/open-api/meilisearch-openapi-mintlify.json get /tasks/{task_id}/documents
Retrieve the list of documents that were processed or affected by a given [task](https://www.meilisearch.com/docs/learn/async/asynchronous_operations). Only available for document-related tasks.
# List batches
Source: https://www.meilisearch.com/docs/reference/api/async-task-management/list-batches
assets/open-api/meilisearch-openapi-mintlify.json get /batches
Meilisearch groups compatible tasks ([asynchronous operations](https://www.meilisearch.com/docs/learn/async/asynchronous_operations)) into batches for efficient processing.
For example, multiple document additions to the same index may be batched together. List batches to monitor their progress and performance.
Batches are always returned in descending order of uid. This means that by default, the most recently created batch objects appear first. Batch results are paginated and can be filtered with query parameters.
# List tasks
Source: https://www.meilisearch.com/docs/reference/api/async-task-management/list-tasks
assets/open-api/meilisearch-openapi-mintlify.json get /tasks
The `/tasks` route returns information about [asynchronous operations](https://www.meilisearch.com/docs/learn/async/asynchronous_operations) (indexing, document updates, settings changes, and so on).
Tasks are returned in descending order of uid by default, so the most recently created or updated tasks appear first. Results are paginated and can be filtered using query parameters such as `indexUids`, `statuses`, `types`, and date ranges.
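As a sketch, such a filtered request URL can be assembled with the standard library (index names are hypothetical; filter values are comma-separated lists):

```python theme={null}
from urllib.parse import urlencode

# Only return enqueued or processing tasks for two indexes
filters = {
    "indexUids": "movies,books",
    "statuses": "enqueued,processing",
    "limit": 20,
}
# safe="," keeps the comma-separated lists readable in the URL
url = "MEILISEARCH_URL/tasks?" + urlencode(filters, safe=",")
```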
# Delete a chat workspace
Source: https://www.meilisearch.com/docs/reference/api/chats/delete-a-chat-workspace
assets/open-api/meilisearch-openapi-mintlify.json delete /chats/{workspace_uid}
# Get a chat workspace
Source: https://www.meilisearch.com/docs/reference/api/chats/get-a-chat-workspace
assets/open-api/meilisearch-openapi-mintlify.json get /chats/{workspace_uid}
# Get settings of a chat workspace
Source: https://www.meilisearch.com/docs/reference/api/chats/get-settings-of-a-chat-workspace
assets/open-api/meilisearch-openapi-mintlify.json get /chats/{workspace_uid}/settings
# List chat workspaces
Source: https://www.meilisearch.com/docs/reference/api/chats/list-chat-workspaces
assets/open-api/meilisearch-openapi-mintlify.json get /chats
# Request a chat completion
Source: https://www.meilisearch.com/docs/reference/api/chats/request-a-chat-completion
assets/open-api/meilisearch-openapi-mintlify.json post /chats/{workspace_uid}/chat/completions
# Reset the settings of a chat workspace
Source: https://www.meilisearch.com/docs/reference/api/chats/reset-the-settings-of-a-chat-workspace
assets/open-api/meilisearch-openapi-mintlify.json delete /chats/{workspace_uid}/settings
# Update settings of a chat workspace
Source: https://www.meilisearch.com/docs/reference/api/chats/update-settings-of-a-chat-workspace
assets/open-api/meilisearch-openapi-mintlify.json patch /chats/{workspace_uid}/settings
# Add or replace documents
Source: https://www.meilisearch.com/docs/reference/api/documents/add-or-replace-documents
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/documents
Add a list of documents or replace them if they already exist.
If you send a document whose id already exists in the index, the new document completely overwrites the existing one: fields present in the old document but missing from the new one are removed.
If the provided index does not exist, it will be created.
For a partial update of a document, see the [add or update documents route](/reference/api/documents/add-or-update-documents).
> Use the reserved `_geo` object to add geo coordinates to a document.
> `_geo` is an object made of `lat` and `lng` fields.
# Add or update documents
Source: https://www.meilisearch.com/docs/reference/api/documents/add-or-update-documents
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/documents
Add a list of documents or update them if they already exist.
If you send a document whose id already exists in the index, the existing document is only partially updated with the fields of the new document: any fields not present in the new document are kept and remain unchanged.
If the provided index does not exist, it will be created.
To completely overwrite a document, see [add or replace documents route](/reference/api/documents/add-or-replace-documents).
> Use the reserved `_geo` object to add geo coordinates to a document.
> `_geo` is an object made of `lat` and `lng` fields.
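The difference between the replace (POST) and update (PUT) behaviors can be illustrated with plain dictionaries (a sketch of the top-level field semantics only; document fields are hypothetical):

```python theme={null}
old = {"id": 1, "title": "Carol", "genres": ["Drama"]}
new = {"id": 1, "overview": "A love story"}

# POST /indexes/{index_uid}/documents (add or replace):
# the new document fully replaces the old one
replaced = new

# PUT /indexes/{index_uid}/documents (add or update):
# fields from the new document are merged into the old one
updated = {**old, **new}
```

After a replace, `title` and `genres` are gone; after an update, they are kept alongside the new `overview` field.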
# Delete all documents
Source: https://www.meilisearch.com/docs/reference/api/documents/delete-all-documents
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/documents
Permanently delete all documents in the specified index. Settings and index metadata are preserved.
# Delete document
Source: https://www.meilisearch.com/docs/reference/api/documents/delete-document
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/documents/{document_id}
Delete a single document by its [primary key](https://www.meilisearch.com/docs/learn/getting_started/primary_key).
# Delete documents by batch
Source: https://www.meilisearch.com/docs/reference/api/documents/delete-documents-by-batch
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/documents/delete-batch
Delete multiple documents in one request by providing an array of [primary key](https://www.meilisearch.com/docs/learn/getting_started/primary_key) values.
# Delete documents by filter
Source: https://www.meilisearch.com/docs/reference/api/documents/delete-documents-by-filter
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/documents/delete
Delete all documents in the index that match the given filter expression.
# Edit documents by function
Source: https://www.meilisearch.com/docs/reference/api/documents/edit-documents-by-function
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/documents/edit
Use a [RHAI function](https://rhai.rs/book/engine/hello-world.html) to edit one or more documents directly in Meilisearch. The function receives each document and returns the modified document.
This feature is experimental and must be enabled through the experimental route.
# Get document
Source: https://www.meilisearch.com/docs/reference/api/documents/get-document
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/documents/{document_id}
Retrieve a single document by its [primary key](https://www.meilisearch.com/docs/learn/getting_started/primary_key) value.
# List documents with GET
Source: https://www.meilisearch.com/docs/reference/api/documents/list-documents-with-get
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/documents
Retrieve documents in batches using query parameters for offset, limit, and optional filtering. Suited for browsing or exporting index contents.
# List documents with POST
Source: https://www.meilisearch.com/docs/reference/api/documents/list-documents-with-post
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/documents/fetch
Retrieve a set of documents with optional filtering, sorting, and pagination. Use the request body to specify filters, sort order, and which fields to return.
# Search in facets
Source: https://www.meilisearch.com/docs/reference/api/facet-search/search-in-facets
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/facet-search
Search for facet values within a given facet.
> Use this to build autocomplete or refinement UIs for facet filters.
# Delete index
Source: https://www.meilisearch.com/docs/reference/api/indexes/delete-index
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}
Permanently delete an index and all its documents, settings, and task history.
# List index fields
Source: https://www.meilisearch.com/docs/reference/api/indexes/list-index-fields
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/fields
Returns a paginated list of fields in the index with their metadata: whether they are displayed, searchable, sortable, filterable, distinct, have a custom ranking rule (asc/desc), and for filterable fields the sort order for facet values.
# Swap indexes
Source: https://www.meilisearch.com/docs/reference/api/indexes/swap-indexes
assets/open-api/meilisearch-openapi-mintlify.json post /swap-indexes
Swap the documents, settings, and task history of two or more indexes.
Indexes are swapped in pairs; a single request can include multiple pairs.
The operation is atomic: either all swaps succeed or none do. In the task history, every mention of one index uid is replaced by the other and vice versa.
Enqueued tasks are left unmodified.
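A request body swapping two pairs of indexes looks like this (index names are hypothetical):

```python theme={null}
import json

# Body for POST /swap-indexes: a list of pairs, each under an "indexes" key
payload = [
    {"indexes": ["indexA", "indexB"]},
    {"indexes": ["indexC", "indexD"]},
]
body = json.dumps(payload)
```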
# Update index
Source: https://www.meilisearch.com/docs/reference/api/indexes/update-index
assets/open-api/meilisearch-openapi-mintlify.json patch /indexes/{index_uid}
Update the [primary key](https://www.meilisearch.com/docs/learn/getting_started/primary_key) or uid of an index.
Returns an error if the index does not exist or if it already contains documents ([primary key](https://www.meilisearch.com/docs/learn/getting_started/primary_key) cannot be changed in that case).
# Create API key
Source: https://www.meilisearch.com/docs/reference/api/keys/create-api-key
assets/open-api/meilisearch-openapi-mintlify.json post /keys
Create a new API key with the specified name, description, actions, and index scopes. The key value is returned only once at creation time; store it securely.
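For example, a request body for a search-only key scoped to a single hypothetical index:

```python theme={null}
import json

# Body for POST /keys: a search-only key for the "movies" index
payload = {
    "name": "Search key",
    "description": "Front-end search key for the movies index",
    "actions": ["search"],
    "indexes": ["movies"],
    "expiresAt": None,  # null = the key never expires
}
body = json.dumps(payload)
```

Use `["*"]` for `indexes` to grant the key access to all indexes.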
# Delete API key
Source: https://www.meilisearch.com/docs/reference/api/keys/delete-api-key
assets/open-api/meilisearch-openapi-mintlify.json delete /keys/{key}
Permanently delete the specified API key. The key will no longer be valid for authentication.
# Get API key
Source: https://www.meilisearch.com/docs/reference/api/keys/get-api-key
assets/open-api/meilisearch-openapi-mintlify.json get /keys/{key}
Retrieve a single API key by its `uid` or by its `key` value.
# List API keys
Source: https://www.meilisearch.com/docs/reference/api/keys/list-api-keys
assets/open-api/meilisearch-openapi-mintlify.json get /keys
Return all API keys configured on the instance. Results are paginated and can be filtered by offset and limit. The key value itself is never returned after creation.
# Update API key
Source: https://www.meilisearch.com/docs/reference/api/keys/update-api-key
assets/open-api/meilisearch-openapi-mintlify.json patch /keys/{key}
Update the name and description of an API key.
Updates are partial: only the fields you send are changed, and any fields not present in the payload remain unchanged.
# Perform a multi-search
Source: https://www.meilisearch.com/docs/reference/api/multi-search/perform-a-multi-search
assets/open-api/meilisearch-openapi-mintlify.json post /multi-search
Run multiple search queries in a single API request.
Each query can target a different index, so you can search across several indexes at once and get one combined response.
# Search with GET
Source: https://www.meilisearch.com/docs/reference/api/search/search-with-get
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/search
Search for documents matching a query in the given index.
> Equivalent to the [search with POST route](/reference/api/search/search-with-post) in the Meilisearch API.
# Search with POST
Source: https://www.meilisearch.com/docs/reference/api/search/search-with-post
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/search
Search for documents matching a query in the given index.
> Equivalent to the [search with GET route](/reference/api/search/search-with-get) in the Meilisearch API.
# Get chat
Source: https://www.meilisearch.com/docs/reference/api/settings/get-chat
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/chat
Returns the current value of the `chat` setting for the index.
# Get dictionary
Source: https://www.meilisearch.com/docs/reference/api/settings/get-dictionary
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/dictionary
Returns the current value of the `dictionary` setting for the index.
# Get displayedAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/get-displayedattributes
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/displayed-attributes
Returns the current value of the `displayedAttributes` setting for the index.
# Get distinctAttribute
Source: https://www.meilisearch.com/docs/reference/api/settings/get-distinctattribute
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/distinct-attribute
Returns the current value of the `distinctAttribute` setting for the index.
# Get embedders
Source: https://www.meilisearch.com/docs/reference/api/settings/get-embedders
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/embedders
Returns the current value of the `embedders` setting for the index.
# Get faceting
Source: https://www.meilisearch.com/docs/reference/api/settings/get-faceting
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/faceting
Returns the current value of the `faceting` setting for the index.
# Get facetSearch
Source: https://www.meilisearch.com/docs/reference/api/settings/get-facetsearch
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/facet-search
Returns the current value of the `facetSearch` setting for the index.
# Get filterableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/get-filterableattributes
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/filterable-attributes
Returns the current value of the `filterableAttributes` setting for the index.
# Get localizedAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/get-localizedattributes
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/localized-attributes
Returns the current value of the `localizedAttributes` setting for the index.
# Get nonSeparatorTokens
Source: https://www.meilisearch.com/docs/reference/api/settings/get-nonseparatortokens
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/non-separator-tokens
Returns the current value of the `nonSeparatorTokens` setting for the index.
# Get pagination
Source: https://www.meilisearch.com/docs/reference/api/settings/get-pagination
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/pagination
Returns the current value of the `pagination` setting for the index.
# Get prefixSearch
Source: https://www.meilisearch.com/docs/reference/api/settings/get-prefixsearch
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/prefix-search
Returns the current value of the `prefixSearch` setting for the index.
# Get proximityPrecision
Source: https://www.meilisearch.com/docs/reference/api/settings/get-proximityprecision
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/proximity-precision
Returns the current value of the `proximityPrecision` setting for the index.
# Get rankingRules
Source: https://www.meilisearch.com/docs/reference/api/settings/get-rankingrules
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/ranking-rules
Returns the current value of the `rankingRules` setting for the index.
# Get searchableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/get-searchableattributes
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/searchable-attributes
Returns the current value of the `searchableAttributes` setting for the index.
# Get searchCutoffMs
Source: https://www.meilisearch.com/docs/reference/api/settings/get-searchcutoffms
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/search-cutoff-ms
Returns the current value of the `searchCutoffMs` setting for the index.
# Get separatorTokens
Source: https://www.meilisearch.com/docs/reference/api/settings/get-separatortokens
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/separator-tokens
Returns the current value of the `separatorTokens` setting for the index.
# Get sortableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/get-sortableattributes
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/sortable-attributes
Returns the current value of the `sortableAttributes` setting for the index.
# Get stopWords
Source: https://www.meilisearch.com/docs/reference/api/settings/get-stopwords
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/stop-words
Returns the current value of the `stopWords` setting for the index.
# Get synonyms
Source: https://www.meilisearch.com/docs/reference/api/settings/get-synonyms
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/synonyms
Returns the current value of the `synonyms` setting for the index.
# List all settings
Source: https://www.meilisearch.com/docs/reference/api/settings/list-all-settings
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings
Returns all settings of the index. Each setting is returned with its current value or the default if not set.
# Reset all settings
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-all-settings
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings
Resets all settings of the index to their default values.
# Reset chat
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-chat
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/chat
Resets the `chat` setting to its default value.
# Reset dictionary
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-dictionary
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/dictionary
Resets the `dictionary` setting to its default value.
# Reset displayedAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-displayedattributes
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/displayed-attributes
Resets the `displayedAttributes` setting to its default value.
# Reset distinctAttribute
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-distinctattribute
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/distinct-attribute
Resets the `distinctAttribute` setting to its default value.
# Reset embedders
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-embedders
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/embedders
Resets the `embedders` setting to its default value.
# Reset faceting
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-faceting
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/faceting
Resets the `faceting` setting to its default value.
# Reset facetSearch
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-facetsearch
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/facet-search
Resets the `facetSearch` setting to its default value.
# Reset filterableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-filterableattributes
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/filterable-attributes
Resets the `filterableAttributes` setting to its default value.
# Reset localizedAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-localizedattributes
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/localized-attributes
Resets the `localizedAttributes` setting to its default value.
# Reset nonSeparatorTokens
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-nonseparatortokens
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/non-separator-tokens
Resets the `nonSeparatorTokens` setting to its default value.
# Reset pagination
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-pagination
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/pagination
Resets the `pagination` setting to its default value.
# Reset prefixSearch
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-prefixsearch
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/prefix-search
Resets the `prefixSearch` setting to its default value.
# Reset proximityPrecision
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-proximityprecision
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/proximity-precision
Resets the `proximityPrecision` setting to its default value.
# Reset rankingRules
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-rankingrules
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/ranking-rules
Resets the `rankingRules` setting to its default value.
# Reset searchableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-searchableattributes
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/searchable-attributes
Resets the `searchableAttributes` setting to its default value.
# Reset searchCutoffMs
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-searchcutoffms
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/search-cutoff-ms
Resets the `searchCutoffMs` setting to its default value.
# Reset separatorTokens
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-separatortokens
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/separator-tokens
Resets the `separatorTokens` setting to its default value.
# Reset sortableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-sortableattributes
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/sortable-attributes
Resets the `sortableAttributes` setting to its default value.
# Reset stopWords
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-stopwords
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/stop-words
Resets the `stopWords` setting to its default value.
# Update all settings
Source: https://www.meilisearch.com/docs/reference/api/settings/update-all-settings
assets/open-api/meilisearch-openapi-mintlify.json patch /indexes/{index_uid}/settings
Updates one or more settings for the index. Only the fields sent in the body are changed. Pass null for a setting to reset it to its default. If the index does not exist, it is created.
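For instance, a request body like the following (values are illustrative) updates `searchCutoffMs`, replaces `stopWords`, and passes null to reset `rankingRules` to its default:

```json theme={null}
{
  "searchCutoffMs": 150,
  "stopWords": ["the", "a", "an"],
  "rankingRules": null
}
```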
See also: [Configuring index settings on the Cloud](https://www.meilisearch.com/docs/learn/configuration/configuring_index_settings).
# Update chat
Source: https://www.meilisearch.com/docs/reference/api/settings/update-chat
assets/open-api/meilisearch-openapi-mintlify.json patch /indexes/{index_uid}/settings/chat
Updates the `chat` setting for the index. Send the new value in the request body; send null to reset to default.
# Update dictionary
Source: https://www.meilisearch.com/docs/reference/api/settings/update-dictionary
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/dictionary
Updates the `dictionary` setting for the index. Send the new value in the request body; send null to reset to default.
# Update displayedAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/update-displayedattributes
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/displayed-attributes
Updates the `displayedAttributes` setting for the index. Send the new value in the request body; send null to reset to default.
# Update distinctAttribute
Source: https://www.meilisearch.com/docs/reference/api/settings/update-distinctattribute
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/distinct-attribute
Updates the `distinctAttribute` setting for the index. Send the new value in the request body; send null to reset to default.
# Update embedders
Source: https://www.meilisearch.com/docs/reference/api/settings/update-embedders
assets/open-api/meilisearch-openapi-mintlify.json patch /indexes/{index_uid}/settings/embedders
Updates the `embedders` setting for the index. Send the new value in the request body; send null to reset to default.
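As a minimal sketch, the request body maps embedder names to their configuration (the name `default` is illustrative):

```json theme={null}
{
  "default": {
    "source": "huggingFace"
  }
}
```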
# Update faceting
Source: https://www.meilisearch.com/docs/reference/api/settings/update-faceting
assets/open-api/meilisearch-openapi-mintlify.json patch /indexes/{index_uid}/settings/faceting
Updates the `faceting` setting for the index. Send the new value in the request body; send null to reset to default.
# Update facetSearch
Source: https://www.meilisearch.com/docs/reference/api/settings/update-facetsearch
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/facet-search
Updates the `facetSearch` setting for the index. Send the new value in the request body; send null to reset to default.
# Update filterableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/update-filterableattributes
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/filterable-attributes
Updates the `filterableAttributes` setting for the index. Send the new value in the request body; send null to reset to default.
# Update localizedAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/update-localizedattributes
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/localized-attributes
Updates the `localizedAttributes` setting for the index. Send the new value in the request body; send null to reset to default.
# Update nonSeparatorTokens
Source: https://www.meilisearch.com/docs/reference/api/settings/update-nonseparatortokens
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/non-separator-tokens
Updates the `nonSeparatorTokens` setting for the index. Send the new value in the request body; send null to reset to default.
# Update pagination
Source: https://www.meilisearch.com/docs/reference/api/settings/update-pagination
assets/open-api/meilisearch-openapi-mintlify.json patch /indexes/{index_uid}/settings/pagination
Updates the `pagination` setting for the index. Send the new value in the request body; send null to reset to default.
# Update prefixSearch
Source: https://www.meilisearch.com/docs/reference/api/settings/update-prefixsearch
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/prefix-search
Updates the `prefixSearch` setting for the index. Send the new value in the request body; send null to reset to default.
# Update proximityPrecision
Source: https://www.meilisearch.com/docs/reference/api/settings/update-proximityprecision
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/proximity-precision
Updates the `proximityPrecision` setting for the index. Send the new value in the request body; send null to reset to default.
# Update rankingRules
Source: https://www.meilisearch.com/docs/reference/api/settings/update-rankingrules
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/ranking-rules
Updates the `rankingRules` setting for the index. Send the new value in the request body; send null to reset to default.
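The request body is an array of rules in order of decreasing importance. As a sketch, this body keeps the built-in rules but ranks `exactness` above `proximity`:

```json theme={null}
["words", "typo", "exactness", "proximity", "attribute", "sort"]
```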
# Update searchableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/update-searchableattributes
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/searchable-attributes
Updates the `searchableAttributes` setting for the index. Send the new value in the request body; send null to reset to default.
# Update searchCutoffMs
Source: https://www.meilisearch.com/docs/reference/api/settings/update-searchcutoffms
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/search-cutoff-ms
Updates the `searchCutoffMs` setting for the index. Send the new value in the request body; send null to reset to default.
# Update separatorTokens
Source: https://www.meilisearch.com/docs/reference/api/settings/update-separatortokens
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/separator-tokens
Updates the `separatorTokens` setting for the index. Send the new value in the request body; send null to reset to default.
# Update sortableAttributes
Source: https://www.meilisearch.com/docs/reference/api/settings/update-sortableattributes
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/sortable-attributes
Updates the `sortableAttributes` setting for the index. Send the new value in the request body; send null to reset to default.
# Update stopWords
Source: https://www.meilisearch.com/docs/reference/api/settings/update-stopwords
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/stop-words
Updates the `stopWords` setting for the index. Send the new value in the request body; send null to reset to default.
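The request body is an array of strings. For example, to have Meilisearch ignore common English articles:

```json theme={null}
["the", "a", "an"]
```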
# Get similar documents with GET
Source: https://www.meilisearch.com/docs/reference/api/similar-documents/get-similar-documents-with-get
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/similar
Retrieve documents similar to a reference document identified by its id.
> Useful for “more like this” or recommendations.
# Get similar documents with POST
Source: https://www.meilisearch.com/docs/reference/api/similar-documents/get-similar-documents-with-post
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/similar
Retrieve documents similar to a reference document identified by its id.
> Useful for “more like this” or recommendations.
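As a sketch, a request body for this endpoint names the reference document and the embedder used to compute similarity (the document id and embedder name here are illustrative):

```json theme={null}
{
  "id": "TARGET_DOCUMENT_ID",
  "embedder": "default",
  "limit": 5
}
```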
# Meilisearch & Model Context Protocol - Talk to Meilisearch with Claude desktop
Source: https://www.meilisearch.com/docs/guides/ai/mcp
This guide walks Meilisearch users through setting up the MCP server with Claude desktop to talk to the Meilisearch API
# Model Context Protocol - Talk to Meilisearch with Claude desktop
## Introduction
This guide will walk you through setting up and using Meilisearch through natural language interactions with Claude AI via Model Context Protocol (MCP).
## Requirements
To follow this guide, you'll need:
* [Claude Desktop](https://claude.ai/download) (free)
* [A Meilisearch Cloud project](https://www.meilisearch.com/cloud) (14-day free trial)
* Python ≥ 3.9
* Your Meilisearch host URL and API key, available from the Meilisearch Cloud dashboard
## Setting up Claude Desktop with the Meilisearch MCP Server
### 1. Install Claude Desktop
Download and install [Claude Desktop](https://claude.ai/download).
### 2. Install the Meilisearch MCP Server
You can install the Meilisearch MCP server using `uv` or `pip`:
```bash theme={null}
# Using uv (recommended)
uv pip install meilisearch-mcp
# Using pip
pip install meilisearch-mcp
```
### 3. Configure Claude Desktop
Open Claude Desktop, click on the Claude menu in the top bar, and select "Settings". In the Settings window, click on "Developer" in the left sidebar, then click "Edit Config". This will open your `claude_desktop_config.json` file.
Add the Meilisearch MCP server to your configuration:
```json theme={null}
{
  "mcpServers": {
    "meilisearch": {
      "command": "uvx",
      "args": ["-n", "meilisearch-mcp"]
    }
  }
}
```
Save the file and restart Claude.
## Connecting to Your Meilisearch Instance
Once Claude Desktop is set up with the Meilisearch MCP server, you can connect it to your Meilisearch instance.
Open Claude Desktop and start a new conversation.
Next, ask Claude to update the connection settings, replacing `MEILISEARCH_URL` with your project URL and `API_KEY` with your project's API key:
```
Please connect to my Meilisearch instance at MEILISEARCH_URL using the API key API_KEY
```
Claude will use the MCP server's `update-connection-settings` tool to establish a connection to your Meilisearch instance.
Finally, verify the connection by asking:
```
Can you check the connection to my Meilisearch instance and tell me what version it's running?
```
Claude will use the `get-version` and `health-check` tools to verify the connection and provide information about your instance.
## Create an e-commerce index
Now that you have configured the MCP server to work with Meilisearch, you can use it to manage your indexes.
First, verify what indexes you have in your project:
```
What indexes do I have in my Meilisearch instance?
```
Next, ask Claude to create an index optimized for e-commerce:
```
Create a new index called "products" for our e-commerce site with the primary key "product_id"
```
Finally, check the index has been created successfully and is completely empty:
```
How many documents are in my "products" index and what's its size?
```
## Add documents to your new index
Ask Claude to add a couple of test documents to your "products" index:
```
Add these products to my "products" index:
[
{"product_id": 1, "name": "Ergonomic Chair", "description": "Comfortable office chair", "price": 299.99, "category": "Furniture"},
{"product_id": 2, "name": "Standing Desk", "description": "Adjustable height desk", "price": 499.99, "category": "Furniture"}
]
```
Since you are only using "products" for testing, you can also ask Claude to automatically populate it with placeholder data:
```
Add 10 documents in the index "products" with a name, category, price, and description of your choice
```
To verify data insertion worked as expected, retrieve the first few documents in your index:
```
Show me the first 5 products in my "products" index
```
## Configure your index
Before performing your first search, set a few index settings to ensure relevant results.
Ask Claude to prioritize exact word matches over multiple partial matches:
```
Update the ranking rules for the "products" index to prioritize word matches and handle typos, but make exact matches more important than proximity
```
It's also a good practice to limit searchable attributes only to highly-relevant fields, and only return attributes you are going to display in your search interface:
```
Configure my "products" index to make the "name" and "description" fields searchable, but only "name", "price", and "category" should be displayed in results
```
## Perform searches with MCP
Perform your first search with the following prompt:
```
Search the "products" index for "desk" and return the top 3 results
```
You can also ask your search to use other Meilisearch features, such as filters and sorting:
```
Search the "products" index for "chair" where the price is less than 200 and the category is "Furniture". Sort results by price in ascending order.
```
### Important note about LLM limitation
Large Language Models like Claude tend to say "yes" to most requests, even if they can't actually perform them.
Claude can only perform actions that are exposed through the Meilisearch API and implemented in the MCP server. If you're unsure whether a particular operation is possible, refer to the [Meilisearch documentation](https://docs.meilisearch.com) and the [MCP server README](https://github.com/meilisearch/meilisearch-mcp).
## Troubleshooting
If you encounter issues with the Meilisearch MCP integration, try the following steps:
### 1. Ask Claude to verify your connection settings
```
What are the current Meilisearch connection settings?
```
### 2. Ask Claude to check your Meilisearch instance health
```
Run a health check on my Meilisearch instance
```
### 3. Review Claude's logs
Open the logs file in your text editor or log viewer:
* On macOS: `~/Library/Logs/Claude/mcp*.log`
* On Windows: `%APPDATA%\Claude\logs\mcp*.log`
### 4. Test the MCP server independently
Open your terminal and query the MCP Inspector with `npx`:
```bash theme={null}
npx @modelcontextprotocol/inspector uvx -n meilisearch-mcp
```
## Conclusion
The Meilisearch MCP integration with Claude can transform multiple API calls and configuration tasks into conversational requests. This can help you focus more on building your application and less on implementation details.
For more information about advanced configurations and capabilities, refer to the [Meilisearch documentation](https://docs.meilisearch.com) and the [Meilisearch MCP server repository](https://github.com/meilisearch/meilisearch-mcp).
# Computing Hugging Face embeddings with the GPU
Source: https://www.meilisearch.com/docs/guides/computing_hugging_face_embeddings_gpu
This guide for experienced users shows you how to compile a Meilisearch binary that generates Hugging Face embeddings with an Nvidia GPU.
This guide is aimed at experienced users working with a self-hosted Meilisearch instance. It shows you how to compile a Meilisearch binary that generates Hugging Face embeddings with an Nvidia GPU.
## Prerequisites
* A [CUDA-compatible Linux distribution](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#id12)
* An Nvidia GPU with CUDA support
* A modern Rust compiler
## Install CUDA
Follow Nvidia's [CUDA installation instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html).
## Verify your CUDA install
After you have installed CUDA on your machine, run the following command in your command-line terminal:
```sh theme={null}
nvcc --version | head -1
```
If CUDA is working correctly, you will see the following response:
```
nvcc: NVIDIA (R) Cuda compiler driver
```
## Compile Meilisearch
First, clone Meilisearch:
```sh theme={null}
git clone https://github.com/meilisearch/meilisearch.git
```
Then, compile the Meilisearch binary with `cuda` enabled:
```sh theme={null}
cd meilisearch
cargo build --release --features cuda
```
This might take a few moments. Once the compiler is done, you should have a CUDA-compatible Meilisearch binary.
## Configure the Hugging Face embedder
Run your freshly compiled binary:
```sh theme={null}
./target/release/meilisearch
```
Then add the Hugging Face embedder to your index settings:
```sh theme={null}
curl \
-X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/embedders' \
-H 'Content-Type: application/json' \
--data-binary '{ "default": { "source": "huggingFace" } }'
```
Meilisearch will return a summarized task object and place your request on the task queue:
```json theme={null}
{
"taskUid": 1,
"indexUid": "INDEX_NAME",
"status": "enqueued",
"type": "settingsUpdate",
"enqueuedAt": "2024-03-04T15:05:43.383955Z"
}
```
Use the task object's `taskUid` to [monitor the task status](/reference/api/async-task-management/get-task). The Hugging Face embedder will be ready to use once the task is completed.
## Conclusion
You have seen how to compile a Meilisearch binary that uses your Nvidia GPU to compute vector embeddings. Doing this should significantly speed up indexing when using Hugging Face.
# Using Meilisearch with Docker
Source: https://www.meilisearch.com/docs/guides/docker
Learn how to use Docker to download and run Meilisearch, configure its behavior, and manage your Meilisearch data.
In this guide you will learn how to use Docker to download and run Meilisearch, configure its behavior, and manage your Meilisearch data.
Docker is a tool that bundles applications into containers. Docker containers ensure your application runs the same way in different environments. When using Docker for development, we recommend following [the official instructions to install Docker Desktop](https://docs.docker.com/get-docker/).
## Download Meilisearch with Docker
Docker containers are distributed in images. To use Meilisearch, use the `docker pull` command to download a Meilisearch image:
```sh theme={null}
docker pull getmeili/meilisearch:latest
```
Meilisearch deploys a new Docker image with every release of the engine. Each image is tagged with the corresponding Meilisearch version, indicated in the above example by the text following the `:` symbol. You can see [the full list of available Meilisearch Docker images](https://hub.docker.com/r/getmeili/meilisearch/tags#!) on Docker Hub.
The `latest` tag will always download the most recent Meilisearch release. Meilisearch advises against using it, as it might result in different machines running different images if significant time passes between setting up each one of them.
## Run Meilisearch with Docker
After completing the previous step, use `docker run` to launch the Meilisearch image:
```sh theme={null}
docker run -it --rm \
-p 7700:7700 \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:latest
```
### Configure Meilisearch
Meilisearch accepts a number of instance options during launch. You can configure these in two ways: environment variables and CLI arguments. Note that some options are only available as CLI arguments—[consult our configuration reference for more details](/learn/self_hosted/configure_meilisearch_at_launch).
#### Passing instance options with environment variables
To pass environment variables to Docker, add the `-e` argument to `docker run`. The example below launches Meilisearch with a master key:
```sh theme={null}
docker run -it --rm \
-p 7700:7700 \
-e MEILI_MASTER_KEY='MASTER_KEY' \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:latest
```
#### Passing instance options with CLI arguments
If you want to pass command-line arguments to Meilisearch with Docker, you must add a line to the end of your `docker run` command explicitly launching the `meilisearch` binary:
```sh theme={null}
docker run -it --rm \
-p 7700:7700 \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:latest \
meilisearch --master-key="MASTER_KEY"
```
## Managing data
When using Docker, your working directory is `/meili_data`. This means the location of your database file is `/meili_data/data.ms`.
### Data persistency
By default, data written to a Docker container is deleted every time the container stops running. This data includes your indexes and the documents they store.
To keep your data intact between reboots, specify a dedicated volume by running Docker with the `-v` command-line option:
```sh theme={null}
docker run -it --rm \
-p 7700:7700 \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:latest
```
The example above uses `$(pwd)/meili_data`, which is a directory in the host machine. Depending on your OS, mounting volumes from the host to the container might result in performance loss and is only recommended when developing your application.
### Generating dumps and updating Meilisearch
To export a dump, [use the create dump endpoint as described in our dumps guide](/learn/data_backup/dumps). Once the task is complete, you can access the dump file in `/meili_data/dumps` inside the volume you passed with `-v`.
To import a dump, use Meilisearch's `--import-dump` command-line option and specify the path to the dump file. Make sure the path points to a volume reachable by Docker:
```sh theme={null}
docker run -it --rm \
-p 7700:7700 \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:latest \
meilisearch --import-dump /meili_data/dumps/20200813-042312213.dump
```
Note that exporting and importing dumps require using command-line arguments. [For more information on how to run Meilisearch with CLI options and Docker, refer to this guide's relevant section.](#passing-instance-options-with-cli-arguments)
If you are storing your data in a persistent volume as instructed in [the data persistency section](#data-persistency), you must delete `/meili_data/data.ms` in that volume before importing a dump.
Use dumps to migrate data between different Meilisearch releases. [Read more about updating Meilisearch in our dedicated guide.](/learn/update_and_migration/updating)
### Snapshots
To generate a Meilisearch snapshot with Docker, launch Meilisearch with `--schedule-snapshot` and `--snapshot-dir`:
```sh theme={null}
docker run -it --rm \
-p 7700:7700 \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:latest \
meilisearch --schedule-snapshot --snapshot-dir /meili_data/snapshots
```
`--snapshot-dir` should point to a folder inside the Docker working directory for Meilisearch, `/meili_data`. Once generated, snapshots will be available in the configured directory.
To import a snapshot, launch Meilisearch with the `--import-snapshot` option:
```sh theme={null}
docker run -it --rm \
-p 7700:7700 \
-v $(pwd)/meili_data:/meili_data \
getmeili/meilisearch:latest \
meilisearch --import-snapshot /meili_data/snapshots/data.ms.snapshot
```
Use snapshots for backup or when migrating data between two Meilisearch instances of the same version. [Read more about snapshots in our guide.](/learn/data_backup/snapshots)
# Semantic Search with AWS Bedrock Embeddings
Source: https://www.meilisearch.com/docs/guides/embedders/bedrock
This guide will walk you through the process of setting up Meilisearch with AWS Bedrock embeddings to enable semantic search capabilities.
## Introduction
This guide will walk you through the process of setting up Meilisearch with AWS Bedrock embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and AWS Bedrock's embedding API, you can enhance your search experience and retrieve more relevant results.
## Requirements
To follow this guide, you'll need:
* A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.13
* An AWS account with Bedrock access and an API key for embedding generation. You can sign up for an AWS account at [AWS](https://aws.amazon.com/).
## Setting up Meilisearch
To set up an embedder in Meilisearch, you need to add it to your index settings. You can refer to the [Meilisearch documentation](/reference/api/settings/list-all-settings) for more details on updating the embedder settings.
### Text embeddings
AWS Bedrock offers multiple text embedding models:
* `amazon.titan-embed-text-v2:0`: 256, 512, or 1024 dimensions (Amazon Titan Text Embeddings V2)
* `amazon.nova-2-multimodal-embeddings-v1:0`: 256, 384, 1024, or 3072 dimensions (Amazon Nova Multimodal Embeddings - also supports images, video, and audio)
* `cohere.embed-multilingual-v3`: 1024 dimensions (Cohere Embed Multilingual v3)
* `cohere.embed-v4:0`: 256, 512, 1024, or 1536 dimensions (Cohere Embed v4 - also supports images)
**Amazon Titan Text Embeddings V2**
```json theme={null}
{
  "bedrock": {
    "source": "rest",
    "url": "https://bedrock-runtime.<region>.amazonaws.com/model/amazon.titan-embed-text-v2:0/invoke",
    "apiKey": "<apiKey>",
    "dimensions": 1024,
    "documentTemplate": "",
    "request": {
      "inputText": "{{text}}",
      "dimensions": 1024,
      "normalize": true
    },
    "response": {
      "embedding": "{{embedding}}"
    }
  }
}
```
**Amazon Nova Multimodal Embeddings (text mode)**
For advanced configuration options like `embeddingPurpose` and `truncationMode`, refer to the [Nova Embeddings schema documentation](https://docs.aws.amazon.com/nova/latest/userguide/embeddings-schema.html).
```json theme={null}
{
  "bedrock": {
    "source": "rest",
    "url": "https://bedrock-runtime.<region>.amazonaws.com/model/amazon.nova-2-multimodal-embeddings-v1:0/invoke",
    "apiKey": "<apiKey>",
    "dimensions": 1024,
    "documentTemplate": "",
    "request": {
      "taskType": "SINGLE_EMBEDDING",
      "singleEmbeddingParams": {
        "embeddingPurpose": "GENERIC_INDEX",
        "embeddingDimension": 1024,
        "text": {
          "truncationMode": "END",
          "value": "{{text}}"
        }
      }
    },
    "response": {
      "embeddings": [{ "embedding": "{{embedding}}" }]
    }
  }
}
```
**Cohere Embed Multilingual v3**
```json theme={null}
{
  "bedrock": {
    "source": "rest",
    "url": "https://bedrock-runtime.<region>.amazonaws.com/model/cohere.embed-multilingual-v3/invoke",
    "apiKey": "<apiKey>",
    "dimensions": 1024,
    "documentTemplate": "",
    "request": {
      "texts": ["{{text}}"],
      "input_type": "search_document"
    },
    "response": {
      "embeddings": ["{{embedding}}"]
    }
  }
}
```
**Cohere Embed v4 (text mode)**
```json theme={null}
{
  "bedrock": {
    "source": "rest",
    "url": "https://bedrock-runtime.<region>.amazonaws.com/model/cohere.embed-v4:0/invoke",
    "apiKey": "<apiKey>",
    "dimensions": 1536,
    "documentTemplate": "",
    "request": {
      "texts": ["{{text}}"],
      "input_type": "search_document"
    },
    "response": {
      "embeddings": { "float": ["{{embedding}}"] }
    }
  }
}
```
In these configurations:
* `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
* `url`: The Bedrock Runtime API endpoint. Replace `<region>` with your AWS region (e.g., `us-east-1`, `us-west-2`, `eu-west-3`). Note: Nova is currently only available in `us-east-1`.
* `apiKey`: Replace `<apiKey>` with your actual Bedrock API key.
* `dimensions`: Specifies the dimensions of the embeddings. Titan V2 supports 256, 512, or 1024. Nova supports 256, 384, 1024, or 3072. Cohere v3 outputs 1024 dimensions. Cohere v4 defaults to 1536 dimensions (also supports 256, 512, or 1024 via `output_dimension` parameter).
* `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai_powered_search/getting_started_with_ai_search) for generating embeddings from your documents.
* `request`: Defines the request structure for the Bedrock API. Titan V2 uses `inputText` with optional `dimensions` and `normalize` parameters. Nova uses `taskType` and `singleEmbeddingParams` with `embeddingPurpose`, `embeddingDimension`, and a `text` object. Cohere v3 and v4 use a `texts` array and `input_type`.
* `response`: Defines the expected response structure from the Bedrock API.
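For instance, a `documentTemplate` turning each document into a short description (the `doc` field names are illustrative and depend on your documents) could look like this inside the embedder object:

```json theme={null}
{
  "documentTemplate": "A product named {{doc.name}}, described as: {{doc.description}}"
}
```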
### Multimodal embeddings
AWS Bedrock offers multimodal embedding models for image search capabilities:
* `amazon.titan-embed-image-v1`: 256, 384, or 1024 dimensions (Amazon Titan Multimodal Embeddings G1)
* `amazon.nova-2-multimodal-embeddings-v1:0`: 256, 384, 1024, or 3072 dimensions (Amazon Nova Multimodal Embeddings - supports text, images, video, and audio)
* `cohere.embed-v4:0`: 256, 512, 1024, or 1536 dimensions (Cohere Embed v4 - supports text, images, and interleaved texts and images)
These models require `indexingFragments` and `searchFragments` because they embed images during indexing and text queries during search.
**Amazon Titan Multimodal Embeddings G1**
```json theme={null}
{
  "bedrock": {
    "source": "rest",
    "url": "https://bedrock-runtime.<region>.amazonaws.com/model/amazon.titan-embed-image-v1/invoke",
    "apiKey": "<apiKey>",
    "dimensions": 1024,
    "indexingFragments": {
      "image": {
        "value": {
          "inputImage": "{{doc.image_base64}}",
          "embeddingConfig": {
            "outputEmbeddingLength": 1024
          }
        }
      }
    },
    "searchFragments": {
      "text": {
        "value": {
          "inputText": "{{q}}",
          "embeddingConfig": {
            "outputEmbeddingLength": 1024
          }
        }
      }
    },
    "request": "{{fragment}}",
    "response": {
      "embedding": "{{embedding}}"
    }
  }
}
```
**Amazon Nova Multimodal Embeddings (image mode)**
For complete configuration options like `embeddingPurpose`, `detailLevel`, and supported formats, refer to the [Nova Embeddings schema documentation](https://docs.aws.amazon.com/nova/latest/userguide/embeddings-schema.html).
```json theme={null}
{
  "bedrock": {
    "source": "rest",
    "url": "https://bedrock-runtime.<region>.amazonaws.com/model/amazon.nova-2-multimodal-embeddings-v1:0/invoke",
    "apiKey": "",
    "dimensions": 1024,
    "indexingFragments": {
      "image": {
        "value": {
          "taskType": "SINGLE_EMBEDDING",
          "singleEmbeddingParams": {
            "embeddingPurpose": "GENERIC_INDEX",
            "embeddingDimension": 1024,
            "image": {
              "format": "",
              "source": {
                "bytes": "{{doc.image_base64}}"
              }
            }
          }
        }
      }
    },
    "searchFragments": {
      "text": {
        "value": {
          "taskType": "SINGLE_EMBEDDING",
          "singleEmbeddingParams": {
            "embeddingPurpose": "GENERIC_RETRIEVAL",
            "embeddingDimension": 1024,
            "text": {
              "truncationMode": "END",
              "value": "{{q}}"
            }
          }
        }
      }
    },
    "request": "{{fragment}}",
    "response": {
      "embeddings": [{ "embedding": "{{embedding}}" }]
    }
  }
}
```
**Cohere Embed v4**
```json theme={null}
{
  "bedrock": {
    "source": "rest",
    "url": "https://bedrock-runtime.<region>.amazonaws.com/model/cohere.embed-v4:0/invoke",
    "apiKey": "",
    "dimensions": 1536,
    "indexingFragments": {
      "image": {
        "value": {
          "images": ["data:image/jpeg;base64,{{doc.image_base64}}"],
          "input_type": "search_document"
        }
      }
    },
    "searchFragments": {
      "text": {
        "value": {
          "texts": ["{{q}}"],
          "input_type": "search_query"
        }
      }
    },
    "request": "{{fragment}}",
    "response": {
      "embeddings": { "float": ["{{embedding}}"] }
    }
  }
}
```
In these configurations:
* `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
* `url`: The Bedrock Runtime API endpoint. Replace `<region>` with your AWS region (e.g., `us-east-1`, `us-west-2`, `eu-west-3`).
* `apiKey`: Replace the empty value with your actual Bedrock API key.
* `dimensions`: Specifies the dimensions of the embeddings. Titan Multimodal supports 256, 384, or 1024. Nova supports 256, 384, 1024, or 3072. Cohere v4 supports 256, 512, 1024, or 1536.
* `indexingFragments`: Defines how to embed images during document indexing. Uses `{{doc.FIELD_NAME}}` to reference the base64-encoded image field in your documents (e.g., `{{doc.image_base64}}`).
* `searchFragments`: Defines how to embed text during search queries. Uses `{{q}}` to reference the search query.
* `request`: Set to `{{fragment}}` to use the appropriate fragment based on the operation.
* `response`: Defines the expected response structure from the Bedrock API.
Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
Please note that AWS Bedrock has rate limiting, which is managed by Meilisearch. The indexation process may take some time depending on your AWS account limits, but Meilisearch will handle it with a retry strategy.
It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/async-task-management/list-tasks).
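The configuration objects above are applied by sending them to the embedder settings route of your index. The sketch below builds such a request without sending it; the instance URL, index name, and API key are hypothetical placeholders, and the actual call can be made with any HTTP client:

```python
import json

# Hypothetical values -- replace with your own instance URL, index UID, and key
MEILI_URL = "http://localhost:7700"
INDEX_UID = "products"
MEILI_API_KEY = "MEILI_API_KEY"

def embedder_settings_request(embedder_name: str, config: dict):
    """Build the URL, headers, and body of a PATCH to the embedder settings route."""
    url = f"{MEILI_URL}/indexes/{INDEX_UID}/settings/embedders"
    headers = {
        "Authorization": f"Bearer {MEILI_API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({embedder_name: config})
    return url, headers, body

url, headers, body = embedder_settings_request(
    "bedrock",
    {"source": "rest", "dimensions": 1024, "request": "{{fragment}}"},
)
# Send with any HTTP client, e.g.:
# requests.patch(url, headers=headers, data=body)
```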
## Testing semantic search
With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:
```json theme={null}
{
  "q": "",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "bedrock"
  }
}
```
In this request:
* `q`: Represents the user's search query.
* `hybrid`: Specifies the configuration for the hybrid search.
* `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
* `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "bedrock".
You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
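To see how `semanticRatio` shifts a request between full-text and semantic results, here is a small sketch that builds hybrid search payloads; the helper function and query string are hypothetical, not part of any Meilisearch client:

```python
def hybrid_search_body(query: str, embedder: str, semantic_ratio: float) -> dict:
    """Build a hybrid search payload; ratio 0 = pure full-text, 1 = pure semantic."""
    if not 0 <= semantic_ratio <= 1:
        raise ValueError("semanticRatio must be between 0 and 1")
    return {
        "q": query,
        "hybrid": {"semanticRatio": semantic_ratio, "embedder": embedder},
    }

pure_semantic = hybrid_search_body("red running shoes", "bedrock", 1)
balanced = hybrid_search_body("red running shoes", "bedrock", 0.5)
```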
## Conclusion
By following this guide, you should now have Meilisearch set up with AWS Bedrock embedding, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.
To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings/list-all-settings).
# Semantic Search with Cloudflare Worker AI Embeddings
Source: https://www.meilisearch.com/docs/guides/embedders/cloudflare
This guide will walk you through the process of setting up Meilisearch with Cloudflare Worker AI embeddings to enable semantic search capabilities.
## Introduction
This guide will walk you through the process of setting up Meilisearch with Cloudflare Worker AI embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and Cloudflare Worker AI's embedding API, you can enhance your search experience and retrieve more relevant results.
## Requirements
To follow this guide, you'll need:
* A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.13
* A Cloudflare account with access to Worker AI and an API key. You can sign up for a Cloudflare account at [Cloudflare](https://www.cloudflare.com/)
* Your Cloudflare account ID
## Setting up Meilisearch
To set up an embedder in Meilisearch, you need to configure it in your index settings. You can refer to the [Meilisearch documentation](/reference/api/settings/list-all-settings) for more details on updating the embedder settings.
Cloudflare Worker AI offers the following embedding models:
* `baai/bge-base-en-v1.5`: 768 dimensions
* `baai/bge-large-en-v1.5`: 1024 dimensions
* `baai/bge-small-en-v1.5`: 384 dimensions
Here's an example of embedder settings for Cloudflare Worker AI:
```json theme={null}
{
  "cloudflare": {
    "source": "rest",
    "apiKey": "",
    "dimensions": 384,
    "documentTemplate": "",
    "url": "https://api.cloudflare.com/client/v4/accounts/<account-id>/ai/run/@cf/<model>",
    "request": {
      "text": ["{{text}}", "{{..}}"]
    },
    "response": {
      "result": {
        "data": ["{{embedding}}", "{{..}}"]
      }
    }
  }
}
```
In this configuration:
* `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
* `apiKey`: Replace the empty value with your actual Cloudflare API key.
* `dimensions`: Specifies the dimensions of the embeddings. Set to 384 for `baai/bge-small-en-v1.5`, 768 for `baai/bge-base-en-v1.5`, or 1024 for `baai/bge-large-en-v1.5`.
* `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai_powered_search/getting_started_with_ai_search) for generating embeddings from your documents.
* `url`: Specifies the URL of the Cloudflare Worker AI API endpoint.
* `request`: Defines the request structure for the Cloudflare Worker AI API, including the input parameters.
* `response`: Defines the expected response structure from the Cloudflare Worker AI API, including the embedding data.
Be careful when setting up the `url` field in your configuration. The URL contains your Cloudflare account ID (`<account-id>`) and the specific model you want to use (`<model>`). Make sure to replace these placeholders with your actual account ID and the desired model name (e.g., `baai/bge-small-en-v1.5`).
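As a sanity check when assembling that URL, the pattern can be sketched as a small helper; the function name and the sample account ID are hypothetical:

```python
def cloudflare_embedding_url(account_id: str, model: str) -> str:
    """Assemble the Worker AI endpoint from an account ID and a model name."""
    return (
        "https://api.cloudflare.com/client/v4/accounts/"
        f"{account_id}/ai/run/@cf/{model}"
    )

# Example with a made-up account ID
url = cloudflare_embedding_url("0123456789abcdef", "baai/bge-small-en-v1.5")
```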
Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
Please note that Cloudflare may have rate limiting, which is managed by Meilisearch. If you have a free account, the indexation process may take some time, but Meilisearch will handle it with a retry strategy.
It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/async-task-management/list-tasks).
## Testing semantic search
With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:
```json theme={null}
{
  "q": "",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "cloudflare"
  }
}
```
In this request:
* `q`: Represents the user's search query.
* `hybrid`: Specifies the configuration for the hybrid search.
* `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
* `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "cloudflare".
You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
## Conclusion
By following this guide, you should now have Meilisearch set up with Cloudflare Worker AI embedding, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.
To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings/list-all-settings).
# Semantic Search with Cohere Embeddings
Source: https://www.meilisearch.com/docs/guides/embedders/cohere
This guide will walk you through the process of setting up Meilisearch with Cohere embeddings to enable semantic search capabilities.
## Introduction
This guide will walk you through the process of setting up Meilisearch with Cohere embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and Cohere's embedding API, you can enhance your search experience and retrieve more relevant results.
## Requirements
To follow this guide, you'll need:
* A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.13
* A Cohere account with an API key for embedding generation. You can sign up for a Cohere account at [Cohere](https://cohere.com/).
* No backend required.
## Setting up Meilisearch
To set up an embedder in Meilisearch, you need to configure it in your index settings. You can refer to the [Meilisearch documentation](/reference/api/settings/list-all-settings) for more details on updating the embedder settings.
Cohere offers multiple embedding models:
* `embed-english-v3.0` and `embed-multilingual-v3.0`: 1024 dimensions
* `embed-english-light-v3.0` and `embed-multilingual-light-v3.0`: 384 dimensions
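Each of these models has a fixed output size, so the `dimensions` setting must match the model you pick. A small lookup sketch (a hypothetical helper, not part of any client library):

```python
# Output sizes of Cohere's v3 embedding models, per the list above
COHERE_DIMENSIONS = {
    "embed-english-v3.0": 1024,
    "embed-multilingual-v3.0": 1024,
    "embed-english-light-v3.0": 384,
    "embed-multilingual-light-v3.0": 384,
}

def cohere_dimensions(model: str) -> int:
    """Look up the embedding size to use for the `dimensions` setting."""
    return COHERE_DIMENSIONS[model]
```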
Here's an example of embedder settings for Cohere:
```json theme={null}
{
  "cohere": {
    "source": "rest",
    "apiKey": "",
    "dimensions": 1024,
    "documentTemplate": "",
    "url": "https://api.cohere.com/v1/embed",
    "request": {
      "model": "embed-english-v3.0",
      "texts": ["{{text}}", "{{..}}"],
      "input_type": "search_document"
    },
    "response": {
      "embeddings": ["{{embedding}}", "{{..}}"]
    }
  }
}
```
In this configuration:
* `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
* `apiKey`: Replace the empty value with your actual Cohere API key.
* `dimensions`: Specifies the dimensions of the embeddings, set to 1024 for the `embed-english-v3.0` model.
* `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai_powered_search/getting_started_with_ai_search) for generating embeddings from your documents.
* `url`: Specifies the URL of the Cohere API endpoint.
* `request`: Defines the request structure for the Cohere API, including the model name and input parameters.
* `response`: Defines the expected response structure from the Cohere API, including the embedding data.
Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
Please note that most third-party tools have rate limiting, which is managed by Meilisearch. If you have a free account, the indexation process may take some time, but Meilisearch will handle it with a retry strategy.
It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/async-task-management/list-tasks).
## Testing semantic search
With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:
```json theme={null}
{
  "q": "",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "cohere"
  }
}
```
In this request:
* `q`: Represents the user's search query.
* `hybrid`: Specifies the configuration for the hybrid search.
* `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
* `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "cohere".
You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
## Conclusion
By following this guide, you should now have Meilisearch set up with Cohere embedding, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.
To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings/list-all-settings).
# Semantic Search with Gemini Embeddings
Source: https://www.meilisearch.com/docs/guides/embedders/gemini
This guide will walk you through the process of setting up Meilisearch with Gemini embeddings to enable semantic search capabilities.
## Requirements
To follow this guide, you'll need:
* A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.13
* A Google account with an API key for embedding generation. You can sign up for a Google account at [Google](https://google.com/)
## Setting up Meilisearch
To set up an embedder in Meilisearch, you need to configure it in your index settings. You can refer to the [Meilisearch documentation](/reference/api/settings/list-all-settings) for more details on updating the embedder settings.
When using Gemini to generate embeddings, you'll need to use the `gemini-embedding-001` model. Unlike some other services, Gemini currently offers only one embedding model.
Here's an example of embedder settings for Gemini:
```json theme={null}
{
  "gemini": {
    "source": "rest",
    "dimensions": 3072,
    "documentTemplate": "",
    "headers": {
      "Content-Type": "application/json",
      "x-goog-api-key": ""
    },
    "url": "https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:batchEmbedContents",
    "request": {
      "requests": [
        {
          "model": "models/gemini-embedding-001",
          "content": {
            "parts": [{ "text": "{{text}}" }]
          }
        },
        "{{..}}"
      ]
    },
    "response": {
      "embeddings": [
        { "values": "{{embedding}}" },
        "{{..}}"
      ]
    }
  }
}
```
In this configuration:
* `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
* `headers`: Replace the empty `x-goog-api-key` value with your actual Google API key.
* `dimensions`: Specifies the dimensions of the embeddings, set to 3072 for the `gemini-embedding-001` model.
* `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai_powered_search/getting_started_with_ai_search) for generating embeddings from your documents.
* `url`: Specifies the URL of the Gemini API endpoint.
* `request`: Defines the request structure for the Gemini API, including the model name and input parameters.
* `response`: Defines the expected response structure from the Gemini API, including the embedding data.
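In the `request` template, `{{text}}` marks the slot for one document's rendered text and `{{..}}` marks where the surrounding item repeats for the rest of the batch. As an illustration (not Meilisearch's actual internals), the expanded body for a batch of texts might look like this:

```python
def gemini_batch_body(texts: list[str]) -> dict:
    """Expand the batch request template: one `requests` entry per text."""
    return {
        "requests": [
            {
                "model": "models/gemini-embedding-001",
                "content": {"parts": [{"text": text}]},
            }
            for text in texts
        ]
    }

body = gemini_batch_body(["first document", "second document"])
```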
Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
Please note that most third-party tools have rate limiting, which is managed by Meilisearch. If you have a free account, the indexation process may take some time, but Meilisearch will handle it with a retry strategy.
It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/async-task-management/list-tasks).
## Testing semantic search
With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:
```json theme={null}
{
  "q": "",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "gemini"
  }
}
```
In this request:
* `q`: Represents the user's search query.
* `hybrid`: Specifies the configuration for the hybrid search.
* `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
* `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "gemini".
You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
## Conclusion
By following this guide, you should now have Meilisearch set up with Gemini embedding, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.
To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings/list-all-settings).
# Semantic Search with Hugging Face Inference Endpoints
Source: https://www.meilisearch.com/docs/guides/embedders/huggingface
This guide will walk you through the process of setting up Meilisearch with Hugging Face Inference Endpoints.
## Introduction
This guide will walk you through the process of setting up a Meilisearch REST embedder with [Hugging Face Inference Endpoints](https://ui.endpoints.huggingface.co/) to enable semantic search capabilities.
You can use Hugging Face and Meilisearch in two ways: running the model locally by setting the embedder source to `huggingface`, or remotely on Hugging Face's servers by setting the embedder source to `rest`.
## Requirements
To follow this guide, you'll need:
* A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.13
* A [Hugging Face account](https://huggingface.co/) with a deployed inference endpoint
* The endpoint URL and API key of the deployed model on your Hugging Face account
## Configure the embedder
Set up an embedder using the update settings endpoint:
```json theme={null}
{
  "hf-inference": {
    "source": "rest",
    "url": "ENDPOINT_URL",
    "apiKey": "API_KEY",
    "dimensions": 384,
    "documentTemplate": "CUSTOM_LIQUID_TEMPLATE",
    "request": {
      "inputs": ["{{text}}", "{{..}}"],
      "model": "baai/bge-small-en-v1.5"
    },
    "response": ["{{embedding}}", "{{..}}"]
  }
}
```
In this configuration:
* `source`: declares Meilisearch should connect to this embedder via its REST API
* `url`: replace `ENDPOINT_URL` with the address of your Hugging Face model endpoint
* `apiKey`: replace `API_KEY` with your Hugging Face API key
* `dimensions`: specifies the dimensions of the embeddings, which are 384 for `baai/bge-small-en-v1.5`
* `documentTemplate`: an optional but recommended [template](/learn/ai_powered_search/getting_started_with_ai_search) for the data you will send the embedder
* `request`: defines the structure and parameters of the request Meilisearch will send to the embedder
* `response`: defines the structure of the embedder's response
Once you've configured the embedder, Meilisearch will automatically generate embeddings for your documents. Monitor the task using the Cloud UI or the [get task endpoint](/reference/api/async-task-management/list-tasks).
This example uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as its model, but Hugging Face offers [other options that may fit your dataset better](https://ui.endpoints.huggingface.co/catalog?task=sentence-embeddings).
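The `documentTemplate` is a Liquid string that turns each document into the text sent to the embedder. The sketch below shows a plausible template for hypothetical documents with `title` and `overview` fields, plus a very rough stand-in for the rendering step, for illustration only:

```python
def render_template(template: str, doc: dict) -> str:
    """Crude stand-in for Meilisearch's Liquid rendering, for illustration only."""
    out = template
    for field, value in doc.items():
        out = out.replace("{{doc.%s}}" % field, str(value))
    return out

# Hypothetical template for documents with "title" and "overview" fields
DOCUMENT_TEMPLATE = "A movie titled {{doc.title}}, described as follows: {{doc.overview}}"

preview = render_template(
    DOCUMENT_TEMPLATE,
    {"title": "Alien", "overview": "A crew fights a stowaway creature."},
)
```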
## Perform a semantic search
With the embedder set up, you can now perform semantic searches. Make a search request with the `hybrid` search parameter, setting `semanticRatio` to `1`:
```json theme={null}
{
  "q": "QUERY_TERMS",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "hf-inference"
  }
}
```
In this request:
* `q`: the search query
* `hybrid`: enables AI-powered search functionality
* `semanticRatio`: controls the balance between semantic search and full-text search. Setting it to `1` means you will only receive semantic search results
* `embedder`: the name of the embedder used for generating embeddings
## Conclusion
You have set up an embedder using Hugging Face Inference Endpoints. This allows you to use pure semantic search capabilities in your application.
Consult the [embedder setting documentation](/reference/api/settings/list-all-settings) for more information on other embedder configuration options.
# Semantic Search with Mistral Embeddings
Source: https://www.meilisearch.com/docs/guides/embedders/mistral
This guide will walk you through the process of setting up Meilisearch with Mistral embeddings to enable semantic search capabilities.
## Introduction
This guide will walk you through the process of setting up Meilisearch with Mistral embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and Mistral's embedding API, you can enhance your search experience and retrieve more relevant results.
## Requirements
To follow this guide, you'll need:
* A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.13
* A Mistral account with an API key for embedding generation. You can sign up for a Mistral account at [Mistral](https://mistral.ai/).
* No backend required.
## Setting up Meilisearch
To set up an embedder in Meilisearch, you need to configure it in your index settings. You can refer to the [Meilisearch documentation](/reference/api/settings/list-all-settings) for more details on updating the embedder settings.
When using Mistral to generate embeddings, you'll need to use the `mistral-embed` model. Unlike some other services, Mistral currently offers only one embedding model.
Here's an example of embedder settings for Mistral:
```json theme={null}
{
  "mistral": {
    "source": "rest",
    "apiKey": "",
    "dimensions": 1024,
    "documentTemplate": "",
    "url": "https://api.mistral.ai/v1/embeddings",
    "request": {
      "model": "mistral-embed",
      "input": ["{{text}}", "{{..}}"]
    },
    "response": {
      "data": [
        { "embedding": "{{embedding}}" },
        "{{..}}"
      ]
    }
  }
}
```
In this configuration:
* `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
* `apiKey`: Replace the empty value with your actual Mistral API key.
* `dimensions`: Specifies the dimensions of the embeddings, set to 1024 for the `mistral-embed` model.
* `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai_powered_search/getting_started_with_ai_search) for generating embeddings from your documents.
* `url`: Specifies the URL of the Mistral API endpoint.
* `request`: Defines the request structure for the Mistral API, including the model name and input parameters.
* `response`: Defines the expected response structure from the Mistral API, including the embedding data.
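The `response` template tells Meilisearch where each embedding lives inside the API's reply: `{{embedding}}` marks one vector and `{{..}}` marks repetition. As an illustration with made-up vector values, extracting embeddings from a Mistral-shaped response amounts to:

```python
# A response shaped like Mistral's /v1/embeddings output (values are made up)
sample_response = {
    "data": [
        {"embedding": [0.12, -0.08, 0.33]},
        {"embedding": [0.05, 0.41, -0.27]},
    ]
}

# The template `{"data": [{"embedding": "{{embedding}}"}, "{{..}}"]}` means:
# collect every data[*].embedding array, in order
embeddings = [item["embedding"] for item in sample_response["data"]]
```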
Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
Please note that most third-party tools have rate limiting, which is managed by Meilisearch. If you have a free account, the indexation process may take some time, but Meilisearch will handle it with a retry strategy.
It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/async-task-management/list-tasks).
## Testing semantic search
With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:
```json theme={null}
{
  "q": "",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "mistral"
  }
}
```
In this request:
* `q`: Represents the user's search query.
* `hybrid`: Specifies the configuration for the hybrid search.
* `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
* `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "mistral".
You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
## Conclusion
By following this guide, you should now have Meilisearch set up with Mistral embedding, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.
To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings/list-all-settings).
# Semantic Search with OpenAI Embeddings
Source: https://www.meilisearch.com/docs/guides/embedders/openai
This guide will walk you through the process of setting up Meilisearch with OpenAI embeddings to enable semantic search capabilities.
## Introduction
This guide will walk you through the process of setting up Meilisearch with OpenAI embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and OpenAI's embedding API, you can enhance your search experience and retrieve more relevant results.
## Requirements
To follow this guide, you'll need:
* A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.13
* An OpenAI account with an API key for embedding generation. You can sign up for an OpenAI account at [OpenAI](https://openai.com/).
* No backend required.
## Setting up Meilisearch
To set up an embedder in Meilisearch, you need to configure it in your index settings. You can refer to the [Meilisearch documentation](/reference/api/settings/list-all-settings) for more details on updating the embedder settings.
OpenAI offers three main embedding models:
* `text-embedding-3-large`: 3,072 dimensions
* `text-embedding-3-small`: 1,536 dimensions
* `text-embedding-ada-002`: 1,536 dimensions
Here's an example of embedder settings for OpenAI:
```json theme={null}
{
  "openai": {
    "source": "openAi",
    "apiKey": "",
    "dimensions": 1536,
    "documentTemplate": "",
    "model": "text-embedding-3-small"
  }
}
```
In this configuration:
* `source`: Specifies the source of the embedder, which is set to "openAi" for using OpenAI's API.
* `apiKey`: Replace the empty value with your actual OpenAI API key.
* `dimensions`: Specifies the dimensions of the embeddings. Set to 1536 for `text-embedding-3-small` and `text-embedding-ada-002`, or 3072 for `text-embedding-3-large`.
* `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai_powered_search/getting_started_with_ai_search) for generating embeddings from your documents.
* `model`: Specifies the OpenAI model to use for generating embeddings. Choose from `text-embedding-3-large`, `text-embedding-3-small`, or `text-embedding-ada-002`.
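Keeping `model` and `dimensions` consistent is easy to get wrong when switching models. A small sketch of a config builder (a hypothetical helper, not part of any Meilisearch client; note the `-3` models can also be shortened via OpenAI's own `dimensions` parameter):

```python
# Default output sizes for OpenAI's embedding models, per the list above
OPENAI_DEFAULT_DIMENSIONS = {
    "text-embedding-3-large": 3072,
    "text-embedding-3-small": 1536,
    "text-embedding-ada-002": 1536,
}

def openai_embedder_config(model: str, api_key: str) -> dict:
    """Build embedder settings with dimensions matching the chosen model."""
    return {
        "source": "openAi",
        "apiKey": api_key,
        "model": model,
        "dimensions": OPENAI_DEFAULT_DIMENSIONS[model],
    }

config = openai_embedder_config("text-embedding-3-large", "sk-example")
```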
Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
Please note that OpenAI has rate limiting, which is managed by Meilisearch. If you have a free account, the indexation process may take some time, but Meilisearch will handle it with a retry strategy.
It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/async-task-management/list-tasks).
## Testing semantic search
With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, you simply need to make a normal search request but include the hybrid parameter:
```json theme={null}
{
  "q": "",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "openai"
  }
}
```
In this request:
* `q`: Represents the user's search query.
* `hybrid`: Specifies the configuration for the hybrid search.
* `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
* `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "openai".
You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
## Conclusion
By following this guide, you should now have Meilisearch set up with OpenAI embedding, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.
To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings/list-all-settings).
# Semantic Search with Voyage AI Embeddings
Source: https://www.meilisearch.com/docs/guides/embedders/voyage
This guide will walk you through the process of setting up Meilisearch with Voyage AI embeddings to enable semantic search capabilities.
## Introduction
This guide will walk you through the process of setting up Meilisearch with Voyage AI embeddings to enable semantic search capabilities. By leveraging Meilisearch's AI features and Voyage AI's embedding API, you can enhance your search experience and retrieve more relevant results.
## Requirements
To follow this guide, you'll need:
* A [Meilisearch Cloud](https://www.meilisearch.com/cloud) project running version >=1.13
* A Voyage AI account with an API key for embedding generation. You can sign up for a Voyage AI account at [Voyage AI](https://www.voyageai.com/).
* No backend required.
## Setting up Meilisearch
To set up an embedder in Meilisearch, add it to your index's `embedders` setting. You can refer to the [Meilisearch documentation](/reference/api/settings/list-all-settings) for more details on updating the embedder settings.
Voyage AI offers the following embedding models:
* `voyage-large-2-instruct`: 1024 dimensions
* `voyage-multilingual-2`: 1024 dimensions
* `voyage-large-2`: 1536 dimensions
* `voyage-2`: 1024 dimensions
Here's an example of embedder settings for Voyage AI:
```json theme={null}
{
  "voyage": {
    "source": "rest",
    "apiKey": "",
    "dimensions": 1024,
    "documentTemplate": "",
    "url": "https://api.voyageai.com/v1/embeddings",
    "request": {
      "model": "voyage-2",
      "input": ["{{text}}", "{{..}}"]
    },
    "response": {
      "data": [
        {
          "embedding": "{{embedding}}"
        },
        "{{..}}"
      ]
    }
  }
}
```
In this configuration:
* `source`: Specifies the source of the embedder, which is set to "rest" for using a REST API.
* `apiKey`: Replace the empty string with your actual Voyage AI API key.
* `dimensions`: Specifies the dimensions of the embeddings. Set to 1024 for `voyage-2`, `voyage-large-2-instruct`, and `voyage-multilingual-2`, or 1536 for `voyage-large-2`.
* `documentTemplate`: Optionally, you can provide a [custom template](/learn/ai_powered_search/getting_started_with_ai_search) for generating embeddings from your documents.
* `url`: Specifies the URL of the Voyage AI API endpoint.
* `request`: Defines the request structure for the Voyage AI API, including the model name and input parameters.
* `response`: Defines the expected response structure from the Voyage AI API, including the embedding data.
Once you've configured the embedder settings, Meilisearch will automatically generate embeddings for your documents and store them in the vector store.
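The `input` array in the `request` setting uses Meilisearch's template placeholders: `{{text}}` is where a document fragment is injected, and `{{..}}` marks the position where additional fragments are repeated, so several documents can be embedded in a single API call. As a rough illustration of these semantics, here is a hypothetical Python sketch (not Meilisearch's actual implementation):

```python theme={null}
# Hypothetical sketch of how a {{text}} / {{..}} request template expands.
# Meilisearch performs this rendering internally for `rest` embedders.

def render_request(template: dict, texts: list[str]) -> dict:
    """Expand a Meilisearch-style REST embedder request template."""
    payload = dict(template)
    rendered = []
    for item in template["input"]:
        if item == "{{text}}":
            # The first document fragment replaces {{text}}
            rendered.append(texts[0])
        elif item == "{{..}}":
            # Remaining fragments are repeated in place of {{..}}
            rendered.extend(texts[1:])
    payload["input"] = rendered
    return payload

template = {"model": "voyage-2", "input": ["{{text}}", "{{..}}"]}
payload = render_request(template, ["first doc", "second doc", "third doc"])
print(payload)  # {'model': 'voyage-2', 'input': ['first doc', 'second doc', 'third doc']}
```

Batching fragments this way keeps the number of calls to the embedding API low, which matters when the provider enforces rate limits.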
Please note that most third-party embedding APIs enforce rate limits, which Meilisearch manages for you. If you have a free account, the indexation process may take some time, but Meilisearch will handle it with a retry strategy.
It's recommended to monitor the tasks queue to ensure everything is running smoothly. You can access the tasks queue using the Cloud UI or the [Meilisearch API](/reference/api/async-task-management/list-tasks).
## Testing semantic search
With the embedder set up, you can now perform semantic searches using Meilisearch. When you send a search query, Meilisearch will generate an embedding for the query using the configured embedder and then use it to find the most semantically similar documents in the vector store.
To perform a semantic search, make a normal search request and include the `hybrid` parameter:
```json theme={null}
{
  "q": "",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "voyage"
  }
}
```
In this request:
* `q`: Represents the user's search query.
* `hybrid`: Specifies the configuration for the hybrid search.
* `semanticRatio`: Allows you to control the balance between semantic search and traditional search. A value of 1 indicates pure semantic search, while a value of 0 represents full-text search. You can adjust this parameter to achieve a hybrid search experience.
* `embedder`: The name of the embedder used for generating embeddings. Make sure to use the same name as specified in the embedder configuration, which in this case is "voyage".
You can use the Meilisearch API or client libraries to perform searches and retrieve the relevant documents based on semantic similarity.
## Conclusion
By following this guide, you should now have Meilisearch set up with Voyage AI embedding, enabling you to leverage semantic search capabilities in your application. Meilisearch's auto-batching and efficient handling of embeddings make it a powerful choice for integrating semantic search into your project.
To explore further configuration options for embedders, consult the [detailed documentation about the embedder setting possibilities](/reference/api/settings/list-all-settings).
# Front-end integration
Source: https://www.meilisearch.com/docs/guides/front_end/front_end_integration
Create a simple front-end interface to search through your dataset after following Meilisearch's quick start.
In the [quick start tutorial](/learn/self_hosted/getting_started_with_self_hosted_meilisearch), you learned how to launch Meilisearch and make a search request. This article will teach you how to create a simple front-end interface to search through your dataset.
Using [`instant-meilisearch`](https://github.com/meilisearch/instant-meilisearch) is the easiest way to build a front-end interface for search. `instant-meilisearch` is a plugin that establishes communication between a Meilisearch instance and [InstantSearch](https://github.com/algolia/instantsearch.js). InstantSearch, an open-source project developed by Algolia, renders all the components needed to start searching.
1. Create an empty file and name it `index.html`
2. Open it in a text editor like Notepad, Sublime Text, or Visual Studio Code
3. Copy-paste one of the code samples below
4. Open `index.html` in your browser by double-clicking it in your folder
The original code sample was lost; below is a minimal reconstruction of an `index.html`. The Meilisearch URL, API key, and index name are placeholders, and the exact global exposed by the UMD bundle may differ between `instant-meilisearch` versions, so treat this as a sketch:

```html theme={null}
<body>
  <!-- Container elements: instant-meilisearch renders the search bar
       in #searchbox and the list of search results in #hits -->
  <div class="wrapper">
    <div id="searchbox"></div>
    <div id="hits"></div>
  </div>

  <!-- Import instant-meilisearch and InstantSearch.js -->
  <script src="https://cdn.jsdelivr.net/npm/@meilisearch/instant-meilisearch/dist/instant-meilisearch.umd.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/instantsearch.js@4"></script>
  <script>
    // Placeholder credentials: point these at your own Meilisearch instance
    const { searchClient } = instantMeiliSearch(
      "MEILISEARCH_URL",
      "MEILISEARCH_SEARCH_API_KEY"
    );
    const search = instantsearch({
      indexName: "INDEX_NAME",
      searchClient,
    });
    search.addWidgets([
      instantsearch.widgets.searchBox({ container: "#searchbox" }),
      instantsearch.widgets.hits({ container: "#hits" }),
    ]);
    search.start();
  </script>
</body>
```

Here's what's happening:
* The two container elements, `#searchbox` and `#hits`, receive the UI: `instant-meilisearch` creates the search bar inside `#searchbox` and lists search results in `#hits`
* The two `<script src>` tags import `instant-meilisearch` and InstantSearch.js
* The inline script connects the two libraries to your Meilisearch instance and starts the search app
In the original sample, the URL and API key point to a public Meilisearch instance that contains data from Steam video games.
The `ais-instant-search` widget is the mandatory wrapper that allows you to configure your search. It takes two props: the `search-client` and the [`index-name`](/learn/getting_started/indexes#index-uid).
## 5. Add a search bar and list search results
Add the `ais-search-box` and `ais-hits` widgets inside the `ais-instant-search` wrapper widget.
Import the CSS library to style the search components.
```vue theme={null}
<ais-search-box />
<ais-hits>
  <template v-slot:item="{ item }">
    <!-- The markup around the item fields is illustrative -->
    <h2>{{ item.name }}</h2>
    <p>{{ item.description }}</p>
  </template>
</ais-hits>
```
Use the slot directive to customize how each search result is rendered.
Use the following CSS classes to add custom styles to your components:
`.ais-InstantSearch`, `.ais-SearchBox`, `.ais-InfiniteHits-list`, `.ais-InfiniteHits-item`
## 6. Start the app and search as you type
Start the app by running:
```bash theme={null}
npm run dev
```
Now open your browser, navigate to your Vue app URL (e.g., `localhost:5173`), and start searching.
Encountering issues? Check out the code in action in our [live demo](https://codesandbox.io/p/sandbox/ms-vue3-is-forked-wsrkl8)!
## Next steps
Want to search through your own data? [Create a project](https://cloud.meilisearch.com) in the Meilisearch Dashboard. Check out our [getting started guide](/learn/getting_started/cloud_quick_start) for step-by-step instructions.
# Using HTTP/2 and SSL with Meilisearch
Source: https://www.meilisearch.com/docs/guides/http2_ssl
Learn how to configure a server to use Meilisearch with HTTP/2.
If you want to use HTTP/2, be aware that it is **only possible if your server is configured with an SSL certificate**.
This tutorial therefore shows how to launch a Meilisearch server with SSL. It walks through the process locally, but the same steps work on a remote server.
First, you need the Meilisearch binary. Alternatively, you can use Docker, in which case you must pass the parameters as environment variables and mount the SSL certificates as a volume.
You also need a tool to generate SSL certificates. This guide uses [mkcert](https://github.com/FiloSottile/mkcert), but on a remote server you can also use certbot or certificates signed by a Certificate Authority.
Finally, you will use `curl` to send requests: its `--http2` option is a simple way to ask for HTTP/2.
## Try to use HTTP/2 without SSL
Start by running the binary.
```bash theme={null}
./meilisearch
```
And then, send a request.
```bash theme={null}
curl -kvs --http2 --request GET 'http://localhost:7700/indexes'
```
You will get the following answer from the server:
```bash theme={null}
* Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 7700 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 7700 (#0)
> GET /indexes HTTP/1.1
> Host: localhost:7700
> User-Agent: curl/7.64.1
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
>
< HTTP/1.1 200 OK
< content-length: 2
< content-type: application/json
< date: Fri, 17 Jul 2020 11:01:02 GMT
<
* Connection #0 to host localhost left intact
[]* Closing connection 0
```
You can see on the line `> Connection: Upgrade, HTTP2-Settings` that curl asks to upgrade the connection to HTTP/2, but the upgrade is unsuccessful.
The answer `< HTTP/1.1 200 OK` indicates that the server still uses HTTP/1.
## Try to use HTTP/2 with SSL
This time, start by generating the SSL certificates. mkcert creates two files: `127.0.0.1.pem` and `127.0.0.1-key.pem`.
```bash theme={null}
mkcert '127.0.0.1'
```
Then, use the certificate and the key to configure Meilisearch with SSL.
```bash theme={null}
./meilisearch --ssl-cert-path ./127.0.0.1.pem --ssl-key-path ./127.0.0.1-key.pem
```
Next, make the same request as above but change `http://` to `https://`.
```bash theme={null}
curl -kvs --http2 --request GET 'https://localhost:7700/indexes'
```
You will get the following answer from the server:
```bash theme={null}
* Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 7700 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 7700 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=mkcert development certificate; OU=quentindequelen@s-iMac (Quentin de Quelen)
* start date: Jun 1 00:00:00 2019 GMT
* expire date: Jul 17 10:38:53 2030 GMT
* issuer: O=mkcert development CA; OU=quentindequelen@s-iMac (Quentin de Quelen); CN=mkcert quentindequelen@s-iMac (Quentin de Quelen)
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7ff601009200)
> GET /indexes HTTP/2
> Host: localhost:7700
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
< HTTP/2 200
< content-length: 2
< content-type: application/json
< date: Fri, 17 Jul 2020 11:06:27 GMT
<
* Connection #0 to host localhost left intact
[]* Closing connection 0
```
You can see that the server now supports HTTP/2.
```bash theme={null}
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
```
The server successfully receives HTTP/2 requests.
```bash theme={null}
< HTTP/2 200
```
# Improve relevancy when working with large documents
Source: https://www.meilisearch.com/docs/guides/improve_relevancy_large_documents
Use JavaScript with Node.js to split a single large document and configure Meilisearch with a distinct attribute to prevent duplicated results.
Meilisearch is optimized for handling paragraph-sized chunks of text. Datasets with many documents containing large amounts of text may lead to reduced search result relevancy.
In this guide, you will see how to use JavaScript with Node.js to split a single large document and configure Meilisearch with a distinct attribute to prevent duplicated results.
## Requirements
* A running Meilisearch project
* A command-line console
* Node.js v18
## Dataset
`stories.json` contains two documents, each storing the full text of a short story in its `text` field:
```json theme={null}
[
{
"id": 0,
"title": "A Haunted House",
"author": "Virginia Woolf",
"text": "Whatever hour you woke there was a door shutting. From room to room they went, hand in hand, lifting here, opening there, making sure—a ghostly couple.\n\n \"Here we left it,\" she said. And he added, \"Oh, but here too!\" \"It's upstairs,\" she murmured. \"And in the garden,\" he whispered. \"Quietly,\" they said, \"or we shall wake them.\"\n\nBut it wasn't that you woke us. Oh, no. \"They're looking for it; they're drawing the curtain,\" one might say, and so read on a page or two. \"Now they've found it,\" one would be certain, stopping the pencil on the margin. And then, tired of reading, one might rise and see for oneself, the house all empty, the doors standing open, only the wood pigeons bubbling with content and the hum of the threshing machine sounding from the farm. \"What did I come in here for? What did I want to find?\" My hands were empty. \"Perhaps it's upstairs then?\" The apples were in the loft. And so down again, the garden still as ever, only the book had slipped into the grass.\n\nBut they had found it in the drawing room. Not that one could ever see them. The window panes reflected apples, reflected roses; all the leaves were green in the glass. If they moved in the drawing room, the apple only turned its yellow side. Yet, the moment after, if the door was opened, spread about the floor, hung upon the walls, pendant from the ceiling—what? My hands were empty. The shadow of a thrush crossed the carpet; from the deepest wells of silence the wood pigeon drew its bubble of sound. \"Safe, safe, safe,\" the pulse of the house beat softly. \"The treasure buried; the room ...\" the pulse stopped short. Oh, was that the buried treasure?\n\nA moment later the light had faded. Out in the garden then? But the trees spun darkness for a wandering beam of sun. So fine, so rare, coolly sunk beneath the surface the beam I sought always burnt behind the glass. 
Death was the glass; death was between us; coming to the woman first, hundreds of years ago, leaving the house, sealing all the windows; the rooms were darkened. He left it, left her, went North, went East, saw the stars turned in the Southern sky; sought the house, found it dropped beneath the Downs. \"Safe, safe, safe,\" the pulse of the house beat gladly. \"The Treasure yours.\"\n\nThe wind roars up the avenue. Trees stoop and bend this way and that. Moonbeams splash and spill wildly in the rain. But the beam of the lamp falls straight from the window. The candle burns stiff and still. Wandering through the house, opening the windows, whispering not to wake us, the ghostly couple seek their joy.\n\n\"Here we slept,\" she says. And he adds, \"Kisses without number.\" \"Waking in the morning—\" \"Silver between the trees—\" \"Upstairs—\" \"In the garden—\" \"When summer came—\" \"In winter snowtime—\" The doors go shutting far in the distance, gently knocking like the pulse of a heart.\n\nNearer they come; cease at the doorway. The wind falls, the rain slides silver down the glass. Our eyes darken; we hear no steps beside us; we see no lady spread her ghostly cloak. His hands shield the lantern. \"Look,\" he breathes. \"Sound asleep. Love upon their lips.\"\n\nStooping, holding their silver lamp above us, long they look and deeply. Long they pause. The wind drives straightly; the flame stoops slightly. Wild beams of moonlight cross both floor and wall, and, meeting, stain the faces bent; the faces pondering; the faces that search the sleepers and seek their hidden joy.\n\n\"Safe, safe, safe,\" the heart of the house beats proudly. \"Long years—\" he sighs. \"Again you found me.\" \"Here,\" she murmurs, \"sleeping; in the garden reading; laughing, rolling apples in the loft. Here we left our treasure—\" Stooping, their light lifts the lids upon my eyes. \"Safe! safe! safe!\" the pulse of the house beats wildly. Waking, I cry \"Oh, is this _your_ buried treasure? 
The light in the heart."
},
{
"id": 1,
"title": "Monday or Tuesday",
"author": "Virginia Woolf",
"text": "Lazy and indifferent, shaking space easily from his wings, knowing his way, the heron passes over the church beneath the sky. White and distant, absorbed in itself, endlessly the sky covers and uncovers, moves and remains. A lake? Blot the shores of it out! A mountain? Oh, perfect—the sun gold on its slopes. Down that falls. Ferns then, or white feathers, for ever and ever——\n\nDesiring truth, awaiting it, laboriously distilling a few words, for ever desiring—(a cry starts to the left, another to the right. Wheels strike divergently. Omnibuses conglomerate in conflict)—for ever desiring—(the clock asseverates with twelve distinct strokes that it is midday; light sheds gold scales; children swarm)—for ever desiring truth. Red is the dome; coins hang on the trees; smoke trails from the chimneys; bark, shout, cry \"Iron for sale\"—and truth?\n\nRadiating to a point men's feet and women's feet, black or gold-encrusted—(This foggy weather—Sugar? No, thank you—The commonwealth of the future)—the firelight darting and making the room red, save for the black figures and their bright eyes, while outside a van discharges, Miss Thingummy drinks tea at her desk, and plate-glass preserves fur coats——\n\nFlaunted, leaf-light, drifting at corners, blown across the wheels, silver-splashed, home or not home, gathered, scattered, squandered in separate scales, swept up, down, torn, sunk, assembled—and truth?\n\nNow to recollect by the fireside on the white square of marble. From ivory depths words rising shed their blackness, blossom and penetrate. Fallen the book; in the flame, in the smoke, in the momentary sparks—or now voyaging, the marble square pendant, minarets beneath and the Indian seas, while space rushes blue and stars glint—truth? or now, content with closeness?\n\nLazy and indifferent the heron returns; the sky veils her stars; then bares them."
}
]
```
Meilisearch works best with documents under 1kb in size. This roughly translates to a maximum of two or three paragraphs of text.
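If you are unsure whether your dataset needs splitting, a quick check is to measure each document's serialized size. This is a small Python sketch (the 1 KB threshold mirrors the guidance above; the sample documents are hypothetical):

```python theme={null}
import json

def oversized_documents(documents: list, limit_bytes: int = 1024) -> list:
    """Return the ids of documents whose JSON serialization exceeds limit_bytes."""
    return [
        doc["id"]
        for doc in documents
        if len(json.dumps(doc).encode("utf-8")) > limit_bytes
    ]

docs = [
    {"id": 0, "title": "A Haunted House", "text": "word " * 500},  # ~2.5 KB of text
    {"id": 1, "title": "Short note", "text": "just one paragraph"},
]
print(oversized_documents(docs))  # [0]
```

Any document this check flags is a good candidate for the paragraph-splitting approach described below.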
## Splitting documents
Create a `split_documents.js` file in your working directory:
```js theme={null}
#!/usr/bin/env node
const fs = require("fs");

const datasetPath = process.argv[2];
const datasetFile = fs.readFileSync(datasetPath);
const documents = JSON.parse(datasetFile);
const splitDocuments = [];

for (let documentNumber = documents.length, i = 0; i < documentNumber; i += 1) {
  const document = documents[i];
  const story = document.text;
  const paragraphs = story.split("\n\n");
  for (let paragraphNumber = paragraphs.length, o = 0; o < paragraphNumber; o += 1) {
    splitDocuments.push({
      "id": document.id,
      "title": document.title,
      "author": document.author,
      "text": paragraphs[o]
    });
  }
}

fs.writeFileSync("stories-split.json", JSON.stringify(splitDocuments));
```
Next, run the script on your console, specifying the path to your JSON dataset:
```sh theme={null}
node ./split_documents.js ./stories.json
```
This script accepts one argument: a path pointing to a JSON dataset. It reads the file and parses each document in it. For each paragraph in a document's `text` field, it creates a new document that copies the original's `id`, `title`, and `author` and stores the paragraph in `text`. Finally, it writes the new documents to `stories-split.json`.
## Generating unique IDs
Right now, Meilisearch would not accept the new dataset because many documents share the same primary key.
Update the script from the previous step to create a new field, `story_id`:
```js theme={null}
#!/usr/bin/env node
const fs = require("fs");

const datasetPath = process.argv[2];
const datasetFile = fs.readFileSync(datasetPath);
const documents = JSON.parse(datasetFile);
const splitDocuments = [];

for (let documentNumber = documents.length, i = 0; i < documentNumber; i += 1) {
  const document = documents[i];
  const story = document.text;
  const paragraphs = story.split("\n\n");
  for (let paragraphNumber = paragraphs.length, o = 0; o < paragraphNumber; o += 1) {
    splitDocuments.push({
      "story_id": document.id,
      "id": `${document.id}-${o}`,
      "title": document.title,
      "author": document.author,
      "text": paragraphs[o]
    });
  }
}

fs.writeFileSync("stories-split.json", JSON.stringify(splitDocuments));
```
The script now stores the original document's `id` in `story_id`. It then creates a new unique identifier for each new document and stores it in the primary key field.
## Configuring distinct attribute
This dataset is now valid, but since each document effectively points to the same story, queries are likely to result in duplicated search results.
To prevent that from happening, configure `story_id` as the index's distinct attribute:
```sh theme={null}
curl \
-X PUT 'MEILISEARCH_URL/indexes/INDEX_NAME/settings/distinct-attribute' \
-H 'Content-Type: application/json' \
--data-binary '"story_id"'
```
Users searching this dataset will now be able to find more relevant results across large chunks of text, without any loss of performance and no duplicates.
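Conceptually, a distinct attribute deduplicates hits: among results sharing the same `story_id`, only the best-ranked one is returned. Here is a toy Python illustration of that behavior (the hits are hypothetical, and this is not Meilisearch's implementation):

```python theme={null}
def deduplicate(hits: list, distinct_attribute: str) -> list:
    """Keep only the first (best-ranked) hit per distinct value,
    preserving ranking order -- the behavior a distinct attribute gives you."""
    seen = set()
    kept = []
    for hit in hits:  # hits are assumed sorted best-first
        key = hit[distinct_attribute]
        if key not in seen:
            seen.add(key)
            kept.append(hit)
    return kept

# Two paragraphs of story 0 match the query, but only the best one is kept
hits = [
    {"id": "0-3", "story_id": 0, "text": "ghostly couple paragraph"},
    {"id": "0-7", "story_id": 0, "text": "buried treasure paragraph"},
    {"id": "1-2", "story_id": 1, "text": "the heron returns paragraph"},
]
print([h["id"] for h in deduplicate(hits, "story_id")])  # ['0-3', '1-2']
```

This is why the `story_id` field from the previous step is the right choice for the distinct attribute: it groups every paragraph back to its original story.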
## Conclusion
You have seen how to split large documents to improve search relevancy. You also saw how to configure a distinct attribute to prevent Meilisearch from returning duplicate results.
Though this guide used JavaScript, you can replicate the process with any programming language you are comfortable using.
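As an example of replicating the process in another language, here is an equivalent sketch in Python that combines the splitting and unique-ID steps (reading and writing the JSON files is left out; the sample story is hypothetical):

```python theme={null}
def split_documents(documents: list) -> list:
    """Split each story into one document per paragraph, keeping the
    original id in story_id and giving each paragraph a unique primary key."""
    split = []
    for document in documents:
        for i, paragraph in enumerate(document["text"].split("\n\n")):
            split.append({
                "story_id": document["id"],
                "id": f"{document['id']}-{i}",
                "title": document["title"],
                "author": document["author"],
                "text": paragraph,
            })
    return split

stories = [{
    "id": 0,
    "title": "A Haunted House",
    "author": "Virginia Woolf",
    "text": "First paragraph.\n\nSecond paragraph.",
}]
print([c["id"] for c in split_documents(stories)])  # ['0-0', '0-1']
```

Pair it with `json.load`/`json.dump` to read your dataset and write `stories-split.json`, exactly as the Node.js script does.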
# Implementing semantic search with LangChain
Source: https://www.meilisearch.com/docs/guides/langchain
This guide shows you how to implement semantic search using LangChain and similarity search.
In this guide, you’ll use OpenAI’s text embeddings to measure the similarity between document properties. Then, you’ll use the LangChain framework to seamlessly integrate Meilisearch and create an application with semantic search.
## Requirements
This guide assumes a basic understanding of Python and LangChain. Beginners to LangChain will still find the tutorial accessible.
* Python (LangChain requires >=3.8.1 and <4.0) and the pip CLI
* A [Meilisearch >= 1.6 project](/learn/getting_started/cloud_quick_start)
* An [OpenAI API key](https://platform.openai.com/account/api-keys)
## Creating the application
Create a folder for your application with an empty `setup.py` file.
Before writing any code, install the necessary dependencies:
```bash theme={null}
pip install langchain openai meilisearch python-dotenv
```
First, create a `.env` file to store your credentials:
```
# .env
MEILI_HTTP_ADDR="your Meilisearch host"
MEILI_API_KEY="your Meilisearch API key"
OPENAI_API_KEY="your OpenAI API key"
```
Now that you have your environment variables available, create a `setup.py` file with some boilerplate code:
```python theme={null}
# setup.py
import os
from dotenv import load_dotenv  # remove if not using dotenv
from langchain.vectorstores import Meilisearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import JSONLoader

load_dotenv()  # remove if not using dotenv

# exit if missing env vars
if "MEILI_HTTP_ADDR" not in os.environ:
    raise Exception("Missing MEILI_HTTP_ADDR env var")
if "MEILI_API_KEY" not in os.environ:
    raise Exception("Missing MEILI_API_KEY env var")
if "OPENAI_API_KEY" not in os.environ:
    raise Exception("Missing OPENAI_API_KEY env var")

# Setup code will go here 👇
```
## Importing documents and embeddings
Now that the project is ready, import some documents in Meilisearch. First, download this small movies dataset:
Download movies-lite.json
Then, update `setup.py` to load the JSON file and store its documents in Meilisearch. You will also use OpenAI's text embedding models to generate vector embeddings.
To use vector search, you must configure the `embedders` index setting. Here you use a `userProvided` source, which requires specifying the size of the vectors in a `dimensions` field. The default model used by `OpenAIEmbeddings()` is `text-embedding-ada-002`, which has 1,536 dimensions.
```python theme={null}
# setup.py
# previous code

# Load documents
loader = JSONLoader(
    file_path="./movies-lite.json",
    jq_schema=".[] | {id: .id, overview: .overview, title: .title}",
    text_content=False,
)
documents = loader.load()
print("Loaded {} documents".format(len(documents)))

# Store documents in Meilisearch
embeddings = OpenAIEmbeddings()
embedders = {
    "custom": {
        "source": "userProvided",
        "dimensions": 1536
    }
}
embedder_name = "custom"
vector_store = Meilisearch.from_documents(
    documents=documents,
    embedding=embeddings,
    embedders=embedders,
    embedder_name=embedder_name
)
print("Started importing documents")
```
Your Meilisearch instance will now contain your documents. Meilisearch runs tasks like document import asynchronously, so you might need to wait a bit for documents to be available. Consult [the asynchronous operations explanation](/learn/async/asynchronous_operations) for more information on how tasks work.
## Performing similarity search
Your database is now populated with the data from the movies dataset. Create a new `search.py` file to make a semantic search query: searching for documents using similarity search.
```python theme={null}
# search.py
import os
from dotenv import load_dotenv
from langchain.vectorstores import Meilisearch
from langchain.embeddings.openai import OpenAIEmbeddings
import meilisearch

load_dotenv()

# You can use the same code as `setup.py` to check for missing env vars

# Create the vector store
client = meilisearch.Client(
    url=os.environ.get("MEILI_HTTP_ADDR"),
    api_key=os.environ.get("MEILI_API_KEY"),
)
embeddings = OpenAIEmbeddings()
vector_store = Meilisearch(client=client, embedding=embeddings)

# Make similarity search
embedder_name = "custom"
query = "superhero fighting evil in a city at night"
results = vector_store.similarity_search(
    query=query,
    embedder_name=embedder_name,
    k=3,
)

# Display results
for result in results:
    print(result.page_content)
```
Run `search.py`. If everything is working correctly, you should see an output like this:
```
{"id": 155, "title": "The Dark Knight", "overview": "Batman raises the stakes in his war on crime. With the help of Lt. Jim Gordon and District Attorney Harvey Dent, Batman sets out to dismantle the remaining criminal organizations that plague the streets. The partnership proves to be effective, but they soon find themselves prey to a reign of chaos unleashed by a rising criminal mastermind known to the terrified citizens of Gotham as the Joker."}
{"id": 314, "title": "Catwoman", "overview": "Liquidated after discovering a corporate conspiracy, mild-mannered graphic artist Patience Phillips washes up on an island, where she's resurrected and endowed with the prowess of a cat -- and she's eager to use her new skills ... as a vigilante. Before you can say \"cat and mouse,\" handsome gumshoe Tom Lone is on her tail."}
{"id": 268, "title": "Batman", "overview": "Batman must face his most ruthless nemesis when a deformed madman calling himself \"The Joker\" seizes control of Gotham's criminal underworld."}
```
Congrats 🎉 You managed to make a similarity search using Meilisearch as a LangChain vector store.
## Going further
Using Meilisearch as a LangChain vector store allows you to load documents and search for them in different ways:
* [Import documents from text](https://python.langchain.com/docs/integrations/vectorstores/meilisearch#adding-text-and-embeddings)
* [Similarity search with score](https://python.langchain.com/docs/integrations/vectorstores/meilisearch#similarity-search-with-score)
* [Similarity search by vector](https://python.langchain.com/docs/integrations/vectorstores/meilisearch#similarity-search-by-vector)
For additional information, consult:
[Meilisearch Python SDK docs](https://python-sdk.meilisearch.com/)
Finally, should you want to use Meilisearch's vector search capabilities without LangChain or its hybrid search feature, refer to the [dedicated tutorial](/learn/ai_powered_search/getting_started_with_ai_search).
# Laravel multitenancy guide
Source: https://www.meilisearch.com/docs/guides/laravel_multitenancy
Learn how to implement secure, multitenant search in your Laravel applications.
This guide will walk you through implementing search in a multitenant Laravel application. We'll use the example of a customer relationship manager (CRM) application that allows users to store contacts.
## Requirements
This guide requires:
* A Laravel 10 application with [Laravel Scout](https://laravel.com/docs/10.x/scout) configured to use the `meilisearch` driver
* A Meilisearch server running — see our [quick start](/learn/getting_started/cloud_quick_start)
* A search API key — available in your Meilisearch dashboard
* A search API key UID — retrieve it using the [keys endpoints](/reference/api/keys/list-api-keys)
Prefer self-hosting? Read our [installation guide](/learn/self_hosted/install_meilisearch_locally).
## Models & relationships
Our example CRM is a multitenant application, where each user can only access data belonging to their organization.
On a technical level, this means:
* A `User` model that belongs to an `Organization`
* A `Contact` model that belongs to an `Organization` (can only be accessed by users from the same organization)
* An `Organization` model that has many `User`s and many `Contact`s
With that in mind, the first step is to define these models and their relationships:
In `app/Models/Contact.php`:
```php theme={null}
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Laravel\Scout\Searchable;

class Contact extends Model
{
    // The Searchable trait is assumed here: contacts are the resource
    // indexed by Laravel Scout in this guide
    use Searchable;

    public function organization()
    {
        return $this->belongsTo(Organization::class, 'organization_id');
    }
}
```
In `app/Models/User.php`:
```php theme={null}
<?php

namespace App\Models;

use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    /**
     * The attributes that are mass assignable.
     *
     * @var array<int, string>
     */
    protected $fillable = [
        'name',
        'email',
        'password',
    ];

    /**
     * The attributes that should be hidden for serialization.
     *
     * @var array<int, string>
     */
    protected $hidden = [
        'password',
        'remember_token',
    ];

    /**
     * The attributes that should be cast.
     *
     * @var array<string, string>
     */
    protected $casts = [
        'email_verified_at' => 'datetime',
        'password' => 'hashed',
    ];

    public function organization()
    {
        return $this->belongsTo(Organization::class, 'organization_id');
    }
}
```
And in `app/Models/Organization.php`:
```php theme={null}
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Organization extends Model
{
    public function contacts()
    {
        return $this->hasMany(Contact::class);
    }
}
```
Now that you have a solid understanding of your application's models and their relationships, you are ready to generate tenant tokens.
## Generating tenant tokens
Currently, any `User` can search through data belonging to every `Organization`. To prevent this, generate a tenant token for each organization. You can then use this token to authenticate requests to Meilisearch and ensure that users can only access data from their own organization. All `User`s within the same `Organization` share the same token.
In this guide, you will generate the token when the organization is retrieved from the database. If the organization has no token, you will generate one and store it in the `meilisearch_token` attribute.
Update `app/Models/Organization.php`:
```php theme={null}
<?php

namespace App\Models;

use DateTime;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\Log;
use Laravel\Scout\EngineManager;

class Organization extends Model
{
    public function contacts()
    {
        return $this->hasMany(Contact::class);
    }

    protected static function booted()
    {
        static::retrieved(function (Organization $organization) {
            // You may want to add some logic to skip generating tokens in certain environments
            if (env('SCOUT_DRIVER') === 'array' && env('APP_ENV') === 'testing') {
                $organization->meilisearch_token = 'fake-tenant-token';
                return;
            }

            // Early return if the organization already has a token
            if ($organization->meilisearch_token) {
                Log::debug('Organization ' . $organization->id . ': already has a token');
                return;
            }

            Log::debug('Generating tenant token for organization ID: ' . $organization->id);

            // The object below is used to generate a tenant token that:
            // • applies to all indexes
            // • filters only documents where `organization_id` is equal to this org ID
            $searchRules = (object) [
                '*' => (object) [
                    'filter' => 'organization_id = ' . $organization->id,
                ]
            ];

            // Replace with your own Search API key and API key UID
            $meiliApiKey = env('MEILISEARCH_SEARCH_KEY');
            $meiliApiKeyUid = env('MEILISEARCH_SEARCH_KEY_UID');

            // Generate the token
            $token = self::generateMeiliTenantToken($meiliApiKeyUid, $searchRules, $meiliApiKey);

            // Save the token in the database
            $organization->meilisearch_token = $token;
            $organization->save();
        });
    }

    protected static function generateMeiliTenantToken($meiliApiKeyUid, $searchRules, $meiliApiKey)
    {
        $meilisearch = resolve(EngineManager::class)->engine();

        return $meilisearch->generateTenantToken(
            $meiliApiKeyUid,
            $searchRules,
            [
                'apiKey' => $meiliApiKey,
                'expiresAt' => new DateTime('2030-12-31'),
            ]
        );
    }
}
```
Now that the `Organization` model generates tenant tokens, you need to provide the front end with these tokens so it can access Meilisearch securely.
## Using tenant tokens with Laravel Blade
Use [view composers](https://laravel.com/docs/10.x/views#view-composers) to provide views with your search token. This way, you ensure the token is available in all views, without having to pass it manually.
If you prefer, you can pass the token manually to each view using the `with` method.
Create a new `app/View/Composers/AuthComposer.php` file:
```php theme={null}
<?php

namespace App\View\Composers;

use Illuminate\Support\Facades\Auth;
use Illuminate\View\View;

class AuthComposer
{
    public function compose(View $view)
    {
        $user = Auth::user();

        $view->with([
            'meilisearchToken' => $user->organization->meilisearch_token,
        ]);
    }
}
```
Now, register this view composer in the `AppServiceProvider`. Then, create a Vue component that receives the Meilisearch host, tenant token, and index name as props and initializes the InstantSearch client:
```vue theme={null}
<script setup lang="ts">
import { instantMeiliSearch } from "@meilisearch/instant-meilisearch"

const props = defineProps<{
  host: string,
  apiKey: string,
  indexName: string,
}>()

const { searchClient } = instantMeiliSearch(props.host, props.apiKey)
</script>
```
You can use the `Meilisearch` component in any Blade view by providing it with the tenant token. Don't forget to add the `@vite` directive to include the Vue app in your view.
```blade theme={null}
{{-- The props below match the Vue component; "contacts" is an example index name --}}
<meilisearch
    host="{{ config('scout.meilisearch.host') }}"
    api-key="{{ $meilisearchToken }}"
    index-name="contacts"
></meilisearch>

@push('scripts')
    @vite('resources/js/vue-app.js')
@endpush
```
Et voilà! You now have a search interface that is secure and multitenant. Users can only access data from their organization, and you can rest assured that data from other tenants is safe.
## Conclusion
In this guide, you saw how to implement secure, multitenant search in a Laravel application. You then generated tenant tokens for each organization and used them to secure access to Meilisearch. You also built a search interface using Vue InstantSearch and provided it with the tenant token.
All the code in this guide is a simplified example of what we implemented in the [Laravel CRM](https://saas.meilisearch.com/?utm_campaign=oss\&utm_source=docs\&utm_medium=laravel-multitenancy) example application. Find the full code on [GitHub](https://github.com/meilisearch/saas-demo).
# Laravel Scout guide
Source: https://www.meilisearch.com/docs/guides/laravel_scout
Learn how to use Meilisearch with Laravel Scout.
In this guide, you will see how to set up [Laravel Scout](https://laravel.com/docs/10.x/scout) to use Meilisearch in your Laravel 10 application.
## Prerequisites
Before you start, make sure you have the following installed on your machine:
* PHP
* [Composer](https://getcomposer.org/)
You will also need a Laravel application. If you don't have one, you can create a new one by running the following command:
```sh theme={null}
composer create-project laravel/laravel my-application
```
## Installing Laravel Scout
Laravel comes with out-of-the-box full-text search capabilities via Laravel Scout.
To enable it, navigate to your Laravel application directory and install Scout via the Composer package manager:
```sh theme={null}
composer require laravel/scout
```
After installing Scout, you need to publish the Scout configuration file. You can do this by running the following `artisan` command:
```sh theme={null}
php artisan vendor:publish --provider="Laravel\Scout\ScoutServiceProvider"
```
This command should create a new configuration file in your application directory: `config/scout.php`.
## Configuring the Laravel Scout driver
Now you need to configure Laravel Scout to use the Meilisearch driver. First, install the dependencies required to use Scout with Meilisearch via Composer:
```sh theme={null}
composer require meilisearch/meilisearch-php http-interop/http-factory-guzzle
```
Then, update the environment variables in your `.env` file:
```sh theme={null}
SCOUT_DRIVER=meilisearch
# Use the host below if you're running Meilisearch via Laravel Sail
MEILISEARCH_HOST=http://meilisearch:7700
MEILISEARCH_KEY=masterKey
```
### Local development
Laravel's official Docker development environment, Laravel Sail, comes with a Meilisearch service out of the box. Please note that when running Meilisearch via Sail, Meilisearch's host is `http://meilisearch:7700` (instead of, say, `http://localhost:7700`).
Check out Docker [Bridge network driver](https://docs.docker.com/network/drivers/bridge/#differences-between-user-defined-bridges-and-the-default-bridge) documentation for further detail.
### Running in production
For production use cases, we recommend using a managed Meilisearch via [Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=laravel\&utm_source=docs\&utm_medium=laravel-scout-guide). On Meilisearch Cloud, you can find your host URL in your project settings.
Read the [Meilisearch Cloud quick start](/learn/getting_started/cloud_quick_start).
If you prefer to self-host, read our guide for [running Meilisearch in production](/guides/running_production).
## Making Eloquent models searchable
With Scout installed and configured, add the `Laravel\Scout\Searchable` trait to your Eloquent models to make them searchable. This trait will use Laravel’s model observers to keep the data in your model in sync with Meilisearch.
Here’s an example model:
```php theme={null}
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Laravel\Scout\Searchable;

class Contact extends Model
{
    use Searchable;

    public function company()
    {
        return $this->belongsTo(Company::class);
    }

    public function toSearchableArray(): array
    {
        // All model attributes are made searchable
        $array = $this->toArray();

        // Then we add some additional fields
        $array['organization_id'] = $this->company->organization->id;
        $array['company_name'] = $this->company->name;
        $array['company_url'] = $this->company->url;

        return $array;
    }
}
```
## Configuring filterable and sortable attributes
Configure which attributes are [filterable](/learn/filtering_and_sorting/filter_search_results) and [sortable](/learn/filtering_and_sorting/sort_search_results) via your Meilisearch index settings.
In Laravel, you can configure your index settings via the `config/scout.php` file:
```php theme={null}
<?php

use App\Models\Contact;

return [
    // ...

    'meilisearch' => [
        'host' => env('MEILISEARCH_HOST', 'https://edge.meilisearch.com'),
        'key' => env('MEILISEARCH_KEY'),
        'index-settings' => [
            Contact::class => [
                'filterableAttributes' => ['organization_id'],
                'sortableAttributes' => ['name', 'company_name'],
            ],
        ],
    ],
];
```
The example above updates Meilisearch index settings for the `Contact` model:
* it makes the `organization_id` field filterable
* it makes the `name` and `company_name` fields sortable
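In Meilisearch terms, this corresponds to the following index settings payload (shown for reference only; Scout applies it for you when you synchronize):

```json theme={null}
{
  "filterableAttributes": ["organization_id"],
  "sortableAttributes": ["name", "company_name"]
}
```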
After changing your index settings, you will need to synchronize your Scout index settings.
## Synchronizing your index settings
To synchronize your index settings, run the following command:
```sh theme={null}
php artisan scout:sync-index-settings
```
## Example usage
We built an example application to demonstrate how to use Meilisearch with Laravel Scout. It showcases an app-wide search in a CRM (Customer Relationship Management) application.
This demo application uses the following features:
* [Multi-search](/reference/api/search/perform-a-multi-search) (search across multiple indexes)
* [Multi-tenancy](/learn/security/multitenancy_tenant_tokens)
* [Filtering](/learn/filtering_and_sorting/filter_search_results)
* [Sorting](/learn/filtering_and_sorting/sort_search_results)
Of course, the code is open-sourced on [GitHub](https://github.com/meilisearch/saas-demo). 🎉
# Node.js multitenancy guide
Source: https://www.meilisearch.com/docs/guides/multitenancy_nodejs
Learn how to implement secure, multitenant search in your Node.js applications.
This guide will walk you through implementing search in a multitenant Node.js application handling sensitive medical data.
## What is multitenancy?
In Meilisearch, you might have one index containing data belonging to many distinct tenants. In such cases, your tenants must only be able to search through their own documents. You can implement this using [tenant tokens](/learn/security/multitenancy_tenant_tokens).
## Requirements
* [Node.js](https://nodejs.org/en) and a package manager like `npm`, `yarn`, or `pnpm`
* [Meilisearch JavaScript SDK](/learn/resources/sdks)
* A Meilisearch server running — see our [quick start](/learn/getting_started/cloud_quick_start)
* A search API key — available in your Meilisearch dashboard
* A search API key UID — retrieve it using the [keys endpoints](/reference/api/keys/list-api-keys)
Prefer self-hosting? Read our [installation guide](/learn/self_hosted/install_meilisearch_locally).
## Data models
This guide uses a simple data model to represent medical appointments. The documents in the Meilisearch index will look like this:
```json theme={null}
[
{
"id": 1,
"patient": "John",
"details": "I think I caught a cold. Can you help me?",
"status": "pending"
},
{
"id": 2,
"patient": "Zia",
"details": "I'm suffering from fever. I need an appointment ASAP.",
"status": "pending"
},
{
"id": 3,
"patient": "Kevin",
"details": "Some confidential information Kevin has shared.",
"status": "confirmed"
}
]
```
For the purpose of this guide, we assume documents are stored in an `appointments` index.
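To make the isolation concrete, here is a plain-JavaScript simulation (for illustration only; the real filtering happens inside Meilisearch) of what a `patient = <name>` search rule does to the documents above:

```javascript
// Simulation of the effect a tenant token filter has.
// This is NOT an SDK call: Meilisearch applies the filter server-side.
const appointments = [
  { id: 1, patient: 'John', details: 'I think I caught a cold. Can you help me?', status: 'pending' },
  { id: 2, patient: 'Zia', details: "I'm suffering from fever. I need an appointment ASAP.", status: 'pending' },
  { id: 3, patient: 'Kevin', details: 'Some confidential information Kevin has shared.', status: 'confirmed' },
]

// Equivalent of the search rule filter `patient = <name>`
function visibleTo(patientName) {
  return appointments.filter((doc) => doc.patient === patientName)
}

console.log(visibleTo('John').map((doc) => doc.id)) // each patient only sees their own documents
```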
## Creating a tenant token
The first step is generating a tenant token that allows a given patient to search only their own appointments. To achieve this, create a token whose search rules filter results based on the patient's name.
Create a `search.js` file and use the following code to generate a tenant token:
```js theme={null}
// search.js
import { Meilisearch } from 'meilisearch'
const apiKey = 'YOUR_SEARCH_API_KEY'
const apiKeyUid = 'YOUR_SEARCH_API_KEY_UID'
const indexName = 'appointments'
const client = new Meilisearch({
host: 'https://edge.meilisearch.com', // Your Meilisearch host
apiKey: apiKey
})
export function createTenantToken(patientName) {
const searchRules = {
[indexName]: {
'filter': `patient = ${patientName}`
}
}
const tenantToken = client.generateTenantToken(
apiKeyUid,
searchRules,
{
expiresAt: new Date('2030-01-01'), // Choose an expiration date
apiKey: apiKey,
}
)
return tenantToken
}
```
When Meilisearch gets a search query with a tenant token, it decodes it and applies the search rules to the search request. In this example, the results are filtered by the `patient` field. This means that a patient can only search for their own appointments.
## Using the tenant token
Now that you have a tenant token, use it to perform searches. To achieve this, you will need to:
* On the server: create an endpoint to send the token to your front-end
* On the client: retrieve the token and use it to perform searches
### Serving the tenant token
This guide uses [Express.js](https://expressjs.com/en/starter/installing.html) to create the server. You can install `express` by running:
```sh theme={null}
# with NPM
npm i express
# with Yarn
yarn add express
# with pnpm
pnpm add express
```
Then, add the following code in a `server.js` file:
```js theme={null}
// server.js
import express from 'express'
import { createTenantToken } from './search.js'
const server = express()
server.get('/token', async (request, response) => {
const { id: patientId } = request.query
const token = createTenantToken(patientId)
return response.json({ token });
})
server.listen(3000, () => {
console.log('Server is running on port 3000')
})
```
This code creates an endpoint at `http://localhost:3000/token` that accepts an `id` query parameter and returns a tenant token.
### Making a search
Now that you have an endpoint, use it to retrieve the tenant token in your front-end application. This guide uses [InstantSearch.js](/guides/front_end/front_end_integration) to create a search interface. You will use CDN links to include InstantSearch.js and the Meilisearch InstantSearch.js connector in your HTML file.
Create a `client.html` file and insert this code:
```html theme={null}
<!-- Minimal sketch of the search client. Host, patient ID, and index name
     are example values; adapt them to your setup. -->
<!DOCTYPE html>
<html>
  <head>
    <title>Appointment search</title>
  </head>
  <body>
    <div id="searchbox"></div>
    <div id="hits"></div>

    <script src="https://cdn.jsdelivr.net/npm/@meilisearch/instant-meilisearch/dist/instant-meilisearch.umd.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/instantsearch.js@4"></script>
    <script>
      // Retrieve the tenant token from the Express endpoint
      fetch('http://localhost:3000/token?id=1')
        .then((response) => response.json())
        .then(({ token }) => {
          // NOTE: the UMD global and return shape can vary between connector
          // versions; recent builds return an object containing `searchClient`
          const { searchClient } = instantMeiliSearch(
            'https://edge.meilisearch.com', // Your Meilisearch host
            token
          )

          const search = instantsearch({ indexName: 'appointments', searchClient })
          search.addWidgets([
            instantsearch.widgets.searchBox({ container: '#searchbox' }),
            instantsearch.widgets.hits({ container: '#hits' }),
          ])
          search.start()
        })
    </script>
  </body>
</html>
```
Ta-da! You have successfully implemented a secure, multitenant search in your Node.js application. Users will only be able to search for documents that belong to them.
## Conclusion
In this guide, you saw how to implement secure, multitenant search in a Node.js application. You then created an endpoint to generate tenant tokens for each user. You also built a search interface with InstantSearch to make searches using the tenant token.
All the code in this guide is taken from our [multitenancy example](https://tenant-token.meilisearch.com/?utm_campaign=oss\&utm_source=docs\&utm_medium=node-multitenancy) application. The code is available on [GitHub](https://github.com/meilisearch/tutorials/tree/main/src/tenant-token-tutorial).
# Interpreting ranking score details
Source: https://www.meilisearch.com/docs/guides/relevancy/interpreting_ranking_scores
Learn how to understand ranking score details to see how Meilisearch evaluates each result and which rules determined their order.
# How do I interpret ranking score details?
[In the previous guide](/guides/relevancy/ordering_ranking_rules), we covered how ranking rules determine result order and how changing their sequence affects what your users see first. But when you're actually making those tweaks, how do you know if they're working the way you expect?
That's where ranking score details come in. They give you a behind-the-scenes view of every ranking decision Meilisearch made for each result — with specific numeric scores for each relevancy rule, in the order they were evaluated.
You'll be able to see things like: did Proximity decide this result's position, or was it Typo? Did Sort even get a chance to act, or did an earlier rule already settle things? And since Sort doesn't measure relevance (it shows a `value` rather than a `score`), the details also make it clear exactly where Sort slotted into the evaluation path and whether it actually influenced the final order.
**Firstly, how do I see ranking score details?**
When you search, pass the `"showRankingScoreDetails": true` parameter. Meilisearch will then return an in-depth look at how each ranking rule evaluated your results:
```bash cURL theme={null}
curl \
-X POST 'MEILISEARCH_URL/indexes/movies/search' \
-H 'Content-Type: application/json' \
--data-binary '{
"q": "dragon",
"showRankingScoreDetails": true
}'
```
```javascript JS theme={null}
client.index('movies').search('dragon', { showRankingScoreDetails: true })
```
```python Python theme={null}
client.index('movies').search('dragon', {
'showRankingScoreDetails': True
})
```
```php PHP theme={null}
$client->index('movies')->search('dragon', [
'showRankingScoreDetails' => true
]);
```
```java Java theme={null}
SearchRequest searchRequest = SearchRequest.builder().q("dragon").showRankingScoreDetails(true).build();
client.index("movies").search(searchRequest);
```
```ruby Ruby theme={null}
client.index('movies').search('dragon', {
show_ranking_score_details: true
})
```
```go Go theme={null}
resp, err := client.Index("movies").Search("dragon", &meilisearch.SearchRequest{
ShowRankingScoreDetails: true,
})
```
```csharp C# theme={null}
var query = new SearchQuery()
{
    ShowRankingScoreDetails = true
};
await client.Index("movies").SearchAsync("dragon", query);
```
```rust Rust theme={null}
let results: SearchResults = client
.index("movies")
.search()
.with_query("dragon")
.with_show_ranking_score_details(true)
.execute()
.await
.unwrap();
```
```swift Swift theme={null}
let searchParameters = SearchParameters(query: "dragon", showRankingScoreDetails: true)
let movies: Searchable = try await client.index("movies").search(searchParameters)
```
Here is an example response with ranking score details:
```json theme={null}
{
"hits": [
{
"id": 31072,
"title": "Dragon",
"overview": "In a desperate attempt to save her kingdom…",
…
"_rankingScoreDetails": {
"words": {
"order": 0,
"matchingWords": 4,
"maxMatchingWords": 4,
"score": 1.0
},
"typo": {
"order": 2,
"typoCount": 1,
"maxTypoCount": 4,
"score": 0.75
},
"name:asc": {
"order": 1,
"value": "Dragon"
}
}
},
…
],
…
}
```
# Ranking rules: same data, different results. How `sort` placement changes outcomes
## The setup
You run a **recipe search app**. You have two recipes in your index:
```json theme={null}
[
{
"id": 1,
"title": "Easy Chicken Curry",
"description": "A quick and simple chicken curry ready in 20 minutes",
"prep_time_minutes": 20
},
{
"id": 2,
"title": "Chicken Stew with Curry Spices and Vegetables",
"description": "A hearty stew with warming spices",
"prep_time_minutes": 15
}
]
```
A user searches for `"chicken curry"` and sorts by `prep_time_minutes:asc` (quickest first).
Both documents match both search words. But **Doc 1** is clearly the stronger text match as `"chicken"` and `"curry"` appear right next to each other in the title. **Doc 2** has both words in the title too, but they're separated by several other words.
Let's see how moving Sort **one position** in your ranking rules changes which result comes first, and how to read the ranking score details to understand why.
***
## Scenario A: `sort` placed AFTER Group 1 rules (recommended)
We've set up our ranking rules with `sort` placed after the Group 1 wide-net rules.
```json theme={null}
["words", "typo", "proximity", "sort", "attribute", "exactness"]
```
With this setup, Meilisearch evaluates the text relevance rules first, *then* uses Sort.
### 🥇 Result #1 — Easy Chicken Curry
```json theme={null}
{
"prep_time_minutes": 20,
"title": "Easy Chicken Curry",
"id": 1,
"description": "A quick and simple chicken curry ready in 20 minutes",
"_rankingScore": 0.9982363315696648,
"_rankingScoreDetails": {
"words": {
"order": 0,
"matchingWords": 2,
"maxMatchingWords": 2,
"score": 1.0
},
"typo": { "order": 1, "typoCount": 0, "maxTypoCount": 2, "score": 1.0 },
"proximity": { "order": 2, "score": 1.0 },
"prep_time_minutes:asc": { "order": 3, "value": 20.0 },
"attribute": {
"order": 4,
"attributeRankingOrderScore": 1.0,
"queryWordDistanceScore": 0.9047619047619048,
"score": 0.9682539682539683
},
"exactness": {
"order": 5,
"matchType": "noExactMatch",
"matchingWords": 2,
"maxMatchingWords": 2,
"score": 0.3333333333333333
}
}
}
```
### 🥈 Result #2 — Chicken Stew with Curry Spices and Vegetables
```json theme={null}
{
"prep_time_minutes": 15,
"title": "Chicken Stew with Curry Spices and Vegetables",
"id": 2,
"description": "A hearty stew with warming spices",
"_rankingScore": 0.9149029982363316,
"_rankingScoreDetails": {
"words": {
"order": 0,
"matchingWords": 2,
"maxMatchingWords": 2,
"score": 1.0
},
"typo": { "order": 1, "typoCount": 0, "maxTypoCount": 2, "score": 1.0 },
"proximity": { "order": 2, "score": 0.5 },
"prep_time_minutes:asc": { "order": 3, "value": 15.0 },
"attribute": {
"order": 4,
"attributeRankingOrderScore": 1.0,
"queryWordDistanceScore": 0.9047619047619048,
"score": 0.9682539682539683
},
"exactness": {
"order": 5,
"matchType": "noExactMatch",
"matchingWords": 2,
"maxMatchingWords": 2,
"score": 0.3333333333333333
}
}
```
### What decided this? Reading the score details
Walk through the rules in `order` (0, 1, 2…) and look for where the scores diverge:
| Step | Rule | Doc 1 | Doc 2 | Outcome |
| ---- | ------------- | --------------- | --------------- | --------------------- |
| 0 | **Words** | 2/2 → `1.0` | 2/2 → `1.0` | 🤝 Tie |
| 1 | **Typo** | 0 typos → `1.0` | 0 typos → `1.0` | 🤝 Tie |
| 2 | **Proximity** | `1.0` | `0.5` | ✅ **Doc 1 wins here** |
Proximity broke the tie. `"chicken"` and `"curry"` sit right next to each other in Doc 1's title (score `1.0`), but are separated by three words in Doc 2's title (score `0.5`).
Sort (order 3) never got a chance to act because Proximity already decided the winner. **Even though Doc 2 has a faster prep time (15 min vs 20 min), it ranks second because text relevance was evaluated first.**
Also notice: Sort shows a `value` instead of a `score`. That's because Sort doesn't measure relevance, it just orders by the field value. This is why Sort doesn't contribute to `_rankingScore`.
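Reading details this way can even be automated. The helper below is hypothetical (not part of any Meilisearch SDK); it walks two `_rankingScoreDetails` objects in evaluation order and reports the first rule where they diverge:

```javascript
// Hypothetical helper: find the first ranking rule (by evaluation order)
// where two hits' ranking score details diverge.
function firstDivergence(detailsA, detailsB) {
  const rules = Object.entries(detailsA).sort((a, b) => a[1].order - b[1].order)
  for (const [rule, detail] of rules) {
    const a = detail.score ?? detail.value // Sort rules expose a value, not a score
    const b = detailsB[rule]?.score ?? detailsB[rule]?.value
    if (a !== b) return { rule, a, b }
  }
  return null // full tie across every rule
}

// Scenario A details, abridged to the deciding fields
const doc1 = {
  words: { order: 0, score: 1.0 },
  typo: { order: 1, score: 1.0 },
  proximity: { order: 2, score: 1.0 },
  'prep_time_minutes:asc': { order: 3, value: 20.0 },
}
const doc2 = {
  words: { order: 0, score: 1.0 },
  typo: { order: 1, score: 1.0 },
  proximity: { order: 2, score: 0.5 },
  'prep_time_minutes:asc': { order: 3, value: 15.0 },
}

console.log(firstDivergence(doc1, doc2)) // { rule: 'proximity', a: 1, b: 0.5 }
```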
***
## Scenario B: `sort` placed BEFORE Group 1 rules
Now let's move `sort` to the top of our ranking rules:
```json theme={null}
["sort", "words", "typo", "proximity", "attribute", "exactness"]
```
### 🥇 Result #1 — Chicken Stew with Curry Spices and Vegetables
```json theme={null}
{
"prep_time_minutes": 15,
"title": "Chicken Stew with Curry Spices and Vegetables",
"id": 2,
"description": "A hearty stew with warming spices",
"_rankingScore": 0.9149029982363316,
"_rankingScoreDetails": {
"prep_time_minutes:asc": { "order": 0, "value": 15.0 },
"words": {
"order": 1,
"matchingWords": 2,
"maxMatchingWords": 2,
"score": 1.0
},
"typo": { "order": 2, "typoCount": 0, "maxTypoCount": 2, "score": 1.0 },
"proximity": { "order": 3, "score": 0.5 },
"attribute": {
"order": 4,
"attributeRankingOrderScore": 1.0,
"queryWordDistanceScore": 0.9047619047619048,
"score": 0.9682539682539683
},
"exactness": {
"order": 5,
"matchType": "noExactMatch",
"matchingWords": 2,
"maxMatchingWords": 2,
"score": 0.3333333333333333
}
}
}
```
### 🥈 Result #2 — Easy Chicken Curry
```json theme={null}
{
"prep_time_minutes": 20,
"title": "Easy Chicken Curry",
"id": 1,
"description": "A quick and simple chicken curry ready in 20 minutes",
"_rankingScore": 0.9982363315696648,
"_rankingScoreDetails": {
"prep_time_minutes:asc": { "order": 0, "value": 20.0 },
"words": {
"order": 1,
"matchingWords": 2,
"maxMatchingWords": 2,
"score": 1.0
},
"typo": { "order": 2, "typoCount": 0, "maxTypoCount": 2, "score": 1.0 },
"proximity": { "order": 3, "score": 1.0 },
"attribute": {
"order": 4,
"attributeRankingOrderScore": 1.0,
"queryWordDistanceScore": 0.9047619047619048,
"score": 0.9682539682539683
},
"exactness": {
"order": 5,
"matchType": "noExactMatch",
"matchingWords": 2,
"maxMatchingWords": 2,
"score": 0.3333333333333333
}
}
}
```
### Reading the score details - what changed?
Look at the `order` values. Sort is now `order: 0` so it runs first.
| Step | Rule | Doc 1 (Easy Chicken Curry) | Doc 2 (Chicken Stew…) | Outcome |
| ---- | ---------------------------------- | -------------------------- | --------------------- | --------------------- |
| 0 | **Sort** (`prep_time_minutes:asc`) | value: `20` | value: `15` | ✅ **Doc 2 wins here** |
Sort immediately separated the documents: 15 min beats 20 min. `:asc` will sort lowest to highest. Words, Typo, Proximity, and the rest never got a say.
Notice something important: **Doc 1 still has a higher `_rankingScore` (0.998 vs 0.914)** but it ranks second. This is exactly what we described in [Ordering ranking rules](/guides/relevancy/ordering_ranking_rules): ranking score only measures text relevance. Sort affects the final order but doesn't change the ranking score. If you only looked at `_rankingScore`, you'd think Doc 1 should be first. The score details tell you the real story.
***
## Side by side
In both scenarios the user searches for `"chicken curry"` and sorts by `prep_time_minutes:asc` (quickest first). The only change is the ranking rule placement.
| | Scenario A (Sort is placed after Group 1 ranking rules) | Scenario B (Sort is placed first) |
| ------------------------- | ------------------------------------------------------- | -------------------------------------------------------------- |
| **#1 result** | Easy Chicken Curry (20 min) | Chicken Stew with Curry… (15 min) |
| **Decided by** | Proximity (order 2) | Sort (order 0) |
| **Doc 1 `_rankingScore`** | 0.998 | 0.998 (same — sort doesn't affect it) |
| **Doc 2 `_rankingScore`** | 0.914                                                   | 0.914 (same — sort doesn't affect it)                           |
| **Best for** | Users who want the most relevant recipe | Users who want the quickest recipe regardless of match quality |
***
## The takeaway
Moving Sort **one position** flipped the results. The ranking score details let you see exactly why:
* **Look at the `order` values** to understand the sequence rules were applied
* **Find where scores first diverge** — that's the rule that decided the final order
* **Remember that Sort shows a `value`, not a `score`.** It doesn't contribute to `_rankingScore`, which is why a higher-scored document can rank lower when Sort takes priority
Start with Sort after Group 1 rules (Scenario A) and adjust from there based on what your users expect.
# Ordering ranking rules
Source: https://www.meilisearch.com/docs/guides/relevancy/ordering_ranking_rules
Learn how Meilisearch orders search results and how to customize ranking rule order for your use case.
# Ranking rules: getting the order right for you
**When to read this guide**
This guide is for you if you want to understand how Meilisearch orders your search results and how to customize that behavior for your specific use case.
You might be here because you've noticed a document with a lower ranking score appearing above one with a higher score, or you're curious about what happens when you adjust the ranking rule sequence. Maybe you're proactively exploring how to fine-tune results before going live, or you want to prioritize certain types of content over others.
**What you'll learn:** This guide explains how Meilisearch's ranking rules system works behind the scenes - how ranking scores relate to final result order, and how to adjust rankings to match your needs. You'll get practical tips and recommendations for common scenarios, so you can confidently tune your search results.
## **How Meilisearch ranks results**
### **Ranking score vs. final order**
**Ranking score only measures text match quality.** It doesn't include Sort or Custom ranking rules.
Ever noticed a document with a lower ranking score appearing higher in results? That's normal. The ranking score captures text relevance, but your final result order also includes Sort and Custom ranking rules, which don't take textual relevance into account and therefore don't contribute to the ranking score. Understanding how these two mechanisms work together is key to tweaking results effectively.
### **How ranking rules work**
Meilisearch applies ranking rules sequentially. Each rule sorts documents into buckets and passes them to the next rule. This is why rule order matters - earlier rules take priority and later rules serve only as tie-breakers.
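This bucket-by-bucket tie-breaking can be sketched as a chained comparator. The snippet below is a plain-JavaScript illustration of the principle, not Meilisearch code, and the per-rule scores are made up:

```javascript
// Sketch of sequential ranking: each rule only breaks ties left by the
// previous ones, so earlier rules take priority.
function rankWith(rules) {
  return (a, b) => {
    for (const rule of rules) {
      const diff = rule(b) - rule(a) // descending: higher score wins
      if (diff !== 0) return diff
    }
    return 0 // full tie across every rule
  }
}

// Toy documents with precomputed per-rule scores
const docs = [
  { id: 'A', words: 1.0, typo: 0.75, proximity: 1.0 },
  { id: 'B', words: 1.0, typo: 1.0, proximity: 0.5 },
]

// words ties, so typo decides; proximity never gets a say
const ordered = docs.sort(rankWith([(d) => d.words, (d) => d.typo, (d) => d.proximity]))
console.log(ordered.map((d) => d.id)) // [ 'B', 'A' ]
```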
### Types of ranking rules
**Group 1 - Broad matching: Words, Typo, Proximity (included in ranking score)**
This covers things like:
* **Words**: How many of your search terms appear in the document (more matches = higher ranking)
* **Typo**: Whether these matches are the exact words or matches that are included through typo-tolerance (exact matches rank higher)
* **Proximity**: How close together your search terms appear in the document (closer = more relevant)
**These three rules cast a wide net and return lots of results.** That's good—you want to start broad and then narrow down, not the other way around. If you start too narrow you can lose relevancy easily.
**Group 2 - Fine-tuning: Exactness, Attribute Rank, Word Position (included in ranking score)**
This covers things like:
* **Exactness**: Did the document match your whole search term or just pieces of it? Whole matches rank higher, especially when an entire field matches exactly or starts with your query. Documents containing extra content beyond the search term are ranked lower.
* **Attribute Rank**: Matches in your most important fields rank higher. You set field priority in `searchableAttributes`, with fields at the top of the list treated as the most important.
* **Word Position**: Matches near the beginning of a field rank higher.

Both attribute rank and word position are evaluated by the built-in `attribute` ranking rule, which is how they appear in your ranking rules array and in the ranking score details.
**These are your fine-tuning filters.** They return fewer, more precise results. Use these after Group 1 rules to refine your large result set into something more precise.
If you want to dive deeper into the [built in ranking rules](https://www.meilisearch.com/docs/learn/relevancy/ranking_rules) and [custom ranking rules](https://www.meilisearch.com/docs/learn/relevancy/custom_ranking_rules) we have more information available in our documentation.
**And finally... Sort & Custom ranking rules (NOT included in ranking score)**
It's important to note that `sort` and `asc`/`desc` custom ranking rules are not reflected in the ranking score. However, whether and where they are set can affect your results. Here's what you need to know.
**Sort**
The Sort rule only activates when you use the `sort` parameter in your search query. **Without that parameter, it has no effect.**
When you do use `sort`, whatever you specify as a sort gets swapped into the Sort position in your ranking rules:
Search query:
```json theme={null}
{
  "q": "hello",
  "sort": [
    "price:asc",
    "author:desc"
  ]
}
```
Ranking rules:
```json theme={null}
[
  "words",
  "typo",
  "proximity",
  "attribute",
  "sort", // "price:asc", "author:desc" gets swapped in here
  "exactness",
  "release_date:asc",
  "movie_ranking:desc"
]
```
**Key behaviour: Sort ignores text relevance**
Sort and Custom ranking rules don't consider how well documents match your search query - they simply order results alphabetically or numerically by your chosen field (price, date, etc.).
**Placement matters.** If you put Sort or Custom ranking rules at the top of your ranking rules, results will be ordered by that field instead of by text relevance. Apart from very specific use cases, such as price ordering, this usually creates a poor search experience where less relevant results appear first just because they have the right price or date.
## Our recommendations for ranking rule ordering
### Keep Group 1 rules first (Words, Typo, Proximity)
Start with `words` as your first rule as it's the foundation. Every other rule depends on word matches existing, so it makes sense to establish those first. Follow it with `typo` and `proximity` to round out your broad matching.
These three rules cast a wide net and pass a large pool of relevant results through the ranking chain. Starting broad is important. If you begin too narrow, you risk losing relevant documents before the later rules get a chance to refine them.
### Place Sort strategically
We recommend putting Sort after your Group 1 rules and before your Group 2 rules (Attribute Rank, Word Position, Exactness). This way, Meilisearch finds relevant results first and then uses your sort field to order documents that have similar text relevance, giving you a balance of match quality and sorting.
If sorting matters more than text relevance for your use case - like an e-commerce price filter where users expect strict price ordering - move Sort higher. Just remember that Sort only activates when you include the `sort` parameter in your search query. Without it, the Sort rule has no effect.
One thing to watch: placing Sort too late means most results are already in their final position before Sort gets a chance to act. If your sort field isn't influencing results the way you expect, try moving it up one position at a time and testing until you find the right spot. For a practical look at how this works, see [How Do I Interpret Ranking Score Details?](/guides/relevancy/interpreting_ranking_scores) where we show the same search returning different results just by moving Sort one position.
### Use Custom ranking rules as tiebreakers
Place custom ranking rules at the end of your sequence. They work best for adding business logic after text relevance has been established — things like popularity, recency, or user ratings. For example, if two recipes match equally well for "chicken curry," a custom `popularity:desc` rule can push the one with more saves to the top.
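Putting these recommendations together, the full sequence looks like the sketch below. The first six names are Meilisearch's built-in rule identifiers; `popularity:desc` stands in for a hypothetical custom rule of your own:

```ruby
# Recommended ranking rule order, expressed as the array you would send
# to the index settings. `popularity:desc` is a hypothetical custom rule.
ranking_rules = [
  "words", "typo", "proximity", # Group 1: broad matching first
  "sort",                       # only active when the query includes a sort parameter
  "attribute", "exactness",     # Group 2: fine-grained refinement
  "popularity:desc"             # custom tiebreaker, last
]
```

Moving `"sort"` earlier or later in this array is the main lever described above; everything else can usually stay in the default order.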
### Going deeper
Each ranking rule has its own settings you can fine-tune beyond just ordering. For example, you can adjust which fields take priority in attribute ranking, or configure how aggressively typo tolerance matches similar words. If you want to dig into the specifics:
* [Built-in ranking rules](https://www.meilisearch.com/docs/learn/relevancy/ranking_rules#list-of-built-in-ranking-rules) — how each rule works and what it evaluates
* [Attribute ranking order](https://www.meilisearch.com/docs/learn/relevancy/attribute_ranking_order) — controlling which fields matter most with `attributeRank` and `wordPosition`
* [Typo tolerance settings](https://www.meilisearch.com/docs/learn/relevancy/typo_tolerance_settings) — adjusting how flexible matching behaves
**Want to see these rules in action?** In our next guide, [How Do I Interpret Ranking Score Details?](/guides/relevancy/interpreting_ranking_scores), we walk through a real example showing exactly how Meilisearch evaluates each rule — and how moving Sort one position can flip your results.
***
# Ruby on Rails quick start
Source: https://www.meilisearch.com/docs/guides/ruby_on_rails_quick_start
Integrate Meilisearch into your Ruby on Rails app.
## 1. Create a Meilisearch project
[Create a project](https://cloud.meilisearch.com) in the Meilisearch Cloud dashboard. Check out our [getting started guide](/learn/getting_started/cloud_quick_start) for step-by-step instructions.
If you prefer to use the self-hosted version of Meilisearch, you can follow the [quick start](https://www.meilisearch.com/docs/learn/self_hosted/getting_started_with_self_hosted_meilisearch) tutorial.
## 2. Create a Rails app
Ensure your environment uses at least Ruby 2.7.0 and Rails 6.1.
```bash theme={null}
rails new blog
```
## 3. Install the meilisearch-rails gem
Navigate to your Rails app and install the `meilisearch-rails` gem.
```bash theme={null}
bundle add meilisearch-rails
```
## 4. Add your Meilisearch credentials
Run the following command to create a `config/initializers/meilisearch.rb` file.
```bash theme={null}
bin/rails meilisearch:install
```
Then add your Meilisearch URL and [Default Admin API Key](/learn/security/basic_security#obtaining-api-keys). On Meilisearch Cloud, you can find your credentials in your project settings.
```Ruby theme={null}
MeiliSearch::Rails.configuration = {
  meilisearch_url: '',
  meilisearch_api_key: ''
}
```
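Hard-coding credentials works for local experiments, but in shared or deployed environments you will typically read them from the environment instead; a sketch, assuming `MEILISEARCH_URL` and `MEILISEARCH_API_KEY` are set in your deployment:

```ruby
# config/initializers/meilisearch.rb
# Assumes MEILISEARCH_URL and MEILISEARCH_API_KEY are exported in the
# environment; falls back to a local instance for development.
MeiliSearch::Rails.configuration = {
  meilisearch_url: ENV.fetch('MEILISEARCH_URL', 'http://localhost:7700'),
  meilisearch_api_key: ENV.fetch('MEILISEARCH_API_KEY', '')
}
```

This keeps the admin key out of version control.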
## 5. Generate the model and run the database migration
Create an example `Article` model and generate the migration files.
```bash theme={null}
bin/rails generate model Article title:string body:text
bin/rails db:migrate
```
## 6. Index your model into Meilisearch
Include the `MeiliSearch::Rails` module and the `meilisearch` block.
```Ruby theme={null}
class Article < ApplicationRecord
  include MeiliSearch::Rails

  meilisearch do
    # index settings
    # all attributes will be sent to Meilisearch if block is left empty
  end
end
```
This code creates an `Article` index and adds search capabilities to your `Article` model.
Once configured, `meilisearch-rails` automatically syncs your table data with your Meilisearch instance.
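If you want to control what gets indexed instead of sending every attribute, the block accepts index settings; a minimal sketch using DSL methods from the meilisearch-rails README (the attribute choices here are illustrative):

```ruby
class Article < ApplicationRecord
  include MeiliSearch::Rails

  meilisearch do
    searchable_attributes [:title, :body] # fields Meilisearch matches queries against
    sortable_attributes [:created_at]     # allows sorting results by creation date
  end
end
```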
## 7. Create new records in the database
Use the Rails console to create new entries in the database.
```bash theme={null}
bin/rails console
```
```Ruby theme={null}
# Use a loop to create and save 5 unique articles with predefined titles and bodies
titles = ["Welcome to Rails", "Exploring Rails", "Advanced Rails", "Rails Tips", "Rails in Production"]
bodies = [
  "This is your first step into Ruby on Rails.",
  "Dive deeper into the Rails framework.",
  "Explore advanced features of Rails.",
  "Quick tips for Rails developers.",
  "Managing Rails applications in production environments."
]

titles.each_with_index do |title, index|
  article = Article.new(title: title, body: bodies[index])
  article.save # Saves the entry to the database
end
```
## 8. Start searching
### Backend search
The backend search returns ORM-compliant objects reloaded from your database.
```Ruby theme={null}
# Meilisearch is typo-tolerant:
hits = Article.search('deepre')
hits.first
```
We strongly recommend using the frontend search to enjoy the swift and responsive search-as-you-type experience.
### Frontend search
For testing purposes, you can explore the records using our built-in [search preview](/learn/getting_started/search_preview).
We also provide resources to help you quickly build your own [frontend interface](/guides/front_end/front_end_integration).
## Next steps
When you're ready to use your own data, make sure to configure your [index settings](/reference/api/settings/list-all-settings) first to follow [best practices](/learn/indexing/indexing_best_practices). For a full configuration example, see the [meilisearch-rails gem README](https://github.com/meilisearch/meilisearch-rails?tab=readme-ov-file#%EF%B8%8F-settings).
# Running Meilisearch in production
Source: https://www.meilisearch.com/docs/guides/running_production
Deploy Meilisearch in a Digital Ocean droplet. Covers installation, server configuration, and securing your instance.
This tutorial will guide you through setting up a production-ready Meilisearch instance. These instructions use a DigitalOcean droplet running Debian, but should be compatible with any hosting service running a Linux distro.
[Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=running-production-oss) is the recommended way to run Meilisearch in production environments.
## Requirements
* A DigitalOcean droplet running Debian 12
* An SSH key pair to connect to that machine
DigitalOcean has extensive documentation on [how to use SSH to connect to a droplet](https://www.digitalocean.com/docs/droplets/how-to/connect-with-ssh/).
## Step 1: Install Meilisearch
Log into your server via SSH, update the list of available packages, and install `curl`:
```sh theme={null}
apt update
apt install curl -y
```
Using the latest version of a package is good security practice, especially in production environments.
Next, use `curl` to download and run the Meilisearch command-line installer:
```sh theme={null}
# Install Meilisearch latest version from the script
curl -L https://install.meilisearch.com | sh
```
The Meilisearch installer is a set of scripts that ensure you get the correct binary for your system.
Next, you need to make the binary accessible from anywhere in your system. Move the binary file into `/usr/local/bin`:
```sh theme={null}
mv ./meilisearch /usr/local/bin/
```
Meilisearch is now installed in your system, but it is not publicly accessible.
## Step 2: Create system user
Running applications as root exposes you to unnecessary security risks. To prevent that, create a dedicated user for Meilisearch:
```sh theme={null}
useradd -d /var/lib/meilisearch -s /bin/false -m -r meilisearch
```
Then give the new user ownership of the Meilisearch binary:
```sh theme={null}
chown meilisearch:meilisearch /usr/local/bin/meilisearch
```
## Step 3: Create a configuration file
After installing Meilisearch and taking the first step towards keeping your data safe, you need to set up a basic configuration file.
First, create the directories where Meilisearch will store its data:
```bash theme={null}
mkdir /var/lib/meilisearch/data /var/lib/meilisearch/dumps /var/lib/meilisearch/snapshots
chown -R meilisearch:meilisearch /var/lib/meilisearch
chmod 750 /var/lib/meilisearch
```
In this tutorial, you're creating the directories in your droplet's local disk. If you are using additional block storage, create these directories there.
Next, download the default configuration to `/etc`:
```bash theme={null}
curl https://raw.githubusercontent.com/meilisearch/meilisearch/latest/config.toml > /etc/meilisearch.toml
```
Finally, update the following lines in the `meilisearch.toml` file so Meilisearch uses the directories you created earlier to store its data, replacing `MASTER_KEY` with a string of at least 16 bytes:
```ini theme={null}
env = "production"
master_key = "MASTER_KEY"
db_path = "/var/lib/meilisearch/data"
dump_dir = "/var/lib/meilisearch/dumps"
snapshot_dir = "/var/lib/meilisearch/snapshots"
```
Remember to choose a [safe master key](/learn/security/basic_security#creating-the-master-key-in-a-self-hosted-instance) and avoid exposing it in publicly accessible locations.
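Any cryptographic source of randomness will produce a suitable key; a sketch in Ruby, to match the Rails guide above (on the droplet itself, `openssl rand -hex 16` produces an equivalent value):

```ruby
require 'securerandom'

# 16 random bytes, hex-encoded to a 32-character string,
# suitable as a Meilisearch master key.
master_key = SecureRandom.hex(16)
puts master_key
```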
You have now configured your Meilisearch instance.
## Step 4: Run Meilisearch as a service
In Linux environments, a service is a process that can be launched when the operating system boots and keeps running in the background. If the process stops for any reason, systemd restarts it according to the service's restart policy, helping reduce downtime.
### 4.1. Create a service file
Service files are text files that tell your operating system how to run your program.
Run this command to create a service file in `/etc/systemd/system`:
```bash theme={null}
cat << EOF > /etc/systemd/system/meilisearch.service
[Unit]
Description=Meilisearch
After=systemd-user-sessions.service
[Service]
Type=simple
WorkingDirectory=/var/lib/meilisearch
ExecStart=/usr/local/bin/meilisearch --config-file-path /etc/meilisearch.toml
User=meilisearch
Group=meilisearch
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
```
### 4.2. Enable and start service
With your service file now ready to go, activate the service using `systemctl`:
```bash theme={null}
systemctl enable meilisearch
systemctl start meilisearch
```
With `systemctl enable`, you're telling the operating system you want it to run at every boot. `systemctl start` then immediately starts the Meilisearch service.
Ensure everything is working by checking the service status:
```sh theme={null}
systemctl status meilisearch
```
You should see a message confirming your service is running:
```sh theme={null}
● meilisearch.service - Meilisearch
     Loaded: loaded (/etc/systemd/system/meilisearch.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2023-04-10 14:27:49 UTC; 1min 8s ago
   Main PID: 14960 (meilisearch)
```
## Step 5: Secure and finish your setup
At this point, Meilisearch is installed and running. It is also protected from crashes and will survive system restarts.
The next step is to make your instance publicly accessible.
If all the requests you send to Meilisearch are done by another application living in the same machine, you can safely skip this section.
### 5.1. Creating a reverse proxy with Nginx
A [reverse proxy](https://www.keycdn.com/support/nginx-reverse-proxy) is an application that will handle every communication between the outside world and your application. In this tutorial, you will use [Nginx](https://www.nginx.com/) as your reverse proxy to receive external HTTP requests and redirect them to Meilisearch.
First, install Nginx on your machine:
```bash theme={null}
apt-get install nginx -y
```
Next, delete the default configuration file:
```bash theme={null}
rm -f /etc/nginx/sites-enabled/default
```
Nginx comes with a set of default settings, such as its default HTTP port, that might conflict with Meilisearch.
Create a new configuration file specifying the reverse proxy settings:
```sh theme={null}
cat << EOF > /etc/nginx/sites-enabled/meilisearch
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://localhost:7700;
    }
}
EOF
```
Finally, enable the Nginx service:
```bash theme={null}
systemctl daemon-reload
systemctl enable nginx
systemctl restart nginx
```
Your Meilisearch instance is now publicly available.
### 5.2. Enable HTTPS
The only remaining problem is that Meilisearch processes requests via HTTP without any additional security. This is a major security flaw that could result in an attacker accessing your data.
This tutorial assumes you have a registered domain name, and you have correctly configured its DNS's `A record` to point to your DigitalOcean droplet's IP address. Consult the [DigitalOcean DNS documentation](https://docs.digitalocean.com/products/networking/dns/getting-started/dns-registrars/) for more information.
Use [certbot](https://certbot.eff.org/) to enable HTTPS on your server.
First, install the required packages on your system:
```bash theme={null}
sudo apt install certbot python3-certbot-nginx -y
```
Next, run certbot:
```bash theme={null}
certbot --nginx
```
Enter your email address, agree to the Terms and Conditions, and choose your domain. When prompted if you want to automatically redirect HTTP traffic, choose option `2: Redirect`.
Certbot will finish configuring Nginx. Once it is done, all traffic to your server will use HTTPS and you will have finished securing your Meilisearch instance.
Your security certificate must be renewed every 90 days. Certbot schedules the renewal automatically. Run a test to verify this process is in place:
```bash theme={null}
sudo certbot renew --dry-run
```
If this command returns no errors, you have successfully enabled HTTPS in your Nginx server.
## Conclusion
You have followed the main steps to provide a safe and stable service. Your Meilisearch instance is now up and running in a safe and publicly accessible environment thanks to the combination of a reverse proxy, HTTPS, and Meilisearch's built-in security keys.
# Strapi v4 guide
Source: https://www.meilisearch.com/docs/guides/strapi_v4
Learn how to use Meilisearch with Strapi v4.
This tutorial will show you how to integrate Meilisearch with [Strapi](https://strapi.io/) to create a search-based web app. First, you will use Strapi’s quick start guide to create a restaurant collection, and then search this collection with Meilisearch.
## Prerequisites
* [Node.js](https://nodejs.org/): active LTS or maintenance LTS versions, currently Node.js >=18.0.0 \<=20.x.x
* npm >=6.0.0 (installed with Node.js)
* A running instance of Meilisearch (v >= 1.x). If you need help with this part, you can consult the [Installation section](/learn/self_hosted/install_meilisearch_locally).
## Create a back end using Strapi
### Set up the project
Create a directory called `my-app` where you will add the back-end and front-end parts of the application. Generate a back-end API using Strapi inside `my-app`:
```bash theme={null}
npx create-strapi-app@latest back --quickstart
```
This command creates a Strapi app in a new directory called `back` and opens the admin dashboard. Create an account to access it.
Once you have created your account, you should be redirected to Strapi's admin dashboard. This is where you will configure your back-end API.
### Build and manage your content
The next step is to create a new collection type. A collection is like a blueprint of your content—in this case, it will be a collection of restaurants. You will create another collection called "Category" to organize your restaurants later.
To follow along, complete "Part B: Build your data structure with the Content-type Builder" and steps 2 to 5 in "Part D: Add content to your Strapi Cloud project with the Content Manager" from Strapi's quick start guide. These will include:
* creating collection types
* creating new entries
* setting roles & permissions
* publishing the content
### Expand your database
After finishing those steps of Strapi's quick start guide, two new collections named Restaurant and Category should have appeared under `Content Manager > Collection Types`. If you click on `Restaurant`, you can see that there is only one entry. Add more by clicking the `+ Create new entry` button in the upper-right corner of the dashboard.
Add the following three restaurants, one by one. For each restaurant, you need to press `Save` and then `Publish`.
* Name: `The Butter Biscotte`
* Description: `All about butter, nothing about health.`
Next, add the `French food` category in the bottom-right corner of the page.
* Name: `The Slimy Snail`
* Description: `Gastronomy is made of garlic and butter.`
* Category: `French food`
* Name: `The Smell of Blue`
* Description: `Blue Cheese is not expired, it is how you eat it. With a bit of butter and a lot of happiness.`
* Category: `French food`
Your Strapi back-end is now up and running. Strapi automatically creates a REST API for your Restaurants collection. Check Strapi's documentation for all available [API endpoints](https://strapi.io/documentation/developer-docs/latest/developer-resources/content-api/content-api.html#api-endpoints).
Now, it’s time to connect Strapi and Meilisearch and start searching.
## Connect Strapi and Meilisearch
To add the Meilisearch plugin to Strapi, you need to first quit the Strapi app. Go to the terminal window running Strapi and press `Ctrl+C` to kill the process.
Next, install the plugin in the `back` directory.
```bash theme={null}
npm install strapi-plugin-meilisearch
```
After the installation, rebuild the Strapi app and start it again in development mode, which makes configuration easier.
```bash theme={null}
npm run build
npm run develop
```
At this point, your Strapi app should be running once again on the default address: [http://localhost:1337/admin](http://localhost:1337/admin). Open it in your browser. You should see an admin log-in page. Enter the credentials you used to create your account.
Once connected, you should see the new `meilisearch` plugin on the left side of the screen.
Add your Meilisearch credentials on the Settings tab of the `meilisearch` plugin page.
Now it's time to add your Strapi collection to Meilisearch. In the `Collections` tab on the `meilisearch` plugin page, you should see the `restaurant` and `category` content-types.
When you click the checkbox next to `restaurant`, the content-type is automatically indexed in Meilisearch.
The word “Hooked” appears when you click the `restaurant` checkbox in the `Collections` tab. This means that each time you add, update, or delete an entry in your restaurant content-type, Meilisearch is automatically updated.
Once the indexing finishes, your restaurants are in Meilisearch. Access the [search preview](/learn/getting_started/search_preview) to confirm everything is working correctly by searching for “butter”.
Your Strapi entries are sent to Meilisearch as is. You can modify the data before sending it to Meilisearch, for instance by removing a field. Check out all the customization options on the [strapi-plugin-meilisearch page](https://github.com/meilisearch/strapi-plugin-meilisearch/#-customization).
## What's next
This tutorial showed you how to add your Strapi collections to Meilisearch.
In most real-life scenarios, you'll typically build a custom search interface and fetch results using Meilisearch's API. To learn how to quickly build a front-end interface of your own, check out the [front-end integration guide](/guides/front_end/front_end_integration).
# Integrate Meilisearch Cloud with Vercel
Source: https://www.meilisearch.com/docs/guides/vercel
Link Meilisearch Cloud to a Vercel Project.
In this guide you will learn how to link a [Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=vercel-integration) instance to your Vercel project.
## Introducing our tools
### What is Vercel?
[Vercel](https://vercel.com/) is a cloud platform for building and deploying web applications. It works out of the box with most popular web development tools.
### What is Meilisearch Cloud?
[Meilisearch Cloud](https://www.meilisearch.com/cloud?utm_campaign=oss\&utm_source=docs\&utm_medium=vercel-integration) offers a managed search service that is scalable, reliable, and designed to meet the needs of all companies.
## Integrate Meilisearch into your Vercel project
### Create and deploy a Vercel project
From your Vercel dashboard, create a new project. You can create a project from a template, or import a Git repository.
Select your project, then click on **Deploy**. Once deployment is complete, go back to your project’s dashboard.
### Add the Meilisearch integration
Go to the project settings tab and click on **Integrations** on the sidebar menu to the left of your screen.
Search for the [Meilisearch integration](https://vercel.com/integrations/meilisearch-cloud) in the search bar. Click on the **Add integration** button.
Select the Vercel account or team and the project to which you want to add the integration. You may add the Meilisearch integration to one or more projects in this menu.
Click on **Continue**. Vercel will display a list with the permissions the integration needs to work properly. Review it, then click on **Add Integration**.
### Set up Meilisearch Cloud
Vercel will redirect you to the Meilisearch Cloud page. Log in or create an account. New accounts enjoy a 14-day free trial period.
You can choose an existing project or create a new one. To create a new project, complete the form with the project name and region.
Once you click on **Create project**, you should see the following message: “Your Meilisearch + Vercel integration is one click away from being completed.” Click "Finish the Vercel integration setup". Meilisearch will then redirect you back to the Vercel integration page.
### Understand and use Meilisearch API keys
Meilisearch creates [four default API keys](/learn/security/basic_security#obtaining-api-keys): `Default Search API Key`, `Default Admin API Key`, `Default Read-Only Admin API Key`, and `Default Chat API Key`.
#### Admin API key
Use the `Default Admin API Key` to create or update documents and indexes, and to change index settings. Be careful with the admin key and avoid exposing it in public environments.
#### Search API key
Use the `Default Search API Key` to access the [search route](/reference/api/search/search-with-post). This is the one you want to use in your front end.
The Search and Admin API keys are automatically added to Vercel along with the Meilisearch URL. For more information on the other default keys, consult the [security documentation](/learn/security/basic_security#obtaining-api-keys).
The master key, which hasn’t been added to Vercel, grants users full control over an instance. You can find it in your project’s overview on your [Meilisearch Cloud dashboard](https://cloud.meilisearch.com/projects/?utm_campaign=oss\&utm_source=docs\&utm_medium=vercel-integration). Read more about [Meilisearch security](https://www.meilisearch.com/docs/learn/security/master_api_keys).
### Review your project settings
Go back to your project settings and check the new Meilisearch environment variables:
* `MEILISEARCH_ADMIN_KEY`
* `MEILISEARCH_URL`
* `MEILISEARCH_SEARCH_KEY`
When using [Next.js](https://nextjs.org/), ensure you prefix your browser-facing environment variables with `NEXT_PUBLIC_`. This makes them available to the browser side of your application.
## Take advantage of the Meilisearch Cloud dashboard
Use the [Meilisearch Cloud dashboard](https://cloud.meilisearch.com/projects/?utm_campaign=oss\&utm_source=docs\&utm_medium=vercel-integration) to index documents and manage your project settings.
## Resources and next steps
Check out the [quick start guide](/learn/self_hosted/getting_started_with_self_hosted_meilisearch#add-documents) for a short introduction on how to use Meilisearch. We also provide many [SDKs and tools](/learn/resources/sdks), so you can use Meilisearch in your favorite language or framework.
You are now ready to [start searching](/reference/api/search/search-with-post)!
# Create dump
Source: https://www.meilisearch.com/docs/reference/api/backups/create-dump
assets/open-api/meilisearch-openapi-mintlify.json post /dumps
Trigger a dump creation process. When complete, a dump file is written to the [dump directory](https://www.meilisearch.com/docs/learn/self_hosted/configure_meilisearch_at_launch#dump-directory). The directory is created if it does not exist.
# Create snapshot
Source: https://www.meilisearch.com/docs/reference/api/backups/create-snapshot
assets/open-api/meilisearch-openapi-mintlify.json post /snapshots
Trigger a snapshot creation process. When complete, a snapshot file is written to the snapshot directory. The directory is created if it does not exist.
# Configure experimental features
Source: https://www.meilisearch.com/docs/reference/api/experimental-features/configure-experimental-features
assets/open-api/meilisearch-openapi-mintlify.json patch /experimental-features
Enable or disable experimental features at runtime.
# Configure network topology
Source: https://www.meilisearch.com/docs/reference/api/experimental-features/configure-network-topology
assets/open-api/meilisearch-openapi-mintlify.json patch /network
Add or remove remote nodes from the network. Changes apply to the current instance’s view of the cluster.
# Get network topology
Source: https://www.meilisearch.com/docs/reference/api/experimental-features/get-network-topology
assets/open-api/meilisearch-openapi-mintlify.json get /network
Return the list of Meilisearch instances currently known to this node (self and remotes).
# List experimental features
Source: https://www.meilisearch.com/docs/reference/api/experimental-features/list-experimental-features
assets/open-api/meilisearch-openapi-mintlify.json get /experimental-features
Return all experimental features that can be toggled via this API, and whether each one is currently enabled or disabled.
# Network control
Source: https://www.meilisearch.com/docs/reference/api/experimental-features/network-control
assets/open-api/meilisearch-openapi-mintlify.json post /network/control
Send messages to control the progress of a network topology change task.
The route is mostly used internally when sending a PATCH to the network, but is accessible for manual control as well.
# Export to a remote Meilisearch
Source: https://www.meilisearch.com/docs/reference/api/export/export-to-a-remote-meilisearch
assets/open-api/meilisearch-openapi-mintlify.json post /export
Trigger an export that sends documents and settings from this instance to a remote Meilisearch server. Configure the remote URL and optional API key in the request body.
# Get health
Source: https://www.meilisearch.com/docs/reference/api/health/get-health
assets/open-api/meilisearch-openapi-mintlify.json get /health
The health check endpoint enables you to periodically test the health of your Meilisearch instance. Returns a simple status indicating that the server is available.
# Compact index
Source: https://www.meilisearch.com/docs/reference/api/indexes/compact-index
assets/open-api/meilisearch-openapi-mintlify.json post /indexes/{index_uid}/compact
Trigger a compaction process on the specified index.
Compaction reorganizes the index database to reclaim space and improve read performance.
# Get stats of index
Source: https://www.meilisearch.com/docs/reference/api/indexes/get-stats-of-index
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/stats
Return statistics for a single index: document count, database size, indexing status, and field distribution.
# Retrieve logs
Source: https://www.meilisearch.com/docs/reference/api/logs/retrieve-logs
assets/open-api/meilisearch-openapi-mintlify.json post /logs/stream
Stream logs over HTTP. The format of the logs depends on the configuration specified in the payload. The logs are sent as multi-part, and the stream never stops, so ensure your client can handle a long-lived connection. To stop receiving logs, call the `DELETE /logs/stream` route.
Only one client can listen at a time. An error is returned if you call this route while it is already in use by another client.
# Stop retrieving logs
Source: https://www.meilisearch.com/docs/reference/api/logs/stop-retrieving-logs
assets/open-api/meilisearch-openapi-mintlify.json delete /logs/stream
Call this route to make the engine stop sending logs to the client that opened the `POST /logs/stream` connection.
# Update target of the console logs
Source: https://www.meilisearch.com/docs/reference/api/logs/update-target-of-the-console-logs
assets/open-api/meilisearch-openapi-mintlify.json post /logs/stderr
Configure at runtime the level of the console logs written to stderr (e.g. debug, info, warn, error).
# Get typoTolerance
Source: https://www.meilisearch.com/docs/reference/api/settings/get-typotolerance
assets/open-api/meilisearch-openapi-mintlify.json get /indexes/{index_uid}/settings/typo-tolerance
Returns the current value of the `typoTolerance` setting for the index.
# Reset synonyms
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-synonyms
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/synonyms
Resets the `synonyms` setting to its default value.
# Reset typoTolerance
Source: https://www.meilisearch.com/docs/reference/api/settings/reset-typotolerance
assets/open-api/meilisearch-openapi-mintlify.json delete /indexes/{index_uid}/settings/typo-tolerance
Resets the `typoTolerance` setting to its default value.
# Update synonyms
Source: https://www.meilisearch.com/docs/reference/api/settings/update-synonyms
assets/open-api/meilisearch-openapi-mintlify.json put /indexes/{index_uid}/settings/synonyms
Updates the `synonyms` setting for the index. Send the new value in the request body; send null to reset to default.
# Update typoTolerance
Source: https://www.meilisearch.com/docs/reference/api/settings/update-typotolerance
assets/open-api/meilisearch-openapi-mintlify.json patch /indexes/{index_uid}/settings/typo-tolerance
Updates the `typoTolerance` setting for the index. Send the new value in the request body; send null to reset to default.
# Get Prometheus metrics
Source: https://www.meilisearch.com/docs/reference/api/stats/get-prometheus-metrics
assets/open-api/meilisearch-openapi-mintlify.json get /metrics
Return metrics for the engine in Prometheus format. This is an [experimental feature](https://www.meilisearch.com/docs/learn/experimental/overview) and must be enabled before use.
# Get stats of all indexes
Source: https://www.meilisearch.com/docs/reference/api/stats/get-stats-of-all-indexes
assets/open-api/meilisearch-openapi-mintlify.json get /stats
Return statistics for the Meilisearch instance and for each index. Includes database size, last update time, document counts, and indexing status per index.
# Get version
Source: https://www.meilisearch.com/docs/reference/api/version/get-version
assets/open-api/meilisearch-openapi-mintlify.json get /version
Return the current Meilisearch version, including the commit SHA and build date.
# Create webhook
Source: https://www.meilisearch.com/docs/reference/api/webhooks/create-webhook
assets/open-api/meilisearch-openapi-mintlify.json post /webhooks
Register a new webhook to receive task completion notifications. You can optionally set custom headers (e.g. for authentication) and configure the callback URL.
# Delete webhook
Source: https://www.meilisearch.com/docs/reference/api/webhooks/delete-webhook
assets/open-api/meilisearch-openapi-mintlify.json delete /webhooks/{uuid}
Permanently remove a webhook by its UUID. The webhook will no longer receive task notifications.
# Get webhook
Source: https://www.meilisearch.com/docs/reference/api/webhooks/get-webhook
assets/open-api/meilisearch-openapi-mintlify.json get /webhooks/{uuid}
Retrieve a single webhook by its UUID.
# List webhooks
Source: https://www.meilisearch.com/docs/reference/api/webhooks/list-webhooks
assets/open-api/meilisearch-openapi-mintlify.json get /webhooks
Return all webhooks registered on the instance. Each webhook is returned with its URL, optional headers, and UUID (the key value is never returned).
# Update webhook
Source: https://www.meilisearch.com/docs/reference/api/webhooks/update-webhook
assets/open-api/meilisearch-openapi-mintlify.json patch /webhooks/{uuid}
Update the URL or headers of an existing webhook identified by its UUID.
# Error codes
Source: https://www.meilisearch.com/docs/reference/errors/error_codes
Consult this page for an exhaustive list of errors you may encounter when using the Meilisearch API.
This page is an exhaustive list of Meilisearch API errors.
## `api_key_already_exists`
A key with this [`uid`](/reference/api/keys/get-api-key#response-uid) already exists.
## `api_key_not_found`
The requested API key could not be found.
## `bad_request`
The request is invalid, check the error message for more information.
## `batch_not_found`
The requested batch does not exist. Please ensure that you are using the correct [`uid`](/reference/api/async-task-management/list-batches).
## `database_size_limit_reached`
The requested database has reached its maximum size.
## `document_fields_limit_reached`
A document exceeds the [maximum limit of 65,535 fields](/learn/resources/known_limitations#maximum-number-of-attributes-per-document).
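A client-side guard for this limit might count a document's fields before sending it. The recursive traversal below is a sketch; that nested keys count toward the limit exactly this way is an assumption:

```python
MAX_FIELDS = 65_535  # maximum attributes per document

def count_fields(doc):
    """Count fields in a document, recursing into nested objects."""
    total = 0
    for value in doc.values():
        total += 1
        if isinstance(value, dict):
            total += count_fields(value)
    return total

doc = {"id": 1, "title": "Dune", "meta": {"year": 1965, "author": "Herbert"}}
```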
## `document_not_found`
The requested document can't be retrieved. Either it doesn't exist, or the database was left in an inconsistent state.
## `dump_process_failed`
An error occurred during the dump creation process. The task was aborted.
## `facet_search_disabled`
The [`/facet-search`](/reference/api/facet-search/search-in-facets) route has been queried while [the `facetSearch` index setting](/reference/api/settings/get-facetsearch) is set to `false`.
## `feature_not_enabled`
You have tried using an [experimental feature](/learn/resources/experimental_features_overview) without activating it.
## `immutable_api_key_actions`
The [`actions`](/reference/api/keys/list-api-keys) field of an API key cannot be modified.
## `immutable_api_key_created_at`
The [`createdAt`](/reference/api/keys/get-api-key#response-created-at) field of an API key cannot be modified.
## `immutable_api_key_expires_at`
The [`expiresAt`](/reference/api/keys/get-api-key#response-expiresat) field of an API key cannot be modified.
## `immutable_api_key_indexes`
The [`indexes`](/reference/api/keys/get-api-key#response-indexes) field of an API key cannot be modified.
## `immutable_api_key_key`
The [`key`](/reference/api/keys/get-api-key#response-key) field of an API key cannot be modified.
## `immutable_api_key_uid`
The [`uid`](/reference/api/keys/get-api-key#response-uid) field of an API key cannot be modified.
## `immutable_api_key_updated_at`
The [`updatedAt`](/reference/api/keys/get-api-key#response-updated-at) field of an API key cannot be modified.
## `immutable_index_uid`
The [`uid`](/reference/api/indexes/get-index) field of an index cannot be modified.
## `immutable_index_updated_at`
The [`updatedAt`](/reference/api/indexes/get-index) field of an index cannot be modified.
## `immutable_webhook`
You tried to modify a reserved [webhook](/reference/api/webhooks/list-webhooks). Reserved webhooks are configured by Meilisearch Cloud and have `isEditable` set to `false`. Webhooks created with an instance option are also immutable.
## `immutable_webhook_uuid`
You tried to manually set a webhook `uuid`. Meilisearch automatically generates the `uuid` for each webhook.
## `immutable_webhook_is_editable`
You tried to manually set a webhook's `isEditable` field. Meilisearch automatically sets `isEditable` for all webhooks. Only reserved webhooks have `isEditable` set to `false`.
## `index_already_exists`
An index with this [`uid`](/reference/api/indexes/get-index) already exists, check out our guide on [index creation](/learn/getting_started/indexes).
## `index_creation_failed`
An error occurred while trying to create an index, check out our guide on [index creation](/learn/getting_started/indexes).
## `index_not_found`
An index with this `uid` was not found, check out our guide on [index creation](/learn/getting_started/indexes).
## `index_primary_key_already_exists`
The requested index already has a primary key that [cannot be changed](/learn/getting_started/primary_key#changing-your-primary-key-with-the-update-index-endpoint).
## `index_primary_key_multiple_candidates_found`
[Primary key inference](/learn/getting_started/primary_key#meilisearch-guesses-your-primary-key) failed because the received documents contain multiple fields ending with `id`. Use the [update index endpoint](/reference/api/indexes/update-index) to manually set a primary key.
## `internal`
Meilisearch experienced an internal error. Check the error message, and [open an issue](https://github.com/meilisearch/meilisearch/issues/new?assignees=&labels=&template=bug_report&title=) if necessary.
## `invalid_api_key`
The requested resources are protected with an API key. The provided API key is invalid. Read more about it in our [security tutorial](/learn/security/basic_security).
## `invalid_api_key_actions`
The [`actions`](/reference/api/keys/list-api-keys) field for the provided API key resource is invalid. It should be an array of strings representing action names.
## `invalid_api_key_description`
The [`description`](/reference/api/keys/get-api-key#response-description) field for the provided API key resource is invalid. It should either be a string or set to `null`.
## `invalid_api_key_expires_at`
The [`expiresAt`](/reference/api/keys/get-api-key#response-expiresat) field for the provided API key resource is invalid. It should either show a future date or datetime in the [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format or be set to `null`.
## `invalid_api_key_indexes`
The [`indexes`](/reference/api/keys/get-api-key#response-indexes) field for the provided API key resource is invalid. It should be an array of strings representing index names.
## `invalid_api_key_limit`
The [`limit`](/reference/api/keys/list-api-keys) parameter is invalid. It should be an integer.
## `invalid_api_key_name`
The given [`name`](/reference/api/keys/get-api-key#response-name) is invalid. It should either be a string or set to `null`.
## `invalid_api_key_offset`
The [`offset`](/reference/api/keys/list-api-keys) parameter is invalid. It should be an integer.
## `invalid_api_key_uid`
The given [`uid`](/reference/api/keys/get-api-key#response-uid) is invalid. The `uid` must follow the [uuid v4](https://www.sohamkamani.com/uuid-versions-explained) format.
## `invalid_search_attributes_to_search_on`
The value passed to [`attributesToSearchOn`](/reference/api/search/search-with-post#body-attributes-to-search-on) is invalid. `attributesToSearchOn` accepts an array of strings indicating document attributes. Attributes given to `attributesToSearchOn` must be present in the [`searchableAttributes` list](/learn/relevancy/displayed_searchable_attributes#the-searchableattributes-list).
## `invalid_search_media`
The value passed to [`media`](/reference/api/search/search-with-post#body-media) is not a valid JSON object.
## `invalid_search_media_and_vector`
The search query contains non-`null` values for both [`media`](/reference/api/search/search-with-post#body-media) and [`vector`](/reference/api/search/search-with-post#body-vector). These two parameters are mutually exclusive, since `media` generates vector embeddings via the embedder configured in `hybrid`.
## `invalid_content_type`
The [Content-Type header](/reference/api/headers) is not supported by Meilisearch. Currently, Meilisearch only supports JSON, CSV, and NDJSON.
## `invalid_document_csv_delimiter`
The [`csvDelimiter`](/reference/api/documents/add-or-replace-documents) parameter is invalid. It should be a string containing [a single ASCII character](https://www.rfc-editor.org/rfc/rfc20).
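As a sketch, the delimiter can be validated before the request is sent:

```python
def is_valid_csv_delimiter(delim):
    """The csvDelimiter parameter must be a single ASCII character."""
    return isinstance(delim, str) and len(delim) == 1 and delim.isascii()
```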
## `invalid_document_id`
The provided [document identifier](/learn/getting_started/primary_key#document-id) does not meet the format requirements. A document identifier must be of type integer or string, composed only of alphanumeric characters (a-z A-Z 0-9), hyphens (-), and underscores (\_).
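The format rule can be checked up front; a sketch using a regular expression over the allowed character set:

```python
import re

DOC_ID = re.compile(r"^[a-zA-Z0-9_-]+$")

def is_valid_document_id(doc_id):
    """A document id must be an integer, or a non-empty string made
    only of alphanumerics, hyphens, and underscores."""
    if isinstance(doc_id, bool):
        return False  # bool is an int subclass; reject it explicitly
    if isinstance(doc_id, int):
        return True
    return isinstance(doc_id, str) and bool(DOC_ID.match(doc_id))
```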
## `invalid_document_fields`
The [`fields`](/reference/api/documents/list-documents-with-get) parameter is invalid. It should be a string.
## `invalid_document_filter`
This error occurs if:
* The [`filter`](/reference/api/documents/list-documents-with-get) parameter is invalid:
  * It should be a string, an array of strings, or an array of arrays of strings for the [get documents with POST endpoint](/reference/api/documents/list-documents-with-post)
  * It should be a string for the [get documents with GET endpoint](/reference/api/documents/list-documents-with-get)
* The attribute used for filtering is not defined in the [`filterableAttributes` list](/reference/api/settings/get-filterableattributes)
* The [filter expression](/learn/filtering_and_sorting/filter_expression_reference) has a missing or invalid operator. [Read more about our supported operators](/learn/filtering_and_sorting/filter_expression_reference)
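The accepted shapes differ by endpoint. The values below are a sketch; the attribute names are examples and would need to appear in the index's `filterableAttributes`:

```python
# GET endpoint: the filter must be a single string expression.
get_filter = "genre = horror AND year > 1990"

# POST endpoint: arrays are also accepted. Outer elements are combined
# with AND; strings inside an inner array are combined with OR.
post_filter = [["genre = horror", "genre = thriller"], "year > 1990"]
```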
## `invalid_document_limit`
The [`limit`](/reference/api/documents/list-documents-with-get) parameter is invalid. It should be an integer.
## `invalid_document_offset`
The [`offset`](/reference/api/documents/list-documents-with-get) parameter is invalid. It should be an integer.
## `invalid_document_sort`
This error occurs if:
* The syntax for the [`sort`](/reference/api/documents/list-documents-with-post) parameter is invalid
* The attribute used for sorting is not defined in the [`sortableAttributes`](/reference/api/settings/get-sortableattributes) list or the `sort` ranking rule is missing from the settings
* A reserved keyword like `_geo`, `_geoDistance`, `_geoRadius`, or `_geoBoundingBox` is used for sorting
## `invalid_document_geo_field`
The provided `_geo` field of one or more documents is invalid. Meilisearch expects `_geo` to be an object with two fields, `lat` and `lng`, each containing geographic coordinates expressed as a string or floating point number. Read more about `_geo` and how to troubleshoot it in [our dedicated guide](/learn/filtering_and_sorting/geosearch).
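A pre-indexing check for `_geo` might look like this sketch; treating extra keys as invalid is a strict reading of the rule above:

```python
def is_valid_geo(geo):
    """`_geo` must be an object with `lat` and `lng`, each a floating
    point number or a string that parses as one."""
    if not isinstance(geo, dict) or set(geo) != {"lat", "lng"}:
        return False
    for coord in geo.values():
        if isinstance(coord, bool) or not isinstance(coord, (int, float, str)):
            return False
        try:
            float(coord)
        except ValueError:
            return False
    return True
```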
## `invalid_document_geojson_field`
The `geojson` field in one or more documents is invalid or doesn't match the [GeoJSON specification](https://datatracker.ietf.org/doc/html/rfc7946).
## `invalid_export_url`
The export target instance URL is invalid or could not be reached.
## `invalid_export_api_key`
The supplied security key does not have the required permissions to access the target instance.
## `invalid_export_payload_size`
The provided payload size is invalid. The payload size must be a string indicating the maximum payload size in a human-readable format.
## `invalid_export_indexes_patterns`
The provided index pattern is invalid. The index pattern must be an alphanumeric string, optionally including a wildcard.
## `invalid_export_index_filter`
The provided index export filter is not a valid [filter expression](/learn/filtering_and_sorting/filter_expression_reference).
## `invalid_facet_search_facet_name`
The attribute used for the `facetName` field is either not a string or not defined in the [`filterableAttributes` list](/reference/api/settings/get-filterableattributes).
## `invalid_facet_search_facet_query`
The provided value for `facetQuery` is invalid. It should either be a string or `null`.
## `invalid_index_limit`
The [`limit`](/reference/api/indexes/list-indexes) parameter is invalid. It should be an integer.
## `invalid_index_offset`
The [`offset`](/reference/api/indexes/list-indexes) parameter is invalid. It should be an integer.
## `invalid_index_uid`
There is an error in the provided index format, check out our guide on [index creation](/learn/getting_started/indexes).
## `invalid_index_primary_key`
The [`primaryKey`](/reference/api/indexes/swap-indexes) field is invalid. It should either be a string or set to `null`.
## `invalid_multi_search_query_federated`
A multi-search query includes `federationOptions` but the top-level `federation` object is `null` or missing.
## `invalid_multi_search_query_pagination`
A multi-search query contains `page`, `hitsPerPage`, `limit`, or `offset`, but the top-level `federation` object is not `null`.
## `invalid_multi_search_query_position`
`federationOptions.queryPosition` is not a positive integer.
## `invalid_multi_search_weight`
A multi-search query contains a negative value for `federated.weight`.
## `invalid_multi_search_queries_ranking_rules`
Two or more queries in a federated multi-search request use incompatible ranking rules.
## `invalid_multi_search_facets`
`federation.facetsByIndex` contains a value that is not in the filterable attributes list.
## `invalid_multi_search_sort_facet_values_by`
`federation.mergeFacets.sortFacetValuesBy` is not a string or doesn't have one of the allowed values.
## `invalid_multi_search_query_facets`
A query in the queries array contains `facets` when federation is present and non-`null`.
## `invalid_multi_search_merge_facets`
`federation.mergeFacets` is not an object or contains unexpected fields.
## `invalid_multi_search_max_values_per_facet`
`federation.mergeFacets.maxValuesPerFacet` is not a positive integer.
## `invalid_multi_search_facet_order`
Two or more indexes have a different `faceting.sortFacetValuesBy` for the same requested facet.
## `invalid_multi_search_facets_by_index`
`facetsByIndex` is not an object or contains unknown fields.
## `invalid_multi_search_remote`
`federationOptions.remote` is not `network.self` and is not a key in `network.remotes`.
## `invalid_network_self`
The [network object](/reference/api/network/get-network) contains a `self` that is not a string or `null`.
## `invalid_network_remotes`
The [network object](/reference/api/network/get-network) contains a `remotes` that is not an object or `null`.
## `invalid_network_url`
One of the remotes in the [network object](/reference/api/network/get-network) contains a `url` that is not a string.
## `invalid_network_search_api_key`
One of the remotes in the [network object](/reference/api/network/get-network) contains a `searchApiKey` that is not a string or `null`.
## `invalid_search_attributes_to_crop`
The [`attributesToCrop`](/reference/api/search/search-with-post#body-attributes-to-crop) parameter is invalid. It should be an array of strings, a string, or set to `null`.
## `invalid_search_attributes_to_highlight`
The [`attributesToHighlight`](/reference/api/search/search-with-post#body-attributes-to-highlight) parameter is invalid. It should be an array of strings, a string, or set to `null`.
## `invalid_search_attributes_to_retrieve`
The [`attributesToRetrieve`](/reference/api/search/search-with-post#body-attributes-to-retrieve) parameter is invalid. It should be an array of strings, a string, or set to `null`.
## `invalid_search_crop_length`
The [`cropLength`](/reference/api/search/search-with-post#body-crop-length) parameter is invalid. It should be an integer.
## `invalid_search_crop_marker`
The [`cropMarker`](/reference/api/search/search-with-post#body-crop-marker) parameter is invalid. It should be a string or set to `null`.
## `invalid_search_embedder`
[`embedder`](/reference/api/search/search-with-post#body-hybrid) is invalid. It should be a string corresponding to the name of a configured embedder.
## `invalid_search_facets`
This error occurs if:
* The [`facets`](/reference/api/search/search-with-post#body-facets) parameter is invalid. It should be an array of strings, a string, or set to `null`
* The attribute used for faceting is not defined in the [`filterableAttributes` list](/reference/api/settings/get-filterableattributes)
## `invalid_search_filter`
This error occurs if:
* The syntax for the [`filter`](/reference/api/search/search-with-post#body-filter) parameter is invalid
* The attribute used for filtering is not defined in the [`filterableAttributes` list](/reference/api/settings/get-filterableattributes)
* A reserved keyword like `_geo`, `_geoDistance`, or `_geoPoint` is used as a filter
## `invalid_search_highlight_post_tag`
The [`highlightPostTag`](/reference/api/search/search-with-post#body-highlight-pre-tag) parameter is invalid. It should be a string.
## `invalid_search_highlight_pre_tag`
The [`highlightPreTag`](/reference/api/search/search-with-post#body-highlight-pre-tag) parameter is invalid. It should be a string.
## `invalid_search_hits_per_page`
The [`hitsPerPage`](/reference/api/search/search-with-post#body-hits-per-page) parameter is invalid. It should be an integer.
## `invalid_search_hybrid_query`
The [`hybrid`](/reference/api/search/search-with-post#body-hybrid) parameter is neither `null` nor an object, or it is an object with unknown keys.
## `invalid_search_limit`
The [`limit`](/reference/api/search/search-with-post#body-limit) parameter is invalid. It should be an integer.
## `invalid_search_locales`
The [`locales`](/reference/api/search/search-with-post#body-locales) parameter is invalid.
## `invalid_settings_embedder`
The [`embedders`](/reference/api/settings/get-embedders) index setting value is invalid.
## `invalid_settings_facet_search`
The [`facetSearch`](/reference/api/settings/get-facetsearch) index setting value is invalid.
## `invalid_settings_localized_attributes`
The [`localizedAttributes`](/reference/api/settings/get-localized-attributes) index setting value is invalid.
## `invalid_search_matching_strategy`
The [`matchingStrategy`](/reference/api/search/search-with-post#body-matching-strategy) parameter is invalid. It should either be set to `last` or `all`.
## `invalid_search_offset`
The [`offset`](/reference/api/search/search-with-post#body-offset) parameter is invalid. It should be an integer.
## `invalid_settings_prefix_search`
The [`prefixSearch`](/reference/api/settings/get-prefix-search) index setting value is invalid.
## `invalid_search_page`
The [`page`](/reference/api/search/search-with-post#body-page) parameter is invalid. It should be an integer.
## `invalid_search_q`
The [`q`](/reference/api/search/search-with-post#body-q) parameter is invalid. It should be a string or set to `null`.
## `invalid_search_ranking_score_threshold`
The [`rankingScoreThreshold`](/reference/api/search/search-with-post#body-show-ranking-score-threshold) in a search or multi-search request is not a number between `0.0` and `1.0`.
## `invalid_search_show_matches_position`
The [`showMatchesPosition`](/reference/api/search/search-with-post#body-show-matches-position) parameter is invalid. It should either be a boolean or set to `null`.
## `invalid_search_sort`
This error occurs if:
* The syntax for the [`sort`](/reference/api/search/search-with-post#body-sort) parameter is invalid
* The attribute used for sorting is not defined in the [`sortableAttributes`](/reference/api/settings/get-sortableattributes) list or the `sort` ranking rule is missing from the settings
* A reserved keyword like `_geo`, `_geoDistance`, `_geoRadius`, or `_geoBoundingBox` is used for sorting
## `invalid_settings_displayed_attributes`
The value of [displayed attributes](/learn/relevancy/displayed_searchable_attributes#displayed-fields) is invalid. It should be an empty array, an array of strings, or set to `null`.
## `invalid_settings_distinct_attribute`
The value of [distinct attributes](/learn/relevancy/distinct_attribute) is invalid. It should be a string or set to `null`.
## `invalid_settings_faceting_sort_facet_values_by`
The value provided for the [`sortFacetValuesBy`](/reference/api/settings/get-faceting) object is incorrect. The accepted values are `alpha` or `count`.
## `invalid_settings_faceting_max_values_per_facet`
The value for the [`maxValuesPerFacet`](/reference/api/settings/get-faceting) field is invalid. It should either be an integer or set to `null`.
## `invalid_settings_filterable_attributes`
The value of [filterable attributes](/reference/api/settings/get-filterableattributes) is invalid. It should be an empty array, an array of strings, or set to `null`.
## `invalid_settings_pagination`
The value for the [`maxTotalHits`](/reference/api/settings/update-pagination) field is invalid. It should either be an integer or set to `null`.
## `invalid_settings_ranking_rules`
This error occurs if:
* The [settings payload](/reference/api/settings/update-all-settings) has an invalid format
* A non-existent ranking rule is specified
* A custom ranking rule is malformed
* A reserved keyword like `_geo`, `_geoDistance`, `_geoRadius`, `_geoBoundingBox`, or `_geoPoint` is used as a custom ranking rule
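A client-side sanity check covering the last two failure modes could look like this sketch (the built-in rule names are Meilisearch's defaults; `release_date:desc` is an example custom rule):

```python
RESERVED = {"_geo", "_geoDistance", "_geoRadius", "_geoBoundingBox", "_geoPoint"}
BUILT_IN = {"words", "typo", "proximity", "attribute", "sort", "exactness"}

def check_ranking_rules(rules):
    """Reject reserved keywords and malformed custom rules
    (custom rules must look like `attribute:asc` or `attribute:desc`)."""
    for rule in rules:
        if rule in BUILT_IN:
            continue
        attr, _, direction = rule.rpartition(":")
        if not attr or attr in RESERVED or direction not in ("asc", "desc"):
            raise ValueError(f"invalid ranking rule: {rule!r}")

check_ranking_rules(["words", "typo", "proximity", "attribute",
                     "sort", "exactness", "release_date:desc"])
```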
## `invalid_settings_searchable_attributes`
The value of [searchable attributes](/reference/api/settings/get-searchableattributes) is invalid. It should be an empty array, an array of strings or set to `null`.
## `invalid_settings_search_cutoff_ms`
The specified value for [`searchCutoffMs`](/reference/api/settings/update-searchcutoffms) is invalid. It should be an integer indicating the cutoff in milliseconds.
## `invalid_settings_sortable_attributes`
The value of [sortable attributes](/reference/api/settings/get-sortableattributes) is invalid. It should be an empty array, an array of strings or set to `null`.
## `invalid_settings_stop_words`
The value of [stop words](/reference/api/settings/get-stopwords) is invalid. It should be an empty array, an array of strings or set to `null`.
## `invalid_settings_synonyms`
The value of the [synonyms](/reference/api/settings/get-synonyms) is invalid. It should either be an object or set to `null`.
## `invalid_settings_typo_tolerance`
This error occurs if:
* The [`enabled`](/reference/api/settings/get-typotolerance) field is invalid. It should either be a boolean or set to `null`
* The [`disableOnAttributes`](/reference/api/settings/get-typotolerance) field is invalid. It should either be an array of strings or set to `null`
* The [`disableOnWords`](/reference/api/settings/get-typotolerance) field is invalid. It should either be an array of strings or set to `null`
* The [`minWordSizeForTypos`](/reference/api/settings/get-typotolerance) field is invalid. It should either be an object containing the `oneTypo` and `twoTypos` fields or set to `null`
* The value of either [`oneTypo`](/reference/api/settings/get-typotolerance) or [`twoTypos`](/reference/api/settings/get-typotolerance) is invalid. It should either be an integer or set to `null`
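Put together, a well-formed `typoTolerance` settings object matching the field types listed above looks like this sketch (the values are illustrative):

```python
# oneTypo and twoTypos live inside the minWordSizeForTypos object and
# set the minimum word length at which one or two typos are tolerated.
typo_tolerance = {
    "enabled": True,
    "minWordSizeForTypos": {"oneTypo": 5, "twoTypos": 9},
    "disableOnWords": ["shrek"],
    "disableOnAttributes": ["serial_number"],
}
```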
## `invalid_similar_id`
The provided target document identifier is invalid. A document identifier must be of type integer or string, composed only of alphanumeric characters (a-z A-Z 0-9), hyphens (-), and underscores (\_).
## `not_found_similar_id`
Meilisearch could not find the target document. Make sure your target document identifier corresponds to a document in your index.
## `invalid_similar_attributes_to_retrieve`
[`attributesToRetrieve`](/reference/api/search/search-with-post#body-attributes-to-retrieve) is invalid. It should be an array of strings, a string, or set to null.
## `invalid_similar_embedder`
[`embedder`](/reference/api/similar-documents/get-similar-documents-with-post) is invalid. It should be a string corresponding to the name of a configured embedder.
## `invalid_similar_filter`
[`filter`](/reference/api/search/search-with-post#body-filter) is invalid or contains a filter expression with a missing or invalid operator. The filter must be a string, an array of strings, or an array of arrays of strings for the POST endpoint, and a string for the GET endpoint.
Meilisearch also throws this error if the attribute used for filtering is not defined in the `filterableAttributes` list.
## `invalid_similar_limit`
[`limit`](/reference/api/search/search-with-post#body-limit) is invalid. It should be an integer.
## `invalid_similar_offset`
[`offset`](/reference/api/search/search-with-post#body-offset) is invalid. It should be an integer.
## `invalid_similar_show_ranking_score`
[`showRankingScore`](/reference/api/search/search-with-post#body-show-ranking-score) is invalid. It should be a boolean.
## `invalid_similar_show_ranking_score_details`
[`showRankingScoreDetails`](/reference/api/search/search-with-post#body-show-ranking-score-details) is invalid. It should be a boolean.
## `invalid_similar_ranking_score_threshold`
The [`rankingScoreThreshold`](/reference/api/search/search-with-post#body-show-ranking-score-threshold) in a similar documents request is not a number between `0.0` and `1.0`.
## `invalid_state`
The database is in an invalid state. Deleting the database and re-indexing should solve the problem.
## `invalid_store_file`
The `data.ms` folder is in an invalid state. Your `data.mdb` file is corrupted or the `data.ms` folder has been replaced by a file.
## `invalid_swap_duplicate_index_found`
The indexes used in the [`indexes`](/reference/api/indexes/swap-indexes) array for a [swap index](/reference/api/indexes/swap-indexes) request have been declared multiple times. You must declare each index only once.
## `invalid_swap_indexes`
This error happens if:
* The payload doesn't contain exactly two index [`uids`](/reference/api/indexes/swap-indexes) for a swap operation
* The payload contains an invalid index name in the [`indexes`](/reference/api/indexes/swap-indexes) array
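Both swap-related errors above can be caught before sending the request. In this sketch, enforcing uniqueness across the whole payload rather than per entry is a strict reading of the duplicate rule:

```python
def check_swap_payload(swaps):
    """Each swap entry must list exactly two index uids, and no uid
    may appear more than once across the whole payload."""
    seen = set()
    for entry in swaps:
        uids = entry.get("indexes", [])
        if len(uids) != 2:
            raise ValueError("each swap needs exactly two index uids")
        for uid in uids:
            if uid in seen:
                raise ValueError(f"duplicate index in swap payload: {uid}")
            seen.add(uid)

check_swap_payload([{"indexes": ["movies", "movies_new"]}])
```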
## `invalid_task_after_enqueued_at`
The [`afterEnqueuedAt`](/reference/api/async-task-management/list-tasks) query parameter is invalid.
## `invalid_task_after_finished_at`
The [`afterFinishedAt`](/reference/api/async-task-management/list-tasks) query parameter is invalid.
## `invalid_task_after_started_at`
The [`afterStartedAt`](/reference/api/async-task-management/list-tasks) query parameter is invalid.
## `invalid_task_before_enqueued_at`
The [`beforeEnqueuedAt`](/reference/api/async-task-management/list-tasks) query parameter is invalid.
## `invalid_task_before_finished_at`
The [`beforeFinishedAt`](/reference/api/async-task-management/list-tasks) query parameter is invalid.
## `invalid_task_before_started_at`
The [`beforeStartedAt`](/reference/api/async-task-management/list-tasks) query parameter is invalid.
## `invalid_task_canceled_by`
The [`canceledBy`](/reference/api/async-task-management/list-tasks) query parameter is invalid. It should be an integer. Multiple `uid`s should be separated by commas (`,`).
## `invalid_task_index_uids`
The [`indexUids`](/reference/api/async-task-management/list-tasks) query parameter contains an invalid index uid.
## `invalid_task_limit`
The [`limit`](/reference/api/async-task-management/list-tasks) parameter is invalid. It must be an integer.
## `invalid_task_statuses`
The requested task status is invalid. Please use one of the [possible values](/reference/api/async-task-management/get-task).
## `invalid_task_types`
The requested task type is invalid. Please use one of the [possible values](/reference/api/async-task-management/get-task).
## `invalid_task_uids`
The [`uids`](/reference/api/async-task-management/list-tasks) query parameter is invalid.
## `invalid_webhooks`
The create webhook request did not contain a valid JSON payload. Meilisearch also returns this error when you try to create more than 20 webhooks.
## `invalid_webhook_url`
The provided webhook URL isn’t a valid JSON string, is `null`, is missing, or its value cannot be parsed as a valid URL.
## `invalid_webhook_headers`
The provided webhook `headers` field is not a JSON object or not a valid HTTP header. Meilisearch also returns this error if you set more than 200 header fields for a single webhook.
## `invalid_webhook_uuid`
The provided webhook `uuid` is not a valid uuid v4 value.
## `io_error`
This error generally occurs when the host system has no space left on the device or when the database doesn't have read or write access.
## `index_primary_key_no_candidate_found`
[Primary key inference](/learn/getting_started/primary_key#meilisearch-guesses-your-primary-key) failed as the received documents do not contain any fields ending with `id`. [Manually designate the primary key](/learn/getting_started/primary_key#setting-the-primary-key), or add some field ending with `id` to your documents.
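The inference rule (exactly one field name ending with `id`) can be mimicked client-side to predict the outcome before sending documents; the case-insensitive match below is a sketch of that behavior:

```python
def infer_primary_key(doc):
    """Mimic primary key inference: succeed only when exactly one
    field name ends with `id` (case-insensitive)."""
    candidates = [k for k in doc if k.lower().endswith("id")]
    if len(candidates) == 1:
        return candidates[0]
    if not candidates:
        raise ValueError("index_primary_key_no_candidate_found")
    raise ValueError("index_primary_key_multiple_candidates_found")

assert infer_primary_key({"movie_id": 1, "title": "Dune"}) == "movie_id"
```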
## `malformed_payload`
The [Content-Type header](/reference/api/headers) does not match the request body payload format or the format is invalid.
## `missing_api_key_actions`
The [`actions`](/reference/api/keys/list-api-keys) field is missing from payload.
## `missing_api_key_expires_at`
The [`expiresAt`](/reference/api/keys/get-api-key#response-expiresat) field is missing from payload.
## `missing_api_key_indexes`
The [`indexes`](/reference/api/keys/get-api-key#response-indexes) field is missing from payload.
## `missing_authorization_header`
The requested resources are protected with an API key that was not provided in the request header. Check our [security tutorial](/learn/security/basic_security) for more information.
## `missing_content_type`
The payload does not contain a [Content-Type header](/reference/api/headers). Currently, Meilisearch only supports JSON, CSV, and NDJSON.
## `missing_document_filter`
This payload is missing the [`filter`](/reference/api/documents/delete-documents-by-filter) field.
## `missing_document_id`
A document does not contain any value for the required primary key, and is thus invalid. Check documents in the current addition for the invalid ones.
## `missing_index_uid`
The payload is missing the [`uid`](/reference/api/indexes/get-index) field.
## `missing_facet_search_facet_name`
The [`facetName`](/reference/api/facet-search/search-in-facets) parameter is required.
## `missing_master_key`
You need to set a master key before you can access the `/keys` route. Read more about setting a master key at launch in our [security tutorial](/learn/security/basic_security).
## `missing_network_url`
One of the remotes in the [network object](/reference/api/network/get-network) does not contain the `url` field.
## `missing_payload`
The Content-Type header was specified, but no request body was sent to the server or the request body is empty.
## `missing_swap_indexes`
The index swap payload is missing the [`indexes`](/reference/api/indexes/swap-indexes) object.
## `missing_task_filters`
The [cancel tasks](/reference/api/async-task-management/cancel-tasks) and [delete tasks](/reference/api/async-task-management/delete-tasks) endpoints require one of the available query parameters.
## `no_space_left_on_device`
This error occurs if:
* The host system partition reaches its maximum capacity and can no longer accept writes
* The tasks queue reaches its limit and can no longer accept writes. You can delete tasks using the [delete tasks endpoint](/reference/api/async-task-management/delete-tasks) to continue write operations
## `not_found`
The requested resources could not be found.
## `payload_too_large`
The payload sent to the server was too large. Check out this [guide](/learn/self_hosted/configure_meilisearch_at_launch#payload-limit-size) to customize the maximum payload size accepted by Meilisearch.
## `task_not_found`
The requested task does not exist. Please ensure that you are using the correct [`uid`](/reference/api/async-task-management/get-task).
## `too_many_open_files`
Indexing a large batch of documents, such as a JSON file over 3.5GB in size, can result in Meilisearch opening too many file descriptors. Depending on your machine, this might reach your system's default resource usage limits and trigger the `too_many_open_files` error. Use [`ulimit`](https://www.ibm.com/docs/en/aix/7.1?topic=u-ulimit-command) or a similar tool to increase resource consumption limits before running Meilisearch. For example, call `ulimit -Sn 3000` in a UNIX environment to raise the number of allowed open file descriptors to 3000.
## `too_many_search_requests`
You have reached the limit of concurrent search requests. You can raise this limit by relaunching your instance with a higher value for [`--experimental-search-queue-size`](/learn/self_hosted/configure_meilisearch_at_launch).
## `unretrievable_document`
The document exists in the store, but there was an error retrieving it. This probably indicates an inconsistent database state.
## `vector_embedding_error`
Error while generating embeddings. This error often appears when the embedding provider's service is temporarily unavailable. Most providers offer status pages to monitor the state of their services, such as OpenAI's [https://status.openai.com/](https://status.openai.com/).
Inaccessible embedding provider errors usually include a message stating Meilisearch "could not reach embedding server".
## `remote_bad_response`
The remote instance answered with a response that this instance could not use as a federated search response.
## `remote_bad_request`
The remote instance answered with `400 BAD REQUEST`.
## `remote_could_not_send_request`
There was an error while sending the remote federated search request.
## `remote_invalid_api_key`
The remote instance answered with `403 FORBIDDEN` or `401 UNAUTHORIZED` to this instance’s request. The configured search API key is either missing, invalid, or lacks the required search permission.
## `remote_remote_error`
The remote instance answered with `500 INTERNAL ERROR`.
## `remote_timeout`
The proxy did not answer in the allocated time.
## `webhook_not_found`
The provided webhook `uuid` does not correspond to any configured webhooks in the instance.
# Errors
Source: https://www.meilisearch.com/docs/reference/errors/overview
Consult this page for an overview of how Meilisearch reports and formats error objects.
Meilisearch uses the following standard HTTP codes for a successful or failed API request:
| Status code | Description |
| :---------- | :---------------------------------------------------------------------------------------- |
| 200 | ✅ **Ok** Everything worked as expected. |
| 201 | ✅ **Created** The resource has been created (synchronous) |
| 202 | ✅ **Accepted** The task has been added to the queue (asynchronous) |
| 204 | ✅ **No Content** The resource has been deleted or no content has been returned |
| 205 | ✅ **Reset Content** All the resources have been deleted |
| 400 | ❌ **Bad Request** The request was unacceptable, often due to missing a required parameter |
| 401 | ❌ **Unauthorized** No valid API key provided |
| 403 | ❌ **Forbidden** The API key doesn't have the permissions to perform the request |
| 404 | ❌ **Not Found** The requested resource doesn't exist |
## Errors
All detailed task responses contain an [`error`](/reference/api/async-task-management/get-task) field. When a task fails, it is always accompanied by a JSON-formatted error response. Meilisearch errors can be one of the following types:
| Type | Description |
| :-------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`invalid_request`** | This is due to an error in the user input. It is accompanied by the HTTP code `4xx` |
| **`internal`** | This is due to machine or configuration constraints. It is accompanied by the HTTP code `5xx` |
| **`auth`** | This type of error is related to authentication and authorization. It is accompanied by the HTTP code `4xx` |
| **`system`** | This indicates your system has reached or exceeded its limit for disk size, index size, open files, or the database doesn't have read or write access. It is accompanied by the HTTP code `5xx` |
### Error format
```json
{
"message": "Index `movies` not found.",
"code": "index_not_found",
"type": "invalid_request",
"link": "https://docs.meilisearch.com/errors#index_not_found"
}
```
| Field | Description |
| :------------ | :------------------------------------------------ |
| **`message`** | Human-readable description of the error |
| **`code`** | [Error code](/reference/errors/error_codes) |
| **`type`** | [Type](#errors) of error returned |
| **`link`** | Link to the relevant section of the documentation |
If you're having trouble understanding an error, take a look at the [complete list](/reference/errors/error_codes) of `code` values and descriptions.
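Because the `type` field distinguishes caller-side problems from server-side ones, a client can branch on it when deciding how to react. The sketch below, using the example error object above, assumes a simple two-way policy (the handling strategy itself is illustrative, not prescribed by Meilisearch):

```python
import json

# Example error object taken from the format shown above
ERROR_RESPONSE = """{
  "message": "Index `movies` not found.",
  "code": "index_not_found",
  "type": "invalid_request",
  "link": "https://docs.meilisearch.com/errors#index_not_found"
}"""

def classify_error(raw: str) -> str:
    """Map a Meilisearch error object to a coarse handling strategy:
    invalid_request/auth errors are caller bugs to fix, while
    internal/system errors are candidates for retrying or alerting."""
    error = json.loads(raw)
    if error["type"] in ("invalid_request", "auth"):
        return f"fix the request: {error['message']} (see {error['link']})"
    return f"server-side issue ({error['code']}), consider retrying"

print(classify_error(ERROR_RESPONSE))
```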