RAG-powered chat

Conversational Search

Let users ask questions and get real answers, grounded in your own content. No hallucinations, no guessing.

How RAG works

Retrieval-Augmented Generation combines search with AI to deliver accurate, grounded answers.

01

User asks a question

Natural language input from your chat interface.

02

Meilisearch retrieves context

Hybrid search finds the most relevant documents instantly.

03

LLM generates answer

AI synthesizes a response grounded in your actual content.

AI

"Based on your documentation, the recommended approach is to use the /search endpoint with the following parameters…"

Sources: API Reference · Best Practices Guide
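
The three steps above can be sketched as a minimal pipeline. Everything here is stubbed for illustration: in a real deployment, `retrieve` would be a Meilisearch hybrid search and `generate` an LLM call.

```python
# Minimal sketch of the retrieve-then-generate loop described above.
# Both steps are stubs: `retrieve` stands in for a Meilisearch query,
# `generate` for an LLM completion grounded in the retrieved context.

DOCS = [
    {"title": "API Reference", "text": "Use the /search endpoint with q and limit."},
    {"title": "Best Practices Guide", "text": "Prefer hybrid search for RAG context."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Step 2: find the documents most relevant to the question (naive word overlap)."""
    words = set(question.lower().split())
    def score(doc: dict) -> int:
        return len(words & set(doc["text"].lower().split()))
    return sorted(DOCS, key=score, reverse=True)[:k]

def generate(question: str, context: list[dict]) -> str:
    """Step 3: an LLM would synthesize an answer from the retrieved context (stub)."""
    sources = ", ".join(doc["title"] for doc in context)
    return f"Answer grounded in: {sources}"

def answer(question: str) -> str:
    """Step 1 -> 2 -> 3: question in, grounded answer with sources out."""
    return generate(question, retrieve(question))

print(answer("Which endpoint should I use for search?"))
```

The key property the stub preserves is that generation only ever sees retrieved documents, which is what keeps answers grounded and attributable.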

Built for production AI

Everything you need to deploy conversational search that actually works.

Grounded in your data

Every answer is built from your own content, so users get accurate responses instead of guesses.

Drop into your existing stack

Works with the OpenAI SDK you already use. Point it at Meilisearch and ship in minutes.
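
As a sketch of what that drop-in looks like: Meilisearch exposes an OpenAI-compatible chat completions route, so the stock SDK works by overriding `base_url`. The host, workspace name, and API key below are placeholders, and the exact route depends on your Meilisearch version and configuration. The same request, built by hand with the standard library so the shape is visible:

```python
# Sketch of the OpenAI-compatible request shape. With the OpenAI SDK it
# would be (placeholders throughout):
#
#     from openai import OpenAI
#     client = OpenAI(base_url="http://localhost:7700/chats/docs-assistant",
#                     api_key="aMeilisearchApiKey")
#
# Built by hand instead, to show the route and payload:
import json
import urllib.request

def chat_request(host: str, workspace: str, api_key: str, question: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completions request."""
    url = f"{host}/chats/{workspace}/chat/completions"
    payload = {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = chat_request("http://localhost:7700", "docs-assistant", "aKey",
                   "How do I filter results?")
print(req.full_url)
```

Because the route mirrors OpenAI's `/chat/completions`, any client library that accepts a custom base URL can be pointed at it without code changes.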

Your choice of AI model

Use OpenAI, Azure OpenAI, Mistral, or vLLM. Swap providers later without rewriting code.

Answers appear instantly

Responses stream word by word as they are generated. No waiting on a spinner.
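
On the client side, the streaming pattern can be sketched like this, assuming OpenAI-style chunk objects; the simulated stream below stands in for a real API response:

```python
# Sketch of client-side streaming: token deltas arrive one at a time
# (as with the OpenAI SDK's stream=True) and are rendered immediately
# instead of after the full response. The chunk shape is the OpenAI-style
# {"choices": [{"delta": {"content": ...}}]}, assumed for illustration.

def render_stream(chunks) -> str:
    """Print each delta the moment it arrives and return the assembled answer."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            print(delta, end="", flush=True)  # text appears as it is generated
            parts.append(delta)
    print()
    return "".join(parts)

# Simulated stream standing in for a real streaming response:
fake_stream = [
    {"choices": [{"delta": {"content": t}}]}
    for t in ["Use ", "the ", "/search ", "endpoint."]
]
full_answer = render_stream(fake_stream)
```

With a real stream, the loop body is identical; only the source of `chunks` changes from a list to the SDK's streaming iterator.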

Trustworthy by design

Every answer cites the documents it came from, so users can verify what they read.

Better context, better answers

Combine keyword and semantic search to pull the right material into every prompt.
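
As a sketch, this is roughly what a hybrid search request body looks like: `semanticRatio` blends keyword relevance (0.0) with semantic similarity (1.0). The embedder name `"default"` is a placeholder for whatever embedder is configured on your index, and the default ratio here is an arbitrary choice for illustration.

```python
# Sketch of a Meilisearch hybrid search payload for pulling RAG context.
# "default" is a placeholder embedder name; semanticRatio mixes keyword
# scoring (0.0) with semantic scoring (1.0).

def hybrid_query(question: str, semantic_ratio: float = 0.7, limit: int = 5) -> dict:
    """Body for a search request mixing keyword and semantic relevance."""
    if not 0.0 <= semantic_ratio <= 1.0:
        raise ValueError("semantic_ratio must be between 0 and 1")
    return {
        "q": question,
        "limit": limit,  # keep prompts small: only top documents become context
        "hybrid": {"semanticRatio": semantic_ratio, "embedder": "default"},
    }

print(hybrid_query("how do I paginate results?"))
```

Keeping `limit` low matters for RAG: the retrieved documents are pasted into the LLM prompt, so a handful of highly relevant hits beats a long tail of marginal ones.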

Works with all major LLM providers

Native support for OpenAI, Azure OpenAI, Mistral, and vLLM.

58 models from 13 providers

OpenAI

6 models

GPT-5
GPT-5 mini
GPT-4o
GPT-4o mini
o3
o4-mini

Anthropic

3 models

Claude Opus 4.7
Claude Sonnet 4.6
Claude Haiku 4.5

Google

5 models

Gemini 3.1 Pro
Gemini 3 Flash
Gemini 2.5 Pro
Gemini 2.5 Flash
Gemini 2.5 Flash-Lite

Mistral AI

7 models

Mistral Large 3
Mistral Small 4
Pixtral Large
Magistral Medium
Magistral Small
Devstral 2
Codestral 2501

Cohere

2 models

Command A
Command R

DeepSeek

3 models

DeepSeek V4 Pro
DeepSeek V4 Flash
DeepSeek R1

AWS Bedrock

4 models

Amazon Nova 2 Pro
Amazon Nova 2 Lite
Amazon Nova Micro
Amazon Nova 2 Sonic

Hugging Face

6 models

Llama 4 Maverick
Llama 4 Scout
Qwen 3.5 72B
Gemma 4
Phi-4
Phi-4-mini

Ollama

8 models

Llama 4 Scout
Llama 4 Maverick
Kimi K2.6
Qwen 3.5
Gemma 4
Mistral
Phi-4
DeepSeek R1

Together AI

4 models

Llama 4 Maverick
Qwen 3.5 397B A17B
Gemma 4 31B
DeepSeek V4 Flash

Fireworks AI

4 models

Llama 4 Scout
Kimi K2.5
Qwen 3.5
DeepSeek V4 Flash

Cloudflare AI

4 models

Llama 4 Scout
Kimi K2.6
Gemma 4
Mistral 7B

Moonshot AI

2 models

Kimi K2.6
Kimi K2.5

Custom

Any provider

Meilisearch works with any model that exposes a REST API and supports tool calling.

Perfect for

Anywhere users need answers from your content in natural language.

Customer support

Answer customer questions instantly from your knowledge base. Reduce ticket volume and response times.

Documentation assistant

Help developers find answers in plain language instead of digging through pages of docs.

Internal knowledge

Give employees instant access to company policies, procedures, and tribal knowledge.

Product discovery

"I need a laptop for video editing under $1500" → personalized product recommendations.

Research assistant

Search through documents, papers, and reports with natural language questions.

Educational content

Help students find answers across course materials, textbooks, and lecture notes.

Frequently asked questions

What is conversational search?

Conversational search lets users ask questions in natural language and receive direct answers, instead of just a list of links. Meilisearch builds it on top of RAG (Retrieval-Augmented Generation): every question retrieves relevant context from your data first, and an LLM then generates an answer grounded in that content with source attribution.

Ready to get started?

Run Conversational Search on Meilisearch Cloud, or self-host the open-source engine.