Conversational search, out of the box: Meilisearch Chat in Cloud UI
Meilisearch Cloud now ships a built-in chat UI. Select an index, get an auto-generated system prompt, guardrails, and an inspector tab to debug - no separate AI pipeline required.

Try Chat now
Meilisearch Chat is available today in your Cloud dashboard. Select an index and see what gets generated.
Day 3 of Meilisearch Launch Week. New releases dropping every day - keep reading to the end to see what is coming tomorrow.
Prefer to watch instead of read? See the full setup walkthrough →
At our last Launch Week in October, we shipped Meilisearch Chat - a single /chat endpoint that handles the full RAG workflow on top of your Meilisearch data. No separate vector database, no custom LLM orchestration. One API call, grounded in your index.
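Since the endpoint speaks the OpenAI chat-completions dialect, a request is just standard chat JSON pointed at your Meilisearch project. Here is a minimal sketch of the request body; the project URL, workspace name, and model id are placeholders - pull the real values from your Cloud dashboard:

```python
import json

# Sketch of a request to Meilisearch's OpenAI-compatible chat endpoint.
# "your-project" and "my-workspace" are placeholders, not real values.
MEILI_URL = "https://your-project.meilisearch.io"
ENDPOINT = f"{MEILI_URL}/chats/my-workspace/chat/completions"

payload = {
    "model": "gpt-4o-mini",  # whichever model your workspace is configured with
    "messages": [
        {"role": "user", "content": "How do I reset my password on mobile?"}
    ],
    "stream": True,  # the endpoint streams its responses
}

# The body is ordinary OpenAI chat-completions JSON, so any OpenAI-compatible
# client can POST it (with an Authorization: Bearer <api key> header).
print(ENDPOINT)
print(json.dumps(payload))
```

Because the shape is OpenAI-compatible, existing client libraries work by swapping in the Meilisearch base URL and API key.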
The feedback since then has been consistent: the endpoint worked, but getting it production-ready still took effort. Writing a system prompt from scratch. Deciding which guardrails to apply. Building something to inspect what the model was actually doing at runtime - because when a response goes wrong, the pipeline gives you nothing to work with.

Today we are closing that gap. Open your Cloud dashboard, select an index, and the initial setup is done for you. The system prompt is generated from your actual documents. Guardrails are set. There is an inspector tab so you can see every tool call and raw LLM message before you ship anything, and an integrate page with ready-to-use code snippets for wiring it into your app.
This is Meilisearch Chat in the Cloud UI:
What happens when you select an index
The first time you open the Chat section in your Cloud dashboard and pick an index, Meilisearch does a few things automatically:
- It samples documents from your index to understand the structure and content.
- Based on that, it generates a system prompt - not a generic one, but one grounded in what your data actually contains.
- It writes a search description that tells the model how and when to invoke search.
- It sets default guardrails that keep the chat scoped to your index rather than letting it wander into general LLM territory.
Auto-generated system prompts and guardrails

The result is a grounded chat experience from the first message. You do not need to write a single line of prompt engineering to get something functional.
From there, everything is configurable. If the default system prompt is close but not quite right, you can refine it. If the guardrails are too tight for your use case, you can adjust them. But the starting point is already wired up and working, which is what usually takes the most time.
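If you prefer to adjust these settings from code rather than the dashboard, the same knobs live behind a chat settings route. The payload below is a sketch: the field names follow the current experimental API and may change between versions, and the prompt text is invented for illustration.

```python
import json

# Sketch of a chat settings payload - the same system prompt and search
# description the Cloud UI generates, expressed as JSON. Field names follow
# the current experimental API and may differ in your version.
settings = {
    "prompts": {
        "system": (
            "You are a support assistant for Acme. Answer only from the "
            "indexed help articles; if nothing relevant is found, say so."
        ),
        "searchDescription": (
            "Search the help-article index. Call this tool whenever the "
            "user asks a question about the product."
        ),
    }
}

# Send this as the body of a PATCH to the workspace's settings route,
# with your API key in the Authorization header.
print(json.dumps(settings, indent=2))
```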
The ‘inspector’ tab
The hardest part of debugging RAG is that failures are usually invisible.
A user asks a question. The model gives a wrong or irrelevant answer. You do not know if search was called. You do not know what query it ran. You do not know what documents came back or how the model used them. You are debugging a black box.
The inspector tab changes that. Every tool call, every raw LLM message, in sequence - exactly as it happened. If the model called search, you see the query it used and the results it got. If it hallucinated something it should have searched for, that is visible too.
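The messages the inspector surfaces follow the standard OpenAI tool-calling shape, so you can reason about them the same way you would any chat-completions transcript. Here is a sketch of recovering the query from a tool call; the message, tool name, and arguments below are fabricated examples, not real inspector output:

```python
import json

# A made-up assistant message in the OpenAI tool-calling shape, similar in
# structure to what the inspector shows: the model decided to call search.
message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "searchInIndex",  # hypothetical tool name
                "arguments": json.dumps({"q": "reset password mobile"}),
            },
        }
    ],
}

# Walk the tool calls and recover the query the model actually ran - the
# first thing to check when an answer looks ungrounded.
for call in message.get("tool_calls") or []:
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], "->", args["q"])
```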

This ships as the default in Cloud, not an opt-in debugging mode, because it is what you need when something goes wrong - and something always goes wrong at least once before you ship.
The ‘integrate’ tab
Once your chat is working the way you want, the integrate page gives you ready-to-use code snippets for connecting it to AI tools and assistants. Copy and paste into your app, your support portal, your internal tool - wherever this needs to live.
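Whatever client you paste those snippets into, the core of the integration is the same: the endpoint streams OpenAI-style server-sent events, and your app reads `data:` chunks and concatenates the content deltas. A minimal, dependency-free sketch of that loop - the sample lines are fabricated to show the parsing; a real integration reads them off the HTTP response body:

```python
import json

# Fabricated server-sent-event lines in the OpenAI streaming format:
# each line is "data: {json chunk}", terminated by "data: [DONE]".
sse_lines = [
    'data: {"choices":[{"delta":{"content":"You can reset "}}]}',
    'data: {"choices":[{"delta":{"content":"it from Settings."}}]}',
    "data: [DONE]",
]

answer = []
for line in sse_lines:
    data = line.removeprefix("data: ")
    if data == "[DONE]":
        break
    chunk = json.loads(data)
    delta = chunk["choices"][0]["delta"].get("content")
    if delta:
        answer.append(delta)

print("".join(answer))  # the assembled answer text
```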

Who this is for
The most obvious use cases are the ones where users should get a direct answer, not a list of links: support portals, internal knowledge bases, documentation sites, product catalogs.
If someone asks your support chat "how do I reset my password on mobile," they want an answer. A list of five help articles is a worse experience, even if it is technically correct.
Meilisearch Chat is grounded in your index by default. It searches your documents, synthesizes a response, and stays within scope. It is not a general-purpose LLM interface, and that is deliberate. A grounded chat that answers your users' actual questions is more useful than a capable model with no constraints.
Try it yourself
We've set up a live demo using real vacation rental data so you can experience Meilisearch Chat firsthand - no configuration needed. Ask it anything: find a beachfront property under a certain budget, a cabin in the mountains, a pet-friendly place in Europe. Open the demo →
What is coming next
This release covers chat for a single index. Here is what is already in progress:
- Multi-index chat - query across more than one index in a single chat session
- More granular guardrail configuration - fine-grained control over how the model behaves at the edges of your data
- Richer inspector tooling - deeper visibility into ranking scores, retrieved documents, and why the model made the choices it did
→ Keep an eye on our public roadmap.
Part of something bigger
This is Day 3 of Meilisearch Launch Week. Every day this week, we are shipping something new.
Tomorrow, a new feature drops - and it is free for all Cloud users. If you have ever wondered exactly which part of your search pipeline is eating your speed, you will get some answers 👀


