Why intent understanding is the hardest part of AI-powered search (and how to solve it)
The challenge isn't connecting to an LLM. It's figuring out what people actually mean.

Last week at MCP Connect Days in Paris, I watched a room full of engineers nod in unison. The speaker from The Fork was describing their biggest challenge building AI-powered search: not the LLM integration, not the API calls, not even the infrastructure. The hard part? Understanding what users actually want.
When someone types "Italian restaurant near Bastille for under 30€ with outdoor seating," the system needs to decompose that into: cuisine type (Italian), location (Bastille), price range (<30€), and amenity (outdoor seating). Four distinct filters, zero explicit structure. Just vibes.
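To make the decomposition concrete, here is a toy rule-based sketch. Real systems typically use an LLM or a trained entity-recognition model rather than regexes, and the field names below are hypothetical:

```python
import re

def parse_restaurant_query(query: str) -> dict:
    """Naive, rule-based decomposition of a restaurant query into
    structured filters. Illustrative only: the patterns and the
    field names ("cuisine", "max_price", ...) are invented."""
    filters = {}
    # Cuisine: match against a known vocabulary
    for cuisine in ("italian", "french", "japanese"):
        if cuisine in query.lower():
            filters["cuisine"] = cuisine
    # "near X" → location
    m = re.search(r"near (\w+)", query, re.IGNORECASE)
    if m:
        filters["location"] = m.group(1)
    # "under N€" → price ceiling
    m = re.search(r"under (\d+)\s*€", query)
    if m:
        filters["max_price"] = int(m.group(1))
    # Amenities: again, a known vocabulary
    if "outdoor seating" in query.lower():
        filters["amenities"] = ["outdoor seating"]
    return filters

query = "Italian restaurant near Bastille for under 30€ with outdoor seating"
print(parse_restaurant_query(query))
# {'cuisine': 'italian', 'location': 'Bastille', 'max_price': 30, 'amenities': ['outdoor seating']}
```

The point is not the regexes; it's that every one of the four filters needs its own extraction logic, and the sketch above already breaks on paraphrases like "less than 30 euros."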
This is the intent understanding problem. And after talking to dozens of teams building conversational search interfaces, I'm convinced it's where most implementations quietly fail.
What is intent understanding?
Intent understanding is the process of translating natural language queries into structured search parameters. It means extracting the right keywords, filters, and facets from conversational input so a search engine can return relevant results.
The gap between natural language and structured queries
Search engines are remarkably good at what they do. Give them a query with the right filters, and they'll return relevant results in milliseconds. The technology is mature, battle-tested, fast.
The problem is that humans don't speak in structured queries.
Coming back to our Italian restaurant search: the system correctly identified four distinct parameters from a single sentence. But that's the simple case. Now scale this complexity. Real queries include negations ("not too touristy"), preferences ("preferably with a terrace"), comparisons ("somewhere nicer than last time"), and context that requires memory ("that place my colleague mentioned"). Each of these requires a different parsing strategy.
Most teams building natural language search underestimate this translation layer. They focus on the LLM integration, assuming the hard part is getting GPT to respond. But LLMs are commoditised. The translation from natural language to structured search? That's where the differentiation lives.
How The Fork tackles intent parsing
At MCP Connect Days, The Fork's engineering team shared their approach to building a ChatGPT app for restaurant discovery. Their challenge perfectly illustrates the intent understanding problem.
A typical query: "Where can I take my parents for their anniversary? They like French food, somewhere quiet, budget around 80€ per person."
Breaking this down:
- Occasion context → anniversary (implies upscale, romantic)
- Party composition → parents (older demographic, accessibility matters)
- Cuisine → French
- Atmosphere → quiet
- Budget → ~80€/person
The Fork's system needs to translate these signals into filters their search infrastructure understands. But "quiet" isn't a database field. Neither is "anniversary-appropriate." They've built a sophisticated interpretation layer that maps conversational signals to searchable attributes.
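One way to picture such an interpretation layer is a lookup from soft conversational signals to concrete, filterable attributes. The mapping below is invented for illustration; The Fork's actual attribute names are not public:

```python
# Hypothetical mapping from conversational signals to concrete,
# filterable attributes. Attribute names and values are invented.
SIGNAL_MAP = {
    "anniversary": {"ambiance": "romantic", "tier": "upscale"},
    "parents": {"accessible": True},
    "quiet": {"noise_level": "low"},
}

def interpret(signals: list[str]) -> dict:
    """Merge the attribute hints for every recognised signal."""
    attrs = {}
    for signal in signals:
        attrs.update(SIGNAL_MAP.get(signal, {}))
    return attrs

print(interpret(["anniversary", "parents", "quiet"]))
```

A static table like this is only the starting point; the hard part is keeping it accurate as the catalogue and the vocabulary of real queries evolve.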
Their key insight: the quality of results depends almost entirely on how well you understand the query, not how powerful your LLM is. A perfectly tuned GPT-4 with mediocre intent parsing will lose to a simpler model with excellent query understanding.
The rise of conversational commerce
This isn't just a technical curiosity. Conversational commerce is reshaping how people shop and discover products online.
For online retailers, users now expect to interact with search the same way they'd ask a knowledgeable shop assistant. "I need running shoes for flat feet, nothing too flashy, under 150€" is a reasonable query for a human sales associate. It should work for your search too. The companies getting this right are seeing real results: better query understanding means higher conversion rates, lower bounce rates, and customers who actually find what they're looking for.
For classified marketplaces like Leboncoin, the challenge multiplies. "Looking for a used iPhone, good condition, around Lyon, under 400€" requires parsing: product category (smartphones), brand (Apple), model family (iPhone), condition (good), location (Lyon area), and price (<400€). Each maps to a different filter in their search system. Get one wrong, and you've lost the sale.
The pattern repeats everywhere. Marketplaces, travel sites, content platforms. All building custom intent parsing layers. All solving the same fundamental problem.
Why building this from scratch is painful
Every team I've talked to that's built conversational search has essentially reinvented the same wheel. They all build:
- Entity extraction – pulling out product types, brands, attributes
- Filter mapping – converting extracted entities to search parameters
- Ambiguity resolution – handling cases where intent is unclear
- Validation – ensuring the generated query actually makes sense
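The four components above tend to end up chained as a pipeline. Here is a minimal, hard-coded sketch with hypothetical names; in practice the first stage is usually an NER model or an LLM call, and each stage is far larger:

```python
def extract_entities(query: str) -> dict:
    # Entity extraction: hard-coded lookups stand in for a real model.
    entities = {}
    if "iphone" in query.lower():
        entities["product"] = "iPhone"
    if "lyon" in query.lower():
        entities["city"] = "Lyon"
    return entities

def map_to_filters(entities: dict, schema: set) -> dict:
    # Filter mapping: route entities to the fields the schema exposes.
    field_for = {"product": "model", "city": "location"}
    return {field_for[k]: v for k, v in entities.items()
            if field_for.get(k) in schema}

def resolve_ambiguity(filters: dict) -> dict:
    # Ambiguity resolution: apply a default when intent is unclear.
    filters.setdefault("condition", "any")
    return filters

def validate(filters: dict, schema: set) -> dict:
    # Validation: drop anything the engine cannot actually filter on.
    return {k: v for k, v in filters.items() if k in schema}

schema = {"model", "location", "condition", "price"}
filters = validate(
    resolve_ambiguity(
        map_to_filters(extract_entities("used iPhone around Lyon"), schema)),
    schema)
print(filters)
# {'model': 'iPhone', 'location': 'Lyon', 'condition': 'any'}
```

Every team builds some version of this chain, and every stage has to be revisited whenever the schema changes.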
This is significant engineering effort. And it's duplicated across every company building conversational interfaces.
The build-vs-buy decision here is stark. You can spend months building a custom query parsing layer, tuning it for your domain, handling edge cases, maintaining it as your schema evolves. Or you can choose infrastructure that handles this natively.
How Meilisearch solves intent understanding
This is exactly why we built the chat route directly into Meilisearch.
Instead of requiring every developer to build their own intent parsing layer, Meilisearch handles the natural language to structured query translation natively. The search engine itself understands how to interpret conversational input.
Here's what this looks like in practice. When you send a natural language query to the chat route, Meilisearch:
- Parses the intent – identifies what the user is searching for versus filtering by
- Maps to your schema – automatically routes extracted entities to the appropriate filterable attributes
- Generates the query – produces an optimised search with the right combination of keywords and filters
- Returns results – with full context about how the query was interpreted
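As a sketch of what calling this looks like, here is a request payload in the OpenAI-style chat-completions shape. The endpoint path, workspace name, model, and payload fields below are assumptions for illustration; check the current Meilisearch documentation for the exact contract of the chat route:

```python
import json

# Hypothetical request to Meilisearch's chat route. Endpoint path,
# workspace name, and payload shape are assumptions based on the
# OpenAI-style chat-completions convention.
MEILI_URL = "http://localhost:7700"
workspace = "restaurants"  # placeholder workspace name
endpoint = f"{MEILI_URL}/chats/{workspace}/chat/completions"

payload = {
    "model": "gpt-4o-mini",  # whatever model the workspace is configured with
    "messages": [
        {"role": "user",
         "content": "Italian restaurant near Bastille under 30€ with outdoor seating"},
    ],
    "stream": False,
}

# Against a running instance you would send it with, for example:
#   requests.post(endpoint, json=payload,
#                 headers={"Authorization": "Bearer <api-key>"})
print(json.dumps(payload, indent=2))
```

The important part is what you do not see: no entity extraction, no filter mapping, no validation code on the caller's side.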
The key insight: your search engine already knows your data schema. It knows which fields are filterable, what values exist, what combinations make sense. This context is gold for intent understanding.
When Meilisearch sees "red derbies under 100€" and knows your schema has "color" (with "red" as a value), "category" (with "derbies"), and "price" (numeric, filterable), it can make intelligent parsing decisions. Building this at the search engine level is the right abstraction because that's where the schema knowledge lives.
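For that example, the structured query the engine ends up running can be expressed with Meilisearch's filter syntax. The helper below and the split between keywords and filters are illustrative, but the filter expression itself is valid Meilisearch syntax:

```python
def build_filter(parsed: dict) -> str:
    """Turn parsed intent into a Meilisearch filter expression.
    The parsed-intent keys ("color", "max_price", ...) are
    hypothetical; the output syntax is Meilisearch's."""
    clauses = []
    if "color" in parsed:
        clauses.append(f'color = {parsed["color"]}')
    if "category" in parsed:
        clauses.append(f'category = {parsed["category"]}')
    if "max_price" in parsed:
        clauses.append(f'price < {parsed["max_price"]}')
    return " AND ".join(clauses)

parsed = {"color": "red", "category": "derbies", "max_price": 100}
print(build_filter(parsed))
# color = red AND category = derbies AND price < 100

# With the official Python client this would run as, for example:
#   index.search("derbies", {"filter": build_filter(parsed)})
```

Note that the fields used here must be declared as filterable attributes on the index, which is exactly the schema knowledge the engine already holds.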
This approach builds on semantic search, which focuses on understanding meaning and similarity. Intent understanding goes further: it decomposes a query into actionable search parameters. You need both for truly conversational search, and Meilisearch combines them.
Why this matters for the AI search ecosystem
The MCP Connect Days conference made one thing clear: everyone is building conversational interfaces. ChatGPT apps, Claude integrations, MCP servers, custom assistants. The race is on.
But most of these implementations will be mediocre. Not because the LLMs are bad, but because intent understanding is treated as an afterthought. Teams will ship, users will try queries that don't quite work, and the magic of AI-powered search will feel more like a gimmick than a feature.
The teams that win will be the ones who invest in robust query understanding. Either by building sophisticated custom layers (expensive, time-consuming) or by choosing infrastructure that handles this natively.
At the conference, I heard the same frustration repeatedly: "The LLM part was easy. Making it actually understand what users want? That took months."
It doesn't have to.
The bottom line
Conversational search is coming to everything. The user expectation is set: if I can ask ChatGPT anything, why can't I ask your product search the same way?
But the gap between that expectation and reality is intent understanding. It's the unsexy, difficult, often-ignored layer between natural language and structured results.
You can build this yourself. Many teams do. But if you're evaluating search infrastructure for conversational use cases, ask one question: does it understand what my users mean, or do I have to teach it?
The search engine that answers "yes" to the first question is the one worth betting on.
Frequently Asked Questions
What is intent understanding in search?
Intent understanding is the process of interpreting what a user actually wants from their search query. It involves extracting keywords, identifying filters (like price ranges or categories), and understanding implicit requirements. For example, turning "cheap red sneakers" into a search for sneakers + colour filter (red) + price sort (ascending).
What's the difference between semantic search and conversational search?
Semantic search focuses on understanding the meaning of words and finding conceptually similar results. Conversational search goes further: it parses natural language queries into structured parameters, handles follow-up questions, and maintains context. Semantic search might understand that "sneakers" and "trainers" are related; conversational search understands that "under 100€" is a price filter.
How do you parse natural language search queries?
There are two main approaches: building a custom NLP pipeline (entity extraction, intent classification, slot filling) or using a search engine with built-in query understanding. Custom pipelines offer control but require significant engineering investment. Native solutions like Meilisearch's chat route handle parsing automatically using your existing schema.
Why is intent understanding harder than LLM integration?
LLM APIs are well-documented and relatively straightforward to integrate. Intent understanding requires domain-specific knowledge: understanding your product taxonomy, mapping natural language to your filter schema, handling ambiguous queries, and maintaining quality as your catalogue evolves. It's an ongoing engineering challenge, not a one-time integration.
Thomas Payet is COO and co-founder at Meilisearch. He attended MCP Connect Days in Paris in February 2026.


