How traditional search ranking works
Single-score ranking models
Traditional search engines compute a single numeric score per document, then sort by that number. The two most common approaches are:

BM25 (Elasticsearch, OpenSearch, Lucene, MongoDB Atlas Search) — the industry standard for full-text search. It scores documents based on:

- Term frequency (TF): How often the query term appears in the document
- Inverse document frequency (IDF): How rare the term is across all documents
- Document length normalization: Shorter documents get a slight boost
`ts_rank` — a simpler model used by PostgreSQL full-text search (and Supabase). It uses term frequency with optional document length normalization, but does not consider inverse document frequency. PostgreSQL also offers `ts_rank_cd` (cover density), which factors in the proximity of matched terms. Both are less sophisticated than BM25.
In all cases, the engine produces a score like 8.72 or 3.14, and results are sorted by this number in descending order.
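The components above can be sketched in Python using the Lucene-style BM25 formula. This is a minimal single-term illustration, not a production scorer; `k1` and `b` are the standard free parameters:

```python
import math

def bm25_score(tf, doc_len, avg_doc_len, n_docs, doc_freq, k1=1.2, b=0.75):
    """Okapi BM25 contribution of one query term for one document."""
    # IDF: rarer terms (low doc_freq) get a larger weight.
    idf = math.log(1 + (n_docs - doc_freq + 0.5) / (doc_freq + 0.5))
    # TF with saturation and document length normalization.
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm

# A rare term outweighs a common one at the same term frequency.
rare = bm25_score(tf=2, doc_len=100, avg_doc_len=120, n_docs=10_000, doc_freq=5)
common = bm25_score(tf=2, doc_len=100, avg_doc_len=120, n_docs=10_000, doc_freq=5_000)
```

Running this shows the two behaviors described above: `rare > common` (IDF), and the same term frequency scores higher in a shorter document (length normalization).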
The problem with single-score ranking
BM25 was designed for information retrieval — finding research papers, legal documents, or web pages where term frequency genuinely signals relevance. But for application search — e-commerce, media catalogs, SaaS dashboards — this model breaks down:

- Typos are invisible: BM25 treats “iPhone” and “iPhoone” as completely different terms. The misspelled query returns zero results
- Word order is ignored: Searching “dark knight” and “knight dark” produce identical scores, even though user intent clearly favors the first ordering
- Field importance is flattened: A match in a product title should matter more than a match in a review comment, but BM25 requires manual field boosting that’s fragile and hard to tune
- Scoring is opaque: A score of `8.72` means nothing to a developer debugging why result A appears before result B
- Prefix matching requires workarounds: A user typing “prog” expects to see “programming” — BM25 doesn’t do this without additional analyzers
How Meilisearch ranks results
Meilisearch replaces the single-score model with a multi-criteria bucket sort system. Instead of computing one number, it evaluates documents through a sequence of ranking rules, each acting as a successive filter.

The ranking pipeline
When a user searches for `"badman dark knight returns"`, Meilisearch applies ranking rules in order: documents are first grouped by how well they satisfy the first rule, and each group is then re-sorted by the next rule, so later rules only break ties left by earlier ones. Three consequences follow:

- A document matching 4/4 words with 2 typos always ranks above a document matching 3/4 words with 0 typos
- The `words` rule has absolute priority over `typo`, which has absolute priority over `proximity`, and so on
- There is no way for a high score in one dimension to compensate for a low score in another
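As a sketch, Python's tuple comparison produces the same ordering as a bucket sort when each rule is reduced to a single number. The per-document rule outputs below are invented for illustration, not Meilisearch's actual internals:

```python
# Each document carries one value per ranking rule; comparing tuples
# element by element means a later rule only breaks ties left by earlier ones.
docs = [
    {"id": "A", "words_matched": 3, "typos": 0, "proximity_cost": 1},
    {"id": "B", "words_matched": 4, "typos": 2, "proximity_cost": 5},
    {"id": "C", "words_matched": 4, "typos": 2, "proximity_cost": 2},
]

def rank_key(doc):
    # More matched words first, then fewer typos, then tighter proximity.
    return (-doc["words_matched"], doc["typos"], doc["proximity_cost"])

ranked = sorted(docs, key=rank_key)
print([d["id"] for d in ranked])  # → ['C', 'B', 'A']
```

Note that A, despite having zero typos, ranks last: it matched fewer words, and no score on a later rule can compensate.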
The six default ranking rules

| Order | Rule | What it measures | Why it matters |
|---|---|---|---|
| 1 | `words` | Number of query terms matched | Documents matching more of what the user typed are more relevant |
| 2 | `typo` | Number of typos corrected | Exact matches are preferred, but typos still return results |
| 3 | `proximity` | Distance between matched terms | “dark knight” in sequence beats “dark … knight” far apart |
| 4 | `attribute` | Which attribute matched, and where within it | A title match beats a description match; matching at the start of a title beats matching at the end |
| 5 | `sort` | User-defined sort order | Only active when the query includes a sort parameter |
| 6 | `exactness` | Exact match vs prefix/typo match | “knight” exactly beats “knights” (prefix) |
Why this is better for application search
1. Typo tolerance is built in
BM25 either matches a term or it doesn’t. Meilisearch’s `typo` rule is a first-class ranking criterion: documents matching with fewer typos rank higher than those requiring more corrections, but all of them appear in the results.
| Query | BM25 | Meilisearch |
|---|---|---|
| `"iphone"` | Returns iPhone results | Returns iPhone results |
| `"iphoen"` | 0 results | Returns iPhone results (1 typo) |
| `"ipohne"` | 0 results | Returns iPhone results (2 typos) |
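The typo budget scales with word length — per the Meilisearch documentation, the defaults allow one typo from 5 characters and two from 9, configurable through the `typoTolerance.minWordSizeForTypos` setting. A minimal sketch of that threshold logic:

```python
def allowed_typos(word, one_typo=5, two_typos=9):
    """How many typos a query word may contain and still match.
    Thresholds mirror Meilisearch's documented defaults; the matching
    itself (edit-distance search over the index) is not shown here."""
    if len(word) >= two_typos:
        return 2
    if len(word) >= one_typo:
        return 1
    return 0

assert allowed_typos("the") == 0        # short words must match exactly
assert allowed_typos("iphone") == 1     # "iphoen" still matches
assert allowed_typos("programming") == 2
```

Short words get no typo budget because a single edit on a three-letter word changes it into too many unrelated words.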
2. Word order and proximity matter
BM25 treats a document as a bag of words — the position of and distance between terms don’t affect the score. Meilisearch’s `proximity` rule ensures that documents where query terms appear close together and in order rank higher.
| Query: `"new york pizza"` | BM25 | Meilisearch |
|---|---|---|
| “Best New York pizza places” | Same score as below | Ranks 1st (adjacent, in order) |
| “New restaurant in York with great pizza” | Same score as above | Ranks 2nd (words spread apart) |
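A toy proximity cost makes the difference between the two titles concrete. This sums the gaps between consecutive query terms (first occurrence only) — an illustrative metric, not Meilisearch's exact cost function:

```python
def proximity_cost(tokens, query_terms):
    """Lower is better: total distance between consecutive query terms."""
    positions = [tokens.index(t) for t in query_terms]
    return sum(abs(b - a) for a, b in zip(positions, positions[1:]))

doc1 = "best new york pizza places".split()
doc2 = "new restaurant in york with great pizza".split()
query = ["new", "york", "pizza"]

print(proximity_cost(doc1, query), proximity_cost(doc2, query))  # → 2 6
```

Both documents match all three words, so the `words` rule ties; the `proximity` rule then prefers doc1's cost of 2 over doc2's cost of 6.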
3. Field importance is explicit and predictable
With BM25, field boosting requires numeric weights (`title^3 description^1`) that interact unpredictably with term frequency and document length. In Meilisearch, the `attribute` rule uses the order of `searchableAttributes` — the first field always wins over the second, no math involved.
A match in `title` always outranks a match in `description` (assuming the previous rules tied). No weights to tune, no interactions to debug.
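Setting that order is a single settings update. A sketch of the payload (the index name and attribute list are illustrative; sent via Meilisearch's `PATCH /indexes/products/settings` route):

```json
{
  "searchableAttributes": [
    "title",
    "brand",
    "description",
    "reviews"
  ]
}
```

Attributes listed earlier win ties on the `attribute` rule; attributes omitted from the list are not searched at all.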
4. Ranking is transparent and debuggable
BM25 produces opaque scores. Meilisearch lets you inspect exactly why a document ranks where it does with the `showRankingScoreDetails` search parameter.
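When a search request includes `"showRankingScoreDetails": true`, each hit carries a `_rankingScoreDetails` object with one entry per ranking rule. The exact fields vary by rule and version; a truncated excerpt might look like this (the values here are illustrative):

```json
{
  "_rankingScoreDetails": {
    "words": {
      "order": 0,
      "matchingWords": 4,
      "maxMatchingWords": 4,
      "score": 1.0
    },
    "typo": {
      "order": 1,
      "typoCount": 1,
      "maxTypoCount": 4,
      "score": 0.8
    }
  }
}
```

Reading top to bottom answers the debugging question directly: this hit matched all four query words but needed one typo correction, which is exactly where it lost ground to an exact match.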
5. Prefix search works natively
When a user types `"prog"` in a search bar, they expect to see “programming”, “progress”, “program”. BM25 engines require n-gram tokenizers or edge n-gram analyzers to achieve this. Meilisearch handles it automatically — the last word in a query is always treated as a prefix.
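The last-word-as-prefix behavior can be sketched in a few lines. This is a simplified illustration, not Meilisearch's implementation (which matches prefixes against its index rather than scanning documents):

```python
def matches(doc_tokens, query):
    """Earlier query words must match whole tokens;
    the last query word matches as a prefix."""
    *whole, last = query.lower().split()
    return all(w in doc_tokens for w in whole) and any(
        t.startswith(last) for t in doc_tokens
    )

titles = ["learn programming fast", "project progress report", "daily standup notes"]
hits = [t for t in titles if matches(t.split(), "prog")]
# "prog" prefix-matches "programming" and "progress", but nothing in the third title.
```

As the user keeps typing, the prefix narrows naturally — `"progre"` would drop the first title — which is what makes search-as-you-type work without extra configuration.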
6. No per-query tuning required
BM25 deployments often require extensive per-query tuning: function scores, field boosts, decay functions, script scores. Meilisearch’s ranking rules are configured once at the index level and work consistently across all queries. The same rules that rank “batman” well also rank “comfortable running shoes” well.

Trade-offs to be aware of
Meilisearch’s approach is optimized for application and site search. There are scenarios where BM25 may be more appropriate:

| Scenario | BM25 | Meilisearch |
|---|---|---|
| Log analytics | Better — term frequency matters for finding error patterns | Not designed for this use case |
| Academic paper search | Better — TF-IDF identifies topically relevant papers | Optimized for short, user-facing queries |
| Documents > 10KB | Handles naturally | Best with documents split into smaller chunks |
| Custom scoring formulas | Fully customizable via script scores | Fixed rule set with configurable order |
| Billions of documents | Horizontally scalable | Designed for millions of documents per index |
Combining ranking with semantic search
Meilisearch’s ranking system works alongside hybrid search. When you enable an embedder, Meilisearch combines keyword-based ranking (the rules above) with vector similarity in a single query. The `semanticRatio` parameter controls the blend: 0.0 uses only the multi-criteria ranking rules, 1.0 uses only vector similarity, and values in between merge both result sets. This gives you the best of both worlds — multi-criteria keyword relevance plus semantic understanding — without managing two separate search systems.
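A hybrid request adds one parameter to an ordinary search payload. A sketch, assuming an embedder named `default` has already been configured on the index:

```json
{
  "q": "dark knight returns",
  "hybrid": {
    "embedder": "default",
    "semanticRatio": 0.5
  }
}
```

With `semanticRatio` at 0.5, keyword and vector results are weighted equally; dropping it toward 0.0 falls back to pure ranking-rule behavior, which is useful when debugging relevance.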
Summary
| | BM25 / ts_rank (Elasticsearch, PostgreSQL, etc.) | Meilisearch multi-criteria ranking |
|---|---|---|
| Approach | Single numeric score per document | Sequential bucket sort through multiple rules |
| Typo handling | None (or via fuzzy query, separate step) | Built-in, ranked by typo count |
| Word proximity | Not a factor in BM25; basic in PostgreSQL ts_rank_cd | Dedicated ranking rule |
| Field importance | Numeric boosts with complex interactions | Ordered list, first field always wins |
| Prefix search | Requires analyzer config (BM25) or :* syntax (PostgreSQL) | Automatic on last query word |
| Debuggability | Opaque score | Per-rule breakdown via showRankingScoreDetails |
| Configuration | Per-query function scores and boosts | Per-index rules, consistent across queries |
| Semantic search | Separate system (kNN, vector DB, pgvector) | Integrated via hybrid parameter |
| Best for | Log analytics, research, large corpora | Application search, e-commerce, media catalogs |
Learn more
- Ranking rules — Configure and reorder the built-in rules
- Bucket sort — How the bucket sort algorithm works
- Ranking score — Understanding the 0.0–1.0 ranking score
- Custom ranking rules — Add business logic to ranking
- Ordering ranking rules — Best practices for rule ordering