The indexing performance section of the monitoring dashboard shows how quickly Meilisearch processes indexing tasks. It is labeled Beta in the Cloud UI. You can filter all charts by index using the All indexes dropdown, set a date range, or enable real-time mode. Timestamps are displayed in UTC.

Indexing latency (TTS)

The indexing latency chart tracks time-to-search (TTS): the time from when an indexing task is enqueued to when the indexed documents become searchable. Latency is shown at four percentiles, measured in milliseconds: p75, p90, p95, and p99.
[Screenshot: Indexing latency (TTS) chart showing p75, p90, p95, and p99 times in milliseconds over time]
  • p75: 75% of indexing tasks completed within this time
  • p90: 90% of indexing tasks completed within this time
  • p95: 95% of indexing tasks completed within this time
  • p99: 99% of indexing tasks completed within this time (the slowest tasks)
TTS is the metric that matters most for use cases where freshness is important, such as e-commerce catalog updates or live content indexing.
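If you want to reproduce the TTS numbers outside the dashboard, you can compute them from the enqueuedAt and finishedAt timestamps on task objects returned by the Meilisearch tasks API. A minimal sketch, using a simple nearest-rank percentile and hypothetical sample tasks:

```python
from datetime import datetime

def tts_seconds(task: dict) -> float:
    """Time-to-search for one task: enqueuedAt -> finishedAt."""
    enqueued = datetime.fromisoformat(task["enqueuedAt"].replace("Z", "+00:00"))
    finished = datetime.fromisoformat(task["finishedAt"].replace("Z", "+00:00"))
    return (finished - enqueued).total_seconds()

def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile (rough approximation, fine for monitoring)."""
    ordered = sorted(values)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

# Hypothetical tasks; real ones come from GET /tasks and carry more fields.
tasks = [
    {"enqueuedAt": "2024-05-01T12:00:00Z", "finishedAt": "2024-05-01T12:00:00.150Z"},
    {"enqueuedAt": "2024-05-01T12:00:01Z", "finishedAt": "2024-05-01T12:00:01.300Z"},
    {"enqueuedAt": "2024-05-01T12:00:02Z", "finishedAt": "2024-05-01T12:00:03.200Z"},
    {"enqueuedAt": "2024-05-01T12:00:03Z", "finishedAt": "2024-05-01T12:00:03.090Z"},
]
latencies = [tts_seconds(t) for t in tasks]
for p in (75, 90, 95, 99):
    print(f"p{p}: {percentile(latencies, p) * 1000:.0f} ms")
```

The dashboard's percentile method may differ slightly; this sketch is for spot-checking trends, not byte-exact agreement.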

Batches

The Batches tab gives you a per-batch view of every indexing operation, along with a detailed trace of where time was spent. Use it when the TTS chart shows high latency and you need to identify the bottleneck.
[Screenshot: Batches tab showing a list of processed batches with their status, index, batch ID, duration, and start time]
Each row shows:
  • Status: succeeded, failed, or in progress
  • Index id: which index was written to
  • Batch id: unique identifier for the batch
  • Duration: total wall-clock time for the batch
  • Started date: when the batch began processing
Click a batch to open its detail panel on the right, or use the eye icon to open the full JSON view.
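To triage batches programmatically rather than in the UI, you can flag failed or slow ones from a batch listing. The sketch below uses simplified, hypothetical batch records: the durationMs field is a stand-in for the duration shown in the dashboard, and real batch objects carry more fields.

```python
# Hypothetical, abbreviated batch records for illustration.
batches = [
    {"uid": 101, "status": "succeeded", "indexUid": "products", "durationMs": 240},
    {"uid": 102, "status": "failed", "indexUid": "products", "durationMs": 12},
    {"uid": 103, "status": "succeeded", "indexUid": "articles", "durationMs": 4800},
]

def needs_attention(batch: dict, slow_ms: int = 1000) -> bool:
    """Flag a batch that failed or took longer than the threshold."""
    return batch["status"] == "failed" or batch["durationMs"] >= slow_ms

flagged = [b["uid"] for b in batches if needs_attention(b)]
print(flagged)  # [102, 103]
```

Tune slow_ms to your freshness requirements; anything consistently above your target TTS is worth opening in the detail panel.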

Progress trace

The detail panel includes a progressTrace object with timing for every internal step of the indexing pipeline:
[Screenshot: Batch detail JSON panel showing progressTrace with per-step timing, internalDatabaseSizes, embedderRequests, and writeChannelCongestion]
Key steps visible in the trace:
  • processing tasks > retrieving config: Loading index configuration
  • processing tasks > computing document changes: Diff between incoming and existing documents
  • processing tasks > reading payload stats: Parsing incoming document payloads
  • processing tasks > indexing > extracting documents: Extracting fields from documents
  • processing tasks > indexing > extracting facets: Building facet data
  • processing tasks > indexing > merging facets: Merging facet updates into the index
  • processing tasks > indexing > extracting words: Tokenizing document content
  • processing tasks > indexing > merging words: Merging word data into the inverted index
  • processing tasks > indexing > writing embeddings to database: Persisting vector embeddings
  • processing tasks > indexing > post processing facets: Finalizing facet search structures
  • processing tasks > indexing > post processing words: Finalizing word prefix structures
  • processing tasks > indexing > building geo json: Building geo search structures
  • processing tasks > indexing > finalizing: Committing the batch to disk
  • writing tasks to disk: Persisting the task record
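A practical way to use the trace is to find the step that dominates batch time. A rough sketch, assuming each trace entry maps a step path to a duration string such as "480ms" or "2.1s" (the sample values below are hypothetical; check the exact value format in your JSON view):

```python
def to_seconds(duration: str) -> float:
    """Parse simplified duration strings like '350ms' or '1.2s'."""
    if duration.endswith("ms"):
        return float(duration[:-2]) / 1000
    if duration.endswith("s"):
        return float(duration[:-1])
    raise ValueError(f"unrecognized duration: {duration}")

# Hypothetical excerpt of a progressTrace object.
progress_trace = {
    "processing tasks > indexing > extracting words": "480ms",
    "processing tasks > indexing > merging words": "2.1s",
    "processing tasks > indexing > writing embeddings to database": "620ms",
}
slowest = max(progress_trace, key=lambda step: to_seconds(progress_trace[step]))
print(slowest)  # processing tasks > indexing > merging words
```

Once you know the slowest step, the Common issues table below this section maps the usual suspects to fixes.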

Internal database sizes

The internalDatabaseSizes field shows the current on-disk size of each internal data structure, along with the delta from this batch:
  • wordPrefixPositionDocids: Word prefix position data for prefix search
  • fieldIdDocidFacetStrings: Facet string data for filtering and faceting
  • vectorStore: Vector embeddings for semantic/hybrid search
  • documents: Raw document storage
The delta (shown as +N KiB or +N MiB) tells you how much space each batch adds. A vectorStore growing much faster than documents indicates a high-dimensional embedding model.
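To put a number on "growing much faster", you can parse the deltas and compare growth rates. A sketch using hypothetical delta values in the +N KiB / +N MiB format:

```python
UNITS = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3}

def delta_bytes(delta: str) -> int:
    """Parse a delta such as '+4.2 MiB' into bytes."""
    amount, unit = delta.lstrip("+").split()
    return int(float(amount) * UNITS[unit])

# Hypothetical per-batch deltas from internalDatabaseSizes.
deltas = {"vectorStore": "+12.0 MiB", "documents": "+1.5 MiB"}
ratio = delta_bytes(deltas["vectorStore"]) / delta_bytes(deltas["documents"])
print(f"vectorStore grows {ratio:.1f}x faster than documents")
```

A ratio well above 1 is expected for embedding-heavy workloads; a sudden jump is what warrants a look at your embedder configuration.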

Other fields

  • embedderRequests.total: Number of embedding API calls made during this batch
  • embedderRequests.failed: Failed embedding calls (non-zero means some documents may not be indexed for vector search)
  • writeChannelCongestion.attempts: Number of write attempts
  • writeChannelCongestion.blocking_attempts: Write attempts that had to wait (high values indicate write pressure)
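A quick way to judge write pressure is the share of attempts that blocked. A sketch with hypothetical congestion numbers:

```python
def blocking_ratio(congestion: dict) -> float:
    """Share of write attempts that had to wait on the write channel."""
    attempts = congestion["attempts"]
    return congestion["blocking_attempts"] / attempts if attempts else 0.0

# Hypothetical writeChannelCongestion values from a batch detail panel.
congestion = {"attempts": 400, "blocking_attempts": 120}
ratio = blocking_ratio(congestion)
print(f"{ratio:.0%} of writes blocked")  # 30% of writes blocked
```

There is no universal threshold, but a ratio that climbs batch over batch usually points at concurrent indexing on the same index.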

Expert support for Enterprise customers

In most cases, the simplest way to improve indexing performance is to upgrade to a larger resource tier. More RAM and CPU directly reduce indexing time and TTS. You can change your resource tier at any time from the project settings. If upgrading does not resolve the issue, the Meilisearch team can help. Enterprise customers have direct access to experts who can analyze your batch traces, database sizes, and index configuration to optimize for your specific workload. Contact sales@meilisearch.com to learn more.

Common issues and fixes

  • Symptom: high TTS across all percentiles. Likely cause: large document batches or many indexed attributes. Fix: reduce batch size, or reduce the number of filterableAttributes and sortableAttributes.
  • Symptom: merging words step slow. Likely cause: large inverted index update. Fix: reduce the number of searchableAttributes or the batch size.
  • Symptom: writing embeddings to database slow. Likely cause: high vector dimensions or a large batch. Fix: reduce batch size; consider a lower-dimension model.
  • Symptom: embedderRequests.failed is non-zero. Likely cause: embedder API errors or rate limits. Fix: check your embedder configuration and API key validity.
  • Symptom: high writeChannelCongestion.blocking_attempts. Likely cause: concurrent write contention. Fix: avoid concurrent indexing operations on the same index.
  • Symptom: TTS spikes periodically. Likely cause: scheduled bulk imports competing with search. Fix: stagger indexing operations to off-peak hours.
  • Symptom: vectorStore growing faster than expected. Likely cause: high embedding dimensions. Fix: switch to a lower-dimension embedding model.
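Several of the fixes above come down to sending smaller batches. A minimal sketch of splitting a document list into fixed-size chunks, each of which you would then post to the documents route as a separate request:

```python
def chunked(documents: list, size: int):
    """Split a document list into fixed-size batches."""
    for start in range(0, len(documents), size):
        yield documents[start:start + size]

docs = [{"id": i} for i in range(2500)]
batches = list(chunked(docs, 1000))
print([len(b) for b in batches])  # [1000, 1000, 500]
```

Smaller payloads mean each batch does less extraction and merging work, which lowers per-batch duration and usually lowers TTS; experiment with the chunk size against your own trace timings.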