## Features
- Import CSV, NDJSON, and JSON (array of objects) files
- Handle datasets from thousands to 40+ million documents
- Automatic retry logic for failed batches
- Real-time progress tracking with ETA
- Configurable batch sizes for performance tuning
## Prerequisites
- A Meilisearch instance (Cloud or self-hosted)
- One of:
  - Rust/Cargo installed (for building from source)
  - A pre-built binary from the releases page
## Installation
- Cargo (Rust)
- Pre-built binary
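Both routes can be sketched as shell commands. The crate and binary name `meili-import` is an assumption for illustration, not the project's confirmed name:

```shell
# Option 1: build and install from source with Cargo
# (hypothetical crate name; use the project's actual crate)
cargo install meili-import

# Option 2: download a pre-built binary from the project's releases
# page, then mark it executable and move it onto your PATH
chmod +x meili-import
sudo mv meili-import /usr/local/bin/
```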
## Basic usage
Set your environment variables, then import a CSV file:
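A typical session might look like the following. The binary name `meili-import` and the environment variable names are assumptions; substitute the tool's actual executable and variables:

```shell
# Hypothetical binary and variable names; adjust to the actual tool.
export MEILI_URL="http://localhost:7700"
export MEILI_API_KEY="your-admin-api-key"

meili-import \
  --url "$MEILI_URL" \
  --api-key "$MEILI_API_KEY" \
  --index movies \
  --file movies.csv
```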
## Supported formats
### CSV
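A header row names the fields, and each subsequent row becomes one document. For example:

```csv
id,title,genre
1,Carol,Romance
2,Wonder Woman,Action
```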
### NDJSON (Newline-delimited JSON)
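One self-contained JSON object per line, with no surrounding array:

```ndjson
{"id": 1, "title": "Carol", "genre": "Romance"}
{"id": 2, "title": "Wonder Woman", "genre": "Action"}
```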
### JSON array
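A single top-level array of objects:

```json
[
  { "id": 1, "title": "Carol", "genre": "Romance" },
  { "id": 2, "title": "Wonder Woman", "genre": "Action" }
]
```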
## Configuration options
| Option | Description | Default |
|---|---|---|
| `--url` | Meilisearch URL | `http://localhost:7700` |
| `--api-key` | Meilisearch API key | None |
| `--index` | Target index name | Required |
| `--file` | Input file path | Required |
| `--batch-size` | Documents per batch | `1000` |
| `--primary-key` | Primary key field | Auto-detected |
## Performance tuning
### Batch size
Adjust the batch size based on your document size and network conditions:
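As a rule of thumb, larger batches mean fewer requests but bigger payloads, while smaller batches keep memory use and request sizes down. A sketch (binary name `meili-import` assumed):

```shell
# Small documents on a fast network: push the batch size up
meili-import --index products --file products.ndjson --batch-size 10000

# Large documents or an unreliable network: bring it down
meili-import --index products --file products.ndjson --batch-size 250
```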
### Primary key

Specify the primary key explicitly if auto-detection fails:
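For instance, to use a `sku` field as the primary key (binary name `meili-import` assumed):

```shell
meili-import --index products --file products.csv --primary-key sku
```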
## Example: Import a large dataset

Import 10 million products with progress tracking:
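Putting the options together, such an import might look like this (the binary name, file name, and URL are placeholders):

```shell
meili-import \
  --url "https://your-project.meilisearch.io" \
  --api-key "$MEILI_API_KEY" \
  --index products \
  --file products_10m.ndjson \
  --batch-size 5000
```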
## After import

Verify your import:
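One way to verify is to query the Meilisearch index stats endpoint and check the reported document count (index name `products` is a placeholder):

```shell
# numberOfDocuments should match the source file's record count,
# and isIndexing should eventually return to false.
curl \
  -H "Authorization: Bearer $MEILI_API_KEY" \
  "http://localhost:7700/indexes/products/stats"
```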
## Next steps

- **Configure settings**: set up searchable and filterable attributes
- **Debug performance**: identify and fix indexing bottlenecks