Introduction
This guide walks you through setting up a Meilisearch REST embedder with Hugging Face Inference Endpoints to enable semantic search. You can use Hugging Face with Meilisearch in two ways: running the model locally by setting the embedder source to `huggingface`, or remotely on Hugging Face's servers by setting the embedder source to `rest`.

Requirements
To follow this guide, you'll need:

- A Meilisearch Cloud project running version >=1.13
- A Hugging Face account with a deployed inference endpoint
- The endpoint URL and API key of the deployed model on your Hugging Face account
Configure the embedder
Set up an embedder using the update settings endpoint:

- `source`: declares Meilisearch should connect to this embedder via its REST API
- `url`: replace `ENDPOINT_URL` with the address of your Hugging Face model endpoint
- `apiKey`: replace `API_KEY` with your Hugging Face API key
- `dimensions`: specifies the dimensions of the embeddings, which are 384 for `baai/bge-small-en-v1.5`
- `documentTemplate`: an optional but recommended template for the data you will send the embedder
- `request`: defines the structure and parameters of the request Meilisearch will send to the embedder
- `response`: defines the structure of the embedder's response
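As a sketch, the settings payload sent to the update settings endpoint might look like the following. The embedder name `huggingface-endpoint` is an arbitrary choice, and the `documentTemplate` assumes your documents have a `title` field; the `request`/`response` templates follow Meilisearch's REST embedder placeholder conventions (`{{text}}`, `{{embedding}}`, `{{..}}`).

```python
import json

# Hypothetical settings payload for the update settings endpoint,
# e.g. PATCH /indexes/<your-index>/settings.
# Replace ENDPOINT_URL and API_KEY with your own values.
embedder_settings = {
    "embedders": {
        "huggingface-endpoint": {  # embedder name is up to you (assumption)
            "source": "rest",      # connect to the embedder via its REST API
            "url": "ENDPOINT_URL",  # address of your Hugging Face model endpoint
            "apiKey": "API_KEY",    # your Hugging Face API key
            "dimensions": 384,      # 384 for baai/bge-small-en-v1.5
            # Optional but recommended; assumes documents with a "title" field
            "documentTemplate": "A document titled '{{doc.title}}'",
            # {{text}} is replaced with the rendered documentTemplate;
            # {{..}} marks where the array of inputs/embeddings repeats
            "request": {"inputs": ["{{text}}", "{{..}}"]},
            "response": ["{{embedding}}", "{{..}}"],
        }
    }
}

print(json.dumps(embedder_settings, indent=2))
```

You would send this JSON body to your Meilisearch project's settings route with your Meilisearch API key in the `Authorization` header.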
This example uses BAAI/bge-small-en-v1.5 as its model, but Hugging Face offers other options that may fit your dataset better.
Perform a semantic search
With the embedder set up, you can now perform semantic searches. Make a search request with the `hybrid` search parameter, setting `semanticRatio` to `1`:
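A minimal sketch of the search request body, assuming the embedder was named `huggingface-endpoint` and using a placeholder query:

```python
import json

# Hypothetical search request body for POST /indexes/<your-index>/search.
# The query string and embedder name are assumptions for illustration.
search_params = {
    "q": "kitchen utensils",
    "hybrid": {
        "semanticRatio": 1,                  # 1 = purely semantic results
        "embedder": "huggingface-endpoint",  # must match the configured embedder
    },
}

print(json.dumps(search_params, indent=2))
```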
- `q`: the search query
- `hybrid`: enables AI-powered search functionality
- `semanticRatio`: controls the balance between semantic search and full-text search. Setting it to `1` means you will only receive semantic search results
- `embedder`: the name of the embedder used for generating embeddings