Quentin de Quelen

Meilisearch AI - Boost relevance and UX with fast hybrid and semantic search

🚀 Meilisearch is a superfast search engine for developers built in Rust. Our latest release introduces AI-powered semantic and hybrid search, blending full-text, semantic, and vector DB capabilities for smarter, faster results.

Quentin de Quelen

Hey ProductHunt! I’m Quentin, one of the founders of the open-source search engine Meilisearch. Our goal is to make it easy for developers to ship good search UX, so I’m excited to launch Meilisearch AI today with the builders community here.


Meilisearch started when I was working in ecommerce and felt like the search options were either heavyweight like Elasticsearch or paid solutions with opaque pricing. We thought there was room for something transparent and dev-friendly that just works for most search use cases out of the box. Meilisearch AI is another step in this direction.


Our key features are:

🔎 Full-text search with simple, expressive ranking rules

🧠 AI-powered hybrid, semantic, and multi-modal search

⚡️ Sub-50ms latency, optimized for end-user search


We’re excited to stabilize AI-powered search so developers can integrate fast, relevant search, retrieval-augmented generation, and recommendations in their apps. Building on a year of open-source beta, our managed offering comes with the scalability, security, and monitoring you need to stay focused on building, not managing infrastructure.


For the occasion, we’re offering 2 months of free Meilisearch Cloud subscription. Use the ✨ "MeiliAI" ✨ coupon to get started.


We can’t wait to hear your thoughts!


For more details on our launch, check out: https://meilisearch.com/launch-week


Happy building 🚀

Jason Yu

@quentin_dq @carolina_ferreira2 @maya_shin2 @gmourier
I’ve always admired Meilisearch for bringing developer-first simplicity to full-text search, and it’s great to see you now tackling AI-powered semantic and hybrid search with the same ethos.

Also, love that you’re keeping latency under 50ms 🔥 — speed + semantic relevance is still a rare combo in this space.

Excited to try it out — and that MeiliAI coupon is a nice bonus for devs who want to explore! 🚀

Quentin de Quelen

@carolina_ferreira2 @maya_shin2 @gmourier @kui_jason You will love what’s coming next: on the same index, we’ll be able to run the same model on a local CPU for search and on a GPU for indexing. Best of both worlds, and it will make Meilisearch the fastest and most scalable hybrid search 🔥

Laurent Cazanove

Excited to see the hybrid search API stabilized 🔥

Quentin de Quelen

@strift The full team is so excited by this launch 🚀


Guillaume Mourier

It’s been an exciting journey, and I’m thrilled to see it go GA. Can’t wait to see what you build with it!

Remco Strijdonk

Congrats on the launch, Quentin!

Any plans for "plug and play" AI search in the future? As in, embeddings etc. all integrated into the product, with no need for a third party?

Quentin de Quelen

@strijdhagen Yes, it's in our plans! We are about to release the Composite Embedder, which will be a good first step on this path. What are the pain points you want to solve in your case? Mostly we see pricing or performance pain points, which will be addressed, but anything different would be interesting to dive deeper into.

Remco Strijdonk

@quentin_dq Mostly laziness; having everything in one place is awesome :)

Kay Kwak

Can’t wait to see how fast this works! And by the way, your UI looks awesome; I really like it. Congrats on the launch! 🎉

Charline Moncoucut

@kay_arkain That means a lot, thank you! I am the product designer responsible for the UI/UX, so it’s really great to hear you like it. 😊 We’re always looking to improve, especially when it comes to AI-specific flows, so if you (or anyone reading this) have any thoughts, we’d love to hear them via Discord or directly through the feedback button in the app.

Dobroslav Radosavljevič

Non-AI comment: congrats on the launch! It looks really good. Definitely trying it for some of my projects :)

Quentin de Quelen

@dobroslav_dev Thanks for the support! Give it a try; it only takes a few hours to get it running :)

Faizan Jan

Good luck with the launch.
How does Meilisearch's hybrid search functionality effectively combine full-text and vector search methodologies to enhance result relevance?


Quentin de Quelen

@faizanjan_ Thanks for the kind words! Great question.


When you perform a hybrid search in Meilisearch, we run two parallel searches: one using our full-text search engine, and the other using vector-based semantic search.

• The full-text search uses Meilisearch’s custom ranking rules to assign a highly precise score to each document based on how well it matches the query terms — better than standard BM25.

• The semantic search uses a KNN (nearest neighbor) search on embeddings to score documents based on how close their meaning is to the query.


What makes Meilisearch stand out is how we combine these two scores into a single, unified ranking. Instead of relying on simple techniques like Fusion Ranking, we apply a smart blending of both relevance signals to ensure the results are not only lexically accurate but also semantically meaningful.


This leads to much more relevant results than using just BM25, just vectors, or even most other hybrid approaches.


@kerollmops might have even more technical detail to add, but that’s the high-level view!
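
Here’s a rough sketch of what a hybrid query looks like with the JavaScript SDK, just to make it concrete. The host, key, index, and embedder names are placeholders, and option names may differ slightly across SDK versions:

```ts
import { MeiliSearch } from 'meilisearch'

// Placeholder connection details for illustration only.
const client = new MeiliSearch({
  host: 'http://localhost:7700',
  apiKey: 'YOUR_SEARCH_API_KEY',
})

// One call runs both searches; the engine blends the two scores server-side.
const results = await client.index('movies').search('cozy space opera', {
  hybrid: {
    embedder: 'default', // an embedder configured in the index settings
    semanticRatio: 0.5,  // 0 = full-text only, 1 = semantic only
  },
})

console.log(results.hits)
```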

Sia Houchangnia

Super exciting! Congrats to the team for launching Meilisearch AI.

Jonas Urbonas

Meilisearch sounds like a great solution for developers looking for fast and simple search integration! How flexible are the ranking rules, and can they be adjusted easily as the app evolves?

Quentin de Quelen

Thanks @jonurbonas!

Yes — the ranking rules in Meilisearch are very flexible. You can reorder them, remove the ones you don’t need, or even define custom rules based on your use case — all directly from the Cloud UI.


We also make it easy to experiment: you can test new configurations on a separate index and only apply them to production once you’re confident. Everything is non-destructive, so it’s easy to roll back if needed.


For hybrid search, it’s super simple: you just adjust a ratio to control the balance between semantic and full-text scoring. That gives you fine-grained control over how AI influences the results.
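
To give a rough idea, tweaking both looks something like this with the JavaScript SDK (the index name is a placeholder, the ranking rules shown are the current defaults, and exact option names may vary by version):

```ts
import { MeiliSearch } from 'meilisearch'

const client = new MeiliSearch({ host: 'http://localhost:7700', apiKey: 'MASTER_KEY' })
const index = client.index('products') // hypothetical index

// Reorder or trim ranking rules as the app evolves; this is the default order.
await index.updateSettings({
  rankingRules: ['words', 'typo', 'proximity', 'attribute', 'sort', 'exactness'],
})

// For hybrid queries, a single ratio balances semantic vs. full-text scoring.
const results = await index.search('wireless headphones', {
  hybrid: { embedder: 'default', semanticRatio: 0.7 }, // lean toward semantic
})

console.log(results.hits)
```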

Vu Thanh Tung

Congratulations on the launch of the product 🎊, the demo is absolutely amazing. I love how thoughtful the team is in allowing users to define their own text template for the embedding.

I have several questions:

  1. Is there any plan to give the query a template too?

  2. AFAIK, the maximum number of query words is 10 (link). Is this still enforced for AI search?

  3. Would you mind sharing some tricks on how the team achieved search-as-you-type performance even with AI search?

I am looking forward to your response 🙌

Quentin de Quelen

@topg Thanks, Vu! Indeed, it has been a wonderful addition to the product, and the idea comes from our open-source community!

To answer your questions:
1. Yes, we plan to implement several workarounds for this, such as letting you provide context with a search query to help personalize the responses. However, I'd be interested in more detail about your expectations, because I'm not sure I fully understand what a template would do at search time.
2. Indeed, there is a limit on full-text search, but no limit on semantic search.
3. It's absolutely not a secret, and this will get even better in the future. To minimize latency on semantic search, the goal is to remove the network hop by running the model locally. It works perfectly well because search queries contain only a few words/tokens; with most models, embedding them takes 1-5 ms max.
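
For reference, the document template mentioned in the original question is configured on the embedder. A minimal sketch with the JavaScript SDK might look like the following; the source, model, key, and field names are placeholders, and the embedder settings may differ between versions:

```ts
import { MeiliSearch } from 'meilisearch'

const client = new MeiliSearch({ host: 'http://localhost:7700', apiKey: 'MASTER_KEY' })

// Each document is rendered through the template before being embedded,
// so you control exactly what text the model sees.
await client.index('products').updateSettings({
  embedders: {
    default: {
      source: 'openAi',                     // placeholder source
      apiKey: 'OPENAI_API_KEY_PLACEHOLDER', // placeholder key
      model: 'text-embedding-3-small',      // placeholder model
      documentTemplate:
        'A product called {{doc.title}}, described as: {{doc.description}}',
    },
  },
})
```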

Vu Thanh Tung

@quentin_dq Thanks for the detailed answer, Quentin 🙌


Regarding the first question: I was assuming the AI search query is still constrained by the number of tokens, like traditional search (2nd question), which got me thinking about a way to include more context in my query.


Jun Shen

Rust-based search sounds powerful! 😄

Quentin de Quelen

@shenjun Rust is the best language for building a search engine. It is secure, stable, and performant. 🦀🔥

Alessandro Colombo

Amazing launch @quentin_dq and Meilisearch team, congrats! Your search engine's speed is crucial to our growth at Agora.


Amazing to see the product going to the next level!

Quentin de Quelen

@alessandro_colombo Thanks Alessandro! 🙏

Carolina Ferreira

Excited for the AI roadmap ahead! 🤩 Great job, team!

Marianna Tymchuk

We’re thrilled to see Meilisearch AI finally launch! The combination of full-text, semantic, and vector search capabilities offers developers a seamless and fast search solution. Exciting times ahead for building smarter search experiences!

Quentin de Quelen

@marianna_tymchuk Thank you so much! 🙌


We’re really excited about this step: combining full-text and semantic search in one simple API has been a huge developer-experience improvement, and we’re just getting started.

Aurelio

Awesome product that we use to power up @WP Umbrella. Congrats to the team for their great work!

Quentin de Quelen

@aureliovolle Thanks Aurelio! Very happy to power up @WP Umbrella 🔥

Param Jaggi

Congratulations on the launch @quentin_dq and the entire team!


We've been using Meilisearch AI at Agora for a few months now. It's the fastest search solution on the market hands down. Excited to see the team take the product to the next level with this GA release 🚀

Quentin de Quelen

@param_jaggi3 Thanks, Param! Happy to power search for an amazing product like @Agora ❤️

Sofi Mohr

Congratulations on this launch, Quentin! It looks amazing

Quentin de Quelen

@sofi_mohr Thanks Sofi!

Victor Coisne

This looks awesome, well done team Meilisearch !

Quentin de Quelen

@vcoisne Thanks for the support Victor!

Parvez Akther

Best wishes to the Meilisearch team.

We built @ThriveDesk's search and AI features on top of the Meilisearch platform and we're 100% satisfied with it.

Quentin de Quelen

@ThriveDesk Thanks, Parvez! It's amazing! What upcoming features are you expecting?