Langtail is an LLMOps platform that helps teams speed up the development of AI-powered apps and ship to production with fewer surprises.
With Langtail, you can debug prompts, run tests, and observe what’s happening in production.
👋 Hi Product Hunters! I’m Petr, co-founder and CEO of Langtail.
Over the past year, I’ve spoken with dozens of founders and engineers about their experiences building AI-powered apps. Almost everyone told me that taking an app from a simple demo to a great product that can be shipped to users is HARD WORK. Some of the common challenges I heard:
- LLMs can be unpredictable, making traditional testing tools more frustrating than helpful.
- Writing a good prompt is an art form, and sometimes the best person for the job isn’t an engineer - which is a problem if all of the prompts live in git.
- The best practices for using LLMs are constantly changing as new models are released and as teams get more experience.
We built Langtail to address the pain points that a team experiences as they take their proof-of-concept and turn it into a real product.
Here are the top features in Langtail:
🧪 **Playground:** Debug and collaborate on prompts with your team.
✔️ **Tests:** Create a suite of tests to evaluate how changes affect your prompt’s output.
🚀 **Deployments:** Publish prompts as API endpoints that you can invoke directly from your app (see the sketch after this list).
📊 **Observability:** Logs give you insight into how your prompts respond to real-world users, and metrics help you monitor latency and cost.
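To give a feel for what calling a deployed prompt endpoint might look like, here is a minimal sketch. The URL, auth header, and payload shape below are placeholder assumptions for illustration, not Langtail's documented API; the docs have the real contract.

```python
# Minimal sketch of invoking a deployed prompt over HTTP.
# NOTE: the endpoint URL, header name, and payload shape are illustrative
# assumptions, not Langtail's actual API contract.
import requests

response = requests.post(
    "https://api.langtail.example/your-workspace/your-prompt",  # hypothetical endpoint
    headers={"X-API-Key": "YOUR_LANGTAIL_API_KEY"},             # hypothetical auth header
    json={"variables": {"customer_name": "Ada"}},               # hypothetical payload
    timeout=30,
)
response.raise_for_status()
print(response.json())
```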
If you give Langtail a try today, use the promo code PH50 for 50% off for life. You can claim it in the Billing section.
If you have any thoughts about your first few minutes poking around the platform, drop them in a comment here, and I or someone else on the team will respond!
Just yesterday I recommended Langtail to a friend who is starting to build with LLMs. I think it's the best choice out there for starting with a simple prototype and moving to production.
This is already a killer tool for the many use cases we rely on it for. Super excited about the upcoming features, and good luck with the launch and further development! 💜
Hey! I’m Tomas, co-founder and CTO of Langtail.
I’m keen to hear about your first impression of Langtail! Could you take a minute to click around our website and then come back here and reply to this comment answering these two questions?
- What's one thing you’re hoping that Langtail can do for you?
- Are there any features that you expected but seem to be missing?
Many thanks for your help and support on our launch!
Been using Langtail for a few months now, highly recommend. It has kept me sane.
If you want your LLM apps to behave unpredictably all the time, don't use Langtail. On the other hand, if you are serious about the product you are building, you know what to do :P
Love the product and the team's hard work.
Keep up the great work!
@snazzyham Thank you, Sohan, for trusting Langtail with your clients. We are working on new features that will speed up your workflow with clients even more. 🚀
👋 Hey, I've tried your platform a few times now, and wow, the UX is outstanding! The effort that went into the tool you launched today is incredible. Great job, @petrbrzek, @rychlis, and the Langtail team!
What’s in the pipeline for the future?
Any chance you’ll support hot new open-source models like Llama 3?
Also, super keen to see if there's a plan for a self-hosted/on-premise gateway option.
Thanks, and good luck on your journey! 🚀
@good_lly Hello, thanks for your feedback!
Actually, when Llama 3 came out last week, I was pretty intrigued and started experimenting with it myself.
If you upvote the feature request here, I will reach out and invite you to an experimental version of Langtail supporting more models in the coming weeks: https://lngt.io/mlVS7
Hello @good_lly, thanks for the great comment. We are preparing:
▶️ Support for multiple LLM providers
▶️ Focus on prompt evaluations
▶️ Support for OpenAI assistants
We are already testing Llama 3 internally and will launch support for it soon.
LLM products are creating a flurry of bad experiences in their rush to hit the market quickly. But Petr and his team have been demonstrating since day one just how serious they are about doing this job with outstanding designs. I've been following them for over a year now and I highly recommend them to everyone. I'm certain they're going to reach fantastic places.
@tomas_hermansky Great to hear! We'd love any feedback about your first impressions. You can leave a comment here or send it to our support 🙂
@tomas_hermansky That sounds great, Tomas! Let me know, and we'll personally onboard you and extend your trial so you can get the most value out of it.
Hello @mikias, we don't use LangChain on the backend, but we do support it, so you can use Langtail with LangChain. https://langtail.com/docs/proxy/...
As for OpenAI Assistants, we think they are great, and we are planning native support for them in Langtail. That said, we think there's value in offering a compatible Assistants API with more flexibility and customization (for example, supporting different LLM providers).
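To make the LangChain point above concrete, here is a rough sketch of routing LangChain's OpenAI chat model through an OpenAI-compatible proxy-style gateway. The base URL and header name are placeholder assumptions, not Langtail's documented values; the proxy docs linked above describe the actual setup.

```python
# Rough sketch: pointing LangChain's ChatOpenAI at an OpenAI-compatible proxy.
# The base_url and header below are placeholder assumptions; check the proxy
# docs linked above for the real values.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    api_key="YOUR_OPENAI_API_KEY",
    base_url="https://proxy.langtail.example/v1",              # hypothetical proxy URL
    default_headers={"X-Langtail-Key": "YOUR_LANGTAIL_KEY"},   # hypothetical auth header
)

print(llm.invoke("Say hello to our Product Hunt visitors!").content)
```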
Ever since I have known @petrbrzek, he and his team have always built tools you want to use; attention to detail is their brand. And the current state of LLMs can really use a debugging & testing workbench :).
@tomaskafka We like to think so! We've been obsessing over every detail of Langtail in preparation for this launch. It isn't perfect, but we think it's the best UX out there today. 🙂