
Have been using Langfuse for analytics for our chatbots and have to say it's quite well done! Kudos to the team for such an amazing execution! I'm definitely going to be a long-term user.
We use Langfuse to monitor our GPT usage - specifically to watch for token usage, monitor hallucinations, and trace through request history when things go wrong. Great product and even greater team! Highly recommend trying Langfuse!
Stellar work @max_deichmann , @marc_klingen , @clemo_sf ! If anything, I dare to say a bit too good for a launch ;). Stoked to test it out on some projects - godspeed!
Easy to integrate and very intuitive to use!! Highly recommend for anyone developing ChatGPT products. Amazing founding team who are super detail-oriented and understand LLM products deeply, and it's cool to see the evolution of the open source project every week!
So excited to see Langfuse go live — we've been a happy user for 4 weeks now. Most detailed latency and analytics in the market. Highly recommend for anyone using complex chains or with user-facing chat applications, where latency becomes crucial. Congrats, Clemens, Marc, and Max!
Easy to use -- it "just works". Great open-source experience as well.
Congrats on the launch - love the 2 minute demo. It shows how thoughtful this product is and how much care you've put into it! Congrats to the whole team!
This is awesome, congrats on the launch guys! LLMs always felt like such a black box for me, Langfuse seems to address these concerns. This is a huge win for the LLM space in general, I'm super excited to build on it!
This looks like an absolutely brilliant idea for tracking and staying informed about the performance of LLM applications. I am in the process of building one and will certainly give it a try to measure my LLM app.