
Langfuse provides open-source observability and analytics for LLM apps.
Observability: Explore and debug complex logs & traces in a visual UI.
Analytics: Reduce cost and latency and improve response quality using intuitive dashboards.
Langfuse is the open-source LLM engineering platform. It provides observability, tracing, evaluations, prompt management, a playground, and metrics to debug and improve LLM apps. Langfuse is open: it works with any model and framework, and you can export all your data.
Langfuse is the #1 open-source LLM engineering platform. It helps teams iterate on and improve their LLM applications with LLM tracing, metrics, LLM-as-a-judge evaluations, prompt management, dataset testing, and more. Create a free account or self-host Langfuse.
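To make the tracing and cost metrics concrete, here is a minimal, self-contained sketch of the kind of data an LLM observability trace captures per generation: model name, token usage, latency, and a derived cost. The class name `GenerationSpan` and the per-1K-token prices are illustrative assumptions, not Langfuse's actual API or real provider pricing.

```python
from dataclasses import dataclass

# Illustrative trace record: the fields an observability platform
# typically records for each model call (names are hypothetical).
@dataclass
class GenerationSpan:
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_ms: float
    # Assumed per-1K-token prices for illustration only;
    # real prices vary by model and provider.
    price_per_1k_prompt: float = 0.0005
    price_per_1k_completion: float = 0.0015

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

    @property
    def cost_usd(self) -> float:
        # Cost = prompt and completion token counts scaled by their rates.
        return (self.prompt_tokens / 1000 * self.price_per_1k_prompt
                + self.completion_tokens / 1000 * self.price_per_1k_completion)

span = GenerationSpan(model="gpt-4o-mini", prompt_tokens=1200,
                      completion_tokens=300, latency_ms=850.0)
print(span.total_tokens)        # 1500
print(round(span.cost_usd, 6))  # 0.00105
```

Aggregating records like this across requests is what powers the dashboards for token usage, cost, and latency described above; in practice the Langfuse SDK captures these fields for you.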
Awesome!!! First, congratulations on the launch. Making LLM apps cost-effective is one of the priorities for the service that I am working on. This should help. Just a quick question, can this be used in tandem with Langsmith?
We use Langfuse to monitor our GPT usage - specifically to watch for token usage, monitor hallucinations, and trace through request history when things go wrong. Great product and even greater team! Highly recommend trying Langfuse!
So excited to see Langfuse go live — we've been a happy user for 4 weeks now. Most detailed latency and analytics in the market. Highly recommend for anyone using complex chains or with user-facing chat applications, where latency becomes crucial. Congrats, Clemens, Marc, and Max!
This is awesome, congrats on the launch guys! LLMs always felt like such a black box for me, Langfuse seems to address these concerns. This is a huge win for the LLM space in general, I'm super excited to build on it!
This looks like an absolutely brilliant idea for tracking and staying informed about the performance of LLM applications. I am in the process of building one and will certainly give it a try to measure my LLM app.