Manouk Draisma

I'm building the world's first AI agent testing platform for running agent simulations.

Hey Product Hunt 👋

Manouk here. I’m the co-founder of LangWatch, and today we’re incredibly excited to launch LangWatch Scenario, the first platform built for systematic AI agent testing.

Over the last six months, we’ve seen a massive shift: teams are moving from simple LLM calls to full-blown autonomous agents handling customer support, financial analysis, compliance, and more. But testing these agents is still stuck in the past.

Most teams rely on "vibe checks" or evals, which just don’t scale to complex, multi-turn, decision-making systems. That’s why we built Scenario: Agent simulations that test behavior like unit tests test code.

Here’s what that means:

  • Simulate real-world conversations to catch failures before production

  • Involve domain experts directly in the testing loop

  • Integrate with your existing stack: CI/CD, version control, pytest, and more

  • Boost speed and quality without compromising either

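To make the “unit tests for agent behavior” idea concrete, here’s a minimal, illustrative sketch of what a simulated multi-turn conversation test can look like in pytest. The names (`my_support_agent`, `simulate_conversation`) are hypothetical stand-ins, not the actual Scenario API:

```python
# Illustrative only: a hand-rolled agent simulation expressed as a pytest test.
# Replace `my_support_agent` with your real agent callable.

def my_support_agent(history: list[dict]) -> str:
    """Stand-in agent: takes the chat history, returns the next reply."""
    last = history[-1]["content"].lower()
    if "refund" in last:
        return "I can help with that. Could you share your order number?"
    return "Thanks for reaching out! How can I help?"


def simulate_conversation(agent, user_turns: list[str]) -> list[dict]:
    """Drive a multi-turn conversation: alternate simulated user messages
    with the agent's replies and return the full transcript."""
    history: list[dict] = []
    for turn in user_turns:
        history.append({"role": "user", "content": turn})
        history.append({"role": "assistant", "content": agent(history)})
    return history


def test_agent_asks_for_order_number_before_refunding():
    transcript = simulate_conversation(
        my_support_agent,
        ["Hi, I want a refund for my last order."],
    )
    # Behavioral assertion: the agent should gather the order number
    # instead of promising a refund right away.
    assert "order number" in transcript[-1]["content"].lower()
```

Because it’s just a pytest test, it slots straight into CI/CD and version control like any other unit test.
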
Our mission at LangWatch is to give AI teams confidence in every release, and this is a huge step towards making agent development as reliable as software engineering.

I’ll be hanging out all day to chat. AMA about the testing gap in agent systems, how we’re thinking about developer <> domain expert collaboration, or what it’s like building at the edge of AI infra and product.

Thanks for the support and excited to hear your thoughts! 🚀

Replies

Rachit Magon

Amazing work! We use LangGraph for our agents, so this might come in really handy. I was wondering, would you like to come on my podcast and talk about it?