Manouk Draisma

Co-founder LangWatch.ai

About

Hey Product Hunters! 👋 I'm co-founder of LangWatch.ai - built from the pain point of having limited control over LLM apps. We've built the end-to-end evaluation framework for AI engineering teams. Not just observability or evals, but finding the right eval for your Agents. For the past 10+ years I have been working in the start-up tech space, and what a crazy ride it has been... 🤯 Started 10 years ago at a start-up, which went IPO within the first years I worked there. Building teams, partnerships, and connecting with users and customers is what I love. 🤝 ❤️ In the meantime, I will add value wherever possible and support new product launches. Connect with me here and also on LinkedIn! ✌️

Work

Founder & Leadership at LangWatch Agent Simulations

Badges

Buddy System
Plugged in 🔌
Gemologist
Top 5 Launch
Maker History

Forums

I built the world's first AI Agent Testing platform to run Agent simulations.

Hey Product Hunt

Manouk here, I'm the co-founder of LangWatch, and today we're incredibly excited to launch LangWatch Scenario, the first platform built for systematic AI agent testing.

Over the last 6 months, we've seen a massive shift: teams are moving from simple LLM calls to full-blown autonomous agents handling customer support, financial analysis, compliance, and more. But testing these agents is still stuck in the past.

Why Agent Simulations might be the new standard for AI Agent testing

Curious what other devs think about this.

AI systems today are way past just LLM wrappers.

We're building autonomous agents: tools that reason, act, and adapt across complex workflows.

But testing?

🧵 Why AI agent testing needs a rethink
