Manouk Draisma

LangWatch Scenario - Agent Simulations - Agentic testing for agentic codebases

As AI agents grow more complex (reasoning, using tools, and making decisions), traditional evals fall short. LangWatch Scenario simulates real-world interactions to test agent behavior. It’s like unit testing, but for AI agents.

Manouk Draisma

Hey Product Hunt! 👋

We're excited to be launching LangWatch Scenario: the first and only testing platform that lets you test agents in simulated realities, with confidence and alongside domain experts.

The problem we’ve found: teams are building increasingly complex agents, but testing them is still manual, time-consuming, and unreliable. You tweak a prompt, manually chat with your agent, hope it works better... and repeat. It's like shipping software without unit tests.

Our solution: agent simulations that automatically test your AI agents across multiple scenarios. Think of it as a test suite for agents: catch regressions before they hit production, simulate edge cases together with domain experts, and ship with confidence.

What makes us different:

🧠 Agent simulations that act as unit tests for AI agents (see the sketch below)

🧪 Simulate multi-turn, edge-case scenarios

🧑‍💻 Code-first, no lock-in, framework-agnostic

👩‍⚕️ Built for domain experts and not just devs

🔍 Catch failures before users see them

✅ Trust your agent in production, not just evals

🏗️ Works with any agent framework (LangGraph, CrewAI, etc.)
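To make the "unit tests for agents" idea concrete, here's a rough sketch of what a scenario test can look like with the Python library, simplified from the examples in the repo. The refund scenario, model name, and agent stub are just illustrations; see the docs for the exact, up-to-date API:

```python
import pytest
import scenario

# The user simulator and judge are LLM-backed; the model choice here is illustrative.
scenario.configure(default_model="openai/gpt-4.1-mini")


class MyAgent(scenario.AgentAdapter):
    """Thin adapter around your own agent, whatever framework it's built with."""

    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # Replace this stub with a call into your real agent.
        return "Sorry about the delay! Let me check your order status right away."


@pytest.mark.asyncio
async def test_refund_request():
    # Runs a full multi-turn simulation (simulated user <-> your agent),
    # with a judge scoring the conversation against plain-language criteria.
    result = await scenario.run(
        name="refund request",
        description="A frustrated customer asks for a refund on a late order.",
        agents=[
            MyAgent(),
            scenario.UserSimulatorAgent(),
            scenario.JudgeAgent(
                criteria=[
                    "Agent acknowledges the delay and apologizes",
                    "Agent does not promise a refund it cannot issue",
                ]
            ),
        ],
    )
    assert result.success
```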

LangWatch Scenario is our latest breakthrough, one that lets teams ship agents with confidence, not crossed fingers.

Get started today:

⭐ GitHub: https://github.com/langwatch/scenario

📖 Docs: https://docs.langwatch.ai/

🎮 Try Agent Simulations: https://langwatch.ai/

If you're building and testing AI agents, we'd love to hear what you're working on and how we can help.

A big thanks to the PH community for all your feedback and support.

We're here all day and can't wait to hear your thoughts, questions, and feedback!

Rogerio Chaves

Hello everyone! 👋

I'm Rogerio, founder of LangWatch. I've been developing software for 15+ years, and my career really changed once I mastered unit tests, TDD, and so on: not only delivering mission-critical software with zero bugs, but also having a much more pleasant experience doing so.

So I couldn't be more excited about the Agent Simulations solution we're bringing to the world today. It finally feels like the missing piece in delivering agents, bringing much stronger craftsmanship to agent development.

I'll be your technical guide here. Ask me anything!

Job Rietbergen
Evals and quick testing of agents are much needed. Will give this product a go. Congrats on the launch!
Manouk Draisma

Thanks @jobrietbergen! AI agents need even more than just evals. Give it a try!

Ankit Sharma

Love this shift, treating agents like software just makes sense. Do teams use it more pre- or post-deploy?

Rogerio Chaves

@startupsharma Both! They use it pre-deploy to ensure it works well before going to production, of course, but the work isn't done there: invariably they want to keep improving their agent for the next releases, trying out newer models, handling more edge cases, and so on. Simulations guarantee everything is still working and let them keep moving forward without being afraid of changing the prompts and breaking something.
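To make that loop concrete: every new edge case can become one more entry in a parametrized scenario suite, so each prompt tweak or model swap gets replayed against all of them. A rough sketch (the cases and criteria here are invented, and the API usage mirrors the sketch in our launch comment; the docs have the exact API):

```python
import pytest
import scenario

scenario.configure(default_model="openai/gpt-4.1-mini")  # illustrative model


class MyAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        return "..."  # stub; call into your real agent here


# Hypothetical regression suite: each edge case discovered in production
# is appended here and replayed on every prompt change or release.
EDGE_CASES = [
    ("angry customer", "A user is upset after three failed delivery attempts.",
     ["Agent stays polite throughout", "Agent offers to escalate to a human"]),
    ("off-topic user", "A user keeps steering the chat to unrelated topics.",
     ["Agent politely redirects back to the supported domain"]),
]


@pytest.mark.asyncio
@pytest.mark.parametrize("name, description, criteria", EDGE_CASES)
async def test_edge_case_regressions(name, description, criteria):
    result = await scenario.run(
        name=name,
        description=description,
        agents=[
            MyAgent(),
            scenario.UserSimulatorAgent(),
            scenario.JudgeAgent(criteria=criteria),
        ],
    )
    assert result.success
```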

William Calderipe

Congrats on the launch 🚀

Scenario testing seems like a game changer for the non-deterministic nature of AI. It's very cool to see testing and quality tools finally emerging for this new wave of agent-based systems.

I've known @r0bertp3rry since 2016, and he's always been an enthusiast of the ML field. I remember a chatbot demo of his a while back, when it wasn't even something everyone talked about. So it's awesome to see him building in this space now.

Huge congrats to the team! 👏

Manouk Draisma
@wcalderipe Thanks! Exciting to hear!
Paul Jansen

Seems like a logical next step and a great addition to the product!

Manouk Draisma
@mr_jansen Thanks! It's the next thing every AI team needs to be ready for the future. Thanks for the support!
Jan De Wulf

Congrats @manouk_dr & team, this is a huge step forward for reliable agent development. Feels like the missing test layer for AI. 👏

Manouk Draisma
@jansta 🙏 Thanks so much! Happy to hear you see it the same way!
Hidde van der Ploeg

Congrats on the launch! Looks great

Manouk Draisma
@hiddevdploeg Getting support from such a big name in product design feels goood!
amir

Awesome stuff, congrats.

Manouk Draisma
@amirhouieh Thanks! More OSS solutions like yours and ours are the way to go!
Rachitt Shah

Really like LangWatch, mainly their DSPy optimizers. Super cool launch!

Rogerio Chaves
@rachitt_shah Thanks Rachitt 🙌 We see the simulations as the next logical step: optimize the agent parts with DSPy, and test that it all works together. Perhaps we'll even use the simulations themselves as a metric, so DSPy can auto-optimize to find the best prompts later on!