Roark
Test, monitor, and improve your voice agents
418 followers
Build voice agents you can trust. Roark tracks call metrics, runs evaluations, and stress-tests your agent with simulated callers across accents, languages, and speaking styles. Failed calls become tests - giving you visibility and continuous improvement.
Roark
Hey Product Hunt!
We're @zammitjames & @danielgauci, co-founders of Roark (YC W25).
When we first built voice agents, we ran into the same problems every team faces:
Testing was manual - we literally called agents over and over just to check if they followed instructions.
Monitoring was missing - we didn't know when failures happened, and even when they did, we had no idea which levers to pull to make the agent better.
Fixes didn't stick - regressions kept popping back up without us noticing.
So we built Roark - a platform that brings reliability and visibility to Voice AI.
Here's what Roark does today:
🔹 Monitoring & Evaluation
Capture 40+ built-in call metrics and events (latency, instruction-following, repetition detection, sentiment, etc.) - plus define your own custom ones.
Support for calls with up to 15 speakers, with automatic speaker identification.
Analyze audio with models for emotion, vocal cues, and even fine-tuned transcriptions based on your use case.
Build dashboards, schedule reports, set up alerts, and trigger webhooks so your team is always in the loop.
Evaluate calls with best-in-class evaluators you can run on demand or automate via SDK/API.
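To make the alerting flow above concrete, here is a minimal sketch in plain Python. The payload shape (`metric`, `value_ms`, `threshold_ms`, etc.) is entirely hypothetical, not Roark's actual webhook schema - it only illustrates the pattern of a metric alert crossing a threshold and triggering a webhook your team consumes:

```python
import json

# Hypothetical alert payload -- illustrative only, NOT Roark's actual
# webhook schema. A monitoring platform posts JSON like this when a
# call metric crosses a configured threshold.
EXAMPLE_PAYLOAD = json.dumps({
    "call_id": "call_123",
    "metric": "latency_p95",
    "value_ms": 2100,
    "threshold_ms": 1500,
    "agent": "support-inbound",
})

def should_page(raw: str) -> bool:
    """Decide whether an incoming alert warrants paging the on-call.

    Pages only when the reported value actually exceeds its threshold,
    so informational or duplicate webhooks can be ignored downstream.
    """
    event = json.loads(raw)
    return event["value_ms"] > event["threshold_ms"]

print(should_page(EXAMPLE_PAYLOAD))  # True: 2100ms exceeds the 1500ms threshold
```

In practice the receiving endpoint would route a `True` result to a pager or Slack channel and log the rest for dashboards.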
🔹 Simulations & Personas
Run end-to-end simulations for both inbound and outbound agents, over the phone or WebSocket - so you're testing the same paths real customers take.
Define tests as conversations - a sequence of turns between customer and agent, using a graph-based approach. This makes it easy to branch into edge cases or test variants, so your coverage reflects real-world complexity, not just happy paths.
Configure personas by gender, language, accent, background noise, and speech profile (pace, clarity, disfluencies).
Layer on behavior profiles like base emotion, intent clarity, confirmation style, memory reliability - even a backstory.
Stress-test across real-world variables and automatically generate test cases from live calls (failed calls → repeatable tests).
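The graph-based conversation tests described above can be sketched with plain Python data structures. Everything here - the persona fields, the node names, the branch labels - is hypothetical and illustrative, not Roark's actual SDK; it only shows how a single test graph covers edge-case branches alongside the happy path:

```python
# Hypothetical persona config (illustrative field names, not Roark's SDK):
# the simulated caller's voice and behavior characteristics.
persona = {
    "gender": "female",
    "language": "en-GB",
    "background_noise": "call_center",
    "speech_profile": {"pace": "fast", "clarity": "low", "disfluencies": True},
    "behavior": {"base_emotion": "frustrated", "intent_clarity": "vague"},
}

# Conversation graph: node id -> (caller utterance, branches).
# Branches map a condition on the agent's reply to the next node;
# None marks a terminal node. One graph covers the happy path
# ("asks_for_date") and an edge case ("asks_for_account").
conversation = {
    "start": ("Hi, I need to move my appointment.", {
        "asks_for_date": "give_date",
        "asks_for_account": "refuse_account",
    }),
    "give_date": ("Next Tuesday, any time after 3pm.", {
        "confirms": "end_success",
    }),
    "refuse_account": ("I don't have my account number on me.", {
        "offers_alternative": "give_date",
    }),
    "end_success": ("Great, thanks!", None),
}

def reachable_turns(graph, start="start"):
    """Walk every branch and collect all caller turns the test can exercise."""
    seen, stack, turns = set(), [start], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        utterance, branches = graph[node]
        turns.append(utterance)
        if branches:
            stack.extend(branches.values())
    return turns

print(len(reachable_turns(conversation)))  # 4: every node is reachable
```

A test runner would pair each graph traversal with the persona, synthesize the caller audio per turn, and score the agent's replies against the branch conditions.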
🔹 Developer-first Integrations
First-class SDKs in Node & Python + REST API.
Native support for LiveKit, Pipecat, VAPI, Retell, and Voiceflow.
Easiest integrations on the market - nothing bolted together overnight.
In the past 6 months, Roark has already processed over 10M minutes of calls for companies like Radiant Graph, Podium, Aircall, and BrainCX - helping them evaluate agents and run simulations at scale.
The result? A full lifecycle platform that closes the loop: monitor your live calls → spot failures → turn them into tests → improve continuously.
Think of Roark as the QA + Observability layer for Voice AI - robust, deeply thought-out, and built to last.
If you're building voice agents, you can sign up today for 50% off with our PH discount, book a demo here if you'd like a walkthrough, or just drop me a note at james@roark.ai - we'd love to help.
- James & Daniel
@zammitjames @danielgauci
A few years ago I spent months building a voice AI startup... We failed miserably.
One of the main reasons: reliability!
It's one thing to build a cool prototype!
It's something else entirely, and much harder, to build something robust you can actually trust!
What you're doing gives millions of devs the confidence to ship!
A super noble mission, and a crucial one for the entire ecosystem!
Keep pushing - the Voice AI industry needs you to reach millions of devices and robots!
Roark
@thibaulttbot Thank you for your support, T-Bot - glad this resonated!!
Roark
@imnikhill10 Appreciate you, Nikhil - thanks for the upvote!
@zammitjames @danielgauci Hey Team, Really impressed with how Roark builds trust in voice agents by tracking metrics, running evaluations, and even stress-testing with different callers.
One thought - many businesses using voice agents also deal with agreements, permissions, and compliance documents. Don't you think it would be great to use an e-signature tool where you can chat with your clients' documents directly within the platform and send them for signing? This way, you won't need to ask your clients questions at all. Failed calls or successful ones often lead to next steps like consent forms, approvals, or contracts - and e-signatures can make that process smoother.
Let me know if you are open to chat on how this could complement Roark.
Regards,
Sania
Roark
@sania_khan10 thank you for the support and interesting thought! I'd be happy to chat further over a call -- feel free to book some time here!
@zammitjames Thank you for your time, James! I'll go ahead and book a slot that works best for you. Looking forward to our conversation and exploring how we can make things easier :)
AngelList
Let's go @zammitjames and team!
Roark
@thibqut Thank you for the support Thibaut!
@zammitjames @danielgauci
Congrats on the launch
Roark
@xueqin_lin Thanks Quinn!
Akiflow
Congrats on the launch @danielgauci!
Roark
Thanks @nunziomartinello!
Roark
@justinmfarrugia thank you! Appreciate the support!
Roark
Thank you @justinmfarrugia!