Built on 10 years of UC Berkeley research, RunLLM reads logs, code, and docs to resolve complex support issues. It saves 30%+ of engineering time, cuts MTTR by 50%, and deflects up to 99% of tickets. Trusted by Databricks, Sourcegraph, and Corelight. Try it for free on your own product.
RunLLM
Hi ProductHunt! My name is Vikram — I’m co-founder & CEO of RunLLM. RunLLM’s an AI Support Engineer that works how you work.
Background
The promise of AI is that customer support will become dramatically more scalable — so that your team can focus on high-value customer relationships. But anyone who’s building a complex product knows that a good support agent requires a lot more than a vector DB and GPT-4.1
With the first version of RunLLM, we focused on building an engine that generated the highest-quality answers we could, and that helped us earn the trust of customers like Databricks, Monte Carlo Data, and Sourcegraph. But what we’ve found over the last 6 months is that there’s so much more we can do to help support teams operate efficiently.
RunLLM v2
In response to what we’ve learned, we’ve built RunLLM v2, and we’re excited to share support for:
🤖 Agentic reasoning: Agents are all the rage, we know, but we promise this is for real. RunLLM’s reasoning engine focuses on deeply understanding user questions and can take actions like asking for clarification, searching your knowledge base, refining its search, and even analyzing logs & telemetry.
🖼️ Multi-agent support: You can now create agents tailored to the expectations that specific teams have — across support, success, and sales. Each agent can be given its own specific data and instructions, so you have full control over how it behaves.
⚙️ Custom workflows: Every support team is different, and your agent should behave accordingly. RunLLM’s new Python SDK enables you to control how your agent handles each situation, what types of responses it gives, and when it escalates a conversation.
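To give a feel for what that kind of control looks like, here’s a minimal, purely hypothetical sketch. The names and signatures below are illustrative only, not RunLLM’s actual SDK; it just shows the shape of a custom routing policy that decides when an agent answers versus escalates:

```python
# Hypothetical sketch -- illustrative names only, not RunLLM's real SDK.
# The idea: your workflow code inspects each incoming conversation and
# decides how the agent should behave.

from dataclasses import dataclass


@dataclass
class Ticket:
    channel: str       # e.g. "slack", "zendesk"
    question: str
    confidence: float  # the agent's self-reported answer confidence, 0.0-1.0


def route(ticket: Ticket) -> str:
    """Toy policy: answer high-confidence questions, escalate the rest."""
    if ticket.channel == "zendesk" and ticket.confidence < 0.7:
        return "escalate"  # hand off to a human support engineer
    return "respond"       # let the agent answer directly
```

In a real deployment, a policy like this is where you’d encode team-specific rules, like always escalating tickets from certain accounts or channels.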
Early Returns
Some of our early customers have been generous enough to share their feedback with us, and the results have been impressive:
- DataHub: $1MM of cost savings in engineering time
- vLLM: RunLLM handles 99% of all questions across the community
- Arize AI: 50% reduction in support workload
Try it & tell us what breaks
Spin up an agent on your own docs for free, ask your hardest question, and see how far it gets. If it stumbles, let us know. We learn fast.
👉 Get started with a free account, then paste the URL to your documentation site. That’s it. In just a few minutes, we’ll process your data and you’ll be able to start asking questions about your own product.
We’re looking forward to your feedback!
@vsreekanti Finally, an AI support agent that doesn’t just parrot docs!
RunLLM
@masump This is critical for solving harder problems. It's fine to answer simple questions with what's in the docs, but resolving complex tickets requires much more work. That's what we're focused on. 🙂
RunLLM
@vsreekanti @masump Yes! It's amazing to think how far we've come from the chatbot technology of the last decade. We're on the cutting edge of understanding a user's developer environment, pulling and debugging logs, writing validated custom code as a solution for a customer, and more. Our AI Support Engineer handles all of this automatically, updates documents, and integrates across all the surfaces where a team and its users work (think docs site, Slack, Zendesk, etc.). We are definitely excited about all the things we can do beyond parroting docs! 🦜
Relay
Huge congrats on the launch @vsreekanti and the RunLLM team!! 👏🏽 It's been great to follow along how thoughtfully you've approached the core problem statement from the beginning.
RunLLM
@mrakashsharma Thanks Akash! Appreciate your support, and we're big fans of the community & content you all are building.
Relay
@vsreekanti Congratulations on the launch!!
RightNow AI
RunLLM
@jaber23, we wish we could've gotten this out sooner too. Some of it is just figuring out what customers need incrementally as you build, and some of it is that the tech wasn't quite ready yet (e.g., Gemini 2.5 Flash is pretty important to our ability to do log analysis well). But we have a lot more coming soon. Stay tuned!
RunLLM
@jaber23 Six months ago, it was already pretty awesome. But it's much better now that we've rebuilt it: a new agentic planner with fine-grained reasoning and tool use, a redesigned UI for creating, managing, and inspecting multiple agents, and a Python SDK that gives you fine-grained control over support workflows. We'd love to get your impressions. Note that you can try the full product absolutely free! We'll ingest your documents and create a fine-tuned LLM that's an expert on your products. Then you can ask it hard questions about something you're familiar with to see how well it could work for you! 😻
BestPage.ai
Okay, this is brilliant—auto-resolving support tickets would save my team so much headache (and sleep). Does it handle really gnarly logs or just the easy stuff?
RunLLM
@joey_zhu_seopage_ai Hey Joey, after the agent uses a tool call to fetch logs from systems like GCP, it uses an LLM to extract the parts that are relevant to the support ticket, so yeah, it handles really gnarly logs. :)
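For the curious, that two-step triage can be sketched in plain Python. This is an illustration only, with stand-in functions in place of real tool calls and LLM prompts:

```python
# Hypothetical illustration (not RunLLM's actual code): fetch raw logs,
# then run a relevance pass that keeps only lines related to the ticket.

def fetch_logs(source: str) -> list[str]:
    # Stand-in for a tool call against a system like GCP Cloud Logging.
    return [
        "INFO  startup complete",
        "ERROR connection refused to db:5432",
        "DEBUG heartbeat ok",
    ]

def extract_relevant(lines: list[str], ticket: str) -> list[str]:
    # Stand-in for the LLM pass; a simple keyword filter for illustration.
    keywords = set(ticket.lower().split())
    return [line for line in lines if any(k in line.lower() for k in keywords)]

relevant = extract_relevant(fetch_logs("gcp"), "database connection refused")
```

The real system would replace the keyword filter with an LLM judging relevance, which is what lets it cope with noisy, gnarly logs rather than just exact matches.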
RunLLM
@joey_zhu_seopage_ai Thanks for the feedback Joey! Glad to hear you see the value in this. We agree that stopping at the simple stuff would be boring and a bit underwhelming. We're always focused on solving our customers' hardest problems, so if you see any areas for improvement, send them our way!
RunLLM
@joey_zhu_seopage_ai We're built to handle advanced technical support (the hard stuff). Our customers consistently tell us that they're able to reclaim at least a third of each support engineer's time and have fewer escalations into engineering. So, based on what our customers are experiencing, we're confident you could save time, avoid headaches, and reclaim some sleep!! 😴