
grimly.ai
Drop-in security for your AI stack.
40 followers
Your AI is one bad prompt away from disaster. Grimly.ai defends against prompt injections, jailbreaks, and abuse in real time. Add it to your stack in minutes. Works with any LLM. No agents, no fine-tuning — just security that works.
Hey Product Hunt! 👋
I'm excited to share grimly.ai, a tool built out of frustration watching AI apps get wrecked by prompt injections and jailbreaks.
If you’re building anything with LLMs — chatbots, agents, SaaS tools — you need a protection layer. But most people skip it because it’s painful to build or doesn’t exist yet.
But that’s exactly what grimly.ai solves:
🔐 It adds real-time prompt security to your stack
⚡️ Add it with just a few lines of code
📊 You get full visibility into threats, usage, and more
I built this for myself originally, but quickly realized the need is WAY bigger.
Would love your feedback, feature ideas, or war stories from shipping AI. Let’s make AI safer — without slowing things down.
Thanks for checking it out 🙏
🔗 https://grimly.ai
P.S. Curious how AI vulnerabilities work? Try my AI hacking game: CONTAINMENT
@scott_busby1 The real time visibility dashboard is a game-changer. I can finally see what’s affecting my models.
@isla_hughes Absolutely! Let me know if you want to discuss implementation, ill be happy to help!
@scott_busby1 Great potential! Let's give it a try.
@abigail_phillips2 Thank you Abigail! Lets connect and I would love to help you get set up!
Detecting prompt injections can be tricky especially with vague language. What’s your strategy for handling false positives and how does that affect legitimate user prompts?
Great question, @nicholas_anderson0 — we take a multi-layered approach. Our classification system is calibrated to lean cautious but context-aware, and we stack it with rules + heuristics that minimize false positives. For legit prompts, we log + flag instead of blocking unless there’s clear risk. We’re working on a system that allows the end user to allow list words or phrases as well in the case that prompts like “give me your password” would not be malicious in the context of the underlying application.
Ilove the no fine tuning approach having plug and play security is a huge win for fast paced teams. Could you explain how Grimly performs during red teaming or simulated adversarial testing?
@nicole_kelly5 During all testing we've performed the accuracy is great. There are some cases where you might experience an attack/prompt in an well obfuscated/encoded form that might be interpreted as benign, but with additional metadata collected you can associate the benign marked prompt with an adversarial session (post analysis). In general test results have been very good though.