OpenShield is a new-generation security layer for AI models—a transparent proxy between your application and your AI model(s). You don't need to modify your code. You can keep it simple!
Hi!
I'm David, co-founder of OpenShield. OpenShield is a transparent security layer that integrates your applications and AI models seamlessly. Building on traditional transparent proxies, it enhances them with cutting-edge AI techniques. Compatible with all significant LLM APIs, OpenShield allows you to apply rule-based policies and WAF or chain multiple LLMs to address issues like prompt injection, language forcing, and PII data classification. Simply replace the base URL in your API calls to benefit from robust, adaptable security tailored for AI-driven applications.
Why do you need this?
If you use AI in your product, it's important to know that it can create new security risks. You should take steps to protect your software, interfaces, and AI systems from outside attacks.
Key features:
Rate-limiting
Caching
Virtual keys
Rule-based policies
LLM model-based policies ( English language forcing, prompt injection, invisible characters)
Coming soon:
Rag based policies
Hosted version
Multiple AI provider support
OpenShield is an open-source project. Please support us with a Github star!
@davidpapp love that you chose a proxy. The last thing I want to do is modify application code and maintain security related code. What's your plan with open-source? Do you plan on keeping it open forever?
Great to see OpenShield launching, David! The seamless integration with existing applications looks impressive. Excited to see how it enhances security for AI usage. Definitely giving this an upvote!