ZeroTrusted.ai offers robust LLM Firewall solutions, ensuring your data and identity are protected in the AI space. Our service secures your data, keeps your prompts anonymous, and prevents unauthorized access. Perfect for users who value security.
I am extremely excited and happy about ZeroTrusted's launch today on Product Hunt! 🚀 Being part of this journey and witnessing our idea transform into a great product for data privacy is beyond rewarding. A huge shoutout to everyone who made this dream a reality. ZeroTrusted is here to revolutionize how we protect our digital conversations and data with intelligence and ease. Can't wait to dive into the discussions and see how ZeroTrusted empowers each of you. Let's make the digital world a safer place, together!
@sidraref congrats on the launch! Seems like something set up as a compliance tool for companies that interact with LLMs. Are you thinking of it that way?
@frank_denbow, first of all, thank you for your review. I believe we need to do a better job of highlighting all our value propositions in the hero section.
You are right; our product assists companies with their compliance needs.
To address some of the questions you raised:
1) ZeroTrusted.ai acts as middleware between users and Large Language Models (LLMs) through our secure chat or API.
2) There's no need for you to have a separate account or key for each LLM. Instead, we provide our own keys, allowing access without revealing your identity to the LLMs.
3) Your point about the potential exposure of scrubbed data in the event of a breach at a third-party LLM's network is partially correct. However, your identity will not be linked to that data, a distinction that is particularly important for both individual users and businesses.
4) We offer features that maintain context when sanitizing sensitive data.
Example #1
Suppose a medical expert needs to process the following input:
"Create a summary of this patient's diagnosis: Patient Name: Paul Smith Date of Birth: 12/08/1987 SSN: 666-555-5555 Diagnosis: Chronic Heart Disease Treatment Plan: Undergo heart surgery by Dr. Smith at General Hospital on 03/15/2024 Insurance: ABC Health Insurance, Policy Number 434444444"
Before we pass it on to any LLM, we'll alter the sensitive details (the PHI compliance violations) and transform it to:
"Create a summary of this patient's diagnosis: Patient Name: John Doe Date of Birth: 01/01/1970 SSN: 123-45-6789 Diagnosis: Chronic Heart Disease Treatment Plan: Undergo heart surgery by Dr. Smith at General Hospital on 03/15/2024 Insurance: ABC Health Insurance, Policy Number 987654321"
This way, we preserve the context while ensuring that your PHI compliance requirements are met and sensitive data isn't exposed to the LLM.
After getting a response from the LLM, we revert the swapped values back to your originals while preserving the response context.
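The swap-out/swap-back flow described above can be sketched roughly like this. This is a minimal illustration in Python; the function names, mapping, and values are my own assumptions, not ZeroTrusted.ai's actual detection or replacement logic:

```python
# Hypothetical sketch of the swap-out / swap-back flow. The mapping and
# function names are illustrative only; the real product's detection and
# replacement logic is not public.

def sanitize(prompt, replacements):
    """Swap each sensitive value for a fictional stand-in, keeping a reverse map."""
    reverse = {}
    for original, stand_in in replacements.items():
        prompt = prompt.replace(original, stand_in)
        reverse[stand_in] = original
    return prompt, reverse

def restore(response, reverse):
    """Swap the fictional stand-ins in the LLM's response back to the originals."""
    for stand_in, original in reverse.items():
        response = response.replace(stand_in, original)
    return response

replacements = {
    "Paul Smith": "John Doe",
    "12/08/1987": "01/01/1970",
    "666-555-5555": "123-45-6789",
}

prompt = "Summarize: Patient Name: Paul Smith, DOB: 12/08/1987, SSN: 666-555-5555"
safe_prompt, reverse = sanitize(prompt, replacements)
# safe_prompt now contains only the fictional values and can be sent to the LLM
llm_response = "Summary for John Doe (DOB 01/01/1970): ..."  # stand-in for the LLM call
final = restore(llm_response, reverse)  # the real name and DOB come back
```

Note the restore step only needs the reverse map, so the original values never have to leave the middleware.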
We will be adding features that use RLHF to learn and adjust to customer preferences.
Example #2:
If a customer (individual or corporate) submits legal content containing sensitive information, our "Fictionalize" feature replaces that data with fictional values. This preserves the context while protecting the actual sensitive information. The original data is restored once a response is received.
Hope this helps. Please let us know if there are any more questions you would like us to address.
Once again, we appreciate your feedback, and we hope you can benefit from the privacy and security solution that we provide.
Glad to have come across an LLM tool with a focus on privacy. But I have one thing on my mind, and I hope @femitfash will share more: how does the platform keep data secure and accurate?
Hello @adams_parker. Unlike other LLM services, 1) we don’t store any history
2) we don’t track your data
3) we also don’t divulge your username/email to any LLM
4) we use advanced encryption techniques (in addition to TLS) to ensure your searches are not exposed to gateway servers.
@adams_parker In addition to 3) by Femi: we filter out your personal, healthcare, and financial information before sending anything to an LLM.
For example, if you pass in the prompt: Hi ChatGPT, my name is Adams Parker. My credit card number is 1234-567-8911. Give me a payment integration code to add in my site.
We'll first sanitize your critical information: Hi ChatGPT, my name is John Doe. My credit card number is 1111-111-0000. Give me a payment integration code to add in my site.
That way, the sanitized prompt prevents your data from being passed to the LLM, and we then give you the response with your PII swapped back in.
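As a rough illustration of the detection side of this, here is a deliberately naive Python sketch that spots a credit-card-shaped number and swaps in a dummy of the same shape. The regex and dummy value are my own assumptions for demonstration; a real firewall would use far stronger detectors (Luhn checks, NER models for names, and so on):

```python
import re

# Naive illustration only: match a credit-card-shaped number and replace it
# with a dummy of the same shape. Real PII detection is much more robust.
CARD_RE = re.compile(r"\b\d{4}-\d{3,4}-\d{4}(?:-\d{4})?\b")

def redact_cards(prompt, dummy="1111-111-0000"):
    """Replace anything that looks like a card number with a dummy value."""
    return CARD_RE.sub(dummy, prompt)

prompt = "My credit card number is 1234-567-8911. Give me a payment integration code."
print(redact_cards(prompt))
```

The same pattern-plus-substitution idea extends to SSNs, dates of birth, policy numbers, and other structured identifiers.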
@sidraref congrats on the launch! I really like the focus on personal security / data protection as it relates to LLMs, and the growing comfort everyday folks have with AI in their lives.
Curious - what do you see as the biggest barriers re: hitting scale for a model like this?
@dzaitzow thanks for the kind words and feedback!
To be honest, our challenges are vast.
But we knew what we were getting into was not going to be easy. We are constantly working on performance to ensure we do all the heavy lifting behind the scenes while maintaining decent response times.
With our reliability feature (it runs your query through multiple LLMs, then scores the results and returns the best one), we experience delays, so we are redesigning our approach to improve performance by as much as 75%.
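That reliability flow could be sketched, very loosely, like this. The concurrency model and the scoring function here are my assumptions (a real scorer would judge answer quality, not length):

```python
from concurrent.futures import ThreadPoolExecutor

def best_response(query, llms, score):
    """Fan the query out to every provider in parallel, then return the
    highest-scoring answer. Parallel fan-out limits the added latency to
    roughly the slowest single provider rather than the sum of all of them."""
    with ThreadPoolExecutor(max_workers=len(llms)) as pool:
        responses = list(pool.map(lambda llm: llm(query), llms))
    return max(responses, key=score)

# Stand-in "LLMs" for demonstration; real callables would hit provider APIs.
llm_a = lambda q: "short answer"
llm_b = lambda q: "a much longer, more detailed answer"
best = best_response("explain OAuth", [llm_a, llm_b], score=len)
```

Even with parallel fan-out, total latency is bounded by the slowest provider, which is one plausible source of the delays mentioned above.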
Overall we are ecstatic as we continue to get great feedback from both individual users and our enterprise customers.
Please message us at contact@zerotrusted.ai if you have any questions or if there is any way we could be of assistance.