Hello Product Hunt,
I'm Alex, here with Jean-Marie, Andrey, and the rest of the Giskard team. We're thrilled and slightly nervous to present Giskard 2.0. This has been 2 years in the making, involving a group of passionate ML engineers, ethicists, and researchers, in partnership with leading standards organizations such as AFNOR and ISO.
So, why Giskard? Because we understand the dilemma you face. Manually creating test cases, crafting reports, building dashboards, and enduring endless review meetings: testing ML models can take weeks, even months!
With the new wave of Large Language Models (LLMs), testing models feels like an even more impossible mission. The questions keep coming: Where do you start? Which issues should you focus on? How do you implement the tests?
Meanwhile, the pressure to deploy quickly is constant, often pushing models into production with unseen vulnerabilities. The bottleneck? ML Testing systems.
Our experience includes leading ML Engineering at Dataiku and years of research in AI Ethics. We saw many ML teams struggling with the same issues: slowed down by inefficient testing, allowing critical errors and biases to slip into production.
Current MLOps tools fall short. They lack transparency and don't cover the full range of AI risks: robustness, fairness, security, efficiency, you name it. Add to this compliance with AI regulations, some of which carry steep penalties, with fines of up to 6% of your revenue (EU AI Act).
Enter Giskard:
- A comprehensive ML Testing framework for Data Scientists, ML Engineers, and Quality specialists. It offers automated vulnerability detection, customizable tests, CI/CD integration, and collaborative dashboards.
- An open-source Python library for automatically detecting hidden vulnerabilities in ML models and LLMs, tackling issues from robustness to ethical biases.
- An enterprise-ready Testing Hub application with dashboards and visual debugging, built to enable collaborative AI Quality Assurance and compliance at scale.
- Compatibility with the Python ML ecosystem, including Hugging Face, MLflow, Weights & Biases, PyTorch, TensorFlow, and LangChain.
- A model-agnostic approach that serves tabular models, NLP, and LLMs. Soon, we'll also support Computer Vision, Recommender Systems, and Time Series.
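To make the "automated vulnerability detection" idea concrete, here is a minimal, self-contained sketch of the core loop behind a robustness scan: perturb each input and flag how often the model's prediction flips. This is illustrative only, with toy names, not Giskard's actual API.

```python
# Illustrative sketch (not Giskard's actual API): an automated robustness
# scan perturbs inputs and measures how often predictions flip.

def scan_robustness(predict, samples, perturbations):
    """Return, per perturbation, the fraction of samples whose prediction changes."""
    issues = {}
    for name, perturb in perturbations.items():
        flips = sum(predict(perturb(s)) != predict(s) for s in samples)
        issues[name] = flips / len(samples)
    return issues

# Toy sentiment "model": positive iff the text contains "good".
predict = lambda text: "positive" if "good" in text.lower() else "negative"

samples = ["good movie", "bad plot", "good acting", "boring"]
perturbations = {
    "uppercase": str.upper,                 # harmless: .lower() absorbs it
    "typo": lambda s: s.replace("o", "0"),  # breaks the keyword match
}

report = scan_robustness(predict, samples, perturbations)
print(report)  # {'uppercase': 0.0, 'typo': 0.5}
```

A real scanner applies many such perturbation families per modality and turns each high flip rate into a reported issue.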
Equip yourself with Giskard to defeat your AI Quality issues!
We build in the open, so we're welcoming your feedback, feature requests, and questions.
For further information:
Website: https://www.giskard.ai/
GitHub: https://github.com/Giskard-AI/gi...
Discord Community: https://gisk.ar/discord
Best,
Alex & the Giskard Team
Artificial intelligence is expanding rapidly, and product teams are under pressure to integrate new AI features into their products quickly. While it may be easy to put together a prototype for a demonstration, releasing it to production comes with many other considerations.
In particular, LLMs can produce hallucinations (misinformation) and show biases. These errors can harm both product quality and the trust users place in our technology.
Hey Alex and the Giskard Team! I'm amazed at the comprehensive approach Giskard 2.0 takes to ML Testing. The fact that it's open-source and compatible with multiple platforms makes it even more appealing. Could you shed some light on how Giskard estimates the range of AI risks and handles the mitigation process? Also, it's impressive how you aim to cover different model types. Looking forward to seeing Giskard revolutionize ML Testing!
@builov84 thanks so much for the kind words! Our detailed open-source documentation pages outline the specific vulnerabilities that Giskard 2.0 can detect. For mitigation strategies, our enterprise hub offers robust debugging features, designed not only to identify risks but also to provide actionable insights into the sources of the issues detected. Feel free to dive into our docs and reach out with any further queries!
https://docs.giskard.ai/en/lates...
Hi @builov84, to estimate the risk range:
1. We first curate a list of the most relevant issues to check for, ones that reflect critical risks when detected. For tabular and NLP models, we have several categories: Performance, Robustness, Calibration, Data Leakage, Stochasticity, etc. For LLMs, we have Injection attacks, Hallucination & misinformation, Harmful content generation, Stereotypes, Information disclosure, and Output formatting.
2. Under each category, we mostly rely on tailored statistical procedures and metrics to estimate the probability of occurrence, statistical significance, and severity level of each issue found. We provide the option to use procedures like Benjamini-Hochberg to control the false discovery rate. We also explain the impact an issue could have on your ML pipeline.
3. Although our default risk-range assessment is carefully crafted, users can set up their own if needed by configuring the statistical thresholds and severity levels for their specific use case.
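For readers unfamiliar with the Benjamini-Hochberg procedure mentioned in step 2, here is a minimal pure-Python sketch (my own implementation of the standard procedure, not code from Giskard): when a scan runs many statistical checks, it keeps only the candidate issues whose p-values survive a false-discovery-rate bound.

```python
# Minimal Benjamini-Hochberg procedure: given p-values for many candidate
# issues, return the indices of those accepted at false-discovery rate q.

def benjamini_hochberg(p_values, q=0.05):
    m = len(p_values)
    # Sort p-values ascending, remembering their original indices.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * q ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k = rank
    # ... and accept the k smallest p-values.
    return sorted(order[:k])

# Eight candidate issues; only the two smallest p-values survive at q = 0.05.
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(p, q=0.05))  # [0, 1]
```

Note how a naive per-test cutoff of 0.05 would have flagged five "issues" here; controlling the false discovery rate keeps the scan report trustworthy.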
Our Giskard Hub is then dedicated to the mitigation process:
1. From the issues found during the scan, the user can automatically generate a set of tests and upload them into our Hub. Each of the tests generated reflects an issue found and embeds a quantitative metric (the one we relied on to estimate the severity level).
2. Once uploaded to the Hub, it becomes possible to customize these tests, use them with other models and datasets for comparison, and most importantly, use them to debug a specific model by investigating, one by one, the samples in your data that made these tests fail.
3. While debugging, we equip you with explanation tools like SHAP to shed light on feature importance for tabular and NLP models.
4. For each sample investigated, we automatically provide additional insights that help you detect critical patterns in your data, create additional tests, and assess the stability of your model against small data perturbations.
Giskard is a highly promising tool that excels in both functionality and user experience. I particularly appreciate its intuitive interface and robust features, which make handling complex tasks simple and efficient. While there is still some room for minor improvements, such as adding more customization options, overall, Giskard is already a mature and reliable tool. I would give it a high rating of 4.5 stars and look forward to future updates that could further enhance the user experience.