
Giskard

The Testing platform for AI systems

5.0 • 2 reviews • 983 followers

Testing and QA software • AI Metrics and Evaluation
Test LLM systems at scale 🛡️ Detect hallucinations & biases automatically 🔍 Enterprise LLM Evaluation Hub ☁️ Self-hosted / cloud 🤝 Integrated with 🤗, MLFlow, W&B
Company Info
giskard.ai • GitHub
Giskard Info
Launched in 2023 (1 launch)
Forum: p/giskard-2
Social: LinkedIn • Threads

Similar Products

Xcode
Develop, test, and distribute apps for all Apple platforms
4.9 (106 reviews) • Code editors • Testing and QA software

Sentry
Application monitoring and error tracking software
4.9 (56 reviews) • Issue tracking software • Testing and QA software

Langfuse
Open Source LLM Engineering Platform
5.0 (49 reviews) • AI Infrastructure Tools • AI Metrics and Evaluation

GitHub Actions
Automate your workflow from idea to production
5.0 (22 reviews) • Automation tools • Testing and QA software

Postman
Build APIs together
4.6 (62 reviews) • Team collaboration software • Testing and QA software
Giskard gallery (6 images)
Free
Launch tags: Open Source • Developer Tools • Artificial Intelligence
Launch Team: Nicolas Grenié • Alex Combessie • Andrey Avtomonov


Alex Combessie • Giskard • Maker 📌
Hello Product Hunt, I'm Alex, here with Jean-Marie, Andrey, and the rest of the Giskard team. We're thrilled and slightly nervous to present Giskard 2.0. This has been 2 years in the making, involving a group of passionate ML engineers, ethicists, and researchers, in partnership with leading standards organizations such as AFNOR and ISO.

So, why Giskard? Because we understand the dilemma you face. Manually creating test cases, crafting reports, building dashboards, and enduring endless review meetings: testing ML models can take weeks, even months! With the new wave of Large Language Models (LLMs), testing models becomes an even more impossible mission. The questions keep coming: Where to start? What issues to focus on? How to implement the tests? 🫠 Meanwhile, the pressure to deploy quickly is constant, often pushing models into production with unseen vulnerabilities. The bottleneck? ML testing systems.

Our experience includes leading ML Engineering at Dataiku and years of research in AI Ethics. We saw many ML teams struggling with the same issues: slowed down by inefficient testing, allowing critical errors and biases to slip into production. Current MLOps tools fall short. They lack transparency and don't cover the full range of AI risks: robustness, fairness, security, efficiency, you name it. Add to this compliance with AI regulations, some of which can be punitive, costing up to 6% of your revenue (EU AI Act).

Enter Giskard:
📦 A comprehensive ML Testing framework for Data Scientists, ML Engineers, and Quality specialists. It offers automated vulnerability detection, customizable tests, CI/CD integration, and collaborative dashboards.
🔎 An open-source Python library for automatically detecting hidden vulnerabilities in ML models and LLMs, tackling issues from robustness to ethical biases.
📊 An enterprise-ready Testing Hub application with dashboards and visual debugging, built to enable collaborative AI Quality Assurance and compliance at scale.
∭ Compatibility with the Python ML ecosystem, including Hugging Face, MLFlow, Weights & Biases, PyTorch, TensorFlow, and LangChain.
↕️ A model-agnostic approach that serves tabular models, NLP, and LLMs. Soon, we'll also support Computer Vision, Recommender Systems, and Time Series.

Equip yourself with Giskard to defeat your AI Quality issues! 🛡️ We build in the open, so we welcome your feedback, feature requests, and questions.

For further information:
Website: https://www.giskard.ai/
GitHub: https://github.com/Giskard-AI/gi...
Discord Community: https://gisk.ar/discord

Best, Alex & the Giskard Team
2yr ago
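
For readers who want to try the open-source library Alex describes, here is a minimal sketch of the automated scan workflow. It assumes the giskard 2.x Python API (giskard.Model, giskard.Dataset, giskard.scan) and uses an illustrative scikit-learn classifier as the model under test; check the official docs for the exact, current signatures.

```python
# Minimal sketch of an automated Giskard scan (assumes the giskard 2.x API;
# the scikit-learn model and dataset are illustrative stand-ins).
import giskard
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer(as_frame=True)
df = data.frame.rename(columns={"target": "label"})
features = [c for c in df.columns if c != "label"]

clf = LogisticRegression(max_iter=1000).fit(df[features], df["label"])

# Wrap the data and the prediction function so Giskard can probe them.
dataset = giskard.Dataset(df=df, target="label")
model = giskard.Model(
    model=lambda batch: clf.predict_proba(batch[features]),
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=features,
)

# Automated vulnerability scan: performance, robustness, bias, leakage, ...
scan_results = giskard.scan(model, dataset)
scan_results.to_html("giskard_scan_report.html")  # shareable HTML report
```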
Adrian
@a1x Great product and indeed strong potential to improve MLOps across the board
2yr ago
Nicolas Grenié • Typeform • Hunter
Artificial intelligence is expanding rapidly, and product teams are under pressure to integrate new AI features into their products quickly. While it may be easy to put together a prototype for a demonstration, releasing it for production comes with many other considerations. In particular, LLMs can produce hallucinations (misinformation) and show biases. These errors can harm both product quality and the trust users place in our technology.
2yr ago
Jean-Marie John-Mathews • Giskard • Maker
@picsoung Yes absolutely!
2yr ago
Thelonious Thackeray
@picsoung nice
2yr ago
Alexandr Builov • Scade.pro
Hey Alex and the Giskard Team! 👋 I'm amazed at the comprehensive approach Giskard 2.0 is providing to tackle ML Testing. The fact that this is open-source and compatible with multiple platforms makes it even more appealing. 🚀 Could you shed some light on how Giskard manages the AI risk range and the mitigation process? Also, it's impressive how you aim to cover different model types. 🙌 Looking forward to seeing Giskard revolutionize ML Testing!
2yr ago
Luca Martial • Maker
@builov84 thanks so much for the kind words! 🌟 Our detailed open-source documentation pages outline the specific vulnerabilities that Giskard 2.0 can detect. For mitigation strategies, our enterprise hub offers robust debugging features, designed to not only identify risks but also provide actionable insights into the sources of the issues detected. Feel free to dive into our docs and reach out with any further queries! 🚀🛠️ https://docs.giskard.ai/en/lates...
2yr ago
Rabah Abdul Khalek • Giskard • Maker
Hi @builov84, to estimate the risk range:

1. We first curate a list of the most relevant issues to check for, reflecting critical risks if they are detected. For tabular and NLP models, we have several categories: Performance, Robustness, Calibration, Data Leakage, Stochasticity, etc. For LLMs, we have Injection attacks, Hallucination & misinformation, Harmful content generation, Stereotypes, Information disclosure, and Output formatting.
2. Under each category, we mostly rely on tailored statistical procedures and metrics to estimate the probability of occurrence, statistical significance, and severity level for each of the issues found. We provide the option to use procedures like Benjamini-Hochberg to decrease the false discovery rate. We also explain the impact an issue could have on your ML pipeline.
3. Although our default risk range assessment is carefully crafted, users can set up their own by configuring the statistical thresholds and severity levels for their use case if needed.

Our Giskard Hub is then dedicated to the mitigation process:

1. From the issues found during the scan, the user can automatically generate a set of tests and upload them to our Hub. Each generated test reflects an issue found and embeds a quantitative metric (the one we relied on to estimate the severity level).
2. Once uploaded to the Hub, it becomes possible to customize these tests, use them with other models and datasets for comparison, and, most importantly, use them to debug a specific model by investigating, one by one, the samples in your data that made these tests fail.
3. While debugging, we equip you with explanation tools like SHAP to shed some light on feature importance for tabular and NLP models.
4. For each sample investigated, we automatically provide additional insights that help you detect critical patterns in your data, create additional tests, and assess the stability of your model against small data perturbations.
2yr ago
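
To make the scan-to-Hub flow Rabah outlines more concrete, here is a sketch that continues from the scan example earlier in the thread. It assumes the giskard 2.x test-suite and GiskardClient APIs; the Hub URL, API key, and project key are placeholders, and exact signatures may differ in current releases, so treat this as an outline rather than a reference.

```python
# Sketch of turning scan findings into a reusable test suite and sending it
# to the Giskard Hub (continues from the scan example above; giskard 2.x
# assumed, Hub URL / API key / project key are placeholders).
import giskard
from giskard import GiskardClient

scan_results = giskard.scan(model, dataset)

# 1. Each detected issue becomes a test that embeds the quantitative metric
#    used to estimate its severity.
test_suite = scan_results.generate_test_suite("Issues found by the scan")

# 2. Re-run the same suite locally, e.g. against a retrained candidate model.
suite_results = test_suite.run(model=model)
print(suite_results.passed)

# 3. Upload the suite to the Hub for customization, comparison, and debugging.
client = GiskardClient(url="http://localhost:19000", key="YOUR_API_KEY")
test_suite.upload(client, "my_project")
```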


You might also like

Arbor
Read 1 aggregated summary to skip 100 similar content.
AlphaFold 3
Predicting the structure & interactions of life's molecules
Google.dev
Google Developer Profiles
Mystic AI
Run any AI model on your cloud or ours...
LLM Explorer
Find the best large language model for local inference
Prompto
Interact with various LLMs in your browser
Reviews

Andrew Han Zheng • 1 review
Giskard is a highly promising tool that excels in both functionality and user experience. I particularly appreciate its intuitive interface and robust features, which make handling complex tasks simple and efficient. While there is still some room for minor improvements, such as adding more customization options, overall, Giskard is already a mature and reliable tool. I would give it a high rating of 4.5 stars and look forward to future updates that could further enhance the user experience.
7mo ago
Alex Combessie • Giskard
Thanks for your kind and positive words! We're always keen to improve. What kind of customization options would you find most useful?
7mo ago
oussama ababsa • 1 review
It was a good job, thank you very much, you helped me.
2yr ago