John Licato

Hallucina-Gen - Spot where your LLM might make mistakes on documents

Using LLMs to summarize or answer questions from documents? We auto-analyze your PDFs and prompts, and produce test inputs likely to trigger hallucinations. Built for AI developers to validate outputs, test prompts, and squash hallucinations early.


Replies

John Licato
Hi fellow makers,

We're Actualization.AI, and our mission is to make AI safer, more accurate, and genuinely helpful to people. We built this tool for AI developers and tinkerers who use LLMs to work with documents, especially those building things like policy chatbots, legal assistants, or internal tools over PDFs.

LLMs are powerful, but they're also prone to hallucinating, especially when summarizing or answering questions from dense documents like regulations, contracts, or technical reports. We see this all the time in our own AI projects, and we wanted a way to catch those errors before they reach users.

This project started with our own frustrations. One of us is a professor and AI researcher at USF; the other has a PhD specializing in AI/NLP. We kept running into the same issue: LLMs giving confident, wrong answers in high-stakes domains. So we built a tool to make AI development more robust.

Here's how it works:
- Upload your PDFs
- Tell us which LLM you're using and the prompts you're testing
- We analyze the documents and generate a spreadsheet of inputs likely to trigger mistakes

It helps you test prompts, improve guardrails, and understand how your model might fail, without running live model calls or setting up complex eval pipelines. (There's a short sketch at the end of this post showing one way to plug that spreadsheet into your own checks.)

This is primarily for:
- Developers building AI tools that reason over documents
- Teams deploying RAG systems or internal copilots
- Researchers and QA folks focused on LLM evaluation and reliability
- People tinkering with chatbots who want to know how accurate they are

We're giving Product Hunt users a free preview of the hallucinations we find, and we only ask for one thing: feedback from early users. What's helpful? What's missing? How can we make this better?

Thanks for checking it out. We're here all day if you have questions or want to go deeper.

- The Actualization.AI team
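If you want to feed the generated spreadsheet into your own testing loop, here's a minimal sketch of one way to do it. The CSV column name ("question"), the file names, and the OpenAI chat client are assumptions for illustration only; swap in your own columns, model, and prompt setup.

# Minimal sketch: run the generated test inputs through your own LLM and
# save the responses for manual hallucination review.
# Assumptions: the spreadsheet is exported as CSV with a "question" column,
# and you are calling an OpenAI chat model via the openai Python package.
import csv

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "Answer strictly from the provided document excerpt."

def run_test_inputs(csv_path: str, out_path: str, model: str = "gpt-4o-mini") -> None:
    with open(csv_path, newline="") as f_in, open(out_path, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.writer(f_out)
        writer.writerow(["question", "model_answer"])
        for row in reader:
            question = row["question"]  # assumed column name in the exported spreadsheet
            resp = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": question},
                ],
            )
            writer.writerow([question, resp.choices[0].message.content])

if __name__ == "__main__":
    run_test_inputs("hallucina_gen_inputs.csv", "model_answers.csv")

You'd then review model_answers.csv against the source PDFs to see which of the flagged inputs actually produced hallucinations with your prompt and model.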
Om Deore

It might be useful. Good work 👍

Manas Sanjay Pakalapati

Hey there, awesome makers! 👋

We’re super excited to share our new tool that helps catch those tricky AI mistakes in your document-based projects. Give it a try and let us know what you think!

Darsh Vaidya

Really neat tool for anyone building with LLMs over documents. You upload PDFs, plug in your prompts, and it flags spots where your model is most likely to mess up—super helpful for testing and sanity-checking without needing to wire up eval pipelines. Definitely worth checking out if you’re working on RAG or document-based assistants.