p/introduce-yourself • by Will Wei • 3mo ago
Building Enterprise Knowledge Bases with LLMs in Public: Lessons from 15+ Projects
... applications. Why? Because we saw so many businesses excited by LLMs but completely stumped on how to actually implement them. It's a real pain point! Right now, our main focus is helping companies build out their enterprise knowledge bases, basically waking up all that dormant data and documentation with AI so it can actually be useful. Somehow, we've already worked on or consulted for 15+ of these knowledge base ...
p/bolt-new • by Rumana R • Featured • 5mo ago
Everything I Learned Building My Landing Page and Web Application on Bolt.new
... p/bolt-new community, I recently started building Couples Hub ( https://coupleshub.io/ ), a React-based application and Next.js-based ...
p/fleet-cockpit • by Robin Marillia • 6mo ago
The differences between prompt context, RAG, and fine-tuning and why we chose prompting
... window sizes (now reaching hundreds of thousands of tokens). However, it can become expensive with large inputs and may suffer from context overflow. RAG reduces token usage by retrieving only relevant snippets, making it efficient for large knowledge bases. However, it requires maintaining an embedding database and tuning retrieval mechanisms. Fine-tuning offers the best customization, improving response quality and efficiency. However, it demands significant resources, time, and ongoing model updates. Why We Chose Prompt Context ... current needs, prompt context was the most practical choice: it allows for a fast development cycle without additional infrastructure. Large context windows (100k+ tokens) are sufficient for our small knowledge base ...
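The trade-off described above can be sketched in a few lines of Python. Everything here is illustrative: the knowledge-base snippets, function names, and the bag-of-words cosine ranking are stand-ins (a real RAG stack would use an embedding model and a vector database, which is exactly the maintenance burden the post mentions).

```python
# Toy contrast between "prompt context" (send everything) and RAG
# (retrieve only relevant snippets). Names and data are illustrative.
from collections import Counter
import math

KNOWLEDGE_BASE = [
    "Fleet vehicles must be serviced every 10,000 km.",
    "Drivers log fuel purchases in the cockpit app.",
    "Maintenance alerts are sent when a sensor reports a fault.",
]

def prompt_context(question: str) -> str:
    # Prompt-context approach: include the ENTIRE knowledge base in every
    # prompt. No extra infrastructure, but token cost grows with the KB.
    return "\n".join(KNOWLEDGE_BASE) + "\n\nQ: " + question

def _bag(text: str) -> Counter:
    # Crude tokenization; real systems use learned embeddings instead.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rag_context(question: str, k: int = 1) -> str:
    # RAG approach: rank snippets by similarity to the question and keep
    # only the top k, so prompt size stays flat as the KB grows.
    q = _bag(question)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: _cosine(q, _bag(doc)),
                    reverse=True)
    return "\n".join(ranked[:k]) + "\n\nQ: " + question
```

With a small knowledge base, `prompt_context` is the simpler choice, which mirrors the post's conclusion; `rag_context` only pays off once the full corpus no longer fits comfortably (or affordably) in the context window.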