
Prompt recipes: from boilerplate to production
AI coding is pretty mind-blowing, but sometimes it's a headache built on a mountain of bugs. Usually that comes down to how you prompt. So, with that in mind, I'm starting a crowdsourced discussion so we can all improve our prompts and, in turn, our apps.
Share the exact prompts that turn blank AI requests into real‑world code. Show us how you go from “generate a REST API” to a deployable service in just a few steps.
What to post:
Prompt text (copy‑&‑paste ready) and its purpose (e.g., Node + Express, React, Python/FastAPI).
Key tweaks: how you layered in error handling, authentication, database setup, tests, Docker, CI/CD, etc.
Parameter files: JSON/YAML examples you feed in to auto‑generate endpoints, configs, or components.
Bonus points:
A single prompt that reads a JSON spec (e.g., service name, endpoints, DB choice) and spins up an entire project.
A chain of prompts covering frontend → API client → backend → deployment script.
Let’s build a lean, battle‑tested cookbook of prompts anyone can copy, tweak, and ship!
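To seed the discussion, here's one shape such a JSON spec could take (field names and values are purely illustrative, not a standard):

```json
{
  "service": "todo-api",
  "language": "python",
  "framework": "fastapi",
  "database": "postgres",
  "endpoints": [
    { "path": "/todos", "methods": ["GET", "POST"] },
    { "path": "/todos/{id}", "methods": ["GET", "PUT", "DELETE"] }
  ],
  "extras": ["docker", "tests", "ci"]
}
```

A prompt that reads a spec like this can generate the routes, models, and deployment files in one pass, and you version the spec alongside the code.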
Replies
Product Hunt
Okay so while not vibe coding this is my favorite prompt to use on @Krea to get some good images for my projects.
For vibe coding, it really depends on the language, but for Swift I try to break it up by file type, so my first prompt to an LLM usually goes something like: "[Enter app's desired goal/function/utility] and build me a step-by-step guide on how to build it. Break down each step by the files needed for it, the file types, and what they do. Make each step a phase. Let's have no more than 5 phases total."
Then I put those phases into a README, take that README into Cursor, give it the same prompt I gave the LLM, tell it to refer to the README, then have it list out anything it thinks could improve the process and, if so, update the README. So that prompt is like:
Once it updates the README I tell @Cursor to then:
From there I rinse and repeat until I finish all the phases.
This way I can:
Debug easier (per phase)
Save my repo accordingly and roll back changes more easily (Phase 2 messed up what we had? No problem, Phase 1 is safe)
Help the LLM zone in on a specific context and focus, so it's less likely to make mistakes
Use the README to give it continuous context of where we came from and where we're going
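The phased README described above could be sketched like this (app name, phases, and file names are placeholders, not from the original post):

```markdown
# MyTodoApp — Build Plan

## Phase 1: Data models
- Files: `Todo.swift`, `TodoStore.swift`
- Purpose: define the core model and local persistence

## Phase 2: Views
- Files: `TodoListView.swift`, `TodoDetailView.swift`
- Purpose: SwiftUI screens for listing and editing items

## Phase 3: Wiring & state
- Files: `AppState.swift`
- Purpose: connect views to the store

## Phase 4: Polish
- Purpose: empty states, error handling, accessibility

## Phase 5: Ship
- Purpose: icons, build settings, TestFlight
```

Committing the repo at the end of each phase gives you the per-phase rollback points mentioned above.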
@gabe this was super informative! I like the idea of limiting the phases. A big issue for me is that the AI forgets halfway through what it's building if it has a long build process.
@README @gabe Whilst not a coder any more, I also like this and get the process, and may try it myself over the coming days - many thanks!
@README @gabe
In line with what you are doing here having @Cursor make a plan, check out this video - it's worth a watch. You download a custom instructions file from Git, use it plus a few simple questions to come up with the app's overall task list, then @Cursor breaks it into subtasks with checkboxes, and you work your way through the file.
https://www.youtube.com/watch?v=cniTWVMGD08&list=LL&index=1
How Vibe Coding Goes PRO
I basically mind dump into Grok 4 and tell it to ask me clarifying questions until it can write an effective prompt to build X. Then I put that into Claude Code or Windsurf. I can't be bothered to use my brain to write the real prompt, that's what the robot is for.
@steveb that's a smart move! I like that!
For me, the initial prompt isn't everything - AI can still go off the rails with a great prompt. But I've found that when it's not doing well, it's good to clear the context (e.g. fork everything into a new chat), or you'll keep having the same issue. It's like giving the AI a clean slate / fresh perspective, without the context of failure.
@frederikb yeah, i've found that too. it's like it gets overwhelmed sometimes
@aaronoleary great way to put it! feels almost human sometimes
The best results I've found come from "crossing the streams," so to speak - as others mentioned, taking a spec from one GPT and applying it with another. I've found that breaking things into fine chunks, ideally "hopped" between GPTs, dramatically decreases errors and improves end-to-end function.
Pretty Prompt
Hey! So I'm very involved in the vibe coding world and prompt engineering (or, as it's called now, "context engineering"!), and I built a tool that helps with all of these prompting issues. It's called Pretty Prompt and works like Grammarly, but for prompting.
I'd love to hear feedback from the community if anyone has tried it with different prompting techniques.