Stitch by Google Labs is a new AI experiment that turns prompts & images into UI designs & frontend code. Leverages Gemini 2.5 Pro, exports to Figma & code.
Replies
Hi everyone!
Google Labs has just released Stitch, and it's a really cool new tool focused specifically on front-end UI design and development. You can give it a text prompt or even an image of a sketch or wireframe, and it generates UI designs along with the frontend code.
It uses Gemini 2.5 Pro under the hood and lets you iterate on designs, then paste them into Figma or export the code. The idea is to make that handoff from design to functional UI much smoother and faster.
This is interesting because while we see a lot of innovation in LLM capabilities themselves, how we interact with these AI-powered applications on the front-end is just as crucial, if not more so. Tools like Stitch that focus on improving that agentic UI/UX creation process are super important for making AI actually usable and delightful.
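For anyone curious what the "frontend code" half of that handoff can look like, here is a rough, hypothetical sketch of the kind of component a prompt-to-UI tool might export. The component name, props, and Tailwind-style class names are illustrative assumptions on my part, not actual Stitch output.

```tsx
// Hypothetical example of exported UI code from a prompt-to-UI tool.
// Names, props, and class names are illustrative assumptions, not real Stitch output.
import React from "react";

type FeatureCardProps = {
  title: string;
  description: string;
  onSelect?: () => void;
};

// A small presentational card styled with utility classes (Tailwind-style).
export function FeatureCard({ title, description, onSelect }: FeatureCardProps) {
  return (
    <button
      type="button"
      onClick={onSelect}
      className="w-full rounded-2xl border border-gray-200 p-4 text-left shadow-sm hover:shadow-md"
    >
      <h3 className="text-lg font-semibold text-gray-900">{title}</h3>
      <p className="mt-1 text-sm text-gray-600">{description}</p>
    </button>
  );
}

// Usage:
// <FeatureCard title="Export to Figma" description="Paste the generated design into Figma." />
```

The point is less the specific markup and more the shape of the handoff: something a developer can drop into an existing codebase and tweak, rather than a static mockup to re-implement.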
I created a very rough draft. With just a 4–5 line text prompt, I think it's quite solid :D :)
@busmark_w_nika Super cool, huh? 👀
Works as I expected; I'll try a more complex structure next!
Congratulations :)
Hi Zac, congrats on the launch. Great idea!
As a co-founder, I fully understand the challenge of getting top-notch design for any service these days.
Wishing you all the best on this journey!
Hi folks - Vincent from the Stitch team here!
Thanks all for trying Stitch (and thanks Zac for hunting us!). Stitch is still in beta, and we have tons of ideas to make it better. Stay tuned; the next couple of months will be exciting.
@vnallatamby Wow, this is crazy. You've all done a great job with Stitch! The community's enthusiastic response definitely shows the positioning is spot on. I'm looking forward to seeing what a great product Stitch grows into!
Stitch shows real potential in bridging idea to interface—turning prompts and images into functional UI and frontend code with Gemini 2.5 Pro. Exporting directly to Figma and code makes it especially useful for rapid prototyping.
This helps a lot. I was looking for a tool like this. :)
It worked! It would be even cooler if there were a way to feed it a landing page so it could copy an existing brand style.
I tried using it and then it timed out... probably from everybody trying it. :'''-D But this looks awesome - I feel like this is what v0 and Lovable are missing!
Tried Stitch today and it’s seriously impressive. The sketch-to-UI flow feels like magic—and the fact that it gives you production-ready frontend code? Wild.
What I love most is how it bridges that awkward gap between design and dev. No more translating mockups pixel by pixel. Just iterate, export, tweak, done. This is what AI should be doing—removing friction, not adding more layers.
Excited to see how this evolves. Tools like this make AI feel less like a novelty and more like an actual teammate.