Your AI art copilot. Drag and drop “nodes” (text, image, video) onto a whiteboard and chain AI text, image, and video models together. Pipe text or image output to other nodes to refine prompts, turn them into videos, or explain images. Easily compare models side by side.
Been looking for something like this. Playing around with it, it's incredibly intuitive, and I like how composable each node is with the others. The creative opportunities are truly endless 🔥🚀
I'm a solo dev who just launched Blooming, where you can drag and drop nodes onto a whiteboard to chain AI text, image, and video models together.
The goal is simple: let creators work with multiple AI models in one place and visually connect them, without juggling tabs or losing track of the creative process.
I built Blooming after struggling with scattered AI tools: 5+ subscriptions, 1000+ open tabs, and constantly losing track of prompts, images, and videos across different services.
Blooming does the following:
- Provides a node-based canvas for visual workflow creation
- Allows multi-model switching to test different models side by side
- Lets you pipe text or image outputs into other nodes to refine your creations (see the sketch after this list)
- Supports iteration across multiple versions in the same workspace
- Enables downloading outputs easily without watermarks
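Conceptually, the canvas boils down to a dependency graph: each node waits for its upstream inputs, then runs its model. Here's a minimal sketch of that idea in TypeScript; the names (`CanvasNode`, `runPipeline`, `generate`) are illustrative assumptions, not Blooming's actual API:

```typescript
// Illustrative data model for a node canvas like this; names are
// assumptions for the sketch, not Blooming's real API.

type NodeKind = "text" | "image" | "video";

interface CanvasNode {
  id: string;
  kind: NodeKind;
  prompt: string;    // user prompt or template for this node
  inputs: string[];  // ids of upstream nodes whose outputs feed this one
  output?: string;   // generated text, or a URL to an image/video
}

// Run nodes in dependency order: a node executes once all of its
// upstream inputs have produced outputs; independent nodes run together.
async function runPipeline(
  nodes: Map<string, CanvasNode>,
  generate: (node: CanvasNode, inputs: string[]) => Promise<string>
): Promise<void> {
  const done = new Set<string>();
  while (done.size < nodes.size) {
    const ready = [...nodes.values()].filter(
      (n) => !done.has(n.id) && n.inputs.every((i) => done.has(i))
    );
    if (ready.length === 0) throw new Error("cycle or missing input in the graph");
    await Promise.all(
      ready.map(async (n) => {
        n.output = await generate(n, n.inputs.map((i) => nodes.get(i)!.output!));
      })
    );
    // Mark the whole wave done only after it finishes, so nodes in the
    // same wave never read each other's partial outputs.
    for (const n of ready) done.add(n.id);
  }
}
```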
This is my first product in this space and I'm still figuring things out. The tool is live now and ready to use.
I'd love your thoughts. What's confusing or missing for AI image and video power users? Which models should I integrate next? Any bottlenecks in the experience?
Website: https://blooming1000.com
AutoPosts AI
@edrick_dch Really like the idea of Blooming! It sounds like a great solution to the chaos of managing multiple AI tools. I agree that real-time collaboration would be a great addition, and an audio model could definitely expand the creative possibilities. Keep it up, excited to see how it evolves!
@edrick_dch Local model support? Something like a frontend extension for ComfyUI. That would actually be useful to a lot of devs.
Shit Drop Game
I tried using Blooming, and I think it will be very useful for creative directors and filmmakers.
One quick recommendation: it would be much nicer if generations ran in parallel.
Video generation was blocking the whole pipeline at the beginning, and for a while I thought something was wrong.
@jeahong Thank you for the feedback and for trying it out! You're right, video generation currently blocks the pipeline; I'm working on a fix. I'm also realizing the UI could do a better job of communicating how long a video will take to generate (a few minutes).
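For anyone curious about the fix: it's essentially moving from awaiting each generation in sequence to firing independent ones concurrently. A minimal sketch of the pattern in TypeScript (illustrative only, not Blooming's actual code):

```typescript
// Placeholder for a model call that can take a few minutes.
async function generateVideo(prompt: string): Promise<string> {
  return `video-for:${prompt}`;
}

// Blocking version: each video must finish before the next starts,
// so total wait time is the sum of all generations.
async function generateSequentially(prompts: string[]): Promise<string[]> {
  const results: string[] = [];
  for (const p of prompts) results.push(await generateVideo(p));
  return results;
}

// Parallel version: all requests start immediately,
// so total wait time is roughly the slowest single generation.
async function generateInParallel(prompts: string[]): Promise<string[]> {
  return Promise.all(prompts.map(generateVideo));
}
```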
Does this have any link-up with Sand.ai (Magi video gen's cloud infra)? The canvas-mode image-to-video generation is eerily similar. Nonetheless, that was a great concept, and so is this; BYOK + clean frontends are such a convenience.