Hey Hunters,
Over the last few months, we’ve been deep in “vibe coding” with AI and somehow ended up with Daf·thunk — a visual workflow automation platform that lives right in your browser. It’s open-source (give it a star here), surprisingly powerful, and honestly, kind of fun to use.
Built on Cloudflare (because we like speed and not dealing with servers), Daf·thunk lets you wire up automations with emails, APIs, and cron jobs. It handles the heavy lifting with serverless execution, smart processing, and persistent storage — so you can focus on automating your automations.
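For a sense of how cron-driven automations fit the Workers model, here is a minimal, hypothetical sketch. This is not Dafthunk's actual code: `dueWorkflows`, the sample workflow list, and the cron expressions are all illustrative assumptions. The `scheduled` export is the standard Cloudflare Workers entry point for cron triggers.

```typescript
// Hypothetical sketch of a cron-triggered Cloudflare Worker.
// Names like `dueWorkflows` and "daily-report" are illustrative, not Dafthunk's code.

// Pure helper: pick the workflows whose cron expression matches the trigger
// that fired. Kept pure so it is easy to test in isolation.
export function dueWorkflows(
  workflows: { id: string; cron: string }[],
  firedCron: string
): string[] {
  return workflows.filter((w) => w.cron === firedCron).map((w) => w.id);
}

export default {
  // Cloudflare invokes `scheduled` once per cron trigger declared in the
  // Wrangler configuration; `event.cron` carries the expression that fired.
  async scheduled(event: { cron: string }): Promise<void> {
    const ids = dueWorkflows(
      [
        { id: "daily-report", cron: "0 9 * * *" },
        { id: "inbox-poller", cron: "*/5 * * * *" },
      ],
      event.cron
    );
    for (const id of ids) {
      // Actual workflow execution (graph traversal, node evaluation) elided.
      console.log(`would run workflow ${id}`);
    }
  },
};
```

In practice the workflow list would come from persistent storage rather than being inlined, but the shape of the handler stays the same.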
Take it for a spin, tell us what you think, or just drop in and say hi. Contributions, feedback, memes — all welcome.
The Dafthunk Team
Here are some technical details about our development process. We primarily used Cursor in its agent and tab modes, alongside Claude Sonnet (3.5, 3.7, and 4) and Gemini 2.5 Pro. For more complex changes, or when reviewing them, we occasionally switched to MAX Mode. We regularly refined our Cursor rules and began scoping specific rules to different parts of the codebase (backend, frontend, database, etc.). We also indexed documentation and referenced it extensively in prompts. For large refactors, we often pointed the agent at previous commits to reapply established patterns elsewhere in the code.
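Scoped rules in Cursor live as `.mdc` files under `.cursor/rules/`, with frontmatter that restricts them to matching paths. A sketch of what a backend-scoped rule might look like (the globs, description, and rule content here are illustrative assumptions, not our actual rules):

```
---
description: Conventions for the backend worker
globs: apps/api/**/*.ts
alwaysApply: false
---

- Prefer deep modules: expose a small public API, keep rich logic internal.
- Validate all external input at the handler boundary.
- Reuse existing database access patterns; do not hand-roll SQL in handlers.
```

The `globs` field is what lets different parts of the codebase (backend, frontend, database) get different guidance without bloating a single global rule file.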
Overall, we feel that prompting for small, incremental, and easy-to-review changes scales well when coding with LLMs and the results are really impressive. In this regard, Andrej Karpathy’s talk “Software Is Changing (Again)” resonates deeply. John Ousterhout’s concept of deep modules has also been a useful mental model: our Cursor rules ask for simple APIs that hide rich internal logic, and avoid “wide” interfaces that mirror implementation details.
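To make the "deep module" idea concrete, here is a hypothetical sketch in the spirit of what our rules ask for: a single narrow function that hides ordering, memoization, and cycle detection, instead of a wide interface that exposes each of those steps to callers. The `Node` type and `runWorkflow` are illustrative, not Dafthunk's actual implementation.

```typescript
// Illustrative deep module (Ousterhout): one-function API, rich internals.
// Not Dafthunk's real code; names are hypothetical.

type Node = { id: string; deps: string[]; run: (inputs: string[]) => string };

// Narrow API: callers hand over nodes and get every node's output back.
// Dependency ordering, result caching, and cycle checks stay internal,
// rather than leaking as separate parse/sort/evaluate calls.
export function runWorkflow(nodes: Node[]): Map<string, string> {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const results = new Map<string, string>();
  const visiting = new Set<string>();

  const evaluate = (id: string): string => {
    const cached = results.get(id);
    if (cached !== undefined) return cached; // memoize shared dependencies
    if (visiting.has(id)) throw new Error(`cycle at ${id}`);
    visiting.add(id);
    const node = byId.get(id);
    if (!node) throw new Error(`unknown node ${id}`);
    const out = node.run(node.deps.map(evaluate)); // evaluate deps first
    visiting.delete(id);
    results.set(id, out);
    return out;
  };

  for (const n of nodes) evaluate(n.id);
  return results;
}
```

A caller never sees the traversal; swapping the recursion for an explicit topological sort later would change nothing at the call site, which is exactly the property a deep module buys you.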
Aside from frequent commits, we didn't document our process much, as we were mainly exploring and trying to build intuition about what works and what doesn't. Since we decided early on to err on the side of trusting the LLM more than usual, we've released everything under the MIT License and without warranty. Surprisingly, our felt need for unit tests decreased as more powerful models were released. This will bite us hard very soon, and contributions are welcome ;)