Would a shared dataset of real mobile apps supercharge LLM “vibe coding”? 🤔

Hey fellow makers!

As we build Droidrun, an agent framework that navigates apps via real UI structure, we’ve hit a common blocker: there’s no public dataset of real Android apps with their UI hierarchies, screen flows, or metadata.

Vibe coding agents for mobile UIs is still tricky: it takes a lot of tuning, guessing, and rebuilding of flows until they “feel right.”

But what if we had a dataset of full mobile apps, including UI trees, screen flows, and component types?

Would that help agents generalize better across apps?

Imagine if we had:

A curated dataset of real-world apps (shopping, socials, finance, utilities)

Structured UI metadata: buttons, lists, input fields + screen transitions

Context: categories, UX patterns, navigation flows
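
To make the idea concrete, here is a rough sketch of what a single dataset record could look like, in Python. Everything here (UINode, Screen, AppRecord, and all the field names) is hypothetical, invented purely to illustrate the shape of the data, not an existing Droidrun or dataset schema:

```python
# Hypothetical record format. Every name below is made up for illustration.
from dataclasses import dataclass, field

@dataclass
class UINode:
    component: str                                   # "button", "list", "input", ...
    label: str = ""                                  # visible text / content description
    children: list["UINode"] = field(default_factory=list)

@dataclass
class Screen:
    screen_id: str
    ui_tree: UINode                                  # root of the UI hierarchy
    transitions: dict[str, str] = field(default_factory=dict)  # action -> target screen_id

@dataclass
class AppRecord:
    app_name: str
    category: str                                    # "shopping", "finance", ...
    screens: list[Screen] = field(default_factory=list)

# Toy example: a shopping app where tapping "Add to cart" leads to checkout.
app = AppRecord(
    app_name="toy_shop",
    category="shopping",
    screens=[
        Screen(
            screen_id="product_list",
            ui_tree=UINode("list", children=[UINode("button", "Add to cart")]),
            transitions={"tap:Add to cart": "checkout"},
        ),
        Screen(
            screen_id="checkout",
            ui_tree=UINode("input", "Shipping address"),
        ),
    ],
)
```

Even something this minimal would give agents a consistent structure to prompt against, instead of per-app guesswork.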

Question for you:

Would you use such a dataset for vibe coding your agent prompts?

Would it help agents align better with real app UIs?

Would it reduce repetitive prompt tuning?

Or is this just another dead end?

Keen to hear your thoughts, past experiences, and frustrations. Is this worth building, or just noise?

Peter from Droidrun
