gm and welcome back to your weekly roundup of all things tech, shipping, and launching. In today's issue: five of the coolest products from last week, a breakdown of the AI media boom, and a forum thread all about what Cursor can and can't do (yet).

AI visuals are still in their ✨prolific era✨ and now video’s catching up. OpenAI’s image model in GPT-4o kicked things off with Ghibli portraits and meme-worthy selfies, while Midjourney V7 added a faster draft mode and voice prompts so you can just say your way into concept art.
But the real plot twist? Runway Gen-4. It’s their newest video model, and it’s finally solving a big AI video problem: consistency. Characters stay recognizable, shots flow together, and scenes actually make sense. You can upload a single reference image, type a prompt, and get a short clip that doesn’t feel like it came from five different timelines.
It’s a huge win for indie creators, but a growing headache for artists whose styles keep showing up in AI outputs they never agreed to. As the tools improve, the questions about ownership, credit, and what counts as creative work when the machine does the heavy lifting only get louder.
So yeah, the vibes are strong. The rules? Still in beta.

Hyuntak Lee asked where Cursor couldn’t quite deliver, and the responses were honest and thoughtful. One dev had it suggest restarting a project entirely, only to later fix the bug with a few small tweaks. Another said Cursor feels like a super-capable intern: great most of the time, but occasionally too eager.
A few mentioned edge cases, context loss, or chain-reaction edits that fixed one thing and broke three others. But no one was rage-quitting. The consensus was more like: this tool is powerful, but you still need to double-check its work.
Building with Cursor? This thread’s full of tips for where to keep a closer eye.