rabbit has created a first-of-its-kind mobile device loaded with rabbit OS that lets people use intuitive input methods to accomplish tasks in a tangible way. The new standalone device – r1 – eliminates the need for users to navigate multiple apps.
As an AI tech writer, lecturer, and enthusiast obsessed with wearable AI devices, it's great to see rabbit's unique approach with their "large action model" — a new foundation model designed to understand human intentions and act on those requests. Great to see what the founders and team at rabbit are building.
I just watched the presentation, and here are my thoughts.
1. For all the voice commands with actions like playing music, booking an Uber, etc., it's nothing different from what Alexa does.
2. For the trip planning and other info-related commands, it is basically doing what ChatGPT/Bard does.
3. For the demo about ordering a pizza, the command was too specific about ordering the popular item from Pizza Hut. What if I wasn't sure about what to eat and wanted to look for available vegan options near me? If it is going to give me a list that I have to scroll through, I don't see how it is different from using a food ordering app.
4. For the flights/hotels/car rental demo, he never shows the screen; he just says "confirm" while looking at it himself. Same concern here: if I have to choose from multiple options for each category, it is no different from using Expedia or some other travel app.
5. For the Midjourney and spreadsheet demos, it's more effort to use the device than to just get the task done on my laptop.
Overall, it was a good demo, but nothing exciting. And definitely not something that can replace the current app-based smartphone interface.
Cheers!
Definitely forward-thinking; moving from app-based to app-less is a way forward. I think an option or mode to use it as a regular phone would reduce concerns about compatibility for the larger base of users, but there will still be early adopters for sure.
Looks pretty cool — I watched the whole keynote. One thing I'm curious about is what platform runs the background tasks, such as the Midjourney image generation task in Discord. Does it use the laptop you originally used to teach the R1 the task, or does it run on the R1 itself? If it runs the tasks locally, what if you're not logged into Discord on your R1? Conversely, in the other scenario, what if you're on the go and your laptop is off?
I watched the whole launch video. It looks super cool and I love the hardware design by Teenage Engineering. It's all very slick.
I did have a few questions though. They talked about their LAM and how the Rabbit can understand any UI, yet Jesse never shows the R1 or their tech actually interacting with a UI. I think the closest he gets is pointing the R1's camera at a spreadsheet and asking it to add a column.
I can imagine a different or "companion" piece where it IS desktop software, has access to the pixels on your screen and can learn a UI and point and click for you. That's where I thought he was going when he talked about their LAM and had the outlines of all these user interfaces.
$199 is a bit much to spend on a toy... or maybe not?
After watching the keynote, I'm highly dubious this rabbit will hop.
The industrial design is gorgeous (thanks TE!), but the meandering keynote was basically a collection of recent agentive AI hacks (using LangSmith or similar) to execute tasks. Definitely the future, but I'm not convinced Apple and other device makers won't disrupt themselves to bring these innovations to market.
The post-app future was also the one we were building towards in the bot era — but it turns out human eyes are an essential aspect of any computing experience, which the rabbit CEO even demonstrated and admitted to ("We're not trying to replace your phone!").
$199 is far too low a price point to be able to cover customer service and marketing needs unless he has ByteDance-level money to spend on growth.
@chrismessina This is the sort of critical thinking that I want to see on a product like this. It looks super cool and the price point is hard to resist. But will I still be using it in, say, a year's time?
@chrismessina This is so valid. After watching the entire keynote I was not convinced, but the key factor here is the price point: it's so low that markets with low-income groups can at least give this product a try (only if it's cool), because the masses can only be tapped by either hype or immense value addition.
@chrismessina Both occupy the same space for me:
1. They excite me a lot because they feel like sci-fi become real.
2. I didn't 100% understand what they do at launch.