@rrhoover Thanks Ryan. Yeah, in fact we had something like Gabsee developed already last year, with generic avatars, but decided not to release it. We felt that without the face-tech, the content lacked context and therefore wouldn't be something users felt like sharing...
Thank you so much for featuring us on Product Hunt! We are humbled :)
This project has been a fun one, though certainly challenging in many aspects. After we launched Rawr Messenger we started to explore bringing our avatars into AR and pretty quickly had a prototype where they were animated and performing in the real world. It was pretty good, but after we got over the "this is augmented reality" hype and started further testing it with real users, we came to the realisation that whilst the fun was there, the context of the user creating the content was not, which created a disconnect between the sender and receiver. In hindsight, looking at the success of Gabsee, we probably should have just released it :)
So we started to work on face-tech and set ourselves two clear goals:
1. No cloud computing: the magic had to work instantly and on the device. After using MyIdol (which was a ton of fun), we found that waiting a few minutes for your image to come back from a server was super frustrating. And if you didn't like your photo, taking a new one, sending it back again and waiting another 3 minutes or so was something few wanted to do.
2. One selfie vs. scanning around your face. This was more of a usability thing than anything else. We had tried Seene and other 3D modelling apps and felt that moving your phone around your head or face was something people didn't really want to do. People are used to taking selfies, so we wanted to produce a 3D face from one selfie image and also detect facial expressions.
Another thing we thought about was offering the user an avatar creator vs. ready characters. If you've tried Rawr Messenger then you'll have seen that we've developed a pretty advanced avatar creator, so we naturally gravitated towards that approach. However, yet again, in user testing we found we were wrong. Users saw this as content creation and wanted to jump in and out of the experience quickly and effortlessly; they wanted ready characters rather than a creator. Users also wanted us to create the funny content, and interpreting "what is funny" remains a challenge. Fortunately, we have analytics plugged into the app so we can see what content people are enjoying, and hopefully through that we can learn and get better at content creation :)
So the first version is out, and we would love your feedback. We are currently working on surface detection for Miso Happy and have new character content coming into the app continuously and directly through the backend (if anyone has suggestions for characters, we are all ears :) )
Thank you for the support,
Ozz