Realtime facial motion capture SDK enabling live animation of 3D avatars, NFT PFPs, and more on iOS, Android, and Web:
- 42 tracked facial expressions
- Rigid head pose in 3D space
- Blendshape weight values per frame
- Eye and tongue tracking
- 3 MB model
- 60+ FPS (iPhone SE 1st gen)
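As a rough illustration of the per-frame output described above — blendshape weights plus a rigid head pose — the data could be modeled as below. The type and field names here are hypothetical, not the SDK's actual API:

```typescript
// Hypothetical shape of one tracked frame; the real SDK's types may differ.
interface HeadPose {
  // Rotation (degrees) and translation in the camera's coordinate space.
  pitch: number;
  yaw: number;
  roll: number;
  position: [number, number, number];
}

interface FaceFrame {
  // One weight in [0, 1] per tracked blendshape (42 in mocap4face's case),
  // keyed by blendshape name, e.g. "jawOpen" or "eyeBlinkLeft".
  blendshapes: Record<string, number>;
  pose: HeadPose;
  timestampMs: number;
}

// Example frame: mouth slightly open, smiling, head turned a bit to the right.
const frame: FaceFrame = {
  blendshapes: { jawOpen: 0.35, eyeBlinkLeft: 0.02, mouthSmile: 0.7 },
  pose: { pitch: -2.1, yaw: 12.4, roll: 0.3, position: [0, 0, -0.45] },
  timestampMs: 16,
};
```

Driving an avatar then amounts to mapping these named weights onto the corresponding morph targets of the rig every frame.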
This is really cool! It would be interesting to build another layer on top of this data that analyses and measures the success of video calls (particularly in sales or interviews).
Congrats on the launch, and good luck!
@roee_eidan more abstractly, sentiment analysis would be awesomely powerful. I'd envision it working similarly to NLP sentiment analysis: complicated under the hood, but it spits out easy-to-use data like: {"expression": "smiling", "confidence": 0.877}
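The idea in the comment above can be sketched concretely. Assuming per-frame blendshape weights in [0, 1] (the names like `mouthSmile` and the thresholds below are illustrative assumptions, not the SDK's real keys), a tiny classifier could pick the dominant expression and report it in the suggested format:

```typescript
// Map a few illustrative blendshape weights to a single labeled expression.
// Blendshape names and the 0.3 threshold are assumptions for this sketch.
function dominantExpression(
  weights: Record<string, number>
): { expression: string; confidence: number } {
  // Candidate expressions and the blendshape weight that drives each one.
  const candidates: Array<[string, number]> = [
    ["smiling", weights["mouthSmile"] ?? 0],
    ["surprised", weights["jawOpen"] ?? 0],
    ["blinking", weights["eyeBlinkLeft"] ?? 0],
  ];
  // Pick the strongest signal; fall back to "neutral" below the threshold.
  let best: [string, number] = ["neutral", 0];
  for (const c of candidates) {
    if (c[1] > best[1]) best = c;
  }
  if (best[1] < 0.3) {
    return { expression: "neutral", confidence: 1 - best[1] };
  }
  return { expression: best[0], confidence: best[1] };
}

const result = dominantExpression({ mouthSmile: 0.877, jawOpen: 0.1 });
// result: { expression: "smiling", confidence: 0.877 }
```

A production version would smooth weights over several frames before labeling, to avoid flickering between expressions.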
Hey PH, we're excited to make our character animation technology accessible to all developers by releasing mocap4face.
Why? We think avatars are the future of how we connect on the Internet: no privacy concerns from showing your face, while still empowering you to express your ideal self. But some essential building blocks of web3, like facial motion capture, are still inaccessible to many developers of next-gen social apps and games. We know this problem well from our early days…
In 2017, after months of research and a lot of disappointment with the performance, cost, and hackability of existing solutions, we concluded it would be faster to build our own from scratch than to bend existing SDKs with unclear roadmaps to our needs.
With this SDK, you can drive live avatars, build Snapchat-like lenses, AR experiences, face filters that trigger actions, VTubing apps, and more with as little energy impact and CPU/GPU use as possible.
Here's how to get started:
1. Check out the code examples on GitHub
2. Sign up for Facemoji Studio
3. Get an API key
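Once you have an API key, wiring tracker output to an avatar might look roughly like the following. This is a sketch under stated assumptions: the tracker class, `onFrame` callback, and `applyToAvatar` function are illustrative stand-ins (defined here as stubs), not the SDK's real API — see the GitHub examples for actual usage:

```typescript
// Stand-in for the real SDK tracker; the actual class and API will differ.
// This stub exists only so the data flow below is concrete and runnable.
class FakeTracker {
  private handler: ((weights: Record<string, number>) => void) | null = null;

  // Subscribe to per-frame blendshape weights.
  onFrame(handler: (weights: Record<string, number>) => void): void {
    this.handler = handler;
  }

  // Simulate one tracked camera frame arriving.
  pushFrame(weights: Record<string, number>): void {
    this.handler?.(weights);
  }
}

// Apply blendshape weights to an avatar rig (here just recorded, for illustration).
const appliedFrames: Record<string, number>[] = [];
function applyToAvatar(weights: Record<string, number>): void {
  appliedFrames.push(weights);
}

// The typical flow: subscribe once, then forward every frame to the rig.
const tracker = new FakeTracker();
tracker.onFrame((weights) => applyToAvatar(weights));
tracker.pushFrame({ jawOpen: 0.4, mouthSmile: 0.6 });
```

The point is the shape of the loop — camera frame in, named weights out, weights onto the rig — which stays the same whichever platform (iOS, Android, Web) you target.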
We're looking forward to seeing what you build!
Questions? Join us at https://discord.gg/facemoji
This is an extremely interesting project for Metaverse uses. I can think of about a bazillion ways to use this. Really interesting and forward-thinking. How about mouth movements for the sounds we make? It would be interesting to transfer those to an avatar via a tool like this for spoken, real-time avatar-based conversations. I know this only captures a few dozen expressions, but it seems like the tip of the iceberg.
Super interesting @robinraszka - well done. And open-source too.