mocap4face

A free SDK for realtime facial motion capture

Realtime facial motion capture SDK enabling live animation of 3D avatars, NFT PFPs, etc.

- iOS, Android, Web
- 42 tracked facial expressions
- Rigid head pose in 3D space
- Blendshape weight values per frame
- Eye and tongue tracking
- 3 MB model
- 60+ FPS (iPhone SE 1st gen)
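The per-frame output described above (blendshape weights plus a rigid head pose) can be consumed roughly as sketched below to drive an avatar. The type names, fields, and `setMorphTarget` callback are illustrative assumptions, not the actual mocap4face API:

```typescript
// Hypothetical per-frame tracking result, modeled on the feature list above:
// blendshape weights plus a rigid head pose. All names are assumptions.
interface FaceTrackingResult {
  blendshapes: Record<string, number>; // weight per tracked expression, 0..1
  rotation: { pitch: number; yaw: number; roll: number }; // head pose
}

// Apply one frame of weights to an avatar: clamp each weight to [0, 1]
// and forward it to whatever morph-target/rig system renders the character.
function applyFrame(
  result: FaceTrackingResult,
  setMorphTarget: (name: string, weight: number) => void
): void {
  for (const [name, weight] of Object.entries(result.blendshapes)) {
    setMorphTarget(name, Math.min(1, Math.max(0, weight)));
  }
}
```

In a real integration, `setMorphTarget` would map onto your engine's morph targets (e.g. a 3D mesh's morph-target influences) and the head-pose rotation would drive the head bone.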

Roee Eidan
This is really cool! It would be interesting to create another layer on top of this data that would analyse and measure the success of video calls (particularly in sales/interviews). Congrats on the launch, and good luck! 🙌
Max Tkacz
@roee_eidan more abstractly, sentiment analysis would be awesomely powerful. I'd envision it working similarly to NLP sentiment analysis: complicated under the hood, but it spits out some easy-to-use data like: `"expression": "smiling", "confidence": "0.877"`
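A minimal sketch of the sentiment layer described here: collapse raw blendshape weights into a single labeled expression with a confidence score. The blendshape names (`mouthSmileLeft`, etc.) and the averaging rule are illustrative assumptions, not part of the SDK:

```typescript
// Output shape matching the easy-to-use record described above.
interface ExpressionResult {
  expression: string;
  confidence: number;
}

// Map raw per-frame blendshape weights to a labeled expression by
// averaging the weights that contribute to each candidate label and
// keeping the highest-scoring one. Candidate names are assumptions.
function classifyExpression(
  blendshapes: Record<string, number>
): ExpressionResult {
  const candidates: Record<string, string[]> = {
    smiling: ["mouthSmileLeft", "mouthSmileRight"],
    frowning: ["mouthFrownLeft", "mouthFrownRight"],
  };
  let best: ExpressionResult = { expression: "neutral", confidence: 0 };
  for (const [label, keys] of Object.entries(candidates)) {
    const score =
      keys.reduce((sum, k) => sum + (blendshapes[k] ?? 0), 0) / keys.length;
    if (score > best.confidence) best = { expression: label, confidence: score };
  }
  return best;
}
```

A real sentiment layer would likely smooth over several frames and use a learned model rather than fixed thresholds, but the output contract stays this simple.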
Jon Slimak
@roee_eidan that's a great use case
Robin Raszka
@roee_eidan @max_tkacz if it can run on the client, offline, it could be a fun API for devs to use... we have a few experiments in the works 🤓
Roee Eidan
So many interesting use cases, really awesome technology! Big congrats to the team! 🎉
Robin Raszka
Hey PH 👋 we're excited to make our character animation technology accessible to all developers by releasing mocap4face.

Why? We think avatars are the future of how we connect on the Internet: free of the privacy concerns of showing your face, yet empowering you to express your ideal self. But some essential building blocks of web3, like facial motion capture, are still inaccessible to many developers of next-gen social apps and games. We know this problem well from our early days. In 2017, after months of research and a ton of disappointment with the performance, costs, and hackability of existing solutions, we concluded it would be faster to build our own from scratch than to bend existing SDKs with unclear roadmaps.

With this SDK, you can drive live avatars, build Snapchat-like lenses, AR experiences, face filters that trigger actions, VTubing apps, and more, with as little energy impact and CPU/GPU use as possible.

Here's how to get started:
1. Check the code examples on GitHub
2. Sign up for Facemoji Studio
3. Get an API key

We're looking forward to seeing what you will build 👀

Questions? 👉 https://discord.gg/facemoji
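One of the use cases mentioned above, "face filters that trigger actions", can be sketched as an edge-triggered check on a blendshape weight: fire a callback once when the weight crosses a threshold, and not again until the expression is released. The blendshape name and threshold below are assumptions for illustration, not mocap4face API:

```typescript
// Build a per-frame handler that fires `onTrigger` once each time the
// given blendshape weight rises past `threshold` (edge-triggered, so a
// held expression does not retrigger every frame).
function makeExpressionTrigger(
  blendshape: string,
  threshold: number,
  onTrigger: () => void
): (weights: Record<string, number>) => void {
  let active = false; // true while the expression is held above threshold
  return (weights) => {
    const w = weights[blendshape] ?? 0;
    if (w >= threshold && !active) {
      active = true;
      onTrigger();
    } else if (w < threshold) {
      active = false;
    }
  };
}
```

Hooked into the per-frame tracking loop, this is enough to, say, start a recording when the user raises their eyebrows.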
Daryll
This is an extremely interesting project for Metaverse uses. I can actually think of about one bazillion ways to use this. Really interesting and forward-thinking. How about mouth movements for the specific sounds we make? That'd be interesting to transfer to an avatar via a tool like this for spoken real-time avatar-based conversations. I know this only captures a few dozen expressions, but it seems like the tip of the iceberg. Super interesting @robinraszka - well done. And open-source too.
Robin Raszka
@dimwetoth we have more SDKs in the works (e.g. audio2blendshapes), check Discord 🙌