This was just released at the Watson Developer Conference (where I'll be speaking later today):
One of the best examples of where Project Intu might be able to help out developers is in the area of conversation, language and visual recognition. Here, developers can integrate Watson’s abilities with a device’s capabilities to effectively “act out” interactions with users. So, rather than the developer having to program each device or avatar’s individual movements, Project Intu does it for them, combining movements that are appropriate for the specific task the device or avatar is performing, such as greeting a visitor at a hotel, or helping out a customer in a retail store.
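To make that idea a little more concrete, here's a minimal hypothetical sketch (not the actual Project Intu API; names like IntuLikeAgent, Gesture, and TASK_PLANS are invented for illustration) of what "acting out" an interaction could look like: the developer declares a high-level task such as greeting a hotel visitor, and a framework layer composes the matching gestures and pairs them with a conversational reply, instead of the developer scripting every individual movement.

```python
# Hypothetical sketch only -- not the actual Project Intu API.
# It illustrates the idea in the quote above: the developer declares a
# high-level task ("greet a visitor at a hotel"), and a framework layer
# picks and sequences the gestures and speech for the device, so each
# individual movement doesn't have to be programmed by hand.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Gesture:
    """A single device movement, e.g. a wave or a head nod."""
    name: str
    duration_s: float


# Library of reusable gestures the framework can draw from.
GESTURES: Dict[str, Gesture] = {
    "wave": Gesture("wave", 1.5),
    "nod": Gesture("nod", 0.8),
    "point_to_desk": Gesture("point_to_desk", 2.0),
}

# Mapping from a high-level task to the gesture sequence that "acts it out".
# In a real framework this composition would come from the framework itself,
# not from a hand-written table.
TASK_PLANS: Dict[str, List[str]] = {
    "greet_hotel_visitor": ["wave", "nod"],
    "assist_retail_customer": ["nod", "point_to_desk"],
}


class IntuLikeAgent:
    """Toy agent that pairs a conversational reply with composed gestures."""

    def __init__(self, reply_fn: Callable[[str], str]):
        # reply_fn stands in for a Watson conversation/language service.
        self.reply_fn = reply_fn

    def act_out(self, task: str, user_utterance: str) -> None:
        reply = self.reply_fn(user_utterance)
        print(f"say: {reply}")
        for gesture_name in TASK_PLANS.get(task, []):
            gesture = GESTURES[gesture_name]
            print(f"perform: {gesture.name} ({gesture.duration_s}s)")


if __name__ == "__main__":
    # A canned reply stands in for the cloud service call.
    agent = IntuLikeAgent(lambda text: "Welcome! How can I help you today?")
    agent.act_out("greet_hotel_visitor", "Hi, I'm checking in.")
```

The point of the sketch is the division of labor: the app only names the task, and the composition of movements lives in the framework layer, which is the value the announcement is describing.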
Here's the source code.