Sebastian Starke to present at SIGGRAPH Asia on 19 November

Deep neural network generates realistic character-scene interactions

Computer scientists from the University of Edinburgh and Adobe Research have developed a novel, data-driven technique that uses deep neural networks to precisely guide animated characters through a variety of motions - sitting in chairs, picking up objects, running, side-stepping, and climbing over obstacles and through doorways - all driven by simple, user-friendly control commands. The researchers will demonstrate their work, Neural State Machine for Character-Scene Interactions, at ACM SIGGRAPH Asia, held 17 to 20 November in Brisbane, Australia.

"Achieving this in production-ready quality is not straightforward and very time-consuming. Our Neural State Machine instead learns the motion and required state transitions directly from the scene geometry and a given goal action," says Sebastian Starke, senior author of the research and a PhD student at the University of Edinburgh in Taku Komura's lab. "Along with that, our method is able to produce multiple different types of motions and actions in high quality from a single network."

Along with Starke and Komura, the researchers behind Neural State Machine for Character-Scene Interactions are He Zhang (University of Edinburgh) and Jun Saito (Adobe Research, USA).

SIGGRAPH Asia, now in its 12th year, attracts the most respected technical and creative people in computer graphics, animation, interactivity, gaming, and emerging technologies from around the world.

Useful links

EurekAlert!

SIGGRAPH Asia 2019