Wednesday, 15 November 2017 (6:00pm – 7:30pm)
Shared Autonomy: The Future of Interactive Robotics
The next generation of robots is going to work much more closely with humans and with other robots, and to interact significantly with the environment around them. As a result, the key paradigm is shifting from isolated decision-making systems to shared control – with significant autonomy devolved to the robot platform, and end-users in the loop making only high-level decisions.
This talk will introduce technologies ranging from robust multi-modal sensing and shared representations to compliant actuation and scalable machine-learning techniques for real-time learning and adaptation, enabling us to reap the benefits of increased autonomy while still feeling securely in control.
This also raises some fundamental questions; e.g., even when robots are ready to share control, what is the optimal trade-off between autonomy and control with which we are comfortable?
Domains where this debate is relevant include self-driving cars, mining, shared manufacturing, exoskeletons for rehabilitation, active prosthetics, large-scale scheduling (e.g., transport) systems, and oil and gas exploration, to name a few.
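To make the autonomy–control trade-off concrete, one common arbitration scheme in the shared-autonomy literature is linear blending, where a single parameter sets how much weight the robot's autonomous command receives relative to the user's input. The sketch below is illustrative only and is not taken from the talk; the function name, the two-axis velocity commands, and the fixed blending weight are all assumptions for the example.

```python
def blend_control(u_human, u_robot, alpha):
    """Linearly blend a human command with an autonomous command.

    alpha = 0.0 gives the user full control; alpha = 1.0 gives the
    robot full autonomy. Commands here are illustrative per-axis
    velocity commands (e.g., forward speed and turn rate).
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return [alpha * r + (1.0 - alpha) * h
            for h, r in zip(u_human, u_robot)]

# Hypothetical example: the user and the robot's planner disagree
# on the turn direction; the blended command sits between them.
u_human = [1.0, -0.5]   # user input: forward speed, turn rate
u_robot = [0.8, 0.3]    # planner output: forward speed, turn rate
blended = blend_control(u_human, u_robot, 0.25)
print(blended)
```

In practice the weight need not be fixed: it can be adapted online (for instance, from the robot's confidence in its prediction of the user's goal), which is exactly where the trade-off question above becomes an engineering decision.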