IEEE International Conference on Robotics and Automation (ICRA) 2023

SLMC and The Alan Turing Institute present one keynote, five papers, and a workshop at ICRA 2023


Professor Sethu Vijayakumar will deliver one of the prestigious ICRA 2023 keynotes, entitled 'From Automation to Autonomy: Machine Learning for Next-generation Robotics', on Thursday 1 June at 5pm in the Main ICC Auditorium for Plenary and Keynotes. [details]

Paper 1 (15:00 - 16:40 | Thursday 1 June | PH PODS 34-36 | ThPO2S-16.06)

Jaehyun Shim, Carlos Mastalli, Thomas Corbères, Steve Tonneau, Vladimir Ivan and Sethu Vijayakumar, Topology-Based MPC for Automatic Footstep Placement and Contact Surface Selection, Proc. IEEE International Conference on Robotics and Automation (ICRA 2023), London, UK (2023). [pdf][video][citation]


State-of-the-art approaches to footstep planning assume reduced-order dynamics when solving the combinatorial problem of selecting contact surfaces in real time. However, in exchange for computational efficiency, these approaches ignore limb dynamics and joint torque limits. In this work, we address these limitations by presenting a topology-based approach that enables model predictive control (MPC) to simultaneously plan full-body motions, torque commands, contact surfaces, and footstep placements in real time. To determine if a robot’s foot is inside a polygon, we borrow the winding number concept from topology. Specifically, we use winding number and electric potential to create a contact-surface penalty function that forms a harmonic field. Using our topology-based penalty function, MPC can then select a contact surface from all candidate surfaces in the vicinity and determine footstep placements within it. We highlight the benefits of our approach by showing the impact of considering full-body dynamics, which includes joint torque limits and limb dynamics, in the selection of footstep placements and contact surfaces. Additionally, we demonstrate the feasibility of deploying our topology-based approach in an MPC scheme through a series of experimental and simulation trials.
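As an illustration of the winding-number idea the paper builds on (not the authors' implementation, which turns this into a smooth penalty field for MPC), a minimal point-in-polygon membership test can be sketched in plain Python:

```python
import math

def winding_number(point, polygon):
    """Sum the signed angles subtended at `point` by each polygon edge.
    The total is ~2*pi when the point is inside, ~0 when it is outside."""
    px, py = point
    total = 0.0
    n = len(polygon)
    for i in range(n):
        # Edge endpoints expressed relative to the query point
        x1, y1 = polygon[i][0] - px, polygon[i][1] - py
        x2, y2 = polygon[(i + 1) % n][0] - px, polygon[(i + 1) % n][1] - py
        # Signed angle between the two relative vectors
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return total

def inside(point, polygon):
    # Threshold halfway between the two possible totals (0 and 2*pi)
    return abs(winding_number(point, polygon)) > math.pi

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(inside((0.5, 0.5), square))  # True: point lies inside the square
print(inside((1.5, 0.5), square))  # False: point lies outside
```

The discrete test above only answers in/out; the paper's contribution is to combine the winding number with an electric-potential formulation so the resulting harmonic field is differentiable and usable as a penalty inside the optimizer.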

Supported by: EU project Memory of Motion (MEMMO) and Alan Turing Institute

Paper 2 (09:00 - 10:40 | Thursday 1 June | Poster Hall N10/N11)

Namiko Saito, Joao Moura, Tetsuya Ogata, Marina Y. Aoyama, Shingo Murata, Shigeki Sugano and Sethu Vijayakumar, Structured Motion Generation with Predictive Learning: Proposing Subgoal for Long-Horizon Manipulation, Proc. IEEE International Conference on Robotics and Automation (ICRA 2023), London, UK (2023). [pdf][video][citation]


For assisting humans in their daily lives, robots need to perform long-horizon tasks, such as tidying up a room or preparing a meal. One effective strategy for handling a long-horizon task is to break it down into short-horizon subgoals that the robot can execute sequentially. In this paper, we propose extending a predictive learning model using deep neural networks (DNN) with a Subgoal Proposal Module (SPM), with the goal of making such tasks realizable. We evaluate our proposed model in a case study of a long-horizon task, consisting of cutting and arranging a pizza. This task requires the robot to consider: (1) the order of the subtasks, (2) multiple subtask selection, (3) dual-arm coordination, and (4) variations within a subtask. The results confirm that the model is able to generalize motion generation to unseen combinations of tools and object arrangements. Furthermore, it significantly reduces the prediction error of the generated motions compared to a model without the proposed SPM. Finally, we validate the generated motions on the dual-arm robot Nextage Open.

Supported by: EU project HARMONY, Kawada Robotics Corporation and Alan Turing Institute

Paper 3 (09:00 - 10:40 | Thursday 1 June | Room T8 | ThPO1S-25.12)

Daniel F. N. Gordon, Andreas Christou, Theodoros Stouraitis, Michael Gienger and Sethu Vijayakumar, Learning Personalised Human Sit-to-Stand Motion Strategies via Inverse Musculoskeletal Optimal Control, Proc. IEEE International Conference on Robotics and Automation (ICRA 2023), London, UK (2023). [pdf][citation]


Physically assistive robots and exoskeletons have great potential to help humans with a wide variety of collaborative tasks. However, a challenging aspect of the control of such devices is to accurately model or predict human behaviour, which can be highly individual and personalised. In this work, we implement a framework for learning subject-specific models of underlying human motion strategies using inverse musculoskeletal optimal control. We apply this framework to a specific motion task: the sit-to-stand transition. By collecting sit-to-stand data from 4 subjects with and without perturbations, we show that humans modulate their sit-to-stand strategy in the presence of instability, and we learn the corresponding models of these strategies. In the future, the personalised motion strategies resulting from this framework could be used to inform the design of real-time assistance strategies for human-robot collaboration problems.

Supported by: Alan Turing Institute and Honda Research Institute Europe

Paper 4 (09:00 - 10:40 | Thursday 1 June | Poster Hall N10/N11)

Christopher Edwin Mower, Joao Moura, Nazanin Zamani Behabadi, Sethu Vijayakumar, Tom Vercauteren and Christos Bergeles, OpTaS: An Optimization-based Task Specification Library for Trajectory Optimization and Model Predictive Control, Proc. IEEE International Conference on Robotics and Automation (ICRA 2023), London, UK (2023). [pdf][video][citation]


This paper presents OpTaS, a task specification Python library for Trajectory Optimization (TO) and Model Predictive Control (MPC) in robotics. Both TO and MPC are receiving increasing interest in optimal control, in particular for handling dynamic environments. While a flurry of software libraries exists to handle such problems, they either provide interfaces that are limited to a specific problem formulation (e.g. TracIK, CHOMP), or are large and statically specify the problem in configuration files (e.g. EXOTica, eTaSL). OpTaS, on the other hand, allows a user to specify custom nonlinear constrained problem formulations in a single Python script, allowing the controller parameters to be modified during execution. The library provides interfaces to several open-source and commercial solvers (e.g. IPOPT, SNOPT, KNITRO, SciPy) to facilitate integration with established workflows in robotics. Further benefits of OpTaS are highlighted through a thorough comparison with common libraries. An additional key advantage of OpTaS is the ability to define optimal control tasks in joint space, task space, or both simultaneously. The code for OpTaS is easily installed via pip, and the source code with examples can be found at
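OpTaS's own API is not reproduced here; as a generic sketch of the kind of nonlinear constrained trajectory-optimization problem such libraries express, the following toy formulation uses only SciPy (all names and numbers are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: plan a 1-D joint trajectory of N waypoints from q = 0 to q = 1
# that minimises squared finite-difference velocity, subject to joint limits.
N = 10

def cost(q):
    # Sum of squared velocities between consecutive waypoints
    return np.sum(np.diff(q) ** 2)

constraints = [
    {"type": "eq", "fun": lambda q: q[0]},         # start at q = 0
    {"type": "eq", "fun": lambda q: q[-1] - 1.0},  # end at q = 1
]
bounds = [(-0.2, 1.2)] * N  # joint limits applied to every waypoint

result = minimize(cost, np.linspace(0.3, 0.7, N),
                  bounds=bounds, constraints=constraints)
print(np.round(result.x, 2))  # waypoints spread roughly evenly from 0 to 1
```

The optimum here is a linearly spaced trajectory, since squared velocity is minimised by equal steps; in a full TO/MPC library the cost and constraints would instead encode robot dynamics, task-space targets, and obstacle terms.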

Supported by: EU project HARMONY, Kawada Robotics Corporation and Alan Turing Institute

Paper 5 (15:10 - 15:20 | Wednesday 31 May | ICC Capital Suite 2-4 | WeBT3.02)

Christian Rauch, Ran Long, Vladimir Ivan and Sethu Vijayakumar, Sparse-Dense Motion Modelling and Tracking for Manipulation without Prior Object Models, IEEE Robotics and Automation Letters (RAL), vol. 7(4), pp. 11394-11401 (2022). (Presented at: IEEE International Conference on Robotics and Automation (ICRA 2023)) [pdf] [DOI] [video] [citation]


This work presents an approach for modelling and tracking previously unseen objects for robotic grasping tasks. Using the motion of objects in a scene, our approach segments rigid entities from the scene and continuously tracks them to create dense and sparse models of the objects and the environment. While the dense tracking enables interaction with these models, the sparse tracking makes the approach robust against fast movements and allows it to redetect already modelled objects. The evaluation on a dual-arm grasping task demonstrates that our approach 1) enables a robot to detect new objects online without a prior model and to grasp these objects using only a simple parameterisable geometric representation, and 2) is much more robust than state-of-the-art methods.

Supported by: EU project HARMONY, the Alan Turing Institute and Kawada Robotics Corporation

Workshop - Embracing Contacts: Making Robots Physically Interact with Our World (June 2)

Humans can physically interact with the world by employing a diverse set of contact-rich strategies, such as pushing, throwing, catching, sliding, rolling, and pivoting. However, we have yet to empower robots with the ability to embrace contacts during manipulation.

Amid a growing interest in making robots manipulate in human-centred environments, e.g. home-care, logistics, healthcare, this workshop will bring together researchers from academia and industry to discuss the future of contact-rich manipulation and to tackle the unique challenges that handling contacts brings in the presence of under-actuated, hybrid, and uncertain dynamics.

****** Invited Speakers ******

Emo Todorov (University of Washington)
Aude Billard (EPFL)
Danica Kragic (KTH)
Animesh Garg (University of Toronto and NVIDIA)
Marc Toussaint (TU Berlin)
Evan Drumwright (Dextrous Robotics)
Yuval Tassa (DeepMind)
Andy Zeng (Google Brain)
Mel Vecerik (DeepMind and UCL)

Supported by: The Alan Turing Institute, Honda Research Institute EU, EU project HARMONY, DeepMind and Create

Accepted Workshop Papers

Saeid Samadi, Wenqian Du and Sethu Vijayakumar, Geometric Evaluation of Balance Regions for Multi-Contact Humanoids Using Contact Stability Criteria, Geometric Representations: The Roles of Modern Screw Theory, Lie Algebra, and Geometric Algebra in Robotics (LINK)

Marina Y. Aoyama, Joao Moura, Namiko Saito and Sethu Vijayakumar, Few-Shot Semi-supervised Learning From Demonstration for Generalisation of Force-Based Motor Skills Across Objects Properties, ICC Capital Suite 7

Sandor Felber, Joao Moura, and Sethu Vijayakumar, A Control System Framework for Robust Deployability of Teleoperation Devices in Shared Workspaces (Link). Bridging the Lab-to-Real Gap: Conversations with Academia, Industry, and Government, ICC Capital Suite 7 (Link)


Further Information

Embracing Contacts Workshop
Sethu Vijayakumar Keynote Speech - Details