IEEE International Conference on Robotics and Automation (ICRA) 2024
SLMC and the Alan Turing Institute present five papers and a workshop at ICRA 2024
Paper 1 (Thurs 16 May | 13.30-15.00 | ThBT28-NT.4)
Russell Buchanan*, Adrian Röfer, João Moura, Abhinav Valada and Sethu Vijayakumar, Online Estimation of Articulated Objects with Factor Graphs using Vision and Proprioceptive Sensing, Proc. IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan (2024). [pdf] [video] [citation]
Abstract
From dishwashers to cabinets, humans interact with articulated objects every day, and for a robot to assist in common manipulation tasks, it must learn a representation of articulation. Recent deep learning methods can provide powerful vision-based priors on the affordance of articulated objects from previous, possibly simulated, experiences. In contrast, many works estimate articulation by observing the object in motion, requiring the robot to already be interacting with the object. In this work, we propose to combine the best of both worlds by introducing an online estimation method that merges vision-based affordance predictions from a neural network with interactive kinematic sensing in an analytical model. Our work has the benefit of using vision to predict an articulation model before touching the object, while also being able to update the model quickly from kinematic sensing during the interaction. In this paper, we implement a full system using shared autonomy for robotic opening of articulated objects, in particular objects in which the articulation is not apparent from vision alone. We deployed our system on a real robot and performed several autonomous closed-loop experiments in which the robot had to open a door with an unknown joint while estimating the articulation online. Our system achieved an 80% success rate for autonomous opening of unknown articulated objects.
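The core idea of merging a vision-based prior with interactive kinematic sensing can be illustrated with a toy factor graph. The sketch below fuses a (hypothetical) scalar articulation parameter, say a hinge radius, predicted by vision with noisy kinematic measurements gathered during the interaction; this is a minimal precision-weighted MAP fusion for illustration, not the paper's actual factor-graph formulation.

```python
def fuse_estimates(prior_mean, prior_var, measurements, meas_var):
    """MAP estimate of a tiny factor graph with one scalar variable.

    The variable (e.g. a hinge radius) is connected to a vision prior
    factor and one unary factor per kinematic measurement; with Gaussian
    factors the MAP solution is a precision-weighted average.
    """
    precision = 1.0 / prior_var + len(measurements) / meas_var
    weighted_sum = prior_mean / prior_var + sum(measurements) / meas_var
    return weighted_sum / precision, 1.0 / precision

# Vision predicts a 0.50 m radius with large uncertainty; kinematic
# sensing during opening is more precise and pulls the estimate online.
mean, var = fuse_estimates(0.50, 0.04, [0.60, 0.62, 0.58], 0.01)
```

As more kinematic measurements arrive, their combined precision dominates the vision prior, which mirrors the abstract's point that the model is predicted before contact and refined quickly during the interaction.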
Supported by: EU project HARMONY and The Alan Turing Institute
Paper 2 (Tues 14 May | 16.30-18.00 | TuCT14-AX.8)
Andreas Christou, Antonio J. del-Ama, Juan C. Moreno and Sethu Vijayakumar, Adaptive Control for Triadic Human-Robot-FES Collaboration in Gait Rehabilitation: A Pilot Study, Proc. IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan (2024). [pdf] [video] [citation]
Abstract
The hybridisation of robot-assisted gait training and functional electrical stimulation (FES) can provide numerous physiological benefits to neurological patients. However, the design of an effective hybrid controller poses significant challenges. In this over-actuated system, it is extremely difficult to find the right balance between robotic assistance and FES that will provide personalised assistance, prevent muscle fatigue and encourage the patient's active participation in order to accelerate recovery.
In this paper, we present an adaptive hybrid robot-FES controller that addresses these challenges and enables triadic collaboration between the patient, the robot and FES. A patient-driven controller is designed in which the voluntary movement of the patient is prioritised, and assistance is provided by FES and the robot in a hierarchical order depending on the patient's performance and their muscles' fitness. The performance of this hybrid adaptive controller is tested in simulation and on one healthy subject. Our results indicate improved tracking performance, lower overall assistance, and less muscle fatigue when the adaptive hybrid controller is used compared to its non-adaptive counterpart. This suggests that our hybrid adaptive controller may be able to adapt to the behaviour of the user to provide assistance as needed and prevent the early termination of physical therapy due to muscle fatigue.
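The hierarchical allocation described above can be sketched as a simple rule: the patient's voluntary effort comes first, FES covers the deficit while the muscles are fit, and the robot covers whatever remains. All function names, thresholds and gains below are illustrative placeholders, not the controller from the paper.

```python
def allocate_assistance(tracking_error, fatigue,
                        err_threshold=0.05, gain=10.0, fes_capacity=1.0):
    """Split required assistance between FES and the robot.

    Hierarchy: no assistance while the patient tracks well (patient-
    driven), FES is prioritised next, and its share shrinks as muscle
    fatigue grows, handing the remainder to the robot.
    """
    needed = max(tracking_error - err_threshold, 0.0) * gain
    fes = min(needed, fes_capacity * (1.0 - fatigue))  # fatigue limits FES
    robot = needed - fes                               # robot covers the rest
    return fes, robot
```

With fresh muscles FES carries most of the load; as fatigue approaches 1.0 the same tracking error is served almost entirely by the robot, which captures the fatigue-prevention rationale in the abstract.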
Supported by: HONDA Research Institute, Engineering and Physical Sciences Research Council (EPSRC, grant reference EP/L016834/1) and The Alan Turing Institute
Paper 3 (Thurs 16 May | 10.30-12.00 | ThAT11-CC.6)
Marina Y. Aoyama, João Moura, Namiko Saito and Sethu Vijayakumar, Few-Shot Learning of Force-Based Motions From Demonstration Through Pre-training of Haptic Representation, Proc. IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan (2024). [pdf] [video] [citation]
Abstract
Learning to adapt motion to various physical properties of manipulated objects in contact-rich tasks requires extensive human demonstrations. Our semi-supervised Learning from Demonstration approach decouples the learned model into a haptic representation encoder and a motion generation decoder. This enables us to pre-train the first using a large amount of unsupervised data while using few-shot LfD to train the second, leveraging the benefits of learning skills from humans. We validate our approach in a wiping task with sponges of varying stiffness and surface friction, showcasing improved physical property recognition and motion generation for unseen objects on the KUKA iiwa robot arm.
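The decoupling described above — a representation encoder pre-trained on plentiful unlabeled data, plus a small motion decoder fitted on only a few demonstrations — can be sketched with linear stand-ins. This is purely an illustration of the two-stage training split under stated assumptions (random data, a PCA-style encoder, a least-squares decoder), not the paper's neural architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: pre-train a haptic representation encoder on a large amount
# of unsupervised data (here: top principal directions of raw signals).
unlabeled = rng.normal(size=(500, 6))          # stand-in haptic readings
centered = unlabeled - unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
encode = lambda x: x @ vt[:2].T                # frozen 2-D representation

# Stage 2: few-shot LfD -- fit a small motion decoder on a handful of
# demonstrations, reusing the frozen encoder from stage 1.
demos_x = rng.normal(size=(5, 6))              # only 5 demonstrations
demos_y = demos_x @ rng.normal(size=(6, 3))    # stand-in motion targets
z = encode(demos_x)
decoder, *_ = np.linalg.lstsq(z, demos_y, rcond=None)
predict_motion = lambda x: encode(x) @ decoder
```

The decoder sees only 5 labeled examples, yet operates on a representation shaped by 500 unlabeled ones — the sample-efficiency argument behind the semi-supervised split.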
Supported by: EU project HARMONY, Japan Science and Technology (JST) Moonshot and The Alan Turing Institute
Paper 4 (Thurs 16 May | 16.30-18.00 | ThCT13-AX.1)
Keyhan Kouhkiloui Babarahmati, Mohammadreza Kasaei, Michael Mistry and Sethu Vijayakumar, Robust and Dexterous Dual-arm Tele-Cooperation using Adaptable Impedance Control, Proc. IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan (2024). [pdf] [video] [citation]
Abstract
In recent years, the need for robots to transition from isolated industrial tasks to shared environments, including human-robot collaboration and teleoperation, has become increasingly evident. Building on the foundation of Fractal Impedance Control (FIC) introduced in our previous work, this paper presents a novel extension to dual-arm tele-cooperation, leveraging the non-linear stiffness and passivity of FIC to adapt to diverse cooperative scenarios. Unlike traditional impedance controllers, our approach ensures stability without relying on energy tanks, as demonstrated in our prior research. In this paper, we further extend the FIC framework to bimanual operations, allowing for stable and smooth switching between different dynamic tasks without gain tuning. We also introduce a telemanipulation architecture that offers higher transparency and dexterity, addressing the challenges of signal latency and low-bandwidth communication. Through extensive experiments, we validate the robustness of our method, and the results confirm the advantages of the FIC approach over traditional impedance controllers, showcasing its potential for applications in planetary exploration and other scenarios requiring dexterous telemanipulation. This paper's contributions include the seamless integration of FIC into multi-arm systems, the ability to perform robust interactions in highly variable environments, and the provision of a comprehensive comparison with competing approaches, thereby significantly enhancing the robustness and adaptability of robotic systems.
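A defining property of FIC mentioned above is its non-linear stiffness, which keeps interaction forces bounded without energy tanks. The toy force profile below only illustrates that bounded-force idea with a saturating spring; it is not the actual fractal attractor formulation, and the parameter values are invented for the example.

```python
import math

def bounded_force(displacement, stiffness=200.0, f_max=20.0):
    """Saturating non-linear spring.

    Near the equilibrium it behaves like a linear spring of the given
    stiffness; for large displacements the force is capped at f_max,
    so interactions stay bounded regardless of the tracking error.
    """
    return f_max * math.tanh(stiffness * displacement / f_max)
```

A conventional linear impedance controller would command `stiffness * displacement` without bound, which is why stability in variable environments typically requires extra machinery such as energy tanks.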
Supported by: EU project HARMONY and The Alan Turing Institute
Paper 5 (Thurs 16 May | 16.30-18.00 | ThCT28.02)
Elle Miller, Maximilian Durner, Matthias Humt, Gabriel Quere, Wout Boerdijk, Ashok M. Sundaram, Freek Stulp and Jörn Vogel, Unknown Object Grasping for Assistive Robotics, Proc. IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan (2024). [pdf]
Abstract
We propose a novel pipeline for unknown object grasping in shared robotic autonomy scenarios. State-of-the-art methods for fully autonomous scenarios are typically learning-based approaches optimised for a specific end-effector that generate grasp poses directly from sensor input. In the domain of assistive robotics, we seek instead to utilise the user's cognitive abilities for enhanced satisfaction, grasping performance, and alignment with their high-level task-specific goals. Given a pair of stereo images, we perform unknown object instance segmentation and generate a 3D reconstruction of the object of interest. In shared control, the user then guides the robot end-effector across a virtual hemisphere centered around the object to their desired approach direction. A physics-based grasp planner finds the most stable local grasp on the reconstruction, and finally the user is guided by shared control to this grasp. In experiments on the DLR EDAN platform, we report a grasp success rate of 87% for 10 unknown objects, and demonstrate the method's capability to grasp objects in structured clutter and from shelves.
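The virtual-hemisphere constraint in the shared-control phase can be sketched geometrically: whatever the user commands, the end-effector is kept on a hemisphere of fixed radius above the object, so every reachable pose corresponds to a valid approach direction. The function below is an illustrative projection under that assumption, not the DLR EDAN implementation.

```python
import numpy as np

def constrain_to_hemisphere(ee_pos, obj_center, radius,
                            up=np.array([0.0, 0.0, 1.0])):
    """Project a commanded end-effector position onto a virtual
    hemisphere of the given radius centered on the object."""
    d = ee_pos - obj_center
    n = np.linalg.norm(d)
    d = up.copy() if n < 1e-9 else d / n   # degenerate input: straight up
    if d @ up < 0.0:                       # mirror into the upper half;
        d = d - 2.0 * (d @ up) * up        # reflection preserves the norm
    return obj_center + radius * d
```

Constraining the user's input to this surface reduces the shared-control problem to picking a direction, after which the physics-based planner selects the most stable grasp near that approach.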
Workshop - Cooking Robotics: Perception and motion planning [Link to Workshop Webpage]
The realisation of cooking robots presents numerous real-world challenges. This workshop explores the latest advancements in 'robots in cooking,' addressing hardware considerations, multimodal perception, motion planning and control, experimental methodologies, and benchmarking approaches.
Amid labour shortages and increased hygiene concerns in the post-COVID world, automation in the food industry and personalised cooking assistants have become essential. This workshop will bridge the gap between academia, industry, and professionals by inviting leading researchers, collaborating with a competition, the Food Topping Challenge, and showcasing robot demonstrations.
Invited speakers
Fumiya Iida (University of Cambridge, UK)
Tamim Asfour (Karlsruhe Institute of Technology, Germany)
Oliver Kroemer / Kevin Zhang (Carnegie Mellon University, United States)
Akihiko Yamaguchi (Tohoku University, Japan)
Dorsa Sadigh / Priya Sundaresan (Stanford University, United States)
Mako Miyatake (freelance chef, Japan)
Josie Hughes (EPFL, Switzerland)
Supported by: Sony AI, EU project HARMONY, Japan Science and Technology (JST) Moonshot and The Alan Turing Institute
Accepted Workshop Papers
Lei Yan, Theodoros Stouraitis, João Moura, Wenfu Xu, Michael Gienger, and Sethu Vijayakumar, Impact-Aware Bimanual Catching of Large-Momentum Objects, IEEE ICRA 2024 Workshop on Agile Robotics: From Perception to Dynamic Action (2024).
Russell Buchanan, Adrian Röfer, João Moura, Abhinav Valada and Sethu Vijayakumar, Online Estimation of Articulated Objects with Factor Graphs using Vision and Proprioceptive Sensing, IEEE ICRA 2024 Workshop on A Future Roadmap for Sensorimotor Skill Learning for Robot Manipulation (2024).
Keyhan Kouhkiloui Babarahmati, Mohammadreza Kasaei, Carlo Tiseo, Michael Mistry and Sethu Vijayakumar, TeleCoop-FIC: A Novel Dual-Arm Adaptable Impedance Control for Human-Robot Tele-Cooperation, IEEE ICRA 2024 Workshop on Exploring Role Allocation in Human Robot Co-Manipulation (2024).
Jiayi Wang, Sanghyun Kim, Teguh Santoso Lembono, Wenqian Du, Jaehyun Shim, Saeid Samadi, Ke Wang, Vladimir Ivan, Sylvain Calinon, Sethu Vijayakumar, and Steve Tonneau, Online Multi-contact Receding Horizon Planning via Value Function Approximation, IEEE ICRA 2024 Workshop on Humanoid Whole-body Control: From human motion understanding to humanoid locomotion (2024).
Marina Y. Aoyama, João Moura, Namiko Saito and Sethu Vijayakumar, Few-Shot Learning of Force-Based Motions From Demonstration Through Pre-training of Haptic Representation, IEEE ICRA 2024 Workshop on A Future Roadmap for Sensorimotor Skill Learning for Robot Manipulation (2024).