Papers accepted at IROS 2024

Two papers and one workshop have been accepted at the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), held in Abu Dhabi, UAE.

 

Andreas Sochopoulos, Michael Gienger and Sethu Vijayakumar, Learning Deep Dynamical Systems using Stable Neural ODEs, Proc. 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, UAE (2024). [pdf] [video] [citation]

Learning complex trajectories from demonstrations in robotic tasks has been effectively addressed through the use of Dynamical Systems (DS). State-of-the-art DS learning methods ensure stability of the generated trajectories; however, they have three shortcomings: a) the DS is assumed to have a single attractor, which limits the diversity of tasks it can achieve, b) state derivative information is assumed to be available in the learning process, and c) the state of the DS is assumed to be measurable at inference time. We propose a class of provably stable latent DS with possibly multiple attractors that inherit the training methods of Neural Ordinary Differential Equations, thus dropping the dependency on state derivative information. A diffeomorphic mapping for the output and a loss that captures time-invariant trajectory similarity are proposed. We validate the efficacy of our approach through experiments conducted on a public dataset of handwritten shapes and within a simulated object manipulation task.
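To give a flavour of the trajectory-generation idea, the toy sketch below rolls out a simple stable dynamical system ẋ = f(x) towards a single attractor by Euler integration. This is only an illustration of the DS concept; the paper's actual contribution — a learned, provably stable *latent* DS with possibly multiple attractors trained as a Neural ODE — is not reproduced here, and all names below are made up for the example.

```python
import numpy as np

def rollout(f, x0, dt=0.01, steps=1000):
    """Integrate x_dot = f(x) from x0 with explicit Euler steps;
    returns the full trajectory as a (steps+1, dim) array."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = traj[-1]
        traj.append(x + dt * f(x))
    return np.stack(traj)

# A hand-designed stable vector field: contraction towards x_star.
# (A learned DS would replace this with a neural network whose
# stability is enforced by construction.)
x_star = np.array([1.0, -0.5])
f = lambda x: -2.0 * (x - x_star)

traj = rollout(f, x0=[3.0, 2.0])
print(np.allclose(traj[-1], x_star, atol=1e-3))  # True: converges to the attractor
```

Because stability is a property of the vector field itself, any initial state produces a trajectory that ends at the attractor — the property the paper's construction guarantees for learned fields.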

 

Namiko Saito, Joao Moura, Hiroki Uchida and Sethu Vijayakumar, Latent Object Characteristics Recognition with Visual to Haptic-Audio Cross-modal Transfer Learning, Proc. 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, UAE (2024). [pdf] [video] [citation]

Recognising the characteristics of objects while a robot handles them is crucial for adjusting motions that ensure stable and efficient interactions with containers. Before realising stable and efficient robot motions for handling/transferring the containers, this work aims to recognise the unobservable latent object characteristics. While vision is commonly used for object recognition by robots, it is ineffective for detecting hidden objects. However, recognising objects indirectly using other sensors is a challenging task. To address this challenge, we propose a cross-modal transfer learning approach from vision to haptic-audio. We initially train the model with vision, directly observing the target object. Subsequently, we transfer the latent space learned from vision to a second module, trained only with haptic-audio and motor data. This transfer learning framework facilitates the representation of object characteristics using indirect sensor data, thereby improving recognition accuracy. For evaluating the recognition accuracy of our proposed learning framework, we selected shape, position, and orientation as the object characteristics. Finally, we demonstrate online recognition of both trained and untrained objects using the humanoid robot Nextage Open.
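The two-stage transfer idea can be sketched with a deliberately simplified linear toy model: a "vision encoder" defines a latent space, and a "haptic-audio encoder" is then fitted to land in that same latent space without ever seeing the latent targets directly from vision-free supervision. This is a hypothetical illustration, not the authors' network architecture; the dimensions, least-squares encoders, and variable names are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_vis, d_hap, d_lat = 200, 16, 8, 4

# Ground-truth latent object characteristics (e.g. shape/position codes).
Z = rng.normal(size=(n, d_lat))

# Simulated paired observations of the same objects in two modalities.
A_vis = rng.normal(size=(d_lat, d_vis))
A_hap = rng.normal(size=(d_lat, d_hap))
X_vis = Z @ A_vis + 0.01 * rng.normal(size=(n, d_vis))
X_hap = Z @ A_hap + 0.01 * rng.normal(size=(n, d_hap))

# Stage 1: fit the "vision encoder" with direct observation of the
# object characteristics (stands in for supervised vision training).
W_vis, *_ = np.linalg.lstsq(X_vis, Z, rcond=None)
Z_vis = X_vis @ W_vis

# Stage 2: transfer — fit the "haptic-audio encoder" to reproduce the
# vision-derived latents on paired data, never accessing Z itself.
W_hap, *_ = np.linalg.lstsq(X_hap, Z_vis, rcond=None)
Z_hap = X_hap @ W_hap

# The haptic-audio latents now live in the vision latent space, so a
# recogniser trained on Z_vis transfers to haptic-audio input.
err = np.linalg.norm(Z_hap - Z_vis) / np.linalg.norm(Z_vis)
print(err < 0.1)
```

The design point the sketch captures is that the second modality needs only *paired* data with vision during training; at deployment, recognition runs from haptic-audio alone, which is what enables recognising objects that vision cannot see.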

 

Collecting, Managing and Utilizing Data through Embodied Robots - Workshop

In recent years, embodied robots have played an increasingly significant role in real-world scenarios such as daily tasks, healthcare, caregiving, and agriculture, interacting with humans and environments. Nevertheless, the collection and application of sensorimotor data in real-world settings pose challenges, given the associated high costs, inherent random noise, requisite tailored processing, and potential inclusion of personal data. This workshop provides a platform to discuss efficient data collection, high-quality data assurance, sophisticated processing and utilization, and ethical considerations.

Workshop webpage

Further information

IROS 2024