Papers accepted at IROS 2021 Conference

Three SLMC papers accepted at IROS 2021

Three SLMC papers have been accepted at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021), to be held in Prague, Czech Republic.

 

  • Wolfgang Xaver Merkt, Vladimir Ivan, Traiko Dinev, Ioannis Havoutis and Sethu Vijayakumar, Memory Clustering using Persistent Homology for Multimodality and Discontinuity-Sensitive Learning of Optimal Control Warm Starts  [pdf] [DOI] [video]

Presented at: IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic (2021)

Shooting methods are an efficient approach to solving nonlinear optimal control problems. As they use local optimization, they exhibit favorable convergence when initialized with a good warm-start but may not converge at all if provided with a poor initial guess. Recent work has focused on providing an initial guess from a learned model trained on samples generated during an offline exploration of the problem space. However, in practice the solutions contain discontinuities introduced by system dynamics or the environment. Additionally, in many cases multiple equally suitable, i.e., multi-modal, solutions exist to solve a problem. Classic learning approaches smooth across the boundary of these discontinuities and thus generalize poorly. In this work, we apply tools from algebraic topology to extract information on the underlying structure of the solution space. In particular, we introduce a method based on persistent homology to automatically cluster the dataset of precomputed solutions to obtain different candidate initial guesses. We then train a Mixture-of-Experts within each cluster to predict state and control trajectories to warm-start the optimal control solver and provide a comparison with modality-agnostic learning. We demonstrate our method on a cart-pole toy problem and a quadrotor avoiding obstacles, and show that clustering samples based on inherent structure improves the warm-start quality.
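The clustering step described in the abstract can be illustrated with a toy example: in zero-dimensional persistent homology, merging connected components in order of increasing distance corresponds to deaths of homology classes, and stopping before the most persistent merges yields clusters. The minimal NumPy sketch below is an illustrative single-linkage/union-find version with a fixed cluster count, not the authors' implementation:

```python
import numpy as np

def persistence_clusters(points, n_clusters):
    """Toy 0-dimensional persistent-homology clustering:
    merge components along edges sorted by length (the
    filtration); keeping the n_clusters components that
    survive longest is equivalent to cutting the merge
    tree before its most persistent merges."""
    n = len(points)
    if n_clusters >= n:
        return np.arange(n)
    # pairwise Euclidean distances between all samples
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # candidate edges sorted by length (the filtration order)
    edges = sorted(
        (d[i, j], i, j) for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    merges = 0
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj      # one 0-dimensional class dies here
            merges += 1
            if merges == n - n_clusters:
                break            # stop before the most persistent merges
    # relabel component roots to 0..n_clusters-1
    roots = {}
    labels = np.empty(n, dtype=int)
    for k in range(n):
        labels[k] = roots.setdefault(find(k), len(roots))
    return labels
```

In the paper's setting, each "point" would be a precomputed trajectory, and a separate Mixture-of-Experts warm-start model is then trained per cluster.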

 

  • Huayan Zhang, Tianwei Zhang, Tin Lun Lam and Sethu Vijayakumar, PoseFusion2: Simultaneous Background Reconstruction and Human Shape Recovery in Real-time [pdf] [video] [digest]

Presented at: IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic (2021)

Dynamic environments that include unstructured moving objects pose a hard problem for Simultaneous Localization and Mapping (SLAM) performance. The motion of rigid objects can typically be tracked by exploiting their texture and geometric features. However, humans moving in the scene are often among the most important, interactive targets, and they are very hard to track and reconstruct robustly due to their non-rigid shapes. In this work, we present a fast, learning-based human object detector to isolate the dynamic human objects and realise a real-time dense background reconstruction framework. We go further by estimating and reconstructing the human pose and shape. The final output environment maps not only provide the dense static backgrounds but also contain the dynamic human meshes and their trajectories. Our Dynamic SLAM system runs at around 26 frames per second (fps) on GPUs; with accurate human pose estimation additionally enabled, it runs at up to 10 fps.
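The core background-reconstruction idea, detecting the human and excluding those pixels before fusing each frame into the static map, can be sketched as a per-pixel masked average. This is a deliberately simplified stand-in for dense fusion; the boolean human masks are assumed to come from some real-time person segmenter, which is an assumption rather than the paper's specific network:

```python
import numpy as np

def fuse_background(depth_frames, human_masks):
    """Fuse a sequence of depth frames into a static background
    map by averaging, per pixel, only the observations that are
    not covered by a detected human. A toy sketch of masked
    background reconstruction, not the paper's fusion pipeline."""
    acc = np.zeros(depth_frames[0].shape)
    cnt = np.zeros(depth_frames[0].shape)
    for depth, mask in zip(depth_frames, human_masks):
        static = ~mask                  # pixels believed to be background
        acc[static] += depth[static]    # accumulate static depth only
        cnt[static] += 1                # count observations per pixel
    # average where observed at least once; 0 marks never-seen pixels
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

Pixels occluded by the human in one frame are filled in from other frames where the background is visible, which is how the dense static map stays complete despite people moving through the scene.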

 

  • Tianwei Zhang, Huayan Zhang, Xiaofei Li, Junfeng Chen, Tin Lun Lam and Sethu Vijayakumar, AcousticFusion: Fusing Sound Source Localization to Visual SLAM in Dynamic Environments [pdf] [video] [digest]

Presented at: IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS 2021), Prague, Czech Republic (2021)

Dynamic objects in the environment, such as people and other agents, lead to challenges for existing simultaneous localization and mapping (SLAM) approaches. To deal with dynamic environments, computer vision researchers usually apply learning-based object detectors to remove these dynamic objects. However, these object detectors are computationally too expensive for mobile robot on-board processing. In practical applications, these objects emit sounds that can be effectively detected by on-board sound source localization. The directional information of the sound source object can be efficiently obtained by direction of arrival (DoA) estimation, but depth estimation is difficult. Therefore, in this paper, we propose a novel audio-visual fusion approach that fuses the sound source direction into the RGB-D image and thus removes the effect of dynamic obstacles on the multi-robot SLAM system. Experimental results of multi-robot SLAM in different dynamic environments show that the proposed method obtains very stable self-localization results using very small computational resources.
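The DoA estimation step mentioned in the abstract can be sketched with the standard GCC-PHAT method for a two-microphone pair: whiten the cross-spectrum, locate the peak time difference of arrival, and convert it to an angle. This is a generic textbook approach, assumed here for illustration; the paper's microphone array and estimator may differ:

```python
import numpy as np

def gcc_phat_doa(sig_l, sig_r, fs, mic_dist, c=343.0):
    """Estimate the direction of arrival (degrees) for a
    two-microphone pair using GCC-PHAT. `fs` is the sample
    rate in Hz, `mic_dist` the microphone spacing in metres,
    and `c` the speed of sound in m/s."""
    n = len(sig_l) + len(sig_r)
    SL = np.fft.rfft(sig_l, n=n)
    SR = np.fft.rfft(sig_r, n=n)
    cross = SL * np.conj(SR)
    cross /= np.abs(cross) + 1e-12           # PHAT whitening
    cc = np.fft.irfft(cross, n=n)
    # restrict the search to physically possible lags
    max_lag = max(1, int(fs * mic_dist / c))
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    tau = (np.argmax(np.abs(cc)) - max_lag) / fs
    # TDOA -> angle relative to the array broadside
    return np.degrees(np.arcsin(np.clip(tau * c / mic_dist, -1.0, 1.0)))
```

As the abstract notes, this gives a cheap bearing to the noisy dynamic object but no depth, which is why the paper fuses the direction estimate with the RGB-D stream instead of using it alone.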

 

Further Information

IROS 2021