IPAB Workshop - 21/04/2022
Speaker: Kiyoon Kim
Title: Capturing Temporal Information in a Single Frame: Channel Sampling Strategies for Action Recognition
Abstract: We address the problem of capturing temporal information for video classification in 2D networks, without increasing computational cost. Existing approaches focus on modifying the architecture of 2D networks (e.g. by including filters in the temporal dimension to turn them into 3D networks, or by using optical flow), which increases computational cost. Instead, we propose a novel sampling strategy in which we re-order the channels of the input video to capture short-term frame-to-frame changes. We observe that, without bells and whistles, the proposed sampling strategy improves performance on multiple architectures (e.g. TSN, TRN, and TSM) and datasets (CATER, Something-Something-V1 and V2) by up to 24% over the baseline of using the standard video input. In addition, our sampling strategies do not require training from scratch and do not increase the computational cost of training or testing. Given the generality of the results and the flexibility of the approach, we hope this can be widely useful to the video understanding community.
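The channel re-ordering idea can be sketched in a few lines. The snippet below is an illustrative example only, not necessarily the exact strategy used in the talk: for each frame it takes the R channel from the previous frame, G from the current frame, and B from the next frame, so a single "frame" encodes short-term motion while the input shape (and hence the network's cost) is unchanged.

```python
import numpy as np

def channel_sample(video: np.ndarray) -> np.ndarray:
    """Re-order colour channels across neighbouring frames.

    Illustrative sketch: `video` is assumed to be a (T, H, W, 3) array.
    Output frame t gets R from frame t-1, G from frame t, and B from
    frame t+1 (clamped at the sequence boundaries).
    """
    T = video.shape[0]
    prev = np.clip(np.arange(T) - 1, 0, T - 1)  # index of frame t-1
    nxt = np.clip(np.arange(T) + 1, 0, T - 1)   # index of frame t+1
    out = np.empty_like(video)
    out[..., 0] = video[prev, ..., 0]  # R from previous frame
    out[..., 1] = video[..., 1]        # G from current frame
    out[..., 2] = video[nxt, ..., 2]   # B from next frame
    return out
```

Because the transformation happens at the input, a pretrained 2D backbone can be fine-tuned on the re-ordered frames without architectural changes.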
Speaker: Ran Long
Title: RGB-D SLAM in Indoor Planar Environments with Multiple Large Dynamic Objects
Abstract: In this work, I will present a novel dense RGB-D SLAM approach for dynamic planar environments that enables simultaneous multi-object tracking, camera localisation and background reconstruction. Previous dynamic SLAM methods either rely on semantic segmentation to detect dynamic objects directly, or assume that dynamic objects occupy a smaller proportion of the camera view than the static background and can therefore be removed as outliers. Our approach, however, enables dense SLAM even when the camera view is largely occluded by multiple dynamic objects, with the aid of a camera motion prior. The dynamic planar objects are separated by their different rigid motions and tracked independently. The remaining dynamic non-planar areas are removed as outliers and not mapped into the background. I will also demonstrate that our approach outperforms the state-of-the-art methods in terms of localisation, mapping, dynamic segmentation and object tracking.
Speaker: Robert Mitchell
Title: Multimodal cue integration in insect orientation
Abstract: Travelling in a straight line is more taxing than one would think. Without an allothetic orientation cue, any system will eventually fail due to motor noise. Ball-rolling dung beetles are capable of very robust straight-line orientation behaviour using a variety of different orientation cues, stored in an orientation 'snapshot'. Recent work has shown that these cues are likely integrated using a form of vector summation. I will discuss a neural model of the insect head-direction circuit, how it relates to vector summation, a possible substrate for the orientation snapshot, and the potential capacity for storing cue meta-information in the snapshot.
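The vector-summation account of cue integration can be sketched as follows. Each cue contributes a direction and a weight (its reliability); the integrated heading is the angle of the weighted sum of unit vectors, so more reliable cues dominate. This is a toy sketch of the general principle, not the neural circuit model discussed in the talk, and the cue values are hypothetical.

```python
import math

def integrate_cues(cues):
    """Combine orientation cues by weighted vector summation.

    `cues` is a list of (direction_radians, weight) pairs; weights
    stand in for cue reliability. Returns the integrated heading as
    the angle of the resultant vector.
    """
    x = sum(w * math.cos(a) for a, w in cues)  # resultant x-component
    y = sum(w * math.sin(a) for a, w in cues)  # resultant y-component
    return math.atan2(y, x)
```

For example, two equally reliable cues at 0 and 90 degrees integrate to a 45-degree heading, while up-weighting one cue pulls the heading towards it.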
G.03, IF