IPAB Workshop - 01/02/2024

 

Ebtehal Alotaibi will be the first speaker; the details of her talk are as follows:

 

Title - Digital Twinning and Research Collaboration: Innovating Autonomous Food Delivery with Pixconvey

 

Abstract - Pixconvey, an Edinburgh-based startup pioneering autonomous food delivery, aims to transform the industry by integrating cutting-edge technologies and innovative approaches. Building on its track record and recent achievements, including securing two grants and partnering with KB Catering Service for trial operations, Pixconvey is committed to advancing research-driven solutions.

Central to Pixconvey's mission is the development of a cloud-based simulation tool that embodies the concept of digital twinning. The tool serves as a virtual replica of Pixconvey's autonomous delivery system, enabling researchers to explore, analyze, and optimize algorithmic performance in a simulated environment. Through the digital twin, algorithms validated in simulation can then be executed on the real test bed, supporting rapid experimentation and iteration.

The simulation tool also offers a comprehensive dashboard for monitoring individual robot performance, system utilization, and operational metrics in real time. By harnessing digital twinning, researchers can predict outcomes, optimize strategies, and drive advances in autonomous technology.
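To make the digital-twin idea above concrete, here is a minimal, hypothetical Python sketch of a simulated delivery fleet whose dashboard aggregates per-robot metrics. Every name here (Robot, Fleet, dashboard, the order-arrival model) is an illustrative assumption, not Pixconvey's actual tool or API.

```python
# Hypothetical sketch: a simulated fleet mirrors delivery robots, and a
# "dashboard" aggregates per-robot metrics. Names are illustrative only.
import random
from dataclasses import dataclass, field

@dataclass
class Robot:
    robot_id: int
    deliveries_done: int = 0
    busy_ticks: int = 0
    idle_ticks: int = 0

    def step(self, has_job: bool) -> None:
        # Advance the simulated robot by one time step.
        if has_job:
            self.busy_ticks += 1
            if random.random() < 0.1:  # a delivery completes ~every 10 busy ticks
                self.deliveries_done += 1
        else:
            self.idle_ticks += 1

@dataclass
class Fleet:
    robots: list = field(default_factory=list)

    def step(self, pending_orders: int) -> int:
        # Assign pending orders greedily, at most one per robot per tick.
        for robot in self.robots:
            robot.step(has_job=pending_orders > 0)
            pending_orders = max(0, pending_orders - 1)
        return pending_orders  # leftover backlog

    def dashboard(self) -> dict:
        # Aggregate the operational metrics a monitoring dashboard might show.
        total = sum(r.busy_ticks + r.idle_ticks for r in self.robots) or 1
        busy = sum(r.busy_ticks for r in self.robots)
        return {
            "deliveries": sum(r.deliveries_done for r in self.robots),
            "utilization": busy / total,
        }

if __name__ == "__main__":
    fleet = Fleet(robots=[Robot(robot_id=i) for i in range(5)])
    backlog = 0
    for tick in range(200):
        backlog = fleet.step(backlog + random.randint(0, 2))  # new orders arrive
    print(fleet.dashboard())
```

In a real digital twin, the per-tick dynamics would be driven by a traffic and robot model calibrated against the deployed system rather than coin flips; the point here is only the structure of mirroring a fleet and surfacing its metrics.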

 

 

Titas Anciukevicius will be the second speaker; the details of his talk are as follows:

 

Title - Denoising Diffusion via Image-based Rendering

 

Abstract - Generating and reconstructing 3D scenes is a challenging open problem, which requires synthesizing plausible content that is fully consistent in 3D space. While recent methods such as neural radiance fields excel at view synthesis and 3D reconstruction, they cannot synthesize plausible details in unobserved regions, since they lack a generative capability. Conversely, existing generative methods are typically not capable of reconstructing detailed, large-scale scenes in the wild, as they use limited-capacity 3D scene representations, require aligned camera poses, or rely on additional regularizers. In this work, we introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes. To achieve this, we make three contributions. First, we introduce a new neural scene representation that can efficiently and accurately represent large 3D scenes, dynamically allocating more capacity as needed to capture the details visible in each image. Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images and without any additional supervision signal, such as the masks, depths, or regularizers required by prior works. Third, we introduce a fast and unified architecture that supports conditioning on varying numbers of images, covering both a priori generation of 3D scenes and reconstruction from one or several images. We evaluate the model on several challenging datasets of real and synthetic images, and demonstrate superior results on generation, novel view synthesis, and 3D reconstruction.
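As background to the denoising-diffusion framework the abstract mentions, here is a minimal PyTorch sketch of the standard DDPM-style training objective: corrupt a clean sample with Gaussian noise at a random timestep, then train a network to predict that noise. This is generic background, not the paper's method (which learns a prior over a latent 3D scene representation supervised only through 2D images), and the denoiser signature is an assumption.

```python
# Minimal sketch of the generic DDPM training objective (epsilon prediction).
# NOT the paper's model; `denoiser(xt, t)` is an assumed interface.
import torch

def ddpm_loss(denoiser, x0, num_steps=1000):
    """One training step: noise clean data x0 at a random timestep t,
    then regress the denoiser's output onto the injected noise."""
    device = x0.device
    betas = torch.linspace(1e-4, 0.02, num_steps, device=device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retention

    # Sample a random timestep per batch element.
    t = torch.randint(0, num_steps, (x0.shape[0],), device=device)
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over dims

    noise = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * noise  # forward (noising) process

    return torch.nn.functional.mse_loss(denoiser(xt, t), noise)
```

In the paper's setting, x0 would correspond to the latent 3D scene representation, with supervision flowing through its rendered 2D images rather than the representation itself; here x0 is left as a generic tensor batch.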

 

 

Feb 01 2024

Ebtehal Alotaibi and Titas Anciukevicius

IF, G.03