A Light Field Front-End for Robust SLAM in Dynamic Environments
Robots are increasingly expected to operate in urban environments populated by potentially dynamic entities such as people, furniture, and automobiles. Dynamic objects pose challenges to visual SLAM algorithms by introducing errors into the front-end. This paper presents a Light Field SLAM front-end that is robust to dynamic environments.

A light field captures a bundle of light rays emerging from a single point in space, allowing us to see through dynamic objects that occlude the static background via Synthetic Aperture Imaging (SAI). We detect a priori dynamic objects using semantic segmentation and perform semantic-guided SAI on a light field acquired from a linear camera array. We simultaneously estimate both the depth map and the refocused image of the static background in a single step, eliminating the need for static-scene initialization. A GPU implementation of the algorithm runs at near-real-time speeds of 4 fps. We demonstrate improved robustness and accuracy of pose estimation in dynamic environments by comparing our method against state-of-the-art SLAM algorithms.
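The core idea of semantic-guided SAI can be illustrated with a minimal shift-and-average sketch: each view from the linear array is shifted by a disparity proportional to its baseline, and pixels flagged as dynamic by segmentation are excluded from the average so only the static background contributes. The function name, the simple integer-shift model, and the grayscale single-disparity setup below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of semantic-guided synthetic aperture refocusing
# for a linear camera array; not the paper's implementation.
import numpy as np

def refocus(images, masks, baselines, disparity):
    """Shift-and-average refocusing at one disparity hypothesis,
    skipping pixels flagged dynamic by semantic segmentation.

    images    : list of (H, W) grayscale views from the linear array
    masks     : list of (H, W) bool arrays, True where a priori dynamic
    baselines : per-camera offsets along the array axis
    disparity : pixel shift per unit of baseline (depth hypothesis)
    """
    H, W = images[0].shape
    acc = np.zeros((H, W))   # sum of static-pixel intensities
    cnt = np.zeros((H, W))   # number of static contributions per pixel
    for img, mask, b in zip(images, masks, baselines):
        shift = int(round(disparity * b))      # align view to focal plane
        shifted = np.roll(img, shift, axis=1)
        valid = ~np.roll(mask, shift, axis=1)  # drop dynamic occluders
        acc[valid] += shifted[valid]
        cnt[valid] += 1
    # Average over static contributions only; unseen pixels stay 0.
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```

Sweeping the disparity hypothesis and selecting, per pixel, the value at which the unmasked views agree most would yield the depth map and the refocused background image together, which is the single-step joint estimation the abstract describes.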

Affiliated Faculty: Hanumant Singh
