Camera-LiDAR registration using LiDAR feature layers and deep learning


Date

2024-10


Publisher

University of New Brunswick

Abstract

This thesis presents a new pipeline for reducing registration error between optical camera images and LiDAR data, integrating the strengths of both modalities to improve spatial awareness. The first part presents an approach that enhances aerial camera-LiDAR correspondences through weighted and combined LiDAR feature layers comprising intensity, depth, and bearing angle attributes. Correspondences are obtained with a 2D-2D Graph Neural Network pipeline and registered using a 6-parameter affine transformation model, achieving pixel-level accuracies that surpass its baselines. The second part introduces a new method for camera-LiDAR registration when the modalities come from different projection models, using combined LiDAR feature layers with state-of-the-art deep learning matching algorithms. We evaluate the SuperGlue and LoFTR models on terrestrial datasets from the TX5 scanner and from a custom-built, low-cost Mobile Mapping System named SLAMM-BOT, across diverse scenes. Registration is achieved using collinearity equations and RANSAC.
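As a minimal sketch only, not the thesis's actual implementation: the snippet below illustrates the general idea of the first part's registration step, fitting a 6-parameter (full) 2D affine transformation between matched camera pixels and LiDAR feature-layer pixels with RANSAC outlier rejection, here via OpenCV's estimateAffine2D. The point sets are synthetic stand-ins for correspondences that a GNN-based matcher would supply.

```python
import numpy as np
import cv2

# Hypothetical correspondences: camera-image pixels and the matching pixels
# in a rendered LiDAR feature layer (intensity/depth/bearing-angle composite).
rng = np.random.default_rng(0)
cam_pts = (rng.random((200, 2)) * 1000).astype(np.float32)
lidar_pts = (cam_pts + rng.normal(scale=1.5, size=cam_pts.shape)).astype(np.float32)

# Estimate the 6-parameter affine model with RANSAC to discard bad matches.
A, inlier_mask = cv2.estimateAffine2D(
    lidar_pts, cam_pts, method=cv2.RANSAC, ransacReprojThreshold=3.0
)

# Apply the 2x3 affine matrix and report the residual registration error.
ones = np.ones((lidar_pts.shape[0], 1), dtype=np.float32)
warped = np.hstack([lidar_pts, ones]) @ A.T
residuals = np.linalg.norm(warped - cam_pts, axis=1)
inliers = inlier_mask.ravel() == 1
print(f"inlier ratio: {inliers.mean():.2f}, "
      f"mean reprojection error: {residuals[inliers].mean():.2f} px")
```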
