A Multi-Feature Fusion Using Deep Transfer Learning for Earthquake Building Damage Detection


Taylor and Francis


With the recent tremendous improvements in the spatial, spectral, and temporal resolutions of remote sensing imaging systems, there has been a dramatic increase in the applications of remote sensing images. Amongst the different applications of very high-resolution remote sensing images, damage detection for rapid emergency response is one of the most challenging. Recently, deep learning frameworks have enhanced the performance of earthquake damage detection through the automatic extraction of strong deep features. However, most existing studies in this area focus on nadir satellite images or orthophotos, which limits the available data sources. This limitation reduces the temporal resolution of the usable imagery, a serious issue given the emergency nature of damage detection applications. The objective of this study is to present a multimodal integrated structure that combines orthophoto and off-nadir images for earthquake building damage detection. In this context, a multi-feature fusion method based on deep transfer learning is presented, which comprises four steps: pre-processing, deep feature extraction, deep feature fusion, and transfer learning. To validate the presented framework, two comparative experiments are conducted on the 2010 Haiti earthquake using pre- and post-event off-nadir satellite images collected by the WorldView-2 (WV-2) satellite platform, as well as a post-event airborne orthophoto. The results demonstrate considerable advantages in identifying damaged and non-damaged buildings, with an overall accuracy of over 83%.
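The abstract outlines a four-step pipeline (pre-processing, deep feature extraction, deep feature fusion, transfer learning). The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of the fusion idea under assumed names and dimensions: each modality (orthophoto and off-nadir patch) passes through a stand-in feature extractor, and the per-modality deep features are fused by concatenation before classification.

```python
import numpy as np

def extract_deep_features(patch, n_features=128, seed=0):
    # Stand-in for a pretrained CNN backbone (hypothetical): projects a
    # flattened image patch to a fixed-length "deep feature" vector with
    # a random linear map followed by a ReLU nonlinearity.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((patch.size, n_features)) / np.sqrt(patch.size)
    return np.maximum(patch.ravel() @ w, 0.0)

def fuse_features(ortho_feat, offnadir_feat):
    # Simple late fusion by concatenation: the fused vector carries
    # information from both the orthophoto and the off-nadir modality.
    return np.concatenate([ortho_feat, offnadir_feat])

# Illustrative inputs: 64x64 RGB patches for one candidate building.
ortho_patch = np.random.rand(64, 64, 3)     # post-event airborne orthophoto
offnadir_patch = np.random.rand(64, 64, 3)  # pre-/post-event WV-2 off-nadir image

fused = fuse_features(
    extract_deep_features(ortho_patch, seed=1),
    extract_deep_features(offnadir_patch, seed=2),
)
print(fused.shape)  # fused 256-dimensional feature vector
```

In the transfer-learning step described in the abstract, the feature extractor would be a network pretrained on a large source dataset rather than a random projection, and the fused vector would feed a damaged/non-damaged classifier.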