Browsing by Author "Jabari, Shabnam"
Now showing 1 - 7 of 7
Item: 3D information supported urban change detection using multi-angle and multi-sensor imagery (University of New Brunswick, 2015). Jabari, Shabnam; Zhang, Yun.

This PhD research focuses on urban change detection using very high resolution (VHR) imagery acquired by different sensors (airborne and satellite) and different view angles. The high level of detail in VHR images makes urban change detection possible; at the same time, the projection of complex 3D urban structures into 2D image space complicates the detection of changes. In general, change detection has two major steps: (I) establishing a relation between bi-temporal images so that corresponding pixels/segments are matched, called co-registration; and (II) comparing the spectral properties of the co-registered pixels/segments to detect changes. Regarding Step I, an accurate global co-registration between bi-temporal images acquired by different sensors is not achievable in urban environments because of the differing geometric distortions in the imagery; consequently, most studies in this field avoid multi-sensor and multi-view-angle images. This study proposes a novel co-registration method, called "patch-wise co-registration", to address the problem. It integrates the sensor model parameters into the co-registration process to relate the corresponding pixels and, by extension, the segments (patches). In Step II, the brightness values of the matching pixels/segments are compared, and variations in those values identify the changes. However, other factors also cause brightness variations. One is the difference in solar illumination angles between the bi-temporal images: in urban environments, object geometry, such as steeply sloped roofs, alters the local solar illumination angle and therefore the brightness of the associated pixels. This effect is corrected using irradiance topographic correction methods. Finally, the corrected irradiance of the co-registered patches is compared to detect changes using the Multivariate Alteration Detection (MAD) transform. In the last stage of the change detection process, "from-to" information is generally produced by checking the classification labels of the pixels/segments (patches); this study proposes a fuzzy rule-based image classification methodology that improves the classification results compared to crisp thresholds and accordingly increases the change detection accuracy. In total, the key results achieved in this research are:

I. Including off-nadir images and airborne images as bi-temporal combinations in change detection;
II. Solving the geometric distortions in the co-registration step, caused by the varying view angles of the images, by introducing patch-wise co-registration;
III. Combining a robust spectral comparison method, the MAD transform, with patch-wise change detection;
IV. Removing the effect of illumination-angle differences on urban objects to improve change detection results;
V. Improving classification results by using fuzzy thresholds in the image classification step.

The outputs of this research provide an opportunity to use the huge amount of archived VHR imagery for automatic and semi-automatic change detection. Automatic classification of images, especially in urban areas, remains a challenge because of the spectral similarity between urban classes such as roads and buildings; generating accurate "from-to" information therefore remains a topic for future research.
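As a rough illustration of the spectral-comparison step, here is a minimal sketch of MAD-based change flagging on co-registered patches; the data layout (one patch per row) and the significance level are assumptions for this sketch, not the thesis implementation:

```python
# Minimal sketch of MAD change detection: canonical correlation between the
# two epochs, MAD variates as variate differences, chi-square thresholding.
from scipy.stats import chi2
from sklearn.cross_decomposition import CCA

def mad_change_mask(X1, X2, n_components=4, alpha=0.01):
    """X1, X2: patch-by-band spectral matrices at epochs 1 and 2."""
    cca = CCA(n_components=n_components)
    U, V = cca.fit_transform(X1, X2)      # canonical variates of the two epochs
    mad = U - V                           # MAD variates: variate differences
    z = (mad / mad.std(axis=0)) ** 2      # standardise each MAD variate
    stat = z.sum(axis=1)                  # ~ chi-square with n_components dof
    return stat > chi2.ppf(1 - alpha, df=n_components)   # True = changed patch
```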
Item: A Multi-Feature Fusion Using Deep Transfer Learning for Earthquake Building Damage Detection (Taylor and Francis, 2021). Abdi, Ghasem; Jabari, Shabnam.

With the recent tremendous improvements in the spatial, spectral, and temporal resolutions of remote sensing imaging systems, there has been a dramatic increase in the applications of remote sensing images. Among the applications of very high-resolution remote sensing images, damage detection for rapid emergency response is one of the most challenging. Recently, deep learning frameworks have enhanced the performance of earthquake damage detection through the automatic extraction of strong deep features. However, most existing studies in this area use nadir satellite images or orthophotos, which limits the available data sources and decreases the temporal resolution of the usable images, a serious issue given the emergency nature of damage detection. The objective of this study is to present a multimodal integrated structure that combines orthophoto and off-nadir images for earthquake building damage detection. In this context, a multi-feature fusion method based on deep transfer learning is presented, comprising four steps: pre-processing, deep feature extraction, deep feature fusion, and transfer learning. To validate the presented framework, two comparative experiments are conducted on the 2010 Haiti earthquake using pre- and post-event off-nadir satellite images collected by the WorldView-2 (WV-2) satellite platform, as well as a post-event airborne orthophoto. The results demonstrate considerable advantages in identifying damaged and non-damaged buildings, with an overall accuracy of over 83%.

Item: Camera-LiDAR registration using LiDAR feature layers and deep learning (University of New Brunswick, 2024-10). Leahy, Jennifer; Jabari, Shabnam.

This thesis presents a new pipeline that reduces registration error between optical camera images and LiDAR data, integrating the strengths of both modalities to improve spatial awareness. The first part presents an approach that enhances aerial camera-LiDAR correspondences through weighted and combined LiDAR feature layers comprising intensity, depth, and bearing-angle attributes. Correspondences are obtained using a 2D-2D Graph Neural Network pipeline and then registered using a 6-parameter affine transformation model, demonstrating pixel-level accuracies that improve on the baselines. The second part introduces a new method for camera-LiDAR registration when the modalities come from different projection models, using combined LiDAR feature layers with state-of-the-art deep learning matching algorithms. The SuperGlue and LoFTR models are evaluated on terrestrial datasets from the TX5 scanner and from a custom-made, low-cost Mobile Mapping System named SLAMM-BOT, across diverse scenes. Registration is achieved using collinearity equations and RANSAC.
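For the multi-feature fusion paper above, a minimal PyTorch sketch of the fusion and transfer-learning idea; the backbone choice, layer sizes, and shared two-view weights are assumptions for this sketch, not the published architecture:

```python
# Deep features from the orthophoto and the off-nadir patch come from a frozen
# pre-trained backbone, are fused by concatenation, and feed a trainable head.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class FusionDamageNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Identity()            # expose the 2048-d pooled features
        for p in backbone.parameters():        # transfer learning: freeze backbone
            p.requires_grad = False
        self.backbone = backbone
        self.head = nn.Sequential(             # only this small head is trained
            nn.Linear(2 * 2048, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, ortho, off_nadir):
        fused = torch.cat([self.backbone(ortho),            # feature extraction
                           self.backbone(off_nadir)], dim=1)  # feature fusion
        return self.head(fused)                # damaged / non-damaged logits
```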
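And for the camera-LiDAR registration thesis, a hedged sketch of two of its ingredients, a weighted LiDAR feature layer and a RANSAC-filtered 6-parameter affine fit to matched keypoints; the blend weights and the reprojection threshold are placeholders, not the thesis values:

```python
import cv2
import numpy as np

def lidar_feature_layer(intensity, depth, bearing, w=(0.4, 0.3, 0.3)):
    """Blend normalised intensity, depth, and bearing-angle rasters into one image."""
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-9)
    layer = w[0] * norm(intensity) + w[1] * norm(depth) + w[2] * norm(bearing)
    return (255 * layer).astype(np.uint8)

def register_affine(cam_pts, lidar_pts):
    """cam_pts, lidar_pts: matched Nx2 keypoint arrays (e.g. from a GNN matcher)."""
    A, inliers = cv2.estimateAffine2D(cam_pts, lidar_pts, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return A, inliers.ravel().astype(bool)     # 2x3 affine and inlier mask
```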
Item: Crack detection and dimensional assessment using smartphone sensors and deep learning (University of New Brunswick, 2024-02). Tello-Gil, Carlos; Jabari, Shabnam; Waugh, Lloyd.

This thesis addresses the critical challenge of civil infrastructure deteriorating through natural processes and aging, emphasizing the importance of early detection for public safety. Surface cracks in concrete structures are vital indicators of deterioration, prompting the development of automatic defect detection using deep learning; manual inspections, the basis of structural health monitoring, struggle with the complexity of crack patterns. The first part of this thesis focuses on training a Mask R-CNN network for crack detection, using augmented real-world data to enhance accuracy. The second part introduces a cost-effective methodology that uses smartphone imagery and 3D sensor data for automated crack detection and precise dimension assessment with YOLOv8 and Mask R-CNN. This research advances a multi-modal approach that combines LiDAR observations with image masks for accurate 3D crack measurements, establishes a pipeline for dimensional assessment, and evaluates state-of-the-art CNN-based networks for crack detection in real-life images.

Item: Dynamic flood mapping using hydrological modeling and machine learning (University of New Brunswick, 2021). Esfandiari, Morteza; Jabari, Shabnam; Coleman, David.

Flooding is one of the most devastating natural hazards around the globe. With access to abundant data sources such as Light Detection and Ranging (LiDAR) and satellite images in Geographic Information Systems (GIS), it is possible to estimate the geospatial extent of floods, and machine learning now plays an essential role in GIS applications and flood mapping. The aim of this study was to produce a precise flood model by improving a hydrological model called Height Above Nearest Drainage (HAND) using one of the most robust machine learning algorithms, Random Forest (RF). First, the essential conditioning factors contributing to flooding were identified using optical satellite images as a reference. Then, using the most efficient conditioning factors, an RF classifier was trained to predict flooded areas with training data selected using the HAND model. Because the HAND model carries uncertainties in flood mapping, the Random Sample Consensus (RANSAC) paradigm is used along with the essential conditioning factors to remove outliers. Since the proposed method uses the HAND model predictions as pseudo training points, it is called flood mapping using Pseudo-Supervised Random Forest (PS-RF). The accuracy of PS-RF for flood extent prediction was tested on five flood events in Fredericton, NB, and one event in Ottawa, ON, confirming that PS-RF improves the flood mapping results of the HAND model without requiring any ground-truth training data.
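For the crack-dimension thesis above, a toy sketch of one plausible measurement step under a simple pinhole-camera assumption; the function and its geometry are illustrative, not the published pipeline:

```python
# Crack width in pixels from a predicted binary mask via a distance transform,
# scaled to millimetres using LiDAR depth and the camera focal length.
import cv2
import numpy as np

def crack_width_mm(mask, depth_m, focal_px):
    """mask: binary crack mask; depth_m: LiDAR range to the surface;
    focal_px: focal length in pixels. Returns the widest opening in mm."""
    dist = cv2.distanceTransform(mask.astype(np.uint8), cv2.DIST_L2, 5)
    width_px = 2.0 * dist.max()                 # widest inscribed-circle diameter
    mm_per_px = 1000.0 * depth_m / focal_px     # pinhole ground-sampling distance
    return width_px * mm_per_px
```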
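And for the PS-RF study, a condensed sketch of the pseudo-supervision loop; the HAND threshold and model settings are placeholders, not the study's values:

```python
# HAND supplies pseudo labels, RANSAC prunes label outliers against the
# conditioning factors, and a Random Forest learns the final flood map.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import RANSACRegressor

def ps_rf(factors, hand, hand_thresh=2.5):
    """factors: per-pixel conditioning factors; hand: HAND values per pixel."""
    pseudo = (hand < hand_thresh).astype(int)         # HAND-based pseudo labels
    ransac = RANSACRegressor().fit(factors, hand)     # relate factors to HAND
    keep = ransac.inlier_mask_                        # drop inconsistent samples
    rf = RandomForestClassifier(n_estimators=300)
    rf.fit(factors[keep], pseudo[keep])               # train on pseudo labels only
    return rf.predict(factors)                        # 1 = predicted flooded
```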
Item: Level of detail 2, 3D city model creation using a semi-automatic hybrid-driven approach (University of New Brunswick, 2021-11). Krafczek, Mitchell; Jabari, Shabnam.

Today 55% of the world's population lives in urban areas, a proportion expected to increase to 68% by 2050 (UN, 2018). Many cities already face challenges in meeting the needs of their growing urban populations, and basic services have become overwhelmed and inaccessible to many. 3D city models can be used to prepare for the future city, enabling informed analysis and sustainable development.

This research proposes a new semi-automatic hybrid-driven method for creating a LOD2 3D city model. The model will serve as a base for future research and analysis and help guide the future city as urbanization continues to grow. Each 3D city model is assigned a level of detail (LOD), a standard set out by the Open Geospatial Consortium (OGC). The LOD indicates the overall usability of each model, determining how it can be used for informed analysis. There are five pre-defined LODs (LOD0-4), with LOD0 being a two-dimensional (2D) building footprint and LOD4 a realistic building model. Currently, LOD0 and LOD1 are readily available but limited in overall usability. LOD2, 3, and 4 are better suited to informed analysis but generally require massive amounts of data and powerful computers to produce; for practical reasons they can only be created over small areas, which limits their usability. Therefore, this thesis project focuses primarily on LOD2 creation methods. There are currently three approaches to creating 3D city models: model-driven, data-driven, and hybrid-driven. Model-driven approaches are the fastest and can create a 3D city model over large areas; however, they produce inaccurate models when the data do not fit one of the pre-defined libraries. Data-driven approaches are more accurate but often require large datasets and complex computer systems, and are typically used only to re-create small sections of cities. Hybrid methods combine the model- and data-driven approaches, bringing together the strengths of both. For these reasons, this study focuses on the semi-automatic creation of 3D city models through a hybrid method.
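A toy illustration of the hybrid-driven decision; the template library, scoring, and threshold are invented for this sketch, not the thesis values:

```python
# Try the model-driven route first by scoring a small roof-template library
# against the LiDAR points of one footprint; fall back to a data-driven
# reconstruction when nothing in the library fits well enough.
import numpy as np

def fit_roof(points_z, templates, rmse_max=0.3):
    """points_z: LiDAR heights over one footprint; templates: name -> modelled heights."""
    best_name, best_rmse = None, np.inf
    for name, z_model in templates.items():              # model-driven pass
        rmse = np.sqrt(np.mean((points_z - z_model) ** 2))
        if rmse < best_rmse:
            best_name, best_rmse = name, rmse
    if best_rmse <= rmse_max:                            # a library roof fits
        return best_name
    return "data-driven"                                 # else reconstruct from points
```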
Item: SAR-based flood detection in urban areas (University of New Brunswick, 2022-10). Baghermanesh, Shadi Sadat; Jabari, Shabnam; McGrath, Heather.

In Canada, flooding causes more damage to buildings, infrastructure, and people than any other natural disaster. Because floods impact such large groups of people, immediate large-scale monitoring is crucial during and after a flood event. Remote sensing technologies, including optical and SAR sensors, have been widely used for flood mapping; SAR sensors have an advantage owing to their intrinsic day/night and all-weather image acquisition capabilities. While SAR backscatter intensity and phase correlation have been successfully employed in flood detection, other SAR products such as PolSAR decompositions and the InSAR phase have been neglected in flood mapping. Moreover, most existing studies address flood mapping in rural areas, while flood detection in urban areas using SAR imagery has attracted little attention, mainly because the complexity of urban structures poses challenges to the interpretation of backscatter patterns. This study examines the synergistic use of PolInSAR features to improve flood mapping in urban environments. It also investigates the effectiveness of including SAR-simulated reflectivity maps, which represent the geometric distortion of constructed objects in the SAR image. Two supervised machine learning (ML) models are proposed that employ PolInSAR features along with five auxiliary features, namely elevation, slope, aspect, distance from the river, and land use/land cover, which are well known to improve flood mapping algorithms. These auxiliary features form the baseline model, since they have shown effectiveness in the flood mapping literature. The first ML model is tested on medium-resolution Sentinel-1A images, while the second employs high-resolution TerraSAR-X images along with SAR-simulated reflectivity maps. The results show promising improvements over the baseline model: 5.6% in overall accuracy for the first ML model and 9.6% for the second. This improvement can be interpreted as successful selection of the features and effectiveness of the classification models.
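As a schematic of the feature stacking described above; the paper's two ML models are not specified in the abstract, so a Random Forest stands in as an assumed example:

```python
# Stack PolInSAR products on top of the five auxiliary (baseline) layers
# and train a supervised per-pixel flood classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_flood_model(polinsar_feats, aux_feats, labels):
    """polinsar_feats: per-pixel PolInSAR products; aux_feats: elevation, slope,
    aspect, distance from the river, and land use/land cover per pixel."""
    X = np.hstack([polinsar_feats, aux_feats])     # baseline + PolInSAR stack
    clf = RandomForestClassifier(n_estimators=500, class_weight="balanced")
    return clf.fit(X, labels)                      # binary flooded / non-flooded
```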