Open Theses & Dissertations
Browsing Open Theses & Dissertations by Subject "Geodesy and Geomatics Engineering"
Now showing 1 - 20 of 56
Item: 3D information supported urban change detection using multi-angle and multi-sensor imagery (University of New Brunswick, 2015). Jabari, Shabnam; Zhang, Yun.
This PhD research is focused on urban change detection using very high resolution (VHR) imagery acquired by different sensors (i.e., airborne and satellite sensors) and different view angles. Thanks to the high amount of detail provided in VHR images, urban change detection is made possible. On the other hand, due to the complicated structure of 3D urban environments when projected into 2D image space, the detection of changes becomes complicated. In general, change detection is divided into two major steps: I. establishment of a relation between bi-temporal images so that the corresponding pixels/segments are related, which is called co-registration; II. comparison of the spectral properties of the co-registered pixels/segments in the bi-temporal images in order to detect changes. As far as Step I is concerned, establishing an accurate global co-registration between bi-temporal images acquired by different sensors is not possible in urban environments due to different geometric distortions in the imagery. Therefore, the majority of studies in this field avoid using multi-sensor and multi-view-angle images. In this study, a novel co-registration method called "patch-wise co-registration" is proposed to address this problem. This method integrates the sensor model parameters into the co-registration process to relate the corresponding pixels and, by extension, the segments (patches). In Step II, the brightness values of the matching pixels/segments are compared in order to detect changes. Thus, variations in the brightness values of the pixels/segments identify the changes. However, there are other factors that cause variations in the brightness values of the patches. One of them is the difference of the solar illumination angles in the bi-temporal images. 
In urban environments, the shape of objects such as houses with steeply sloped roofs causes differences in the solar illumination angle, resulting in differences in the brightness values of the associated pixels. This effect is corrected using irradiance topographic correction methods. Finally, the corrected irradiance of the co-registered patches is compared to detect changes using the Multivariate Alteration Detection (MAD) transform. Generally, in the last stage of the change detection process, “from-to” information is produced by checking the classification labels of the pixels/segments (patches). In this study, a fuzzy rule-based image classification methodology is proposed to improve the classification results, compared to crisp thresholds, and accordingly increase the change detection accuracy. In summary, the key results achieved in this research are: I. including off-nadir images and airborne images as bi-temporal combinations in change detection; II. solving the issue of geometric distortions in the image co-registration step, caused by the various viewing angles of the images, by introducing patch-wise co-registration; III. combining a robust spectral comparison method, the MAD transform, with patch-wise change detection; IV. removing the effect of illumination angle differences on urban objects to improve change detection results; V. improving classification results by using fuzzy thresholds in the image classification step. The outputs of this research provide an opportunity to utilize the huge amount of archived VHR imagery for automatic and semi-automatic change detection. Automatic classification of images, especially in urban areas, is still a challenge due to the spectral similarity between urban classes such as roads and buildings. 
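The MAD comparison step named above has a standard formulation: canonical correlation analysis of the two co-registered band sets, followed by differencing of the canonical variates. The NumPy sketch below is a minimal generic implementation for illustration only, not the pipeline used in this research.

```python
import numpy as np

def mad_variates(X, Y):
    """Multivariate Alteration Detection: difference of canonical variates
    of two co-registered multiband images, ordered by decreasing canonical
    correlation.  X, Y: (n_pixels, n_bands) band matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1)
    Syy = Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)
    # CCA eigenproblem for the X-side weight vectors.
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    rho2, A = np.linalg.eig(M)
    order = np.argsort(rho2.real)[::-1]
    rho = np.sqrt(np.clip(rho2[order].real, 0.0, 1.0))
    A = A[:, order].real
    # Corresponding Y-side weights.
    B = np.linalg.solve(Syy, Sxy.T) @ A
    U = X @ A                      # canonical variates of image 1
    V = Y @ B                      # canonical variates of image 2
    U = U / U.std(axis=0, ddof=1)  # unit variance for comparability
    V = V / V.std(axis=0, ddof=1)
    # Align signs so that corr(U_i, V_i) = +rho_i.
    for i in range(U.shape[1]):
        if np.corrcoef(U[:, i], V[:, i])[0, 1] < 0:
            V[:, i] *= -1
    return U - V, rho
```

Large-magnitude MAD variates flag changed pixels; by construction, variate i has variance 2(1 − ρᵢ), so the lowest-correlation variates carry most of the change signal.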
Therefore, generation of accurate “from-to” information remains a topic for future research.

Item: A geospatial web application (GEOWAPP) for supporting course laboratory practices in surveying engineering (University of New Brunswick, 2015). Garbanzo, Jaime; Stefanakis, Emmanuel; Kingdon, Robert William.
Although most university courses are supported in some way by a learning management system (e.g., Desire2Learn), field practices in survey engineering are not interactively supported by these systems. Also, the internet is available almost everywhere today, and there is a wide range of internet services on the web. By combining these advantages with e-learning, survey practicums can be enhanced with a web-based application. The survey practicums are very specialized, with precise traditional techniques used for checking measurements in the field. Thus, the combination of e-learning and practicums is not straightforward. To achieve this combination, there is a need to define a framework of survey exercises and a way of effectively delivering the information to the student, making the process more efficient. Different outlines of surveying courses were studied in order to provide a set of exercises that can be supported by a GEOWAPP (Geospatial Web Application). This thesis proposes a combination of processing tools, created in Python, JavaScript and PHP, and Google Maps. The main objective is to enhance the experiences that students have in the field, as well as to evaluate their surveying techniques. Accuracy was chosen as the pillar of this application, which helps to gather information about students’ techniques and computations, and to locate students’ mistakes easily. This specific application is intended for self-review. A prototype of the application was developed, containing five (5) operational tools. These tools were tested with artificial and real data; this testing gave good insight into the requirements of such an application. 
User reviews were carried out, showing that students embrace the idea of such applications. Finally, GEOWAPP showed some learning-enhancing characteristics. However, a test with a real course remains to be carried out to determine whether it is beneficial to students.

Item: A scalable web tiled map management system (University of New Brunswick, 2017). Kotsollaris, Menelaos; Stefanakis, Emmanuel; Zhang, Yun.
Modern map visualizations are built using data structures for storing tile images, with the main concerns being to maximize efficiency and usability. The core functionality of a web tiled map management system is to provide tile images to the end user; several tiles combined constitute the web map. This thesis presents a comprehensive end-to-end analysis for developing and testing scalable web tiled map management systems. To achieve this, several data structures are showcased and analyzed. Specifically, this thesis focuses on the SimpleFormat, which stores the tiles directly on the file system; the ImageBlock, which divides each tile folder (a folder where the tile images are stored) into subfolders that contain multiple tiles prior to storing the tiles on the file system; the LevelFilesSet, a data structure that creates dedicated random-access files, wherein the tile dataset is first stored and then parsed to retrieve the tile images; and, finally, the LevelFilesBlock, a hybrid data structure that combines the ImageBlock and LevelFilesSet data structures. This work signifies the first time this hybrid approach has been implemented and applied in a web tiled map context. Each data structure was implemented in Java, and the JDBC API was used for integrating with the PostgreSQL database, which was then used to conduct cross-testing amongst the data structures. 
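As a rough illustration of the difference between the first two layouts (the thesis's implementations are in Java; the file layout and block size below are hypothetical, chosen only to make the contrast concrete), a SimpleFormat-style path addresses one file per tile, while an ImageBlock-style path groups tiles into subfolders:

```python
from pathlib import PurePosixPath

BLOCK = 64  # tiles per block along each axis (hypothetical choice)

def simpleformat_path(z: int, x: int, y: int) -> PurePosixPath:
    """One image file per tile, addressed directly by zoom/column/row."""
    return PurePosixPath(f"tiles/{z}/{x}/{y}.png")

def imageblock_path(z: int, x: int, y: int) -> PurePosixPath:
    """Group tiles into BLOCK x BLOCK subfolders so that no directory
    accumulates an unmanageable number of entries at deep zoom levels."""
    bx, by = x // BLOCK, y // BLOCK
    return PurePosixPath(f"tiles/{z}/block_{bx}_{by}/{x}_{y}.png")
```

At zoom 18 a global tile grid is 2^18 tiles wide, so flat per-column folders can hold hundreds of thousands of files; blocking bounds each directory at BLOCK² entries.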
Subsequently, several benchmark tests in local and cloud environments are developed and assessed under different system configurations to compare the data structures and provide a thorough analysis of their efficiency. These benchmarks showcased the efficiency of LevelFilesSet, which retrieved tiles up to 3.3 times faster than the other data structures. Peripheral features and principles of implementing scalable web tiled map management systems across different software architectures and system configurations are analyzed and discussed.

Item: Accuracy of the classical height system (University of New Brunswick, 2018). Foroughi, Ismael; Santos, Marcelo; Vaníček, Petr.
Measuring the quality of the classical height system through its self-consistency (congruency) is investigated in this dissertation. The congruency is measured by comparing the geoidal heights determined from a gravimetric geoid model with test geoidal heights derived at GNSS/leveling points. The components of this measurement are computed as accurately as possible: the Stokes-Helmert approach is used to determine the geoid model; gravimetric and topographic corrections are applied to the spirit leveling observations to derive rigorous orthometric heights at the test points; and, finally, the geodetic heights are taken from GNSS observations. Four articles are included in this dissertation. The first discusses a modification to the Stokes-Helmert approach to make optimal use of the contributions of the Earth gravitational models and the local data. The second paper applies the methodology presented in the first paper and presents detailed results for a test area. The third paper discusses the accuracy of the classical height system compared with Molodensky’s system and presents a numerical study showing that the classical system can be computed as accurately as Molodensky’s. 
The last paper presents a methodology to find the most probable solution of the downward continuation of surface gravity to the geoid level using the least-squares technique. The uncertainties of the geoidal heights are estimated using least-squares downward continuation and an a priori variance matrix of the input gravity data. The total estimate of the uncertainties of the geoidal heights confirms that the geoid can be determined with sub-centimetre accuracy in flat areas, provided that, mainly, the effect of topographic mass density is taken into account properly, the most probable solution of the downward continuation is used, and the improved satellite-only global gravitational models are merged optimally with the local data.

Item: An evolutionary graph framework for analyzing fast-evolving networks (University of New Brunswick, 2019). Ikechukwu, Maduako Derek; Wachowicz, Monica.
Fast-evolving networks are real-world networks that change their structure, becoming denser over time as the number of edges and nodes grows, while their properties are also updated frequently. Due to the dynamic nature of these networks, many are too large to handle and too complex to generate new insights into their evolution process. One example is the Internet of Things, which is expected to generate massive networks of billions of sensor nodes embedded into a smart city infrastructure. This PhD dissertation proposes a Space-Time Varying Graph (STVG) as a conceptual framework for modelling and analyzing fast-evolving networks. The STVG framework aims to model the evolution of a real-world network across varying temporal and spatial resolutions by integrating time-trees, subgraphs and projected graphs. The proposed STVG is developed to explore evolutionary patterns of fast-evolving networks using graph metrics, ad-hoc graph queries and a clustering algorithm. 
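As a toy illustration of the time-tree idea (the data model and API below are entirely hypothetical and far simpler than the STVG framework itself), timestamped edges can be bucketed under a year → month hierarchy so that a cumulative snapshot at any temporal resolution is cheap to assemble:

```python
from collections import defaultdict
from datetime import datetime

class TimeTreeGraph:
    """Toy time-varying graph: edges indexed by (year, month) buckets."""

    def __init__(self):
        self.buckets = defaultdict(set)  # (year, month) -> {(u, v), ...}

    def add_edge(self, u, v, ts: datetime):
        """Record an edge under its time-tree bucket."""
        self.buckets[(ts.year, ts.month)].add((u, v))

    def snapshot(self, upto):
        """Cumulative edge set over all buckets up to (year, month),
        showing how the network densifies as the window advances."""
        edges = set()
        for key in sorted(self.buckets):
            if key <= upto:
                edges |= self.buckets[key]
        return edges
```

Comparing `len(g.snapshot(t))` across successive months gives a crude densification curve, the kind of evolutionary pattern the framework is designed to expose at multiple granularities.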
This framework also employs a whole-graph approach to reduce the high storage overhead and computational complexity associated with processing massive real-world networks. Two real-world networks have been used to evaluate the implementation of the STVG framework using a graph database. The overall results demonstrate the application of the STVG framework for capturing operational-level transit performance indicators such as schedule adherence, bus stop activity, and bus route activity ranking. Finally, another application of the STVG reveals evolving communities of densely connected traffic accidents over different time resolutions.

Item: An IoT platform for occupancy prediction using support vector machine (University of New Brunswick, 2019). Parise, Alec; Wachowicz, Monica.
The Internet of Things (IoT) is a network of devices able to connect, interact and exchange data without human intervention. Most of today’s research focuses on collecting indoor sensor data with the purpose of reducing the operating costs of facilities management. Innovative approaches, ranging from context-aware sensing platforms to dynamic robot sensing, have been proposed in previous research work, but the challenge remains in understanding how sensor data can be used to predict occupancy usage patterns in smart buildings. This research aims at developing a non-intrusive sensing method for gathering sensor data to predict occupancy usage patterns in indoor environments. Several potential applications can benefit from occupancy prediction, such as smart building management systems, where accurate occupancy classification and prediction can be communicated to the HVAC system to optimize energy consumption. Towards this end, an IoT platform based on an open-source architecture consisting of Arduino and Raspberry Pi 3 B+ is designed and deployed in three different environments at two university campuses. 
By utilizing temperature and humidity sensors to observe indoor environmental characteristics, combined with PIR motion sensors, CO2 sensors, and sound detectors, a robust occupancy detection model is created, and by applying a Support Vector Machine, occupancy usage patterns are predicted. This IoT platform is low-cost and highly scalable, both in terms of the variety of on-board sensors and the portability of the sensor nodes, which makes it well suited for multiple applications related to occupancy usage and environmental monitoring.

Item: Analysis of EV charging station clusters using a novel representation of temporally varying structures (University of New Brunswick, 2021-11). Richard, René; Church, Ian; Wachowicz, Monica.
Transport electrification introduces new opportunities in supporting sustainable mobility. Fostering Electric Vehicle (EV) adoption integrates vehicle range and infrastructure deployment concerns. An understanding of EV charging patterns is crucial for optimizing charging infrastructure placement and managing costs. Clustering EV charging events has been useful for ensuring service consistency and increasing EV adoption. However, clustering presents challenges for practitioners, first when selecting the appropriate hyperparameter combination for an algorithm and later when assessing the quality of clustering results. Ground truth information is usually not available for practitioners to validate the discovered patterns. As a result, it is hard to judge the effectiveness of different modelling decisions, since there is no objective way to compare them. This work proposes a clustering process that allows for the creation of relative rankings of similar clustering results. The overall goal is to support users by allowing them to compare a clustering result of interest against other similar groupings over multiple temporal granularities. 
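One common way to rank clusterings when no ground truth exists is an internal validity index; the stdlib-only sketch below computes the mean silhouette coefficient (a generic index shown for illustration; the thesis's ranking process is its own method, not necessarily silhouette-based):

```python
import math

def silhouette(points, labels):
    """Mean silhouette coefficient in [-1, 1]: higher means tighter,
    better-separated clusters.  Needs no ground truth, so it can rank
    alternative clusterings of the same events.  Toy version: assumes
    no duplicate points and at least two clusters."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    total = 0.0
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q != p]
        # a: mean distance to own cluster; b: mean distance to nearest other.
        a = sum(math.dist(p, q) for q in own) / len(own) if own else 0.0
        b = min(sum(math.dist(p, q) for q in members) / len(members)
                for lab, members in clusters.items() if lab != l)
        total += (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return total / len(points)
```

Scoring each candidate clustering this way yields the kind of relative ranking described above: a well-separated grouping scores near 1, while an arbitrary labelling scores near or below 0.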
The efficacy of this analytical process is demonstrated with a case study using real-world EV charging event data from charging station operators in New Brunswick.

Item: Application of footstep sound and Lab colour space in moving object detection and image quality measurement using opposite colour pairs (University of New Brunswick, 2019). Roshan, Aditya; Zhang, Yun.
This PhD dissertation is focused on two of the major tasks of an Atlantic Innovation Fund (AIF) sponsored “Triple-sensitive Camera Project”. The first task focuses on the improvement of moving object detection techniques, and the second on the evaluation of camera image quality. Cameras are widely used in security, surveillance, site monitoring, traffic, military, robotics, and other applications, where detection of moving objects is critical. Information about image quality is essential in moving object detection. Therefore, the detection of moving objects and the quality evaluation of camera images are two of the critical and challenging tasks of the AIF Triple-sensitive Camera Project. In moving object detection, frame-based and background-based are the two major families of techniques that use video as a data source. Frame-based techniques use two or more consecutive image frames to detect moving objects, but they only detect the boundaries of moving objects. Background-based techniques use a static background that needs to be updated in order to compensate for light changes in the camera scene. Many background modelling techniques involving complex models are available, which makes the entire procedure very sophisticated and time-consuming. In addition, moving object detection techniques need to find a threshold to extract a moving object, and different thresholding methodologies generate varying threshold values, which also affect the results of moving object detection. 
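The frame-based detection and thresholding issues described above can be illustrated with a minimal frame-differencing sketch (NumPy, with a fixed global threshold; the sound-assisted and spatial-thresholding refinements developed in the dissertation are not shown):

```python
import numpy as np

def frame_difference_mask(frame_prev, frame_curr, threshold=25):
    """Flag pixels whose absolute intensity change between two consecutive
    grayscale frames exceeds a global threshold.  As noted in the text,
    this mainly recovers the *boundaries* of a moving object, and the
    result is sensitive to the threshold chosen."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > threshold
```

Raising `threshold` suppresses noise but fragments the detected object; lowering it does the opposite, which is exactly why the choice of thresholding methodology matters.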
When it comes to quality evaluation of colour images, existing Full-Reference methods need a perfect colour image as reference, and No-Reference methods use a grey image generated from the colour image to compute image quality. However, it is very challenging to find a perfect reference colour image, and when a colour image is converted to a grey image for quality evaluation, neither colour information nor human colour perception is available for the evaluation. As a result, different methods give varying quality scores for an image, and it becomes very challenging to evaluate the quality of colour images based on human vision. In this research, single moving object detection using the frame differencing technique is improved using the footstep sound produced by the moving object in the camera scene, and the background subtraction technique is improved by using opposite colour pairs of the Lab colour space and implementing spatial-correlation-based thresholding techniques. The novel thresholding methodologies consider the spatial distribution of pixels in addition to the statistical distribution used by existing methods. Across the four videos captured under different scene conditions and used to measure improvements, the frame differencing technique shows an improvement of 52% in object detection rate when footstep sound is considered. Other frame-based techniques, such as those using optical flow and the wavelet transform, are also improved by incorporating footstep sound. The background subtraction technique produces better outputs in terms of the completeness of a moving object when opposite colour pairs are used with thresholding based on spatial autocorrelation techniques. The developed technique outperformed background subtraction techniques with the most commonly used thresholding methodologies. For image quality evaluation, a new “No-Reference” image quality measurement technique is developed, which produces a quantitative image quality score consistent with evaluation by human eyes. 
The SCORPIQ technique developed in this research is independent of a reference image, image statistics, and image distortions. Colour segments of an image are spatially analysed using the colour information available in the Lab colour space. Quality scores from the SCORPIQ technique on the LIVE image database are well distinguished, compared with quality scores from existing methods, which give similar results for visually different images. Compared with the visual quality scores available with the LIVE database, the quality scores from the SCORPIQ technique are 3 times more distinguishable. SCORPIQ gives 4 to 20 times more distinguishable results compared with statistics-based methods, which also do not follow the quality scores as evaluated by human eyes.

Item: Atmospheric delay modelling for ground-based GNSS reflectometry (University of New Brunswick, 2020). Nikolaidou, Thalia; Santos, Marcelo; Geremia-Nievinski, Felipe.
Several studies have demonstrated the utility of global navigation satellite system reflectometry (GNSS-R) for ground-based coastal sea-level altimetry. Recent studies have evidenced the presence of atmospheric delays in GNSS-R sea-level retrievals and by-products such as tidal amplitudes. On the one hand, several ad-hoc atmospheric correction formulas have been proposed in the literature. On the other hand, ray-tracing studies applied to GNSS-R provide little information about the methods and algorithms involved. This dissertation is based on three articles which establish the theoretical framework of the atmospheric delay experienced in ground-based GNSS-R altimetry. In the first article, we defined the atmospheric interferometric delay in terms of the direct and reflected atmospheric delays as well as the vacuum distance and radio length. Then, we clarified the roles of linear and angular refraction, derived the respective delays and combined them into the total delay. 
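The geometry behind these definitions can be sketched in a few lines. The 2H·sin(e) expression below is the standard vacuum interferometric path difference for a horizontal planar reflector (a textbook simplification introduced here for illustration, not a result taken from the dissertation):

```python
import math

def interferometric_path_difference(height_m, elev_deg):
    """Vacuum path difference between reflected and direct signals for a
    horizontal planar reflector at height H below the antenna:
    delta = 2 * H * sin(e), with e the satellite elevation angle."""
    return 2.0 * height_m * math.sin(math.radians(elev_deg))

def atmospheric_interferometric_delay(reflected_delay_m, direct_delay_m):
    """Interferometric delay as the excess of the reflected signal's
    atmospheric delay over the direct signal's atmospheric delay."""
    return reflected_delay_m - direct_delay_m
```

Because the reflected ray traverses extra (and lower, denser) atmosphere, its delay exceeds the direct one, which is why an uncorrected retrieval biases the apparent reflector height.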
We also introduced, for the first time, two subcomponents of the atmospheric geometric delay, the geometric shift and the geometric excess, unique to reflected signals. The atmospheric altimetry correction necessary for unbiased sea-level retrievals was defined as half the rate of change of the atmospheric delay with respect to the sine of the satellite elevation angle. We developed a ray-tracing procedure to rigorously solve the three-point boundary value problem involving the transmitting satellite, reflecting surface, and receiving antenna. We then evaluated the atmospheric bias in sea-level retrievals for a range of typical scenarios, showing its dependence on elevation angle and reflector height. In the second article, we demonstrated that rigorous ray-tracing of the bent ray can be simplified by a judicious choice of rectilinear wave propagation model. This facilitates adoption by existing GNSS ray-tracing procedures, besides offering numerical and speed advantages. Further, it was emphasized that mapping functions developed for GNSS positioning cannot be reused for GNSS-R purposes without adaptations. In the third article, we developed closed-form expressions of the atmospheric delay and altimetry correction for end-users without access to, or expertise in, ray-tracing. These expressions rely only on the direct elevation bending and the mean refractivity at the site. Finally, we determined cut-off values of the elevation angle and reflector height for which atmospheric delays may be neglected. These limiting conditions are useful in observation planning and error budgeting of GNSS-R altimetry retrievals.

Item: Automated handling of reflection for elimination of incorrect points and object reconstruction from laser point cloud (University of New Brunswick, 2023-08). Okunima, Enuenweyoi Daniel; Dare, Peter.
Mirrors are common in our everyday lives and cause reflections of 3D points captured during laser scanning, necessitating their elimination in data post-processing. 
However, reflection can also be beneficial for capturing hard-to-reach parts of objects. This study aims to develop an automated solution for handling reflected points encountered during 3D laser scanning of buildings and facilities. It therefore reformulates the mirror detection problem as one of identifying reflected points, beginning with frame detection. Ten generic deep learning networks/models, as well as existing custom models used for mirror detection in RGB images, were investigated for frame detection. The proposed methodology involves identifying frames, categorizing them as picture frames or mirrors using DBSCAN, and eliminating or correcting the reflected points when mirrors are detected. The developed method was validated on a dataset containing 50 scans with mirrors and pictures; the validation showed that generic models outperformed custom ones for frame detection and that the proposed method achieved satisfactory results, even with multiple frames in the point cloud.

Item: Automatic mid-water target detection using multibeam water column (University of New Brunswick, 2012). Videira Marques, Carlos Rubrio; Clarke, J. Hughes.
A potential new automatic application of multibeam water column data is the recognition and precise location of suspended mid-water targets. This is already being applied manually in the ArcticNet program to search for lost under-ice mooring hardware. The pattern of the scattering field around a suspended point mid-water target is directly related to the multibeam imaging geometry, including pulse length, transmission and reception main-lobe beamwidths, as well as side-lobe spacing and suppression. Knowing this geometry-specific scattering pattern, optimal 3D matched filters can be designed to pick out faint targets from noise. Having picked an object in this manner, its location can be derived with the same positioning uncertainty that we already associate with depth. 
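The matched-filter idea, correlating the known geometry-specific scattering pattern against the data to pull a faint target out of noise, can be sketched in 1D (the Gaussian template and noise levels below are purely illustrative, not the 3D filters of the thesis):

```python
import numpy as np

def matched_filter_detect(signal, template):
    """Return the sample index where the cross-correlation between the
    signal and the known target pattern peaks.  With a correctly shaped
    template, correlation gain lifts a faint target above the noise."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    scores = np.correlate(signal - signal.mean(), t, mode="valid")
    return int(np.argmax(scores))
```

The same principle extends to 3D: the template becomes the expected scattering footprint implied by pulse length, beamwidths and side-lobe structure.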
Equivalent detection of objects can be achieved manually by a trained operator carefully inspecting all the data, but this is a very long and tedious task. The automatic algorithm developed as the main component of this thesis can be used to perform this task more rapidly and reliably, as well as to track an object's movement. These new capabilities can be used in oceanographic research, in search and rescue, for military purposes, and to track geological activity. A specific case study used as an example is the monitoring of suspended targets over seabed markers that are progressively displaced by landslides.

Item: Automatic processing of Arctic crowd-sourced hydrographic data while improving bathymetric accuracy and uncertainty assessment (University of New Brunswick, 2019). Arfeen, Khaleel; Church, Ian.
Melting sea ice has led to an increase in navigation in Canadian Arctic waters. However, these waters are sparsely surveyed and pose a risk to mariners. Recognizing this issue, the government of Canada has granted funds towards the development of a pilot program to begin collecting bathymetric data through a novel crowd-sourced approach. The project is a coalition between four Canadian partners from across the country; the University of New Brunswick’s Ocean Mapping Group is tasked with the processing of the collected data, and this thesis focuses on that aspect. Through an automated approach, the data has been processed, with the end product being a final depth measurement with its associated uncertainty. The software is Python-based and has been broken down into several modules to complete the task at hand. Utilizing specialized hydrographic equipment designed to be low-cost and simple to operate, participating communities in the Canadian Arctic have been given the opportunity to collect bathymetric data while traversing their local waterways. With the pilot phase of the project complete, this thesis delves into the steps taken to fulfill the processing goals. 
The primary focus is on how the processing workflow was automated while mitigating errors and achieving transparency in the uncertainty assessment of the crowd-sourced bathymetric (CSB) data. Particular emphasis is placed upon the issues of collecting valuable hydrographic data in the Arctic, with analysis of different methods to process the data efficiently. These challenges include obtaining a reliable GNSS signal through post-processing, qualification of the GNSS data for vertical referencing, utilizing the HYCOM hydrodynamic model to obtain sound velocity profiles, and the identification and quantification of uncertainty as part of the Total Propagated Uncertainty (TPU) model. Several case-study examples are given in which investigations are conducted using processed collected and/or model data. Discussions surround the results of multi-constellation vs. single-constellation GNSS in the Arctic and the effects on the qualification rate for use as vertical referencing. Similarly, work towards comparing the model used to obtain SVP data with equivalent real-world data collected by the Canadian Coast Guard is discussed. Finally, uncertainty has been quantified and assessed for the collected data, and the results of the uncertainty assessment are provided using CHS/IHO survey standards as a benchmark.

Item: Building detection in off-nadir very high resolution satellite images based on stereo 3D information (University of New Brunswick, 2017). Suliman, Alaelidn Muhmud Housat; Zhang, Yun.
Mapping or updating maps of urban areas is crucial for urban planning and management. Since buildings are the main objects in urban environments, building roof detection is an important task in urban mapping. The ideal geospatial data source for mapping building information is very high resolution (VHR) satellite imagery. 
Moreover, because buildings are elevated objects, incorporating their heights in building detection can significantly improve the accuracy of the mapping. The most cost-effective source for extracting the height information is stereo VHR satellite imagery, which can provide two types of stereo 3D information: elevation and disparity. However, most VHR images are acquired off-nadir. This acquisition type causes building leaning in the images and creates major challenges for the incorporation of building height information into roof detection. Thus, this PhD research focuses on finding solutions to mitigate the problems associated with 3D-supported building detection in off-nadir VHR satellite images. It also exploits the potential of extracting disparity information from off-nadir image pairs to support building detection. In this research, several problems associated with building leaning need to be solved, such as the offset of a building roof from its footprint, object occlusion, and building façades; moreover, the roof offsets vary with the building heights. While the building-roof offsets create difficulties in the co-registration between image and elevation data, the building façades and occlusions create challenges in automatically finding matching points in off-nadir image pairs. Furthermore, due to the variation in building-roof offsets, the mapped roofs extracted from off-nadir images cannot be directly geo-referenced to existing maps for effective information integration. In this PhD dissertation, all of the above problems are addressed in a progressively improving manner (i.e., solving the problems one after another while improving efficiency) within the context of 3D-supported building detection in off-nadir VHR satellite images. Firstly, an image-elevation co-registration technique is developed that is more efficient than currently available techniques. 
Secondly, the computational cost is reduced by generating disparity information instead of the traditional elevation data, which allows bypassing several time-consuming steps of the traditional method. Thirdly, the disparity generation is extended from using one pair of off-nadir images to using multiple pairs, achieving an enriched disparity map. Finally, the enriched disparity maps are used to efficiently derive elevations that are directly co-registered, with pixel-level accuracy, to the selected reference image. Based on these disparity-based co-registered elevations, building roofs are successfully detected and accurately geo-referenced to existing maps. The outcome of this PhD research proved the possibility of using off-nadir VHR satellite images for accurate urban building detection. It significantly increases the data-source scope for building detection, since most (> 95%) of VHR satellite images are off-nadir and traditional methods cannot effectively handle them.

Item: Carrier-phase multipath mitigation in RTK-based GNSS dual-antenna systems (University of New Brunswick, 2013). Serrano, Luis; Langley, Richard; Kim, Don.
Carrier-phase multipath mitigation in GPS/GNSS real-time kinematic (RTK) mode has been studied for several years, at least since on-the-fly ambiguity resolution techniques were introduced and receiver hardware improved to the point that GNSS RTK-based systems provide position estimates at mm- to cm-level accuracy in real time. This level of accuracy has heralded a new era of applications in which GNSS RTK-based techniques have become a very practical navigation tool, especially in the fields of machine automation, industrial metrology, control, and robotics. However, this surge in accuracy tied with real-time capabilities comes at a cost: one must also ensure continuity and integrity (safety). 
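The severity of a single reflection on the carrier phase can be illustrated with the standard single-reflector error model (a textbook model given here for scale, not the dissertation's estimation algorithm):

```python
import math

def carrier_phase_multipath_error(alpha, psi_rad):
    """Carrier-phase error (radians) caused by one reflected ray with
    relative amplitude alpha (< 1) and excess phase psi, from the
    standard phasor-sum model:
        err = atan( alpha*sin(psi) / (1 + alpha*cos(psi)) ).
    The maximum error over psi is asin(alpha)."""
    return math.atan2(alpha * math.sin(psi_rad),
                      1.0 + alpha * math.cos(psi_rad))
```

For α = 0.5 the worst-case phase error is asin(0.5) ≈ 0.52 rad, i.e. several centimetres at the GPS L1 wavelength, which is exactly the magnitude that threatens cm-level RTK solutions.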
Typical users of these systems do not expect heavy machinery, guided and/or controlled by GNSS-based systems, to output erroneous solutions, even in challenging multipath environments. In multipath-rich scenarios, phase-multipath reflections can seriously degrade the RTK solutions, and in the worst scenarios, integer fixed solutions are no longer available. This dissertation addresses these scenarios, in which the rover algorithms must handle multiple reflections and, in real time, ameliorate or mitigate their effect. GNSS-based heading/attitude is usually obtained by combining the data from two or more antennas (also known as a moving baseline). Many companies provide commercial systems based on this technique; hence, this dissertation finds its main applicability here. Typical heavy construction machinery includes dozers, motor-graders, excavators, scrapers, etc., which are being equipped more frequently with GNSS dual-antenna systems to provide positioning and orientation information to the operator. Data were not collected from one of these machines, although the author has worked extensively with such machinery and their GNSS-based systems. Instead, the theory developed throughout this dissertation is supported by proof-of-concept controlled tests that mimic the machinery and its installed GNSS dual-antenna systems. Moreover, the algorithms developed here are meant to be independent of the receiver hardware as well as of the specific GNSS signals; hence, GLONASS and/or Galileo signals can be processed too. This dissertation is based on the fundamental relationship between multiple multipath reflections from nearby strong reflectors and their effect on GNSS RTK-based dual-antenna systems. Two questions were answered: Firstly, is it possible to retrieve strong multipath reflectors in kinematic applications?
Secondly, once these strong reflectors are correctly identified, how accurate and reliable are the corrections to the raw carrier-phase multipath, given that the host platform performs unpredictable manoeuvres? Based on the results, we can conclude that it is possible to estimate, in real time, multipath parameters based on a strong effective reflector. In most of the tests it takes at least 2 minutes to obtain initial values (after Kalman filter convergence). Once they are determined, multipath corrections can be derived straightforwardly for each satellite being tracked, as long as there are no cycle slips (mostly caused by the combination of the machinery's high dynamics, especially in the areas where the antennas are located, and the machinery itself momentarily blocking satellite signals).
Item Data stream affinity propagation for clustering indoor space localization data (University of New Brunswick, 2021) Eshraghi Ivari, Nasrin; Wachowicz, Monica
In the age of the Internet of Things, the ability to find spatio-temporal patterns of people and devices moving in indoor spaces has become crucial for developing new applications. In particular, clustering indoor localization data streams has gained popularity in recent years due to its potential for generating relevant information for planning building automation, evaluating energy-efficiency scenarios, and simulating emergency protocols. In this thesis, a data stream Affinity Propagation (DSAP) clustering algorithm is proposed for analyzing indoor localization data generated from e-counters and WiFi localization systems. The data sets are a sequence of potentially infinite and non-stationary data streams arriving continuously, where random access to the data is not feasible and storing all the arriving data is impractical. The DSAP algorithm is implemented based on a two-phase approach (i.e., online and offline clustering phases) using the landmark time window model.
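The two-phase, landmark-window pattern described above can be sketched in a few lines. This is a minimal illustration of the online/offline split, not the thesis's DSAP implementation: the micro-cluster summaries, distance thresholds, and the affinity-propagation step are replaced by simple hypothetical stand-ins.

```python
# Minimal sketch of a two-phase (online/offline) stream-clustering
# pattern; names and thresholds are illustrative only.

class MicroCluster:
    """Online summary of nearby points (running centroid + count)."""
    def __init__(self, x, y):
        self.n, self.sx, self.sy = 1, x, y
    def centroid(self):
        return (self.sx / self.n, self.sy / self.n)
    def absorb(self, x, y):
        self.n += 1
        self.sx += x
        self.sy += y

def online_phase(stream, radius=1.0):
    """Landmark window: every point since the landmark is summarized."""
    micros = []
    for x, y in stream:
        best, best_d = None, radius
        for m in micros:
            cx, cy = m.centroid()
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = m, d
        if best is None:
            micros.append(MicroCluster(x, y))
        else:
            best.absorb(x, y)
    return micros

def offline_phase(micros, merge_dist=2.0):
    """Re-cluster the micro-cluster centroids into final clusters."""
    clusters = []
    for m in micros:
        cx, cy = m.centroid()
        for c in clusters:
            px, py = c[0]
            if ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 <= merge_dist:
                c.append((cx, cy))
                break
        else:
            clusters.append([(cx, cy)])
    return clusters

stream = [(0.1, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (0.0, 0.2)]
micros = online_phase(stream)
final = offline_phase(micros)
print(len(final))  # two well-separated groups -> 2 clusters
```

The online phase keeps only compact summaries of the stream, so memory stays bounded even though the landmark window covers every point seen since the landmark.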
The proposed DSAP is non-parametric in the sense of not requiring any prior knowledge about the number of clusters or their respective labels. The validation and performance of the DSAP algorithm are evaluated using real-world data streams from two experiments aimed at finding stair-usage patterns and occupancy behaviour in indoor spaces.
Item De-correlation of tropospheric error and height component on GNSS using combined zenith-dependent parameter (University of New Brunswick, 2016) Ahn, Yong-Won; Dare, Peter
For high-precision GNSS positioning, the troposphere is one of the most problematic error sources. Typically, the effect is minimal, owing to spatio-temporal correlation, when the baseline length is short enough in the relative positioning scenario. When a strong tropospheric anomaly is present, the problem can be much more complicated, and the resultant positioning solution is typically no longer precise even for a baseline of a few kilometres in length. As the troposphere delay and height estimates are almost linearly correlated above a 20° elevation angle, the problem is how to de-correlate these two parameters to avoid such ill-conditioned cases. To obtain reliable height estimates and avoid ill-conditioned cases, a new method is proposed in this dissertation: the two common zenith-dependent parameters are combined into a single parameter plus weighting parameters. Once the parameters are combined and the corresponding weighting parameters are determined, the vertical component can be retrieved. The feasibility of the methodology is investigated in a kinematic situation. To determine the weighting coefficient in this case, the residuals in a least-squares estimator are analyzed. As the residuals can be decomposed into two components, troposphere and ionosphere, the magnitude of the residual contribution of the troposphere for each satellite pair in the double difference can be determined.
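The near-linear correlation noted above can be reproduced numerically. The sketch below uses a textbook 1/sin(e) mapping function for the troposphere partial and sin(e) for the height partial; both are simplifications chosen for illustration, not the estimator used in the thesis.

```python
# Illustration of the near-linear correlation (above ~20 deg elevation)
# between the height partial (~sin e) and a simple 1/sin(e) troposphere
# mapping function. The mapping function is a textbook simplification.
import math

elevations = [math.radians(e) for e in range(20, 91)]
h_partial = [math.sin(e) for e in elevations]        # height sensitivity
t_partial = [1.0 / math.sin(e) for e in elevations]  # troposphere mapping

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

r = pearson(h_partial, t_partial)
print(round(abs(r), 3))  # |r| well above 0.9: the two columns are nearly collinear
```

The magnitude of the correlation coefficient comes out well above 0.9 for elevations between 20° and 90°, which is exactly the ill-conditioning the combined-parameter method is designed to avoid.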
This value is further used to determine the weighting parameters. Through this new method, the common zenith-dependent parameters are found to be de-correlated. A number of data sets are processed and the results are analyzed, especially for severely inhomogeneous tropospheric conditions and humid environments. In summary, in a kinematic scenario, the improvement is shown to be up to 20% (from 4 cm to 3 cm rms) with the processed data. Compared to the conventional approach, the degradation of the vertical component during an anomalous weather period is almost eliminated in kinematic scenarios, which is the main goal of the research described in this dissertation. This means that the new approach is resistant to anomalous tropospheric events.
Item Design of a semi-automated LiDAR point classification framework (University of New Brunswick, 2016) Amolins, Krista; Coleman, David
Data from airborne light detection and ranging (LiDAR) systems are becoming more commonplace and are being used in applications beyond traditional remote sensing and GIS, such as archaeological surveys. However, non-expert LiDAR users face challenges when working with LiDAR data or derived products. Anecdotal evidence suggests that many users may not know much about how a LiDAR product was derived or about the qualities of the original LiDAR point cloud. In addition, suitable processing software may not be accessible due to cost, or may require extensive training and familiarity with the tools for users to achieve their desired results. This thesis addresses some of the challenges non-expert LiDAR users may face by developing a semi-automated point classification framework that does not require expert user input to classify individual points within the point cloud. The Canadian Airborne LiDAR Acquisition Guideline, released by Natural Resources Canada in 2014, was used as a guide in the development process.
The framework consists of a multi-stage classification process that can be applied using LiDAR point clouds exclusively or using LiDAR data integrated with other types of data. Code developed as part of this thesis to implement the framework is hosted in a repository on Bitbucket. The first stage is a ground point identification process that requires little or no operator input to classify ground points within a LiDAR point cloud. It achieved greater than 95% accuracy in sample tests, as compared with available classified ground data. Subsequent stages add or refine the classification of points within the point cloud. If only LiDAR data are used, points are classified as building/structure, low vegetation, medium vegetation, high vegetation, unpaved ground, road or paved surface, or points above a paved surface. Points that do not meet the criteria for any of the classes are left unclassified. Additional data can be introduced at any stage to improve processing time, to add classes (for example, water), or to refine results. Recommendations for future research include making greater use of 3D data structures, making greater use of point-level information, and improving the methods used to refine classification results.
Item Determination of a geoid model for Ghana using the Stokes-Helmert method (University of New Brunswick, 2015) Klu, Michael; Dare, Peter
One of the greatest achievements of humankind with regard to positioning is the Global Navigation Satellite System (GNSS). The use of GNSS for surveying has made it possible to obtain accuracies of the order of 1 ppm or less in relative positioning mode, depending on the software used for processing the data. However, the elevation obtained from a GNSS measurement is relative to an ellipsoid, for example that of WGS84, which renders heights from GNSS of very little practical value to those requiring orthometric heights. Conversion of the geodetic height from GNSS measurements to the more useful orthometric height requires a geoid model.
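The conversion referred to above is a simple relation, H = h − N, where h is the ellipsoidal height from GNSS and N is the geoid undulation interpolated from the geoid model grid. A minimal sketch, with hypothetical undulation values for a single 1′ grid cell:

```python
# Orthometric height H from GNSS ellipsoidal height h and geoid
# undulation N interpolated from a geoid model grid.
# The 2x2 undulation grid below is hypothetical, for illustration only.
def bilinear(grid, lat0, lon0, step, lat, lon):
    """Bilinearly interpolate the grid value at (lat, lon)."""
    fi = (lat - lat0) / step
    fj = (lon - lon0) / step
    i, j = int(fi), int(fj)
    di, dj = fi - i, fj - j
    return ((1 - di) * (1 - dj) * grid[i][j]
            + (1 - di) * dj * grid[i][j + 1]
            + di * (1 - dj) * grid[i + 1][j]
            + di * dj * grid[i + 1][j + 1])

# Hypothetical 1' x 1' geoid undulations (metres) around a point in Ghana
n_grid = [[24.10, 24.14],
          [24.18, 24.22]]
N = bilinear(n_grid, lat0=6.0, lon0=-1.0, step=1 / 60, lat=6.0083, lon=-0.9917)
h = 211.35   # ellipsoidal height from GNSS (metres)
H = h - N    # orthometric height
print(round(H, 2))  # ~187.19 m
```

In practice the undulations come from the computed gravimetric geoid model rather than a hand-written grid, but the height conversion itself is exactly this subtraction.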
As a result, the aim of geodesists in developed countries is to compute a geoid model to centimetre accuracy. For developing countries, including Ghana, even a geoid model of decimetre accuracy is currently out of reach. In spite of the sparse terrestrial gravity data of variable density, distribution, and quality, this thesis set out to model the geoid as accurately as achievable. Computing an accurate geoid model is very important to Ghana given the widespread use of the Global Positioning System (GPS) in the fields of surveying and mapping, navigation, and Geographic Information Systems (GIS). The gravimetric geoid model for Ghana developed in this thesis was computed using the Stokes-Helmert approach developed at the University of New Brunswick (UNB) [Ellmann and Vaníček, 2007]. This method uses a two-space approach in solving the associated boundary-value problems, namely the real and Helmert spaces. The UNB approach combines observed terrestrial gravity data with long-wavelength gravity information from an Earth Gravity Model (EGM). All the terrestrial gravity data used in this computation were obtained from the Geological Survey Department of Ghana, owing to difficulties in obtaining data from BGI and GETECH. Since some parts of Ghana lack terrestrial gravity data coverage, the EGM was used to pad those areas. For the computation of topographic effects on the geoid, the Shuttle Radar Topography Mission (SRTM), a digital elevation model (DEM) generated by NASA and the National Geospatial-Intelligence Agency (NGA), was used. Since the terrain in Ghana is relatively flat, the topographic effect, often a major problem in geoid computation, is unlikely to be significant. This first gravimetric geoid model for Ghana was computed on a 1′ × 1′ grid over the computation area bounded by latitudes 4°N and 12°N, and longitudes 4°W and 2°E.
GPS/trigonometric levelling heights were used to validate the results of the computation.
Item Developing a deep learning network suitable for automated classification of heterogeneous land covers in high spatial resolution imagery (University of New Brunswick, 2019) Rezaee, Mohammad; Zhang, Yun
The incorporation of spatial and spectral information within multispectral satellite images is the key to accurate land cover mapping, specifically for the discrimination of heterogeneous land covers. Traditional methods use only basic features, either spatial features (e.g. edges or gradients) or spectral features (e.g. the mean value of digital numbers, or the Normalized Difference Vegetation Index (NDVI)), for land cover classification. These features are called low-level features and are generated manually (through so-called feature engineering). Because feature engineering is manual, the design of proper features is time-consuming, usually only low-level features in the information hierarchy can be extracted, and the feature extraction is application-specific (i.e., different applications need to extract different features). In contrast to traditional land-cover classification methods, Deep Learning (DL), which adapts the artificial neural network (ANN) into a deep structure, can automatically generate the necessary high-level features for improving classification without being limited to low-level features. Higher-level features (e.g. complex shapes and textures) can be generated by combining low-level features through different levels of processing. However, despite recent advances of DL in various computer vision tasks, especially with convolutional neural network (CNN) models, the potential of using DL for the land-cover classification of multispectral remote sensing (RS) images has not yet been thoroughly explored. The main reason is that a DL network needs to be trained using a huge number of images from large-scale datasets.
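The feature hierarchy described above can be shown with a toy example: a hand-engineered low-level filter (a vertical-edge kernel) followed by a second layer that combines the low-level responses into a higher-level feature. This is purely illustrative, not a network from the thesis.

```python
# Toy illustration of stacking feature levels: layer 1 extracts a
# low-level edge response; layer 2 aggregates it into a higher-level
# feature. Kernels and the input image are illustrative only.
def conv2d(img, k):
    """'Valid' 2-D correlation of a nested-list image with kernel k."""
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + a][j + b] * k[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

vertical_edge = [[-1, 1]]  # low-level: responds to left-right contrast
layer1 = conv2d(image, vertical_edge)
relu = [[max(0, v) for v in row] for row in layer1]  # simple nonlinearity
layer2 = conv2d(relu, [[1], [1]])  # higher-level: aggregates edge evidence
print(layer2[0])  # -> [0, 2, 0]: strongest response at the vertical edge
```

A deep network learns such kernels automatically instead of having them engineered by hand, which is precisely the advantage the abstract describes.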
Such training datasets are not usually available in RS. The few available training datasets are either for object detection in urban areas or for scene labeling. In addition, the available datasets are mostly intended for land-cover classification based on spatial features. Therefore, the incorporation of spectral and spatial features has not yet been studied comprehensively. This PhD research aims to mitigate the challenges of using DL for RS land cover mapping and object detection by (1) decreasing the dependency of DL on large training datasets, (2) adapting and improving the efficiency and accuracy of deep CNNs for heterogeneous classification, (3) incorporating all of the spectral bands of satellite multispectral images into the processing, and (4) designing a specific CNN that allows faster and more accurate detection of heterogeneous land covers with a smaller amount of training data. The new developments are evaluated in two case studies, i.e. wetland detection and tree species detection, using high resolution multispectral satellite images. Such land-cover classifications are considered challenging tasks in the literature. The results show that our new solution works reliably under a wide variety of conditions. Furthermore, we are releasing the two large-scale wetland and tree species detection datasets to the public in order to facilitate future research and comparison with other methods.
Item Developing an analytics everywhere framework for the Internet of Things in smart city applications (University of New Brunswick, 2019) Cao, Hung; Wachowicz, Monica
Despite many efforts on developing protocols, architectures, and physical infrastructures for the Internet of Things (IoT), previous research has failed to fully provide automated analytical capabilities for exploring IoT data streams in a timely way.
Mobility and co-location, coupled with unprecedented volumes of data streams generated by geo-distributed IoT devices, create many data challenges for extracting meaningful insights. This research explores an edge-fog-cloud continuum to develop automated analytical tasks for not only providing higher-level intelligence from continuous IoT data streams but also generating long-term predictions from accumulated IoT data streams. Towards this end, a conceptual framework, called "Analytics Everywhere", is proposed to integrate analytical capabilities according to their data life-cycles using different computational resources. Three main pillars of this framework are introduced: resource capability, analytical capability, and data life-cycle. First, resource capability consists of a network of distributed compute nodes that can handle automated analytical tasks independently or in parallel, concurrently or in a distributed manner. Second, analytical capability orchestrates the execution of algorithms to perform streaming descriptive, diagnostic, and predictive analytics. Finally, data life-cycles are designed to manage both continuous and accumulated IoT data streams. The research outcomes from a smart parking and a smart transit scenario confirm that a single computational resource is not sufficient to support all the analytical capabilities that IoT applications need. Moreover, the implemented architecture relied on an edge-fog-cloud continuum and offered several empirical advantages: (1) on-demand and scalable storage; (2) seamless coordination of automated analytical tasks; (3) awareness of the geo-distribution and mobility of IoT devices; (4) latency-sensitive data life-cycles; and (5) resource contention mitigation.
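The three pillars described above lend themselves to a compact sketch: compute nodes with resource capabilities, analytical tasks tagged with a data life-cycle, and a dispatcher that routes each task across the edge-fog-cloud continuum. All class names and the routing rule are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of the three pillars: resource capability (ComputeNode),
# analytical capability (AnalyticalTask.kind), and data life-cycle.
# Names and the routing rule are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ComputeNode:
    name: str
    tier: str            # "edge", "fog", or "cloud"
    capabilities: set    # analytics kinds this node can run

@dataclass
class AnalyticalTask:
    kind: str            # "descriptive", "diagnostic", or "predictive"
    life_cycle: str      # "continuous" or "accumulated"

def dispatch(task, nodes):
    """Route a task to a suitable tier: continuous streams stay near
    the edge; accumulated data and predictive analytics go to the cloud."""
    wanted = "cloud" if (task.life_cycle == "accumulated"
                         or task.kind == "predictive") else "edge"
    for node in nodes:
        if node.tier == wanted and task.kind in node.capabilities:
            return node.name
    return None

nodes = [
    ComputeNode("bus-gateway", "edge", {"descriptive", "diagnostic"}),
    ComputeNode("depot-server", "fog", {"descriptive", "diagnostic"}),
    ComputeNode("dc-cluster", "cloud",
                {"descriptive", "diagnostic", "predictive"}),
]
print(dispatch(AnalyticalTask("descriptive", "continuous"), nodes))  # bus-gateway
print(dispatch(AnalyticalTask("predictive", "accumulated"), nodes))  # dc-cluster
```

The point the sketch makes is the abstract's conclusion: no single tier supports every capability, so tasks must be matched to resources along the continuum.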