Department of Geodesy and Geomatics Engineering (Fredericton)


3D information supported urban change detection using multi-angle and multi-sensor imagery
by Shabnam Jabari, This PhD research focuses on urban change detection using very high resolution (VHR) imagery acquired by different sensors (i.e., airborne and satellite sensors) and different view angles. Thanks to the high level of detail provided in VHR images, urban change detection is made possible. On the other hand, because of the complex structure of 3D urban environments when projected into 2D image space, detecting changes becomes complicated. In general, change detection is divided into two major steps: I. Establishment of a relation between bi-temporal images so that the corresponding pixels/segments are related; this is called co-registration. II. Comparison of the spectral properties of the co-registered pixels/segments in the bi-temporal images in order to detect changes. As far as Step I is concerned, establishing an accurate global co-registration between bi-temporal images acquired by different sensors is not possible in urban environments because of the different geometric distortions in the imagery. Therefore, the majority of studies in this field avoid using multi-sensor and multi-view-angle images. In this study, a novel co-registration method called "patch-wise co-registration" is proposed to address this problem. This method integrates the sensor model parameters into the co-registration process to relate the corresponding pixels and, by extension, the segments (patches). In Step II, the brightness values of the matching pixels/segments are compared in order to detect changes; variations in the brightness values of the pixels/segments identify the changes. However, there are other factors that cause variations in the brightness values of the patches. One of them is the difference in solar illumination angles between the bi-temporal images. In urban environments, the shape of objects such as houses with steeply-sloped roofs causes differences in the solar illumination angle, resulting in differences in the brightness values of the associated pixels. This effect is corrected using irradiance topographic correction methods. Finally, the corrected irradiance of the co-registered patches is compared to detect changes using the Multivariate Alteration Detection (MAD) transform. Generally, in the last stage of the change detection process, "from-to" information is produced by checking the classification labels of the pixels/segments (patches). In this study, a fuzzy rule-based image classification methodology is proposed to improve the classification results compared to crisp thresholds and accordingly increase the change detection accuracy. In total, the key results achieved in this research are: I. including off-nadir images and airborne images as bi-temporal combinations in change detection; II. solving the issue of geometric distortions in the image co-registration step, caused by the different looking angles of the images, by introducing patch-wise co-registration; III. combining a robust spectral comparison method, the MAD transform, with patch-wise change detection; IV. removing the effect of illumination angle differences on urban objects to improve change detection results; V. improving classification results by using fuzzy thresholds in the image classification step. The outputs of this research provide an opportunity to utilize the huge amount of archived VHR imagery for automatic and semi-automatic change detection. Automatic classification of images, especially in urban areas, is still a challenge because of the spectral similarity between urban classes such as roads and buildings. Therefore, generation of accurate "from-to" information remains a topic for future research.
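The MAD transform mentioned above is a canonical-correlation-based change detector, so a small sketch may help make the comparison step concrete. The sketch below assumes the patch-wise co-registration and irradiance correction have already been applied and that the corrected patch spectra are stacked in two arrays X and Y (one row per patch, one column per band); it uses scikit-learn's CCA as a stand-in for a full MAD implementation and is not the code used in the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def mad_change_score(X, Y):
    """Multivariate Alteration Detection sketch: the MAD variates are the differences of
    the canonical variates of the two acquisitions; the sum of squared standardized MAD
    variates gives an approximately chi-square distributed change score per patch."""
    k = min(X.shape[1], Y.shape[1])
    U, V = CCA(n_components=k).fit_transform(X, Y)   # canonical variates at t1 and t2
    M = U - V                                        # MAD variates
    return np.sum((M / M.std(axis=0, ddof=1)) ** 2, axis=1)

# Hypothetical usage: flag the patches with the largest change scores
# scores = mad_change_score(X_t1, X_t2)
# changed = scores > np.percentile(scores, 95)
```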
A geospatial web application (GEOWAPP) for supporting course laboratory practices in surveying engineering
by Jaime Garbanzo, Although most university courses are supported in some way by a learning management system (e.g., Desire2Learn), field practices in surveying engineering are not interactively supported by these systems. Also, the internet is available almost everywhere today, and there is a wide range of internet services on the web. By combining these advantages with e-learning, survey practicums can be enhanced with a web-based application. Survey practicums are very specialized, with precise traditional techniques used for checking measurements in the field. Thus, the combination of e-learning and practicums is not straightforward. In order to achieve this combination, there is a need to define a framework of survey exercises and a way of effectively delivering the information to the student, making the process more efficient. Different outlines of surveying courses were studied in order to provide a set of exercises that can be supported by a GEOWAPP (Geospatial Web Application). This thesis proposes a combination of processing tools, created in Python, JavaScript and PHP, and Google Maps. The main objective is to enhance the experiences that students have in the field as well as to evaluate their surveying techniques. Accuracy was chosen as the pillar of this application, which helps to gather information about students' techniques and computations and to locate students' mistakes easily. This specific application is intended for self-review. A prototype of the application was developed, which contains five (5) operational tools. These tools were tested with artificial and real data; this testing gave good insight into the requirements of such an application. User reviews were carried out, showing that students embrace the idea of similar applications. Finally, GEOWAPP showed some learning-enhancing characteristics. However, a test with a real course remains to be carried out to determine whether it is beneficial to students.
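As an illustration of the kind of accuracy check such a tool could automate, the sketch below computes the angular misclosure of a closed traverse and compares it to a tolerance; the tolerance constant and the example angles are hypothetical, and the actual GEOWAPP tools are not reproduced here.

```python
import math

def angular_misclosure_sec(interior_angles_deg):
    """Closed traverse check: interior angles should sum to (n - 2) * 180 degrees.
    Returns the misclosure in arc-seconds."""
    n = len(interior_angles_deg)
    return (sum(interior_angles_deg) - (n - 2) * 180.0) * 3600.0

def allowable_misclosure_sec(n, k_sec=30.0):
    """Rule-of-thumb tolerance k * sqrt(n) arc-seconds; k_sec is an assumed constant."""
    return k_sec * math.sqrt(n)

angles = [89.9990, 90.0010, 90.0025, 89.9980]   # hypothetical field angles (degrees)
miss = angular_misclosure_sec(angles)
print(f"misclosure {miss:+.1f}\" vs allowable +/-{allowable_misclosure_sec(len(angles)):.1f}\"")
```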
A scalable web tiled map management system
by Menelaos Kotsollaris, Modern map visualizations are built using data structures for storing tile images, while their main concerns are to maximize efficiency and usability. The core functionality of a web tiled map management system is to provide tile images to the end user; several tiles combined constitute the web map. This thesis presents a comprehensive end-to-end analysis for developing and testing scalable web tiled map management systems. To achieve this, several data structures are showcased and analyzed. Specifically, this thesis focuses on the SimpleFormat, which stores the tiles directly on the file system; the ImageBlock, which divides each tile folder (a folder where the tile images are stored) into subfolders that contain multiple tiles prior to storing the tiles on the file system; the LevelFilesSet, a data structure that creates dedicated random-access files, wherein the tile dataset is first stored and then parsed to retrieve the tile images; and, finally, the LevelFilesBlock, a hybrid data structure which combines the ImageBlock and LevelFilesSet data structures. This work marks the first time this hybrid approach has been implemented and applied in a web tiled map context. Each data structure was implemented in Java. The JDBC API was used for integrating with the PostgreSQL database, which was then used to conduct cross-testing amongst the data structures. Subsequently, several benchmark tests on local and cloud environments are developed anew and assessed under different system configurations to compare the data structures and provide a thorough analysis of their efficiency. These benchmarks showcased the efficiency of LevelFilesSet, which retrieved tiles up to 3.3 times faster than the other data structures. Peripheral features and principles of implementing scalable web tiled map management systems among different software architectures and system configurations are analyzed and discussed.
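To make the difference between the first two data structures concrete, here is a minimal Python sketch of a SimpleFormat-style store (one file per tile) and an ImageBlock-style store (tiles grouped into sub-folder blocks). The z/x/y path layout and block size are assumptions for illustration; the thesis implementations are in Java and differ in detail.

```python
from pathlib import Path

class SimpleFormatStore:
    """One PNG per tile at <root>/<z>/<x>/<y>.png (a common layout, assumed here)."""
    def __init__(self, root):
        self.root = Path(root)

    def put(self, z, x, y, data: bytes):
        path = self.root / str(z) / str(x) / f"{y}.png"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, z, x, y) -> bytes:
        return (self.root / str(z) / str(x) / f"{y}.png").read_bytes()

class ImageBlockStore(SimpleFormatStore):
    """Groups tiles into fixed-size blocks of subfolders to limit files per directory."""
    def __init__(self, root, block=64):
        super().__init__(root)
        self.block = block

    def _path(self, z, x, y):
        bx, by = x // self.block, y // self.block
        return self.root / str(z) / f"{bx}_{by}" / f"{x}_{y}.png"

    def put(self, z, x, y, data: bytes):
        path = self._path(z, x, y)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, z, x, y) -> bytes:
        return self._path(z, x, y).read_bytes()
```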
Accuracy of the classical height system
by Ismael Foroughi, Measuring the quality of the classical height system through its self-consistency (congruency) is investigated in this dissertation. Congruency is measured by comparing the geoidal heights determined from a gravimetric geoid model with test geoidal heights derived at GNSS/levelling points. The components of this measurement are computed as accurately as possible: the Stokes-Helmert approach is used to determine the geoid model, gravimetric and topographic corrections are applied to the spirit-levelling observations to derive rigorous orthometric heights at the test points, and the geodetic heights are taken from GNSS observations. Four articles are included in this dissertation. The first discusses a modification of the Stokes-Helmert approach for using the optimal contribution of Earth gravitational models and the local data. The second paper applies the methodology presented in the first paper and presents detailed results for a test area. The third paper discusses the accuracy of the classical height system against Molodensky's system and presents a numerical study showing that the classical system can be computed as accurately as Molodensky's. The last paper presents a methodology to find the most probable solution of the downward continuation of surface gravity to the geoid level using the least-squares technique. The uncertainties of the geoidal heights are estimated using least-squares downward continuation and an a priori variance matrix of the input gravity data. The total estimate of the uncertainties of the geoidal heights confirms that the geoid can be determined with sub-centimetre accuracy in flat areas when, mainly, the effect of topographic mass density is taken into account properly, the most probable solution of downward continuation is used, and the improved satellite-only global gravitational models are merged with local data optimally.
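For reference, the congruency test described above rests on the standard relation among the three height types at a GNSS/levelling benchmark, written here in LaTeX (symbols follow common usage rather than the dissertation's notation):

```latex
% Misclosure at a GNSS/levelling point: geodetic height h (GNSS),
% rigorous orthometric height H^{O} (corrected spirit levelling),
% gravimetric geoidal height N (Stokes-Helmert geoid model).
\varepsilon = h - H^{O} - N
% Self-consistency requires \varepsilon to be compatible with the combined uncertainty
\sigma_{\varepsilon}^{2} \approx \sigma_{h}^{2} + \sigma_{H^{O}}^{2} + \sigma_{N}^{2}
```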
An IoT platform for occupancy prediction using support vector machine
by Alec Parise, The Internet of Things (IoT) is a network of devices able to connect, interact and exchange data without human intervention. Most of today's research focuses on collecting indoor sensor data with the purpose of reducing the operating costs of facilities management. Innovative approaches ranging from context-aware sensing platforms to dynamic robot sensing have been proposed in previous research work, but the challenge remains in understanding how sensor data can be used to predict occupancy usage patterns in smart buildings. This research aims at developing a non-intrusive sensing method for gathering sensor data for predicting occupancy usage patterns in indoor environments. Several potential applications can benefit from occupancy prediction, such as smart building management systems, where accurate occupancy classification and prediction can be communicated to the HVAC system to optimize energy consumption. Towards this end, an IoT platform based on an open-source architecture consisting of Arduino and Raspberry Pi 3 B+ is designed and deployed in three different environments at two university campuses. By utilizing temperature and humidity sensors for observing indoor environmental characteristics, combined with PIR motion sensors, CO2 sensors, and sound detectors, a robust occupancy detection model is created, and by applying a Support Vector Machine, occupancy usage patterns are predicted. This IoT platform is low-cost and highly scalable, both in terms of the variety of on-board sensors and the portability of the sensor nodes, which makes it well suited for multiple applications related to occupancy usage and environmental monitoring.
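A minimal sketch of the prediction step, assuming the platform's sensor streams have already been aggregated into a feature matrix (temperature, humidity, CO2, PIR counts, sound level) with occupancy labels; file names and parameters are placeholders, and the thesis's feature engineering is not reproduced here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per time window with
# [temperature_C, humidity_pct, co2_ppm, pir_motion_count, sound_level]
X = np.load("features.npy")          # assumed preprocessed sensor features
y = np.load("occupancy_labels.npy")  # assumed 0/1 (vacant/occupied) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```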
An evolutionary graph framework for analyzing fast-evolving networks
by Maduako Derek Ikechukwu, Fast-evolving networks are real-world networks whose structure changes and becomes denser over time as the number of edges and nodes grows, and whose properties are updated frequently. Due to the dynamic nature of these networks, many are too large and complex to deal with when generating new insights into their evolution process. One example is the Internet of Things, which is expected to generate massive networks of billions of sensor nodes embedded into smart city infrastructure. This PhD dissertation proposes a Space-Time Varying Graph (STVG) as a conceptual framework for modelling and analyzing fast-evolving networks. The STVG framework aims to model the evolution of a real-world network across varying temporal and spatial resolutions by integrating time-trees, subgraphs and projected graphs. The proposed STVG is developed to explore evolutionary patterns of fast-evolving networks using graph metrics, ad-hoc graph queries and a clustering algorithm. This framework also employs a whole-graph approach to reduce the high storage overhead and computational complexity associated with processing massive real-world networks. Two real-world networks have been used to evaluate the implementation of the STVG framework using a graph database. The overall results demonstrate the application of the STVG framework for capturing operational-level transit performance indicators such as schedule adherence, bus stop activity, and bus route activity ranking. Finally, another application of the STVG reveals evolving communities of densely connected traffic accidents over different time resolutions.
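The dissertation implements the STVG in a graph database; the in-memory sketch below only illustrates the underlying idea of attaching timestamps to edges and slicing the graph at a chosen temporal resolution. The transit events and time window are hypothetical.

```python
import networkx as nx
from datetime import datetime

G = nx.MultiDiGraph()
# Hypothetical transit events: bus travels from stop u to stop v at a timestamp
events = [("A", "B", "2016-03-01T08:05"), ("B", "C", "2016-03-01T08:12"),
          ("A", "B", "2016-03-01T17:40")]
for u, v, t in events:
    G.add_edge(u, v, time=datetime.fromisoformat(t))

def snapshot(G, start, end):
    """Return the subgraph of edges whose timestamp falls in [start, end)."""
    H = nx.MultiDiGraph()
    H.add_edges_from((u, v, k, d) for u, v, k, d in G.edges(keys=True, data=True)
                     if start <= d["time"] < end)
    return H

morning = snapshot(G, datetime(2016, 3, 1, 6), datetime(2016, 3, 1, 10))
print(morning.number_of_edges())   # edges observed in the morning window
```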
Application of footstep sound and lab colour space in moving object detection and image quality measurement using opposite colour pairs
by Aditya Roshan, This PhD dissertation focuses on two of the major tasks of an Atlantic Innovation Fund (AIF) sponsored "Triple-sensitive Camera Project". The first task focuses on the improvement of moving object detection techniques, and the second on the evaluation of camera image quality. Cameras are widely used in security, surveillance, site monitoring, traffic, military, robotics, and other applications, where detection of moving objects is critical and important. Information about image quality is essential in moving object detection. Therefore, detection of moving objects and quality evaluation of camera images are two of the critical and challenging tasks of the AIF Triple-sensitive Camera Project. In moving object detection, frame-based and background-based are the two major classes of techniques that use video as a data source. Frame-based techniques use two or more consecutive image frames to detect moving objects, but they only detect the boundaries of moving objects. Background-based techniques use a static background that needs to be updated in order to compensate for light changes in the camera scene. Many background modelling techniques involving complex models are available, which make the entire procedure very sophisticated and time-consuming. In addition, moving object detection techniques need to find a threshold to extract a moving object, and different thresholding methodologies generate varying threshold values, which also affect the results of moving object detection. When it comes to quality evaluation of colour images, existing full-reference methods need a perfect colour image as a reference, and no-reference methods use a grey image generated from the colour image to compute image quality. However, it is very challenging to find a perfect reference colour image, and when a colour image is converted to a grey image for quality evaluation, neither colour information nor human colour perception is available for the evaluation. As a result, different methods give varying quality scores for an image, and it becomes very challenging to evaluate the quality of colour images based on human vision. In this research, single moving object detection using the frame differencing technique is improved using the footstep sound produced by the moving object present in the camera scene, and the background subtraction technique is improved by using opposite colour pairs of the Lab colour space and implementing spatial-correlation-based thresholding techniques. The novel thresholding methodologies consider the spatial distribution of pixels in addition to the statistical distribution used by existing methods. For the four videos captured under different scene conditions used to measure improvements, the frame differencing technique shows an improvement of 52% in object detection rate when footstep sound is considered. Other frame-based techniques, such as those using optical flow and the wavelet transform, are also improved by incorporating footstep sound. The background subtraction technique produces better outputs in terms of completeness of a moving object when opposite colour pairs are used with thresholding based on spatial autocorrelation. The developed technique outperformed background subtraction techniques using the most common thresholding methodologies. For image quality evaluation, a new no-reference image quality measurement technique is developed which produces a quantitative image quality score close to that assigned by human eyes. The SCORPIQ technique developed in this research is independent of a reference image, image statistics, and image distortions. Colour segments of an image are spatially analysed using the colour information available in the Lab colour space. Quality scores from the SCORPIQ technique on the LIVE image database are well differentiated, whereas existing methods give similar scores for visually different images. Compared to the visual quality scores available with the LIVE database, the quality scores from the SCORPIQ technique are 3 times more distinguishable, and they are 4 to 20 times more distinguishable than statistics-based results, which also do not follow the quality scores assigned by human eyes.
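A compact OpenCV sketch of two of the ideas described above, background subtraction on the opponent-colour (a*, b*) channels of Lab and plain frame differencing, is given below. The fixed thresholds are placeholders: the thesis derives thresholds from spatial autocorrelation, and the footstep-sound gating and SCORPIQ scoring are not reproduced here.

```python
import cv2
import numpy as np

def moving_mask_lab(frame_bgr, background_bgr, thresh=12.0):
    """Background subtraction on the opponent-colour channels (a*, b*) of CIE Lab.
    A fixed threshold is used here; the thesis derives it from spatial autocorrelation."""
    lab_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab_b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    diff = np.linalg.norm(lab_f[:, :, 1:] - lab_b[:, :, 1:], axis=2)  # a*, b* distance
    return (diff > thresh).astype(np.uint8) * 255

def frame_difference(prev_gray, curr_gray, thresh=25):
    """Plain frame differencing; in the thesis the detection is gated by footstep sound."""
    return cv2.threshold(cv2.absdiff(prev_gray, curr_gray), thresh, 255, cv2.THRESH_BINARY)[1]
```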
Atmospheric delay modelling for ground-based GNSS reflectometry
by Thalia Nikolaidou, Several studies have demonstrated the utility of global navigation satellite system reflectometry (GNSS-R) for ground-based coastal sea-level altimetry. Recent studies evidenced the presence of atmospheric delays in GNSS-R sea-level retrievals and by-products such as tidal amplitudes. On the one hand, several ad-hoc atmospheric correction formulas have been proposed in the literature. On the other hand, ray-tracing studies applied to GNSS-R provide little information about the methods and algorithms involved. This dissertation is based on three articles which establish the theoretical framework of the atmospheric delay experienced in ground-based GNSS-R altimetry. In the first article, we defined the atmospheric interferometric delay in terms of the direct and reflected atmospheric delays as well as the vacuum distance and radio length. Then, we clarified the roles of linear and angular refraction, derived the respective delays and combined them in the total delay. We also introduced for the first time two subcomponents of the atmospheric geometric delay, the geometric shift and the geometric excess, unique to reflected signals. The atmospheric altimetry correction necessary for unbiased sea-level retrievals was defined as half the rate of change of the atmospheric delay with respect to the sine of the satellite elevation angle. We developed a ray-tracing procedure to solve rigorously the three-point boundary value problem involving transmitting satellite, reflecting surface, and receiving antenna. We hence evaluated the atmospheric bias in sea-level retrievals for a range of typical scenarios, showing its dependence on elevation angle and reflector height. In the second article, we demonstrated that rigorous ray-tracing of the bent ray can be simplified by a judicious choice of rectilinear wave propagation model. This facilitates adaptation of existing GNSS ray-tracing procedures, besides offering numerical and speed advantages. Further, it was emphasized that mapping functions developed for GNSS positioning cannot be reused for GNSS-R purposes without adaptation. In the third article, we developed closed-form expressions of the atmospheric delay and altimetry correction for end-users without access to, or expertise in, ray-tracing. These expressions rely only on the direct elevation bending and the mean refractivity at the site. Finally, we determined cut-off values of elevation angle and reflector height for neglecting atmospheric delays. These limiting conditions are useful in observation planning and error budgeting of GNSS-R altimetry retrievals.
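In the notation used here (symbols follow common usage rather than the articles' exact notation), the two quantities defined above can be summarized as:

```latex
% Atmospheric interferometric delay: reflected minus direct atmospheric delay
\Delta d_{\mathrm{atm}} = d_{\mathrm{atm}}^{\mathrm{refl}} - d_{\mathrm{atm}}^{\mathrm{dir}}
% Altimetry correction for unbiased sea-level retrievals: half the rate of change
% of the atmospheric delay with respect to the sine of the satellite elevation angle e
\Delta H_{\mathrm{atm}} = \tfrac{1}{2}\,\frac{\partial\,\Delta d_{\mathrm{atm}}}{\partial \sin e}
```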
Automatic processing of Arctic crowd-sourced hydrographic data while improving bathymetric accuracy and uncertainty assessment
by Khaleel Arfeen, Melting sea ice has led to an increase in navigation in Canadian Arctic waters. However, these waters are sparsely surveyed and pose a risk to mariners. Recognizing this issue, the Government of Canada has granted funds towards the development of a pilot program to begin collecting bathymetric data through a novel crowd-sourced approach. The project is a coalition between four Canadian partners from across the country; the University of New Brunswick's Ocean Mapping Group is tasked with processing the collected data, and this thesis focuses on that aspect. Through an automated approach, the data have been processed, with the end product being final depth measurements with their associated uncertainties. The software is Python-based and has been broken down into several modules to complete the task at hand. Using specialized hydrographic equipment designed to be low-cost and simple to operate, participating communities in the Canadian Arctic have been given the opportunity to collect bathymetric data while traversing their local waterways. With the pilot phase of the project complete, this thesis delves into the steps taken to fulfill the processing goals. The primary focus is how the processing workflow was automated while mitigating errors and achieving transparency in the uncertainty assessment of the crowd-sourced bathymetric (CSB) data. Particular emphasis is placed upon the issues of collecting valuable hydrographic data from the Arctic, with analysis of different methods to process the data with efficiency in mind. These challenges include obtaining a reliable GNSS signal through post-processing, qualification of the GNSS data for vertical referencing, utilizing the HYCOM hydrodynamic model to obtain sound velocity profiles, and the identification and quantification of uncertainty as part of the Total Propagated Uncertainty (TPU) model. Several case-study examples are given in which an investigation is conducted using processed collected and/or model data. Discussions surround the results of multi-constellation versus single-constellation GNSS in the Arctic and the effects on the qualification rate for use as vertical referencing. Similarly, work towards comparing the model used to obtain SVP data with equivalent real-world data collected by the Canadian Coast Guard is discussed. Finally, uncertainty has been quantified and assessed for the collected data, and the results of the uncertainty assessment are provided using CHS/IHO survey standards as a benchmark.
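As a simplified illustration of the TPU idea, the sketch below combines independent 1-sigma uncertainty components in quadrature; the component names and magnitudes are placeholders, and the thesis's TPU model (benchmarked against CHS/IHO standards) is considerably more detailed.

```python
import numpy as np

def total_propagated_uncertainty(sigmas):
    """Combine independent 1-sigma uncertainty components (metres) in quadrature.
    Component names and values are illustrative placeholders only."""
    return float(np.sqrt(np.sum(np.square(list(sigmas.values())))))

components = {
    "gnss_vertical": 0.15,   # post-processed GNSS height
    "sound_speed":   0.10,   # HYCOM-derived sound velocity profile
    "transducer":    0.05,   # echo-sounder measurement noise
    "water_level":   0.20,   # vertical-datum / tide reduction
}
print(f"TPU (1-sigma): {total_propagated_uncertainty(components):.2f} m")
```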
Building detection in off-nadir very high resolution satellite images based on stereo 3D information
by Alaeldin Suliman, Mapping or updating maps of urban areas is crucial for urban planning and management. Since buildings are the main objects in urban environments, building roof detection is an important task in urban mapping. The ideal geospatial data source for mapping building information is very high resolution (VHR) satellite imagery. Furthermore, because buildings are elevated objects, incorporating their heights in building detection can significantly improve the accuracy of the mapping. The most cost-effective source for extracting height information is stereo VHR satellite images, which can provide two types of stereo 3D information: elevation and disparity. However, most VHR images are acquired off-nadir. This acquisition type causes building leaning in the images and creates major challenges for the incorporation of building height information into roof detection. Thus, this PhD research focuses on finding solutions to mitigate the problems associated with 3D-supported building detection in off-nadir VHR satellite images. It also exploits the potential of extracting disparity information from off-nadir image pairs to support building detection. In this research, several problems associated with building leaning need to be solved, such as the offset of a building roof from its footprint, object occlusion, building façades, and the variation of roof offsets with building height. While the building-roof offsets create difficulties in the co-registration between image and elevation data, the building façades and occlusions create challenges in automatically finding matching points in off-nadir image pairs. Furthermore, because of the variation in building-roof offsets, the mapped roofs extracted from off-nadir images cannot be directly geo-referenced to existing maps for effective information integration. In this PhD dissertation, all of the above problems are addressed in a progressively improving manner (i.e., solving the problems one after another while improving efficiency) within the context of 3D-supported building detection in off-nadir VHR satellite images. Firstly, an image-elevation co-registration technique is developed that is more efficient than the currently available techniques. Secondly, the computation cost is reduced by generating disparity information instead of the traditional elevation data, which allows bypassing a few time-consuming steps of the traditional method. Thirdly, the disparity generation is extended from using one pair of off-nadir images to using multiple pairs to achieve an enriched disparity map. Finally, the enriched disparity maps are used to efficiently derive elevations that are directly co-registered, with pixel-level accuracy, to the selected reference image. Based on these disparity-based co-registered elevations, building roofs are successfully detected and accurately geo-referenced to existing maps. The outcome of this PhD research proved the possibility of using off-nadir VHR satellite images for accurate urban building detection. It significantly increases the data source scope for building detection, since most (> 95%) of VHR satellite images are off-nadir and traditional methods cannot effectively handle off-nadir images. Ph.D. dissertation, University of New Brunswick, Department of Geodesy and Geomatics Engineering, 2017.
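For orientation only, the sketch below generates a disparity map from an epipolar-rectified image pair with OpenCV's semi-global matcher and converts it to rough relative heights using an assumed ground sample distance and base-to-height ratio; the multi-pair disparity enrichment and pixel-level co-registration developed in the dissertation are not reproduced here.

```python
import cv2
import numpy as np

# Assumes an epipolar-rectified off-nadir image pair, 8-bit grayscale (file names are placeholders).
left = cv2.imread("left_rectified.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.tif", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point * 16

# Rough conversion of disparity (pixels) to relative height (metres) using the
# ground sample distance and base-to-height ratio; both values are assumptions.
GSD = 0.5                 # metres per pixel
BASE_TO_HEIGHT = 0.3      # B/H ratio of the stereo pair
relative_height = disparity * GSD / BASE_TO_HEIGHT
```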
Carrier-phase multipath mitigation in RTK-based GNSS dual-antenna systems
by Luis Serrano, Carrier-phase multipath mitigation in GPS/GNSS real-time kinematic (RTK) mode has been studied for several years, at least since on-the-fly ambiguity resolution techniques were introduced and receiver hardware improved to the point that GNSS RTK-based systems provide position estimates at the mm to cm level of accuracy in real time. This level of accuracy has heralded a new era of applications in which GNSS RTK-based techniques have become a very practical navigation tool, especially in the fields of machine automation, industrial metrology, control, and robotics. However, this surge in accuracy, tied to real-time capabilities, comes with a cost: one must also ensure continuity and integrity (safety). Typical users of these systems do not expect heavy machinery, guided and/or controlled by GNSS-based systems, to output erroneous solutions even in challenging multipath environments. In multipath-rich scenarios, phase-multipath reflections can seriously degrade the RTK solutions and, in the worst cases, integer fixed solutions are no longer available. This dissertation deals with these scenarios, in which the rover algorithms must handle multiple reflections and, in real time, be able to ameliorate or mitigate their effect. GNSS-based heading/attitude is usually obtained by combining the data from two or more antennas (also known as a moving baseline). Many companies provide commercial systems based on this technique, hence this dissertation finds its main applicability here. Typical heavy construction machinery includes dozers, motor-graders, excavators, scrapers, etc., which are increasingly being equipped with GNSS dual-antenna systems to provide positioning and orientation information to the operator. Data were not collected from one of these machines, although the author has worked extensively with such machinery and their GNSS-based systems; however, the theory developed throughout this dissertation, and the proof of concept through controlled tests that mimic the machinery and installed GNSS dual-antenna systems, form the basis of this dissertation. Moreover, the algorithms developed here are meant to be used independently of the receiver hardware, as well as of the GNSS signals, hence GLONASS and/or Galileo signals can be processed too. This dissertation is based on the fundamental relationship between multiple multipath reflections from close-by strong reflectors and their effect on GNSS RTK-based dual-antenna systems. Two questions were answered: first, is it possible to retrieve strong multipath reflectors in kinematic applications? Second, once these strong reflectors are correctly identified, how accurate and reliable are the corrections to the raw carrier-phase multipath, knowing that the host platform performs unpredictable manoeuvres? Based on the results, we can conclude that it is possible to estimate multipath parameters in real time based on a strong effective reflector. In most of the tests it takes at least 2 minutes to obtain initial values (after Kalman filter convergence). Once they are determined, multipath corrections can be computed straightforwardly for each satellite being tracked, as long as there are no cycle slips (mostly due to the combination of the machinery's high dynamics, especially within the areas where the antennas are located, and the machinery itself momentarily blocking satellite signals).
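For context, the classical single-reflector carrier-phase multipath model, not the dissertation's dual-antenna estimator, is sketched below; the reflector height, relative amplitude and elevation angles are illustrative values.

```python
import numpy as np

L1_WAVELENGTH = 0.1903  # metres (GPS L1)

def phase_multipath_error(elev_deg, reflector_height_m, alpha=0.5, wavelength=L1_WAVELENGTH):
    """Classical single-reflector model for a horizontal reflector a distance h below the antenna:
    the reflected signal (relative amplitude alpha) travels an extra 2*h*sin(e), and the resulting
    carrier-phase error is atan(alpha*sin(psi) / (1 + alpha*cos(psi))), returned here in metres."""
    e = np.radians(elev_deg)
    psi = 4.0 * np.pi * reflector_height_m * np.sin(e) / wavelength  # excess phase of the reflection
    dphi = np.arctan2(alpha * np.sin(psi), 1.0 + alpha * np.cos(psi))
    return wavelength * dphi / (2.0 * np.pi)

elevations = np.arange(5, 90, 5)
print(phase_multipath_error(elevations, reflector_height_m=1.5))
```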
Coupling of repetitive multibeam surveys and hydrodynamic modelling to understand bedform migration and delta evolution
by Danar Guruh Pratomo, This study addresses channelized delta-top sediment transport on the Squamish estuary in Howe Sound, British Columbia. The mechanism of bedform migration and delta evolution is affected by the manner in which the available sediment flux from the feeder fluvial system is distributed. The present study is complementary to a parallel project looking at the sediment migrating on the delta slope as landslides or turbidity currents. The termination of the Squamish River consists of a single channel that flows between flanking intertidal sand bars and over a mouth bar at the lip of the delta. The delta front is growing rapidly, with about 1 million m³ of sediment being input from the river system annually. There is a 3 to 5 m tidal range that strongly modulates the flow in the channel and over the adjacent intertidal sand banks. In 2011, the delta-top channel was surveyed every 3 to 4 days at high water, over a period of 4 months during which the river discharge waxed and waned and the tides ranged from springs to neaps. In 2012 and again in 2013, the channel was surveyed daily over a week while the tides increased from neaps to springs. In order to understand the sediment transport mechanism in this estuary, this research parameterized the short-wavelength bedform morphology and the long-wavelength channel shape on the delta top, extracted the shape of the delta lip, and used volumetric characterization of the sediment on the delta top and in the vicinity of the delta lip. A three-dimensional hydrodynamic model was also built to predict the flow within the river, over the delta top, and in the adjacent fjord over the complete tidal cycle, so that the bed shear stress associated with tidal modulation and river discharge could be quantified. This research shows that the short-wavelength bedform characteristics and long-wavelength channel shape are primarily a result of the low-water period, when the off-delta flows are strongest. The flow fields of the research area are dominated by the tidal modulation; however, the river surge also plays a role during the high-flow regime. Good correlation was demonstrated between flow conditions (parameterized by the Froude number) and all of: the bedform roughness, the bedform mobility, the 1D bedform roughness spectra, and the off-delta sediment flux. These relationships indicate that a single snapshot of the riverbed morphology could potentially be used to estimate sediment transport conditions.
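Since the flow conditions above are parameterized by the Froude number, a one-line reminder of its computation follows; the velocity and depth values are illustrative only, not survey results.

```python
import math

def froude_number(mean_velocity, depth, g=9.81):
    """Depth-averaged Froude number Fr = U / sqrt(g * h); Fr > 1 indicates supercritical flow."""
    return mean_velocity / math.sqrt(g * depth)

# Illustrative values only: 2 m/s flow over a 3 m deep delta-top channel
print(froude_number(mean_velocity=2.0, depth=3.0))   # ~0.37, subcritical
```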
