Geodesy and Geomatics Engineering Technical Reports


Unsupervised detection of opium poppy fields in Afghanistan from EO-1 Hyperion data
Satellite remote sensing has special advantages for monitoring the extent of illegal drug production, which causes serious problems for global society. Although remote sensing has been used to monitor opium poppy fields, the main data employed have been high-resolution images (≤ 1 m) such as pan-sharpened IKONOS and QuickBird. These images are costly, making full coverage of the crop fields in a large area an expensive exercise. As an alternative, the imagery acquired by EO-1 Hyperion, currently the only available spaceborne hyperspectral sensor, is free, although its spatial resolution is coarser (30 m). Until now, there has been little evidence that poppy fields can be identified from aerial or satellite hyperspectral images. This thesis proposes two unsupervised methods, one based on multiple endmember spectral mixture analysis (MESMA) and one on mixture tuned matched filtering (MTMF), that detect poppy fields in Afghanistan directly from Hyperion data. Of the two, the MTMF-based method is much more computationally efficient. Moreover, the MTMF-based method performed well in both of the main environments in Afghanistan. In addition, it was found that the moderate-spatial-resolution EO-1 Advanced Land Imager (ALI) multispectral data could not produce reasonable detection of poppy fields in Afghanistan.
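The abstract does not give the thesis' exact MTMF formulation, but the classical matched-filter score that MTMF builds on can be sketched as follows. The band count, target spectrum, and background statistics below are purely illustrative; a score near 1 indicates a close match to the target endmember, while background pixels score near 0.

```python
import numpy as np

def matched_filter_scores(pixels, target, background_mean, background_cov):
    """Classical matched-filter abundance score underlying MTMF.

    pixels: (n, bands) array of spectra
    target: (bands,) target endmember spectrum (e.g., a poppy signature)
    """
    inv_cov = np.linalg.inv(background_cov)
    d = target - background_mean                # target departure from background
    w = inv_cov @ d / (d @ inv_cov @ d)         # matched-filter weight vector
    return (pixels - background_mean) @ w       # per-pixel score

# toy example: 3-band background clutter plus one pixel equal to the target
rng = np.random.default_rng(0)
bg = rng.normal(0.2, 0.01, size=(500, 3))
mean, cov = bg.mean(axis=0), np.cov(bg, rowvar=False)
target = np.array([0.5, 0.6, 0.4])
scores = matched_filter_scores(np.vstack([bg[:5], target]), target, mean, cov)
```

A pixel identical to the target scores exactly 1 under this normalization, which is what makes thresholding the score usable for unsupervised detection.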
Urban land cover classification and moving vehicle extraction using very high resolution satellite imagery
This Ph.D. dissertation reviews current techniques and develops improved techniques for the analysis of very high resolution (VHR) imagery of urban areas for two important applications: land cover classification and moving vehicle (and velocity) extraction. First, a comprehensive review is conducted of the current literature on urban land cover classification of VHR imagery. The review discusses the usefulness of two groups of spatial information used in both pixel-based and object-based classification approaches. The first group is spatial information inherent in the image, such as textural, contextual, and morphological (e.g., shape and size) properties of neighboring pixels; the second group is spatial information derived from ancillary data such as LiDAR and GIS vector data. The review provides guidelines on the use of spatial information for urban land cover classification of VHR images. Second, a novel multisource object-based classification framework is developed using the Cognition Network Language available in the eCognition® software package. The framework integrates VHR images and height point data for detailed classification of urban environments, and it addresses two important limitations of the current literature: the transferability of the framework to different areas and different VHR images, and the impact of misregistration between data layers on classification accuracy. The method was tested on QuickBird and IKONOS images, and overall classification accuracies of 92% and 86%, respectively, were achieved. The method offers a practical, fast, and easy-to-use (within eCognition) framework for classifying VHR imagery of small urban areas. Third, a combined object- and pixel-based image analysis framework is proposed to overcome the limitations of object-based approaches (lack of general applicability and automation) and pixel-based approaches (ignoring the spatial information of the image).
The framework consists of three major steps: image segmentation, feature extraction, and pixel-based classification. For the feature extraction step, a novel approach is proposed based on wavelet transforms. The approach is unsupervised and much faster than current techniques because it has a local scope and works on the basis of an image’s objects, not pixels. The framework was tested on WorldView-2, QuickBird, and IKONOS images of the same area acquired on different dates. Results show improvements of up to 17%, 10%, and 11% in the classification kappa coefficient for WorldView-2, QuickBird, and IKONOS, respectively, compared to using only the original bands of the image. Fourth, a novel object-based moving vehicle (and velocity) extraction method is developed using single WorldView-2 imagery. The method consists of three major steps: road extraction, moving vehicle change detection, and position and velocity estimation. Unlike recent studies in which vehicles are selected manually or semi-automatically using road ancillary data, the method automatically extracts roads and moving vehicles using object-based image analysis frameworks. Results demonstrate a promising potential for automatic and accurate traffic monitoring using a single WorldView-2 image.
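The improvements above are reported as kappa coefficients. As a small illustration of that statistic, Cohen's kappa can be computed from a confusion matrix; the 3-class matrix below is hypothetical, not from the dissertation.

```python
import numpy as np

def kappa_coefficient(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    observed = np.trace(c) / n                          # overall accuracy
    expected = (c.sum(axis=0) @ c.sum(axis=1)) / n**2   # chance agreement
    return (observed - expected) / (1.0 - expected)

# hypothetical 3-class result (e.g., buildings, vegetation, roads)
cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
kappa = kappa_coefficient(cm)
```

Kappa discounts agreement expected by chance, which is why classification studies report it alongside (or instead of) raw overall accuracy.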
User-side modelling and comparative analysis of airborne LiDAR errors
Project specifications are designed and enforced to determine whether delivered data meet required standards. However, rapid advancements in LiDAR data capture technologies have made it challenging for end users to validate the data and processes for fitness for use. The developed UDTEB model uses two approaches to fill this gap: 1) a deterministic approach employing the CMP and SBET (or equivalent) files of ALS surveys to extract the root mean square errors of points with respect to a trajectory and an estimated terrain, and 2) where these files are not available, a non-deterministic approach employing published LiDAR system performance reports to simulate flight conditions and estimate errors under defined conditions. To validate the UDTEB model, five areas of varying topography and land cover were investigated. TIN differencing and a new method for point-by-point comparison of checkpoints and corresponding LiDAR points, using square windows around the checkpoints, were employed. When the obstructions of the checkpoints were further categorized as “clear”, “light”, and “dense”, the average RMSE values observed were 0.06 m, 0.05 m, and 0.10 m, respectively. The UDTEB model proposes a method that equips end users to perform error budgeting from data acquisition to end-product creation and to validate the elevation accuracy of LiDAR data at a given confidence level. The method can be customized for a given error analysis task, allowing the user to include other error sources in the model. It can also be adapted for elevation error analysis of large datasets similar to LiDAR.
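The point-by-point checkpoint comparison can be sketched roughly as follows. The window size, the use of the mean LiDAR elevation within each window, and all coordinates are illustrative assumptions, not the thesis' exact procedure.

```python
import numpy as np

def checkpoint_rmse(checkpoints, lidar_points, half_window=0.5):
    """RMSE of elevation differences between surveyed checkpoints and nearby
    LiDAR returns, using a square window around each checkpoint.

    checkpoints, lidar_points: (n, 3) arrays of x, y, z
    """
    diffs = []
    for cx, cy, cz in checkpoints:
        mask = (np.abs(lidar_points[:, 0] - cx) <= half_window) & \
               (np.abs(lidar_points[:, 1] - cy) <= half_window)
        if mask.any():
            # mean LiDAR elevation in the window minus the checkpoint elevation
            diffs.append(lidar_points[mask, 2].mean() - cz)
    return float(np.sqrt(np.mean(np.square(diffs))))

# hypothetical data: two checkpoints, three LiDAR returns
cps = np.array([[0.0, 0.0, 10.0], [5.0, 5.0, 20.0]])
lidar = np.array([[0.1, 0.1, 10.05], [0.2, -0.1, 10.15], [5.1, 4.9, 19.9]])
rmse = checkpoint_rmse(cps, lidar)
```

In this toy case each checkpoint differs from its window mean by ±0.10 m, so the RMSE is 0.10 m.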
Verification of gravimetric geoidal models by a combination of GPS and orthometric heights
Gravimetric geoidal models such as “UNB Dec. ’86” and “UNB ’90” may be verified by a combination of GPS and orthometric heights. Ideally, the following relationship should equal zero: h − H − N, where h is the height above a reference ellipsoid obtained from GPS, H is the orthometric height, and N is the geoidal undulation obtained from the gravimetric model. In many cases users are interested in relative positioning, and the equation becomes Δ(h − H − N). This study looks at each aspect of these equations. The geometric height (or height difference) is defined, and the principal sources of error encountered in GPS levelling, such as tropospheric delay and orbit biases, are examined. The orthometric height (or height difference) is discussed by looking at various systems of height determination and deciding under which system the Canadian vertical network may be categorized, as well as what errors, and of what magnitude, are likely to be encountered. Orthometric heights are measured from the geoid, which in practice is difficult to determine. The surface from which these measurements are actually made, not in general coincident with the geoid, is investigated. The three campaigns discussed in this study – Northwest Territories, Manitoba, and Ontario – are in areas where levelled heights are referenced to the Canadian Geodetic Datum of 1928 in the case of the former two and the International Great Lakes Datum in the case of the latter. These two reference surfaces are discussed in some detail. The geoidal solutions “UNB Dec. ’86” and “UNB ’90” are described. The models are fairly similar, as both use the same modified version of Stokes's function so as to limit the area of the earth’s surface over which integration has to take place in order to determine the undulation at a point. “UNB ’90” makes use of an updated gravity data collection.
Both solutions make use of terrestrial data for the high-frequency contribution and a satellite reference field for the low-frequency contribution. “UNB Dec. ’86” uses the Goddard Earth Model GEM9, whereas “UNB ’90” uses GEM-T1. The implications of the change in reference field are discussed. All measurements are prone to error, and thus each campaign has associated with it a series of stations characterized by a misclosure obtained from h − H − N. These misclosures may be ordered according to any argument – latitude ɸ, longitude λ, orthometric height H, etc. – in order to search for a statistical dependency between the misclosure and its argument, or in other words, a systematic effect. The autocorrelation function will detect the presence of a systematic “error”, and least squares spectral analysis will give more information on the nature of this dependency. Both these tools are described, and their validity is demonstrated on a number of simulated data series. The field data collected during the three campaigns are analysed. The geometric and orthometric heights are combined with geoidal undulations from “UNB Dec. ’86” and then from “UNB ’90” using the misclosure h − H − N. The resulting data series are ordered according to various arguments and examined for the presence of systematic effects by means of the autocorrelation function and spectral analysis. Similar tests are carried out on the data series yielded by Δ(h − H − N), ordered according to azimuth and baseline length. Clear evidence of statistical dependency is detected. Reasons for these dependencies are discussed.
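The autocorrelation test for a systematic effect can be sketched minimally as below. The misclosure series here is synthetic (a slow drift plus noise, standing in for h − H − N values ordered by latitude); the study applies the same idea to real campaign data.

```python
import numpy as np

def autocorrelation(series, max_lag):
    """Sample autocorrelation of a data series; values far from zero at
    nonzero lags indicate a systematic dependency rather than white noise."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

# synthetic misclosure series: a drift (systematic effect) plus small noise
drift = np.linspace(0.0, 0.5, 100)
noise = np.random.default_rng(1).normal(0.0, 0.01, 100)
acf = autocorrelation(drift + noise, max_lag=3)
```

For pure white noise the lag-1 value would be near zero; the strong lag-1 correlation here is exactly the signature of a systematic effect that the study then probes further with least squares spectral analysis.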
Vessel heave determination using the Global Positioning System
This thesis investigation shows how the precise carrier-phase measurements available from the Global Positioning System in differential mode may be used to monitor the vertical motion of a ship, called heave. A model to utilize GPS observations in combination with ship attitude measurements has been devised and implemented. This model has been incorporated into a hydrographic navigation system produced by Nortech Surveys (Canada) Inc. for the Canadian Hydrographic Service [Rapatz and Wells, 1990]. Testing of this model using a static data set indicates accuracy levels on the order of five centimetres or less. Comparisons of GPS-measured heave with commercial heave sensor data during a ship cruise 100 kilometres offshore of Shelburne, N.S. reinforce this initial accuracy estimate. The investigation illuminates some of the advantages, disadvantages, and problems of using GPS for heave measurement and recommends areas of further research. The final conclusion is that, used appropriately, GPS is capable of accurately measuring vessel heave, even under circumstances in which commercial heave sensors may be incapable.
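The thesis' model combines carrier-phase GPS with attitude measurements; as a much simpler conceptual sketch, heave can be viewed as the high-frequency part of the vertical position series, separated here with a moving average. The sampling rate, window length, and wave amplitude below are made up for illustration and are not the thesis' method.

```python
import numpy as np

def simple_heave(heights, window):
    """Toy heave estimate: subtract a moving-average 'low-frequency' height
    (tide, drift) from measured vertical positions, leaving wave motion."""
    kernel = np.ones(window) / window
    low_freq = np.convolve(heights, kernel, mode="same")
    return heights - low_freq

# 1 Hz vertical positions: 5 m mean height plus a 0.3 m wave of 8 s period
n = np.arange(60)
heights = 5.0 + 0.3 * np.sin(2.0 * np.pi * n / 8.0)
heave = simple_heave(heights, window=8)
```

Away from the edges of the series, the moving average recovers the 5 m mean and the residual is the 0.3 m wave motion; a real implementation must of course handle tides, squat, and attitude-induced lever-arm effects.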
Visualization, statistical analysis, and mining of historical vessel data
An important area of research in marine information systems is the management and analysis of the large and increasing volume of maritime spatio-temporal datasets. There is a lack of systems that provide visualization and clustering techniques for large spatio-temporal datasets (Oliveira, 2012). This thesis describes the design and implementation of a prototype web-based system for visualizing, computing statistics on, and detecting outliers among moving vessels over a massive set of historic AIS data from the Aegean Sea in the Mediterranean. The historic AIS data were acquired from the Marine Traffic project (MarineTraffic, 2014), which collects the raw location points of vessels. The web-based system provides the following functionalities: (i) a user interface to upload the location points of vessels into a database, (ii) detailed and simplified trajectory construction from the uploaded location points, (iii) distance, speed, direction, and turn angle computation for the constructed trajectories, (iv) identification of vessels that intersect the European Union’s Natura 2000 protected areas, (v) identification of spatio-temporal outliers in the location points of vessels using the DBSCAN algorithm, and (vi) heat map visualization to show the traffic load and highlight sea zones of high risk. The architecture of the web-based system is based on open standards and allows for interoperable data access. The system was implemented using PHP as the server-side scripting language and the Google Maps API on the client side. Furthermore, improved system responsiveness and server performance were achieved through asynchronous client–server interaction using AJAX to send and receive requests. In addition, data transfer between client and server used the platform-independent and lightweight JSON format.
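The outlier step above uses DBSCAN; the core of its noise labelling can be sketched as below, flagging points with too few neighbours within a radius. This is only the noise criterion without the full cluster-expansion step, and the coordinates, eps, and min_samples values are illustrative, not the thesis' configuration.

```python
import numpy as np

def density_outliers(points, eps, min_samples):
    """Flag points with fewer than `min_samples` neighbours within `eps`
    as outliers -- the noise criterion DBSCAN applies, without cluster
    expansion. points: (n, 2) array of (lon, lat) positions in degrees."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    neighbour_counts = (dist <= eps).sum(axis=1)  # includes the point itself
    return neighbour_counts < min_samples

# dense cluster of AIS fixes plus one stray position report
pts = np.array([[24.00, 37.00], [24.01, 37.00], [24.00, 37.01],
                [24.01, 37.01], [25.50, 38.50]])
outlier = density_outliers(pts, eps=0.05, min_samples=3)
```

A production system would use a proper DBSCAN implementation with a spatial index and great-circle distances rather than this O(n²) degree-space toy.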
Web-based flood risk assessment: rapid, user-friendly tools leveraging open data
Timely and accurate prediction of flood inundation extent and of potential negative impacts and consequences is fundamental for the sustainable development of a region and allows decision makers and the local community to understand their exposure and vulnerability. Complex computer models exist for flood risk assessment, but while technologically sophisticated, these programs are intended primarily for use by a small number of technical and scientific experts and require considerable processing time and extensive inputs. These existing solutions are generally not well suited for flood prediction in near real time and often exceed the data available for any given community. This research developed standardized methods, adapted into user-friendly tools, that accept limited user input, are based on hydrologic principles and processes and widely accepted risk computation methods, and leverage open data. The developed flood mapping approaches access multiple open-source elevation datasets and, through a novel data fusion method, create a better-quality digital elevation model (DEM). This fused DEM is combined with other open-source data (e.g., IDF curves, river flow data, watershed boundaries, etc.) to generate a flood inundation surface through two methods: (i) a 0D bathtub model and (ii) a hybrid 1D/2D raster cell storage approach. The 0D model ignores flow rates and changes over time, producing a grid of the maximum spatial extent and depth, calculated as the difference between the terrain elevation and the computed water surface. The hybrid model solves a 1D kinematic wave approximation of the shallow water equations in the channel and treats the floodplain as 2D flooding storage cells. Water depths from the flood grid are combined with local inventory data (e.g., building structural type, occupancy, valuation, height of the first floor, etc.) to compute exposure and damage estimates in either a user-friendly MS Office application or a web-based API.
The developed methods and user-friendly tools allow non-experts to rapidly generate their own flood inundation scenarios on demand and assess risk, thus minimizing the gap between the existing sophisticated tools, designed for scientists and engineers, and community needs, in order to support informed emergency response and mitigation planning.
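The 0D bathtub computation described above reduces to a per-cell difference between the water surface and the terrain, clipped at zero for dry cells; the small terrain grid and water surface elevation below are hypothetical.

```python
import numpy as np

def bathtub_depth(dem, water_surface):
    """0D 'bathtub' inundation: flood depth is the water surface elevation
    minus the terrain elevation, clipped at zero (dry cells). Flow rates,
    timing, and hydraulic connectivity are ignored, per the 0D model."""
    return np.maximum(water_surface - dem, 0.0)

dem = np.array([[2.0, 3.5],
                [1.0, 4.0]])        # hypothetical terrain elevations (m)
depth = bathtub_depth(dem, water_surface=2.5)
```

Cells below the 2.5 m water surface flood to 0.5 m and 1.5 m; higher cells stay dry. The hybrid 1D/2D approach replaces this single static surface with channel routing and floodplain storage cells.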
Women and land reform in Brazil
In Brazil, rampant inequities severely affect women. The disparities are supported by entrenched social norms, a correspondingly discriminatory infrastructure, and deep-rooted inequitable land distribution. Consequently, poverty in Brazil is feminine, landless, and common. The 2001 Gini coefficient is 0.6 for income distribution and 0.8 for land distribution [Federal Republic of Brazil: Ministry of Agrarian Development, 2001, p. 1]. Thus, the situation of modern Brazilian women is unique because of the magnitude and scope of their challenges. To illustrate the interconnectedness of the inequities to which Brazilian women are subjected and the severe effects of discriminatory practices, Brazil’s tumultuous history of land struggles and varied aspects of modern Brazilian culture will be explored. Recently, feminine land ownership has gained even greater importance, as women’s poverty has been increasing along with their lack of resources. Their concerns have been subordinated to class battles, and in changing times, their traditional gender roles have forced them to accomplish more work and take on additional responsibilities. Because of the integrated and malleable character of their repression, it is important to explore the national social, political, and economic norms that have influenced the culture in which Brazilian women live, and the historic hierarchies (supported by restrictive land ownership) that have been maintained and strengthened through an evolving society. Because of the generalizations and limitations inherent in any paper that is not a restrictive case study, this work aims to provide a general overview of the current hardships surrounding Brazilian women and land obtainment. Specifically, this paper hopes to illustrate the problematic nature of inequities in Brazil, their severity, their interconnectedness, and their resistance to eradication.
Through this examination, it will become apparent that without continued pressure for comprehensive change, the majority of Brazilian society will likely remain poor, landless, and feminine.
