Urban land cover classification and moving vehicle extraction using very high resolution satellite imagery

This Ph.D. dissertation reviews current techniques and develops improved techniques for the analysis of very high resolution (VHR) imagery of urban areas for two important applications: land cover classification and moving vehicle (and velocity) extraction.

First, a comprehensive review is conducted of the current literature on urban land cover classification of VHR imagery. The review discusses the usefulness of two groups of spatial information used in both pixel-based and object-based classification approaches: spatial information inherent in the image, such as textural, contextual, and morphological (e.g., shape and size) properties of neighboring pixels, and spatial information derived from ancillary data such as LiDAR and GIS vector data. The review provides guidelines on the use of spatial information for urban land cover classification of VHR images.

Second, a novel multisource object-based classification framework is developed using the Cognition Network Language available in the eCognition® software package. The framework integrates VHR images and height point data for detailed classification of urban environments, and it addresses two important limitations of the current literature: the transferability of the framework to different areas and different VHR images, and the impact of misregistration between data layers on classification accuracy. The method was tested on QuickBird and IKONOS images, achieving overall classification accuracies of 92% and 86%, respectively. The method offers a practical, fast, and easy-to-use (within eCognition) framework for classifying VHR imagery of small urban areas.

Third, a combined object- and pixel-based image analysis framework is proposed to overcome the limitations of object-based approaches (lack of general applicability and automation) and pixel-based approaches (ignoring the spatial information of the image).
The framework consists of three major steps: image segmentation, feature extraction, and pixel-based classification. For the feature extraction step, a novel unsupervised approach based on wavelet transforms is proposed; it is much faster than current techniques because it has a local scope and operates on image objects rather than pixels. The framework was tested on WorldView-2, QuickBird, and IKONOS images of the same area acquired on different dates. Results show improvements in classification kappa coefficients of up to 17%, 10%, and 11% for the WorldView-2, QuickBird, and IKONOS images, respectively, compared with using only the original image bands.

Fourth, a novel object-based moving vehicle (and velocity) extraction method is developed using single WorldView-2 imagery. The method consists of three major steps: road extraction, moving vehicle change detection, and position and velocity estimation. Unlike recent studies, in which vehicles are selected manually or semi-automatically using road ancillary data, the method automatically extracts both roads and moving vehicles using object-based image analysis frameworks. Results demonstrate promising potential for automatic and accurate traffic monitoring using a single WorldView-2 image.
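The object-based wavelet feature extraction described above can be illustrated with a minimal sketch: a one-level 2-D Haar wavelet transform of an image patch, with the energy of the detail coefficients serving as simple texture features. This is only an illustration under simplifying assumptions (Haar wavelet, rectangular patch, single decomposition level), not the dissertation's exact algorithm.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar wavelet transform: returns the approximation band
    and the horizontal, vertical, and diagonal detail bands."""
    x = x.astype(float)
    # Filter and downsample along rows (low-pass and high-pass halves).
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Filter and downsample along columns.
    cA = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    cV = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    cH = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    cD = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return cA, (cH, cV, cD)

def texture_energy(patch):
    """Mean squared detail-coefficient energy per band, a simple
    per-object texture feature (illustrative only)."""
    _, details = haar_dwt2(patch)
    return {name: float(np.mean(d ** 2))
            for name, d in zip(('H', 'V', 'D'), details)}
```

A homogeneous object yields zero detail energy in all three bands, while textured objects (e.g., vegetation versus smooth rooftops) yield distinct energy signatures that a pixel-based classifier can use as additional features.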
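Velocity estimation from a single WorldView-2 acquisition is possible because the satellite's panchromatic and multispectral sensors image the scene a fraction of a second apart, so a moving vehicle appears at slightly different positions in the two views. A hedged sketch of the final estimation step, where `gsd_m` (ground sample distance) and `time_lag_s` are illustrative assumed values rather than sensor constants:

```python
import math

def vehicle_velocity(pos_pan, pos_ms, gsd_m=2.0, time_lag_s=0.2):
    """Estimate a vehicle's speed (m/s) and heading (degrees) from its
    pixel positions in two near-simultaneous views of the same scene.
    gsd_m and time_lag_s are illustrative assumptions, not sensor constants."""
    # Pixel displacement converted to metres on the ground.
    dx = (pos_ms[0] - pos_pan[0]) * gsd_m
    dy = (pos_ms[1] - pos_pan[1]) * gsd_m
    displacement = math.hypot(dx, dy)
    speed = displacement / time_lag_s
    heading = math.degrees(math.atan2(dy, dx)) % 360.0
    return speed, heading
```

For example, a two-pixel displacement at a 2 m ground sample distance over a 0.2 s lag corresponds to 20 m/s (72 km/h). In practice, the accuracy of such an estimate depends on co-registration quality and on how precisely the vehicle's position is extracted in each view.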