
Integration Of Aerial And Ground Data For Optimized 3D Modelling In Urban Areas

Posted on: 2020-02-25
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L F Xie
Full Text: PDF
GTID: 1360330590953927
Subject: Photogrammetry and Remote Sensing
Abstract/Summary:
Three-dimensional (3D) photorealistic models are fundamental to the spatial data infrastructure of a digital city and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modelling in urban areas. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, owing to problems such as occlusion and large camera tilt angles. Meanwhile, ground mobile mapping systems (MMSs) can capture ground images and range data of close-range objects from a complementary viewpoint on the ground at a high level of detail, but they do not offer full coverage. The integration of aerial and ground observations therefore offers promising opportunities for optimized 3D modelling in urban areas.

This thesis focuses on integrating these two types of observations to produce 3D point clouds and 3D models with optimal geometric accuracy, texture resolution, and regularity. The aerial images are collected by either multi-head camera systems mounted on aerial platforms or low-altitude UAVs, while the ground images are collected by either static ground photogrammetric stations or ground MMSs. By analysing the deficiencies and bottlenecks that hinder the joint use of aerial oblique images and ground data in city modelling, innovative strategies and algorithms are proposed to 1) geometrically fuse the observations from the two platforms and 2) produce optimized 3D mapping products and 3D building models.

Firstly, an image matching strategy is presented for improving the matching quality between UAV images and ground images. Making use of existing 3D clues from the aerial dataset, vertical base planes are fitted in object space and used to produce rectified images, which alleviate the projective distortion and scale differences in aerial-ground image pairs and thus enhance the feature matching quality between them. Interest points are extracted and matched on these synthetic images, and the matched tie points on the rectified images are then back-projected to their original images.
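To make the rectification-based matching concrete, the following Python fragment gives a minimal sketch, not the thesis implementation. It assumes the rectifying homographies H_a and H_g, which map an aerial and a ground image onto a common fronto-parallel view of a fitted vertical base plane, are already available (in the thesis they are derived from existing 3D clues in the aerial block); all function and variable names here are illustrative.

    import cv2
    import numpy as np

    def match_on_rectified(img_aerial, img_ground, H_a, H_g, size=(2000, 2000)):
        # Warp both images onto the common facade-aligned plane to reduce
        # the projective distortion and scale differences between the views.
        rect_a = cv2.warpPerspective(img_aerial, H_a, size)
        rect_g = cv2.warpPerspective(img_ground, H_g, size)

        # Extract and match interest points on the rectified (synthetic) pair.
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(rect_a, None)
        kp_g, des_g = sift.detectAndCompute(rect_g, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = [m for m, n in matcher.knnMatch(des_a, des_g, k=2)
                   if m.distance < 0.8 * n.distance]  # Lowe ratio test

        # Back-project the matched tie points to the original images by
        # applying the inverse rectifying homographies.
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        pts_g = np.float32([kp_g[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        orig_a = cv2.perspectiveTransform(pts_a, np.linalg.inv(H_a))
        orig_g = cv2.perspectiveTransform(pts_g, np.linalg.inv(H_g))
        return orig_a.reshape(-1, 2), orig_g.reshape(-1, 2)

The returned pixel coordinates can then serve as cross-platform tie points in the combined bundle adjustment described next.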
Secondly, the observations obtained by the ground platform are geometrically fused with the aerial dataset. A specialized combined bundle adjustment strategy is developed to overcome the uneven distribution of intra-platform and cross-platform tie points in aerial-ground image blocks and to yield the best estimates of the orientation parameters: rigid transformation parameters between the two platforms are estimated first, and a combined bundle adjustment with a specialized weighting strategy then follows, constrained by these parameters. In addition, an innovative method is developed to boost the absolute mapping accuracy of ground MMS laser scanning point clouds in object space by bridging them to the aerial oblique images through the ground MMS images.

Thirdly, the geometrically co-registered aerial-ground images are jointly used to produce merged point clouds and 3D mesh models with optimal geometric and texture quality. Point clouds generated from the two complementary platforms are first filtered with a depth-contradiction test to remove noise and ghost data. The aerial and ground points that best represent the building surfaces are then selected by taking point density, normal orientation, and region smoothness into global consideration, resulting in merged 3D point clouds. After that, 3D mesh models are generated from the merged point clouds, and image patches from either the aerial or the ground images are extracted to provide high-resolution textures.

Lastly, in order to extract high-quality building boundaries from noisy photogrammetric point clouds, a hierarchical building boundary regularization method is proposed. Beginning with planar structures detected in the raw building point clouds, two stages of regularization are employed. In the first stage, the boundary points of an individual plane are consolidated locally by shifting them along their refined normal vectors to resist noise and are then grouped into piecewise smooth segments. In the second stage, global regularities among segments from different planes are softly enforced through a labelling process in which the same label represents parallel or orthogonal segments (a simplified sketch is given at the end of this abstract). The resulting building boundaries fit the original boundary points faithfully while showing the preferred regularity.

The framework and modelling strategies presented in this thesis can effectively fuse aerial oblique images with ground images and laser scanning point clouds in a unified spatial framework, and the proposed methods produce 3D point clouds with optimal spatial coverage and geometric accuracy from aerial and ground observations. Furthermore, the photorealistic 3D mesh models generated with these methods show distinct improvements in geometric and texture quality, and the extracted 3D building boundaries are characterised by the preferred regularity and fidelity. The findings presented in this thesis extend current knowledge of integrating aerial and ground images for 3D modelling, advance photogrammetric and computer vision research on multi-source data integration, and facilitate potential applications in virtual geographic environments, urban planning, urban management, and the construction of urban digital twins.
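To make the labelling idea concrete, the following Python fragment is a deliberately simplified, hypothetical sketch rather than the thesis algorithm (which enforces the regularities softly through a global labelling). Segment directions are clustered modulo 90 degrees, so segments assigned the same label are snapped to exactly parallel or orthogonal directions; the names and the tolerance value are illustrative.

    import numpy as np

    def regularize_directions(segments, tol_deg=10.0):
        # segments: list of (p0, p1) endpoint pairs, each a 2D np.array.
        # Fold each segment direction into [0, 90) so that parallel and
        # orthogonal segments fall into the same angular bin.
        angles = np.array([np.degrees(np.arctan2(*(p1 - p0)[::-1])) % 90.0
                           for p0, p1 in segments])

        labels = -np.ones(len(segments), dtype=int)
        reps = []  # representative direction per label
        for i, a in enumerate(angles):
            for lbl, r in enumerate(reps):
                dist = min(abs(a - r), 90.0 - abs(a - r))  # distance mod 90
                if dist < tol_deg:
                    labels[i] = lbl
                    break
            if labels[i] < 0:  # open a new label for this direction
                labels[i] = len(reps)
                reps.append(a)

        # Snap each segment to its label's direction (or the orthogonal one),
        # keeping its midpoint and length, to obtain regularized boundaries.
        out = []
        for (p0, p1), lbl in zip(segments, labels):
            mid, half = (p0 + p1) / 2.0, np.linalg.norm(p1 - p0) / 2.0
            a0 = np.degrees(np.arctan2(*(p1 - p0)[::-1])) % 180.0
            cand = (reps[lbl], reps[lbl] + 90.0)
            ang = min(cand, key=lambda c: min(abs(a0 - c), 180.0 - abs(a0 - c)))
            u = np.array([np.cos(np.radians(ang)), np.sin(np.radians(ang))])
            out.append((mid - half * u, mid + half * u))
        return out, labels

A full implementation would instead trade snapping error off against data fidelity, for example with an energy that penalizes both deviation from the original boundary points and inconsistent labels, which is what a soft labelling formulation provides.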
Keywords/Search Tags: Aerial Oblique Photogrammetry, Ground Mobile Mapping System, Aerial-Ground Integration, 3D Reconstruction, 3D Modelling