
Research On Visual Field Reconstruction Method For Environment Perception Of Intelligent Vehicles

Posted on: 2023-02-03
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Bai
Full Text: PDF
GTID: 1522307031486214
Subject: Computer Science and Technology
Abstract/Summary:
With the development of intelligent and networked technologies in the automotive industry, research on autonomous driving and its industrial application have become a key focus for government, academia, and industry. Autonomous driving is an important direction of high-tech development that integrates disciplines such as automotive engineering, computer science, automation, communications, and artificial intelligence. Within the autonomous driving technology system, environment perception based on visual information underpins intelligent behaviors such as path planning and decision control. The reconstruction and spatial description of multimodal, heterogeneous information elements in complex environments is a central topic in current research on environment perception for intelligent vehicles.

This thesis focuses on key fundamental issues of vision-based environment perception for intelligent vehicles. It constructs a theoretical framework of the visual field built on spatiotemporal synergy and the fusion of multi-source heterogeneous information, and proposes a visual field structure together with its environmental elements to effectively express the autonomous driving state of an intelligent vehicle. The thesis studies in depth key technical issues including the relationship between the subjective vehicle of the visual field and objective targets in the environment, the intervisibility between them, and the reconstruction and visualization of the field-of-view scene. A quantitative spatial reconstruction method for the visual field is realized, achieving real-time, accurate, and robust comprehensive situational awareness of the environment. The effectiveness and engineering application value of the proposed methods are verified experimentally. The main research work includes the following aspects:

(1) A "Structure-from-Motion" reconstruction method is proposed, spanning from ego-motion estimation of the intelligent vehicle to depth measurement of the visual field environment. First, an ego-motion estimation method based on a quadruple finite element set is proposed: the globally rigid scene is decomposed into a piecewise slice-reconstruction problem of non-rigid motion, a combined energy function over the sliced motion of highly correlated features across the whole field is constructed, and closed four-loop matching with feedback-driven subpixel relocation is performed to extend feature tracking lifetime and improve the spatiotemporal consistency, accuracy, and robustness of isomorphic features. Second, accelerated computing techniques based on incremental integral mapping and tree-structured stacked storage are proposed to reduce the computational complexity of high-resolution, wide-baseline scenes. Finally, a depth measurement method based on a motion constraint model is proposed: exploiting the different relative motion relationships between the subjective vehicle and environmental objects, the model recovers depth in the visual field environment and addresses imaging of blind areas in the field-of-view. Experimental results show that the method reconstructs the ego-motion pose of the vehicle and the 3D information of the visual field environment in real time, more efficiently, accurately, and robustly.

(2) A visual field object detection method based on a multi-source information fusion model is proposed. First, multi-source heterogeneous information is pre-fused and registered within the fusion model to reduce redundant data and computational cost. Second, a neural network with stereo region proposals is trained to constrain the 2D boundaries of 3D object point clouds; a voxel-wise spatial index of the point cloud is constructed, and the comprehensive attributes and geometric distribution of the points are combined to perform super-voxel over-segmentation of the environmental 3D point cloud. Finally, 3D bounding boxes around objects are oriented, placed, and scored using 3D attribute information and semantic context, realizing instance segmentation of the field-of-view, 3D object detection, and object localization in the visual field. Experimental results show that the method efficiently, accurately, and robustly reconstructs objective target information even under sparse data, small objects, occlusion, or object stacking.

(3) The intervisibility between the visual field subject and environmental objects is studied, and an intervisibility analysis method based on a hydrodynamic model is proposed. First, the image plane of the 2D image and the 3D point cloud reconstructed from the environment are aligned via multi-dimensional point coordinates, and the field of view in motion is estimated to obtain the viewpoint position and the field-of-view point cloud within the effective line of sight. Second, the point cloud computation is transferred from Euclidean space to a Riemannian space: a Riemannian metric based on the hydrodynamic model is designed to construct a manifold auxiliary surface over the point cloud, resolving the spatial discontinuity and uneven distribution of the original massive point cloud and making point-to-point distance computation more accurate. Finally, a spectral analysis of the finite element topology is constructed on the manifold auxiliary surface, and geometric conditions on a hybrid planar computational structure are proposed as the analytic criterion for elevation values, realizing intervisibility analysis between dynamic viewpoints. Experimental results verify that the method reconstructs the intervisibility between the visual field subject and environmental objects dynamically, effectively, and robustly.

(4) A method of visual field information reconstruction and visualization based on a feature chain code model is proposed. First, the visual geometry model is initialized, and the parameters for distortion correction, epipolar rectification, and rigid-body motion transformation are calibrated. Second, the feature chain code model is established, and scene reconstruction based on feature chain code resampling is proposed to uniformly extract regional features under varying texture roughness and depth, realizing feature-detailed 3D point cloud reconstruction of the visual field. Third, outlier points and abnormal values are removed with a point cloud statistical filter and nearest-neighbor registration, and the reconstructed point clouds of adjacent motion states between two frames are fused and registered in a common coordinate system. Finally, a bilinear combination of surrounding multi-level progressive textures is used for texture mapping onto the mesh skeleton of the 3D point cloud, i.e., point cloud surface reconstruction, realizing scene rendering and roaming of the visual field environment. Experimental results show that the method produces uniform reconstructed textures with rich detail and satisfies the real-time requirements of scene reconstruction and visualization for wide-baseline scenes of intelligent vehicles.
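The depth-measurement step in contribution (1) builds on classical two-view geometry. As a minimal illustration, once the ego-motion between two frames is known, depth can be recovered by linear (DLT) triangulation; the sketch below uses textbook projection matrices, not the thesis's motion constraint model, and all numeric values are illustrative:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4
    projection matrices P1, P2 and pixel observations x1, x2 = (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Example: the camera translates 0.5 m along x between two frames.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([1.0, 0.2, 10.0])  # a point 10 m ahead of the vehicle
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)  # recovers X_true, depth = 10 m
```

With noise-free observations the linear solution is exact; in practice the recovered depth degrades as the baseline shrinks, which is why wide-baseline handling matters in contribution (1).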
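The elevation-based analytic criterion of contribution (3) generalizes a classical viewshed idea: two viewpoints are intervisible if no intermediate elevation sample rises above the straight sight line between them. A minimal 1-D sketch of that baseline test (the thesis's Riemannian/hydrodynamic machinery is not reproduced; the function and its parameters are illustrative):

```python
def intervisible(elev, i, j, h_i=0.0, h_j=0.0):
    """Line-of-sight test on a 1-D elevation profile: samples i and j
    (with observer/target heights h_i, h_j above ground) see each other
    iff every intermediate sample lies strictly below the sight line."""
    if i > j:
        i, j = j, i
    z_i, z_j = elev[i] + h_i, elev[j] + h_j
    for k in range(i + 1, j):
        # Height of the sight line above sample k (linear interpolation).
        t = (k - i) / (j - i)
        line_z = z_i + t * (z_j - z_i)
        if elev[k] >= line_z:
            return False
    return True

profile = [0.0, 1.0, 5.0, 1.0, 0.0]  # a ridge at index 2 blocks the view
```

Raising the observer and target above the ridge restores intervisibility, which mirrors how dynamic viewpoint elevation enters the thesis's criterion.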
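The point cloud statistical filter mentioned in contribution (4) is commonly realized by rejecting points whose mean distance to their k nearest neighbours deviates from the cloud-wide mean by more than a few standard deviations. A brute-force NumPy sketch (parameter names and thresholds are illustrative; production code would use a KD-tree):

```python
import numpy as np

def statistical_outlier_filter(points, k=8, std_ratio=2.0):
    """Keep points whose mean k-NN distance is within
    mean + std_ratio * std of the cloud-wide distribution."""
    # Pairwise distances: O(n^2), fine for a sketch.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)       # exclude self-distance
    knn = np.sort(dists, axis=1)[:, :k]   # k nearest neighbours per point
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]

# Dense 10x10 planar grid plus one far-away stray point.
grid = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
cloud = np.vstack([np.column_stack([grid, np.zeros(len(grid))]),
                   [50.0, 50.0, 50.0]])
filtered = statistical_outlier_filter(cloud, k=8, std_ratio=2.0)
# The stray point is removed; the 100 grid points survive.
```

The same statistic is what makes the filter robust to varying point density: the threshold adapts to the cloud's own neighbour-distance distribution rather than a fixed radius.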
Keywords/Search Tags:Environment perception of intelligent vehicles, Visual field reconstruction, Structure-from-Motion, 3D object detection, Intervisibility analysis