
Research On Key Technologies Of Dynamic Scene Analysis For Autonomous Vehicles

Posted on: 2019-12-07
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z P Xiao
Full Text: PDF
GTID: 1362330623950406
Subject: Control Science and Engineering
Abstract/Summary:
Scenes encountered in autonomous driving generally contain dynamic characteristics that influence the autonomous driving system, so dynamic scene analysis plays an important role in the environment perception of an autonomous vehicle. Compared with scene understanding, scene analysis is more fundamental and focuses on the underlying basic problems. This dissertation adopts two-dimensional and three-dimensional data fusion to study the related key technologies, namely the static and dynamic characteristics of scenes from the spatial and temporal perspectives, focusing on free-space detection from a single frame and dynamic region segmentation from multi-frame fusion. The main work and contributions are as follows:

1. A robust free-space detection algorithm based on Gaussian process regression and a conditional random field is proposed. Free-space detection algorithms based on geometric characteristics are more robust and transferable than learning-based methods, and more flexible than traditional methods based on simple features. A Bayesian framework is used to express multi-source information probabilistically; given the characteristics of free space, Gaussian process regression with an improved non-stationary covariance function is adopted to recursively fit the probabilistic output, and the fitted result is then given a structured final output by a conditional random field model. In tests on the public KITTI road dataset and our own campus dataset, the algorithm improves significantly over the baselines; it ranks third among methods based on geometric features, close to the first- and second-ranked methods, which rely on precise 3D LiDAR points.

2. A stereo-vision-based dynamic and static region segmentation algorithm is proposed. Classic visual odometry and simultaneous localization and mapping techniques mainly target static scenes and suffer from degradation caused by moving objects. This dissertation proposes a streamlined method for detecting moving objects: by establishing a unified conditional random field model, moving objects are detected across images at different resolutions. To account for the error introduced by projection in visual depth estimation, an approximate Mahalanobis distance normalization is proposed, which significantly improves the segmentation. Validation experiments on the public KITTI dataset gave satisfactory results: compared with the baseline methods, the accuracy of background motion estimation is improved, comparable dynamic and static segmentation performance is obtained, and dynamic object detection is greatly improved.

3. A scene flow and dynamic/static region segmentation algorithm based on the fusion of LiDAR points and monocular images is proposed. Owing to the characteristics of LiDAR point clouds, few scene flow estimation methods exist for them; however, considering that LiDAR provides precise distance measurements and is unaffected by lighting, this dissertation proposes fusing LiDAR and monocular images to estimate scene flow together with dynamic and static region segmentation. The algorithm is model-free and requires no prior information about the objects in the scene. To handle the sparseness of LiDAR points, matching is performed on small clusters rather than on common feature points. A conditional random field model with a high-order MDL term is established to jointly optimize the scene flow and motion segmentation problems. Validation was performed on the public KITTI dataset. In particular, when accurate 3D information is provided, the estimated optical flow error is reduced by about 50% and dynamic region segmentation performance improves by about 20% compared with current state-of-the-art scene flow methods based on the piecewise-planar assumption.
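To illustrate the regression step of contribution 1, the following is a minimal sketch of Gaussian process regression fitting a free-space boundary (one boundary row per image column). It uses a standard stationary squared-exponential kernel; the dissertation's improved non-stationary covariance is not reproduced here, and the column/row sample values are hypothetical toy data.

```python
import numpy as np

def gp_regress(x_train, y_train, x_test, length=20.0, sigma_f=1.0, sigma_n=0.1):
    """Plain GP regression with a squared-exponential kernel.

    Note: the dissertation uses an improved *non-stationary* covariance;
    this sketch keeps the textbook stationary kernel for brevity."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / length) ** 2)

    # Center targets so the zero-mean GP prior is reasonable.
    y_mean = y_train.mean()
    K = k(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    alpha = np.linalg.solve(K, y_train - y_mean)
    mean = y_mean + Ks @ alpha
    cov = k(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Toy example: noisy boundary samples (image row of the free-space edge
# at a few image columns), smoothed and interpolated by the GP.
cols = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
rows = np.array([200.0, 195.0, 185.0, 180.0, 182.0, 190.0])
query = np.linspace(0.0, 100.0, 11)
mu, std = gp_regress(cols, rows, query)
```

The posterior variance (`std`) is what makes the probabilistic output useful downstream: in the dissertation's pipeline, a structured model such as a conditional random field can weigh boundary hypotheses by this uncertainty rather than treating the fit as exact.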
Keywords/Search Tags: Dynamic Scene Analysis, Autonomous Driving, Free Space Detection, Dynamic Region Segmentation, Conditional Random Field, LiDAR, Stereo Vision