
Research On Key Technologies Of Visual Perception For Advanced Driving Assistance System

Posted on: 2022-05-09
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Q N Jiang
Full Text: PDF
GTID: 1482306557481094
Subject: Industrial Engineering
Abstract/Summary:
Advanced driving assistance systems (ADAS) can reduce traffic accidents and personal injuries and improve driving comfort, so research on ADAS-related technology is of great significance. Because machine vision provides rich information at low cost, vision-based perception technology is widely used in ADAS. This dissertation takes the key visual perception technologies for external road environment detection and internal driver condition monitoring as its research objects, and proposes solutions based on machine vision that provide a theoretical basis and technical support for the design and development of ADAS.

Vision-based external road environment detection judges from the image whether the road ahead is a drivable area and accurately detects the lane lines on structured roads within that area. On the one hand, road images are easily degraded by adverse visual conditions, which blur or hide the targets in the image, so the image must be enhanced before road detection. On the other hand, the complexity and variety of road environments make it difficult for a detection algorithm to achieve both accuracy and real-time performance. Vision-based internal driver condition monitoring estimates the driver's fatigue level from the driver's features in the image. Because illumination, background, shooting angle and individual driver characteristics vary randomly, and fatigue also accumulates over time, the robustness of existing fatigue detection algorithms is unsatisfactory. To address these problems, this dissertation studies four key visual perception technologies: road image enhancement under adverse visual conditions, drivable road area detection, lane line recognition, and driver fatigue monitoring. The main research content and innovations of this dissertation include the following four points.

(1) A new method for dynamic road image enhancement under adverse visual conditions is proposed. First, road images captured under different adverse visual conditions are classified using the gray-level and clarity features of the image to be processed. Then, according to the classification result, an appropriate enhancement algorithm is selected for each condition, and the parameters of the enhancement algorithm are dynamically adjusted based on the image features (a code sketch appears below). The method therefore guarantees real-time performance and adapts well to image enhancement under different adverse visual conditions.

(2) A real-time drivable-area detection method based on convolutional neural networks (CNN) and a Gibbs energy function is presented. First, to improve the robustness of first-frame detection, an improved random sample consensus (RANSAC) algorithm is proposed: the homography matrices H of different regions in the image are estimated with this algorithm, and H is used as the CNN input for training, so that the drivable road region of the first frame can be detected. Second, to improve detection efficiency on subsequent, similar frames, the color and texture features of the road are extracted from the first-frame result, the corresponding Gaussian mixture model is constructed with a binary splitting algorithm, and the Gibbs energy function is then used to detect the road in the following frames. This method overcomes the poor robustness caused by the uncertainty of road surface and structural features, as well as the poor timeliness caused by complex models with high computational cost.
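A minimal sketch of the condition-classification and parameter-adjustment idea in contribution (1), assuming OpenCV and NumPy. The thresholds, the split into low-light versus low-contrast images, and the specific enhancers (gamma correction and CLAHE) are illustrative stand-ins, not the dissertation's classifier or enhancement algorithms.

```python
import cv2
import numpy as np

def enhance_road_image(bgr, dark_thresh=70.0, blur_thresh=100.0):
    """Classify a road image by gray-level and clarity features, then apply
    an enhancement whose strength is tied to those features (illustrative)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    mean_gray = gray.mean()                          # gray-level feature
    clarity = cv2.Laplacian(gray, cv2.CV_64F).var()  # clarity/sharpness feature

    if mean_gray < dark_thresh:
        # Low-light image: gamma correction, stronger for darker images.
        gamma = np.clip(mean_gray / dark_thresh, 0.3, 1.0)
        lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)],
                       dtype=np.uint8)
        return cv2.LUT(bgr, lut), "low_light"
    if clarity < blur_thresh:
        # Hazy / low-contrast image: CLAHE on the luminance channel,
        # clip limit scaled by how low the clarity score is.
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        clip = 2.0 + 2.0 * (1.0 - clarity / blur_thresh)
        clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
        lab[:, :, 0] = clahe.apply(lab[:, :, 0])
        return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR), "low_contrast"
    return bgr, "normal"                             # good visual conditions
```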
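The color-model step in contribution (2) can be sketched as follows, assuming scikit-learn and NumPy. Here an EM-fitted GaussianMixture stands in for the binary-splitting construction, and a simple per-pixel log-likelihood threshold replaces the Gibbs energy minimization described above; the component count and threshold are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_road_color_model(first_frame_rgb, road_mask, n_components=3):
    """Fit a Gaussian mixture to the colors of the road pixels detected
    in the first frame (road_mask is a boolean H x W array)."""
    road_pixels = first_frame_rgb[road_mask].astype(np.float64)   # (N, 3)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=0)
    gmm.fit(road_pixels)
    return gmm

def detect_road_in_frame(frame_rgb, gmm, ll_thresh=-12.0):
    """Score every pixel of a later frame under the road color model and
    threshold the log-likelihood to obtain a coarse road mask."""
    h, w, _ = frame_rgb.shape
    scores = gmm.score_samples(frame_rgb.reshape(-1, 3).astype(np.float64))
    return scores.reshape(h, w) > ll_thresh   # True where the pixel looks road-like
```

In the dissertation the spatial consistency of the mask is enforced through the Gibbs energy function; the plain threshold here is only a simplified stand-in.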
(3) A new lane detection method combining features and models is provided. First, invariant features such as gray-level and morphological features are used to quickly detect lane line candidate regions within the drivable area; lane line parameters are then obtained with an improved progressive probabilistic Hough transform (PPHT), and the lane line regions are selected from the candidates according to these parameters (see the sketch below). The method addresses both the mismatch between the actual lane structure and the preset model in model-based detection and the instability of actual lane features in feature-based detection. Moreover, it adapts well to structured roads in different environments (adverse visual conditions, tree shade, dirt, etc.) and achieves high detection accuracy.

(4) A real-time driver fatigue detection method based on CNN and Long Short-Term Memory (LSTM) networks is proposed. First, the simple linear iterative clustering (SLIC) algorithm segments the image into uniformly sized superpixels, which serve as the CNN input; the trained CNN then locates the eye and mouth regions and their areas. On this basis, the eye feature parameter Perclos, the mouth feature parameter MClosed and the head orientation feature parameter Phdown are extracted, and these parameters over a continuous time series, together with the steering wheel angle parameter SA, are fed into the LSTM, whose output is the fatigue level, enabling real-time detection of the driver's fatigue state. The method overcomes the influence of illumination, background, viewing angle and individual differences, detects the driver's face, eyes and mouth accurately and quickly, and accounts for the cumulative effect of time on fatigue, so the final detection accuracy is greatly improved.
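For the lane-line step in contribution (3), OpenCV's progressive probabilistic Hough transform (cv2.HoughLinesP) gives the flavour of the parameter extraction; the edge thresholds, Hough parameters and slope filter below are illustrative choices, not the dissertation's tuned values.

```python
import cv2
import numpy as np

def detect_lane_candidates(gray_road, min_abs_slope=0.3):
    """Extract candidate lane segments from a grayscale image of the
    drivable area using Canny edges + probabilistic Hough transform."""
    edges = cv2.Canny(gray_road, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=20)
    lanes = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            if x2 == x1:                      # vertical segment: keep it
                lanes.append((x1, y1, x2, y2))
                continue
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) >= min_abs_slope:   # drop near-horizontal clutter
                lanes.append((x1, y1, x2, y2))
    return lanes
```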
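A minimal sketch of the temporal fatigue model in contribution (4), assuming PyTorch: per-frame eye-closure flags are aggregated into a Perclos value over a sliding window, and a small LSTM maps a sequence of four features (Perclos, MClosed, Phdown, SA) to a fatigue class. The window length, hidden size and number of fatigue levels are assumptions, not values from the dissertation.

```python
import torch
import torch.nn as nn

def perclos(eye_closed_flags):
    """Fraction of frames in the window where the eyes are closed (Perclos)."""
    return sum(eye_closed_flags) / max(len(eye_closed_flags), 1)

class FatigueLSTM(nn.Module):
    """LSTM over sequences of [Perclos, MClosed, Phdown, SA] feature vectors."""
    def __init__(self, n_features=4, hidden_size=32, n_levels=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_levels)

    def forward(self, x):                  # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # logits for the fatigue level

# Usage: a batch of 8 windows, each 30 time steps of 4 features.
model = FatigueLSTM()
logits = model(torch.randn(8, 30, 4))
fatigue_level = logits.argmax(dim=1)
```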
Keywords/Search Tags:ADAS, Visual Perception, Image Enhancement, Driving Area Detection, Lane Line Detection, Driver Fatigue Detection