
Obstacles Detection And Road Identification In Autonomous Driving Scenario

Posted on: 2020-11-02    Degree: Master    Type: Thesis
Country: China    Candidate: Razikhova Meiramgul    Full Text: PDF
GTID: 2392330590474319    Subject: Information security
Abstract/Summary:
High-speed, high-precision visual perception will determine the future development of autonomous vehicles, and deep learning has unique advantages for visual detection and recognition in driverless scenes. Deep learning is now widely applied to the detection, recognition, and semantic segmentation of obstacles and lanes in the driving environment, to pedestrian and vehicle intention prediction, traffic monitoring, driver-state monitoring, multi-sensor information fusion, and related fields. Fast, high-precision obstacle detection and recognition, together with road semantic segmentation, are prerequisites for safe autonomous driving.

This thesis first describes the construction of a deep learning framework on the Windows and Ubuntu operating systems and analyzes typical datasets, network structures, and numerical optimization methods. On this basis, it applies the end-to-end YOLOv3 algorithm under the TensorFlow framework to detect and recognize obstacles for an autonomous vehicle. Unlike region-proposal methods, YOLOv3 reduces detection and recognition to a regression problem: the bounding boxes and categories of the objects in an image are obtained by evaluating the image once with a single network, so detection and recognition are fast. Compared with region-proposal methods, however, it also has drawbacks: localization accuracy is lower, and detection degrades when targets are small or objects are close together. Experiments show that the algorithm identifies obstacles well in a variety of autonomous driving environments; its detection and recognition efficiency depends on the hardware environment of the system.

Further, based on the KITTI dataset, this thesis trains the end-to-end fully convolutional network VGG16-FCN8 for pixel-level semantic segmentation of the road. The algorithm replaces the fully connected layers of the VGG16 network with the fully convolutional head of FCN8, so that the VGG16 predictions can be restored to image resolution; as a result, it imposes no fixed size requirement on the input image. However, because the lens distortion parameters of the on-board dash camera are not calibrated in this work, the segmentation results are noisy. Experiments show that the method distinguishes road from non-road pixels well, and that the Adam numerical optimization method makes the loss function converge quickly and smoothly to its minimum, which further indicates that the chosen model does not overfit.

The algorithms presented in this thesis can effectively support visual perception and recognition in driverless scenes. An autonomous sensing system, however, includes not only vision but also lidar, millimeter-wave radar, GPS, ultrasonic radar, inertial navigation, and so on. To provide a safe sensing system for driverless vehicles, further research is needed on intelligent algorithms for fusing the information from these sensors.
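The single-network regression formulation can be illustrated with a minimal NumPy sketch of how a YOLO-style grid output is decoded into bounding boxes. The tensor layout, anchor values, and function names here are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_grid(preds, anchors, img_size=416):
    """Decode a YOLO-style output tensor into absolute boxes.

    preds:   (S, S, B, 5 + C) raw network output per grid cell:
             tx, ty, tw, th, objectness, then C class scores.
    anchors: (B, 2) anchor box sizes in pixels (hypothetical values).
    Returns (S*S*B, 4) boxes as (cx, cy, w, h) and their confidences.
    """
    S, _, B, _ = preds.shape
    stride = img_size / S
    boxes, confs = [], []
    for row in range(S):
        for col in range(S):
            for b in range(B):
                tx, ty, tw, th, obj = preds[row, col, b, :5]
                cls = preds[row, col, b, 5:]
                # Cell-relative offsets -> absolute centre coordinates.
                cx = (col + sigmoid(tx)) * stride
                cy = (row + sigmoid(ty)) * stride
                # Anchor-relative log-space sizes -> absolute width/height.
                w = anchors[b, 0] * np.exp(tw)
                h = anchors[b, 1] * np.exp(th)
                # Confidence = objectness x best class probability.
                conf = sigmoid(obj) * sigmoid(cls).max()
                boxes.append((cx, cy, w, h))
                confs.append(conf)
    return np.array(boxes), np.array(confs)
```

Because the whole image is evaluated once and every cell is decoded in a single pass, there is no separate proposal stage; this is the source of the speed advantage described above.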
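The idea of restoring coarse convolutional predictions to image resolution can be sketched as follows. FCN-8s actually uses learned transposed convolutions with skip connections; this fixed nearest-neighbour upsampling is only a simplified stand-in, and the stride-8 layout and function names are assumptions for illustration.

```python
import numpy as np

def upsample_logits(logits, factor):
    """Nearest-neighbour upsampling of coarse class logits to pixel scale.

    A stand-in for the learned transposed convolutions of FCN-8s: each
    coarse prediction is repeated over a factor x factor pixel block.
    """
    return np.repeat(np.repeat(logits, factor, axis=0), factor, axis=1)

def segment_road(coarse_logits, factor=8):
    """coarse_logits: (h, w, 2) scores for (non-road, road) at stride `factor`.

    Returns a per-pixel boolean road mask at full resolution; because the
    head is fully convolutional, h and w can be anything, which is why no
    fixed input size is required.
    """
    dense = upsample_logits(coarse_logits, factor)
    return dense.argmax(axis=-1) == 1
```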
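The Adam method mentioned above combines exponentially decayed first- and second-moment estimates of the gradient with bias correction. This sketch shows one update step; the hyperparameter values are Adam's common defaults, not necessarily those used in the thesis.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameter theta at step t (t starts at 1)."""
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Run on a toy quadratic loss, the iterate moves smoothly toward the minimum, illustrating the fast, stable convergence of the loss reported in the experiments.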
Keywords/Search Tags: autonomous driving, machine vision, deep neural network, YOLOv3, VGG16-FCN8