
3D Reconstruction and Pilot Detection in Human-Robot Co-Driving Technology for Intelligent Flight

Posted on: 2021-05-02
Degree: Master
Type: Thesis
Country: China
Candidate: Y He
GTID: 2392330611998212
Subject: Control engineering
Abstract/Summary:
Human-robot co-driving technology for intelligent flight replaces the copilot seat in the cockpit with a robot that assists the pilot during operation. This thesis addresses the vision-related problems in this technology, including using a depth camera to reconstruct a single RGBD image, collecting multiple images for an overall three-dimensional reconstruction of the cockpit, and using a camera together with deep learning to detect the pilot's position in the cockpit. The main work of this thesis is summarized as follows:

(1) Three-dimensional reconstruction of a single RGBD image. A depth camera simultaneously captures an RGB color image and a depth image encoding each pixel's distance to the camera. To eliminate the viewpoint difference between the RGB camera and the depth camera, the depth image is transformed into the coordinate system of the RGB image; the resulting aligned RGBD image pair is then back-projected into a point cloud using the camera intrinsics (a code sketch follows the abstract). After the point cloud is obtained, post-processing operations including outlier removal and down-sampling are carried out, and the processed point cloud is used for object detection in the local field of view and for plane normal vector extraction (also sketched below).

(2) Global three-dimensional reconstruction. A depth camera first scans the entire cockpit, producing thousands of consecutive RGBD images. To improve the reconstruction quality, the RGB images are filtered and the high-noise regions of the depth images are optimized; the processed images are then reconstructed by visual SLAM, comprising visual odometry, back-end optimization, and loop closure detection, where the back end uses pose graph optimization (see the pose-graph sketch below). To further filter noise out of the resulting point cloud, the points are organized into a KD-tree and nearest-neighbor search is used to find the cluster where the cockpit is located (see the clustering sketch below), finally yielding a complete, noise-free point cloud.

(3) Pilot detection. To ensure the pilot's safety after entering the robot's working area, a camera fixed behind the pilot detects the pilot's position in real time. The detection method applies a deep-learning-based FCN network to semantically segment the pilot, i.e., every pixel in the image is classified to obtain more precise position information (see the segmentation sketch below). The segmentation accuracy was 93.53% after training on a large public dataset and 99.20% after training on a self-annotated dataset, so the model trained on the latter was adopted; the average processing time per image is 0.16 s.

Through the above work, the vision-related problems encountered in human-robot co-driving technology for intelligent flight are solved, making the cooperation between the pilot and the robot more intelligent and safer.
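To make stage (1) concrete, here is a minimal sketch of back-projecting an aligned color/depth pair into a point cloud, assuming the Open3D library; the file names, depth scale, and intrinsics are illustrative stand-ins rather than the thesis's actual setup, and the depth-to-RGB registration described in the abstract is assumed to have already been applied.

```python
import open3d as o3d

# Load an aligned color/depth pair (depth already registered to the RGB view).
color = o3d.io.read_image("color.png")
depth = o3d.io.read_image("depth.png")

# Fuse them into one RGBD image; depth_scale and depth_trunc are sensor-specific.
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=3.0,
    convert_rgb_to_intensity=False)

# Back-project every pixel into 3D using the camera intrinsics.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
```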
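The post-processing in stage (1) — outlier removal, down-sampling, and plane normal extraction — could look like the following Open3D sketch; the thresholds are illustrative assumptions, and `pcd` is the point cloud from the previous sketch.

```python
# Statistical outlier removal: discard points whose mean neighbor distance
# deviates strongly from the global average.
pcd_clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Voxel-grid down-sampling to thin out the cloud.
pcd_down = pcd_clean.voxel_down_sample(voxel_size=0.01)

# RANSAC plane fit: plane_model = (a, b, c, d) with ax + by + cz + d = 0,
# so (a, b, c) is the extracted plane normal vector.
plane_model, inlier_idx = pcd_down.segment_plane(
    distance_threshold=0.01, ransac_n=3, num_iterations=1000)
normal = plane_model[:3]
```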
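Stage (2)'s back end uses pose graph optimization, but the thesis does not publish its implementation. Below is a hedged two-frame sketch using Open3D's pose graph API, with the odometry transform and optimization parameters invented purely for illustration.

```python
import numpy as np
import open3d as o3d

reg = o3d.pipelines.registration

# Nodes hold absolute camera poses; edges hold relative constraints from
# visual odometry (consecutive frames) or loop closures (distant frames).
pose_graph = reg.PoseGraph()
pose_graph.nodes.append(reg.PoseGraphNode(np.eye(4)))

# Illustrative odometry estimate: frame 1 sits 10 cm along x from frame 0.
odometry = np.eye(4)
odometry[0, 3] = 0.1
pose_graph.nodes.append(reg.PoseGraphNode(np.linalg.inv(odometry)))
pose_graph.edges.append(reg.PoseGraphEdge(
    0, 1, odometry, np.identity(6), uncertain=False))

# Nonlinear least-squares refinement of all poses at once.
reg.global_optimization(
    pose_graph,
    reg.GlobalOptimizationLevenbergMarquardt(),
    reg.GlobalOptimizationConvergenceCriteria(),
    reg.GlobalOptimizationOption(
        max_correspondence_distance=0.05,
        edge_prune_threshold=0.25,
        reference_node=0))
```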
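The noise filtering in stage (2) — building a KD-tree over the point cloud and using nearest-neighbor search to find the cluster containing the cockpit — can be sketched as a simple Euclidean region-growing routine, again assuming Open3D; the seed index and search radius are placeholders.

```python
import open3d as o3d

def grow_cluster(pcd, seed_index, radius=0.05):
    """Grow the cluster around a seed point by repeated radius searches
    in a KD-tree built over the point cloud."""
    tree = o3d.geometry.KDTreeFlann(pcd)
    visited = {seed_index}
    frontier = [seed_index]
    while frontier:
        idx = frontier.pop()
        # Find all points within `radius` of the current point.
        _, neighbors, _ = tree.search_radius_vector_3d(pcd.points[idx], radius)
        for n in neighbors:
            if n not in visited:
                visited.add(n)
                frontier.append(n)
    return pcd.select_by_index(sorted(visited))

# Keep only the connected cluster containing a point known to lie on the cockpit.
cockpit_cloud = grow_cluster(pcd_down, seed_index=0)
```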
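For stage (3), the thesis trains an FCN for pilot segmentation; its trained weights and dataset are not available, so the sketch below substitutes torchvision's pretrained FCN-ResNet50, with the image path and the Pascal VOC "person" class (index 15) as assumptions standing in for the thesis's pilot class.

```python
import torch
from torchvision import transforms
from torchvision.models.segmentation import fcn_resnet50
from PIL import Image

# Pretrained FCN as a stand-in for the thesis's fine-tuned network.
model = fcn_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("pilot_frame.png").convert("RGB")   # hypothetical frame
batch = preprocess(img).unsqueeze(0)                 # shape (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]                     # (1, num_classes, H, W)
labels = logits.argmax(dim=1)[0]                     # per-pixel class index
pilot_mask = labels == 15                            # 15 = "person" in Pascal VOC
```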
Keywords/Search Tags: three-dimensional reconstruction, point cloud, deep learning, semantic segmentation