At present, in the military field, commanders usually obtain battlefield maps through manual surveying and mapping before battles or exercises. However, this method involves a heavy workload and offers poor real-time performance. In addition, a battlefield may be located in a concealed area, and the outbreak of hostilities is difficult to predict, so mapping the terrain accurately in advance is often infeasible. Moreover, the battlefield environment is complex and changeable, and a topographic map drawn in advance may not reflect the actual situation on the ground.

In this article, to sense the battlefield environment in real time, we design a real-time reconstruction and environment perception system based on multi-vehicle collaboration. Our system builds a colored point cloud map around the vehicles and detects 2D and 3D objects in real time. At present, most reconstruction systems struggle to build dense point clouds both accurately and in real time. Some lidar-based SLAM algorithms can build point cloud maps in real time, but most of them have not been extended to multiple vehicles, and the maps they produce are not colorized. In this article, we design a real-time, multi-vehicle collaborative reconstruction algorithm that builds a dense colored point cloud map. We also design a perception system that runs on a computer in real time.

In building the reconstruction system, we solved a series of problems. First, the acceleration and angular velocity measured by the IMU contain large drift noise and Gaussian white noise, which makes it difficult to use the raw IMU data to remove motion distortion from the point cloud. We therefore use an Extended Kalman Filter to fuse the IMU data with the lidar pose, update the incoming IMU data in real time, and use the corrected IMU data to remove point cloud distortion. Second, the front-end odometry of the laser SLAM inevitably drifts, and the world
coordinate systems of multiple vehicles do not share a common origin. Therefore, to reduce the accumulated drift and allow the vehicles to localize one another, we design a multi-vehicle collaborative back-end optimization algorithm that runs on a server. Third, colorizing a point cloud requires the relative pose between the lidar and the camera, so we propose a plane-based extrinsic calibration method. Fourth, assigning colors to the point cloud accurately also requires the time offset between the lidar and the camera, and we propose an algorithm that estimates this offset in real time. Fifth, because of the parallax between the lidar and the camera, some points in the point cloud are not visible to the camera, so we implement a method that identifies the visible points in real time.

In constructing the perception system, we complete the following tasks: we improve the YOLOv3 object detection algorithm so that it detects objects in fisheye images faster; we improve the Fast-SCNN semantic segmentation algorithm so that it segments fisheye images more accurately; and we improve the YOLO3D 3D object detection algorithm so that it detects objects in the point cloud faster.
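The IMU correction step described above can be illustrated with a deliberately simplified sketch: a one-axis Kalman filter that estimates the gyro yaw-rate bias from lidar-odometry yaw rates, followed by a constant-rate deskew of one sweep. The full system fuses complete 6-DoF IMU and lidar states; the class names and noise parameters below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class GyroBiasKF:
    """Minimal 1-axis Kalman filter: estimates the gyro bias from the
    difference between the gyro yaw rate and the lidar-odometry yaw rate."""
    def __init__(self, q=1e-6, r=1e-3):
        self.bias = 0.0   # state: gyro bias (rad/s)
        self.P = 1.0      # state covariance
        self.q, self.r = q, r  # process / measurement noise (assumed values)

    def update(self, gyro_yaw_rate, lidar_yaw_rate):
        self.P += self.q                       # predict: bias is a slow random walk
        z = gyro_yaw_rate - lidar_yaw_rate     # observed bias
        K = self.P / (self.P + self.r)         # Kalman gain
        self.bias += K * (z - self.bias)       # correct
        self.P *= (1.0 - K)
        return self.bias

def deskew_scan(points, t_rel, yaw_rate):
    """Rotate each point back by the yaw accumulated since scan start,
    assuming a constant (bias-corrected) yaw rate over one sweep."""
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, t_rel)):
        ang = -yaw_rate * t  # undo the rotation accumulated by time t
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        out[i] = R @ p
    return out
```

With a constant true bias, the filter converges to it within a few dozen updates, after which the corrected yaw rate can be fed to the deskew step.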
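The role of the back-end optimization can be shown with a minimal, translation-only pose-graph sketch: odometry edges accumulate drift, and adding a loop-closure (or inter-vehicle) constraint lets least squares redistribute the error across poses. The actual back-end optimizes full 6-DoF poses across multiple vehicles on the server, so this is only an assumed, simplified illustration.

```python
import numpy as np

def optimize_poses(n, constraints, anchor=0):
    """Translation-only 2D pose-graph optimization via linear least squares.
    constraints: list of (i, j, dx, dy) meaning pose_j - pose_i ~= (dx, dy).
    Pose `anchor` is pinned at the origin to remove the gauge freedom."""
    m = len(constraints)
    A = np.zeros((2 * m + 2, 2 * n))
    b = np.zeros(2 * m + 2)
    for k, (i, j, dx, dy) in enumerate(constraints):
        for axis, d in enumerate((dx, dy)):
            row = 2 * k + axis
            A[row, 2 * j + axis] = 1.0   # pose_j contributes positively
            A[row, 2 * i + axis] = -1.0  # pose_i contributes negatively
            b[row] = d
    # Anchor the first pose with a large weight (soft gauge constraint).
    A[2 * m, 2 * anchor] = 1e6
    A[2 * m + 1, 2 * anchor + 1] = 1e6
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n, 2)
```

For example, two odometry edges of 1 m each plus a loop closure of 1.8 m spread the 0.2 m discrepancy evenly over the chain instead of leaving it all at the last pose.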
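The plane-based calibration idea can be sketched as a linear problem. If both sensors observe the same planes (unit normal n, offset d, with n . p = d), then the lidar-to-camera transform p_c = R p_l + t implies n_c = R n_l and d_c = d_l + n_c . t, so R follows from Kabsch alignment of the corresponding normals and t from least squares on the offsets. The helper below is an assumed sketch of this idea, not the paper's exact method.

```python
import numpy as np

def calibrate_from_planes(normals_l, d_l, normals_c, d_c):
    """Estimate lidar->camera extrinsics (R, t) from >= 3 non-parallel
    planes observed by both sensors. Plane model: n . p = d, unit n."""
    Nl = np.asarray(normals_l, dtype=float)
    Nc = np.asarray(normals_c, dtype=float)
    # Rotation: Kabsch alignment so that n_c = R @ n_l for each pair.
    H = Nc.T @ Nl
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    # Translation: linear least squares on n_c . t = d_c - d_l.
    t, *_ = np.linalg.lstsq(Nc, np.asarray(d_c) - np.asarray(d_l), rcond=None)
    return R, t
```

At least three planes with linearly independent normals are needed; in practice more observations are stacked and solved in the same least-squares form.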
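One common way to estimate a lidar-camera time offset is to cross-correlate a motion signal that both sensors observe, such as the yaw rate, after resampling onto a common grid. The sketch below assumes this cross-correlation approach; the paper's real-time algorithm may differ, and the grid spacing and search window are illustrative parameters.

```python
import numpy as np

def estimate_time_offset(ts, rate_a, rate_b, max_shift=0.1, dt=0.005):
    """Estimate the time offset between two angular-rate signals sampled
    at times `ts`. Returns the shift (seconds) that best aligns B with A,
    i.e. rate_b(t) ~= rate_a(t + offset)."""
    grid = np.arange(ts[0], ts[-1], dt)       # common uniform time grid
    a = np.interp(grid, ts, rate_a); a = a - a.mean()
    b = np.interp(grid, ts, rate_b); b = b - b.mean()
    max_k = int(round(max_shift / dt))
    best_k, best_score = 0, -np.inf
    for k in range(-max_k, max_k + 1):        # brute-force lag search
        if k >= 0:
            score = np.dot(a[k:], b[:len(b) - k])
        else:
            score = np.dot(a[:k], b[-k:])
        score /= len(a) - abs(k)              # normalize by overlap length
        if score > best_score:
            best_score, best_k = score, k
    return best_k * dt
```

The resolution is one grid step; a finer grid or sub-sample interpolation around the peak tightens the estimate.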
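The visible-point problem caused by lidar-camera parallax can be sketched with a simple z-buffer: project every point into the image, discard points behind the camera or outside the frame, and at each pixel keep only the nearest point. This pinhole-projection sketch is an assumption for illustration; the paper's real-time method is not specified here.

```python
import numpy as np

def visible_points(points_cam, K, width, height):
    """Return a boolean mask of points visible to the camera: in front of
    it, inside the image, and nearest in depth at their pixel."""
    pts = np.asarray(points_cam, dtype=float)
    visible = np.zeros(len(pts), dtype=bool)
    zbuf = np.full((height, width), np.inf)          # nearest depth per pixel
    owner = -np.ones((height, width), dtype=int)     # index of nearest point
    for i, (x, y, z) in enumerate(pts):
        if z <= 0.1:
            continue  # behind or too close to the camera
        u = int(K[0, 0] * x / z + K[0, 2])           # pinhole projection
        v = int(K[1, 1] * y / z + K[1, 2])
        if not (0 <= u < width and 0 <= v < height):
            continue  # projects outside the image
        if z < zbuf[v, u]:
            if owner[v, u] >= 0:
                visible[owner[v, u]] = False         # demote the occluded point
            zbuf[v, u] = z
            owner[v, u] = i
            visible[i] = True
    return visible
```

In a real system the z-buffer is usually run at a coarser resolution than the image so that sparse lidar points occlude their neighbors, but the principle is the same.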