Robust localization in complex environments is a challenging problem. In autonomous driving, localization is the task of providing a 6-DoF pose from various sensors. LiDAR is considered the most reliable sensor for mapping and localization; however, its high cost limits its widespread use. GPS (Global Positioning System) performs well where the signal is strong, but it may fail to provide accurate poses in urban and indoor environments. The camera is considered more suitable than LiDAR because of its low cost. Still, visual localization accumulates drift that cannot be eliminated, and it is vulnerable to environmental conditions, especially illumination. In view of these problems, this thesis mainly completes the following work:

1. Tracking the camera pose in low-light conditions is a challenge for visual localization. To address this issue, we introduce Sem-GAN to translate nighttime or dark-scene images into a more useful daytime representation. To maintain the consistency of segmentation results, we add an additional segmentation loss to the model.

2. To eliminate the accumulated error during visual localization, this thesis performs camera localization online against a LiDAR map built offline. We extract features from the LiDAR map and match them to the point cloud generated by the camera to eliminate the accumulated drift. Taking advantage of visual information, we use semantic cues to improve the accuracy of camera pose tracking.

3. This thesis evaluates the approach on synthetic and real datasets under complex environments and compares our method with state-of-the-art V-SLAM/VO systems. The experimental results demonstrate that the proposed method achieves more robust performance under complex conditions.
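The segmentation-consistency idea in contribution 1 can be illustrated with a minimal sketch. This is not the thesis's actual Sem-GAN training code; it is a toy illustration, under the assumption that a fixed segmentation network labels both the original night image and its day-translated counterpart, and that pixels whose predicted class changes after translation are penalized alongside the usual adversarial term. The function names and the weight `lam` are hypothetical.

```python
import numpy as np

def segmentation_consistency_loss(seg_night, seg_translated):
    """Toy consistency term: the fraction of pixels whose predicted
    semantic class differs between the night image and its
    day-translated version (both (H, W) integer label maps)."""
    return float(np.mean(seg_night != seg_translated))

def total_generator_loss(adv_loss, seg_night, seg_translated, lam=10.0):
    """Combined generator objective: adversarial loss plus a weighted
    segmentation-consistency penalty (weight lam is illustrative)."""
    return adv_loss + lam * segmentation_consistency_loss(seg_night, seg_translated)

# Example: two 2x2 label maps that disagree at one of four pixels.
night = np.array([[0, 1], [1, 2]])
day = np.array([[0, 1], [1, 1]])
loss = total_generator_loss(0.5, night, day, lam=10.0)  # 0.5 + 10 * 0.25
```

In a real pipeline the hard label comparison would be replaced by a differentiable cross-entropy between the segmentation network's soft outputs, so the penalty can be backpropagated into the generator.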
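The drift-elimination step in contribution 2 rests on rigidly aligning features from the camera's point cloud to their counterparts in the offline LiDAR map. As a hedged sketch (not the thesis's implementation), assuming the feature correspondences have already been established, the alignment can be computed in closed form with the Kabsch/SVD method; applying the recovered transform to the camera trajectory removes the accumulated drift.

```python
import numpy as np

def align_to_lidar_map(cam_pts, map_pts):
    """Estimate the rigid transform (R, t) mapping matched camera
    features cam_pts (N, 3) onto LiDAR-map features map_pts (N, 3),
    i.e. map_pts ~ R @ cam_pts + t, via the Kabsch/SVD algorithm."""
    cc, cm = cam_pts.mean(axis=0), map_pts.mean(axis=0)
    # Cross-covariance of the centered correspondences.
    H = (cam_pts - cc).T @ (map_pts - cm)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cc
    return R, t
```

In practice the correspondences are noisy and partially wrong, so a robust variant (RANSAC over the matches, or iterating the alignment as in ICP) would wrap this closed-form core.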