Against the background of advances in software and hardware technology, and with the shift of artificial intelligence carriers from cloud intelligence to terminal intelligence, terminal-aware intelligent applications such as autonomous robots and outdoor mobile mapping are rapidly gaining wide popularity. Simultaneous localization and mapping (SLAM) technology based on the fusion of external sensor perception can perform accurate pose estimation and map reconstruction, and thus provides the underlying basis for intelligent decision-making on the terminal; it is a technical prerequisite of terminal perceptual intelligence. Existing multi-sensor fusion SLAM solutions suffer from problems such as insufficient perceptual ability of the selected sensor combination, performance degradation in challenging scenarios, and incomplete failure-handling strategies. In this paper, we design and implement a complete multi-sensor perception and positioning system using an infrared monocular camera, a multi-line lidar, and an inertial measurement unit as the sensor combination.

The system adopts a software architecture that separates the front end and the back end, and comprises four modules: the perception-based positioning terminal software, the deployment-and-test terminal software, the global management terminal software, and the multi-sensor fusion SLAM algorithm. The terminal software provides perception-positioning users with map reconstruction and real-time localization services, offers professional functions such as a calibration check service for deployment testers, and provides map management and related data analysis services for system administrators. The core of the back end is a tightly-coupled, factor-graph-based multi-sensor fusion SLAM algorithm proposed and implemented in this paper. The visual front end adopts the direct method, which can process image frames with low grayscale gradients and is robust to a wide range of lighting conditions; point and line features are extracted once a key frame is selected according to a motion-time threshold strategy. The back end then constructs factor-graph-based optimization equations to solve for the system states, from which the map reconstruction and pose estimation services are obtained.

Guided by software engineering project management standards, the paper first discusses and clarifies the system's user role composition and its application-level and algorithm-level functional and quality requirements. Following this requirements outline, the paper then elaborates the software architecture, the physical architecture, and the overall technical scheme of the system in detail, and further expounds the principle of the proposed factor-graph-based multi-sensor fusion SLAM algorithm. The timing logic of each module and the execution flow of the algorithm are subsequently designed in detail. Finally, after the coding of the system is completed, several sequences from the public M2DGR dataset released by the Vision and Intelligent System Group of Shanghai Jiao Tong University are used to thoroughly test the performance of the system, and the corresponding defects are corrected.

This paper thus completes the construction of the multi-sensor fusion SLAM system based on factor graph optimization. While effectively balancing the real-time and accuracy requirements of the positioning service, the multi-sensor fusion SLAM algorithm proposed in this paper further improves robustness in application scenarios such as insufficient illumination and sparse texture.
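To make the factor-graph formulation summarized above concrete, a generic tightly-coupled objective of the kind the back end solves can be sketched as follows; the residual terms and symbols below are illustrative assumptions in standard notation, not the exact formulation used in this work:

\[
\hat{\mathcal{X}} = \arg\min_{\mathcal{X}}
\Big\{ \big\| r_{p} - H_{p}\,\mathcal{X} \big\|^{2}
+ \sum_{k} \big\| r_{\mathcal{B}}\!\big(\hat{z}_{b_{k} b_{k+1}}, \mathcal{X}\big) \big\|^{2}_{P_{b_{k} b_{k+1}}}
+ \sum_{(l,j)} \big\| r_{\mathcal{C}}\!\big(\hat{z}^{\,c_{j}}_{l}, \mathcal{X}\big) \big\|^{2}_{P^{\,c_{j}}_{l}}
+ \sum_{m} \big\| r_{\mathcal{L}}\!\big(\hat{z}_{m}, \mathcal{X}\big) \big\|^{2}_{P_{m}} \Big\},
\]

where \(\mathcal{X}\) stacks the keyframe states (pose, velocity, IMU biases) and landmark parameters, \(r_{p}\) is a prior or marginalization factor, \(r_{\mathcal{B}}\) denotes the IMU preintegration residuals, \(r_{\mathcal{C}}\) the visual point and line residuals, and \(r_{\mathcal{L}}\) the lidar residuals, each weighted by its covariance \(P\).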
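As a concrete illustration of the motion-time threshold keyframe selection strategy mentioned above, the following minimal C++ sketch shows one plausible realization; the function name, threshold values, and pose representation are hypothetical and chosen only for illustration, not taken from the thesis implementation:

```cpp
#include <cmath>

// Simplified pose: position plus yaw only; a full implementation would use SE(3).
struct Pose { double x, y, z, yaw; };

// Illustrative thresholds (assumptions, not values from the thesis).
constexpr double kMinTranslation = 0.3;  // metres
constexpr double kMinRotation    = 0.2;  // radians
constexpr double kMaxInterval    = 0.5;  // seconds

// Decide whether the current frame should be promoted to a keyframe:
// enough translation, enough rotation, or too much elapsed time since
// the last keyframe. Angle wrapping is omitted for brevity.
bool IsKeyframe(const Pose& last_kf, const Pose& cur,
                double last_kf_time, double cur_time) {
    const double dt = cur_time - last_kf_time;
    const double dx = cur.x - last_kf.x;
    const double dy = cur.y - last_kf.y;
    const double dz = cur.z - last_kf.z;
    const double translation = std::sqrt(dx * dx + dy * dy + dz * dz);
    const double rotation    = std::fabs(cur.yaw - last_kf.yaw);
    return translation > kMinTranslation ||
           rotation    > kMinRotation    ||
           dt          > kMaxInterval;
}
```

Once such a check fires, the new keyframe would trigger point and line feature extraction and the insertion of new factors into the graph, as described in the abstract.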