
Research On Key Technologies Of 3D Reality Virtual Mapping For Augmented Reality

Posted on: 2023-04-09    Degree: Doctor    Type: Dissertation
Country: China    Candidate: D Fu    Full Text: PDF
GTID: 1528307022455114    Subject: Cartography and Geographic Information System
Abstract/Summary:
Augmented reality technology enriches users' perception by overlaying virtual information on the real scene. Digital twin technology is a digital reproduction of the real world; the blending of three-dimensional physical space with the digital world is realized through a mapping relationship between spatial positions. Integrating augmented reality with digital twins makes it possible to overlay virtual three-dimensional twin models on real physical objects and improve the visualization of data. The key technology involved is reality-virtual mapping. 3D scene mapping technology calculates the position and pose of the camera; based on the real-space coordinates and attitude of entities in the digital world, it computes the mapping relationship between the 3D digital world and the real physical world, achieving fusion of virtual and real scenes from the camera's perspective. The accuracy of camera pose estimation is therefore critical to the precision of reality-virtual mapping and to the quality of the augmented reality effect.

Although a traditional visual-inertial navigation system can obtain the position and pose of the camera, it is strongly affected by lighting conditions and moving objects: image quality declines significantly in weakly illuminated environments, and the pixels corresponding to moving objects shift between frames, both of which degrade the accuracy and robustness of the system. To improve the precision of reality-virtual mapping and the fusion of virtual and real scenes in augmented reality systems, this dissertation enhances the robustness and accuracy of camera pose estimation in visual-inertial navigation from three directions: image enhancement, eliminating the influence of moving objects, and refining the positioning strategy. The main research results are as follows:

(1) A low-light image enhancement algorithm for sequential images is proposed. To improve the precision of camera pose estimation by the visual-inertial navigation system in low-illumination environments, a low-light enhancement algorithm suited to sequential images is put forward. First, existing image enhancement algorithms are compared experimentally in terms of accuracy and efficiency. The algorithm is then adapted to the visual-inertial navigation setting: a three-dimensional convolutional neural network preserves the temporal correlation among image frames, while the spatial correlation model is improved to increase the stability of image feature points. Balancing accuracy against efficiency, the improved algorithm not only effectively raises image quality but is also better suited to sequential data while retaining real-time operation. On a public dataset, the positioning accuracy of the proposed method is 19.83% higher than that of a positioning and navigation system using the unimproved image enhancement algorithm.

(2) An algorithm fusing inertial measurement unit (IMU) data for dynamic feature point elimination is proposed. To improve the positioning accuracy and robustness of the visual-inertial navigation system in dynamic environments, an algorithm that uses IMU data to eliminate dynamic feature points is studied. First, the cumulative error of the IMU data is tested. The verified IMU data are then used to compute the camera motion between two adjacent image frames. Finally, the distance between each feature point match and the motion model is calculated; matches with large distances are mismatches or dynamic feature points, and are eliminated. Because the IMU measures the motion of the device itself directly and is unaffected by the environment, the algorithm can be applied to scenes containing substantial motion. It eliminates dynamic feature points in the image effectively: compared with VINS-mono, the positioning accuracy of this algorithm on a public dataset is improved by 4.00%.

(3) A dynamic feature point elimination algorithm applying multi-constraint fusion is proposed. Building on the IMU-fusing algorithm above, further research is carried out. The algorithm combines several constraints: IMU data eliminate feature points that drift along the epipolar direction, while spatial consistency and the time differences among multiple images eliminate further dynamic feature points. Fusing multiple constraints avoids the failure modes of any single constraint, eliminating the feature points of moving objects and improving the accuracy of feature point matching. Integrating this algorithm into the visual-inertial navigation system yields VINS-dimc. Experiments on public datasets and self-collected data show that the algorithm accurately eliminates dynamic feature points in a variety of environments while retaining static feature points. On the public dataset, the positioning accuracy of this algorithm is improved by 4.45% compared with VINS-mono.

(4) A cloud-device collaborative positioning method is developed. The device-side positioning, running on the intelligent device, uses the improved visual-inertial navigation system to estimate the position and attitude of the camera. The cloud-side positioning transmits images to a cloud server, where feature points are extracted and matched against existing three-dimensional twin point cloud data to compute the camera's position and attitude. The device-side estimate is then corrected with the cloud result to reduce accumulated error. Compared with the traditional visual-inertial navigation approach, the improved system is more accurate and stable. This positioning method is applied to an augmented reality system: after assigning real three-dimensional coordinates to the three-dimensional twin scenes, the reality-virtual mapping for the real scene uses the stable, accurate camera pose to map the twin scenes onto the real scene, realizing virtual-reality fusion from the camera's perspective.
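The rejection step described in contribution (2) can be sketched as follows. This is an illustrative reconstruction, not the dissertation's actual implementation: it assumes the "distance between a feature point match and the motion model" is the symmetric epipolar distance under an IMU-predicted relative pose (R, t), and all function names and the threshold value are hypothetical.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix, so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_distances(R, t, pts1, pts2):
    """Symmetric epipolar distance of matched points in normalized
    image coordinates, given the camera motion X2 = R @ X1 + t
    (e.g. predicted by IMU pre-integration between two frames)."""
    E = skew(t) @ R                                  # essential matrix of the motion model
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous points, frame 1
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])  # homogeneous points, frame 2
    l2 = x1 @ E.T                                    # epipolar lines in frame 2
    l1 = x2 @ E                                      # epipolar lines in frame 1
    num = np.abs(np.sum(x2 * l2, axis=1))            # |x2^T E x1| per match
    d2 = num / (np.linalg.norm(l2[:, :2], axis=1) + 1e-12)
    d1 = num / (np.linalg.norm(l1[:, :2], axis=1) + 1e-12)
    return 0.5 * (d1 + d2)

def reject_dynamic_matches(R, t, pts1, pts2, thresh=1e-2):
    """Keep matches consistent with the IMU motion model; matches far
    from their epipolar lines are treated as mismatches or points on
    moving objects. Returns a boolean mask of static inliers."""
    return epipolar_distances(R, t, pts1, pts2) < thresh
```

A static scene point satisfies the epipolar constraint of the IMU-predicted motion and gets a near-zero distance, while a point on an independently moving object violates it and is masked out; because (R, t) comes from the IMU rather than from the images, the check remains valid even when most of the image is in motion.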
Keywords/Search Tags: Augmented Reality, Visual-Inertial Navigation System, Reality-Virtual Mapping, Dynamic Feature Points