Augmented Reality technology adds computer-generated virtual information to the real environment. To improve the visual consistency of the fusion of virtual information with the real scene, it must satisfy illumination consistency, geometric consistency, and temporal consistency. Illumination consistency requires estimating the illumination distribution of the entire real scene, so that virtual objects inserted into the scene can be rendered accurately. To solve the illumination consistency problem in Augmented Reality, the following work has been done:

1. First, to address the problem that captured images do not meet the needs of the illumination estimation task, a low-light image enhancement method combining multi-branch residuals and affine transformation is proposed. Drawing on the success of recent deep multi-branch residual networks, the method designs an illumination estimation module, an illumination affine transformation module, and a detail reconstruction module to handle the low-light, noise, and detail-loss problems in low-light images, respectively. This provides the prerequisite for the next step toward illumination consistency in Augmented Reality.

2. A material and shape estimation network (MSNet) and a spherical Gaussian illumination modeling network (SGNet) are then designed. An image is formed by the complex interaction of object shape, material, illumination, and the camera, so recovering illumination information requires inverting the image formation process. First, MSNet recovers albedo, normal, and roughness maps from a single image. SGNet then takes the output of MSNet, together with the original image and the enhanced image, as input and outputs spatially varying illumination information, which solves the problem of relighting virtual objects and provides the conditions for virtual objects to cast shadows into the real scene.

By applying deep learning to the illumination consistency problem in Augmented Reality, an image-based inverse illumination estimation method is proposed. It requires neither a priori knowledge of 3D shape nor predefined materials and textures of the objects in the scene; it is applicable to any diffuse or specular scene and needs no input beyond a single image of the scene. Moreover, applying the estimated illumination information in an augmented reality system substantially improves the visual consistency of virtual-real fusion. Therefore, the image-based inverse illumination estimation method has both theoretical and practical significance.
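To make the spherical Gaussian representation and the relighting step concrete, the following is a minimal sketch (not the thesis's actual SGNet implementation): environment radiance is modeled as a sum of spherical Gaussian lobes G(v) = mu * exp(lambda * (v . xi - 1)), and a virtual object with a given albedo and normal is relit diffusely by numerically integrating that radiance over the hemisphere. The lobe parameters and the uniform-sphere sampling scheme are illustrative assumptions.

```python
import math

def sg_radiance(direction, lobes):
    """Radiance in a unit direction as a sum of spherical Gaussian lobes.
    Each lobe is (axis xi, sharpness lambda, amplitude mu):
    G(v) = mu * exp(lambda * (v . xi - 1))."""
    vx, vy, vz = direction
    total = 0.0
    for (ax, ay, az), sharpness, amplitude in lobes:
        cos_angle = vx * ax + vy * ay + vz * az
        total += amplitude * math.exp(sharpness * (cos_angle - 1.0))
    return total

def diffuse_irradiance(normal, lobes, n_samples=512):
    """Irradiance at a surface point: integrate radiance * cos(theta) over
    the hemisphere around the normal, using uniform Fibonacci-sphere samples
    (pdf = 1/(4*pi), so the estimator scales by 4*pi / n_samples)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))
    total = 0.0
    for i in range(n_samples):
        z = 1.0 - 2.0 * (i + 0.5) / n_samples
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = golden * i
        d = (r * math.cos(phi), r * math.sin(phi), z)
        cos_t = d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2]
        if cos_t > 0.0:  # keep only directions in the upper hemisphere
            total += sg_radiance(d, lobes) * cos_t
    return total * 4.0 * math.pi / n_samples

def relight_diffuse(albedo, normal, lobes):
    """Lambertian relighting of a virtual surface point:
    outgoing radiance = albedo / pi * irradiance, per color channel."""
    e = diffuse_irradiance(normal, lobes)
    return tuple(a / math.pi * e for a in albedo)

# Illustrative example: a single bright lobe overhead relights an
# upward-facing gray surface; a downward-facing one stays nearly dark.
overhead = [((0.0, 0.0, 1.0), 10.0, 1.0)]
lit = relight_diffuse((0.5, 0.5, 0.5), (0.0, 0.0, 1.0), overhead)
dark = relight_diffuse((0.5, 0.5, 0.5), (0.0, 0.0, -1.0), overhead)
```

In the full pipeline, MSNet would supply the per-pixel albedo and normal inputs, and SGNet would supply the lobe parameters; the same integration idea extends to shadow casting by testing visibility along each sampled direction.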