The basic goal of infrared and visible image fusion is to extract the effective information from the multi-modal images of a scene and to obtain a high-quality, highly reliable, and highly informative image that fully reflects the scene. Different sensors have different advantages and limitations, and the images obtained by different sensors can complement each other through fusion. Image fusion has been widely used in fields such as remote sensing, robotics, object tracking, and security monitoring. The generative adversarial network (GAN) is one of the important representatives among image fusion models: through a Nash-equilibrium game, the network is trained so that the samples produced by the generator conform to the real data distribution, which yields high-quality generated images. However, most current fusion models make no use of consistent subspace information and adapt poorly to different environments. Aiming at these two problems, this paper designs a new fusion model based on the generative adversarial network. The main research contents are as follows:

1. Traditional GAN-based infrared and visible image fusion models mostly treat the infrared and visible images in isolation and do not analyze the internal relations between the different modalities. To address this, this paper proposes a fusion model based on a canonical correlation analysis generative adversarial network for infrared and visible image fusion. The specific improvements are as follows: (1) A canonical correlation analysis (CCA) fusion network is proposed for the first time; the network contains a CCA module, a neural network, and an inverse CCA module, so that the model can extract the consistency information of the shared subspace while preserving the salient parts of the original images. (2) A reconstruction fusion network module is designed to enrich the details of the fused image. (3) A loss function that retains both content information and structure information is designed. On the TNO dataset, the proposed model was compared with six baseline models from both visual and quantitative aspects. To evaluate the performance of the fusion model more comprehensively, six quantitative evaluation indexes were used in the quantitative analysis. The results show that the fused image retains not only the structure of the original images but also rich details.

2. Aiming at the problem that existing fusion models do not fuse adaptively according to the different amounts of information that the original images carry in different scenes (for example, under different illumination intensities), this paper proposes an infrared and visible image fusion model based on an adaptive canonical correlation analysis generative adversarial network. The specific improvements are as follows: (1) An entropy-based adaptive fusion module is proposed for the first time, which automatically emphasizes each source according to the amount of information in the original images. (2) A dense reconstruction network is designed to improve the feature extraction ability. (3) A loss function that retains content information, structure information, and gradient information is designed. On the TNO dataset, the proposed model was compared with six baseline models from both visual and quantitative aspects; six quantitative evaluation indicators were used to evaluate the fused images from multiple perspectives. The good performance on the entropy and normalized mutual information indexes shows that the proposed model preserves as much of the original image information as possible.

Figure [19] Table [2] Reference [56]
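To make the role of the CCA module concrete: canonical correlation analysis finds projection directions of two feature sets that are maximally correlated, which is the "consistency information of the subspace" the abstract refers to. The following is a minimal classical CCA in NumPy, not the thesis's learned network module; the function name `cca` and the ridge term `reg` are choices of this sketch.

```python
import numpy as np

def cca(X, Y, n_components=1, reg=1e-6):
    """Classical canonical correlation analysis via whitening + SVD.

    X: (n, p) samples of modality A, Y: (n, q) samples of modality B.
    Returns projection matrices (Wx, Wy) and the canonical correlations.
    `reg` adds a small ridge to the covariances for numerical stability.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition.
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    Kx, Ky = inv_sqrt(Sxx), inv_sqrt(Syy)
    # Singular values of the whitened cross-covariance are the
    # canonical correlations; singular vectors give the projections.
    U, s, Vt = np.linalg.svd(Kx @ Sxy @ Ky)
    Wx = Kx @ U[:, :n_components]
    Wy = Ky @ Vt.T[:, :n_components]
    return Wx, Wy, s[:n_components]
```

Projecting infrared and visible features with `Wx` and `Wy` places both modalities in a shared subspace where their correlated (consistent) content is aligned; an inverse mapping then returns fused features to image space.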
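The idea behind the entropy-based adaptive fusion module can be illustrated with a deliberately simplified pixel-level sketch, assuming Shannon entropy of the intensity histogram as the information measure and a plain weighted average as the fusion rule (the thesis's module operates inside a GAN and is not this formula):

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_fusion(ir, vis):
    """Weighted average whose weights follow each image's entropy,
    so the more informative modality contributes more to the result."""
    h_ir, h_vis = shannon_entropy(ir), shannon_entropy(vis)
    w_ir = h_ir / (h_ir + h_vis)
    fused = w_ir * ir.astype(float) + (1 - w_ir) * vis.astype(float)
    return fused, w_ir
```

In a dark scene the visible image carries little information (low entropy), so its weight shrinks and the infrared image dominates; under good illumination the balance shifts the other way, which is exactly the scene-adaptive behavior the abstract describes.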
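Of the evaluation indicators mentioned, normalized mutual information measures how much of the source images' information survives in the fused result. Definitions vary across the fusion literature; the sketch below uses one common symmetric form, 2·I(A;B) / (H(A) + H(B)), estimated from a joint intensity histogram:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=256):
    """NMI between two 8-bit grayscale images: 2*I(A;B) / (H(A)+H(B))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)          # marginal of image A
    py = pxy.sum(axis=0)          # marginal of image B
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
    mi = hx + hy - hxy            # mutual information I(A;B)
    return 2 * mi / (hx + hy)
```

Computed between each source image and the fused image (and averaged), a higher score indicates that more of the original information was transferred into the fusion, which is the sense in which the abstract's entropy and NMI results are read.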