Multi-source image fusion is an active research topic in computer vision. It aims to integrate information captured by different sensors observing the same scene into a single fused image. The fused image carries complementary information from the multiple source images, which benefits human visual observation as well as downstream vision tasks such as object detection and tracking. The main research content and contributions of this thesis are as follows:

(1) A guided-fusion Transformer-based image fusion method is proposed for infrared and visible image fusion. To overcome the limited long-range modeling capability of convolutional neural network methods, this method combines convolutional neural networks with Transformers to better extract global image features. To extract and fuse source-image features more thoroughly, the method modifies the attention computation inside the Transformer: the features of one source image are added as guidance information to the features of the other source image, so that the features of the two source images are fully fused (a sketch of this guided attention is given after this summary). In addition, to improve the quality of the fused image, the method designs a loss function based on the edge intensity of the fused image (one possible form is also sketched below). Finally, qualitative and quantitative comparisons with seven other fusion algorithms demonstrate the superiority of the method.

(2) An image fusion method based on a Transformer and a dual-discriminator generative adversarial network is proposed for infrared, SAR, and visible image fusion. To address the slow inference of the model in Research Content 1, this method optimizes the network structure and designs a more refined fusion scheme. To generate more informative fused images, it introduces a generative adversarial architecture. In addition, to alleviate the image blur caused by adversarial training and to improve fused-image quality, the method reuses the edge-intensity loss of Research Content 1 to increase image contrast and employs dual discriminators to fit the distributions of the two source images simultaneously (sketched below), thereby enriching the complementary information in the fused image. Finally, ablation experiments demonstrate the rationality of the design, and experimental results on four datasets demonstrate the effectiveness of the method.

(3) A lightweight generative adversarial network-based image fusion method is proposed for infrared, SAR, and visible image fusion. Although Research Content 2 partially modifies the model to reduce its size, the model still has a large number of parameters and requires significant computational resources. To address these issues, this method reduces the number of Transformer blocks in each stage of the model and removes the positional encoding used in the attention computation inside the Transformer. However, reducing network parameters inevitably destabilizes training; therefore, starting from the network weights of Research Content 2, the method designs a more refined loss function to control the direction of network updates. Additionally, it combines convolutional structures with Transformers to enhance information exchange and reduce feature redundancy (a sketch of such a block is given below). Finally, ablation experiments demonstrate the rationality of the improvement direction, and comparative experiments against the previous methods and other strong fusion algorithms demonstrate the effectiveness of the proposed lightweight model.
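To make the guided attention of Research Content 1 concrete, the following is a minimal PyTorch sketch, assuming queries come from one modality while keys and values come from the guiding modality; the class name, dimensions, and residual layout are illustrative assumptions, not the thesis's exact design.

```python
import torch
import torch.nn as nn

class GuidedAttention(nn.Module):
    """Cross-attention where one modality's features guide the other's.

    Hypothetical sketch: queries come from the target stream (e.g. visible
    features), keys/values from the guiding stream (e.g. infrared features),
    so the output mixes information from both sources.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # target, guide: (batch, tokens, dim) flattened feature maps
        fused, _ = self.attn(query=target, key=guide, value=guide)
        return self.norm(target + fused)  # residual keeps the original stream

# Usage: fuse visible-light tokens under infrared guidance (shapes assumed).
vis = torch.randn(2, 64, 128)   # (B, N, C) visible feature tokens
ir = torch.randn(2, 64, 128)    # (B, N, C) infrared feature tokens
out = GuidedAttention(dim=128)(vis, ir)  # (2, 64, 128)
```

Swapping the roles of `target` and `guide` gives the symmetric direction, so both feature streams can guide each other before the final fusion.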
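The summary does not give the exact form of the edge-intensity loss used in Research Contents 1 and 2. One plausible realization, sketched below, penalizes the gap between the fused image's edge response and the strongest edge response of the two sources; the Laplacian operator and the L1 distance are assumptions.

```python
import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel as a stand-in for the edge-intensity operator
# (the thesis does not specify the operator; this choice is an assumption).
_LAPLACIAN = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]]).view(1, 1, 3, 3)

def edge_map(img: torch.Tensor) -> torch.Tensor:
    # img: (B, 1, H, W) single-channel image; returns absolute edge response
    return F.conv2d(img, _LAPLACIAN.to(img.device), padding=1).abs()

def edge_intensity_loss(fused, src_a, src_b):
    """Encourage the fused image to retain the strongest source edges."""
    target = torch.maximum(edge_map(src_a), edge_map(src_b))
    return F.l1_loss(edge_map(fused), target)
```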
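The dual-discriminator objective of Research Content 2 can be sketched as follows, assuming a least-squares adversarial loss; `d_ir` and `d_vis` are hypothetical discriminator modules, each fitting one source distribution, as the summary does not state the exact adversarial formulation.

```python
import torch
import torch.nn.functional as F

def generator_adv_loss(d_ir, d_vis, fused):
    """Generator tries to make the fused image look real to both critics."""
    ir_out, vis_out = d_ir(fused), d_vis(fused)
    return (F.mse_loss(ir_out, torch.ones_like(ir_out)) +
            F.mse_loss(vis_out, torch.ones_like(vis_out)))

def discriminator_loss(d, real, fused):
    # Each discriminator fits one source distribution: real source -> 1,
    # fused output -> 0 (fused is detached so only the critic updates).
    real_out = d(real)
    fake_out = d(fused.detach())
    return (F.mse_loss(real_out, torch.ones_like(real_out)) +
            F.mse_loss(fake_out, torch.zeros_like(fake_out)))
```

Because each discriminator pulls the generator toward a different source distribution, the fused image is pushed to retain complementary content from both inputs rather than collapsing onto one of them.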
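Finally, a rough sketch of the lightweight direction in Research Content 3: a block combining a depthwise convolution (cheap local information exchange, which also injects position information and makes an explicit positional encoding unnecessary) with attention. The layer layout and sizes are assumptions, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class LiteConvTransformerBlock(nn.Module):
    """Hypothetical lightweight block mixing convolution and attention."""

    def __init__(self, dim: int, num_heads: int = 2):
        super().__init__()
        # Depthwise conv: local mixing with few parameters; its spatial
        # support also encodes position, so no explicit encoding is used.
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        x = x + self.local(x)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```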