Infrared images are little disturbed by the environment and are suitable for all-weather operation, but they usually suffer from low contrast and poor detail. Visible images have difficulty showing thermal-target information in low light, smoke, haze, and similar conditions. Therefore, fusing infrared and visible images to produce fused images with salient targets and high-resolution textures can provide richer semantic information for subsequent object detection. While visible images provide relatively abundant background features, low-quality infrared images struggle to depict the detailed texture of thermal targets accurately, which degrades subsequent fusion. To address these issues, this research focuses on infrared and visible image fusion, as follows.

A parallel multi-feature extraction network is proposed to enhance low-quality infrared images that suffer from detail blur and low contrast. The network aims to improve the expressiveness and enrich the detail texture of infrared images, ultimately improving the quality of the fused image. The method comprises a structural feature mapping network and a two-scale feature extraction network, which build global structural feature weights and generate target enhancement mappings, respectively. Experiments demonstrate that the proposed method achieves the best results on BSD200 and on real infrared images, with PSNR values of 35.42 dB and 35.72 dB and SSIM values of 0.95 and 0.96, respectively, and that it also enhances low-quality images with different contrast factors effectively.

Most current infrared and visible image fusion methods require decomposition of the source images during fusion, which tends to blur details and lose salient targets. To solve this problem, an infrared and visible image fusion algorithm based on deep convolutional feature extraction is proposed: it extracts features directly from the source images, generates seven sets of fusion weights, and uses a pixel-maximization strategy to achieve heterogeneous image fusion. All experiments are conducted on public datasets, and both subjective and objective results show that the proposed method effectively fuses the important information in infrared and visible images, highlighting detailed textures. The algorithm improves over DenseFuse by 3.31% and 3.33% on the EN and CC metrics, respectively. It is further validated that the infrared enhancement method contributes positively to fusion, and that the fused images yield better saliency detection and depth estimation results. Experimental results on infrared and RGB image fusion are also presented.

Finally, the fused images are applied to object detection, using the YOLO-v7 detector on the M3FD dataset for visible images, infrared images, and fused images. Across the "evening," "night," and "smoke" scenes, the results validate that the fused images have better applicability: the mAP of the gray-scale fused images reaches 75.77%, an improvement of 4.58% and 14.51% over visible and infrared images, respectively, and the mAP of the RGB fused images reaches 79.19%, an improvement of 9.30% and 19.68% over visible and infrared images, respectively.
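The pixel-maximization rule underlying the fusion step can be sketched as follows. This is only a minimal illustration of the per-pixel maximum operation on two registered single-channel inputs, assuming NumPy arrays; the full algorithm described above applies this rule in combination with seven sets of deep-convolutional fusion weights, which are not reproduced here.

```python
import numpy as np

def pixel_max_fuse(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Fuse two registered single-channel images by keeping, at each
    pixel location, the larger of the two intensity values."""
    if ir.shape != vis.shape:
        raise ValueError("source images must be registered to the same size")
    return np.maximum(ir, vis)

# Toy example: the hot target (infrared) and the bright background
# texture (visible) both survive in the fused result.
ir = np.array([[200, 30], [90, 10]], dtype=np.uint8)    # salient thermal target
vis = np.array([[50, 120], [80, 200]], dtype=np.uint8)  # background texture
fused = pixel_max_fuse(ir, vis)                          # [[200, 120], [90, 200]]
```

The maximum rule keeps whichever source is locally more salient, which matches the abstract's goal of preserving both thermal targets and visible textures without decomposing the sources first.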