Infrared and visible image fusion plays a pivotal role in civil surveillance, disaster detection, military target identification, and other fields. However, the thermal radiation targets in infrared images vary greatly in scale and are relatively blurred, while visible images are degraded by weather and low illumination, so the imaging quality of targets is poor. Infrared and visible image fusion therefore remains a challenging task. To address the large scale variation of infrared thermal radiation targets and the uneven quality of targets in visible images, we propose an unsupervised infrared and visible image fusion network built on three hypotheses: multi-scale decomposition with feature selection, component reconstruction, and segmentation regularization. Starting from these hypotheses, we tackle problems such as the varying scale of infrared targets and the difficulty of transferring source-image information into the fused image while preserving the complementary information of the two sources. The main content and innovations of this paper are as follows. (1) In the basic model, a feature selection module performs pixel-level weighted fusion of infrared and visible image features, and a cross-scale feature selection module integrates deep features to reduce the loss of detail information. Since this task has no labels, the paper studies how different image content loss functions and image structure loss functions affect the fusion results. (2) The component reconstruction hypothesis is proposed: a component reconstruction module learns to recover the infrared and visible source images from the fused image, and no extra parameters or computation are introduced at test time. The results show that the fused image retains more source-image information. (3) To preserve both the infrared targets and the background texture information in the fused image, the hypothesis of segmentation-guided fusion is
proposed. A segmentation network is applied to the fused image, and the segmentation loss constrains the fused image to exhibit sharper edge details and a more prominent target contour. No extra parameters or computation are introduced at test time, and both the subjective visual quality and the objective evaluation metrics improve substantially. (4) The training set and test set used in this paper come from different datasets, and experiments demonstrate the effectiveness and generalization of the proposed algorithm. A model trained directly on infrared and visible image datasets can be applied to multi-focus image fusion and multi-exposure image fusion without fine-tuning, and still yields good subjective visual quality. In addition, this paper compares against current state-of-the-art methods on the TNO and VIFB datasets, and ranks among the best in both subjective visual quality and objective evaluation metrics.
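The pixel-level weighted fusion described in contribution (1) can be sketched as below. This is a minimal illustration, not the thesis's learned module: the per-pixel weight here is a softmax over the two inputs' intensities, standing in for the feature responses a trained feature selection module would produce.

```python
import math

def pixel_weighted_fusion(ir, vis):
    """Fuse two same-sized grayscale images (nested lists, values in [0, 1]).

    Each output pixel is a convex combination of the infrared and
    visible pixels. The weight comes from a softmax over the pixel
    intensities -- an illustrative stand-in for learned feature
    activations, not the actual module from the thesis.
    """
    fused = []
    for row_ir, row_vis in zip(ir, vis):
        row = []
        for a, b in zip(row_ir, row_vis):
            w_ir = math.exp(a) / (math.exp(a) + math.exp(b))  # softmax weight for IR pixel
            row.append(w_ir * a + (1.0 - w_ir) * b)           # convex combination
        fused.append(row)
    return fused

ir  = [[0.9, 0.1], [0.8, 0.2]]
vis = [[0.2, 0.7], [0.3, 0.6]]
fused = pixel_weighted_fusion(ir, vis)
```

Because the combination is convex, every fused pixel stays within the range of the two source pixels, while the softmax biases the result toward the stronger (e.g. a hot thermal target) response.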
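Contributions (1)–(3) together imply a training objective that sums a content/structure term with the component reconstruction and segmentation losses; since the reconstruction and segmentation branches exist only to shape this loss, they are dropped at test time. A scalar sketch of such a composition follows; the weighting factors are illustrative assumptions, not the thesis's tuned values.

```python
def total_loss(content, structure, recon_ir, recon_vis, seg,
               lam_rec=1.0, lam_seg=0.5):
    """Illustrative composition of the losses named in the abstract.

    content, structure   : pull the fused image toward source content
                           and structural similarity.
    recon_ir, recon_vis  : component reconstruction errors -- how well
                           each source can be recovered from the fusion.
    seg                  : segmentation loss regularizing edge details
                           and target contours.
    lam_rec, lam_seg     : assumed trade-off weights for illustration.
    """
    return content + structure + lam_rec * (recon_ir + recon_vis) + lam_seg * seg
```

The point of the composition is that the auxiliary terms only add gradients during training; the fusion network evaluated at test time is unchanged, which is why no extra parameters or computation appear at inference.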