With the development of intelligent vehicle platforms and intelligent driving technology, the demands placed on vehicle-platform intelligent algorithms continue to rise. At present, most intelligent vision algorithms for vehicles focus on daytime scenes and neglect the needs of nighttime scenes. Image algorithms based on infrared-visible dual-modal imagery have therefore become a research focus, especially image fusion and image segmentation. However, for image fusion, infrared and visible images differ greatly in nature, and nighttime vehicle scenes contain little information with a complex spatial distribution; for image segmentation, nighttime vehicle scenes are more complex, and infrared and visible images have weak features and often contain small-scale targets. This thesis therefore focuses on the respective characteristics of image fusion and image segmentation algorithms in nighttime vehicle scenes, and makes fusion and segmentation complement and promote each other, so that improvements at the decision level boost the performance of pixel-level tasks. The main work is as follows:

(1) In infrared and visible image fusion, the imaged scenes are complex, and the contradiction between target saliency and texture-detail retention is difficult to balance. To address this, this thesis uses features mined by semantic segmentation to guide image fusion and constructs ASGGAN, an infrared-visible image fusion network based on adversarial semantic guidance. The network introduces an adversarial semantic guidance module (ASG), i.e. a semantic discriminator, which uses the segmentation network to transfer semantic information into the fusion process, thereby enhancing the target saliency of the fused image. Meanwhile, an adversarial visual perception module (AVP), i.e. a perception discriminator, is constructed with a conditional U-shaped discriminator structure, so that the global structural characteristics and local texture details of the image are fully preserved during fusion and the fused image retains the natural appearance of visible light. Experimental results show that ASGGAN improves the objective edge intensity (EI) metric by 0.86 over the baseline network.

(2) In infrared and visible image segmentation, the infrared and visible images captured by the vehicle platform at night differ greatly in nature and features, and their features easily interfere with each other and hinder information extraction. In contrast to ASGGAN, which uses segmentation to promote image fusion, this part uses complementary feature fusion to promote semantic segmentation, constructing RPAFNet, an infrared-visible nighttime vehicle-scene semantic segmentation network based on a residual pyramid and attention feature fusion. Attention-based feature fusion is applied to the trunk and residual branches of the visible and infrared streams, so that the useful information of the two feature paths complements each other at every level, strengthening favorable information while suppressing redundant and unfavorable information. Through channel attention and spatial attention, the segmentation of objects with weak features or small scale is strengthened. Experimental results show that RPAFNet improves the mean intersection over union (mIoU) by 4.0% over the baseline network.
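The abstract describes a generator trained against two discriminators, a semantic discriminator (ASG) and a perception discriminator (AVP). The exact loss formulation is not given here; the following is a minimal sketch of how such a dual-discriminator generator objective could be combined, with the weighting coefficients `lambda_sem` and `lambda_perc` being purely illustrative assumptions, not values from the thesis.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over discriminator probabilities in (0, 1)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def generator_loss(d_sem_out, d_perc_out, lambda_sem=1.0, lambda_perc=1.0):
    """Adversarial generator objective against two discriminators.

    d_sem_out  -- semantic discriminator scores on the fused image
                  (conditioned on segmentation output), shape (N,)
    d_perc_out -- perception (conditional U-shaped) discriminator scores, shape (N,)

    The generator tries to make both discriminators label its fused
    images as real (target 1).  The lambda_* weights are hypothetical.
    """
    real = np.ones_like(d_sem_out)
    return (lambda_sem * bce(d_sem_out, real)
            + lambda_perc * bce(d_perc_out, np.ones_like(d_perc_out)))
```

The key design point mirrored here is that the semantic term pushes the fused image toward segmentation-friendly target saliency while the perception term pushes it toward a visually natural visible-light appearance, so neither objective dominates alone.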
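The edge intensity (EI) metric cited above is conventionally computed as the mean Sobel gradient magnitude of the fused image; a plain sketch of that standard definition follows (the thesis may use a scaled or windowed variant).

```python
import numpy as np

def edge_intensity(img):
    """Edge intensity (EI): mean Sobel gradient magnitude of a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical Sobel kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):          # valid convolution, no padding
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())
```

A higher EI indicates sharper, better-preserved edges in the fused result, which is why a +0.86 gain over the baseline indicates improved detail retention.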
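The attention feature fusion in RPAFNet reweights the visible and infrared feature streams with channel and spatial attention before combining them. A toy numpy sketch of that idea, using parameter-free pooling-plus-sigmoid attention (the learned convolutional attention in the actual network is more elaborate, and the function names here are invented for illustration):

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Per-channel weights: global average pool + sigmoid, (C,H,W) -> (C,1,1)."""
    return _sigmoid(feat.mean(axis=(1, 2), keepdims=True))

def spatial_attention(feat):
    """Per-pixel weights: channel-wise mean + sigmoid, (C,H,W) -> (1,H,W)."""
    return _sigmoid(feat.mean(axis=0, keepdims=True))

def attention_fuse(feat_vis, feat_ir):
    """Fuse visible and infrared feature maps, both shaped (C, H, W).

    Each stream is reweighted by its own channel and spatial attention,
    then the streams are summed, so strong, informative responses are
    amplified and weak or redundant ones are suppressed.
    """
    out_vis = feat_vis * channel_attention(feat_vis) * spatial_attention(feat_vis)
    out_ir = feat_ir * channel_attention(feat_ir) * spatial_attention(feat_ir)
    return out_vis + out_ir
```

Applying this at every level of a residual pyramid, to both trunk and residual branches, is what lets the two modalities complement each other across scales rather than interfering in a single late fusion step.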