Saliency detection is a method for locating and segmenting the most visually attractive regions in an image. In recent years, saliency detection methods based on neural networks have developed rapidly, and detection performance has improved greatly. However, existing methods still do not make full use of multiple features, and their detection in complex scenes is not accurate enough. Therefore, how to extract rich multi-feature information and how to fuse it reasonably remains a major challenge in current research. To address these shortcomings, this paper proposes two saliency detection algorithms based on fully convolutional neural networks, which make full use of multi-scale features and multi-modal features across different levels and capture more discriminative semantic and detail information to improve detection accuracy. The contributions of this paper are as follows:

(1) To address the problem that existing algorithms do not extract and fuse multi-scale features effectively, an RGB saliency detection model based on multi-scale feature fusion and boundary feature refinement is proposed. First, a feature interaction module enhances the multi-scale information at different levels and fuses features of different scales through short connections, improving the representation capability of the side-output features from the backbone network. Second, a feature refinement module uses an attention mechanism to extract the semantic information in high-level features and guides the low-level features, increasing the importance of spatial and channel information. Finally, a boundary feedback module is designed to refine boundary details and obtain more delicate and clearer boundaries of salient objects. We compare the proposed method with 15 state-of-the-art algorithms, and it performs well on several metrics, showing that it adapts to different scenarios and achieves more accurate detection.

(2) To address the insufficient fusion of multi-modal features in existing algorithms, a multi-modal fusion RGB-D saliency detection model is proposed. First, a depth enhancement module is proposed to improve the quality of depth maps, thereby alleviating the inaccurate spatial information caused by low-quality depth. Second, a depth-guided feature fusion module is designed to make reasonable use of the common and modality-specific information shared between RGB features and depth features, enhancing the communication and integration of cross-modality features. Experiments on six public RGB-D datasets and performance comparisons with other methods validate the effectiveness of the proposed method.
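
The abstract does not give implementation details of the feature refinement module described in contribution (1). The following is a minimal PyTorch sketch of one common way such attention-guided refinement is realized, assuming high-level semantics produce channel and spatial attention that re-weight a low-level feature map; the class name, channel arguments, and layer choices are illustrative assumptions, not the thesis's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureRefinement(nn.Module):
    """Illustrative sketch (hypothetical): high-level semantics guide
    low-level features via channel and spatial attention."""
    def __init__(self, high_ch, low_ch):
        super().__init__()
        # project high-level features to the low-level channel count
        self.proj = nn.Conv2d(high_ch, low_ch, kernel_size=1)
        # channel attention from globally pooled high-level semantics
        self.channel_fc = nn.Sequential(
            nn.Linear(low_ch, low_ch // 4),
            nn.ReLU(inplace=True),
            nn.Linear(low_ch // 4, low_ch),
            nn.Sigmoid(),
        )
        # spatial attention from the projected high-level map
        self.spatial_conv = nn.Conv2d(low_ch, 1, kernel_size=7, padding=3)

    def forward(self, low, high):
        # upsample high-level features to the low-level resolution
        high = self.proj(high)
        high = F.interpolate(high, size=low.shape[2:],
                             mode='bilinear', align_corners=False)
        # channel attention: which low-level channels carry salient semantics
        ca = self.channel_fc(high.mean(dim=(2, 3))).unsqueeze(-1).unsqueeze(-1)
        # spatial attention: where the salient object is likely located
        sa = torch.sigmoid(self.spatial_conv(high))
        # refine the low-level features with both attentions (residual form)
        return low + low * ca * sa
```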
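
Similarly, the depth-guided feature fusion module in contribution (2) is only described at a high level. The sketch below shows one plausible form of such cross-modal fusion, assuming a depth-derived gate modulates the RGB stream before the two modalities are merged; the class name, gating scheme, and layer configuration are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class DepthGuidedFusion(nn.Module):
    """Illustrative sketch (hypothetical): fuse same-level RGB and depth
    features, letting a depth-derived gate emphasize RGB responses."""
    def __init__(self, channels):
        super().__init__()
        # gate estimating, per position, how informative the depth cue is
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # merge the two streams back into a single feature map
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_feat, depth_feat):
        g = self.gate(depth_feat)           # spatial reliability of depth
        rgb_enhanced = rgb_feat * (1 + g)   # depth cue emphasizes RGB responses
        return self.fuse(torch.cat([rgb_enhanced, depth_feat], dim=1))
```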