Person re-identification (Re-ID) aims to retrieve the query pedestrian image from large-scale pedestrian databases across several non-overlapping cameras. It has important applications in security surveillance and unmanned supermarkets, and has attracted widespread attention from scholars at home and abroad. In recent years, owing to the development of deep learning and its success in computer vision, deep-learning-based person re-identification methods have made great progress. However, occlusion is ubiquitous in real scenarios: pedestrian images are easily occluded by various obstacles and by other pedestrians. Occlusion severely degrades the trained model, making it difficult for deep-learning-based person re-identification methods to learn robust feature representations and leading to a significant drop in performance. Therefore, extracting features that are robust to occlusion is essential for person re-identification in real scenarios.

The main contributions of this paper are as follows. First, we propose a Semantic-Guided Multi-Granularity Network (SGMGN) for occluded person Re-ID. Compared with global features, local features are more robust to occlusion: when one local area is occluded, features from other local areas can still provide effective discriminative information. We therefore use human semantic segmentation labels as the supervision signal to extract local features from different semantic parts of the human body. Meanwhile, because the scale of occlusion varies, we aggregate the semantic labels into different granularities to supervise the extraction of local features at multiple granularities. Multi-granularity local features adapt well to occluding obstacles of different scales, which effectively improves the performance of occluded person Re-ID. For the global feature, we incorporate additional semantic information so that it attends more to non-occluded semantic parts. Experimental results show that
our proposed SGMGN method achieves better results than state-of-the-art methods on a large-scale occluded person Re-ID dataset (Occluded-DukeMTMC). The performance of our method on two partial person Re-ID datasets (Partial-REID and Partial-iLIDS) is also significantly improved over the baseline model. Our method further achieves results comparable to state-of-the-art person Re-ID methods on two general person Re-ID datasets (Market-1501 and DukeMTMC-reID).

Second, we propose a Semantic-Aware Occlusion Robust Network (SORN) for occluded person Re-ID. The proposed method has a three-branch architecture consisting of a global branch, a local branch, and a semantic branch. The local branch divides the feature map uniformly in the vertical direction to extract local features; since the high-level feature map has a large receptive field, the extracted local features can tolerate a certain degree of misalignment. The global branch uses a novel Spatial Patch Contrastive (SPC) loss to extract global features that are robust to occlusion. Meanwhile, the semantic branch generates a segmentation mask separating the pedestrian foreground from the background, so that non-occluded pedestrian body parts can be recognized. The three branches are jointly trained under a unified multi-task learning framework. Finally, we match using the global feature together with the local features from non-occluded regions. Experimental results show that our method achieves a large performance improvement on two partial person Re-ID datasets (Partial-REID and Partial-iLIDS). On the large-scale occluded person Re-ID dataset (Occluded-DukeMTMC), our method outperforms state-of-the-art person Re-ID methods. Comparable results are also achieved on two general person Re-ID datasets (Market-1501 and DukeMTMC-reID).
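To illustrate the label-aggregation idea behind SGMGN's multi-granularity supervision, the sketch below remaps a fine-grained human parsing label map into coarser granularities. It is a minimal illustration only: the part names, label IDs, and groupings here are assumptions for demonstration, not the paper's actual parsing scheme.

```python
import numpy as np

# Hypothetical fine-grained part labels (assumed for illustration;
# the actual label set depends on the human parsing model used).
# 0 = background.
FINE_PARTS = {"head": 1, "torso": 2, "arms": 3, "legs": 4, "feet": 5}

# Coarser granularities formed by merging fine parts (assumed groupings).
COARSE_2 = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2}   # upper body / lower body
COARSE_1 = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1}   # whole foreground

def aggregate_labels(label_map, mapping):
    """Remap a fine-grained parsing label map (H, W) of integer part
    IDs to a coarser granularity; unmapped pixels become background."""
    out = np.zeros_like(label_map)
    for fine, coarse in mapping.items():
        out[label_map == fine] = coarse
    return out
```

Each granularity of the remapped map can then serve as a separate supervision signal, so local features are learned at scales matching differently sized occluders.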
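The uniform vertical partition and visibility-aware matching described for SORN can be sketched as follows. This is a simplified illustration with assumed shapes and a plain shared-visible-stripe Euclidean distance, not the paper's exact implementation.

```python
import numpy as np

def stripe_features(feat_map, num_stripes=6):
    """Uniformly split a (C, H, W) feature map into horizontal stripes
    along the vertical axis and average-pool each stripe into a
    C-dimensional local feature. Returns (num_stripes, C)."""
    C, H, W = feat_map.shape
    bounds = np.linspace(0, H, num_stripes + 1, dtype=int)
    return np.stack([
        feat_map[:, bounds[i]:bounds[i + 1], :].mean(axis=(1, 2))
        for i in range(num_stripes)
    ])

def visible_distance(q_feats, g_feats, q_vis, g_vis):
    """Match query and gallery using only stripes whose region is
    marked foreground (non-occluded) in BOTH images; q_vis/g_vis are
    boolean visibility flags derived from the segmentation mask."""
    shared = q_vis & g_vis
    if not shared.any():
        return np.inf  # no comparable body parts
    d = np.linalg.norm(q_feats[shared] - g_feats[shared], axis=1)
    return d.mean()
```

In practice the visibility flags would come from the semantic branch's foreground/background mask, pooled per stripe, and the global-feature distance would be combined with this local distance.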