Person re-identification aims to identify images of a specific pedestrian across non-overlapping cameras. With the popularization of surveillance equipment, person re-identification technology is widely used in fields such as security monitoring and suspect tracking. However, in real scenes, factors such as pedestrian occlusion, pose change, variation in illumination intensity, and the low resolution of images captured by different cameras cause large changes in pedestrian appearance, making it difficult to obtain robust features. Therefore, learning robust and discriminative features has become the key to solving the person re-identification task. Traditional person re-identification methods extract features using hand-crafted color and texture descriptors or learn strong similarity metrics. However, these methods rely on manually designed prior knowledge, and in complex environments the generalization ability of the resulting features is weak. In recent years, deep learning based methods have become the mainstream approach to person re-identification and, compared with traditional methods, have brought large improvements in extracting pedestrian feature information and learning similarity metrics. This paper is therefore based on deep learning methods, and its main contents are as follows:

1. A person re-identification method based on the fusion of multi-dimensional attention mechanisms is proposed. It addresses the spatial misalignment problem in which body parts in different images do not correspond because of changes in human pose, and it captures global and local features from both the spatial and channel dimensions to learn discriminative features. The classical ResNet-50 is selected as the backbone network, and a channel attention module is embedded in it. The channel attention module captures dependencies along the channel dimension, estimates the importance weight of each feature channel, and thus enhances useful local features while suppressing useless ones. The feature map output by ResNet-50 is then fed to a self-attention module, which captures contextual information in the spatial dimension, weights each spatial feature, and produces the global feature. Experiments on two datasets show that this method obtains more robust features and improves recognition accuracy. A minimal sketch of the attention-fused backbone is given below.

2. A person re-identification method based on a self-attention mechanism with multi-loss optimization is proposed. To address the problem that intra-class distances in the feature space vary greatly while inter-class distances are small, center loss, Softmax loss, and hard-sample-mining triplet loss are trained jointly, which makes intra-class features more compact while maintaining inter-class separation and improves the network's ability to discriminate pedestrians. At the same time, self-attention learns pixel-level dependencies to obtain contextual information and normalize the feature maps, improving the generalization ability of the network. In the data preprocessing stage, random erasing augmentation (REA) is applied to the training images to address image occlusion. The method is validated through extensive experiments on multiple datasets, including Market-1501 and DukeMTMC-reID; the experimental results show that Rank-1 accuracy reaches 93% and 84%, respectively. Sketches of the joint loss and the preprocessing pipeline are given below.
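
The following PyTorch sketch illustrates the attention fusion described in contribution 1. The module placements (channel attention after layer3, self-attention on the layer4 output), the reduction ratio, and the number of identities are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Minimal sketch of a ResNet-50 backbone with channel attention and spatial
# self-attention; placements and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: learns per-channel
    importance weights to enhance useful local features and suppress noise."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)              # reweight channels of the feature map


class SpatialSelfAttention(nn.Module):
    """Non-local style self-attention: captures pixel-to-pixel dependencies
    in the spatial dimension to aggregate global context."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))     # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # B x HW x C'
        k = self.key(x).flatten(2)                    # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)           # B x HW x HW
        v = self.value(x).flatten(2)                  # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual fusion


class AttentionReIDBackbone(nn.Module):
    """ResNet-50 with channel attention inserted after layer3 and spatial
    self-attention applied to the final feature map (placement is assumed;
    num_ids=751 corresponds to the Market-1501 training identities)."""
    def __init__(self, num_ids=751):
        super().__init__()
        base = resnet50(weights=None)
        self.stem = nn.Sequential(base.conv1, base.bn1, base.relu, base.maxpool,
                                  base.layer1, base.layer2, base.layer3)
        self.ca = ChannelAttention(1024)
        self.layer4 = base.layer4
        self.sa = SpatialSelfAttention(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048, num_ids)

    def forward(self, x):
        x = self.ca(self.stem(x))           # local features, channel-reweighted
        x = self.sa(self.layer4(x))         # global context via self-attention
        feat = self.pool(x).flatten(1)      # embedding used for retrieval
        return feat, self.classifier(feat)  # (feature, ID logits)
```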
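
The joint loss of contribution 2 can be sketched as follows. The margin, loss weights, and batch-hard mining strategy are assumptions made for illustration; in practice the class centers of the center loss are typically updated with a separate optimizer.

```python
# Minimal sketch of joint training with Softmax (cross-entropy), center loss,
# and hard-sample-mining triplet loss; weights and margin are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CenterLoss(nn.Module):
    """Pulls each feature toward its learnable class center, making
    intra-class features more compact."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()


def batch_hard_triplet_loss(feats, labels, margin=0.3):
    """Hard-sample-mining triplet loss: for each anchor, use the farthest
    positive and the closest negative within the batch."""
    dist = torch.cdist(feats, feats)                       # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values  # farthest positive
    masked_neg = dist + same.float() * 1e9                 # exclude positives
    hardest_neg = masked_neg.min(dim=1).values             # closest negative
    return F.relu(hardest_pos - hardest_neg + margin).mean()


def joint_loss(logits, feats, labels, center_loss, w_center=5e-4, w_tri=1.0):
    """Softmax loss + hard triplet loss + center loss, trained jointly."""
    return (F.cross_entropy(logits, labels)
            + w_tri * batch_hard_triplet_loss(feats, labels)
            + w_center * center_loss(feats, labels))
```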
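
For the preprocessing stage, random erasing is available directly in torchvision; the erasing probability and input resolution below are assumed values, not necessarily those used in the experiments.

```python
# Training-time preprocessing with random erasing to simulate occlusion.
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.Resize((256, 128)),        # common ReID input size (assumed)
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),      # randomly occludes a region of the image
])
```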