With the continuous development of artificial intelligence and the arrival of the big data era, AI technology has been widely applied in daily life. Person re-identification (re-ID) is the task of identifying the same individual across images of people captured by cameras or other sensors in an uncontrolled environment. The task has broad applications in video surveillance, intelligent transportation, and other fields. However, because data are difficult to collect and matching must be performed across cameras, person re-ID remains a challenging problem in computer vision. Moreover, the task involves processing large amounts of data, and traditional manual annotation requires considerable time and effort, which makes automated, unsupervised person re-ID methods highly sought after. In this context, this paper proposes an unsupervised person re-ID retrieval system to make the re-ID task more efficient, accurate, and scalable. The main contributions of this paper are as follows:

(1) We build a person re-ID system based on unsupervised techniques, which removes the cost of manual annotation and allows the system to be deployed in new scenarios more efficiently and accurately.

(2) Unsupervised person re-ID suffers from camera similarity: in unsupervised scenarios, images captured by the same camera tend to resemble each other regardless of identity. We propose a camera similarity decoupling module to suppress this influence.

(3) Most unsupervised person re-ID methods rely on clustering algorithms to generate pseudo labels. Because the model does not learn hard samples well, some samples are misidentified as outliers by the clustering algorithm, and previous unsupervised methods simply discard them. We design a multi-round outlier recall strategy that lets the model fully learn from these hard samples, making it more robust (see the sketch below for the general idea).

Compared with state-of-the-art unsupervised person re-identification algorithms, the method designed in this paper improves mAP by 4.1%, 2.9%, and 6.5% on the Market1501, DukeMTMC-reID, and MSMT17 datasets, respectively.
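To make contribution (3) concrete, the following is a minimal sketch of the standard clustering-based pseudo-labeling pipeline with a simple outlier-recall step: samples that DBSCAN marks as outliers are pulled back into the nearest cluster over several rounds instead of being discarded. This is an illustration of the general idea only, not the exact module proposed in this paper; the DBSCAN parameters, the `max_dist` threshold, the number of rounds, and the `recall_outliers` helper are all illustrative assumptions.

```python
# Minimal sketch: clustering-based pseudo-labels with a simple outlier-recall loop.
# Parameters and helper names are illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.cluster import DBSCAN


def assign_pseudo_labels(features, eps=0.6, min_samples=4):
    """Cluster embeddings with DBSCAN; unclustered samples get label -1 (outliers)."""
    return DBSCAN(eps=eps, min_samples=min_samples, metric="euclidean").fit_predict(features)


def recall_outliers(features, labels, max_dist=0.7, rounds=3):
    """Hypothetical multi-round recall: reassign an outlier to its nearest cluster
    centroid whenever it is close enough, rather than discarding it."""
    labels = labels.copy()
    for _ in range(rounds):
        cluster_ids = np.unique(labels[labels >= 0])
        outlier_idx = np.where(labels == -1)[0]
        if cluster_ids.size == 0 or outlier_idx.size == 0:
            break
        # Centroids are recomputed each round from the current assignment.
        centroids = np.stack([features[labels == c].mean(axis=0) for c in cluster_ids])
        dists = np.linalg.norm(features[outlier_idx, None, :] - centroids[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        close_enough = dists[np.arange(outlier_idx.size), nearest] < max_dist
        if not close_enough.any():
            break
        labels[outlier_idx[close_enough]] = cluster_ids[nearest[close_enough]]
    return labels


if __name__ == "__main__":
    feats = np.random.rand(200, 128).astype(np.float32)  # stand-in for CNN embeddings
    # L2-normalize so Euclidean distance tracks cosine similarity.
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    labels = assign_pseudo_labels(feats)
    labels = recall_outliers(feats, labels)
    print("clusters:", len(set(labels[labels >= 0])), "remaining outliers:", int((labels == -1).sum()))
```

In a full training loop, this reassignment would typically be repeated after each feature-update epoch, and the recall threshold could be tightened as the embeddings improve; the fixed values above are placeholders.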