
Research On Visible Light-Infrared Person Re-ID Based On Shared Feature Refinement

Posted on: 2024-01-28
Degree: Master
Type: Thesis
Country: China
Candidate: Y Y Zhang
Full Text: PDF
GTID: 2568307106482224
Subject: Electronic information
Abstract/Summary:
With the wide deployment of surveillance equipment in public places, pedestrian image and video data are growing rapidly. The traditional approach of matching pedestrians manually is inefficient and costly, and person re-identification based on machine intelligence has emerged in response. At present, most person re-identification methods focus on visible-visible single-modality images, but as security monitoring systems continue to improve, more and more infrared cameras are deployed for surveillance at night or in dim light. Because visible and infrared cameras differ in their imaging principles, there is a huge modality gap between cross-modality images, and traditional person re-identification methods are difficult to apply to cross-modality scenes, so the task of visible-infrared person re-identification has attracted increasing attention. Visible-infrared cross-modality person re-identification faces the dual challenges of inter-modality and intra-modality differences. In this paper, visible-infrared person re-identification is improved by refining modality-shared features using complementary global and local features, complementary modality-specific and modality-shared features, and attention mechanisms. The main research contents of this paper are as follows:

(1) Existing methods usually focus on global features, which limits the shared information they learn and neglects fine-grained features. A multi-granularity feature utilization network for cross-modality visible-infrared person re-identification is proposed, which enhances the complementarity between coarse-grained and fine-grained features by fusing the modality-shared information in global and local features. First, the fine-grained local shared features of the two modalities are extracted by two feature extractors, and a hard-mining triplet loss and a heterogeneous center loss are used to promote intra-class compactness and inter-class separability at the sample level and the class-center level, respectively (both losses are sketched after this abstract). Then, a multi-modality feature aggregation module fuses the information of the two modalities and learns the relationship between their features to alleviate the differences.

(2) Existing methods lack discriminative modality-shared information because they usually ignore the information in modality-specific features that can alleviate modality differences. A cross-modality method based on dual-attention feature enhancement is proposed, which supplements the missing shared features between modalities by mining the shared information in shallow and deep features, respectively. Specifically, the visible image is first converted into a grayscale image by a channel enhancement strategy to eliminate the interference of color information (sketched below). Then, a shallow feature measurement module is designed for modality-specific features: the distributions of features belonging to the same identity in the two modalities are aligned with a maximum mean discrepancy (MMD) loss, so as to narrow the within-class distance and mine hidden shared information. Finally, a dual-attention feature enhancement module is proposed to mine more effective contextual information from the shared features, so as to shorten the distance between samples of the same identity.
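
As a hedged illustration of the two losses named in (1), the PyTorch sketch below implements a standard batch-hard triplet loss and a heterogeneous center loss that pulls each identity's visible and infrared feature centers together. The function names, margin value, and modality encoding (0 for visible, 1 for infrared) are assumptions for illustration, not the thesis's exact formulation.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(feats, labels, margin=0.3):
    # Pairwise Euclidean distances within the mini-batch.
    dist = torch.cdist(feats, feats)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Hardest positive: farthest sample sharing the anchor's identity.
    hardest_pos = dist.masked_fill(~same, float('-inf')).amax(dim=1)
    # Hardest negative: closest sample with a different identity.
    hardest_neg = dist.masked_fill(same, float('inf')).amin(dim=1)
    return F.relu(hardest_pos - hardest_neg + margin).mean()

def hetero_center_loss(feats, labels, modality):
    # For each identity, pull the visible-modality center and the
    # infrared-modality center toward each other (squared L2 distance).
    loss, count = feats.new_zeros(()), 0
    for pid in labels.unique():
        vis = feats[(labels == pid) & (modality == 0)]
        ir = feats[(labels == pid) & (modality == 1)]
        if len(vis) > 0 and len(ir) > 0:
            loss = loss + (vis.mean(0) - ir.mean(0)).pow(2).sum()
            count += 1
    return loss / max(count, 1)

# Toy usage: 8 features, 2 identities, both modalities present.
feats = torch.randn(8, 256)
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
modality = torch.tensor([0, 0, 1, 1, 0, 0, 1, 1])
total = batch_hard_triplet_loss(feats, labels) + hetero_center_loss(feats, labels, modality)
```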
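The channel enhancement step in (2) converts visible images to grayscale before feature extraction. A minimal sketch, assuming the luminance is replicated over three channels so the backbone's input shape is unchanged (the thesis's exact strategy may differ):

```python
import torch

def to_gray3(rgb):
    # rgb: (N, 3, H, W) batch in RGB channel order.
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    # ITU-R BT.601 luma weights; color information is discarded.
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    # Replicate to three channels so a standard CNN backbone accepts it.
    return gray.repeat(1, 3, 1, 1)
```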
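The shallow feature measurement module aligns same-identity feature distributions across modalities with an MMD loss. Below is a minimal sketch of the biased RBF-kernel MMD estimator; the bandwidth `sigma` is an assumed hyperparameter, and the thesis may use a different kernel or estimator.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    # Squared MMD under a Gaussian kernel:
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Usage: x, y hold same-identity features from the two modalities.
x, y = torch.randn(16, 256), torch.randn(16, 256)
loss = rbf_mmd2(x, y)
```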
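The abstract does not specify the internal design of the dual-attention feature enhancement module. One common reading of "dual attention" is sequential channel and spatial attention (CBAM-style), sketched below purely as an assumed, illustrative design:

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    # Channel attention followed by spatial attention; an assumed
    # CBAM-style design, not necessarily the thesis's module.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):  # x: (N, C, H, W)
        # Channel attention from average- and max-pooled descriptors.
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * w[:, :, None, None]
        # Spatial attention from per-location channel statistics.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```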
Keywords/Search Tags: Visible-infrared person re-identification, Cross-modality, Modality-shared feature