
Research On Cross-camera Vehicle Tracking Method Based On Omni-scale Features

Posted on: 2022-08-01
Degree: Master
Type: Thesis
Country: China
Candidate: Z Y Wang
Full Text: PDF
GTID: 2492306575466104
Subject: Computer technology

Abstract/Summary:
With the construction of intelligent cities, surveillance cameras have been deployed on a large scale, and the volume of surveillance video in intelligent transportation systems has grown exponentially. As car ownership continues to rise, effective analysis of this surveillance video is becoming increasingly challenging. Cross-camera vehicle re-identification and tracking, the two core video-analysis tasks in intelligent transportation systems, have long been research hotspots in computer vision. In real urban surveillance environments, different vehicles of the same brand, type, and color are highly similar and hard to distinguish, which often degrades the performance of cross-camera vehicle re-identification methods and, in turn, harms subsequent cross-camera vehicle tracking. This thesis focuses on cross-camera vehicle re-identification and tracking methods; the specific work is as follows:

1. To extract more comprehensive discriminative vehicle features, a vehicle re-identification method based on omni-scale and attention fusion learning is proposed. The method extracts features at different scales through several receptive fields of different sizes and fuses the multi-scale features through a shared aggregation gate, giving the fused features better expressiveness. To focus on discriminative features during extraction, a convolutional block attention module is introduced, which clarifies what content and which regions the network model should learn. Experimental results show that the method reaches 82.3% and 77.4% mAP on the VeRi-776 and VeRi-Wild datasets, respectively, verifying its feasibility.

2. To extract richer features from multiple video frames, an improved vehicle re-identification method is proposed and incorporated into the cross-camera vehicle tracking task. Building on the vehicle re-identification method with omni-scale and attention fusion learning, it introduces a temporal attention model and fuses the attention scores with the video-frame features by weighted summation for an enhanced feature representation. The improved method is further combined with single-camera tracking and cross-camera trajectory association to accomplish cross-camera vehicle tracking. Experimental results show that the method improves IDF1 on the CityFlow dataset by 3.83% over the original method without the temporal attention model, demonstrating its effectiveness.
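The shared aggregation gate described in item 1 can be illustrated with a minimal sketch. The abstract does not give the exact formulation, so the gate below is an assumption modeled on the OSNet-style unified aggregation gate: each scale branch's feature vector is weighted channel-wise by a sigmoid gate whose parameters `W` and `b` are shared across all branches, and the gated features are summed. All function and parameter names are illustrative, not the thesis's own.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aggregation_gate_fuse(branch_feats, W, b):
    """Fuse multi-scale branch features with a shared aggregation gate.

    branch_feats: list of (C,) feature vectors, one per receptive-field branch.
    W: (C, C) and b: (C,) -- gate parameters shared by every branch.
    """
    fused = np.zeros_like(branch_feats[0])
    for x in branch_feats:
        gate = sigmoid(W @ x + b)  # channel-wise weights in (0, 1)
        fused += gate * x          # gated contribution of this scale
    return fused
```

Because the gate is shared, the number of parameters stays constant no matter how many scale branches are fused, which is the usual motivation for this design.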
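The convolutional block attention module (CBAM) mentioned in item 1 applies channel attention followed by spatial attention to a feature map. The sketch below is a simplified, hypothetical version: the channel branch uses the standard shared MLP over average- and max-pooled descriptors, while the spatial branch replaces CBAM's 7x7 convolution with a plain sigmoid over channel-pooled maps to keep the example self-contained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_simplified(feat, W1, W2):
    """Simplified CBAM: channel attention, then spatial attention.

    feat: (C, H, W) feature map.
    W1: (C//r, C) and W2: (C, C//r) -- shared MLP of the channel branch.
    """
    # Channel attention: shared MLP on avg- and max-pooled descriptors.
    avg = feat.mean(axis=(1, 2))  # (C,)
    mx = feat.max(axis=(1, 2))    # (C,)
    ch = sigmoid(W2 @ np.maximum(W1 @ avg, 0.0)
                 + W2 @ np.maximum(W1 @ mx, 0.0))
    feat = feat * ch[:, None, None]
    # Spatial attention: channel-pooled maps (7x7 conv omitted for brevity).
    sp = sigmoid(feat.mean(axis=0) + feat.max(axis=0))  # (H, W)
    return feat * sp[None, :, :]
```

The sequential channel-then-spatial order tells the network *what* is discriminative before *where* it is, which matches the abstract's stated goal of clarifying the content and regions the model should learn.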
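The temporal attention fusion in item 2 can likewise be sketched. The thesis does not specify its scoring function, so the example below assumes a simple learned scoring vector `w`: each frame feature gets a scalar score, the scores are softmax-normalized into attention weights, and the clip-level feature is the weighted sum of frame features. All names are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention_fuse(frame_feats, w):
    """Fuse per-frame features into one clip feature via temporal attention.

    frame_feats: (T, C) features of T video frames.
    w: (C,) scoring vector (a stand-in for the learned attention head).
    """
    scores = frame_feats @ w   # one scalar score per frame, shape (T,)
    alpha = softmax(scores)    # attention weights, non-negative, sum to 1
    return alpha @ frame_feats # (C,) weighted sum of frame features
```

Compared with plain averaging over frames, the weights let informative frames (e.g. unoccluded views) dominate the fused representation, which is the intuition behind the IDF1 gain reported on CityFlow.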
Keywords/Search Tags:vehicle re-identification, omni-scale, convolutional block attention module, vehicle tracking