
Location And Identification Of Key Equipment In Catenary Based On Multi-Scale Spatial Information Fusion

Posted on: 2023-05-17    Degree: Master    Type: Thesis
Country: China    Candidate: Z H Huo    Full Text: PDF
GTID: 2532307073490684    Subject: Control engineering

Abstract/Summary:
With the growing mileage and operating intensity of high-speed railways in China, the safety of railway operation has attracted increasing attention, and the traction power supply system plays a crucial role in it. To meet the ever-rising requirements for safety detection and monitoring of the railway power supply system and to relieve the pressure of manual inspection, carrying out normalized intelligent inspection of the service state of key catenary facilities based on on-board video has become an urgent and important task.

In railway scenes with prominent structural features, the images captured by the on-board inspection device have a wide viewing angle, rich targets, and salient geometric information. When such a wide-angle three-dimensional scene is projected onto a two-dimensional image, a distinct perspective effect arises: near objects appear high, large, and wide, while far objects appear low, small, and narrow. The main work of this thesis is to study intelligent detection algorithms suited to perspective imaging and complex backgrounds in railway scenes. By mathematically modeling the geometry of the strong perspective distortion in catenary inspection images and combining it with deep learning, the thesis realizes the localization and detection of key catenary equipment and the identification of abnormal states of the catenary suspension. The experimental data come from catenary inspection images collected by the high-definition camera group of a 3C inspection vehicle, and experiments on standard datasets from different scenarios all verify the effectiveness of the proposed algorithms. The main research work includes:

(1) The Focus of Expansion (FOE), the key point for computing motion parallax in catenary inspection images, currently requires multi-frame image matching for its estimation, which has high time complexity. Drawing on the idea of self-supervised learning, this thesis proposes a single-frame FOE estimation algorithm. To use a CNN as the FOE predictor, a fully convolutional network, F-VGG, is built to regress the FOE location. To avoid the data bias introduced by manual annotation, training labels are generated automatically through a proxy task, without manually annotated supervision, realizing end-to-end single-frame FOE estimation. Experiments show that the method improves FOE prediction accuracy by 13.45% on average and detection speed by 56.27%, making it suitable for real-time applications.

(2) To suppress the interference that perspective imaging and complex backgrounds cause to the detector, this thesis proposes a spatial-scale perspective constraint model based on the geometric features of strong perspective distortion in inspection images. Using the collinearity of key points under perspective imaging, the model relates a target's position to the size of its predicted bounding box, so that once the target's position is given, its size can be predicted; this prediction is used to correct the results of conventional object detection algorithms. The calculation of the model parameters is also given. Experiments show that the method improves the localization accuracy of catenary suspension strings by 3% on average, and its effectiveness is further verified in practical application.

(3) Catenary inspection images contain many targets whose scales span a wide range, so conventional object detection algorithms struggle to meet practical needs. This thesis therefore designs a Cross-Scale Feature Fusion Network (CSFNet) for locating key catenary equipment in railway scenes, realizing the localization of catenary suspension strings and brackets. A Receptive Field Block (RFB) multi-scale receptive-field module is added to the large feature maps to enlarge the receptive field of their neurons and strengthen attention to whole-target information. A Cross-Scale Feature Fusion Module then fuses the feature scales of all detector outputs, so that every prediction scale contains both rich spatial detail and strong semantic information, strengthening the links between features at different scales.

(4) The catenary suspension string occupies only a small proportion of an inspection image, and its fault appearance varies only slightly, so features extracted by ordinary convolutional neural networks yield unsatisfactory classification results. This thesis therefore improves a deep network based on multi-scale attention perception, MSEVGG (a Visual Geometry Group network with multi-scale Squeeze-and-Excitation blocks), for identifying abnormal states of catenary suspension strings. Multi-scale convolution kernels perceive the suspension string at different scales, and a channel-domain attention module amplifies and strengthens the weak-gradient features of the suspension string, improving the feature expression ability of the network. In abnormal-state detection experiments, MSEVGG combined with a Bayesian weighting of its outputs achieves a false detection rate of only 1.83% and a missed detection rate of only 5.26%.
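The multi-frame FOE baseline that contribution (1) improves upon can be sketched classically: under forward camera motion, every optical-flow vector lies on a line through the FOE, so the FOE is the least-squares intersection of those lines. The sketch below (function name and synthetic data are illustrative, not from the thesis) shows this geometric estimate, assuming sparse flow vectors are already available.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares Focus of Expansion from sparse optical flow.

    Each flow vector (u, v) at point (x, y) defines a line through that
    point; under pure forward motion all such lines pass through the FOE,
    so we solve the stacked line equations v*fx - u*fy = v*x - u*y.
    """
    points = np.asarray(points, dtype=float)
    flows = np.asarray(flows, dtype=float)
    u, v = flows[:, 0], flows[:, 1]
    x, y = points[:, 0], points[:, 1]
    A = np.stack([v, -u], axis=1)      # one line equation per flow vector
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic check: an expansion field radiating away from a known FOE.
true_foe = np.array([320.0, 240.0])
pts = np.array([[100.0, 50.0], [500.0, 400.0], [60.0, 420.0], [600.0, 80.0]])
flow = pts - true_foe
print(estimate_foe(pts, flow))  # ≈ [320. 240.]
```

The cost of this baseline lies in producing reliable flow matches across frames, which is precisely what the single-frame F-VGG regressor avoids.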
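The perspective constraint of contribution (2) can be illustrated with a simple assumed form: if collinearity under perspective makes box height roughly linear in image row, then a line fitted to confident detections predicts the size a box at a given row should have, and implausible detections can be corrected toward it. All names, the linear form, the tolerance, and the numbers below are illustrative assumptions, not the thesis's actual model or parameters.

```python
import numpy as np

def fit_scale_constraint(rows, heights):
    """Fit an assumed linear perspective-scale model h = a*y + b.

    For equal-sized objects along the track, perspective makes box
    height grow roughly linearly with image row, so a least-squares
    line over confident detections gives the model parameters.
    """
    a, b = np.polyfit(rows, heights, 1)
    return a, b

def correct_height(y, h, a, b, tol=0.3):
    """Replace a predicted height deviating from the model by > tol."""
    expected = a * y + b
    return expected if abs(h - expected) / expected > tol else h

rows    = np.array([100, 200, 300, 400, 500], dtype=float)
heights = np.array([ 22,  41,  62,  79, 101], dtype=float)  # roughly 0.2*y + 2
a, b = fit_scale_constraint(rows, heights)
print(correct_height(350, 30.0, a, b))  # implausibly small box -> model height
```

A detection whose height matches the model within the tolerance is left untouched; only outliers are snapped to the perspective-consistent size.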
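The channel-domain attention inside MSEVGG in contribution (4) follows the standard Squeeze-and-Excitation pattern: pool each channel to a scalar, pass the result through two small fully connected layers, and scale each channel by the resulting gate. A minimal numpy sketch of one SE block follows, with random weights standing in for learned ones; the shapes and names are illustrative, not the thesis's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) feature map.

    Squeeze: global average pool each channel to one scalar.
    Excite:  two small fully connected layers (ReLU, then sigmoid)
             produce one gate in (0, 1) per channel.
    Scale:   multiply each channel by its gate, amplifying informative
             channels and suppressing the rest.
    """
    squeezed = feature_map.mean(axis=(1, 2))   # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)    # (C // r,) bottleneck
    gates = sigmoid(w2 @ hidden)               # (C,) gates in (0, 1)
    return feature_map * gates[:, None, None]

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((2, 8)) * 0.1  # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
out = se_block(fmap, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because every gate lies strictly between 0 and 1, the block can only rescale channels, never invert them, which is what lets it amplify the weak-gradient suspension-string features relative to background channels.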
Keywords/Search Tags: Catenary Image, FOE, Perspective Constraint, Cross-Scale Feature Fusion, Multi-Scale Convolution Kernel, Channel Domain Attention