| In the machining process, the cutting tool, as a key component of material removal, plays an important role throughout machining, and tool wear directly affects machining quality and efficiency. In addition, when tool wear is severe, there is a risk of fracture and breakage, which can lead to machine downtime and injury to the operator. The necessity of acquiring the tool state during actual machining, together with its great potential for practical and economic benefit, has further promoted the development of tool wear state detection technology, and many scholars have carried out targeted research on it.

In this paper, a deep learning model based on multi-sensor data fusion imaging and an attention mechanism is proposed for tool wear monitoring in the milling process. First, the raw time-series signals collected by multi-channel sensors are fused into a 2D image of suitable size. Then, the obtained image data are fed into a deep residual convolutional network with an attention mechanism for model training. Finally, the trained model is used for tool condition monitoring by analyzing real-time multi-sensor signal inputs. The main research innovations and work contents of this paper are summarized as follows:

(1) A deep residual convolutional network model based on multi-sensor data fusion imaging and an attention mechanism is proposed for online identification and monitoring of the tool wear state. First, the best-performing configuration of the proposed model is obtained through multiple groups of optimization experiments. Subsequently, the effectiveness of the model is further verified by multiple comparative experiments. Finally, a set of machining experiments is designed to verify the generalization and robustness of the model.

(2) Triangular Matrix of Angle Summation based multi-sensor fusion imaging (TMAS) is an innovative data fusion method whose goal is to achieve multi-sensor signal fusion at the data layer. While preserving the inherent temporal relationships of the original time-series signals and avoiding data redundancy, the TMAS method fuses multi-sensor data into image information that serves as the input for subsequent model training (an illustrative encoding sketch is given after this summary).

(3) This paper encodes the time-series signals into image data and then feeds them into a residual convolutional network with an attention mechanism for feature learning. Through the convolutional block attention module (CBAM) embedded in the network, features in both the channel domain and the spatial domain can be effectively extracted, so that irrelevant information in the image is suppressed while key information is emphasized; noise interference is thereby reduced and the robustness of the model is improved (a minimal CBAM sketch is given after this summary). |
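The exact TMAS formulation is not given in this summary. As an assumption, the sketch below illustrates one common angular-summation encoding (a Gramian Angular Summation Field applied per sensor channel and stacked into a multi-channel image); the function names, channel count, and image size are hypothetical and only show how time-series signals might be fused into a 2D image while preserving temporal order.

import numpy as np

def angular_summation_image(signal: np.ndarray) -> np.ndarray:
    """Encode a 1-D signal as a 2-D angular-summation image (GASF-style sketch)."""
    # Rescale the signal to [-1, 1] so each sample can be read as cos(phi).
    s_min, s_max = signal.min(), signal.max()
    x = 2.0 * (signal - s_min) / (s_max - s_min + 1e-12) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))   # polar angle per time step
    # Pairwise angle summation: G[i, j] = cos(phi_i + phi_j).
    return np.cos(phi[:, None] + phi[None, :])

def fuse_channels(signals: np.ndarray) -> np.ndarray:
    """Stack one angular image per sensor channel into a CNN-ready tensor.

    signals: shape (n_channels, n_samples); returns (n_channels, n_samples, n_samples).
    """
    return np.stack([angular_summation_image(s) for s in signals], axis=0)

# Example: three hypothetical sensor channels (e.g. force, vibration, acoustic emission)
# with 224 samples each, fused into a 3 x 224 x 224 image.
raw = np.random.randn(3, 224)
image = fuse_channels(raw)
print(image.shape)  # (3, 224, 224)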
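The residual network architecture used in the paper is not detailed here. The following is a minimal sketch of a standard CBAM block (channel attention followed by spatial attention) in PyTorch, shown only to illustrate how channel-domain and spatial-domain features are reweighted; the tensor shapes and hyperparameters are assumptions.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: weights each feature channel by its importance."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    """Spatial attention: highlights informative regions of the feature map."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.sa(self.ca(x))

# Example: refine a 64-channel feature map inside a residual block.
features = torch.randn(8, 64, 56, 56)
refined = CBAM(64)(features)
print(refined.shape)  # torch.Size([8, 64, 56, 56])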