With the aging of the population, a growing number of elderly individuals live alone without proper care and are therefore susceptible to falls and serious injuries. Due to a shortage of available labor, manual care is prohibitively expensive and difficult to provide at scale. Advances in sensor technology have enabled human activity recognition, particularly fall detection, which has become a prominent research topic both domestically and internationally. Single-sensor monitoring systems have been the norm, but their reliability is hampered by factors such as poor lighting and occlusion. To address this issue, multi-sensor fusion has emerged as a promising approach to human activity recognition. In this paper, we propose a method that combines a camera and a millimeter wave radar to overcome the limitations of single-sensor systems. We conducted theoretical analysis, model building, algorithm research, data collection, and experimental verification to explore the potential of this approach. Our work focuses on the following key areas:

1. Multi-modal Data Feature Extraction and Learning. Instead of relying on a single mode of input data, a multi-sensor system combines multi-modal data collected by different sensors. Simple network models struggle to extract the complex spatio-temporal and sequential features in such data. Building on the spatial feature extraction ability of CNNs and the temporal feature extraction ability of LSTMs, we propose an improved CNN-LSTM model. This model extracts and learns the spatio-temporal and sequential features of the data well and trains the classifier, making it well suited to multi-modal data (a minimal architecture sketch follows this list).

2. Research on an Improved Spectrogram Algorithm. Raw spectrograms make it difficult to classify human activity effectively, so we propose a recognition algorithm based on an improved spectrogram, which reinforces the spectrogram's features and reduces its noise. The improved spectrogram increases the accuracy of human activity recognition and lowers the misclassification rate between easily confused activities (see the enhancement sketch after this list).

3. Design of a Camera-based Human Activity Recognition System. The camera recognizes human activity efficiently and accurately in normal lighting, but its performance degrades in low light. We collect human activity data and conduct experiments to determine the optimal number of frames to use as the image input size. Our experiments show that the camera achieves excellent recognition performance in normal lighting and that its performance declines rapidly in low light.

4. Design of a Human Activity Recognition System Based on Multi-sensor Combination. To overcome the disadvantages of a single sensor, we propose a recognition system that combines a camera and a millimeter wave radar. The two sensors are fused through a spatial coordinate system transformation and calibration of the data's timing information (a sketch of this alignment follows this list). We then use the improved CNN-LSTM network to extract the spatial and temporal features of the multi-modal data and learn the complex features of human activity. Finally, we conduct comparative experiments to analyze the advantages and disadvantages of different fusion recognition algorithms and select the most suitable fusion algorithm for the system.
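The exact layer configuration of the improved CNN-LSTM is not specified here, so the following is a minimal sketch of one plausible variant, assuming PyTorch and hypothetical hyperparameters (channel counts, hidden size, number of activity classes): a CNN encodes each frame, and an LSTM models the sequence of per-frame features before classification.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Minimal CNN-LSTM sketch: a per-frame CNN encoder followed by an LSTM
    over the frame sequence. All hyperparameters are illustrative assumptions."""
    def __init__(self, num_classes=6, hidden_size=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden_size,
                            batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(feats)   # temporal modelling over the frame sequence
        return self.fc(out[:, -1])  # classify from the last time step

# Example: a batch of 2 clips, 16 frames each, 64x64 RGB
logits = CNNLSTM()(torch.randn(2, 16, 3, 64, 64))
```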
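The improved-spectrogram algorithm is only summarized above (feature reinforcement plus noise reduction), so the sketch below illustrates one generic way those two steps are commonly realized, assuming NumPy/SciPy: compute an STFT spectrogram, clip low-energy bins as noise, and normalize each frequency bin so weak activity signatures stand out. The noise-floor threshold and STFT parameters are assumptions, not the paper's method.

```python
import numpy as np
from scipy import signal

def enhanced_spectrogram(iq, fs, noise_floor_db=-40.0):
    """Illustrative spectrogram enhancement: log-scale the STFT magnitude,
    clip everything below an assumed noise floor, and normalize each
    frequency bin so micro-Doppler signatures are reinforced."""
    f, t, sxx = signal.spectrogram(iq, fs=fs, nperseg=256, noverlap=192)
    sxx_db = 10.0 * np.log10(sxx + 1e-12)                        # log magnitude (dB)
    sxx_db = np.maximum(sxx_db, sxx_db.max() + noise_floor_db)   # clip noise floor
    # per-frequency-bin normalization to [0, 1] reinforces weak features
    mn = sxx_db.min(axis=1, keepdims=True)
    mx = sxx_db.max(axis=1, keepdims=True)
    return f, t, (sxx_db - mn) / (mx - mn + 1e-12)
```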
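Sensor fusion is described above as a spatial coordinate system transformation plus calibration of the data's timing information. The sketch below shows one common realization of both steps, assuming a known rotation and translation between the radar and camera frames and per-sample timestamps; the extrinsic parameters and matching tolerance are placeholders, not values from the paper.

```python
import numpy as np

def radar_to_camera(points_radar, R, t):
    """Transform radar points (N, 3) into the camera coordinate frame using
    assumed extrinsics: R (3x3 rotation) and t (3-vector translation)."""
    return points_radar @ R.T + t

def align_by_timestamp(cam_ts, radar_ts, tol=0.02):
    """Pair each camera frame with the nearest radar sample in time.
    tol (seconds) is an assumed maximum allowed time offset."""
    pairs = []
    for i, ts in enumerate(cam_ts):
        j = int(np.argmin(np.abs(radar_ts - ts)))
        if abs(radar_ts[j] - ts) <= tol:
            pairs.append((i, j))
    return pairs
```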