With the growing number of patients using lower limb prostheses, it is increasingly important to design prostheses that coordinate effectively with human movement. As various technical fields continue to develop and intersect, new elements have been integrated into traditional prostheses. With the integration of mechanical engineering, lower limb prostheses can provide additional assistive power, helping patients stand and walk more easily. In recent years, the rapid development of computer technology has opened new directions for powered prostheses, such as combining them with machine learning: by analyzing a patient's leg electromyographic signals and angular velocity data during walking, the system identifies the current gait mode and sets corresponding control parameters for each gait phase, giving the patient a smoother walking experience. This has led to the concept of the intelligent prosthesis, which typically integrates computer control technology, microelectronics, and mechanical design into an electromechanical prosthesis. Research on intelligent prostheses is growing because, compared with conventional powered prostheses, they offer better control and can perceive information about the human body and the surrounding environment. To cooperate well with the human body, an intelligent prosthesis must satisfy four requirements: perceiving the external environment, responding appropriately to external stimuli, interacting with other organs, and providing feedback to the brain. At present, most research on intelligent lower limb prostheses focuses on coordinating the prosthesis with the remaining limb, while the other three requirements have received relatively little attention.

In this paper, we study the key technologies for acquiring and processing information about the external environment of lower limb prostheses, and validate them in simulation and experiment. Using deep learning techniques, we process and analyze video data captured by a video acquisition device worn on the pedestrian's upper body. From this video, information about obstacles in the current movement environment can be extracted, giving the lower limb prosthesis the ability to perceive its surroundings. The main research work and innovations of this paper are as follows:

(1) A key frame extraction algorithm based on the Perceptual Hash Algorithm (PHA). During turning, pedestrians are prone to collide with obstacles because of their limited viewing angle and perception distance. We therefore design and implement a key frame extraction algorithm to detect whether the patient is currently in the turning stage. We collected 100 turning stages in different motion scenes, organized into a data set of 20 videos; adjacent turning stages are separated by straight walking, and the start and end frames of each turning stage are annotated. On this data set we evaluate two criteria, accuracy and timeliness, against two commonly used key frame extraction algorithms. Our algorithm outperforms both baselines on both criteria, demonstrating its effectiveness.

(2) Dynamic scene deblurring using enhanced feature fusion and a multi-distillation mechanism. Based on the Generative Adversarial Network (GAN) framework, we integrate two mechanisms into a U-shaped generator: the Enhanced Feature Fusion (EFF) mechanism and the Feature Multi-Distillation (FMD) mechanism. The EFF mechanism provides low-level feature information to high-level convolutional layers, so that image details lost as the network deepens are supplemented and the restored images are richer and sharper. The FMD mechanism filters and fuses multi-scale semantic information: feature maps are refined in the backbone network, and residual connections distill the refinement results of different stages and finally aggregate them. This forms a distillation-like process that gives the network more diverse semantic information for image deblurring. During fusion, the multi-scale input feature maps are fed into downsampling layers with different rates, upsampled back to their original size, and merged by a convolutional layer, allowing the network to learn the best sampling rates for capturing feature details. We compare our method with state-of-the-art deblurring methods of recent years on the GoPro and Köhler datasets, and the results verify its superiority.

(3) Integration of the semantic segmentation network MRnet into the obstacle detection system, with a comparison experiment over the whole system. First, the key frame extraction algorithm extracts key frames from the walking video. The key frames are then artificially blurred and fed to the deblurring network, yielding blurred, deblurred, and ground-truth versions of each image. Finally, semantic segmentation is performed on all three, and the results are compared. We also verify the model's obstacle detection performance under large brightness changes and violent camera shake. The results show that the overall system can effectively and reliably detect obstacles while the pedestrian is moving.
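The PHA-based turning detection in (1) can be sketched as follows. This is a minimal, hypothetical illustration using the average-hash variant of perceptual hashing in NumPy; the 8×8 hash size, the Hamming-distance rule, and the threshold are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def average_hash(frame, hash_size=8):
    """Perceptual (average) hash: shrink the greyscale frame to
    hash_size x hash_size by block means, then threshold each block
    against the overall mean to get a 64-bit boolean signature."""
    h, w = frame.shape
    bh, bw = h // hash_size, w // hash_size
    small = frame[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hash signatures."""
    return int(np.count_nonzero(h1 != h2))

def detect_turning(frames, dist_threshold=10):
    """Flag frames whose perceptual-hash distance to the previous
    frame is large; a sustained run of flagged frames would indicate
    a turning stage (the camera view changes quickly)."""
    hashes = [average_hash(f) for f in frames]
    flags = [False]  # first frame has no predecessor
    for prev, cur in zip(hashes, hashes[1:]):
        flags.append(hamming(prev, cur) > dist_threshold)
    return flags
```

During straight walking, consecutive frames hash to nearly identical signatures, so the distance stays small; a turn rotates the scene and flips many hash bits at once, which is what the threshold detects.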
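The multi-rate fusion step in (2) can be illustrated with a NumPy sketch: branches downsample the feature map at different rates, upsample back to the original size, and are merged. Here strided slicing stands in for a strided convolution, nearest-neighbour repetition stands in for learned upsampling, and a plain average replaces the learned merge convolution; all three substitutions are assumptions made purely for illustration.

```python
import numpy as np

def downsample(x, rate):
    """Strided downsampling by an integer rate (stand-in for a
    strided convolution in the real generator)."""
    return x[::rate, ::rate]

def upsample(x, rate):
    """Nearest-neighbour upsampling (stand-in for a learned
    upsampling layer)."""
    return np.repeat(np.repeat(x, rate, axis=0), rate, axis=1)

def multi_rate_fusion(feature_map, rates=(1, 2, 4)):
    """Feed the feature map through branches with different sampling
    rates, restore each branch to the input size, and merge them.
    The learned merge convolution is replaced by an average here."""
    h, w = feature_map.shape
    branches = []
    for r in rates:
        branch = upsample(downsample(feature_map, r), r)[:h, :w]
        branches.append(branch)
    return np.mean(branches, axis=0)
```

In the actual network the merge is a trainable convolution, which is what lets the model weight the sampling rates by self-learning rather than averaging them uniformly.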
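The comparison experiment in (3) can be sketched as a simple pipeline skeleton. The five stage functions are hypothetical placeholders supplied by the caller (in the thesis these would be the PHA key frame extractor, the blurring routine, the GAN deblurring network, and the MRnet segmenter); only the data flow shown here comes from the text.

```python
def evaluate_pipeline(video_frames, extract_key_frames, blur, deblur, segment):
    """Sketch of the system-level experiment: each key frame is
    blurred and then deblurred, and segmentation is run on the
    blurred, deblurred, and original (ground-truth) versions so the
    three results can be compared."""
    results = []
    for key_frame in extract_key_frames(video_frames):
        blurred = blur(key_frame)
        deblurred = deblur(blurred)
        results.append({
            "seg_blurred": segment(blurred),
            "seg_deblurred": segment(deblurred),
            "seg_groundtruth": segment(key_frame),
        })
    return results
```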