Traditional medical image teaching focuses on theoretical knowledge, and its single, teacher-centered method leaves students in a passive state for long periods, failing to stimulate learning interest or cultivate innovative thinking. Holographic projection technology constructs high-definition, richly colored, vivid three-dimensional images of tissues and organs in real space, giving medical imaging students a strong visual impression that deepens what they retain during learning and improves their understanding of professional knowledge. Addressing the limitations of current traditional medical image teaching, this work develops a simple, easy-to-use, and inexpensive naked-eye 3D holographic projection teaching system for medical imaging.

The research comprises three parts: OBJ model visualization, 3D holographic projection, and gesture recognition. First, an OBJ model visualization program based on OpenGL was developed to render and display 3D medical image models. Second, a holographic projection device was designed and built to present the rendered models in real space, where users can observe them with the naked eye. Third, users can manipulate and transform the model through predefined gesture commands such as scaling, translation, and rotation, observing the medical image model at multiple levels and from multiple directions and angles.

Gesture recognition is the key technology of this system. During the gesture recognition research it was found that when the indoor background is complex, lighting is unstable, and gesture skin colors differ markedly, traditional gesture recognition that relies only on convolutional features extracted from RGB images, such as texture, color, and contour, becomes inaccurate. To solve these problems, this paper proposes an improved gesture recognition method that fuses convolutional features with key-point features, considering both the global and the local features of a gesture. The classifier module proposed in this paper extracts the convolutional features, and a self-defined "matcher" module extracts the key-point features; the category probability distributions output by the two modules are then weighted and fused to produce the final gesture recognition result. The effectiveness of the proposed method is verified through comparison experiments and ablation experiments.

Finally, the system adopts a client/server (C/S) architecture and is implemented as a desktop application based on PyQt5. Its main functional modules are the authentication management module, the data management module, the gesture recognition module, and the model processing module.
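The weighted fusion of the two modules' outputs can be sketched as follows. This is a minimal illustration, not the thesis implementation: the branch weight `w = 0.6` and the three-class example probabilities are assumptions for demonstration, since the abstract does not specify the actual fusion weights.

```python
def fuse_predictions(p_conv, p_keypoint, w=0.6):
    """Weighted fusion of two category probability distributions.

    p_conv     -- probabilities from the classifier (convolutional features)
    p_keypoint -- probabilities from the "matcher" (key-point features)
    w          -- weight on the convolutional branch (illustrative value)
    """
    fused = [w * c + (1.0 - w) * k for c, k in zip(p_conv, p_keypoint)]
    total = sum(fused)
    # Renormalize so the fused vector remains a probability distribution.
    return [p / total for p in fused]

# Example with three gesture classes: the branches disagree on the top
# class, and the fused distribution decides the final recognition result.
conv = [0.5, 0.3, 0.2]   # convolutional-feature branch
kp   = [0.2, 0.6, 0.2]   # key-point-feature branch
fused = fuse_predictions(conv, kp)
predicted = max(range(len(fused)), key=fused.__getitem__)
```

Because the fused scores are 0.38, 0.42, and 0.20, the key-point evidence tips the final decision to class 1 even though the convolutional branch alone preferred class 0.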