The fusion of machine vision and machine haptics can effectively improve a robot's ability to perceive its external environment. This thesis studies how to improve a robot's visual-tactile fusion perception and its ability to describe objects. First, a deep-learning-based visual-tactile fusion method is studied, and the factors affecting the performance of the visual-tactile fusion model are analyzed through comparative experiments. Then, objects are described from multiple perspectives using the visual-tactile fusion model, which takes full advantage of the complementary properties of vision and touch. Finally, the robot's ability to describe grasped objects is studied on a robot experimental platform. The specific research proceeds as follows.

In the research on the deep-learning-based visual-tactile fusion model, visual and tactile single-modality experiments were first carried out separately. Three visual models and three tactile models were then selected for visual-tactile fusion experiments, and the best combination of visual and tactile models was determined. In addition, comparative experiments on decision-level fusion and feature-level fusion of visual and tactile data were conducted. Further analysis of the visual-tactile fusion method leads to the following conclusions: (1) feature-level fusion is easily affected by the model structure and parameter count, which can result in slow training and poor recognition performance, whereas decision-level fusion is hardly affected; (2) the performance of the visual-tactile fusion model is influenced by multiple factors, including the fusion strategy, how well the visual and tactile models match each other, the model structure, and the parameter count.

In the research on object description based on visual-tactile fusion, the complementary advantages of vision and touch were used to describe objects from four perspectives: material properties, shape, color, and category. To achieve accurate object description, this thesis proposes a multi-task multi-label classification method that effectively balances the four description perspectives. In the object description experiments, the proposed method achieved the most accurate object description with the smallest number of parameters.

In the robot grasp description experiment, a grasp description dataset was collected on a robotic platform equipped with vision and tactile sensing, and the object description algorithm was trained on this dataset. The experimental results show that the object description method based on visual-tactile fusion can be effectively applied to describing the objects a robot grasps.
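The two fusion strategies compared above can be sketched as follows. This is a minimal illustration only: the feature extractors are stand-in linear maps (real systems would use CNN backbones), and all module names and dimensions are hypothetical, not taken from the thesis. Feature-level fusion concatenates the two modality features before a single classifier; decision-level fusion classifies each modality separately and averages the resulting class probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature extractors: stand-ins for trained
# visual and tactile backbones, used here only to make the sketch runnable.
W_vis = rng.normal(size=(64, 128))   # visual input (128-d) -> 64-d feature
W_tac = rng.normal(size=(64, 32))    # tactile input (32-d) -> 64-d feature

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def feature_level_fusion(img, tac, W_cls):
    """Concatenate modality features, then classify the fused vector once."""
    fused = np.concatenate([W_vis @ img, W_tac @ tac])  # 128-d fused feature
    return softmax(W_cls @ fused)

def decision_level_fusion(img, tac, W_vis_cls, W_tac_cls):
    """Classify each modality separately, then average the probabilities."""
    p_vis = softmax(W_vis_cls @ (W_vis @ img))
    p_tac = softmax(W_tac_cls @ (W_tac @ tac))
    return 0.5 * (p_vis + p_tac)

n_classes = 10
img = rng.normal(size=128)
tac = rng.normal(size=32)
W_cls = rng.normal(size=(n_classes, 128))     # classifier on fused features
W_vis_cls = rng.normal(size=(n_classes, 64))  # visual-only classifier
W_tac_cls = rng.normal(size=(n_classes, 64))  # tactile-only classifier

p_feat = feature_level_fusion(img, tac, W_cls)
p_dec = decision_level_fusion(img, tac, W_vis_cls, W_tac_cls)
print(p_feat.sum(), p_dec.sum())  # each is a valid probability distribution
```

The structural difference matches conclusion (1): the feature-level classifier `W_cls` must be trained jointly on the fused representation, so its size and fit to both modalities directly affect training, while the decision-level combiner involves no extra trainable parameters.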
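A multi-task multi-label classifier of the kind described can be sketched as separate label heads (material, shape, color, category) on top of one shared fused feature, with a weighted sum of per-task losses to balance the four perspectives. The four perspectives come from the thesis; the head dimensions, the simple linear heads, and the equal task weights are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four description perspectives; the label counts per task are hypothetical.
tasks = {"material": 5, "shape": 4, "color": 8, "category": 10}
feat_dim = 128  # assumed size of the shared visual-tactile fused feature

# One linear head per task (stand-ins for trained layers).
heads = {t: rng.normal(size=(k, feat_dim)) * 0.01 for t, k in tasks.items()}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multi_task_multi_label_loss(feat, labels, weights):
    """Weighted sum of per-task binary cross-entropies.

    Sigmoid (not softmax) outputs allow several labels per task to be
    active at once, which is what makes each head multi-label.
    """
    total = 0.0
    for task, W in heads.items():
        p = sigmoid(W @ feat)          # independent probability per label
        y = labels[task]
        bce = -(y * np.log(p + 1e-9)
                + (1 - y) * np.log(1 - p + 1e-9)).mean()
        total += weights[task] * bce   # task weight balances the perspectives
    return total

feat = rng.normal(size=feat_dim)
labels = {t: (rng.random(k) < 0.3).astype(float) for t, k in tasks.items()}
weights = {t: 1.0 for t in tasks}      # equal weighting as a starting point
loss = multi_task_multi_label_loss(feat, labels, weights)
print(round(loss, 4))
```

Because all heads share one fused feature, the extra cost of adding a description perspective is only one small head, which is consistent with the thesis's goal of accurate description with a small parameter count.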