Faced with increasingly complex task requirements, whether service robots can learn object manipulation skills efficiently and establish a natural human-robot interaction mode is a key factor that determines the level of robot intelligence. Considering the role of service robots in daily life, it is convenient to extract, learn, and store manipulation skill knowledge from human demonstrations, which can be achieved with robot Learning by Demonstration (LbD) techniques. In this paper, a complete LbD framework is built for learning the operation procedures of grasping tasks. To construct a complete pipeline in which a robot extracts task objectives from demonstrated grasping actions or interaction activities and responds according to general behavioral norms, this paper systematically studies the methods of hand-object pose fitting, grasping constraint learning, task intention reasoning, and action primitive decomposition in demonstration behavior. The research and experiments are carried out on the three stages of the LbD system: information extraction, knowledge learning, and mapping reproduction.

First, building on the traditional direct-mapping method, three working modes are designed for the LbD system according to the degree to which the demonstration space is mapped.

Second, to solve the information extraction problem in demonstrations, a framework based on the Kernelized Correlation Filters (KCF) tracking algorithm, a virtual mapping method, and the particle swarm optimization (PSO) algorithm is applied to extract trajectories and hand-object poses. Hand-object pose, trajectory, and joint angle fitting is realized with a single RGB-D sensor, and both the real-time performance and the accuracy meet the requirements.
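The abstract does not include implementation details for the PSO-based fitting step, so the following is only a minimal, self-contained sketch of how a 6-DoF hand-object pose could be refined with a standard global-best PSO. The function name `fit_pose_pso`, the hyper-parameters, and the toy quadratic cost are illustrative assumptions, not the thesis's implementation; the real cost would measure alignment between the articulated hand model and the observed RGB-D point cloud.

```python
import numpy as np

def fit_pose_pso(cost, dim=6, n_particles=40, n_iters=100,
                 bounds=(-1.0, 1.0), w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimise `cost` over a `dim`-dimensional pose vector with a
    standard global-best PSO. `cost` maps a pose -> scalar error."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # per-particle best
    pbest_val = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()             # global best
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy stand-in for the real hand-object alignment error: recover a known
# 6-DoF pose (x, y, z, roll, pitch, yaw) from a quadratic cost.
true_pose = np.array([0.1, -0.2, 0.3, 0.0, 0.5, -0.4])
pose, err = fit_pose_pso(lambda p: np.sum((p - true_pose) ** 2))
print(pose.round(3), err)
```

In a per-frame pipeline one would typically seed the swarm with the previous frame's solution and let the tracker bound the search region; both details are omitted from this sketch.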
Third, to enable the robot to choose a grasp position given the task label and the manipulated object, a grasping constraint learning method based on a Bayesian network model is designed. The GraspIt! simulation platform is used to generate training samples; the network structure is learned with a hill-climbing search scored by the Bayesian Information Criterion (BIC), and the network parameters are fitted by Maximum Likelihood Estimation (MLE). The system can infer the task intention from a single frame of the grasping action, and grasp position learning is realized based on the task label or object properties. Inference experiments are conducted, and the resulting grasp heat zones are rendered on point clouds collected from real objects. A minimal code sketch of this learning step is given below.

Fourth, to solve the planning problem of task reproduction, the demonstrated behavior is decomposed into sub-fragments at the symbolic level, where combinations of primitives describe the execution flow of a complex task. A method based on the Hierarchical Hidden Markov Model (HHMM) is designed for decomposing and identifying behavioral action primitives; a sketch of the decoding step also follows below. Given the primitive sequence, the complex task planning problem for robots is simplified into multiple simple motion executions.

Finally, based on the above research, a robot demonstration learning experiment system is developed. The functions of behavior feature extraction, intention reasoning, grasp position inference, and task primitive decomposition are implemented and verified individually. In addition, practical experimental tasks are designed for the three working modes of the proposed system. The demonstration-reproduction process is carried out on a UR5 robotic arm and the V-REP simulation platform, covering tasks such as opening a cabinet door, pulling out drawers, spelling, sorting, pouring water, and transferring items.
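As an illustration of the grasping constraint learning step, here is a minimal sketch using the open-source pgmpy library (class names per the pgmpy 0.x API; newer releases may rename them): hill-climbing structure search scored by BIC, MLE parameter fitting, and task-intention inference by variable elimination. The variable names and the synthetic stand-in for GraspIt!-generated samples are assumptions for illustration only.

```python
# pip install pgmpy pandas
import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore, MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianNetwork

# Synthetic stand-in for simulator-generated samples: each row links a
# task label and object class to the grasp region chosen on the object.
rng = np.random.default_rng(0)
n = 2000
task = rng.integers(0, 2, n)                       # 0 = hand-over, 1 = pour
obj = rng.integers(0, 2, n)                        # 0 = mug, 1 = bottle
noise = rng.random(n) < 0.15                       # 15% random grasps
region = np.where(noise, rng.integers(0, 2, n),
                  ((task + obj) > 0).astype(int))  # region depends on both
df = pd.DataFrame({"task": task, "object": obj, "grasp_region": region})

# Structure learning: hill-climbing search scored by BIC.
dag = HillClimbSearch(df).estimate(scoring_method=BicScore(df))
model = BayesianNetwork(dag.edges())

# Parameter learning: maximum likelihood estimation of the CPDs.
model.fit(df, estimator=MaximumLikelihoodEstimator)

# Task-intention inference from one observed frame (object + grasp region).
posterior = VariableElimination(model).query(
    variables=["task"], evidence={"object": 0, "grasp_region": 1})
print(posterior)
```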
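The thesis's primitive decomposition uses an HHMM; since a hierarchical HMM can be flattened into an equivalent flat HMM, the decoding step can be illustrated with plain Viterbi decoding over primitive labels. The primitive set, transition matrix, and per-frame log-likelihoods below are toy assumptions, not learned values.

```python
import numpy as np

def viterbi(log_A, log_B, log_pi):
    """Most-likely primitive sequence for one observation stream.
    log_A: (K,K) log transitions between primitives,
    log_B: (T,K) per-frame log-likelihoods of each primitive,
    log_pi: (K,) log initial distribution."""
    T, K = log_B.shape
    delta = log_pi + log_B[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A            # (K,K): prev -> cur
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):                 # backtrace
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy example: 3 primitives (reach, grasp, move) with sticky transitions
# and synthetic per-frame likelihoods favouring reach -> grasp -> move.
K, T = 3, 9
log_A = np.log(np.array([[.8, .2, 0.], [0., .8, .2], [.1, 0., .9]]) + 1e-12)
log_pi = np.log(np.array([.9, .05, .05]))
true = np.repeat([0, 1, 2], 3)                     # true primitive per frame
log_B = np.full((T, K), np.log(0.1))
log_B[np.arange(T), true] = np.log(0.8)
print(viterbi(log_A, log_B, log_pi))               # -> [0 0 0 1 1 1 2 2 2]
```

The decoded label sequence is exactly the symbolic sub-fragment segmentation the abstract describes: contiguous runs of one label form one primitive, and the run boundaries split the demonstration into executable steps.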