
Research On Self-Supervised Learning Method Of Target Pushing And Grasping Skills Based On Affordance Map

Posted on: 2023-09-28
Degree: Master
Type: Thesis
Country: China
Candidate: R J Liu
Full Text: PDF
GTID: 2568306848462034
Subject: Computer Science and Technology
Abstract/Summary:
With the growing adoption of artificial intelligence, intelligent robots are widely used in production and daily life. In cluttered scenes, pushing and grasping have become essential basic skills for service robots. Deep reinforcement learning has been applied to learning robot pushing and grasping skills; however, owing to the wide variety and characteristics of manipulated objects, the complexity of service environments, and the limitations of existing algorithms, current research still suffers from low learning efficiency, low success rates, and insufficient generalization. For the target-oriented pushing and grasping task in cluttered scenes, this thesis defines the robot's action decision process in the workspace as a new Markov decision process, modularizes the overall framework into a vision mechanism module and an action mechanism module, improves the existing self-supervised deep reinforcement learning algorithm, and applies it to the robot's affordance-map-based learning of target pushing and grasping skills in cluttered scenes. The details are as follows.

First, to address the low positional and angular accuracy of action points and the poor coordination between pushing and grasping during robot execution, a self-supervised learning method for target pushing and grasping skills is proposed. A feature extraction network (Residual Group Splitting Attention Network, RGSA-Net) is constructed by integrating adaptive parameters and a group splitting attention module into the vision mechanism module. Then, in the action mechanism module, a self-supervised deep reinforcement learning training framework combining a deep Q-network with an actor-critic architecture (Deep Q Actor-Critic, DQAC) is built; the robot makes action decisions according to this framework to better coordinate pushing and grasping. Finally, comparative experiments verify the effectiveness of this method.

Second, to address the limited improvement in action coordination efficiency under the DQAC framework and the poor transferability of traditional deep reinforcement learning models, a self-supervised learning method for target pushing and grasping skills based on a deep Q-network with a generative adversarial network and random layers (Generative Adversarial Random Layer Deep Q-Network, GARL-DQN) is proposed. First, a generative adversarial network is integrated into the traditional deep Q-network: the pushing network serves as the generator and the grasping network as the discriminator, so that the two are trained to cooperate. Second, a prioritized experience replay mechanism is used to improve sample utilization of the experience pool, and the policies, rewards, and Q-values of the Markov decision process are formulated with respect to the target object. Then, a random (convolutional) neural network is introduced into GARL-DQN to apply random perturbations to the input features, improving the robot's generalization to unseen object models. Finally, comparative experiments verify the effectiveness of this method.
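The prioritized experience replay mechanism mentioned above samples past transitions in proportion to their learning value (typically the magnitude of the temporal-difference error), rather than uniformly. The thesis does not give implementation details, so the following is only a minimal illustrative sketch of the general technique; the class name, the `alpha` exponent, and the small epsilon added to priorities are standard conventions of prioritized replay, not specifics from this work.

```python
import random

class PrioritizedReplayBuffer:
    """Ring buffer that samples transitions with probability proportional
    to |TD error|^alpha, so informative experiences are replayed more often."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # 0 = uniform sampling, 1 = fully prioritized
        self.buffer = []            # stored transitions
        self.priorities = []        # one priority per stored transition
        self.pos = 0                # next write index (overwrites oldest)

    def add(self, transition, td_error=1.0):
        priority = (abs(td_error) + 1e-6) ** self.alpha  # epsilon keeps p > 0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in idxs], idxs

    def update_priorities(self, idxs, td_errors):
        # called after a training step, once new TD errors are known
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In use, the agent adds each (state, action, reward, next-state) tuple after every step, samples a batch for training, and then writes the freshly computed TD errors back with `update_priorities`.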
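The random-layer idea, perturbing input features with a randomly initialized (convolutional) layer so the policy cannot overfit to exact pixel values, can also be sketched concretely. The thesis does not specify its random layer, so this is a generic single-channel random convolution written under that assumption; the function name and kernel size are illustrative only.

```python
import random

def random_conv2d(image, kernel_size=3, seed=None):
    """Apply one randomly-initialized 2D convolution (stride 1, no padding)
    to a 2D list of floats. Re-sampling the kernel on each call perturbs
    input features while preserving their spatial layout, which pushes a
    downstream policy to rely on shape rather than raw intensity values."""
    rng = random.Random(seed)
    k = kernel_size
    # fresh kernel every call, roughly unit-scaled
    kernel = [[rng.gauss(0.0, 1.0 / k) for _ in range(k)] for _ in range(k)]
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            s = 0.0
            for di in range(k):
                for dj in range(k):
                    s += kernel[di][dj] * image[i + di][j + dj]
            row.append(s)
        out.append(row)
    return out
```

During training, the kernel would be re-sampled per episode (seed left as `None`), so each rollout sees a differently perturbed view of the same scene.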
Keywords/Search Tags:robot pushing and grasping skills, splitting attention mechanism, generative adversarial network, DQN, affordance map