With the increasing demand for service robots in society, the interaction between service robots and their environment is becoming increasingly intelligent. Grasping is an important way for robots to interact with the environment, so the robotic grasping problem merits study. Service robots often operate in complex, unstructured environments, so grasping novel and unknown objects requires particular attention. Compared with various specialized gripper designs, general robotic grasp detection methods have wide applicability and greater research significance. Traditional grasp detection methods based on hand-designed features and prior object knowledge are inefficient for unknown objects and unsuitable for complex environments with lighting variation, background interference, and partial occlusion. Deep learning-based grasp detection methods, by contrast, can generalize grasping features learned from known objects to novel, unknown objects. This paper therefore studies deep learning-based robotic grasp detection, proposes grasp detection methods for single-object and multi-object scenes, and effectively improves the robot's grasp detection capability in new environments.

A framework for a Kinect-based robotic grasping system was established. The principles of camera calibration and the coordinate-system transformations involved in camera imaging were studied, and the Kinect V2 camera was calibrated experimentally using the MATLAB Camera Calibrator tool. The robotic arm used in the grasping experiments was modeled with the Denavit-Hartenberg (D-H) parameter method, and both forward and inverse kinematic analyses were conducted.

For grasp detection in single-object scenes, a novel single-object grasp detection network based on the ResNet-50 architecture is proposed. The network predicts grasps in a generative, pixel-wise manner and adds a grasp detection task branch on top of the object classification branch of ResNet-50, with weights shared during image feature extraction. It takes an RGB image of an object as input and outputs the optimal grasp pose together with the object category. The proposed algorithm is trained and tested on a self-built dataset and the Cornell dataset, with the multi-task learning loss function and the network training strategy optimized. On the self-built dataset, the network achieves a classification accuracy of 99.70% and a grasp detection accuracy of 89.02%; on the Cornell dataset, it reaches detection accuracies of 97.74% and 96.61% under image-wise and object-wise splits, respectively. These results show that the improved network effectively raises grasp detection accuracy in single-object scenes, verifying the performance of the proposed algorithm.

For grasp detection in multi-object scenes, a novel end-to-end method that integrates the object detection and grasp detection tasks is proposed. Building on the multi-task learning architecture of the single-object method, a grasp detection task branch is added to the YOLOv5 object detection network. The feature extraction network is decoupled into two task branches, and network parameters are reduced through weight sharing. The predictions of the two branches are coupled at the network output: a joint reasoning strategy matches the category information predicted by the object detection branch to the grasp pose of the corresponding object predicted by the grasp detection branch. The proposed algorithm is trained and tested on a self-built multi-object grasping dataset, achieving a category detection mAP of 94.0% and a grasp detection mAP of 74.17%. The results demonstrate that the proposed
algorithm achieves high category detection accuracy and grasp detection accuracy in multi-object scenes, verifying its performance.

A simulated grasping environment based on PyBullet was constructed, and single-object and multi-object robotic grasp detection experiments were conducted using the PyTorch deep learning framework. In single-object scenes, grasp success rates of 83.72% on known objects and 78% on novel objects were achieved; in multi-object scenes, a grasp success rate of 80.18% was obtained. The results of the grasping experiments validate the feasibility and effectiveness of the proposed grasp detection method. This research is significant for improving the grasp detection capability of service robots and their perception of unknown environments.
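The forward kinematic analysis mentioned above can be sketched as a chain of homogeneous transforms built from D-H parameters. This is a minimal illustration of the standard D-H convention; the `dh_table` values below are placeholders, not the actual parameter table of the arm used in the experiments.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one link from its standard D-H parameters
    (joint angle theta, link offset d, link length a, link twist alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Compose the per-link transforms from base to end-effector."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Placeholder 3-joint D-H table as (d, a, alpha) tuples -- illustrative only.
dh_table = [(0.1, 0.0, np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.3, 0.0)]
T = forward_kinematics([0.0, 0.0, 0.0], dh_table)
# T[:3, 3] is the end-effector position in the base frame.
```

Inverse kinematics then inverts this mapping, solving for the joint angles that place the end-effector at a target grasp pose.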
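The generative pixel-wise grasp prediction described for the single-object network can be illustrated by its decoding step: the network emits per-pixel quality, angle, and gripper-width maps, and the optimal grasp is read off at the quality peak. The map names and shapes here are assumptions for illustration, not the network's actual output format.

```python
import numpy as np

def decode_grasp(quality, angle, width):
    """Pick the best grasp from per-pixel maps: the (row, col) of the
    quality maximum, plus the angle and width predicted at that pixel."""
    idx = np.unravel_index(np.argmax(quality), quality.shape)
    return idx[0], idx[1], angle[idx], width[idx]

# Toy 4x4 maps standing in for the network's dense outputs.
q = np.zeros((4, 4)); q[2, 1] = 0.9      # quality peaks at pixel (2, 1)
ang = np.full((4, 4), 0.5)               # grasp angle in radians
w = np.full((4, 4), 30.0)                # gripper width in pixels
row, col, theta, width_px = decode_grasp(q, ang, w)
```

In practice the angle map is often predicted as sin/cos pairs to avoid wrap-around discontinuities, but the argmax decoding itself is unchanged.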
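The joint reasoning strategy that couples the two branches of the multi-object network can be sketched as a simple spatial match: each predicted grasp inherits the category of the detected box containing its center. This containment rule is an assumed, simplified stand-in for the method's actual matching logic.

```python
def match_grasps_to_detections(grasps, detections):
    """Assign each grasp (cx, cy, theta) the label of the first detection
    box (x1, y1, x2, y2, label) containing its center; grasps whose center
    falls in no box are dropped."""
    matched = []
    for cx, cy, theta in grasps:
        for x1, y1, x2, y2, label in detections:
            if x1 <= cx <= x2 and y1 <= cy <= y2:
                matched.append((label, cx, cy, theta))
                break
    return matched

# Hypothetical detections and grasp predictions in image coordinates.
dets = [(0, 0, 50, 50, "cup"), (60, 0, 120, 50, "box")]
grasps = [(25, 25, 0.1), (90, 20, 1.2), (200, 200, 0.0)]
matched = match_grasps_to_detections(grasps, dets)
```

The third grasp falls outside every box and is discarded, which is how spurious grasp predictions on the background can be filtered out.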