
Research On Shared Control Of Multi-robot Formations Based On The Brain-eye-hand Multi-modal Human-robot Interface

Posted on: 2023-12-05    Degree: Master    Type: Thesis
Country: China    Candidate: L J Qin    Full Text: PDF
GTID: 2568307061958799    Subject: Measurement and control technology and intelligent systems
Abstract/Summary:
At present, mobile robot formation technology has been applied in many fields, such as cargo handling, military border patrol, cooperative environment exploration, geographic resource surveying, and reconnaissance and rescue. Because of these wide applications, the cooperative motion control of formation systems has attracted increasing attention from researchers. However, given the current level of intelligent algorithms, realizing a fully autonomous mobile robot formation remains challenging and uncertain. In a formation motion system, the intervention and assistance of a human operator can greatly improve control performance. In the widely used bilateral teleoperation control mode, the operator senses the task status of the formation system through a haptic device and then, after perceptual decision-making and task judgment, sends motion commands with the same device. The force feedback provided by the haptic device can alleviate the perception loss caused by human-multi-robot separation and improve the operator's sense of immersion. However, as a single-modal input, its interaction with the formation system is not intuitive, and the operator's workload is heavy. Therefore, this paper introduces three input modes on the master side, namely brain, eye, and hand; realizes the formation based on a fusion of the artificial potential field and the virtual structure; and carries out the research under a shared control framework.

Firstly, a human-multi-robot interface based on brain, eye, and hand is designed. The position and angle information of the haptic device is converted into speed commands for the formation system. The eye tracker maps the gaze signal to the corresponding formation-switching commands according to the operator's fixation position and fixation duration. However, when the gaze signal alone controls the formation shape, the Midas touch problem arises: because gaze movement is partly involuntary and there is no active screening of the user's intention, serious false-positive selections occur, and wrong formation-switching commands are issued to the formation system. Therefore, electroencephalography (EEG) based on steady-state visually evoked potentials (SSVEP) is used as a powerful supplement to the gaze signal, and the target-selection results of the two input signals are fused to avoid issuing incorrect formation-switching commands to the formation system.

Then, two formation algorithms based on the artificial potential field and the virtual structure are used to construct the formation system, which not only solves the problems of internal collision avoidance and external obstacle avoidance but also strengthens the binding force between formation members. A shared control framework is designed on top of the multi-modal input interface and the formation system; it consists of a master teleoperation controller and a slave autonomous controller. The master teleoperation controller converts the signals of the haptic device, the eye tracker, and the EEG into marching and formation-switching commands, while the slave autonomous controller completes the autonomous tasks of the formation system, such as formation keeping, external obstacle avoidance, and internal collision avoidance. Within this shared control framework, the stability of the whole system is proved by passivity analysis.

Finally, an experimental platform for the mobile robot formation was built, and a series of experiments was designed to verify the multi-modal human-multi-robot interface. The first experiment, obstacle avoidance and formation-size change under the single haptic modality, shows that the force feedback provided by the haptic device reduces the perception loss caused by human-multi-robot separation and improves control immersion. The second experiment, formation switching under hand-eye dual-modal control, verifies the feasibility of issuing formation-switching commands with the eye tracker as an input modality. The third experiment contrasts single-modal and dual-modal input, and verifies the superiority of the gaze signal as a natural and intuitive input modality, which not only relieves the operator's mental stress and reduces the operating load but also improves control efficiency: across the four groups of experiments, efficiency increases by 9.3%, 13.3%, 17.3%, and 19.6%, respectively. That is, as the test-site size and task time grow, the control-efficiency advantage of dual-modal input over single-modal input becomes more pronounced. According to the questionnaire, however, when issuing formation-switching commands with the eye tracker, operators occasionally send wrong commands to the formation system because of gaze drift or psychological effects. The fourth experiment is a multi-modal human-robot interaction experiment integrating the EEG; it shows that the EEG, as a powerful supplement to eye tracking, improves the accuracy of target selection and the transmission rate of commands, and enhances the robustness of the system.
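The fixation-based switching described above can be illustrated with a minimal dwell-time selector: a formation-switching command is issued only after the gaze stays inside one on-screen region for a minimum fixation duration. This is a generic sketch of the technique, not the thesis's implementation; the region names and the 0.8 s threshold are assumptions.

```python
import time

class DwellSelector:
    """Issue a command when gaze dwells in one region long enough."""

    def __init__(self, dwell_s=0.8):
        self.dwell_s = dwell_s      # required fixation duration (seconds)
        self.region = None          # region currently being fixated
        self.t_enter = 0.0          # time the gaze entered that region

    def update(self, region, now=None):
        """Feed the current gaze region; return a command when dwell elapses."""
        now = time.monotonic() if now is None else now
        if region != self.region:   # gaze moved: restart the dwell timer
            self.region = region
            self.t_enter = now
            return None
        if region is not None and now - self.t_enter >= self.dwell_s:
            self.t_enter = now      # reset so the command fires only once
            return f"switch_to_{region}"
        return None

sel = DwellSelector(dwell_s=0.8)
sel.update("line", now=0.0)          # gaze enters the "line" formation icon
print(sel.update("line", now=0.9))   # dwell satisfied -> "switch_to_line"
```

On its own, this selector is exactly what produces the Midas touch false positives discussed above: any sufficiently long accidental fixation triggers a command, which is why the thesis fuses it with an SSVEP channel.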
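The gaze-EEG fusion can be sketched as a simple agreement rule: a formation-switching command is accepted only when the eye tracker and the SSVEP classifier select the same target and the classifier is confident enough; otherwise the selection is rejected, suppressing Midas-touch false positives. The agreement rule and the 0.7 confidence threshold are illustrative assumptions, not the thesis's exact fusion method.

```python
def fuse_selection(gaze_target, ssvep_target, ssvep_conf, conf_min=0.7):
    """Return the fused formation target, or None to reject the selection."""
    if gaze_target is None or ssvep_target is None:
        return None                  # one modality produced no selection
    if ssvep_conf < conf_min:
        return None                  # EEG classifier too uncertain
    if gaze_target != ssvep_target:
        return None                  # modalities disagree: likely gaze drift
    return gaze_target               # both agree -> safe to issue the command

print(fuse_selection("wedge", "wedge", 0.92))  # -> wedge
print(fuse_selection("wedge", "line", 0.92))   # -> None (disagreement)
```

Rejecting on disagreement trades command-transmission rate for accuracy, which matches the abstract's finding that the EEG supplement improves target-selection accuracy and system robustness.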
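The slave-side formation control can be sketched as follows: each robot is attracted to its slot in a rigid virtual structure that moves with the operator's marching command, and repelled from nearby teammates and obstacles by artificial potential fields. The gains, influence range, and two-robot scene below are illustrative assumptions; the thesis's actual potential functions may differ.

```python
import math

def apf_velocity(pos, slot, others, obstacles,
                 k_att=1.0, k_rep=0.5, d0=1.0):
    """Velocity for one robot: attraction toward its virtual-structure slot
    plus repulsion from teammates/obstacles closer than the range d0."""
    vx = k_att * (slot[0] - pos[0])          # attraction to the slot
    vy = k_att * (slot[1] - pos[1])
    for ox, oy in others + obstacles:        # repulsive potential terms
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < d0:                    # only inside the influence range
            gain = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            vx += gain * dx
            vy += gain * dy
    return vx, vy

# Shared control: the virtual structure itself is shifted by the operator's
# marching command, so the human steers the whole formation while each robot
# autonomously keeps its slot and avoids collisions.
march = (0.5, 0.0)                           # haptic-device speed command
slot = (1.0 + march[0], 0.0 + march[1])      # slot moved by the command
v = apf_velocity((0.0, 0.0), slot, others=[(0.3, 0.0)], obstacles=[])
```

This division of labor mirrors the framework above: the master contributes marching and switching commands, while formation keeping and collision/obstacle avoidance stay with the slave autonomous controller.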
Keywords/Search Tags: multi-robot formation, human-robot interface, brain-eye-hand multi-modal, bilateral teleoperation, shared control