Enabling a service robot to autonomously choose its social object is of great significance for real human-robot social interaction. Most robots rely on an arousal mechanism in which interaction behavior is triggered by predetermined wake-up words. When faced with multiple social objects, such robots suffer interaction disorder: they cannot judge whether a given person is actually addressing them. As a result, the human-robot interaction becomes chaotic, and the robot's gaze appears dull and aimless. To solve this problem, this paper proposes a computational model of multi-user social intention evaluation based on the various social cues the robot extracts from video and audio of the users in the scene. The model evaluates and selects the social object, then drives and controls the robot's anthropomorphic head behavior, increasing eye contact in human-robot interaction.

With the rapid development of machine learning and artificial intelligence, multimodal information fusion and perception technology has been widely studied, and robots' ability to perceive user information has improved rapidly. Accordingly, this paper extracts and quantifies multiple social cues of multiple users from multimodal video and audio information, yielding an initial multi-user, multi-cue feature matrix that provides data support for the subsequent decision-making task.

Although robots can sense multi-user information, robot interaction theory still lacks a reliable model and computational method for multi-user interaction selection. To address this, the user-selection problem in multi-user parallel interaction is transformed into a comprehensive evaluation and ranking problem over users' social intentions, and a user social intention evaluation model integrating multiple social cues is proposed in this paper. The model fuses the cues, evaluates and ranks users' social intentions in real time using the Entropy-TOPSIS method, and shifts the robot's social attention according to the ranking results.

Many studies have also demonstrated the need for robots to express natural, anthropomorphic behavior in social scenarios. To realize human-like gaze behavior, a two-degree-of-freedom control system is constructed by modeling coordinated eye-head gaze behavior. The numerical equations of the model are solved under the condition of minimum neural transmission noise. The model predicts the coordinated trajectories of the robot's eye movement and head rotation, finally realizing natural, anthropomorphic expression of the robot's gaze behavior.
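The Entropy-TOPSIS evaluation described above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the four cue columns (e.g. gaze-at-robot, body orientation, speech activity, proximity) and their scores are hypothetical, and all cues are assumed to be benefit-type criteria already quantified into [0, 1]. Entropy weighting gives more weight to cues that discriminate more strongly between users; TOPSIS then ranks each user by closeness to the ideal (most-engaged) cue profile.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: cues whose scores vary more across users
    carry more weight (all cues assumed benefit-type, scores >= 0)."""
    P = X / X.sum(axis=0, keepdims=True)               # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)         # treat 0*log(0) as 0
    e = -(P * logP).sum(axis=0) / np.log(X.shape[0])   # entropy per cue
    d = 1.0 - e                                        # divergence degree
    return d / d.sum()

def topsis_scores(X, w):
    """TOPSIS: relative closeness of each user to the ideal profile."""
    R = X / np.linalg.norm(X, axis=0, keepdims=True)   # vector normalisation
    V = R * w                                          # weighted matrix
    best, worst = V.max(axis=0), V.min(axis=0)         # ideal / anti-ideal
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                # closeness in [0, 1]

# Hypothetical cue matrix: rows = users, columns = cue scores in [0, 1]
# (gaze-at-robot, body orientation, speech activity, proximity).
X = np.array([
    [0.9, 0.8, 0.7, 0.6],   # user 0: strongly engaged
    [0.2, 0.3, 0.1, 0.4],   # user 1: mostly disengaged
    [0.5, 0.6, 0.9, 0.5],   # user 2: speaking, partly turned away
])
w = entropy_weights(X)
scores = topsis_scores(X, w)
order = np.argsort(scores)[::-1]   # attention target = order[0]
```

In real-time operation, the cue matrix would be refreshed every perception cycle and the attention target re-evaluated, so the robot's gaze can shift as users' social intentions change.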
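The paper derives its eye-head trajectories by solving the model equations under a minimum-neural-noise condition. As a rough intuition for the coordination pattern such a model predicts, the sketch below uses simple first-order dynamics instead (not the paper's model, and with hypothetical gains and time constants): the head covers a fixed fraction of the gaze shift, while the faster eye takes up the remainder and counter-rotates as the head catches up, so the combined gaze lands on the target.

```python
import numpy as np

def eye_head_trajectory(target_deg, head_gain=0.6, tau_eye=0.05,
                        tau_head=0.25, dt=0.001, t_end=1.0):
    """Simulate a gaze shift toward target_deg split between eye and head.
    Eye and head each follow first-order dynamics toward their own goal;
    the eye counter-rotates as the slower head catches up, so that
    gaze = eye-in-head + head settles on the target (VOR-like behaviour).
    All gains and time constants are illustrative, not fitted values."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    head_goal = head_gain * target_deg     # head covers a fraction of the shift
    head = np.zeros(n)
    eye = np.zeros(n)
    for k in range(1, n):
        head[k] = head[k-1] + dt / tau_head * (head_goal - head[k-1])
        # eye goal = whatever the head has not (yet) contributed
        eye_goal = target_deg - head[k]
        eye[k] = eye[k-1] + dt / tau_eye * (eye_goal - eye[k-1])
    gaze = eye + head
    return t, eye, head, gaze

# A 30-degree gaze shift: the eye saccades early, then drifts back
# as head rotation takes over its share of the total displacement.
t, eye, head, gaze = eye_head_trajectory(30.0)
```

The qualitative shape, a fast initial eye saccade followed by counter-rotation during the slower head movement, matches the eye-head coordination commonly reported for human gaze shifts, which is the behavior the paper's controller aims to reproduce on the robot.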