| In recent years, with the continuous development of society and innovation in the field of computing, human movement science has received increasing attention in people's daily fitness activities. Obesity has become a concern for society as a whole, and how to lose weight scientifically and effectively through exercise, and how to evaluate the effect of that exercise, have become questions of widespread interest. At the same time, many innovative applications have emerged at the intersection of competitive sports and daily fitness, where computer vision technology can give people a stronger sense of participation and a more intuitive experience. Because a 2D image of the human body carries a great deal of biometric information, combining computer vision with human body weight estimation is a practically promising topic. Non-contact body weight estimation is a technology that brings together computer vision, anthropometry, and kinematics, and compared with traditional contact weighing, non-contact measurement of human body parameters and evaluation of human movement have broader application prospects. How to relate the pixels in an image to the human body in its real state remains an open and difficult problem for body weight estimation from 2D images. Previous researchers have used Kinect depth cameras or thermal cameras to obtain depth information and the contour of the human body in the image, and then computed body weight from features extracted from the body image; however, the expensive equipment limits the widespread application of such methods. With an ordinary camera, it is difficult to recover accurate depth information from a two-dimensional image, and therefore difficult to estimate the actual size of objects in the image by matching pixels to the real scene. This thesis combines knowledge from computer science and human movement science to study body weight estimation from two-dimensional images. First, based on the camera imaging principle, the height of an object in a 2D image is estimated by two methods: a fixed camera attitude and a selected reference object. Second, according to prior anthropometric knowledge, external features of the human body that are closely related to body weight are selected. Next, a human keypoint detection model is combined with an image semantic segmentation model to extract these external features from the 2D image, and a deep-learning-based regression on the extracted features yields the Body Mass Index (BMI). Finally, body weight estimation from a two-dimensional image is realized by combining the estimated height and BMI. Experiments show that, compared with traditional methods, the error between the weight estimated from the 2D image and the actual weight is small, and the proposed method estimates body weight with higher precision and accuracy than estimation by the naked eye.
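As a rough illustration of the final estimation step summarized above, the sketch below assumes a pinhole camera with the person and a reference object of known real-world height standing at approximately the same distance from the camera, so that real height can be recovered from pixel heights by a simple ratio; the BMI value is taken here as a placeholder for the output of the thesis's regression model. The function names and numeric values are hypothetical and not the thesis's actual implementation; only the definitional relation weight = BMI × height² is fixed.

```python
def estimate_height_m(person_px_height: float,
                      reference_px_height: float,
                      reference_real_height_m: float) -> float:
    """Estimate a person's real height from a single 2D image.

    Assumes a pinhole camera and that the person and the reference
    object lie in (approximately) the same depth plane, so pixel
    heights scale linearly with real heights.
    """
    return person_px_height * (reference_real_height_m / reference_px_height)


def estimate_weight_kg(bmi: float, height_m: float) -> float:
    """Combine a regressed BMI with the estimated height.

    By definition BMI = weight / height^2 (kg/m^2), so
    weight = BMI * height^2.
    """
    return bmi * height_m ** 2


if __name__ == "__main__":
    # Hypothetical values: person spans 620 px, a 0.85 m reference
    # object spans 310 px, and the regression model predicts BMI 23.4.
    height = estimate_height_m(620.0, 310.0, 0.85)   # -> 1.70 m
    weight = estimate_weight_kg(23.4, height)        # -> ~67.6 kg
    print(f"estimated height: {height:.2f} m, weight: {weight:.1f} kg")
```

The BMI-to-weight step is just the inversion of the BMI definition; the accuracy of the overall estimate therefore rests on the height estimation and the feature-based BMI regression described in the thesis.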