With the continuous development of the economy and society, people's consumption level keeps rising and the number of motor vehicles on the road increases year by year. The resulting road traffic safety problems are increasingly serious, causing tremendous losses to the lives and property of drivers and pedestrians. Research shows that fatigue driving has become one of the major causes of traffic accidents, and it has attracted the attention of many countries and governments. To address this problem, researchers have proposed many fatigue detection methods, which fall into three categories: methods based on physiological parameters, on vehicle behavior, and on facial feature analysis. Among them, detection based on the analysis of the driver's facial features has the advantages of being non-contact, accurate, and easy to implement, so this paper determines the fatigue state by analyzing the driver's facial features.

This paper surveys a variety of fatigue detection algorithms at home and abroad and analyzes their advantages and disadvantages. Two detection methods are proposed, one based on traditional machine learning and one based on deep learning, both of which combine eye and mouth state features to determine driver fatigue. By fusing multiple fatigue parameters, the accuracy of fatigue detection is improved. The main contents and innovations of this paper are as follows:

1. Research on face detection and facial keypoint localization. In the hand-crafted fatigue feature algorithm, the AdaBoost algorithm based on Haar-like features is used to detect the face, and the EyeMap algorithm is used to locate the eye region; then, according to the color difference between lip color and skin color, the lip region is located to obtain the mouth image. In the algorithm that extracts fatigue features with a convolutional neural network (CNN), face detection is implemented with the multi-task cascaded convolutional network MTCNN. This network structure fully exploits the relationship between the face detection and facial keypoint localization tasks; the eye and mouth regions are then located from the detected keypoints combined with the spatial distribution of facial organs.

2. An eye and mouth state recognition algorithm based on contour feature extraction is proposed. After the eye image is obtained, the scleral region is extracted by a clustering algorithm according to the difference in chroma and saturation between the sclera and the skin. Scleral boundary points are then selected to fit the eyelid contour with a quadratic polynomial curve, and the aspect ratio of the eyelid contour is calculated to determine whether the eye is open or closed. For the mouth state, the inner mouth region is obtained by an adaptive threshold segmentation algorithm according to the chromaticity difference between lip color and skin color; the inner contour of the mouth is then fitted from its boundary points, and its aspect ratio is calculated to judge the degree of mouth opening and to determine whether the driver is yawning.

3. An eye state recognition algorithm based on a weighted color difference matrix is proposed. According to the chromaticity differences between the sclera, the iris, and the skin, two feature images are constructed from the eye image and normalized in grayscale and size. Each feature image is projected into a block feature matrix, and block feature values are computed to build a feature vector. Finally, a support vector machine is trained on the extracted feature vectors to classify the driver's eye state and thus support the analysis of driver fatigue.

4. An eye and mouth state recognition algorithm based on CNN feature extraction is proposed. Two convolutional neural network models are constructed: one extracts eye fatigue features and performs a binary classification task (open or closed) to recognize the eye state; the other extracts mouth fatigue features and performs the same kind of binary classification to recognize the mouth state. The training set is expanded by data augmentation, which improves the accuracy and robustness of the algorithm.

5. A fatigue judgment model is established. After the eye and mouth states have been detected by the above methods, indicators such as continuous eye-closure time, blink frequency, the PERCLOS parameter, and the yawn parameter are calculated, and these indicators are combined into a fatigue judgment model that determines the driver's fatigue state.

Experiments on the proposed algorithms show that they can effectively judge the fatigue state of the driver. Detecting fatigue by combining the eye state and the mouth state yields higher accuracy and reliability than any single parameter, and reminding a fatigued driver to stop and rest can reduce the occurrence of traffic accidents.
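The eyelid-contour step described in item 2 can be sketched as follows. This is a minimal illustration, not the thesis's exact implementation: the function names, the sample boundary points, and the open/closed threshold are all assumptions.

```python
import numpy as np

def eye_aspect_ratio(upper_pts, lower_pts):
    """Fit a quadratic polynomial to the upper and lower eyelid boundary
    points and return height/width of the enclosed contour (a sketch of
    the aspect-ratio test in item 2). Image y-coordinates grow downward,
    so the lower eyelid has the larger y values."""
    ux, uy = np.array(upper_pts, dtype=float).T
    lx, ly = np.array(lower_pts, dtype=float).T
    up = np.poly1d(np.polyfit(ux, uy, 2))   # quadratic fit, upper eyelid
    lo = np.poly1d(np.polyfit(lx, ly, 2))   # quadratic fit, lower eyelid
    xs = np.linspace(min(ux.min(), lx.min()), max(ux.max(), lx.max()), 50)
    height = np.max(lo(xs) - up(xs))        # largest vertical gap
    width = xs[-1] - xs[0]
    return height / width

def eye_is_open(ratio, threshold=0.25):
    # The threshold is a hypothetical value; the thesis would tune it
    # experimentally on labeled open/closed eye samples.
    return ratio > threshold
```

An open eye yields boundary points that arc apart (large aspect ratio); a closed eye yields two nearly coincident flat curves (ratio near zero).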
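The adaptive threshold segmentation of the mouth interior (item 2) can be sketched with an iterative, isodata-style threshold on a chroma-difference map. The specific iterative scheme and the use of a Cr − Cb map are assumptions; the thesis does not pin down the exact adaptive method in the abstract.

```python
import numpy as np

def isodata_threshold(values, eps=0.5):
    """Iterative (isodata-style) threshold selection: split the values at
    the current threshold, then move the threshold to the midpoint of the
    two class means, repeating until it converges."""
    t = float(values.mean())
    while True:
        lo = values[values <= t]
        hi = values[values > t]
        if lo.size == 0 or hi.size == 0:
            return t
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

def segment_mouth_interior(cr_minus_cb):
    """Segment the mouth interior from a chroma-difference map (e.g.
    Cr - Cb in YCbCr space, where lips differ strongly from skin):
    pixels above the adaptive threshold form the segmented region."""
    t = isodata_threshold(cr_minus_cb.ravel().astype(float))
    return cr_minus_cb > t
```

The resulting binary mask would then provide the boundary points for the inner-contour fit and the mouth aspect-ratio computation.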
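The block-feature projection in item 3 can be sketched as below. The block size and the choice of the block mean as the feature value are illustrative assumptions; the thesis's weighted color difference matrix would supply the input feature image.

```python
import numpy as np

def block_feature_vector(feature_img, block=(4, 4)):
    """Project a size-normalized feature image into a block feature
    matrix: partition the image into block-sized cells, take each cell's
    mean as its feature value, and flatten the matrix into a feature
    vector (a sketch of the projection described in item 3)."""
    h, w = feature_img.shape
    bh, bw = block
    assert h % bh == 0 and w % bw == 0, "image must tile evenly into blocks"
    blocks = feature_img.reshape(h // bh, bh, w // bw, bw)
    return blocks.mean(axis=(1, 3)).ravel()
```

The extracted vectors would then be used to train a support vector machine (e.g. `sklearn.svm.SVC`) for the open/closed eye classification.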
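The fusion of indicators in item 5 can be sketched as follows. The thresholds and the simple OR-fusion rule are hypothetical; the thesis calibrates the fatigue judgment model experimentally.

```python
def perclos(eye_closed_flags):
    """PERCLOS: fraction of frames in the window in which the eye is
    judged closed."""
    return sum(eye_closed_flags) / len(eye_closed_flags)

def max_closure_time(eye_closed_flags, fps=25):
    """Longest continuous eye-closure time in seconds."""
    longest = run = 0
    for closed in eye_closed_flags:
        run = run + 1 if closed else 0
        longest = max(longest, run)
    return longest / fps

def is_fatigued(eye_closed_flags, yawns_per_minute, fps=25,
                perclos_thresh=0.4, closure_thresh=2.0, yawn_thresh=3):
    """Combine PERCLOS, continuous eye-closure time, and yawn frequency
    into a single fatigue decision. All thresholds are illustrative
    assumptions, not the thesis's calibrated values."""
    return (perclos(eye_closed_flags) > perclos_thresh
            or max_closure_time(eye_closed_flags, fps) > closure_thresh
            or yawns_per_minute >= yawn_thresh)
```

For example, a window with 60% of frames closed trips the PERCLOS criterion alone, while ordinary blinking (short, infrequent closures) does not.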