Available statistics show that improper operation, fatigue driving, and other behaviors that compromise safe driving are the main causes of traffic accidents. Among them, accidents caused by fatigue driving account for about 25%, ranking second, so warning fatigued drivers in time at critical moments can effectively prevent traffic accidents. Computer-vision-based fatigue detection offers simple installation, low cost, and contactless operation while maintaining accuracy. However, existing detection methods adapt poorly to illumination changes and struggle to balance real-time performance with accuracy. The fatigue driving detection method based on multi-feature fusion proposed in this paper addresses these problems. The model consists of three parts: driver face detection, driver facial feature extraction, and feature fusion with fatigue determination. The main work is as follows:

(1) Face detection: the input image is first processed with a spatial-domain image enhancement algorithm to improve brightness and contrast, and denoising is then applied to reduce the effect of noise on the detection results. An improvement to the multi-task cascaded convolutional network (MTCNN) is then proposed: the non-maximum suppression based on classification confidence is replaced with non-maximum suppression based on localization confidence, and the key parameters of the image pyramid are adjusted. Experiments show that the improved algorithm reduces detection time by 40.4%, raising the detection speed of MTCNN to 93 ms per image, which meets the face-detection requirements of fatigue driving application scenarios.

(2) Facial feature extraction: MTCNN combined with the ERT algorithm is used to locate 68 facial landmarks, and local images of the left eye and the mouth are cropped, based on the coordinates of the eye and mouth-corner landmarks, as input to the feature extraction module. The left-eye feature vector (EFV) and mouth feature vector (MFV) are computed as static facial features, eliminating the influence that manually set thresholds in the commonly used EAR and MAR features have on detection results, and the Farneback optical flow method is used to extract optical flow information as the dynamic feature of the driver's face.

(3) Feature fusion and determination: the local features of the left eye and the mouth are fused by direct averaging and by weighted averaging. Experiments show that, for the proposed method, weighted fusion improves accuracy by 1.2% over direct fusion, indicating that weighted-average fusion better exploits the advantages of feature fusion in this algorithm. Experiments also show that the local-image multi-feature fusion method improves accuracy by 1.7% over the local-image single-feature method, reaching 93.3%, and yields a further 6.1% gain in accuracy and a 10 FPS gain in detection speed, indicating that the proposed model achieves higher accuracy while ensuring real-time performance.
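
As a rough sketch of the MTCNN changes in (1), the snippet below shows greedy non-maximum suppression that ranks candidate boxes by a localization-confidence score instead of the classification score, together with a typical image-pyramid scale schedule whose minimum face size and scale factor are the kind of key parameters being adjusted. How the localization confidence is produced, and the concrete parameter values shown, are assumptions; the abstract only names the ranking criterion and states that the pyramid parameters were tuned.

```python
import numpy as np

def nms_by_localization(boxes, loc_conf, iou_thresh=0.5):
    """Greedy NMS ranking candidate boxes by localization confidence.

    boxes    : (N, 4) array of [x1, y1, x2, y2] corners.
    loc_conf : (N,) localization-confidence scores; how this score is
               obtained (e.g. a predicted IoU) is an assumption here.
    Returns the indices of the boxes that are kept.
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(loc_conf)[::-1]           # best-localized box first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU between the kept box and the remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]     # drop heavily overlapping boxes
    return keep

def pyramid_scales(min_face=40, factor=0.709, min_side=480, net_input=12):
    """Scale schedule for MTCNN's image pyramid; min_face and factor are
    the kind of key parameters being tuned (values shown are common
    defaults, not necessarily the paper's)."""
    scale = net_input / min_face
    scales = []
    while min_side * scale >= net_input:
        scales.append(scale)
        scale *= factor
    return scales
```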
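
For the dynamic feature in (2), a minimal sketch assuming the 68 landmarks are available as a (68, 2) integer array (e.g. from dlib's ERT-based shape predictor): crop the left-eye or mouth patch around its landmark indices and compute dense Farneback optical flow between consecutive frames with OpenCV. The patch size, the crop margin, and summarizing the flow field by its mean magnitude are illustrative choices; the abstract states only that Farneback optical flow is extracted as the dynamic feature.

```python
import cv2
import numpy as np

# iBUG 68-point convention: left eye = points 42-47, mouth = points 48-67.
LEFT_EYE = list(range(42, 48))
MOUTH = list(range(48, 68))

def crop_region(gray, landmarks, idx, margin=5):
    """Crop a local patch around the given landmark indices.
    `landmarks` is assumed to be a (68, 2) integer array."""
    pts = np.asarray(landmarks)[idx]
    x1, y1 = pts.min(axis=0) - margin
    x2, y2 = pts.max(axis=0) + margin
    return gray[max(y1, 0):y2, max(x1, 0):x2]

def farneback_dynamic_feature(prev_patch, next_patch, size=(48, 48)):
    """Dense Farneback optical flow between two consecutive local patches,
    summarized by mean flow magnitude (the summary statistic is an
    assumption, not taken from the abstract)."""
    prev_patch = cv2.resize(prev_patch, size)
    next_patch = cv2.resize(next_patch, size)
    # Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_patch, next_patch, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(mag.mean())
```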
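
For the fusion step in (3), the two rules compared in the experiments reduce to the following, assuming the eye and mouth feature vectors share the same dimensionality; the 0.6/0.4 weights are placeholders, since the abstract reports only that weighted averaging outperforms direct averaging by 1.2%.

```python
import numpy as np

def fuse_features(eye_feat, mouth_feat, w_eye=0.6, w_mouth=0.4):
    """Direct vs. weighted averaging of the left-eye and mouth feature
    vectors; the 0.6/0.4 split is illustrative, not the paper's weights."""
    eye_feat = np.asarray(eye_feat, dtype=float)
    mouth_feat = np.asarray(mouth_feat, dtype=float)
    direct = (eye_feat + mouth_feat) / 2.0               # direct averaging
    weighted = w_eye * eye_feat + w_mouth * mouth_feat   # weighted averaging
    return direct, weighted
```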