This tutorial covers detecting eye blinks to help prevent drowsy driving, which is consistently cited as a cause of traffic accidents and is a leading cause of fatal crashes on highways, where collisions happen at high speed. Countermeasures for the problem are therefore needed. The tutorial comes in two versions: Colab and Visual Studio Code (VS Code). Because loading a webcam or video stream in Colab is cumbersome, the Colab version works with a captured still image instead.
Colab version
Download the files using git clone. The repository layout:

eye_blink_detector/
├── dataset/
├── model/
├── eye_blink.py
├── train.py
├── shape_predictor_68_face_landmarks.dat
└── requirements.txt
Training
Library load
Load dataset
Preview
Data augmentation
Build model
Model training
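The "Build model" and "Model training" steps above can be sketched as follows. This is a minimal sketch only: the layer sizes, optimizer, and compile settings here are my assumptions, not necessarily what the repository's `train.py` actually uses.

```python
# Sketch of a small CNN for eye open/closed classification.
# Assumption: 34x26 grayscale eye crops, as suggested by IMG_SIZE = (34, 26).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

IMG_SIZE = (34, 26)  # (width, height) of the cropped eye image

def build_model():
    model = Sequential([
        # Input is (height, width, channels) = (26, 34, 1)
        Conv2D(32, (3, 3), activation='relu',
               input_shape=(IMG_SIZE[1], IMG_SIZE[0], 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(64, activation='relu'),
        # Single sigmoid unit: probability that the eye is open
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
```

For the "Data augmentation" step, Keras's `ImageDataGenerator` (small rotations and shifts) is a common choice for eye crops, since flips and large distortions can change the open/closed appearance.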
Run using Colab
Image capture
VS Code version
1. Setting up a virtual environment for the eye blink detector
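The step above can be sketched as below. The environment name `blink-env` is my own placeholder; any name works.

```shell
# Create an isolated Python environment for the project
python3 -m venv blink-env

# Activate it (on Windows use: blink-env\Scripts\activate)
. blink-env/bin/activate

# Then, from the cloned eye_blink_detector folder, install the
# pinned dependencies:
# pip install -r requirements.txt
```

Activating the environment ensures that `pip install -r requirements.txt` installs the packages (OpenCV, dlib, Keras, imutils) into the project environment rather than system-wide.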
from IPython.display import display, Javascript
from google.colab.output import eval_js
from base64 import b64decode
def take_photo(filename='photo.jpg', quality=0.8):
  js = Javascript('''
    async function takePhoto(quality) {
      const div = document.createElement('div');
      const capture = document.createElement('button');
      capture.textContent = 'Capture';
      div.appendChild(capture);

      const video = document.createElement('video');
      video.style.display = 'block';
      const stream = await navigator.mediaDevices.getUserMedia({video: true});

      document.body.appendChild(div);
      div.appendChild(video);
      video.srcObject = stream;
      await video.play();

      // Resize the output to fit the video element.
      google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);

      // Wait for Capture to be clicked.
      await new Promise((resolve) => capture.onclick = resolve);

      const canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext('2d').drawImage(video, 0, 0);
      stream.getVideoTracks()[0].stop();
      div.remove();
      return canvas.toDataURL('image/jpeg', quality);
    }
    ''')
  display(js)
  data = eval_js('takePhoto({})'.format(quality))
  binary = b64decode(data.split(',')[1])
  with open(filename, 'wb') as f:
    f.write(binary)
  return filename
from IPython.display import Image
try:
  filename = take_photo()
  print('Saved to {}'.format(filename))
  # Show the image which was just taken.
  display(Image(filename))
except Exception as err:
  # Errors will be thrown if the user does not have a webcam or if they do not
  # grant the page permission to access it.
  print(str(err))
Image('my_face.PNG')
import cv2, dlib
import numpy as np
from imutils import face_utils
from keras.models import load_model
# import winsound  # Windows-only beep module; not available in Colab

IMG_SIZE = (34, 26)
threshold_value_eye = 0.4   # eye considered closed below this prediction value
count_frame = 0             # consecutive closed-eye frames seen so far
Alarm_frame = 50            # closed frames required before sounding the alarm

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('eye_blink_detector/shape_predictor_68_face_landmarks.dat')

# change this path to your trained model
model = load_model('models/2021_06_20_08_37_01.h5')
# model.summary()
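To show how `threshold_value_eye`, `count_frame`, and `Alarm_frame` work together in the detection loop, here is a minimal sketch of the frame-counting logic. The `eye_openness` value is illustrative; in the real script it would come from `model.predict` on the cropped eye image.

```python
threshold_value_eye = 0.4   # below this, the eye is treated as closed
Alarm_frame = 50            # consecutive closed frames before the alarm fires

def update_drowsiness(count_frame, eye_openness):
    """Update the consecutive closed-eye frame counter.

    Returns (new_count, alarm_on). The counter resets whenever the eye
    reopens, and the alarm fires once the eye has stayed closed for
    Alarm_frame frames in a row.
    """
    if eye_openness < threshold_value_eye:
        count_frame += 1
    else:
        count_frame = 0
    return count_frame, count_frame >= Alarm_frame

# Example: after 50 consecutive closed-eye frames the alarm turns on.
count, alarm = 0, False
for _ in range(50):
    count, alarm = update_drowsiness(count, eye_openness=0.1)
print(count, alarm)  # -> 50 True
```

A single blink lasts only a few frames, so it never reaches `Alarm_frame` and resets the counter; only a sustained eye closure, as in drowsiness, triggers the alarm.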