Face Recognition Using OpenCV
In this article, I am going to discuss face recognition using OpenCV. I use the LBPH (Local Binary Patterns Histograms) model for face recognition, with Python as the language.
Road Map for the task :
i.) Create the dataset for the model.
ii.) Train the model
iii.) Predict the face
iv.) Perform an action based on the predicted face (send a mail and a WhatsApp message to a contact when User A appears; launch an EC2 instance and attach an EBS volume when User B comes in front of the camera).
Now let's go through these steps one by one.
1. Create the dataset:
Using the OpenCV library, we will capture some photos of User A and User B with the webcam and store them in separate folders. To classify or train more users' faces, create more directories to store the dataset. The structure should look like:
train_img/
|__ 0/
|__ 1/
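This directory layout can be created from Python as well; a minimal sketch (the "train_img" root matches the structure above, and the labels "0" and "1" correspond to User A and User B):

```python
import os

# Create the dataset root with one subdirectory per user label.
base_dir = 'train_img'
for label in ('0', '1'):
    os.makedirs(os.path.join(base_dir, label), exist_ok=True)

print(sorted(os.listdir(base_dir)))  # ['0', '1']
```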
Code to create the dataset:
import cv2
import numpy as np

# Load HAAR face classifier
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def face_extractor(img):
    # Detect faces and return the cropped face
    # If no face is detected, return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    # Crop the faces found (the last one detected is returned)
    for (x, y, w, h) in faces:
        cropped_face = img[y:y+h, x:x+w]
    return cropped_face

# Initialize webcam
# cap = cv2.VideoCapture("http://192.168.43.1:8080/video")
cap = cv2.VideoCapture(0)
count = 0

# Collect samples of your face from webcam input
while True:
    ret, frame = cap.read()
    face = face_extractor(frame)
    if face is not None:
        count += 1
        face = cv2.resize(face, (200, 200))
        face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
        # Save file in specified directory with unique name
        # For User B, save into the "1" directory instead of "0"
        file_name_path = 'C:/Users/Asus/Documents/summer_training/cv/CNN/users/train_img/0/' + str(count) + '.jpg'
        cv2.imwrite(file_name_path, face)
        # Put count on images and display live count
        cv2.putText(face, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow('Face Cropper', face)
    else:
        print("Face not found")
    if cv2.waitKey(1) == 13 or count == 300:  # 13 is the Enter key
        break

cap.release()
cv2.destroyAllWindows()
print("Collecting Samples Complete")
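Before training, it can help to sanity-check how many samples were actually saved per user. The `count_samples` helper below is not part of the original article; it simply walks the dataset root (the demo builds a throwaway tree mimicking `train_img/0` and `train_img/1`):

```python
import os
import tempfile

def count_samples(data_path):
    # Count .jpg files in each label subdirectory under data_path
    counts = {}
    for path, _, filenames in os.walk(data_path):
        images = [f for f in filenames if f.endswith('.jpg')]
        if images:
            counts[os.path.basename(path)] = len(images)
    return counts

# Demo on a temporary directory tree with 3 samples for "0" and 2 for "1"
root = tempfile.mkdtemp()
for label, n in (('0', 3), ('1', 2)):
    d = os.path.join(root, label)
    os.makedirs(d)
    for i in range(n):
        open(os.path.join(d, f'{i}.jpg'), 'w').close()

print(sorted(count_samples(root).items()))  # [('0', 3), ('1', 2)]
```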
After collecting the data, we can now train our model.
2. Train the model:
To train the model, we pass the path of the dataset up to "train_img". From this directory, the script walks through all the subdirectories (0, 1, 2, ...) and trains on every relevant image.
import cv2
import os
import numpy as np

# Train the model
# Path of the training images
data_path = 'C:/Users/Asus/Documents/summer_training/cv/CNN/users/train_img/'

faces = []
faceID = []
for path, subdirnames, filenames in os.walk(data_path):
    for filename in filenames:
        if filename.startswith("."):
            print("This file is not used by the model")
            continue
        id = os.path.basename(path)
        img_path = os.path.join(path, filename)
        print("img_path:", img_path)
        print("id:", id)
        test_img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
        if test_img is None:
            print("Not used")
            continue
        faces.append(test_img)
        faceID.append(int(id))

# Create the model using LBPH and train it
face_recognizer = cv2.face.LBPHFaceRecognizer_create()
face_recognizer.train(faces, np.array(faceID))
print("Model is trained successfully.")

# Save the model to a file so it can be reused again and again
face_recognizer.save('C:/Users/Asus/Documents/summer_training/cv/task_6_face/trained_model.xml')
3. Load the model and predict:
After saving the trained model, we can load it back and predict the face.
# create or initialise the model
face_recognizer=cv2.face.LBPHFaceRecognizer_create()
# load the model
face_recognizer.read('C:/Users/Asus/Documents/summer_training/cv/task_6_face/trained_model.xml')
# predict the face; `face` must be a cropped grayscale face image,
# like the ones saved during dataset creation
# predict() returns a (label, confidence) tuple
results = face_recognizer.predict(face)
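For LBPH, a lower confidence value means a closer match, so the result is usually filtered through a threshold before acting on it. A small sketch of interpreting the result (the threshold of 80 and the helper name are illustrative assumptions, not from the original code):

```python
def interpret_prediction(results, threshold=80):
    # results is the (label, confidence) pair returned by predict();
    # lower confidence means a closer match for LBPH
    label, confidence = results
    if confidence < threshold:
        return label
    return None  # too uncertain: treat as an unknown face

# Examples with hypothetical predict() results:
print(interpret_prediction((0, 45.2)))   # 0   -> a confident match for user 0
print(interpret_prediction((1, 120.0)))  # None -> unknown face
```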
4. Perform the fun activities:
This model predicts and classifies both faces. Now, if User A comes in front of the camera, the program sends an email with User A's face to another person and sends a WhatsApp message; if User B comes in front of the camera, it creates an EC2 instance and an EBS volume and attaches the volume to the instance.
For Gmail, we use the SMTP protocol to send the mail, and using the "pywhatkit" library we can also send a WhatsApp message.
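A hedged sketch of building the notification mail with Python's standard email/smtplib modules; the addresses, subject line, and credentials below are placeholders, not values from the original project:

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build the message (placeholder addresses)
msg = MIMEMultipart()
msg['Subject'] = 'User A detected'
msg['From'] = 'sender@example.com'
msg['To'] = 'receiver@example.com'
msg.attach(MIMEText('User A appeared in front of the camera.'))
# An image attachment (e.g. the captured face) could be added with
# email.mime.image.MIMEImage in the same way.

# Actual sending is commented out because it needs real credentials:
# with smtplib.SMTP('smtp.gmail.com', 587) as server:
#     server.starttls()
#     server.login('sender@example.com', 'app_password')
#     server.send_message(msg)

print(msg['Subject'])  # User A detected
```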
For launching the AWS instance, I have already configured my AWS credentials using the "aws configure" command. Using the "subprocess" module, the program runs the commands to create the volume and attach it to the EC2 instance.
import subprocess as sp
import time

# Launch the instance and capture its InstanceId
# (--output text returns the plain ID without JSON quotes)
ins = sp.getoutput('aws ec2 run-instances --image-id ami-06a0b4e3b7eb7a300 --instance-type t2.micro --key-name firstkey --subnet-id subnet-e8c2cb80 --security-group-ids sg-043a35ed45e0208d6 --count 1 --query Instances[0].InstanceId --output text')
# Create a 5 GiB gp2 volume in the same availability zone
vol = sp.getoutput('aws ec2 create-volume --volume-type gp2 --size 5 --availability-zone ap-south-1a --query VolumeId --output text')
# Give the instance some time to reach the running state before attaching
time.sleep(20)
print(sp.getoutput(f'aws ec2 attach-volume --volume-id {vol} --instance-id {ins} --device /dev/xvdf'))
The 'subprocess' module also returns the command's output, while 'os.system' does not.
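This difference is what lets the script feed the instance and volume IDs back into later commands. A quick demonstration with a harmless `echo` command (standing in for the AWS CLI calls):

```python
import subprocess as sp

# getoutput runs the command through the shell and captures its
# stdout as a string, with the trailing newline stripped
out = sp.getoutput('echo hello')
print(out)
```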
Using --query, we get the instance ID and the volume ID, and the attach command then automatically takes these values from the variables. Meanwhile, the instance needs some time to reach the running state, so we pause the program with the sleep function.
To get the complete code, visit the Github Repository, and also read the README.md to understand how to use it.
For any query or suggestion, feel free to ping me. Thank You.🌺