FACE RECOGNITION USING TRANSFER LEARNING

Today, I am going to use the transfer-learning concept to demonstrate how transfer learning can be applied to a pre-trained model to save computational power and resources.

INTRODUCTION TO TRANSFER LEARNING:

Humans have an inherent ability to transfer knowledge across tasks. What we acquire as knowledge while learning about one task, we utilize in the same way to solve related tasks. The more related the tasks, the easier it is for us to transfer, or cross-utilize, our knowledge.

Transfer learning is a technique whereby a neural network model is first trained on a problem similar to the one being solved; the knowledge it gains is then reused as the starting point for the new task.

Face recognition is the general task of identifying and verifying people from photographs of their faces.

Overview of the project: In this project we take the pre-trained model "ResNet-50" and apply the transfer-learning concept to perform face recognition.

Now let us start with the detailed task:

Pre-requisites :

  1. Keras
  2. Tensorflow
  3. Pillow
  4. numpy
  5. pandas
  6. Jupyter notebook and Python.

Steps for the project:

  • Create a Workspace (in my case "Mytask") for the project.
  • Create a Folder called “Dataset” inside which create 2 sub-folders as “Train” and “Test”. Inside both of these Folders create a sub-folder with the name of the person whose face is to be recognized, this would store images of the particular person.
import cv2
import numpy as np

# Load the Haar cascade face classifier
face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def face_extractor(img):
    # Detects faces and returns the cropped face region.
    # If no face is detected, it returns None.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)

    if len(faces) == 0:
        return None

    # Crop the (last) face found
    for (x, y, w, h) in faces:
        cropped_face = img[y:y+h, x:x+w]
    return cropped_face

# Initialize webcam
cap = cv2.VideoCapture(0)
count = 0

# Collect 100 samples of your face from webcam input
while True:
    ret, frame = cap.read()
    face = face_extractor(frame)
    if face is not None:
        count += 1
        face = cv2.resize(face, (300, 300))

        # Save the file in the specified directory with a unique name
        file_name_path = r"C:\Users\Shinchan\Music\Akashdeep" + str(count) + '.jpg'
        cv2.imwrite(file_name_path, face)

        # Put the count on the image and display a live count
        cv2.putText(face, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow('Face Cropper', face)
    else:
        print("Face not found")

    if cv2.waitKey(1) == 27 or count == 100:  # 27 is the Esc key
        break

cap.release()
cv2.destroyAllWindows()

print("Samples Taken")

First we capture 100 images of the required person and split them into two sets, Train and Test. The Train set consists of 80 images and the Test set contains 20 (we need to segregate them manually).

Step 2: Model Creation.

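As a minimal sketch of this step, we can load ResNet-50 from Keras with its ImageNet weights and freeze all the pre-trained layers so that only the layers we add later will be trained. The 224×224×3 input shape and the helper name are assumptions:

```python
from tensorflow.keras.applications import ResNet50

def build_base_model(input_shape=(224, 224, 3), weights="imagenet"):
    # Load ResNet-50 without its ImageNet classification head
    # (include_top=False) so we can attach our own layers later.
    base = ResNet50(weights=weights, include_top=False,
                    input_shape=input_shape)
    # Freeze every pre-trained layer; only the new head will train.
    for layer in base.layers:
        layer.trainable = False
    return base
```

Freezing the backbone is what makes transfer learning cheap here: the convolutional features learned on ImageNet are reused as-is, and only a small head is optimized.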

Now, we will add the new layers that we want to train on top of the base model.


Adding all the layers to the model and printing the summary of the model.

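A sketch of attaching a classification head to the frozen ResNet-50 backbone and building the final model, assuming a simple Flatten + Dense head (the 512-unit layer size is an assumption, and `num_classes` is the number of person folders in the Train directory):

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

def build_face_model(num_classes, weights="imagenet"):
    # Frozen pre-trained backbone
    base = ResNet50(weights=weights, include_top=False,
                    input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False
    # New trainable head for our face classes
    x = Flatten()(base.output)
    x = Dense(512, activation="relu")(x)
    out = Dense(num_classes, activation="softmax")(x)
    return Model(inputs=base.input, outputs=out)

# model = build_face_model(num_classes=2)
# model.summary()  # prints the full layer-by-layer summary
```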

Now, we prepare the training and validation datasets that we feed into the model, applying the necessary preprocessing and augmentation to them.

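One way to set this up is Keras's `ImageDataGenerator`, augmenting only the training images. The augmentation parameters are assumptions, and the folder paths follow the "Dataset/Train" and "Dataset/Test" structure created earlier:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Training images get random shear/zoom/flip augmentation;
# validation images are only rescaled to [0, 1].
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)

# train_set = train_datagen.flow_from_directory(
#     "Dataset/Train", target_size=(224, 224),
#     batch_size=32, class_mode="categorical")
# test_set = test_datagen.flow_from_directory(
#     "Dataset/Test", target_size=(224, 224),
#     batch_size=32, class_mode="categorical")
```

`flow_from_directory` infers the class labels from the sub-folder names, which is why each person gets their own folder.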

Now, we will train our model and save it.

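A sketch of the training-and-saving step, assuming categorical cross-entropy with the Adam optimizer; the epoch count and output file name are assumptions, not the article's exact values:

```python
def train_and_save(model, train_set, test_set, epochs=5,
                   out_path="face_recognition_model.h5"):
    # Compile with a multi-class loss, train briefly on the
    # frozen-backbone model, then persist it to disk.
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(train_set,
                        validation_data=test_set,
                        epochs=epochs)
    model.save(out_path)
    return history
```

Because only the small head is trainable, a handful of epochs is usually enough to converge on a dataset of this size.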

You can see the accuracy of the model, which is almost 91%, achieved with only a few epochs and little computational power.

Now, we load the saved model and use it to make predictions for facial recognition.
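A sketch of the prediction step, assuming the model was saved as face_recognition_model.h5 and that the class names correspond to the folder names in Dataset/Train (the helper name, file paths, and class list are hypothetical):

```python
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

def predict_face(model_path, img_path, class_names):
    # Reload the trained model and classify a single face image.
    model = load_model(model_path)
    img = image.load_img(img_path, target_size=(224, 224))
    arr = image.img_to_array(img) / 255.0   # same rescaling as training
    arr = np.expand_dims(arr, axis=0)       # add a batch dimension
    probs = model.predict(arr)[0]
    return class_names[int(np.argmax(probs))]

# print(predict_face("face_recognition_model.h5", "test_face.jpg",
#                    ["Akashdeep", "Unknown"]))
```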

Thank You :)

In case of any queries Mail me - piyushsinghsanchit@gmail.com









