Emotion Detection using Deep Face


DeepFace is a lightweight face recognition and facial attribute analysis (age, gender, emotion, and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, Dlib, and SFace. DeepFace's face-identification accuracy goes up to 97%, and it has proved more successful at detecting faces than the average face recognition framework.

USES:

Facial identification and recognition find their use in many real-life contexts, whether for your identity card, passport, or any other credential of significant importance.

The recognition incorporated in such tasks demands three things: the ability to comprehend identity from unfamiliar faces, the ability to learn new faces, and the ability to acknowledge familiar faces.

We will try to create a real-time emotion-detecting model using the DeepFace framework.

Calling the dependencies

import cv2
import numpy as np
from deepface import DeepFace

cv2 gives access to all the functionality of the OpenCV library. NumPy is used for mathematical calculations; here we will not use it for calculation directly, since OpenCV represents images as NumPy arrays. DeepFace is the most popularly used emotion recognition library in Python.

face_cascade = cv2.CascadeClassifier(r"C:\Users\saras\Downloads\haarcascade_frontalface_alt.xml")        

HaarCascade is a feature-based object detection algorithm to detect objects from images. A cascade function is trained on lots of positive and negative images for detection. The algorithm does not require extensive computation and can run in real time.

The HaarCascade frontal face XML is a pre-trained object detection model used to identify faces in an image or a real-time video. It makes use of AdaBoost to achieve better results and accuracy.

cap = cv2.VideoCapture(0)  # to capture video from the webcam (device index 0)
scaling_factor = 1  # factor by which each captured frame is resized

        

Let us go deeper into DeepFace.


while True:
    # Grab a frame; ret is False when no frame could be read.
    ret, frame = cap.read()
    if not ret:
        break

    frame = cv2.resize(frame, None, fx=scaling_factor, fy=scaling_factor, interpolation=cv2.INTER_AREA)
    faces = face_cascade.detectMultiScale(frame, scaleFactor=1.3, minNeighbors=3)

    for (x, y, w, h) in faces:
        face = frame[y:y+h, x:x+w]
        emotions = DeepFace.analyze(face, actions=['emotion'], enforce_detection=False)
        emotion_text = "Emotion: " + emotions[0]['dominant_emotion']
        print(emotion_text)

        cv2.putText(frame, emotion_text, (x, y-20), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)

    cv2.imshow('frame', frame)

    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Basically, ret is a boolean indicating whether a frame was returned at all, and frame is the frame that was read. cv2.resize() can upscale or downscale an image to a desired size while considering the aspect ratio.

detectMultiScale() is the function used to detect faces. It detects objects of different sizes in the input image and returns the detected objects as a list of rectangles.

x, y are the coordinates of the top-left corner of the box, and w, h are its width and height. The DeepFace analyze() method provides strong facial attribute analysis, covering attributes such as age, gender, and facial expression.
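analyze() returns a list with one dictionary per detected face, which is why the loop above reads emotions[0]['dominant_emotion']. The hard-coded sample below mirrors that shape so the parsing can be shown without downloading the models; the scores and region values are an illustrative stand-in, not real model output:

```python
# Sample shaped like a DeepFace.analyze() result: a list with one dict
# per face. The numbers here are made up for illustration only.
sample_result = [{
    'emotion': {'angry': 0.02, 'happy': 0.91, 'sad': 0.03, 'neutral': 0.04},
    'dominant_emotion': 'happy',
    'region': {'x': 10, 'y': 20, 'w': 100, 'h': 100},
}]

for face_result in sample_result:
    # The dominant emotion is simply the highest-scoring entry.
    scores = face_result['emotion']
    top = max(scores, key=scores.get)
    print(top, face_result['dominant_emotion'])
```

Recomputing the maximum from the 'emotion' scores gives the same answer as reading 'dominant_emotion' directly.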

Here, emotion_text holds the emotion with the highest probability. OpenCV's putText() method writes text on the image, and cv2.rectangle() draws a rectangle on it.

The OpenCV function imshow() displays an image (a 2-dimensional NumPy array) in a window. cv2.waitKey(1) == ord('q') means that when the user presses 'q' on the keyboard, our video will stop.

While the VideoCapture object is using the webcam, no other process on your system can use it. Hence, once our work is done, we release the resource. This is done using cap.release().

The Python OpenCV destroyAllWindows() function closes all windows created by the script.

Thanks for reading this, and let's see how it works.

OMG! The emotions are recognized. Hope you enjoyed reading this.

360DigiTMG

Sandhya Kuppala

Aditya Dhillon

Bharani Kumar Depuru

Sharat Manikonda


More articles by Saraswathi Natarajan
