Liveness Check using ML Kit | iOS Tutorial

We have all used captchas, image matching, and other human-verification systems, and they work like a charm. We will use the same concept in this tutorial, but with a different approach: we will use Google's ML Kit to verify human liveness by capturing different face contours and expressions such as smiling, blinking, and head movement.


Sounds interesting, right?

Let's get started.......





For this tutorial, I am using Xcode 11.4 and Swift 5.

Download Starter Pack

Download the starter pack from the link below and get familiar with the code. To run the project successfully, you need to run it on a real iPhone device.

For the simplicity of the tutorial, I have not covered camera setup, as there are already multiple tutorials available on that topic.

First, you need to install all the dependencies required to run the project. Go to your project directory and run the command below in your terminal.

pod install
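
If you are curious what that installs: the dependencies are the Firebase ML Vision pods. A minimal Podfile for this setup would look roughly like the sketch below (the LivenessCheck target name is a placeholder for your app's actual target, and the pod names follow the Firebase ML Kit releases current at the time of writing):

platform :ios, '11.0'
use_frameworks!

# Placeholder target name; use your app's actual target.
target 'LivenessCheck' do
  # Core Firebase plus the on-device ML Vision face detection model.
  pod 'Firebase/Core'
  pod 'Firebase/MLVision'
  pod 'Firebase/MLVisionFaceModel'
end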

What Is Firebase ML Kit?

Firebase ML Kit is a mobile SDK that makes it easier for mobile developers to include machine learning capabilities in their applications. It provides pre-built APIs such as:

  • Text Recognition
  • Face Detection
  • Object Detection and Tracking
  • Image Labelling

and many more.....

Setting Up a Firebase Account

To set up a Firebase account, follow the account setup section in this Getting Started With Firebase Tutorial. While the Firebase products are different, the account creation and setup are exactly the same.

The general idea is that you:

  1. Create an account.
  2. Create a project in the console.
  3. Add an iOS app to the project.
  4. Drag the GoogleService-Info.plist into your project.
  5. Import Firebase in AppDelegate:

import Firebase

  6. Initialize Firebase in the application(_:didFinishLaunchingWithOptions:) delegate method:

FirebaseApp.configure()
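
Put together, the relevant part of AppDelegate.swift ends up looking roughly like this (a minimal sketch; the starter project's AppDelegate may contain additional setup):

import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Configure Firebase before any ML Kit API is used.
        FirebaseApp.configure()
        return true
    }
}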

It’s a simple process but, if you hit any snags, the guide above can help.

Note: You need to set up Firebase and create your own GoogleService-Info.plist for both the final and starter projects.

In the Main.storyboard file there is a simple UI layout consisting of a camera preview view and a table view at the bottom, which will display all the checks.


The CameraPreview.swift file contains the full AVFoundation implementation for the camera preview.

After running the project, you will see the app showing the camera feed on the main screen, and we are good to go with some ML setup.

Now move to the LivenessCheckViewController.swift file, as we will be working in this file for the rest of the tutorial.

We will start from the VideoCaptureDelegate method didCaptureVideoFrame, which provides the camera frames. We will use these frames for face detection and the other checks. Paste the code below inside the didCaptureVideoFrame method.

// 1
if let pixelBuffer = pixelBuffer {
    self.predictUsingVision(pixelBuffer: pixelBuffer)
}

  1. The captured frame from the camera is contained in pixelBuffer.

Now we will create a function that takes the pixelBuffer as a parameter.

private func predictUsingVision(pixelBuffer: CVPixelBuffer) {
    // Convert the pixel buffer into a CGImage, then a UIImage.
    let ciImage = CIImage(cvImageBuffer: pixelBuffer)
    let ciContext = CIContext()
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return
    }
    let uiImage = UIImage(cgImage: cgImage)

    // Predict!
    detectFace(uiImage)
}

This function takes the pixelBuffer and converts it into a UIImage. Once the image is produced, we can start predicting with it.

You will see the compiler complaining with a Use of unresolved identifier 'detectFace' error. That is fine; we will take care of it later. First, we need to import the Firebase framework.

import Firebase

After that, define the following properties at the top of the class.

private var faceDetector: VisionFaceDetector?
private let options = VisionFaceDetectorOptions()
private lazy var vision = Vision.vision()

Add the configuration code in the viewDidLoad method.

// Favor speed over accuracy, since we are processing a live video feed.
options.performanceMode = .fast
// Detect all facial landmarks (eyes, ears, nose, mouth, cheeks).
options.landmarkMode = .all
// Enable classification, which powers the smiling and eyes-open probabilities.
options.classificationMode = .all
// Only consider faces at least 10% of the image width.
options.minFaceSize = CGFloat(0.1)

faceDetector = vision.faceDetector(options: options)

Face Detection API

The Face Detection API mainly takes in an image and scans it for any human faces that are present, which is different from facial recognition (in which Big Brother is infamously watching you while you sleep). It's also an on-device API, which means that all the ML inference happens on your device and no personal data is sent to a third-party vendor. For each face it detects, the API can report:

  1. Coordinates of a rectangular bounding box around that face
  2. Coordinates of a rectangular bounding box around the eyes, nose, and mouth for that face
  3. The probability that the face in the picture is smiling
  4. The likelihood that the eyes in that face are open or closed
  5. A face contour outlining the face, eyes, nose, and mouth
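
All of these are exposed on the VisionFace objects the detector returns. As a quick sketch of where each item lives (the has… flags guard values that are only populated when the corresponding detector option is enabled):

// Sketch: reading the Face Detection API outputs from a VisionFace.
func inspect(_ face: VisionFace) {
    // 1. Bounding box around the face.
    print("Face frame: \(face.frame)")

    // 2. Landmark positions, e.g. the left eye, if it was detected.
    if let leftEye = face.landmark(ofType: .leftEye) {
        print("Left eye at \(leftEye.position)")
    }

    // 3. Smile probability (requires classificationMode = .all).
    if face.hasSmilingProbability {
        print("Smiling probability: \(face.smilingProbability)")
    }

    // 4. Eyes-open probability (also from classification).
    if face.hasLeftEyeOpenProbability {
        print("Left eye open probability: \(face.leftEyeOpenProbability)")
    }

    // 5. Contour points outlining the face (requires contourMode = .all).
    if let contour = face.contour(ofType: .face) {
        print("Face contour has \(contour.points.count) points")
    }
}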

We will be using the VisionFaceDetector class to detect faces and their contours in our image. Now we will define the detectFace function.

private func detectFace(_ pickedImage: UIImage) {
    // 1
    let visionImage = VisionImage(image: pickedImage)
    // 2
    faceDetector?.process(visionImage) { [weak self] (faces, error) in
        guard let self = self, error == nil else { return }
        // 3
        guard let faces = faces,
              faces.count == 1,
              let face = faces.first else { return }

        self.validateLiveness(face)
    }
}

  1. Convert the UIImage into a VisionImage object.
  2. Invoke the faceDetector, which returns the faces detected in the provided image.
  3. A few guard checks ensure that exactly one face is present; these are the basis of the liveness check.

Now that we have detected the face, we will extract data from it. Implement the function below.

private func validateLiveness(_ face: VisionFace) {
    // 1
    updateValidStep(number: 1)
}
  1. Since the face detection check has already passed at this point, we update its row in the table view (a sketch of updateValidStep follows below).
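
updateValidStep(number:) ships with the starter project; conceptually it just marks the given check as passed and refreshes its row. If you were wiring it up yourself, one possible version is sketched below (the passedSteps set, the tableView outlet name, and the row mapping are my own assumptions, not the starter code):

// Hypothetical sketch; the starter project's implementation may differ.
private var passedSteps = Set<Int>()

private func updateValidStep(number: Int) {
    // Detection callbacks can arrive off the main thread; UI work must not.
    DispatchQueue.main.async {
        guard !self.passedSteps.contains(number) else { return }
        self.passedSteps.insert(number)
        self.tableView.reloadRows(at: [IndexPath(row: number - 1, section: 0)],
                                  with: .automatic)
    }
}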

Detect Blink


We need to use the information we receive from the model to determine whether the person in the picture is blinking.

For the sake of this tutorial, I'll assume the person is blinking if the open-eye probability of either eye is less than 0.4.

Add the code below to our validateLiveness function.

// Detect blinking
if face.leftEyeOpenProbability < 0.4 || face.rightEyeOpenProbability < 0.4 {
    updateValidStep(number: 2)
}

Detect Left and Right Movement


We will use the headEulerAngleY property to identify left and right movement of the detected face.

I'll assume the head is turned right when headEulerAngleY is +35 degrees or more, and turned left when it is -35 degrees or less.

Add the code below to our validateLiveness function.

// Detect right movement
if face.headEulerAngleY > 35 {
    updateValidStep(number: 3)
}

// Detect left movement
if face.headEulerAngleY < -35 {
    updateValidStep(number: 4)
}

Detect Smile 🙂


Lastly, we will detect a smile using the smilingProbability value. I'll treat a probability above 0.3 as a smiling face.

Add the code below to our validateLiveness function.

// Detect smile
if face.smilingProbability > 0.3 {
    updateValidStep(number: 5)
}
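
For reference, here is the complete validateLiveness function with all five checks in place:

private func validateLiveness(_ face: VisionFace) {
    // 1. A single face was detected.
    updateValidStep(number: 1)

    // 2. Blink: either eye is mostly closed.
    if face.leftEyeOpenProbability < 0.4 || face.rightEyeOpenProbability < 0.4 {
        updateValidStep(number: 2)
    }

    // 3. Head turned right.
    if face.headEulerAngleY > 35 {
        updateValidStep(number: 3)
    }

    // 4. Head turned left.
    if face.headEulerAngleY < -35 {
        updateValidStep(number: 4)
    }

    // 5. Smile.
    if face.smilingProbability > 0.3 {
        updateValidStep(number: 5)
    }
}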

And we are done with the liveness check... You can play around with the other contours and add your own checks.

Where to go from here?

You have learned the basics of ML Kit and implemented a simple app using that knowledge. You can add more checks and tighten the flow with extra validations, such as generating the checks in random order, or keeping a counter for each check so that detection fails if a step is not completed in time. Find the complete project in the link below.
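
As a starting point for the random-order idea, one possible shape is sketched below; the step names and the frame-based expiry are my own invention, not part of the project:

// Hypothetical helper for randomized, expiring liveness challenges.
enum LivenessStep: CaseIterable {
    case blink, turnLeft, turnRight, smile
}

final class LivenessChallenge {
    private var remaining = LivenessStep.allCases.shuffled()
    private var framesOnCurrentStep = 0
    private let maxFramesPerStep = 150   // roughly 5 seconds at 30 fps

    /// The check the user should perform next, or nil when all have passed.
    var currentStep: LivenessStep? { remaining.first }

    /// Call once per processed frame; returns false when the challenge expires.
    func tick(stepPassed: Bool) -> Bool {
        guard currentStep != nil else { return true }
        if stepPassed {
            remaining.removeFirst()
            framesOnCurrentStep = 0
        } else {
            framesOnCurrentStep += 1
        }
        return framesOnCurrentStep < maxFramesPerStep
    }
}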


Note: Firebase ML Kit has since been deprecated. An updated version of this article, with revised code using GoogleMLKit, is available here: https://sizaan5.medium.com/face-detection-smile-detection-using-googlemlkit-facedetection-ios-swift-6b5b77eeba99

