
What are Face Matching Algorithms?

Everything about Face Matching Algorithms


What is a Face Matching algorithm?

In simple terms, a face matching algorithm is a set of rules that a computer uses to detect a face in an image and then to compare that face to another face (or faces) to determine whether there is a match.

What are the different types of face matching algorithms?

We can distinguish between two groups of face matching algorithms: classical machine learning algorithms that use handcrafted feature descriptors, and deep learning methods.

Here is an explanation of each of these terms, along with examples:

Feature descriptors

These are applied on raw images to extract features. The features are then used as input data for the machine learning algorithms. Some examples include:

  • Eigenfaces/Principal Component Analysis
  • Local Binary Patterns Histograms (LBPH)
  • Fisherfaces
  • Scale-Invariant Feature Transform (SIFT)
  • Speeded-Up Robust Features (SURF)

Some of these feature descriptors can also be used in combination with one another to develop a face matching algorithm, e.g. Eigenfaces and Local Binary Patterns Histograms.
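To make the first descriptor on the list concrete, here is a minimal NumPy sketch of the Eigenfaces idea: flattened face images are projected onto their top principal components, and the resulting low-dimensional vectors serve as features. The function name and the toy random "faces" are our own illustration, not part of any particular library.

```python
import numpy as np

def eigenfaces(images, n_components):
    """Project flattened face images onto their top principal components.

    images: (n_samples, n_pixels) array of flattened grayscale faces.
    Returns (mean_face, components, projections).
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data yields the principal components ("eigenfaces")
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]          # each row is one eigenface
    projections = centered @ components.T   # low-dimensional feature vectors
    return mean_face, components, projections

# Toy example: 10 random 8x8 "faces" reduced to 4-dimensional features
rng = np.random.default_rng(0)
faces = rng.random((10, 64))
mean_face, components, feats = eigenfaces(faces, n_components=4)
print(feats.shape)  # (10, 4)
```

In a real Eigenfaces pipeline the projections, not the raw pixels, are what gets passed to the matching stage.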

Machine learning algorithms

Machine learning algorithms are used in combination with feature descriptors to perform face matching. They use the input data from the feature descriptors to train a face recognition model. Here are some examples of machine learning algorithms that you can use for this purpose:

  • Neural Networks
  • Support Vector Machines
  • Nearest Neighbor
  • Decision Trees
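As a sketch of how the simplest of these, nearest neighbor, can perform face matching: each enrolled identity is represented by a feature vector produced by a descriptor, and a probe is assigned the identity of the closest gallery vector. The function, gallery, and labels below are illustrative assumptions.

```python
import numpy as np

def nearest_neighbor_match(probe, gallery, labels):
    """Return the label of the gallery feature vector closest to the probe.

    probe:   (d,) feature vector from a feature descriptor.
    gallery: (n, d) enrolled feature vectors.
    labels:  list of n identity labels.
    """
    distances = np.linalg.norm(gallery - probe, axis=1)  # Euclidean distances
    return labels[int(np.argmin(distances))]

# Toy example with 2-D "features"
gallery = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = ["alice", "bob", "carol"]
print(nearest_neighbor_match(np.array([0.9, 1.2]), gallery, labels))  # bob
```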

Deep learning methods

This is the most recent approach to face recognition: instead of relying on handcrafted descriptors, deep neural networks learn the feature representation directly from large amounts of training data.

Which Algorithms are used in Face Recognition?

Different face recognition systems rely on different algorithms. Some identify facial features by extracting facial landmarks from an image of the subject's face. The algorithm analyzes the position and shape of the eyes, the jaw, and other facial features, and the resulting data is then used to search for other images with similar facial characteristics.

Other algorithms work by normalizing a gallery of face images and then compressing the face data, saving only the data that is useful for face recognition. A probe image is then compared against this compressed database of facial characteristics. This approach underpinned some of the earliest successful face recognition systems.

Facial recognition algorithms can be split into two main categories: geometric approaches, which identify and compare distinct facial features, and photometric approaches, which distil an image into numerical values and compare those values against templates to eliminate variance. Some researchers instead distinguish between holistic approaches, which recognize the face in its entirety, and feature-based approaches, which subdivide the face into components and analyze each separately.

Face recognition software solutions use different algorithms that work on different principles: some use Eigenfaces and Principal Component Analysis, while others rely on Linear Discriminant Analysis, elastic graph matching, or hidden Markov models. Each has its advantages and disadvantages; the right choice depends on the requirements of the client looking to incorporate facial recognition into their business.

Which face matching algorithm(s) do we use at Sightcorp?

At Sightcorp, we use deep learning techniques to develop FaceMatch (our facial recognition technology), because they give significantly better results than the classical techniques.

More specifically, we use Convolutional Neural Networks (CNNs). We train the networks on a huge dataset of identities, with thousands of facial images per identity. (The dataset itself contains tens of millions of facial images.) The CNNs process these images and compute faceprints. Faceprints offer flexibility in how matching is done: you can compare two faceprints directly to determine whether they match, or you can store them and later conduct a face search by comparing a new faceprint against all those already in the database.
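The compare-or-search step described above can be sketched as follows. This is not FaceMatch's actual implementation: the function names, the cosine-similarity metric, the example threshold, and the tiny 4-dimensional faceprints are all our own illustrative assumptions (real faceprints have far more dimensions).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two faceprint vectors (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe, database, threshold=0.6):
    """Compare a probe faceprint against all stored faceprints.

    Returns (identity, score) pairs that clear the match threshold,
    best match first. The threshold value here is illustrative only.
    """
    scores = [(name, cosine_similarity(probe, fp)) for name, fp in database.items()]
    matches = [(n, s) for n, s in scores if s >= threshold]
    return sorted(matches, key=lambda x: x[1], reverse=True)

# Toy 4-dimensional faceprints standing in for real ones
db = {
    "id_001": np.array([0.9, 0.1, 0.0, 0.2]),
    "id_002": np.array([0.0, 0.8, 0.6, 0.1]),
}
probe = np.array([0.85, 0.15, 0.05, 0.25])
print(search(probe, db))  # only id_001 clears the threshold
```

The same similarity function covers both use cases: a one-to-one comparison is just a search against a database of size one.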


Technical Specifications

The table below shows how FaceMatch SDK performs on the Labelled Faces in the Wild (LFW) dataset:

FPR      TPR                   Threshold (inverse of distance)
0.1      0.99900 ± 0.00213     0.55448
0.01     0.99667 ± 0.00537     0.59791
0.001    0.99367 ± 0.00605     0.62989

FPR = False Positive Rate
TPR = True Positive Rate
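Reading the table as a decision rule, a pair of faceprints is accepted as a match when the inverse of their distance clears the threshold for the chosen operating point. The function below is our own sketch of that interpretation, using the FPR = 0.01 row as the default; the name and the exact meaning of "inverse of distance" are assumptions, not the SDK's API.

```python
def is_match(distance, threshold=0.59791):
    """Accept a match when 1/distance meets the operating-point threshold.

    distance:  distance between two faceprints (smaller = more similar).
    threshold: value from the table's FPR = 0.01 row (illustrative default).
    """
    score = 1.0 / distance if distance > 0 else float("inf")
    return score >= threshold

print(is_match(1.5))  # 1/1.5 ~ 0.667 >= 0.59791 -> True
print(is_match(2.0))  # 1/2.0 = 0.500 <  0.59791 -> False
```

Choosing a row further down the table (lower FPR, higher threshold) trades away some true positives for fewer false accepts.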

These results are indicative only and are based on the specific Labelled Faces in the Wild dataset. Customers can expect similar performance, with possible variations due to hardware and the availability of annotated data.