
What are Face Matching Algorithms?

Everything about Face Matching Algorithms


What is a face matching algorithm?

In simple terms, a face matching algorithm is a set of rules that a computer uses to detect a face in an image and then to compare that face to another face (or faces) to determine whether there is a match.
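At its core, the "compare" step usually reduces to measuring the similarity between two feature vectors extracted from the faces and checking it against a threshold. The sketch below illustrates that idea with cosine similarity; the vectors, values, and the 0.6 threshold are all hypothetical stand-ins, not Sightcorp's actual parameters.

```python
import numpy as np

def is_match(feat_a: np.ndarray, feat_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Compare two face feature vectors with cosine similarity.

    Returns True when the similarity reaches the (hypothetical) threshold.
    """
    sim = float(np.dot(feat_a, feat_b) /
                (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))
    return sim >= threshold

# Toy vectors standing in for features extracted from two face images.
a = np.array([0.9, 0.1, 0.4])
b = np.array([0.88, 0.12, 0.41])   # near-duplicate of a
c = np.array([-0.5, 0.8, -0.2])    # a very different face

print(is_match(a, b))  # similar vectors: declared a match
print(is_match(a, c))  # dissimilar vectors: no match
```

Real systems differ mainly in *how* the feature vectors are produced, which is exactly what separates the algorithm families discussed below.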

What are the different types of face matching algorithms?

We can distinguish between two groups of face matching algorithms: classical machine learning algorithms that use handcrafted feature descriptors, and deep learning methods.

Here is an explanation of each of these terms, along with examples:

Feature descriptors

These are applied to raw images to extract features. The features then serve as input data for the machine learning algorithms. Some examples include:

  • Eigenfaces/Principal Component Analysis
  • Local Binary Patterns Histograms (LBPH)
  • Fisherfaces
  • Scale-Invariant Feature Transform (SIFT)
  • Speeded-Up Robust Features (SURF)


Some of these feature descriptors can also be used in combination with one another to develop a face matching algorithm, e.g. Eigenfaces and Local Binary Patterns Histograms.
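To make the Eigenfaces idea concrete, the sketch below runs principal component analysis on a toy set of flattened "images" using numpy's SVD. The random data, the 6×6 image size, and keeping 4 components are illustrative assumptions only; real eigenface pipelines work on aligned grayscale face crops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face images": 8 samples of 6x6 pixels, flattened to 36-dim vectors.
faces = rng.normal(size=(8, 36))

# Eigenfaces = PCA on mean-centred image vectors, here via SVD.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:4]  # keep the 4 strongest components (assumed choice)

def project(img: np.ndarray) -> np.ndarray:
    """Represent an image by its eigenface coefficients."""
    return eigenfaces @ (img - mean_face)

# Projections can then be compared directly, e.g. by Euclidean distance.
d_same = np.linalg.norm(project(faces[0]) - project(faces[0]))
d_diff = np.linalg.norm(project(faces[0]) - project(faces[1]))
print(d_same, d_diff)  # zero for identical images, positive otherwise
```

The low-dimensional coefficients produced by `project` are exactly the kind of features a classical matcher would feed into the machine learning algorithms described next.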

Machine learning algorithms

Machine learning algorithms are used in combination with feature descriptors to perform face matching. They use the input data from the feature descriptors to train a face recognition model. Here are some examples of machine learning algorithms that you can use for this purpose:

  • Neural Networks
  • Support Vector Machines
  • Nearest Neighbor
  • Decision Trees


Deep learning methods

Deep learning is the most recent approach to face recognition: instead of relying on handcrafted feature descriptors, deep neural networks learn the feature representations directly from large sets of training images. Click here to learn more about deep learning methods for face matching.

Which face matching algorithm(s) do we use at Sightcorp?

At Sightcorp, we use deep learning techniques to develop FaceMatch (our facial recognition technology), since they give better results than the classical techniques described above.

More specifically, we use Convolutional Neural Networks (CNNs). We feed the networks a huge dataset of identities, with thousands of facial images per identity. (The dataset itself contains tens of millions of facial images.) The CNNs process these images and compute faceprints. Faceprints provide flexibility in how matching is done: you can either compare two faceprints directly to determine whether they match, or store the faceprints and later conduct a face search by comparing a new faceprint to all those already in the database.
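The second mode, a 1:N face search over stored faceprints, can be sketched as follows. The stored embeddings, identity names, and the 0.9 threshold are illustrative assumptions, not FaceMatch internals; the dot product serves as cosine similarity because the vectors are unit length.

```python
import numpy as np

# Hypothetical database of stored faceprints (unit-length embeddings).
database = {
    "id_001": np.array([0.6, 0.8, 0.0]),
    "id_002": np.array([0.0, 0.6, 0.8]),
}

def search(faceprint: np.ndarray, threshold: float = 0.9):
    """1:N search: return the best identity above threshold, else None."""
    best_id, best_sim = None, threshold
    for identity, stored in database.items():
        sim = float(np.dot(faceprint, stored))  # cosine sim for unit vectors
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

print(search(np.array([0.58, 0.81, 0.05])))  # close to id_001's faceprint
print(search(np.array([1.0, 0.0, 0.0])))     # nothing above threshold: None
```

At scale, production systems replace this linear scan with an approximate nearest-neighbor index, but the matching logic is the same.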

Click here to learn more about Sightcorp’s FaceMatch


Technical Specifications

The table below shows how FaceMatch SDK performs on the Labelled Faces in the Wild (LFW) dataset:

FPR     TPR                  Threshold (inverse of distance)
0.1     0.99900 ± 0.00213    0.55448
0.01    0.99667 ± 0.00537    0.59791
0.001   0.99367 ± 0.00605    0.62989

FPR = False Positive Rate
TPR = True Positive Rate
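How TPR and FPR follow from a decision threshold can be shown with a handful of labeled similarity scores. The scores and labels below are made up for illustration and are unrelated to the LFW numbers above.

```python
import numpy as np

# Hypothetical matcher scores for face pairs, with ground-truth labels
# (1 = same-person pair, 0 = different-person pair).
scores = np.array([0.95, 0.80, 0.70, 0.40, 0.30, 0.10])
labels = np.array([1,    1,    0,    1,    0,    0])

def rates_at(threshold: float):
    """TPR and FPR when pairs scoring >= threshold count as matches."""
    pred = scores >= threshold
    tpr = float(pred[labels == 1].mean())  # matched same-person pairs
    fpr = float(pred[labels == 0].mean())  # wrongly matched different pairs
    return tpr, fpr

print(rates_at(0.5))  # TPR and FPR at a similarity threshold of 0.5
```

Raising the threshold lowers the FPR at the cost of TPR, which is exactly the trade-off the table above reports for three fixed FPR levels.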

These results are indicative only and were measured on the Labelled Faces in the Wild dataset. Customers can expect similar performance, with possible variations depending on hardware and on the availability of annotated training data.