
Face Recognition Using Deep Learning

Everything about Face Recognition Using Deep Learning

     

What is Deep Learning?

Deep learning is a subset of machine learning, which, in turn, is a subset of artificial intelligence (AI). When it comes to facial recognition, deep learning enables us to achieve greater accuracy than traditional machine learning methods.

How is deep learning used in facial recognition?

Deep learning networks are loosely based on the structure of the human brain, and enable us to train machines to learn by example. This means that once the deep learning algorithms have been trained for long enough using datasets that are both sufficiently large and diverse, they can apply what they have learned to make predictions or produce results in response to new data.

At Sightcorp, we use deep learning in the form of Convolutional Neural Networks (CNNs) to perform facial recognition. A CNN is a type of Deep Neural Network (DNN) that is optimized for complex tasks such as image processing, which is required for facial recognition. CNNs consist of multiple layers of connected neurons: an input layer, an output layer, and a number of hidden layers in between.
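
As a rough illustration of this layered structure, the sketch below stacks a few convolutional layers in TensorFlow/Keras between an input and an output layer. The layer sizes, the 160x160 input, and the 2048-dimensional output are illustrative assumptions only, not FaceMatch's actual architecture.

import tensorflow as tf

def build_toy_face_cnn(input_shape=(160, 160, 3), embedding_size=2048):
    # Input layer: the face image, processed as groups of pixels.
    inputs = tf.keras.Input(shape=input_shape)
    # Hidden layers between input and output: convolutions that learn visual features.
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(128, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    # Output layer: an array of numbers describing the face.
    outputs = tf.keras.layers.Dense(embedding_size)(x)
    return tf.keras.Model(inputs, outputs)

model = build_toy_face_cnn()
model.summary()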

With facial recognition, the input is an image, which the CNN processes as groups of pixels. The network scans these groups as small matrices: the pixel values are multiplied by the weights of learned filters, and the results are fed into the next layer. This process continues through all the layers until it reaches the output layer, where the network produces an output in the form of an array of 2048 numbers. This array is referred to as a faceprint.
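
To make the faceprint idea concrete, the sketch below uses an off-the-shelf ResNet50 from Keras as a stand-in feature extractor, because its pooled output happens to be an array of 2048 numbers. FaceMatch uses its own network, so treat this purely as an illustration; the image path is a placeholder.

import numpy as np
import tensorflow as tf

# Stand-in feature extractor whose pooled output is a 2048-number array.
extractor = tf.keras.applications.ResNet50(include_top=False, pooling="avg", weights="imagenet")

def compute_faceprint(image_path):
    # Load a cropped, aligned face image and turn it into an array of pixel values.
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    pixels = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    pixels = tf.keras.applications.resnet50.preprocess_input(pixels)
    # Push the pixels through the network's layers; the final output is the faceprint.
    return extractor.predict(pixels)[0]   # shape: (2048,)

faceprint = compute_faceprint("face.jpg")   # placeholder path
print(faceprint.shape)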

The computed faceprint can then be compared to another faceprint (1:1 matching), or to a database of faceprints (1:N matching), to determine whether or not there is a match. If two or more faceprints are similar enough, based on the chosen confidence thresholds, they will be regarded as a match.
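
A minimal sketch of both matching modes is shown below, assuming faceprints are compared with cosine similarity and using a purely hypothetical confidence threshold of 0.6; the actual distance measure and thresholds depend on the model in use.

import numpy as np

def similarity(a, b):
    # Cosine similarity between two faceprints (1.0 means identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_1_to_1(faceprint_a, faceprint_b, threshold=0.6):
    # 1:1 matching: are these two faceprints the same person?
    return similarity(faceprint_a, faceprint_b) >= threshold

def match_1_to_n(probe, database, threshold=0.6):
    # 1:N matching: which enrolled identities are similar enough to the probe?
    scores = {name: similarity(probe, fp) for name, fp in database.items()}
    return {name: s for name, s in scores.items() if s >= threshold}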

Traditional machine learning methods vs deep learning and neural networks

Before deep learning rose to prominence, traditional machine learning methods, such as support vector machines, were used for facial recognition. While deep learning methods have now become the norm, it is still worth noting some of the main differences between the two approaches:

  • Deep learning performs better at a large scale, while traditional machine learning methods may perform better when working with smaller datasets.
  • With traditional machine learning, you would usually need to break a problem down into each of its steps, and solve each step separately. In the case of facial recognition, this would mean using one algorithm for face detection, another for feature extraction, etc. With deep learning, however, you can solve the problem end to end (see the sketch after this list).
  • With traditional machine learning methods, features have to be identified and extracted with hand-written code, whereas a deep learning network learns the relevant features itself.
  • With traditional machine learning methods, we can see why an algorithm produced a certain result, whereas this is not usually the case with deep learning.
  • While deep learning is often more resource-intensive than traditional machine learning methods, it has the potential to deliver more accurate results.
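
To illustrate the pipeline-versus-end-to-end difference, here is a hedged sketch: a traditional approach that feeds hand-engineered HOG features into a support vector machine, next to a single network trained end to end on raw pixels. The feature choice, layer sizes, and libraries (scikit-image, scikit-learn) are illustrative assumptions, not a description of any particular production system.

from skimage.feature import hog      # hand-coded feature extraction
from sklearn.svm import SVC          # a classic classifier such as an SVM
import tensorflow as tf

def traditional_pipeline(gray_faces, labels):
    # Step 1: extract hand-engineered features from each grayscale face image.
    features = [hog(img) for img in gray_faces]
    # Step 2: train a separate classifier on those features.
    clf = SVC(kernel="linear")
    clf.fit(features, labels)
    return clf

def end_to_end_model(num_identities, input_shape=(64, 64, 1)):
    # A single network learns both the features and the decision from raw pixels.
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_identities, activation="softmax"),
    ])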

 

Using Python and TensorFlow for Deep Learning in Facial Recognition

At Sightcorp, we use Python and TensorFlow in the development of FaceMatch, our deep learning-based facial recognition technology.

Python is the industry-standard programming language for deep learning. It makes prototyping quick and easy, which is essential during the research phase. There is also a large community of Python users, which means that there is plenty of support available, as well as many opportunities to share ideas with other developers.

TensorFlow is a deep learning framework, initially developed by Google and now available as open source. We chose TensorFlow because it has a large community of users, which makes it easier to find support. Other options for deep learning frameworks include Caffe, Apache MXNet, PyTorch, and Keras.

Our development team says, “We chose TensorFlow because it is designed for a production environment. It allows for easy deployment on desktop, mobile, and cloud environments.”
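
As a small, hypothetical example of that deployment path, a trained Keras model can be exported to TensorFlow's SavedModel format, which desktop applications, TensorFlow Serving in the cloud, and the TensorFlow Lite converter for mobile can all consume. The model below is a toy stand-in, not FaceMatch itself.

import tensorflow as tf

# Toy stand-in for a trained face recognition model (not FaceMatch).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(160, 160, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2048),
])

# Export to the SavedModel format used by TensorFlow's deployment tooling.
tf.saved_model.save(model, "export/face_model/1")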


Face recognition using deep learning for Android and iOS

On mobile devices, facial recognition using deep learning is still under development. Since deep learning is computationally intensive, there is still plenty of work to be done on mobile processors that are better suited to this task, as well as on optimizing algorithms to run on mobile devices. The biggest challenge at the moment is optimizing deep learning for speed on mobile devices.
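
One common route, sketched below, is to convert a TensorFlow SavedModel (such as the one exported in the earlier sketch) to TensorFlow Lite with default post-training quantization, which shrinks the model and speeds up inference on Android and iOS; the file paths are placeholders.

import tensorflow as tf

# Convert a SavedModel to TensorFlow Lite for on-device inference.
converter = tf.lite.TFLiteConverter.from_saved_model("export/face_model/1")
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # default post-training quantization
tflite_model = converter.convert()

# The resulting .tflite file can be bundled into an Android or iOS app.
with open("face_model.tflite", "wb") as f:
    f.write(tflite_model)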

That said, there are already some deep learning-based facial recognition solutions available for iOS and Android. At Sightcorp, we are also working on developing a mobile SDK for FaceMatch, which would be able to run on Android.


Technical Specifications

The table below shows how FaceMatch SDK performs on the Labelled Faces in the Wild (LFW) dataset:

FPR        TPR                  Threshold (inverse of distance)
0.1        0.99900 ± 0.00213    0.55448
0.01       0.99667 ± 0.00537    0.59791
0.001      0.99367 ± 0.00605    0.62989

FPR = False Positive Rate
TPR = True Positive Rate

These results are indicative only and were measured on the Labelled Faces in the Wild dataset. Customers can expect similar performance, with possible variations depending on hardware and the availability of annotated data.
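
For readers who want to produce a comparable table on their own verification pairs, the sketch below shows one way to read off the TPR and threshold at fixed FPR values from an ROC curve using scikit-learn. The scores and labels are tiny placeholders rather than LFW data, and scikit-learn is an assumption here, not part of FaceMatch.

import numpy as np
from sklearn.metrics import roc_curve

# Placeholder verification results: one similarity score per face pair and a
# ground-truth label (1 = same person, 0 = different person).
scores = np.array([0.92, 0.15, 0.78, 0.40, 0.88, 0.05, 0.63, 0.30])
labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])

fpr, tpr, thresholds = roc_curve(labels, scores)

for target_fpr in (0.1, 0.01, 0.001):
    # Largest operating point whose false positive rate does not exceed the target.
    idx = int(np.searchsorted(fpr, target_fpr, side="right")) - 1
    print(f"FPR <= {target_fpr}: TPR = {tpr[idx]:.5f}, threshold = {thresholds[idx]:.5f}")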