An accuracy rate of 97.25 percent is fairly impressive in most cases, and facial recognition is no exception: DeepFace, a facial-verification project under development at Facebook, reached that level, according to a research paper the social network released last week. The same paper reported that human beings shown two unfamiliar photos of faces were able to identify whether the subjects were the same person 97.53 percent of the time, barely edging out DeepFace.
MIT Technology Review pointed out that the progress made by DeepFace marks “a significant advance” over previous facial-recognition software, adding that DeepFace has received a boost from Facebook’s emphasis on artificial intelligence.
Yaniv Taigman, a member of the social network’s AI team, told MIT Technology Review that DeepFace is actually not a facial-recognition program, which matches names with faces, but rather a facial-verification program, which recognizes when two images portray the same face. He added of DeepFace’s progress:
You normally don’t see that sort of improvement. We closely approach human performance.
The abstract to Facebook’s research paper on DeepFace reads:
In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3-D face modeling in order to apply a piecewise affine transformation and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of 4 million facial images belonging to more than 4,000 identities, where each identity has an average of over a thousand samples. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.25 percent on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 25 percent, closely approaching human-level performance.
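The abstract’s phrase “locally connected layers without weight sharing” is the key architectural difference from a standard convolutional layer: instead of sliding one shared filter across the whole image, a locally connected layer learns a separate filter for every position, which is why the network’s parameter count balloons past 120 million. The following minimal 1-D NumPy sketch (a hypothetical illustration, not Facebook’s implementation) contrasts the two, showing that the outputs have the same shape while the locally connected version holds one filter per output position:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Standard 1-D convolution: ONE filter w, shared at every position."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def locally_connected1d(x, W):
    """Locally connected layer: a DIFFERENT filter W[i] at each output
    position, so there is no weight sharing and the parameter count
    scales with the number of positions."""
    n_out, k = W.shape
    return np.array([x[i:i + k] @ W[i] for i in range(n_out)])

x = rng.normal(size=8)       # toy 1-D "image"
k = 3                        # filter width
n_out = len(x) - k + 1       # 6 output positions

w_shared = rng.normal(size=k)            # 3 parameters in total
W_local = rng.normal(size=(n_out, k))    # 6 * 3 = 18 parameters in total

print(conv1d(x, w_shared).shape)            # both outputs: shape (6,)
print(locally_connected1d(x, W_local).shape)
print(w_shared.size, W_local.size)          # 3 vs. 18 parameters
```

Aligning every face to a canonical 3-D pose first (the paper’s piecewise affine step) is what makes weight sharing unnecessary: once eyes, nose, and mouth land in predictable positions, each region of the image can afford its own specialized filter.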
Readers: What are some potential uses for DeepFace if it advances past project mode?
Image courtesy of Shutterstock.