How independent testing has impacted face recognition accuracy – and helped create trustworthy marketing


Contributors

Ján Lunter

CEO of Innovatrics

Matúš Kapusta

Head of Government Solutions

Does a generic claim like “99.99% face recognition accuracy” really make the grade now, if not supported by evidence from independent assessments?

Algorithms responding to current challenges

The FRVT (Face Recognition Vendor Test) evaluation is both a training ground and a somewhat cruel arena. It provides reliable benchmarking for face recognition accuracy in both verification and identification, and it confronts biometric algorithms with additional challenges in order to check how they perform in different circumstances. 

The result is a list of contenders, ranked from the lowest to the highest performance. All data is public and accessible through the NIST website.
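To make the kind of figures behind such rankings concrete, here is a minimal, simplified sketch of how a verification benchmark can summarize an algorithm: gather similarity scores for genuine pairs (same person) and impostor pairs (different people), choose the threshold that yields a target false match rate (FMR), and report the false non-match rate (FNMR) at that threshold. This is only an illustration of the general idea; the function name and the toy numbers are ours, and NIST’s actual FRVT protocol is considerably more involved.

```python
import numpy as np

def fnmr_at_fmr(genuine_scores, impostor_scores, target_fmr=1e-4):
    """Report FNMR at the threshold where impostor FMR hits target_fmr.

    genuine_scores:  similarity scores for pairs of images of the same person
    impostor_scores: similarity scores for pairs of images of different people
    Illustrative only -- not the actual NIST/FRVT evaluation harness.
    """
    impostor = np.sort(np.asarray(impostor_scores))[::-1]   # highest first
    # Threshold chosen so only a target_fmr fraction of impostor pairs match
    k = max(int(np.floor(target_fmr * len(impostor))), 1)
    threshold = impostor[k - 1]
    genuine = np.asarray(genuine_scores)
    fnmr = float(np.mean(genuine < threshold))               # genuine pairs wrongly rejected
    return threshold, fnmr

# Toy example with random scores -- real evaluations use millions of image pairs.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 10_000)
impostor = rng.normal(0.3, 0.1, 1_000_000)
thr, fnmr = fnmr_at_fmr(genuine, impostor, target_fmr=1e-4)
print(f"FNMR = {fnmr:.4f} at FMR = 1e-4 (threshold {thr:.3f})")
```

A statement like “FNMR of X at an FMR of Y on a given dataset” is the kind of claim FRVT rankings let a buyer verify, in contrast to a single unqualified accuracy percentage.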


Brief history of FRVT

In the early 2000s, the National Institute of Standards and Technology (NIST) in the U.S. decided to start independent evaluations for facial recognition technologies. The federal agency wanted to provide the U.S. government with a benchmark for the best available technology in the field, both in the market and as a prototype. At that time, facial recognition was just starting to show its potential. This is how the FRVT (Face Recognition Vendor Test) was born.

The first large-scale FRVT evaluations took place during the 2000s. The algorithms’ accuracy was challenged on a standard image database, called FERET, which at that time consisted of 2,413 still facial images representing 856 people. Today, FRVT has become an ongoing program. At the moment, approximately 200 algorithms have been submitted by about a hundred universities, research institutes, and companies from all over the world.

Tests like FRVT were born in order to provide an authoritative standard for evaluating new biometric technologies. Together with other NIST evaluation programs like FpVTE (for fingerprint recognition) and MINEX (which assesses whether fingerprint templates produced by one vendor can be used by another), they have given the fast-paced biometrics industry a common, public yardstick.

This is now a community in which the capability to steadily improve your own algorithms is what really matters. 

This has consequences for marketing, too. Does a generic claim like “99.99% face recognition accuracy” really make the grade now, if not supported by evidence from independent assessments? Not really.

Yet this presents a new challenge to biometrics companies participating in independent evaluations: how to communicate facts and rankings to their customers when those are not always easy to digest.

“Participating in the FRVT shows transparency and confidence in the product.”

We discussed the value of independent face recognition accuracy testing for biometrics companies with Ján Lunter, CEO at Innovatrics, and the company’s Head of Government Solutions, Matúš Kapusta.

How has independent testing impacted your work at Innovatrics?

J. Lunter: It helped us immensely, from the very start of our story. At the beginning of Innovatrics, we were mainly focused on fingerprint recognition. NIST set up a benchmark for fingerprint algorithms and we decided to participate. We may have been a small company among giants such as NEC, but we had extremely fast and accurate algorithms that were also not too demanding on hardware.

This independent comparison helped us in many international tenders in which we competed, and won, against much larger companies. Independent testing provided by NIST really does create a level playing field: you can challenge the biggest companies on equal terms. And that’s terribly exciting!


The Innovatrics algorithm has scored top marks in accuracy and speed. These are crucial for large-scale applications such as document verification, criminal investigation and border control. Read more about the results here.

What is your experience with the Facial Recognition Vendor Test (FRVT)?

J. Lunter: When we turned to the FRVT, our goal was not merely to participate, but to consistently come in among the top 20 or so. This is one of the priorities of our research team. FRVT provides a wealth of data about all our algorithms: we can see where we are doing well and where we lag behind and need to improve.

M. Kapusta: Development in facial recognition is extremely fast. It’s impressive; even if you have a top algorithm in a certain category, within half a year that same algorithm can end up somewhere in the middle of the table… or worse.

The reports provided by NIST are extremely detailed and precise. We can study them to see exactly what we need to do in order to either catch up with the market or to improve our position in the fields in which we’re already leading. For example, we are one of the few companies that are actually able to verify children’s faces. Children’s faces are smaller and change faster, so they provide a bigger challenge for algorithms.

How do you use FRVT results to market your technology?

J. Lunter: For us, participation in FRVT and in other independent testing provided by NIST shows transparency and confidence in what we are doing. If you offer facial recognition and you are not in the FRVT comparisons, you’ll have a hard time persuading customers to opt for your offer.

This helps us when dealing with certain customers, who are sometimes in contact with vendors that promise impossible numbers yet cannot be found in the FRVT rankings at all. Most of the companies that take facial recognition seriously are in the benchmark. And, of course, being among the top competitors is great marketing.

M. Kapusta: You also have to know how to communicate the results of the FRVT to customers, so that they can really focus on what matters most to them. If you are not deeply involved in biometrics, the FRVT reports are anything but easy to read and interpret.

So, every time a new report from NIST comes out, we extract the information that is relevant for us and for our customers. For example, comparing us with the largest ABIS providers shows that we are one of the best in the world, as some of the top-ranking algorithms are actually from universities or research labs and they are not used in commercial products.

We also split the data into use cases, to show how we perform on small databases, for example in attendance systems within companies, and on large ones, such as criminal or border control databases.
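Why gallery size matters so much can be seen with a back-of-the-envelope calculation: if a 1:N search is approximated as N independent 1:1 comparisons, the expected number of false candidates per search grows linearly with the number of enrolled identities. The sketch below uses a hypothetical per-comparison false match rate purely for illustration; it is not a vendor figure or an FRVT result.

```python
# Back-of-the-envelope: why the same algorithm behaves differently on small
# and large databases. All numbers are hypothetical, not measured results.
def expected_false_candidates(gallery_size: int, fmr_per_comparison: float) -> float:
    """Expected wrong identities returned per 1:N search, approximating the
    search as gallery_size independent 1:1 comparisons."""
    return gallery_size * fmr_per_comparison

fmr = 1e-5  # hypothetical per-comparison false match rate
for n in (1_000, 100_000, 10_000_000):  # attendance list -> national-scale database
    print(f"gallery {n:>10,}: ~{expected_false_candidates(n, fmr):.2f} false candidates per search")
```

On a 1,000-person attendance list the expected noise is negligible, while at national scale the same algorithm would surface many false candidates per search unless it operates at a far stricter threshold, which is why results are best presented per use case.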

Are you going to keep engaging in FRVT and other independent testing in the future?

J. Lunter: Definitely. And it is great that other emergent technologies are getting a similar independent testing treatment. Recently we passed tests by iBeta – an independent biometrics testing lab – with our liveness check technology, and we are one of the first in the world to actually do so. These evaluations compare liveness solutions from different companies and check whether they can be spoofed by showing a photo or a mask. We are now preparing for level 2 of these tests.


Blair Crawford is the co-founder and Managing Director of Daltrey, an Australian biometric identity provider with a mission to redefine how identity is used to create safer, more secure environments across government, critical infrastructure and enterprise. The Daltrey solution runs Innovatrics’ algorithms, whose face recognition accuracy is publicly visible on the NIST website.


Why is having a safe global digital identity important, how is its adoption faring, and what will it look like in the near future?
Watch Blair Crawford’s talk at Trust Report conference

Blair Crawford:
“You should never mark your own homework.”

Opinion of a provider of a biometric identification platform

“If you are a good biometric provider, or anyone who operates in the security industry, you know that there is such a high level of competitiveness that you actually want to showcase results which can be reviewed and matched against others. Competition is good and it drives innovation. You are never going to get innovation if you are only using your own team to test what you built. Briefly said: you should never mark your own homework.

If you are not willing to communicate with your customers about what they should expect from a security-risk and user-experience perspective, then something is probably not stacking up in terms of your offering. On the other hand, I think it is extremely important that the market has the opportunity to ask vendors for the details and the expectations for the services they are paying for.”



Author: Giovanni Blandino