“For a long time, we didn’t believe in face recognition as a viable identification technology in our company. It was too error-prone,” says Jan Lunter, CEO and CTO of Innovatrics. His company now provides very fast and accurate face recognition algorithms, and the ability to recognize and identify faces is one of its flagship products. This U-turn in attitude towards face recognition was caused by rapid advances in AI: what looked impossible just a few years back is no longer a problem. At Softecon, he explained how AI has changed the field of biometrics, in which Innovatrics has been active for 15 years now.
An exponential growth in accuracy
In 2010, the U.S. National Institute of Standards and Technology (NIST) started to provide the first standardized testing of the accuracy of face recognition algorithms. NIST is an accepted authority in testing and also sets benchmarks for fingerprint algorithms (where Innovatrics is one of the top performers in the world).
“At that time, face recognition algorithms had a much higher error rate than humans. Between 2010 and 2014, their accuracy improved only 1.4-fold and was still subpar compared to a human,” says Jan Lunter. Only in 2015 did accuracy start to increase significantly. The reason? Algorithms started to incorporate deep neural networks for machine learning.
Computers imitate the human brain
Deep neural networks try to imitate the workings of neurons in the human brain. Between the input (a picture in which we want to find a face) and the output (an answer as to whether there is indeed a face in the picture) lies a network of electronic neurons several layers deep. These neurons analyze all aspects of the picture and try to find the parts that resemble a face. To do so, they are trained beforehand: they analyze millions of pictures in which operators have marked whether a face is present and where it is located.
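To make that input-to-output flow concrete, here is a minimal sketch of a forward pass through such a layered network. It is illustrative only: the layer sizes, activation functions, and random weights are assumptions for demonstration, not Innovatrics’ actual model, and in practice the weights would come from training on those millions of labeled pictures.

```python
# Minimal sketch of a feedforward network's forward pass (illustrative only;
# not Innovatrics' production model). A flattened image patch passes through
# two hidden layers of "electronic neurons" and ends in a single score:
# the network's confidence that the picture contains a face.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Random weights stand in for values that training would normally learn.
W1 = rng.normal(size=(64, 32))   # input layer -> hidden layer 1
W2 = rng.normal(size=(32, 16))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(16, 1))    # hidden layer 2 -> output neuron

def face_score(image_patch):
    """Return a 0..1 score: the network's certainty that a face is present."""
    x = image_patch.flatten()      # input: pixel values of an 8x8 patch
    h1 = relu(x @ W1)              # each neuron fires on a learned feature
    h2 = relu(h1 @ W2)             # deeper layers combine simpler features
    return sigmoid(h2 @ W3)[0]     # output: face found, with some certainty

patch = rng.random((8, 8))        # a stand-in "picture"
print(f"face score: {face_score(patch):.3f}")
```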
The neural network tries to find all the common features of a face – two eyes above a nose, mouth below it, chin, etc. – and looks for other similarities that might not be obvious to a human. During training, each neuron can become a specialist in spotting a certain feature. The network then collects the individual outcomes and sums them up into an output: a face is found (or not) with a specific degree of certainty. Today, the most common approach to analyzing images is deep convolutional neural networks. Convolution is essentially a process in which a small filter slides across the image, so that its parts are mapped out and examined one by one.
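As an illustration of that patch-by-patch mapping, the toy convolution below slides a hand-made edge filter over a random image. The filter values and image sizes are assumptions chosen for demonstration, not parameters from any production network.

```python
# Illustrative 2D convolution, the core operation of a convolutional layer
# (a toy sketch, not a production implementation). A small filter slides
# across the image and produces a map of where its pattern appears.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value scores one image patch against the filter.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter: the kind of simple feature early layers respond to
# before deeper layers assemble such features into eyes, noses, whole faces.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

image = np.random.default_rng(1).random((6, 6))
feature_map = convolve2d(image, edge_filter)
print(feature_map.shape)  # (4, 4): one score per image patch
```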
Better than a human eye
The ability to find patterns hidden to a human observer led to a breakthrough in facial recognition technologies. “Today the error rate has improved 20-fold compared to 2010, and the algorithms are already better than humans at spotting and identifying faces,” says Jan Lunter, calling it an ‘industrial revolution’ in face recognition. The same trend can be observed in object recognition, where the error rate has decreased 12-fold compared to 2010. Speech recognition algorithms are likewise already on par with humans, or even slightly better, at recognizing spoken words.
As Jan Lunter says, this approach also shows where AI is best and most readily applied: to isolated problems with clearly defined answers known beforehand, and with lots of representative data available for machine learning. “It is easier to teach an algorithm to find a cat in an image, but much more difficult when you want to teach an algorithm to drive autonomously,” he adds. Driving a car also poses ethical dilemmas that have no clear-cut answer the AI could adopt.
General AI still has a long way to go
In many cases, the lack of representative data can be mitigated through technology. To teach its AI to read IDs, Innovatrics takes a template of a certain ID and generates random data for its fields, along with possible production errors and wear marks. This way, it can produce millions of IDs to train the AI on. With the help of these synthetic IDs, the AI is now able to ‘correctly locate and read IDs even on noisy, blurry pictures’, Jan Lunter shows. There are also services that can synthesize realistic-looking human faces, which can then be used for machine learning.
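A hedged sketch of that synthetic-data idea follows: fill a blank “template” with random field values, then degrade the image to mimic production errors and wear. The field names, fonts, and degradation parameters are invented for illustration and do not describe Innovatrics’ actual pipeline.

```python
# Sketch of synthetic ID generation (illustrative assumptions throughout):
# random field values on a blank card, plus blur and noise to simulate
# real-world wear. Looping over this yields arbitrarily many labeled images.
import random
import string

import numpy as np
from PIL import Image, ImageDraw, ImageFilter

def random_name(length=8):
    return "".join(random.choices(string.ascii_uppercase, k=length))

def synthetic_id(width=320, height=200):
    # Blank grayscale "card" standing in for a real ID template.
    card = Image.new("L", (width, height), color=230)
    draw = ImageDraw.Draw(card)
    draw.text((20, 40), f"NAME: {random_name()}", fill=0)
    draw.text((20, 80), f"NO:   {random.randint(10**7, 10**8 - 1)}", fill=0)

    # Simulated production errors and wear marks: blur plus sensor-like noise.
    card = card.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 2)))
    pixels = np.asarray(card, dtype=np.float32)
    pixels += np.random.normal(0, 10, pixels.shape)   # grain / noise
    return Image.fromarray(np.clip(pixels, 0, 255).astype(np.uint8))

# Each call yields a new labeled training image; since the generator placed
# the text, the ground-truth field values and locations are known for free.
sample = synthetic_id()
sample.save("synthetic_id_sample.png")
```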
This is a blueprint for what to expect from AI and which fields it will disrupt the most: those that are easy to describe, clear to define, and rich in available data. The traditional example is logistics, which has historically generated lots of data; it can be fed into an algorithm that learns and recommends, e.g., the best routing for a package or delivery. The automotive and other hi-tech industries are highly automated and therefore produce considerable data as well. “The difficult areas are those with complex targets or lots of customer interaction. A hotel concierge will not be replaced by an AI anytime soon,” says Jan Lunter, “although AI may still surprise us in many other fields.”