It uses a ‘deep neural network system’ that works a little like the human brain to analyse infrared images and match them with ordinary photos.
At the moment, facial recognition systems tend to rely on matching clear and well-lit photos, meaning they are not useful if someone is standing in the shade, RT reported.
Computer scientists at the Karlsruhe Institute of Technology, Germany, developed the new facial recognition technique that reads a person’s thermal signature.
Saquib Sarfraz and Rainer Stiefelhagen created the system that uses mid or far-infrared images then matches details with those in ordinary photos.
This is particularly impressive because there is no simple, linear correlation between how a face appears in visible light and in infrared.
The way the human face emits thermal signatures is different to how it reflects light in daylight and these emissions vary depending on the temperature of the skin, environment and even a person’s expression.
Also, infrared images tend to be lower resolution than regular photos, making matching the two a challenge.
To overcome this, they used a deep neural network, which is a computer programme that imitates the way the human brain makes connections and draws conclusions.
But it needs a large dataset to do this in the form of masses of infrared and normal images so it can make comparisons and learn how to make better matches.
The researchers used a set of 1,586 images from the University of Notre Dame which included pictures of 82 people with different facial expressions and in different lighting.
They used images of the first 41 people to ‘train’ the system and the pictures of the remaining 41 people to test it.
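The key point of this setup is that the people used for training and testing do not overlap, so the system is always judged on faces it has never seen. A minimal sketch of that identity-disjoint split (illustrative only, not the researchers’ code; the dataset here is a hypothetical stand-in for the Notre Dame images) might look like this:

```python
def split_by_identity(images, train_ids):
    """Partition (person_id, image) pairs by whether the person is in train_ids."""
    train = [(pid, img) for pid, img in images if pid in train_ids]
    test = [(pid, img) for pid, img in images if pid not in train_ids]
    return train, test

# Hypothetical stand-in for the Notre Dame set: 82 people, two images each.
dataset = [(pid, f"img_{pid}_{k}") for pid in range(82) for k in range(2)]

# First 41 people for training, remaining 41 for testing, as in the study.
train_set, test_set = split_by_identity(dataset, train_ids=set(range(41)))

print(len(train_set), len(test_set))  # 82 82 (two images per person)
```

Because no person appears on both sides of the split, a good test score reflects genuine generalisation rather than memorisation of particular faces.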
The experts found their system outperformed the leader in the field by 10 per cent and could match and recognise a face in just 35 milliseconds.
‘We show substantive performance improvement on a difficult thermal-visible face dataset,’ they write in the study published on arXiv.org.
However, there is still a long way to go before police could use the system to catch criminals, for example, because its accuracy rate is only 80 per cent even when it has multiple visible images of a person to draw on.
When only one visible image was available, the accuracy dropped to 55 per cent.