Deep Fake: How to fool Big Data and Machine Learning biometrics

In this article, we will look at how attackers fool fingerprint scanners, confuse Big Data street video analytics systems, and impersonate another person using the latest Machine Learning technology known as Deep Fake.

From the phone to Big Data video analytics: 7 real-life examples of hacking biometric systems

Let’s start with face recognition in Big Data CCTV systems. Existing Machine Learning algorithms can successfully recognize a person even when only 70% of the face is visible, for example, when it is partially hidden by a medical mask. Glasses, hats, beards and mustaches reduce recognition accuracy from about 95% to 92%. At the same time, such masking methods increase the likelihood of false positives (a type 1 error in the confusion matrix) by about 5 times, to a level of 0.01%. Less complex biometric systems installed on individual devices such as smartphones can be fooled even without special equipment. There have been multiple cases of Apple’s Face ID being unlocked by twins or visually similar people. For example, the Vietnamese company Bkav managed to defeat Face ID with an inexpensive mask that cost only $200. In 2016, moving 3D face models built from random photos taken from the Facebook social network helped scientists at the University of North Carolina fool 4 out of 5 facial recognition systems. Stage makeup, wigs and tinted glasses also increase the likelihood that a Machine Learning algorithm makes a mistake, committing a false acceptance or a false rejection.
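
To make this error terminology concrete, here is a minimal Python sketch of how a false acceptance rate (the type 1 error mentioned above) and a false rejection rate are computed from verification scores. The scores and the 0.60 threshold are made up purely for illustration:

```python
import numpy as np

# Hypothetical similarity scores: 1.0 = perfect match with the enrolled template.
genuine_scores = np.array([0.91, 0.88, 0.95, 0.70, 0.93])   # same person
impostor_scores = np.array([0.10, 0.35, 0.62, 0.20, 0.05])  # different people

threshold = 0.60  # accept if score >= threshold

# Confusion-matrix cells for a verification system
tp = np.sum(genuine_scores >= threshold)   # genuine users accepted
fn = np.sum(genuine_scores < threshold)    # genuine users rejected (false rejection)
fp = np.sum(impostor_scores >= threshold)  # impostors accepted (false acceptance, type 1 error)
tn = np.sum(impostor_scores < threshold)   # impostors rejected

far = fp / len(impostor_scores)  # False Acceptance Rate
frr = fn / len(genuine_scores)   # False Rejection Rate
print(f"FAR = {far:.2%}, FRR = {frr:.2%}")
```

Masking attacks effectively push the genuine and impostor score distributions closer together, so any fixed threshold is forced to trade a higher FAR against a higher FRR.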

In 2017, hackers fooled the iris scanner of the Samsung Galaxy S8 smartphone by faking the iris with an ordinary contact lens placed over a picture of someone else’s eye. The imitation started with a high-resolution digital photo of the eye taken in night mode, with the camera’s infrared filter turned off. Having printed the photo on a laser printer with pre-adjusted brightness and contrast, the attackers laid a simple contact lens on top of it and presented the result to the smartphone’s scanner. In 2016, a fingerprint scanner was discredited in a similar way: it accepted a fake printed with conductive ink on glossy paper as a genuine fingerprint. In this manner, cybersecurity researchers from the University of Michigan (USA) successfully tricked the built-in scanners of the Samsung Galaxy S6 and Huawei Honor 7 smartphones.

Engineers from the Berlin Technical University, in turn, have demonstrated how to get around a biometric authentication system based on the vein pattern of the palm. To do this, they built a prototype device that covertly collects this biometric data using a single-board Raspberry Pi computer and a compact digital camera without an infrared filter. The resulting photos of the palms were processed to clearly expose the veins, which were then traced with single-pixel lines. The image was then printed and covered with wax imitating the surface of the skin. The viability of this method was demonstrated on real Hitachi and Fujitsu equipment in 2018.
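
As a rough illustration of such an image-processing step (a sketch using OpenCV, not the Berlin team’s actual pipeline; the input file name is a placeholder), one could enhance contrast and thin the dark vein regions down to near single-pixel lines like this:

```python
import cv2
import numpy as np

# Placeholder input: a palm photo taken by a camera without an IR filter.
img = cv2.imread("palm_ir.png", cv2.IMREAD_GRAYSCALE)

# Boost local contrast so the darker vein lines separate from the skin.
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Veins are darker than surrounding tissue: adaptive threshold, inverted.
veins = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY_INV, 21, 5)

# Thin the vein blobs down to roughly single-pixel lines (morphological skeleton).
skeleton = np.zeros_like(veins)
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
work = veins.copy()
while cv2.countNonZero(work) > 0:
    eroded = cv2.erode(work, kernel)
    opened = cv2.dilate(eroded, kernel)
    skeleton |= cv2.subtract(work, opened)
    work = eroded

cv2.imwrite("palm_veins_skeleton.png", skeleton)  # ready to print and cover with wax
```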

In 2019, it was shown that the fingerprint scanner of the Samsung Galaxy S10 smartphone could be deceived with an artificial finger model printed on a 3D printer. Later that year, the fingerprint scanner of the OnePlus 7 Pro smartphone was fooled by a fake made of foil and glue.

The incidents described above relied on an exact copy of the biometric data of a particular person. However, many people share common fragments of papillary patterns. Therefore, back in 2017, a method for deceiving biometrics with a universal “master fingerprint” was proposed. Scientists at New York and Michigan universities used Big Data technologies to analyze 8.2 thousand prints and found that each group of 800 randomly selected samples contained 92 universal fragments that exactly matched at least 4% of the remaining prints. The researchers note that, based on such fragments, a set of “master fingerprints” can be created that is able to deceive almost any fingerprint biometric system.
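
The idea can be sketched with toy data in Python. Below, synthetic binary feature vectors stand in for fingerprint templates and a crude bit-difference matcher stands in for a real minutiae matcher; both are simplifying assumptions, but the search logic (keep any fragment that matches at least 4% of the set) mirrors the description above:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PRINTS, N_FEATURES = 800, 64

# Synthetic population with shared structure: a few base patterns plus noise,
# standing in for the fact that many people share common papillary fragments.
bases = rng.integers(0, 2, size=(10, N_FEATURES))
idx = rng.integers(0, 10, size=N_PRINTS)
noise = rng.random((N_PRINTS, N_FEATURES)) < 0.02  # flip ~2% of bits
prints = np.where(noise, 1 - bases[idx], bases[idx])

def matches(fragment, template, tol=4):
    """Crude matcher: a fragment 'matches' if it differs in at most tol bits."""
    return int(np.sum(fragment != template)) <= tol

# Search for "master" candidates: fragments that match at least 4% of the set.
masters = []
for candidate in prints[:100]:  # limit the search for this demo
    hits = sum(matches(candidate, t) for t in prints)
    if hits / N_PRINTS >= 0.04:
        masters.append(candidate)

print(f"{len(masters)} of 100 candidates match at least 4% of all prints")
```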

What is Deep Fake and how dangerous is it for biometrics

In 2019, a wave of hysteria called Deep Fake swept the world when deep machine learning (Deep Learning) technology began to be used to create numerous audio and video fakes of real people. Deep Fake relies on Generative Adversarial Networks (GANs), in which one neural network generates a fake while the other tries to recognize it; the two networks thereby train each other, gradually bringing the result closer to perfection. So far, the most striking example of a practical and unlawful application of this Machine Learning technology is the case of a foreign energy company that lost about 220 thousand euros, transferring them to the attackers’ account as the result of a phone call in which an order to pay a third-party counterparty was given in the voice of the company’s head. Characteristically, not only the boss’s voice was plausibly imitated, but also his manner of speaking, in particular his German accent. Deep Fake video can likewise be generated online, as was shown in the second half of 2019 at a conference at the Massachusetts Institute of Technology.
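
To show what “one network generates a fake while the other recognizes it” means in code, here is a minimal GAN training loop in PyTorch on toy one-dimensional data, a sketch of the adversarial setup rather than an actual Deep Fake model:

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from N(4, 1). The generator must learn this distribution.
def real_batch(n=64):
    return torch.randn(n, 1) + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from fakes.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G in this step
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))  # generator wants D to say "real"
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # approaches 4.0
```

As training proceeds, the generator’s output distribution drifts toward the real one; a Deep Fake system applies the same adversarial principle to faces and voices instead of numbers.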

An additional danger of GAN misuse lies in the ability of these Machine Learning algorithms to learn from the heterogeneous audio, video and images that are abundant in the public domain. This makes it possible to create fake recordings of famous people, celebrities and politicians, which entails social, financial and political risks. In addition, this machine learning technology is also potentially dangerous for ordinary people who use their face and voice as biometric identifiers.

Therefore, even if the Deep Fake audio and video recordings generated and published so far can be dismissed as high-tech editing, additional cybersecurity measures are needed to protect real Big Data biometric systems against such Machine Learning applications. These include multi-factor authentication, protection of biometric templates and multiple verification, which we will discuss in the next article.
