
Can you believe your eyes? How to recognize and protect against deepfakes

Google recently banned deepfake algorithms from Google Colaboratory, its free computing service with access to GPUs. The technology giant is not the only one to regulate deepfakes: several US states have laws regulating them, a draft law in China requires identification of computer-generated media, and the prospective EU AI regulation may include a clause on this particular technology.

Kaspersky experts have explained what deepfakes are and why there is so much controversy around them, along with how users can protect themselves:

The term 'deepfake' commonly refers to various kinds of synthetic, computer-generated media involving people and made with deep neural networks. These might be videos, photos or voice recordings. The use of deep learning, instead of traditional image-editing techniques, drastically lowers the effort and skill required to create a convincing fake.

Originally, the term referred to a particular piece of software which had gained popularity on Reddit. The software could implant the face of one person into a video featuring another, and was used almost entirely to create non-consensual pornography featuring celebrities. According to some estimates, up to 96% of all deepfakes are pornographic, highlighting concerns around deepfakes being used for abuse, extortion and public shaming.

This technology could also aid cybercriminals. In at least two cases, in England, voice deepfakes were used to trick companies into transferring funds to fraudsters by impersonating officers of those companies. Recent research showed that commercial liveness-detection algorithms, used by financial institutions in KYC procedures, could be tricked by deepfakes created from ID photos, creating new attack vectors and making leaked IDs an even more serious issue.

Another issue is that deepfakes undermine trust in audio and video content as they can be used for nefarious purposes. For example, in a recent case a fake interview of Elon Musk was used to promote a cryptocurrency scam. Various experts and institutions, such as Europol, warn that the growing availability of deepfakes can lead to further proliferation of disinformation on the Internet.

It's not all bad news. Image manipulation is as old as images themselves, CGI has been around for decades, and both have found legitimate uses, as have deepfakes. For example, in Kendrick Lamar's recent video 'The Heart Part 5', deepfake technology was used to morph the rapper's face into those of other celebrities, such as Kanye West. In the movie Top Gun: Maverick, an algorithm was used to voice Val Kilmer's character after the actor lost his voice. A deepfake algorithm was also used to create a viral TikTok series starring a fake Tom Cruise. And a few startups are looking for new ways to use the tech, for example, to generate lifelike metaverse avatars.

With all the trust issues surrounding deepfakes, users may wonder how to spot a deepfake. Here are some tips to get started:

A convincing deepfake, such as the one featuring Tom Cruise, still requires a lot of expertise and effort — and sometimes even a professional impersonator. Deepfakes used for scams still tend to be low-quality and can be spotted by noticing unnatural lip movements, poorly rendered hair, mismatched face shapes, little to no blinking, skin color mismatches and so on. Errors in the rendering of clothes or a hand passing over the face can also give away an amateur deepfake.
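One of the cues above, little to no blinking, can be checked semi-automatically. Below is a minimal Python sketch of the eye-aspect-ratio (EAR) heuristic: given per-frame eye landmarks (extracted with any face-landmark detector, which is assumed and not shown here), it estimates how often the eyes close. The 6-point landmark layout, the threshold values and the function names are illustrative assumptions rather than part of the Kaspersky guidance, and an unusually low blink count is only a hint, never proof, that a video is synthetic.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six eye landmarks.

    `eye` is a (6, 2) array of (x, y) points in the common 6-point
    convention: two corner points and two pairs of eyelid points.
    EAR is high for an open eye and drops sharply when the eye closes.
    """
    eye = np.asarray(eye, dtype=float)
    # Vertical distances between upper and lower eyelid landmarks.
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks in a per-frame EAR series.

    A blink is registered when EAR stays below `closed_threshold`
    for at least `min_closed_frames` consecutive frames; both values
    are illustrative and depend on the detector and frame rate.
    """
    blinks = 0
    closed_run = 0
    for ear in ear_series:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:
        blinks += 1
    return blinks

if __name__ == "__main__":
    # Illustrative per-frame EAR values: open eyes (~0.3) with one brief closure.
    series = [0.31, 0.30, 0.29, 0.10, 0.09, 0.28, 0.30]
    print(count_blinks(series))  # -> 1
```

In practice such a heuristic would be run over a whole clip and compared with a normal human blink rate (roughly every few seconds); very low counts in a long, front-facing video are a reason for extra scepticism, nothing more.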

If you see a famous or public person making wild claims or too-good-to-be-true offers, even if the video is convincing, proceed with due diligence and cross-check the information with reputable sources. Note that fraudsters can intentionally degrade video quality, for example through heavy compression, to hide the deficiencies of a deepfake, so the best strategy is not to stare at the video looking for clues, but to use your common sense and fact-checking skills.

A trusted security solution can provide a safety net if a high-quality deepfake convinces users to download malicious files or programs, or to visit suspicious links or phishing websites.

If you are the victim of deepfake porn, you can reach out both to the website to ask for the video to be taken down (many websites prohibit posting deepfakes) and to a law enforcement agency, as generating deepfakes is a criminal offence in some jurisdictions.

“Deepfakes are a prime example of a technology that develops faster than we can actually comprehend and manage the complications. This is why it is perceived both as an addition to an artist’s toolkit and as a new disinformation instrument that challenges what we as a society can trust,” says Vladislav Tushkanov, lead data scientist at Kaspersky.
