What are deepfakes – and how do you recognize them?


What is a deepfake?

Have you seen Barack Obama call Donald Trump "utter nonsense", or Mark Zuckerberg brag about "having total control over billions of people's stolen data", or watched Jon Snow apologize for the dismal ending of Game of Thrones? If you answered yes, you have seen a deepfake. Deepfakes, the 21st century's answer to Photoshopping, use a form of artificial intelligence called deep learning to create images of fake events, hence the name. Want to put new words in a politician's mouth, star in your favorite movie, or dance like a pro? Then it's time to make a deepfake.

What are they for?

Many are pornographic. The AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a near doubling over nine months. An astonishing 96% were pornographic, and 99% of those mapped the faces of female celebrities onto porn performers. As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. As Danielle Citron, a professor of law at Boston University, puts it: "Deepfake technology is being used against women." Beyond porn, there is plenty of spoofing, satire and mischief.

Is it just about videos?

Deepfake technology can create convincing but entirely fictional photos from scratch. A non-existent Bloomberg journalist, "Maisy Kinsley", who had LinkedIn and Twitter profiles, was probably a deepfake. Another LinkedIn fake, "Katie Jones", claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation.

Audio can be deepfaked too, to create "voice skins" or "voice clones" of public figures. Last March, the head of a UK subsidiary of a German energy firm transferred nearly £200,000 to a Hungarian bank account after being phoned by a fraudster who mimicked the German chief executive's voice. The company's insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.




A comparison of an original and fake video by Facebook boss Mark Zuckerberg. Photo: The Washington Post via Getty Images

How are they made?

University researchers and special-effects studios have long pushed the boundaries of what is possible with video and image manipulation. But deepfakes themselves were born in 2017, when a Reddit user of the same name posted doctored porn clips on the site. The videos swapped the faces of celebrities – Gal Gadot, Taylor Swift, Scarlett Johansson and others – onto porn performers.

It takes a few steps to make a face-swap video. First, you run thousands of face shots of the two people through an AI algorithm called an encoder. The encoder finds and learns the similarities between the two faces, and reduces them to their shared common features, compressing the images in the process. A second AI algorithm, called a decoder, is then taught to recover the faces from the compressed images. Because the faces are different, you train one decoder to recover the first person's face, and another decoder to recover the second person's. To perform the face swap, you simply feed encoded images into the "wrong" decoder. For example, a compressed image of person A's face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A. For a convincing video, this has to be done on every frame.
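The shared-encoder, two-decoder layout described above can be sketched in a few lines of Python. This is a minimal illustration of the wiring only: the "networks" here are untrained random linear maps standing in for real convolutional models, and all the dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: a "face" is a flat pixel vector; the encoding is much smaller.
FACE_DIM, CODE_DIM = 64, 8

# One shared encoder compresses both people's faces to common features.
W_enc = rng.standard_normal((CODE_DIM, FACE_DIM)) * 0.1

# One decoder per person learns to rebuild that person's face.
W_dec_a = rng.standard_normal((FACE_DIM, CODE_DIM)) * 0.1
W_dec_b = rng.standard_normal((FACE_DIM, CODE_DIM)) * 0.1

def encode(face):
    """Compress a face image down to its shared latent features."""
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    """Rebuild a face from a code, using a person-specific decoder."""
    return W_dec @ code

# The swap: compress a frame of person A, then decode it with B's decoder.
frame_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_a), W_dec_b)  # B's face, A's expression/pose
```

In a trained system the same step is repeated for every frame of the video, which is why the compute cost scales with clip length.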




A comparison of original and deepfake videos of Russian President Vladimir Putin. Photo: Alexandra Robinson / AFP via Getty Images

Another way to make deepfakes uses what is called a generative adversarial network, or GAN. A GAN pits two artificial intelligence algorithms against each other. The first algorithm, known as the generator, is fed random noise and turns it into an image. That synthetic image is then added to a stream of real images – of celebrities, say – that are fed into the second algorithm, known as the discriminator. At first, the synthetic images look nothing like faces. But repeat the process countless times, with feedback on performance, and both the discriminator and the generator improve. Given enough cycles and feedback, the generator will start to produce utterly realistic faces of completely non-existent celebrities.
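The adversarial loop can be demonstrated on a toy problem. In this sketch, "real images" are just numbers clustered around 4.0, the generator and discriminator are single linear units with hand-derived gradients, and every constant is invented for the example; real GANs use deep networks and vast amounts of data, but the generator-versus-discriminator feedback cycle is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples clustered around 4.0 stand in for real face images.
REAL_MEAN, LR, STEPS, BATCH = 4.0, 0.05, 2000, 64

a, b = 1.0, 0.0   # generator G(z) = a*z + b turns noise into a sample
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c) scores real vs fake

initial_gap = abs(b - REAL_MEAN)

for _ in range(STEPS):
    real = rng.normal(REAL_MEAN, 1.0, BATCH)
    z = rng.normal(0.0, 1.0, BATCH)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += LR * (np.mean((1 - s_r) * real) - np.mean(s_f * fake))
    c += LR * (np.mean(1 - s_r) - np.mean(s_f))

    # Generator step: adjust a, b so the discriminator scores fakes as real.
    s_f = sigmoid(w * fake + c)
    grad = (1 - s_f) * w          # d log D(fake) / d fake
    a += LR * np.mean(grad * z)
    b += LR * np.mean(grad)

final_gap = abs(b - REAL_MEAN)   # generator's offset should drift toward 4.0
```

After training, the generator's output distribution sits much closer to the "real" data than where it started, which is the whole point of the adversarial feedback.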

Who makes deepfakes?

Everyone from academic and industrial researchers to amateur enthusiasts, visual-effects studios and porn producers. Governments may also be dabbling in the technology as part of their online strategies, for example to discredit and disrupt extremist groups, or to make contact with targeted individuals.

What technology do you need?

It is hard to make a good deepfake on a standard computer. Most are created on high-end desktops with powerful graphics cards, or better still with computing power in the cloud. This cuts the processing time from days and weeks down to hours. But it also takes expertise, not least to touch up completed videos and reduce flicker and other visual defects. That said, plenty of tools are now available to help people make deepfakes. Several companies will make them for you and do all the processing in the cloud. There is even a mobile phone app, Zao, that lets users add their faces to a list of TV and movie characters on which the system has trained.




Chinese face-swap app Zao has raised privacy concerns. Photo: Imaginechina / SIPA USA / PA Images

How do you recognize a deepfake?

It gets harder as the technology improves. In 2018, US researchers discovered that deepfake faces do not blink normally. No surprise there: most images show people with their eyes open, so the algorithms never really learn about blinking. At first, this seemed like a silver bullet for the detection problem. But no sooner had the research been published than deepfakes appeared with blinking. Such is the nature of the game: as soon as a weakness is revealed, it is fixed.
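The blink cue can be reduced to a very simple detector. This sketch assumes some upstream system has already produced a per-frame "eye openness" score (real detectors derive one from facial landmarks); the scores, threshold and sample clips below are invented purely for illustration.

```python
def count_blinks(openness, threshold=0.2):
    """Count blinks: runs of consecutive frames below the openness threshold."""
    blinks, closed = 0, False
    for value in openness:
        if value < threshold:
            if not closed:
                blinks += 1   # eye just closed: a new blink begins
            closed = True
        else:
            closed = False    # eye reopened
    return blinks

# Hypothetical per-frame scores: dips below 0.2 are closed-eye frames.
real_clip = [0.35, 0.34, 0.05, 0.04, 0.33, 0.34, 0.06, 0.35]  # two dips
fake_clip = [0.35, 0.34, 0.33, 0.34, 0.35, 0.33, 0.34, 0.35]  # no dips
```

A clip of normal length with a suspiciously low blink count would then be flagged for closer inspection; as the article notes, this particular tell has since been patched by deepfake makers.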

Poor-quality deepfakes are easier to spot. The lip-syncing may be bad, or the skin tone patchy. There can be flickering around the edges of transposed faces. And fine details, such as hair, are particularly hard for deepfakes to render well, especially where strands are visible at the fringe. Badly rendered jewelry and teeth can also be a giveaway, as can strange lighting effects such as inconsistent illumination and reflections on the iris.

Governments, universities and technology firms are all funding research to detect deepfakes. Last month, the first Deepfake Detection Challenge kicked off, backed by Microsoft, Facebook and Amazon. It will see research teams around the world compete for supremacy in the deepfake-detection game.

Last week, Facebook banned deepfake videos that are likely to mislead viewers into thinking someone "said words he didn't really say", in the run-up to the 2020 US election. However, the policy covers only misinformation produced using AI, meaning "shallowfakes" (see below) are still allowed on the platform.




A woman watches a deepfake video of Donald Trump and Barack Obama. Photo: Rob Lever / AFP via Getty Images

Will deepfakes wreak havoc?

We can expect more deepfakes that harass, intimidate, demean, undermine and destabilize. But will deepfakes spark major international incidents? Here the situation is less clear. A deepfake of a world leader pressing the big red button should not cause armageddon. Nor will deepfake satellite images of troops massing on a border cause much trouble: most nations have their own reliable security imaging systems.

However, there is still plenty of room for mischief. Last year, Tesla stock crashed when Elon Musk smoked a joint on a live web show. In December, Donald Trump flew home early from a NATO meeting when genuine footage emerged of other world leaders apparently mocking him. Will plausible deepfakes shift stock prices, sway voters and provoke religious tension? It seems a safe bet.

Will they undermine trust?

The more insidious impact of deepfakes, along with other synthetic media and fake news, is to create a zero-trust society in which people cannot, or no longer bother to, distinguish truth from falsehood. And when trust is eroded, it is easier to raise doubts about specific events.

Last year, Cameroon's minister of communication dismissed as fake news a video that Amnesty International believes shows Cameroonian soldiers executing civilians.


Donald Trump, who admitted to boasting about grabbing women's genitals in a recorded conversation, later suggested the tape was not real. In Prince Andrew's BBC interview with Emily Maitlis, the prince cast doubt on the authenticity of a photo taken with Virginia Giuffre, a shot her lawyer insists is genuine and unaltered.

"The problem may not be so much the faked reality as the fact that real events become plausibly deniable," said Prof. Lilian Edwards, a leading expert in internet law at Newcastle University.

As the technology becomes more accessible, deepfakes could mean trouble for the courts, particularly in child-custody battles and employment tribunals, where faked events could be entered as evidence. But they also pose a personal security risk: deepfakes can mimic biometric data, and can potentially trick systems that rely on face, voice, vein or gait recognition. The potential for scams is clear. Phone someone out of the blue and they are unlikely to transfer money to an unknown bank account. But what if your "mother" or "sister" sets up a video call on WhatsApp and makes the same request?

What is the solution?

Ironically, AI may be the answer. Artificial intelligence already helps to spot fake videos, but many existing detection systems have a serious weakness: they work best for celebrities, because they can train on hours of freely available footage. Technology firms are now working on detection systems that aim to flag fakes whenever they appear. Another strategy focuses on the provenance of the media. Digital watermarks are not foolproof, but a blockchain-style online ledger could hold a tamper-proof record of videos, pictures and audio, so that their origins and any manipulations can be checked at any time.
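The provenance idea can be sketched as a simple hash chain. This is a hypothetical illustration, not any real blockchain product: each registered media file is hashed, each ledger entry also folds in the previous entry's hash so earlier records cannot be quietly rewritten, and a later check can tell whether a file still matches what was originally registered.

```python
import hashlib

def chained_hash(prev_hash, media_bytes):
    """Hash the media, chained to the previous ledger entry."""
    media_digest = hashlib.sha256(media_bytes).digest()
    return hashlib.sha256(prev_hash + media_digest).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.entries = []  # list of (media_hash, chained_hash)

    def register(self, media_bytes):
        """Record a new media file, chained to the previous entry."""
        prev = self.entries[-1][1].encode() if self.entries else b"genesis"
        media_hash = hashlib.sha256(media_bytes).hexdigest()
        self.entries.append((media_hash, chained_hash(prev, media_bytes)))

    def verify(self, index, media_bytes):
        """Check a file against the hash originally registered for it."""
        return self.entries[index][0] == hashlib.sha256(media_bytes).hexdigest()

ledger = ProvenanceLedger()
ledger.register(b"original video frames")
ledger.register(b"original audio track")

untouched = ledger.verify(0, b"original video frames")  # matches the record
doctored = ledger.verify(0, b"doctored video frames")   # tampering detected
```

The chaining means that rewriting an old entry would invalidate every entry after it, which is what makes such a ledger tamper-evident rather than merely a list of checksums.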

Are deepfakes always malicious?

Not at all. Many are entertaining and some are helpful. Voice-cloning deepfakes can restore people's voices when disease takes them away. Deepfake videos can enliven galleries and museums. In Florida, the Dalí Museum has a deepfake of the surrealist painter who introduces his art and takes selfies with visitors. For the entertainment industry, the technology can be used to improve the dubbing of foreign-language films and, more controversially, to resurrect dead actors. For example, the late James Dean is due to star in Finding Jack, a Vietnam war movie.

What about shallowfakes?

Coined by Sam Gregory at the human rights organization Witness, shallowfakes are videos that are either presented out of context or doctored with simple editing tools. They are crude, but undoubtedly effective. A shallowfake video that slowed down the speech of Nancy Pelosi, the speaker of the US House, making her sound slurred, reached millions of people on social media.

In another incident, Jim Acosta, a CNN correspondent, was temporarily barred from White House press briefings after a heated exchange with the president. A shallowfake video released afterwards appeared to show him making contact with an intern who tried to take the microphone from him. It later emerged that the video had been sped up at the crucial moment, making the movement look aggressive. Acosta's press pass was later reinstated.

The UK's Conservative party used a similar tactic. In the run-up to the recent election, the Conservatives doctored a TV interview with the Labour MP Keir Starmer to make it appear that he was unable to answer a question about the party's Brexit stance. With deepfakes, the mischief is only likely to grow. As Henry Ajder, head of threat intelligence at Deeptrace, puts it: "The world is becoming increasingly synthetic. This technology is not going away."