Imagine a dark web economy in which specialists produce misleading content that can be posted across the Internet to influence everything from car purchases to the votes political candidates receive. The deepfake is one such product: a form of fake news in which bad actors use technology to fabricate convincing videos and images. These videos are typically built from existing images using an advanced deep-learning technique known as the Generative Adversarial Network (GAN), a relatively new approach in AI that synthesizes artificial images indistinguishable from authentic ones. Deepfakes are audio, image, and video files that appear to depict real speech or action but actually contain synthetic content produced with modern artificial intelligence.
Although the technique is not limited to faces, its ability to manipulate facial expressions and speech, and to swap one person's face for another's in video, has caused particular concern. In 2013, a journalist and actress made national headlines as one of the first direct victims after her likeness was inserted, against her will, into a pornographic video.
This is not a turning point in itself, but it contributes significantly to the ongoing erosion of trust in digital content. The Internet has long been awash in altered images, audio, and video. The term "deepfake," derived from "deep learning" and "fake," refers to the manipulation of facial expressions, speech, or other aspects of a video or audio recording. Machine-learning methods such as deep learning have advanced rapidly in recent years, making it harder than ever to distinguish original content from fakes.
It is worrying that these fakes keep improving and are becoming increasingly difficult to detect. As more companies follow suit and concern about the technology's misuse grows, we need to look more closely at how it works and what policy issues it raises. Deepfakes are videos, photos, and audio recordings that appear real but have been manipulated with artificial intelligence (AI). The underlying technology can replace faces, manipulate facial expressions, and synthesize both faces and speech.
By manipulating images, videos, and real people's voices, deepfakes can depict someone doing something they never did or saying something they never said. The deepfake algorithm learns the details of a person's face by feeding thousands of images of the target into a machine-learning model. With enough training data, the algorithm can then predict what one face will look like when it mimics another's expression.
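The idea of learning one model of many target faces, then using it to re-render one face with another's expression, can be sketched in miniature. The toy example below follows the shared-encoder, per-identity-decoder layout popularized by early face-swap tools, but everything in it is an illustrative assumption: random vectors stand in for aligned face crops, and single linear maps stand in for deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face crops (flattened 8x8 grayscale patches).
# A real pipeline would use thousands of detected, aligned face images.
faces_a = rng.normal(size=(200, 64))   # "person A"
faces_b = rng.normal(size=(200, 64))   # "person B"

dim_latent = 16
# One shared encoder plus one decoder per identity -- the classic
# autoencoder-based face-swap layout (all layers linear here for brevity).
W_enc = rng.normal(scale=0.1, size=(64, dim_latent))
W_dec_a = rng.normal(scale=0.1, size=(dim_latent, 64))
W_dec_b = rng.normal(scale=0.1, size=(dim_latent, 64))

def mse(x, y):
    return float(np.mean((x - y) ** 2))

lr = 0.01
loss_before = mse(faces_a @ W_enc @ W_dec_a, faces_a)
for _ in range(500):
    for faces, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        z = faces @ W_enc                # shared latent code
        recon = z @ W_dec                # identity-specific reconstruction
        err = recon - faces
        # Plain gradient descent on the reconstruction error.
        grad_dec = z.T @ err / len(faces)
        grad_enc = faces.T @ (err @ W_dec.T) / len(faces)
        W_dec -= lr * grad_dec           # in-place update of W_dec_a / W_dec_b
        W_enc -= lr * grad_enc

loss_after = mse(faces_a @ W_enc @ W_dec_a, faces_a)
# The "swap": encode a face of A, then decode it with B's decoder, so the
# output carries A's pose/expression rendered as B.
swapped = faces_a[:1] @ W_enc @ W_dec_b
print(loss_after < loss_before, swapped.shape)
```

The design point is the shared encoder: because both identities are compressed into the same latent space, a code extracted from one face can be decoded by the other identity's decoder, which is what transfers an expression from one face to the other.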
A similar process trains the algorithm to mimic the accent, intonation, and tone of a person's voice. Face-mapping technology, driven by a deep-learning algorithm, is also used to swap one person's face for another's. The neural network scans large datasets to learn how to replicate the facial expressions of different people, whether a woman's, a man's, or a child's. Deepfake technology allows anyone with a computer and an Internet connection to create realistic photos and videos of people saying or doing things they never said or did. According to a defense strategist who focuses on the use of deepfake technology in the US military and intelligence community, it has the potential to produce content that makes people believe something is real when it is not.
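One concrete piece of the face-mapping step is aligning one face's landmark points (eye corners, nose tip, mouth corners) onto another face's geometry before any pixels are blended. The sketch below uses made-up landmark coordinates and a plain least-squares affine fit; it illustrates the alignment idea only, not any particular tool's actual pipeline.

```python
import numpy as np

# Toy "facial landmarks" (x, y): eye corners, nose tip, mouth corners.
src = np.array([[30, 40], [70, 40], [50, 60], [35, 80], [65, 80]], float)

# The same landmarks as they appear in the target frame:
# rotated by 10 degrees, scaled by 1.2, and shifted.
angle = np.deg2rad(10)
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
dst = src @ rot.T * 1.2 + np.array([15.0, -5.0])

# Solve for the affine transform dst = [src, 1] @ M by least squares,
# using homogeneous coordinates [x, y, 1].
ones = np.ones((len(src), 1))
M, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)

# Applying the recovered transform maps the source landmarks onto the target.
mapped = np.hstack([src, ones]) @ M
err = float(np.max(np.abs(mapped - dst)))
print(err < 1e-6)
```

In a real pipeline the landmarks come from a face detector, and the recovered transform is then used to warp the whole swapped face region onto the target frame so pose and scale match.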
Driven by an innovative new deep-learning method known as the generative adversarial network (GAN), deepfakes have surged in recent years. According to a 2019 study by the research firm Deeptrace, there were 7,964 deepfake videos online at the end of 2018, and just nine months later that number had risen to 14,678. While impressive, today's deepfake technology is still not quite on par with authentic video footage: if you look closely, you can typically tell which videos are deepfakes and which are not. Deepfakes are fake video and audio recordings, created with AI, that look and sound like the real thing. They use deep learning to manipulate or fabricate visual and/or auditory content in order to deceive people.
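The "adversarial" in GAN refers to two competing objectives: a discriminator is scored on telling real samples from generated ones, while a generator is scored on fooling it. The toy sketch below (scalar data, a two-parameter "generator," and a logistic "discriminator" are all illustrative assumptions, far simpler than a real image GAN) just evaluates the two losses once to show their structure.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: scalar samples standing in for the "authentic" distribution.
real = rng.normal(loc=3.0, scale=0.5, size=64)

# Generator: maps latent noise z to a sample; here a single affine map.
shift, scale = 0.0, 1.0                 # toy generator parameters
z = rng.normal(size=64)
fake = shift + scale * z

# Discriminator: a logistic classifier "real vs. generated" on a scalar.
w, b = 1.0, -1.5
d_real = sigmoid(w * real + b)          # probability assigned to real samples
d_fake = sigmoid(w * fake + b)          # probability assigned to fakes

# The two adversarial objectives:
# the discriminator maximizes log D(real) + log(1 - D(fake));
# the generator minimizes -log D(fake) (the non-saturating form:
# it wants its fakes classified as real).
loss_disc = float(-np.mean(np.log(d_real) + np.log(1.0 - d_fake)))
loss_gen = float(-np.mean(np.log(d_fake)))
print(round(loss_disc, 3), round(loss_gen, 3))
```

Training alternates gradient steps on these two losses; as the discriminator gets better at spotting fakes, the generator is pushed to produce samples ever closer to the real distribution, which is what makes GAN-generated faces so hard to distinguish from photographs.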
In the world of cybersecurity, trying to hack a human is known as a social-engineering attack, and deepfakes are among its most dangerous forms. The term "deepfake" also refers to a video in which machine-learning methods are used to make it appear as if a person said something they never said. Prominent examples include a widely shared 2018 video in which a synthetic Barack Obama appears to insult Donald Trump, and manipulated footage of Hillary Clinton that circulated during the 2016 presidential campaign.
According to the Pew Research Center, misinformation and disinformation dominated headlines in 2020, undermining confidence in information on social media and in conventional media alike. Creating a deepfake relies on neural networks that can be trained, through machine learning, to convert the original face in a video into the target's face while maintaining the same pose, expression, and lighting. In short, deepfakes are a way of manipulating video (and sometimes photos and audio) to replace one person with another.