How Machine Learning Drives the Deceptive World of Deepfakes
Deepfakes are spreading fast, and while some have playful intentions, others can cause serious harm. We stepped inside this deceptive new world to see what experts are doing to catch this altered content.
»Subscribe to Seeker! http://bit.ly/subscribeseeker
»Watch more Focal Point | https://bit.ly/2s0cf7w
Chances are you’ve seen a deepfake: Donald Trump, Barack Obama, and Mark Zuckerberg have all been targets of these computer-generated replications.
A deepfake is a video or audio clip in which deep learning models generate a version of a person saying and doing things that never actually happened. A good deepfake can chip away at our ability to discern fact from fiction, testing whether seeing is really believing.
The “deep” part of a deepfake often relies on a specific machine learning technique called a GAN, or generative adversarial network.
In a GAN, two neural networks compete to outsmart each other: one, the generator, tries to create a convincing image of a face, while the other, the discriminator, tries to detect whether that image is fake. Trained against each other for long enough, the generator can produce convincingly realistic results.
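To make that adversarial idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is illustrative only, not the pipeline any particular deepfake tool uses: the toy network sizes, learning rates, and the random tensors standing in for real face images are all assumptions made for the example.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other.
# Assumed toy sizes and random "real" data; a real deepfake pipeline is far larger.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed sizes for this toy example

# Generator: turns random noise into a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how likely an input is to be a real image.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in for a batch of real face images (here: random tensors in [-1, 1]).
    real = torch.rand(32, image_dim) * 2 - 1
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The key design choice is that the discriminator’s verdict is the generator’s only training signal: as the detector gets better at spotting fakes, the forger is forced to get better at making them.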
Deepfakes first started popping up in 2017, after a Reddit user posted videos that swapped famous actresses’ faces into pornography. Today, these videos still predominantly target women, but the net has widened to include politicians saying and doing things that never happened.
In June 2019, the House Intelligence Committee held an open hearing to address the national security challenges posed by deepfakes, manipulated media, and artificial intelligence.
So how are platforms like Facebook, YouTube, Twitter, Google, Instagram, and TikTok dealing with this format of disinformation?
Find out more about the complexities and history of deepfakes, how to spot one, and the strides being made to detect them in this episode of Focal Point.
Special thanks to Harry Bratt, a senior computer scientist at SRI International.
Read More:
President Nixon Never Actually Gave This Apollo 11 Disaster Speech. MIT Brought It To Life To Illustrate Power Of Deepfakes
https://www.wbur.org/news/2019/11/22/…
“Imagine a past in which the crew of Apollo 11 landed on the moon in 1969 — but then became stranded there, leading then-President Nixon to give a speech memorializing the astronauts. A new MIT film installation uses that exact premise to shed light on so-called deepfake videos and how they are used to spread misinformation.”
Even the AI Behind Deepfakes Can’t Save Us From Being Duped
https://www.wired.com/story/ai-deepfa…
“Google and Facebook are releasing troves of deepfakes to teach algorithms how to detect them. But the human eye will be needed for a long time.”
Deepfakes are a real political threat. For now, though, they’re mainly used to degrade women.
https://www.vox.com/2019/10/7/2090221…
“A new report on deepfakes finds 96 per cent involve simulating porn of female celebrities (without their consent).”