Deepfakes: Silicon Valley Prepares to Battle Latest Election Threat
February 11, 2020
U.S. government officials and technology experts are sounding the alarm
about a new social media threat: fake videos of political leaders and
candidates for office.
They are known as “deepfakes,” synthetic videos that use artificial
intelligence to combine images into a convincing illusion, like having a
well-known person say something they never said.
In the technology’s infancy just a few years ago, the flaws in deepfakes
were evident to most viewers. But experts warn that rapid advances have
already made the difference between real and synthesized voice
imperceptible and that it may be just a matter of months before reaching
the same point with video.
In the U.S., fears are mounting that deepfake technology could cause
disruption as people head to the polls.
Lawmakers have proposed legislation aimed at preempting the threat.
Facebook, Google, YouTube and Twitter have updated their policies,
saying they will remove deepfakes designed to deceive voters.
But first they will have to find them.
Teaching an AI to spot deepfakes
A digital arms race is underway to design technological tools that can
detect deepfakes.
One leader in this effort is Hany Farid, a digital forensics expert at
the University of California, Berkeley. His lab receives funding from
both the Defense Advanced Research Projects Agency (DARPA) and Facebook.
He is focusing his efforts on the facial gestures of current U.S.
presidential candidates.
“We have pretty consistent mannerisms, the way we talk,” says Farid.
“We’ve been building what we call soft biometric models.” These
gestural models, as unique to a person as their fingerprints, are made
by feeding hours of video of each candidate into a computer system.
Farid is preparing for a scenario in which a malicious deepfake of one
of the candidates appears in the final days of the election.
“Our hope is to make this technology available to mainstream journalists
and then allow them, as videos start to hit, to authenticate them or
not,” he says.
The San Francisco-based AI Foundation is also joining the detection
effort. The tech startup builds synthetic personalities it considers
beneficial, like a Deepak Chopra AI you can interact with via
smartphone. But it also shares detection tools.
“We want to recognize the potential harm in the kinds of technologies
that we are creating,” says Delip Rao, director of research at the AI
Foundation, which is a hybrid for-profit/non-profit organization.
The foundation is launching a platform called Reality Defender, which
combines detection tools from researchers across the technology
industry and will be available for the public to use.
“This is going to be our Manhattan Project to fight misinformation,”
Rao says.
New bans still allow deceptive video
Farid says that Facebook’s new policy on deepfakes is too narrowly
defined and fears that other forms of political disinformation will
still flourish on the site.
Experts point out that the Facebook ban would not have applied to a clip
of U.S. House Speaker Nancy Pelosi with altered speech that went viral
last year. The clip and others in a category known as “cheapfakes” are
easily made with common editing software. The Pelosi video, which
falsely labeled the congresswoman as “drunk,” received over 2.2 million
views in its first 48 hours on Facebook.
Recently, Pelosi asked Facebook and Twitter to take down another
doctored video showing her ripping up President Donald Trump’s State of
the Union speech while he was speaking. Both Facebook and Twitter
declined.
As primary season begins, there’s a lot at stake. Consumers of online
media may have the most important role in stopping the spread of
synthetic disinformation, Farid says.
“It’s not just an issue of deepfakes,” he says. “It’s not just an issue
of Facebook falling asleep at the wheel. And it’s not just an issue of a
polarized society. It’s all of them coming together that I think is
leading to issues that are existential threats to our democracy.”