The subject came to wider attention in the last month, when the YouTube channel VillainGuy posted this somewhat absurd video of Jennifer Lawrence answering questions at a press conference, except with her face replaced by Steve Buscemi's. The expressions, the mouth movements, and the general believability are quite good. If I didn't know this was supposed to be Jennifer Lawrence, I'd have written it off as a particularly homely actress getting attention for some reason. In other words, I would have thought it was real.
From The Daily Dot, this "explanation":
Utilizing a free tool known as “faceswap,” VillainGuy proceeded to train the AI with high-quality media content of Buscemi. With the aid of a high-end graphics card and processor, “Jennifer Lawrence-Buscemi” was born. VillainGuy says the level of detail was achieved thanks to hours of coding and programming as well.

It might help to explain that this is close to the origins of the technology. "Deep fakes" began as a Reddit subgroup that took the faces of well-known actresses and put them onto performers in porn film clips. This sparked public outrage over the famous actresses' rights to their own images, and at the end of 2017 Reddit banned the subgroup and deleted the involuntary porn from the site.
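For the curious, the process the Daily Dot is describing follows faceswap's three basic stages: extract face images from source footage, train a model on the two face sets, then convert the target video. A rough sketch, assuming the project's command-line subcommands and using illustrative directory names of my own:

```shell
# Hypothetical faceswap session; directory names are illustrative.
# 1. Extract face images from frames of each subject's footage.
python faceswap.py extract -i footage/buscemi -o faces/buscemi
python faceswap.py extract -i footage/lawrence -o faces/lawrence

# 2. Train an autoencoder on both face sets
#    (hours to days, even on a good GPU).
python faceswap.py train -A faces/lawrence -B faces/buscemi -m model/

# 3. Swap faces in the target clip using the trained model.
python faceswap.py convert -i footage/lawrence -o output/ -m model/
```

If this sketch is right, most of the "hours" VillainGuy cites are likely GPU training time in step 2 rather than hand-written code.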
Today, nobody cares about that. What everyone is spun up about is the potential for convincing voters that some politician did something very wrong.
The video’s viral spread online Tuesday comes as numerous U.S. lawmakers sound the alarm over the potential of deepfakes to disrupt the 2020 election. A report from CNN indicates that the Department of Defense has begun commissioning researchers to find ways to detect when a video has been altered.

In particular, some people are all wrapped up about "you'll never be able to believe your eyes again", or "how can we ever trust our senses again?".
Late last year, Rep. Adam Schiff (D-Calif.) and other members of the House of Representatives wrote a letter to Director of National Intelligence Dan Coats to raise concerns over the possible use of the technology by foreign adversaries.
My reaction was, "I think I read this exact same argument about photo editing in the 1980s".
That's the balanced look found in "Deep Fakes: Let's Not Go Off The Deep End", written by Jeffrey Westling on the website Techdirt.
Much of the fear of deep fakes stems from the assumption that this is a fundamentally new, game-changing technology that society has not faced before. But deep fakes are really nothing new; history is littered with deceptive practices — from Hannibal's fake war camp to Will Rogers' too-real impersonation of President Truman to Stalin's disappearing of enemies from photographs. And society's reaction to another recent technological tool of media deception — digital photo editing and Photoshop — teaches important lessons that provide insight into deep fakes’ likely impact on society.

The truth is that trust in photography has gone down, which is appropriate. We used to say "the camera doesn't lie", but even in my early years playing with film cameras (50 years ago), we used to try to get the camera to lie, to make up scenes that weren't there. Before Photoshop, we called it "trick photography". Photoshop and digital cameras have multiplied how often "cameras lie" by millions of times.
In 1990, Adobe released the groundbreaking Adobe Photoshop to compete in the quickly-evolving digital photograph editing market. This technology, and myriad competitors that failed to reach the eventual popularity of Photoshop, allowed the user to digitally alter real photographs uploaded into the program. ...
With the new capabilities came new concerns. That same year, Newsweek published an article called “When Photographs Lie.” ...
When was the last time you saw some picture on a website that was open to comment and someone didn't say it was 'shopped? "Dude, I can tell by the pixels around her neck that's not Jennifer Lawrence" - or the equivalent.
Now, however, the same “death of truth” claims — mainly in the context of fake news and disinformation — ring out in response to deep fakes as new artificial-intelligence and machine-learning technology enter the market. What if someone released a deep fake of a politician appearing to take a bribe right before an election? Or of the president of the United States announcing an imminent missile strike? As Andrew Grotto, International Security Fellow at the Center for International Security and Cooperation at Stanford University, predicts, “This technology … will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions.” Perhaps even more problematic, if society has no means to distinguish a fake video from a real one, any person could have plausible deniability for anything they do or say on film: It’s all fake news.

A weekend ago, the country turned itself inside out over a badly edited video of some teens from a Catholic school nobody had ever heard of interacting with an old drum-beating protestor nobody had ever heard of. The deceptive video went viral (is there a stronger word?) and was widely viewed even though it was discredited in under 12 hours. People still cling to that first deceptive video because it reinforces their mental preconceptions.
The lesson here isn't that videos lie (or don't); it's that people shouldn't jump to conclusions and forward something unchecked. People should care about the truth, not just whatever gives them a momentary pleasure.
My view is that the same phenomenon that played out with Photoshop will play out with these fake videos. They're likely to become extremely common because, to some extent, they can be made on a smartphone (though I doubt a phone-made fake would be as good as "Jennifer Buscemi"), while doing anything convincing with Photoshop requires some skill. Given the large number of deep fakes that will be coming from anyone with a phone who wants to make one, we're going to see huge numbers of fakes that are relatively easy to dismiss. But we shouldn't forget that there will be professionals somewhere capable of making really good ones.
Final words to Westling:
However, we should not assume that society will fall into an abyss of deception and disinformation if we do not take steps to regulate the technology. There are many significant benefits that the technology can provide, such as aging photos of children missing for decades or creating lifelike versions of historical figures for children in class. Instead of rushing to draft legislation, lawmakers should look to the past and realize that deep fakes are not some unprecedented problem. Instead, deep fakes simply represent the newest technique in a long line of deceptive audiovisual practices that have been used throughout history. So long as we understand this fact, we can be confident that society will come up with ways of mitigating new harms or threats from deep fakes on its own.