Fake Video Will Complicate Viral Justice
Credit to Author: Catherine F. Brooks | Date: Mon, 18 Jun 2018 11:00:00 +0000
The camera, we used to say, never lies. We tend to privilege visual content, trust what we see, and rely on police cams, mobile recording tools, and similar devices to tell us what is really happening on the streets, in local businesses, and beyond.
Catherine Brooks (@catfbrooks) is an Associate Professor of Information at the University of Arizona, where she is the associate director of the School of Information and founding director of the Center for Digital Society and Data Studies. She is a Public Voices Fellow with The OpEd Project.
Take, for example, a viral video that shows a white woman calling the police as black men in Oakland attempt to barbecue. Millions are laughing, and the woman's image is being used as a meme across the Internet. When a video of a patron threatening café employees for not speaking English went viral, the subject, New York attorney Aaron Schlossberg, was identified on social media within hours. His office information was shared quickly, and comments on review pages and public shaming ensued. The racist lawyer even ended up with mariachis playing music outside his apartment.
In both these cases, the videos were real, the memes entertaining, and the Twitter storm deserved. After all, mobile videos and other cams provide transformative new avenues for justice, precisely because they can spread like wildfire around the world. But this kind of 'justice' landscape only works as long as we can trust the videos we see, and faked videos are on the horizon. Often called "deepfakes," after the Reddit user who coined the term by swapping famous people's faces into porn videos, fake videos are quickly becoming more prevalent. With a kind of Photoshop for video, artificial intelligence gives just about anyone the tools to generate fake visual content.
Using a tool like FakeApp (an app that uses deep learning to make face-swap videos), pretty much anyone can gather images and make a video without much technical skill. We have moved very swiftly from the crude superimposition of faces in movies and video games to sophisticated AI tools that give the average citizen the means to doctor visual content, and little help in discerning what has been doctored.
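To make the mechanism concrete, here is a minimal sketch, in PyTorch, of the shared-encoder, dual-decoder design that face-swap tools of this kind are generally understood to use. It is a toy illustration, not FakeApp's actual code: the network sizes, the 64-by-64 crops, and the random stand-in data are all assumptions, and real systems add face detection, alignment, and blending on top.

```python
# Toy sketch of the face-swap idea: one shared encoder, two decoders.
# Illustrative assumptions throughout; not any real tool's code.
import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    """The shared encoder learns face structure common to both people;
    each decoder learns to render one specific person's face."""
    def __init__(self, latent=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 1024), nn.ReLU(),
            nn.Linear(1024, latent), nn.ReLU(),
        )
        def make_decoder():
            return nn.Sequential(
                nn.Linear(latent, 1024), nn.ReLU(),
                nn.Linear(1024, 3 * 64 * 64), nn.Sigmoid(),
            )
        self.decoder_a = make_decoder()  # renders person A
        self.decoder_b = make_decoder()  # renders person B

    def forward(self, x, identity):
        z = self.encoder(x)
        dec = self.decoder_a if identity == "a" else self.decoder_b
        return dec(z).view(-1, 3, 64, 64)

model = FaceSwapper()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for cropped frames of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B

# Training: each person is reconstructed only through their own decoder.
for step in range(100):
    loss = (loss_fn(model(faces_a, "a"), faces_a)
            + loss_fn(model(faces_b, "b"), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode person A's frames, render them with B's decoder.
fake = model(faces_a, "b")
```

The trick is that the single encoder learns features shared across both faces, so pushing person A's frames through person B's decoder renders B's face with A's expression and pose. That is the whole attack, and it runs on a consumer GPU.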
In a world of fake news, anyone can write a story that seems reliable; soon, generating fake videos will be just as commonplace. More and more, these videos will provide easy means for harassing individual citizens, influencing public officials, or threatening peers in schools. We can easily imagine a world of revenge porn, cyberbullying, and other kinds of public harassment of average citizens, maybe even children.
Most consumers will be able to recognize the subtle cues of inauthenticity only if they watch very carefully. But as we've learned from the rise of fake news, people often don't consume information carefully. In a world where police cams, public surveillance footage, and even mobile recordings are used in highly consequential settings like court hearings, and where social media persuasion tactics are influencing elections around the globe, assuming people will 'watch carefully' is akin to assuming people will read online content critically. These technologies will grow more sophisticated over a very short period, making it ever harder for average consumers to recognize deceptive content.
Reddit banned deepfakes, but more will surface elsewhere. While consumers of information must stay vigilant and critical when taking in public messages today, tech leaders must develop sophisticated but easy-to-use tools that let average consumers spot doctored content. A blockchain-style record of authentic footage may work, but we'd better move quickly. Our safety, and that of our democracy, depends on it.
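What might that blockchain-style record look like? Here is a minimal sketch under stated assumptions: a camera would publish a hash of each clip at capture time to an append-only ledger, and viewers would later check a clip against that record. Everything here (the Ledger class, the bodycam source label, the stand-in bytes) is hypothetical illustration, not an existing product.

```python
# Hypothetical sketch: anchoring video authenticity in an append-only
# hash chain. Any edit to a registered clip changes its hash and fails
# the check; any edit to a past ledger entry breaks the chain.
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def hash_block(payload: dict) -> str:
    # Deterministic hash of a block's contents (excluding its own hash).
    return sha256(json.dumps(payload, sort_keys=True).encode())

class Ledger:
    """Append-only chain: each entry commits to the previous one,
    so earlier records cannot be silently rewritten."""

    def __init__(self):
        self.blocks = []

    def register(self, video_bytes: bytes, source: str) -> dict:
        payload = {
            "index": len(self.blocks),
            "timestamp": time.time(),
            "source": source,
            "video_hash": sha256(video_bytes),
            "prev_hash": self.blocks[-1]["block_hash"] if self.blocks else "0" * 64,
        }
        block = dict(payload, block_hash=hash_block(payload))
        self.blocks.append(block)
        return block

    def verify(self, video_bytes: bytes) -> bool:
        """True only if this exact file was registered and the chain is intact."""
        target = sha256(video_bytes)
        prev, found = "0" * 64, False
        for block in self.blocks:
            payload = {k: v for k, v in block.items() if k != "block_hash"}
            if block["prev_hash"] != prev or hash_block(payload) != block["block_hash"]:
                return False  # a past record was altered
            found = found or block["video_hash"] == target
            prev = block["block_hash"]
        return found

ledger = Ledger()
clip = b"raw bodycam footage (stand-in bytes)"
ledger.register(clip, source="bodycam-0042")
print(ledger.verify(clip))          # True: matches the registered record
print(ledger.verify(clip + b"!"))   # False: edited footage fails the check
```

The cryptography is the easy part; the hard part is adoption. A check like this only helps if capture devices register footage by default and if platforms surface the verification result where ordinary viewers will actually see it.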
WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints.