On December 4, 2016, a 28-year-old man named Edgar Maddison Welch drove for hours from his home in North Carolina to a DC pizza parlor and opened fire with an assault rifle. The reason? A fabricated story, spread mainly through social media platforms like Facebook and Twitter, claiming that Hillary Clinton and her cronies were running a child sex ring in the restaurant’s basement.
This incident was neither the first nor the latest sign that we are living in what many have called a “post-truth” world. Across political systems in the Western world, facts and statistics have taken a back seat to emotional appeals and tribalism.
Journalists and political commentators now call this type of misinformation “fake news.” But it didn’t take long for President Trump and others to co-opt the phrase as a way to dismiss news stories they disagree with. The effect has been to cast doubt on journalism as a whole; it now seems that almost any fact can be waved away as mere opinion.
Most of this fake news has so far had one thing in common: it is delivered as text. But the way we consume fake media may soon change with the advent of some shockingly advanced new software.
At Adobe MAX 2016, Adobe’s annual conference, the company presented a new technology called VoCo, an audio program that has been compared to Photoshop. But instead of altering pictures, it alters sound clips. Given roughly twenty minutes of recordings of someone’s voice, the program can generate new audio: a user types a phrase the person never uttered, and VoCo speaks it back almost instantly in that person’s voice. And the people at Adobe are not the only ones working on this type of technology. A Canadian startup called Lyrebird has used some of the same techniques, as have researchers at the University of Alabama at Birmingham.
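Adobe showed VoCo only as a demonstration, but the basic workflow it described, capture a sample of a voice and then synthesize arbitrary speech in that voice, can be sketched with openly available tools. The snippet below is a rough illustration using the open-source Coqui TTS toolkit and its XTTS voice-cloning model, not Adobe’s software; the model name, file names, and sample phrase are assumptions chosen for the example.

    # Illustrative voice-cloning sketch using the open-source Coqui TTS toolkit.
    # This is NOT Adobe's VoCo; file names and the example phrase are placeholders.
    from TTS.api import TTS

    # Load a multilingual voice-cloning model (XTTS v2).
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # Synthesize a phrase the speaker never actually said,
    # in the voice captured in speaker_sample.wav.
    tts.tts_to_file(
        text="I never said this sentence out loud.",
        speaker_wav="speaker_sample.wav",  # short recording of the target voice
        language="en",
        file_path="cloned_output.wav",
    )

Where VoCo reportedly needed about twenty minutes of speech, newer voice-cloning models of this kind can work from only a few seconds of audio, which is part of what makes the technique so unsettling.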
This same concept is also being developed for video. Last year, researchers at the University of Erlangen-Nuremberg and Stanford University developed a technology called Face2Face. The system captures a source actor’s facial expressions and transfers them, in real time, onto video of a target actor. One of the example targets used to demonstrate the technique was the President, which hints at the terrifying power this new technology might have. Other projects, such as “The Digital Emily Project,” have developed similar capabilities.
This kind of innovation raises obvious concerns that technology might soon make the problem of fake news even worse. With realistic-sounding audio clips and convincing video corroborating textual claims, experts worry that misinformation on the Internet could become even more chaotic than it is right now.
Once applications like these are publicly available, almost anyone will be able to create videos of political figures (or anyone else) saying whatever the author wants them to say. Even if forged audio and video can eventually be detected, misinformation travels fast enough to cause immediate consequences; by the time a clip is debunked, the truth may be largely irrelevant. After all, if a text-based fake news story can convince someone to shoot up a pizza parlor, what will people do when they can see and hear a political figure spout off some other kind of nonsense?
Once this new type of fake media becomes commonplace, anything that happens can be called into question. Politicians and others in positions of trust might get away with denying just about anything. After all, news consumers will say, how do we know what’s true anymore?
This raises the question: what role will reality play in our post-truth world? Does objective reality become meaningless? Adopting a doctored version of events clearly carries real consequences regardless of the truth. If everyone believes something happened even though it didn’t, won’t their reactions reflect that misinformation? And doesn’t the acceptance of that misinformation and its consequences create a new, bleak kind of truth of its own?
Experts seem to be split over whether we can overcome this problem of misinformation in our media. In the summer of 2017, the Pew Research Center asked 1,116 experts, including scholars, technologists, and others, this question:
“In the next ten years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?”
Respondents could answer one of two ways: the information environment will improve, or it will not. Just over half, 51 percent, said they thought it would not improve. Most of the explanations for this answer fell into two categories. The first is that the current state of our news responds to our “deepest human instincts”: by nature, we are drawn to information that supports our preexisting beliefs. And with online filter bubbles designed to show us exactly what we want to see, hear, and read, it will become harder and harder to question factually untrue narratives.
The second is that our brains cannot keep up with the pace of technology, leading to chaos that will encourage many to completely give up trying to figure out what is going on in politics and culture.
However, 49 percent of respondents said that new technology can also help fix these problems and that humans have a natural inclination to come together to solve them. This optimistic view raises its own question: are the benefits of developing these technologies worth the risks to national security, journalistic objectivity, and even societal stability? Vocal and facial fakery will mostly be useful to those producing audiobooks, podcasts, movies, and TV shows: instead of having to re-film or re-record parts of a production that don’t turn out well, they will be able to inexpensively edit a line or a facial expression. Does the reduction in expense for these industries warrant the further destabilization of an already distrusted political and journalistic atmosphere?
Some scientists and technologists believe that it is their job to blindly pursue innovation regardless of the costs, and that it is then up to society to deal with the consequences. But this viewpoint reflects a moral laziness on the part of developers who insist that this technology is the wave of the future. As with the problems predicted to accompany further development of AI, much of the coming damage to the trustworthiness of media could be avoided if we simply refrained from building these new technologies. It’s time technologists take responsibility for the kinds of products they’re creating, because until reliable technical defenses exist, releasing these products into society poses immediate and deeply concerning risks that could very well have drastic consequences.