The 2016 US election was corrupted through the malicious use of digital and social media to spread disinformation and propaganda. Some may dispute the extent to which social media trolls influenced the outcome, but whatever influence they had will only grow stronger. The algorithmic micro-targeting used to push disinformation to receptive audiences is merely the opening salvo of what is to come: a war on democracy waged with computational weapons designed to deploy propaganda and disinformation at unprecedented scale.
Thomas Jefferson once said, “If a nation expects to be ignorant and free in a state of civilization, it expects what never was and never will be.” If an electorate proves incapable of separating truth from fiction, democracy could falter and perhaps even fall. It isn’t just Grandma spamming your inbox with conspiracy theories; even tech-savvy youth now have difficulty spotting bias or hidden motives in online content.
Since 2012, the University of Oxford’s Computational Propaganda Project has been tracking the rise of “computational propaganda”: the use of algorithms, automation, machine learning, and AI to spread false information in order to influence public opinion. Its research estimates that in 2015 over 40 countries deployed some type of political bot, and it anticipates that such bots will become pervasive across the globe. Russia has used social media to great effect, most notably in the US, but there is also evidence that digital propaganda has affected democratic processes and public opinion around the world, from the Catalan independence movement to the UK’s vote to leave the EU.
Aside from the increasingly advanced psychographic profiling and targeting used by troll farms, other technological trends will arm those seeking to persuade us rather than invade us. Advances in natural language processing (NLP)—the ability of computers to understand speech, interpret writing, and generate responses—are already transforming customer service and sales. NLP lets companies field virtual customer service agents, in the form of chatbots, that interact with customers 24/7, answering questions and providing technical support. Oracle estimates that within four years, 80% of companies will use chatbots, up from 36% today. These rudimentary systems are frustratingly bad at customer service now, but they will only improve as adoption spreads and more resources are devoted to making them sophisticated. Passing the Turing test isn’t a matter of academic curiosity to these companies; it’s money. As the market research firm Forrester reports, “Companies will continue to explore the power of intelligent agents…. They will anticipate needs by context, preferences, and prior queries and…will additionally become smarter over time via embedded artificial intelligence.”
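To make concrete how rudimentary these systems still are, here is a minimal sketch of the keyword-matched, canned-response design behind many early customer service chatbots. The intents and replies are invented for illustration; production systems replace this lookup with trained NLP models.

```python
# Minimal sketch of a rule-based customer service chatbot: keyword-matched
# "intents" mapped to canned replies. All intents and replies here are
# invented examples; real deployments use trained NLP models instead.

INTENTS = {
    "refund": (("refund", "money back", "return"),
               "I can help with returns. What is your order number?"),
    "hours":  (("hours", "open", "closed"),
               "Our support chat is available 24/7."),
    "agent":  (("human", "agent", "representative"),
               "Connecting you with a human agent now."),
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the canned reply for the first intent whose keywords match."""
    text = message.lower()
    for keywords, response in INTENTS.values():
        if any(keyword in text for keyword in keywords):
            return response
    return FALLBACK

print(reply("How do I get my money back?"))  # refund reply
print(reply("What's the meaning of life?"))  # fallback reply
```

The weakness is obvious: any phrasing outside the keyword lists hits the fallback, which is why these bots feel frustratingly bad today, and why so much money is being spent replacing the lookup table with machine-learned language understanding.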
It’s this unholy union of AI customer service chatbots, big data, psychographic profiling, and the authoritarian desire for information control that will test the very fundamentals of democracy. We’ve all had the experience of visiting a website and having a friendly pop-up AI ask if we have any questions. These customer service bots used to be more trouble than they were worth, but now they are starting to offer a more seamless experience.
The real danger to democracy comes when these programs are combined with social media influence techniques that have already proven effective across the globe. Instead of troll armies in a warehouse spouting pre-written responses to political opinions in broken English, these propaganda bots will be indistinguishable from an actual person conversing with you on the Internet. In fact, they will be tailor-made to persuade you, based on whatever psychographic category your online behavior suggests. Google knows you better than you know yourself.
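What that tailoring might look like is simple enough to sketch. The following is a hypothetical illustration of psychographic micro-targeting: one political payload, framed differently for each segment a user’s behavior suggests. The segment names, behavioral signals, and message variants are all invented.

```python
# Hypothetical sketch of psychographic micro-targeting: the same political
# payload is reframed for whatever segment a user's online behavior
# suggests. Segments, signals, and message text are invented examples.

MESSAGE_VARIANTS = {
    "fearful":       "They don't want you to know how unsafe your city has become...",
    "contrarian":    "Here's the story the mainstream media refuses to cover...",
    "communitarian": "Your neighbors are already talking about this...",
}

def infer_segment(profile: dict) -> str:
    """Crude stand-in for a trained psychographic classifier, which would
    score thousands of behavioral signals instead of two flags."""
    if profile.get("engages_with_local_groups"):
        return "communitarian"
    if profile.get("shares_anti_establishment_content"):
        return "contrarian"
    return "fearful"

def targeted_message(profile: dict) -> str:
    """Select the message framing most likely to persuade this user."""
    return MESSAGE_VARIANTS[infer_segment(profile)]

print(targeted_message({"shares_anti_establishment_content": True}))
```

At scale, the same loop runs for every user: classify, select the frame most likely to persuade, measure the reaction, and refine.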
Machine learning techniques are already being used to personalize and refine the rhetorical style of chatbots to fit the audience. In March 2016, Microsoft publicly released an advanced AI chatbot dubbed Tay, with the following goal: “Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
Microsoft released Tay in an effort to gather real-world user data on millennials for its NLP projects. But Tay’s conversations quickly turned dark: within a couple of days it had become a Holocaust-denying white supremacist. Microsoft took it down, contending that the racist messages were the result of people intentionally manipulating the machine. After being retooled and reintroduced, Tay was taken down again for spouting more inappropriate comments, despite the engineers’ best efforts. Although widely interpreted as a failure, Tay actually succeeded in adapting itself to its audience; it was the audience that was the problem. And though Microsoft abandoned the Tay project, there is little chance it has abandoned AI chatbots, given the anticipated size of the market: the research firm Tractica estimates the NLP software market will grow from $136 million in 2016 to $5.4 billion by 2025.
AI bots are also becoming capable of emotional interactions more complex than tweeting. They are even being used as therapists and post-war trauma counselors: the startup X2AI makes therapist chatbots available to those displaced by the civil war in Syria. A virtual counselor named Karim can listen to and provide mental health advice for victims of the conflict, who would otherwise have difficulty finding in-person mental health services. A chatbot therapist may sound silly, but some psychiatric professionals are not entirely dismissive of the idea; David Spiegel, a Stanford professor of psychiatry, told The New Yorker that he could envision it being effective in a clinical setting. Chatbots, in other words, are becoming capable of responses that take human emotion into account, and appeals to emotion are a central part of propaganda and disinformation.
Widespread adoption of bots on social media has already occurred. It’s estimated that nearly 15% of Twitter accounts are bots. They aren’t chatting much yet, but these bots “like” posts and follow targeted people. Twitter has shown little interest so far in stopping the problem, possibly because its advertisers might be upset to discover that user numbers were being inflated by fake accounts.
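Estimates like that 15% figure come from classifiers that score accounts on behavioral features. Here is a deliberately simplified, hypothetical version of such a score; the features, weights, and thresholds are invented, and real detectors use supervised machine learning over hundreds of signals.

```python
# Deliberately simplified, hypothetical bot score. Features, weights, and
# thresholds are invented; real detectors train classifiers on hundreds
# of behavioral signals.

def bot_score(account: dict) -> float:
    """Return a score in [0, 1]; higher suggests automation."""
    score = 0.0
    if account["tweets_per_day"] > 100:                   # superhuman posting rate
        score += 0.4
    if account["following"] > 10 * account["followers"]:  # follow-spam pattern
        score += 0.3
    if account["likes_per_day"] > 200:                    # mass "liking" behavior
        score += 0.2
    if not account["has_profile_photo"]:                  # blank default account
        score += 0.1
    return min(score, 1.0)

suspect = {"tweets_per_day": 240, "followers": 12, "following": 3400,
           "has_profile_photo": False, "likes_per_day": 500}
print(bot_score(suspect))  # 1.0
```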
The use of social media as a news source is growing: a 2017 Pew Research Center survey found that most Americans get at least some of their news from social media platforms. And as Newsweek recently reported, even when Twitter sent people messages revealing their interactions with identified Russian propaganda trolls, some refused to believe it. In certain cases, they maintained that Twitter was in on some conspiracy with the US government rather than admit they may have been deceived. That is how effective this method of propaganda can be.
Now we face the prospect of, instead of thousands of trolls in Russia, perhaps millions of chatbots at the beck and call of corporations, foreign governments, criminals, or fascists. In the future, entire public discussions online could be nothing but two AIs battling each other to influence readers’ opinions. As real voices are drowned out, people’s opinions will be shaped by messaging and propaganda rather than by logical arguments or even genuine appeals to emotion.
The truth is that troll farms already use simple automated bots to spread disinformation, posting across multiple social media channels under fake identities, and they monitor data from user interactions to optimize the propaganda process. As natural language processing and AI grow more sophisticated, these techniques pose an existential threat to democratic discourse. Future elections may be marred by trolling and propaganda designed to shape public opinion around the world.