New AI Paper Warns of Malicious Intent

Discussions on the dangers of Artificial Intelligence (AI), whether fictional à la The Matrix, or drawn from real life, generally revolve around the same thing: the unintended consequences of AI built with poor controls and mismatched incentives. In those scenarios, once AI bursts the inadequate bonds humanity has placed on it, it mutates from its original harmless intent into something pernicious.

In February 2018, AI researchers and cybersecurity experts from Oxford’s Future of Humanity Institute, Cambridge University’s Centre for the Study of Existential Risk, the Electronic Frontier Foundation, the Center for a New American Security, and others released a new white paper: “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”

Unlike previous debates over AI, this new report examines what happens when very bad people deliberately use the expanding capabilities of AI to rob, to murder, to sabotage, and to destabilize nation-states. The report’s timeline is focused not on some far-off future, but on what may be done with the technology available now or within the next five years.

Fortunately for readers who lack the time to plow through a 100-page policy paper penned by mathematicians and computer scientists, The Technoskeptic was able to parse the Malicious AI report with the help of two people intimately involved in the development and use of AI. One was Scott Amyx, managing partner of Amyx Ventures, a venture fund focusing on incubating tech startups around the world. The other was Nicholas Economou, Senior Advisor to the Artificial Intelligence Initiative of the Future Society at Harvard, and CEO of H5, a legal-services AI company.

[Note: We also sought comment on the report from the House Subcommittee on Research and Technology; from Senator Jeff Flake, who chairs the Senate Judiciary Subcommittee on Privacy, Technology, and the Law; and from the White House’s Office of Science and Technology Policy. None responded.]

The report lays out four brief scenarios that are already happening in some form, or might feasibly appear in the near term:

Sophisticated Phishing. An administrator in charge of a security robot has her social media “scraped” by an AI, which learns of her personal interest in model trains. The AI places a model-train ad where she will see it online at work. When she clicks on a link to get a brochure, the AI sends her a file infected with malware that compromises her work network and her security robot.

Automated Hacking. AI is used to continuously update and modify ransomware to keep it ahead of anti-virus and security software designed to stop it, leading to a global ransomware plague. It is eventually tamed, but not before widely infecting Internet of Things devices, shutting down the heating and cooling systems in countless critical locations (such as hospitals) and permanently “bricking” untold mobile devices.

Commandeering of Physical Objects. A cleaning robot of the same make as that used by the German Ministry of Finance is infiltrated into Ministry grounds. This “Manchurian Candidate” robot performs routine cleaning tasks, exactly as its robotic counterparts do, until one day it visually identifies the Finance Minister, moves towards her, and detonates an onboard IED, killing her.

No-Friction Totalitarianism. A citizen, angered that rampant cyber attacks against his country are conducted with impunity, complains online that people should be protesting government inaction. He also buys smoke bombs. Neither act is illegal, but his repressive government, which seeks to stop protests before they start, uses AI to track all potential dissenters, and arrests him the next day.

The paper’s authors noted this was only a smattering of the myriad destructive uses AI could facilitate.

If the short scenarios were the appetizer, the main course of the report was four key recommendations to mitigate AI’s potential for malicious use.

Nick Bostrom, Director of Oxford’s Future of Humanity Institute / ©Jean-Marc Ferré / CC

The first recommendation: “Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.”

Amyx had no doubt getting politicians involved would be a challenge. “I fear that this will fall on deaf ears with the government.” Amyx thinks that, outside of military and intelligence circles, which are starting to at least understand AI threats, politicians “lack imagination to understand what is possible. They are not technically sophisticated enough to have a deep enough conversation…they could work with teams of analysts, but I don’t think they’re going to be able to understand the technology enough, or that they will make the effort to learn.”

One of Amyx’s biggest concerns was an expansion of attacks on the legitimacy of Western governments. “Robotic process automation is a step below [future AI], but we have it now, and it incorporates AI. It makes it easy to scale things cheaply and easily, including proliferation of fake news on social media, programmatic ad buys and targeting…. Right now, at least we have things like video footage that document real events. In the future, with adversarial machine learning, it will be able to render what looks like truthful, high-quality video, audio, and text. And bots can create big enough social networks that self-reinforce to give the impression that it is truthful information. Ultimately, the real risk is undermining democracy and governments.”

Economou agreed that getting policymakers engaged is the key—but perhaps most difficult—recommendation of the report to implement. “If we don’t find a way to govern and develop norms, I fear realities on the ground will pre-empt those efforts. But to operationalize governance, there are prerequisites for success. We need a framework as to what governance means.”

The second recommendation: “Researchers and engineers in AI should allow misuse-related considerations to influence research priorities and norms, and proactively reach out to relevant actors when harmful applications are foreseeable.”

Amyx concurred that this was an important point and explained why AI researchers are prone to overlook the potential downsides of their work. “They are looking at it from a very narrow perspective; looking for advancement and technological leap in capability…they don’t do scenario planning or game theory evaluation [of potential threats]. From a private-sector perspective, it is driven by economics. There is an AI race, so even if they kind of know about those concerns, they still forge ahead. It is misaligned incentives.”

Amyx related a recent encounter with researchers who were concerned about their own work, because they were developing computer code to beat the best human champions in Texas hold ’em poker. “Who would be interested in that research? People who run online gaming platforms, and those who want to rig Texas hold ’em games. People who want to create an unfair advantage.”

Digital Eye / iStock.com / ValeryBrozhinsky

The third recommendation: “Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.”

In other words, the report suggests that systems already used in computer security, which include “red teaming” (setting up an “opposition force” to attempt to breach security protocols), pre-publication review, and tight export controls, should spread to all areas of AI research.

Amyx commented that while the measures outlined were worthwhile, they were incomplete. “I took part in a workshop that included a company that made a profound technology and who confided they’d been contacted by ‘bad actors.’ Their attitude was they were not responsible for ethical or moral ramifications of what happens with their products once they are sold…even beyond that, there are researchers working for different parties that may not even realize that their funding source may be coming from so-called bad agents. They can take public research and deconstruct or reverse-engineer and be able to create their own set of algorithms fairly quickly.”

In other words, while incorporating best practices to limit risk is important, those researching AI have to be aware of what terrestrial security and intelligence experts call “insider threat”—actors within the field who will not be operating with benign motives—and proceed accordingly.

The final recommendation: “Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.”

Establishing this kind of dialogue is one of Economou’s chief preoccupations. He’s a member of the Law Committee of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and served as chair of the Roundtable Committee on AI in the Law at the Global AI Governance Forum in Dubai in February 2018.

He believes one of the main drawbacks when grappling with the challenges posed by AI is that non-technical types believe it is over their heads, and so have ceded the battlefield to tech elites. “So far there has been very little discussion of what a framework [for governing AI] would look like, or discussion of who should be involved. So far, it is tech or tech visionaries. That is not enough. We need representation from all aspects of society, even psychologists and philosophers. We also have to spell out at very basic levels what values we are trying to support with a governance framework, and build from there. One value would be: all humans are born equal in rights and dignity.”

From a Policy Paper to the Real World
When asked how best to manage the various suggestions of the report, Amyx was clear. “The best approach has to be industry-driven, it is a consortium, it is self-regulated, and they see from a viewpoint of public good will that it is in their best interests to make sure their product is doing the right thing for the right purpose. It is a matter of time before IBM Watson, Google, Microsoft, and others active in AI research put together that kind of framework from a consortium. I think that tends to work the best.”

For Economou, creating measurements of how well AI does what it purports to do is crucial. “How do we know, or don’t know, what AIs do? When AI was introduced into the discovery process, nobody knew how to measure if AI works to find the facts, but that is central to finding justice. NIST (the National Institute of Standards and Technology) did a study looking at whether AI in real-world fact-finding was meeting goals. Having measurements of whether AI is accurately doing what it is supposed to do is important if people are going to use AI instead of lawyers. Ten years ago, AI was used in law with no measurements; today, thanks to NIST and other developments, we have very tangible ways to measure if it is working. That is hopeful.”

Economou also believes creating norms about who uses AI will be a key safeguard. “AI doesn’t work on its own very often; it is in the hands of an operator. We will rely on operators of AI to use it correctly, just the way we trust a surgeon with a scalpel, because for surgeons, society has established norms of competence. Today we have no norms of who uses AI, be it for financial services or diagnostics in medicine. Today any judge, doctor, or lawyer can say, ‘I’m competent to understand, use, and measure AI.’ We need to measure the AI and the person using it together. We want in 20-30 years to have well-defined norms about who uses AI.”

A Technoskeptic’s Perspective on the Malicious AI Report
Perhaps any paper created by a group of researchers and academics that must pass muster with all of them is by necessity so watered down that it becomes soporific. “The Malicious Use of Artificial Intelligence” describes a clear and present danger. When considered as a document written by AI researchers for AI researchers, it is competently constructed. But when the report is considered as a piece of science communication, something to get people engaged, it fails. It is a report that only a policy wonk could love.

Malicious AI Report Cover

Although the report’s top recommendation is to get policymakers engaged with the development of AI now, that call to action, as both Economou and Amyx suggested, is quite likely to fall short.

For non-technical audiences, understanding is not reached through an agglomeration of factual bullet points, but through a narrative that engages and explains why they should get involved. If getting policymakers engaged now is truly crucial, the report misses the mark.

Missing was a scenario envisioning the political capital and influence a politician might gain by taking the lead on regulating a difficult area that society will have to grapple with, like it or not.

Or, conversely, given how highly motivated by fear US politicians seem to be, a scenario spelling out how voters will blame them for ignoring this first big warning because they were too busy making fundraising phone calls and flaming each other on Twitter.

One could imagine a more effective version of the Malicious AI report including a scenario in which voters take extreme umbrage after an AI-fueled disaster when they learn that, just to pick a couple of random examples, even those who serve on the House Subcommittee on Research and Technology, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, or the White House’s Office of Science and Technology Policy, seem too busy to even read their emails or answer their phones….