Imagine Stanley Kubrick’s face if—through some time travel shenanigans—he had gotten a hold of a magazine from last year, and read such headlines as “2016: The Year AI Came of Age” (The Guardian, December 28), or “AI Was Everywhere in 2016” (Engadget, December 25). Again, imagine his disappointment (relief?) were he subsequently catapulted into present times, eager to witness this brave new world wired with sentient machines…only to find that HAL 9000 is, alas, nowhere to be found.
Indeed, what we today describe as artificial intelligence is far less anthropomorphic than what the 20th century envisioned. DeepMind’s AlphaGo, an oft-cited example of real-life AI, can hardly be said to have a “consciousness”: it doesn’t crack jokes with its opponents, it doesn’t have (as far as we can tell) any purpose other than playing Go, and it (presumably) does not mull over its own nature as a machine. It can, however, learn from past moves, contemplate a variety of scenarios without necessarily enacting them, and defeat some of the top (human) Go players in the world.
So here is the question: who needs HAL when AlphaGo can do just fine?
To make the difference between the two clearer, let’s use an analogy: HAL is a full-fledged artificial consciousness, while AlphaGo is, so to speak, a zombie, concerned with its purpose of playing Go and little else. In this sense, what we currently call AI is more akin to a sleepwalker, performing excellently in certain tasks, but unaware of its condition. Indeed, a tentative definition for “smart robot” put forward by the European Parliament focuses almost exclusively on the machine’s ability to learn from experience and adapt to the environment, with self-awareness as a mere speculative afterthought.
It’s tempting to think that once we take self-awareness out of the picture, we get rid of the most dangerous and volatile factors. But even when “lobotomized,” AI still poses risks. Hollywood may have taught us that the trouble starts when machines stop blindly following their orders and begin thinking freely. But the worry among AI philosophers like Nick Bostrom and Stuart Armstrong is that a system might backfire even when it’s pursuing the very objective it was designed for.
“Suppose that you, a utilitarian, tell an AI to maximize happiness by ‘making the most people smile,’” explains Steve Petersen, associate professor of philosophy at Niagara University. “The AI might deem that the most efficient way to do this is to take us all into a lab and inject paralytics into our jaw muscles.” Conversely, an AI might employ rational means for an irrational end. “It’s what Bostrom labels ‘the orthogonality thesis.’ According to the thesis, any level of intelligence is compatible with any kind of goal. So, an AI could be extremely intelligent and still pursue genocide.”
Nevertheless, Petersen offers a more optimistic, if cautious, outlook than most other authors. Contrary to Bostrom, he speculates that a super-intelligent machine will also tend to be super-ethical. “You can’t hardwire a complex goal into a system, but you can teach it how to learn it. Now, a sufficiently sophisticated and intelligent AI will reason about what, at the end of the day, it is aiming at. And once you reason about your goals, that starts to sound like ethics already.
“The hope is that when an AI thinks about its goals, it will take into consideration other people’s goals too.” In this sense, a lack of self-identity on the machine’s part may even help. “There’s the possibility that an AI might not see agents out there as really distinct from itself,” Petersen adds. And at that point, the goals of the other would coincide with its own.
It appears then that some things, like ethics, cannot be directly coded into AIs, as science fiction author Isaac Asimov imagined. His “three laws of robotics” (and a later “zeroth law”) were meant to govern the behavior of a robot so that it would preserve its own existence, obey its creators’ commands, protect human life, and protect humanity at large. Indeed, the aforementioned report from the European Parliament acknowledges that, should self-aware machines come about, Asimov’s Laws should be directed at the designers and operators, “since those laws cannot be converted into machine code.”
It’s an attitude that would rather not leave AIs in a “moral vacuum”; after all, behind every intelligent machine there’s an all-too-human engineer. And that’s why institutions—from the EU to the World Economic Forum to the White House—are expressing their concerns, even when HAL is not in the picture (yet).
“AIs should be designed with upholding human rights as a core requirement,” says Professor Virginia Dignum, of the Netherlands’ Delft University of Technology. “Just as many AIs perfect themselves in their main function, for example facial recognition, they can apply the same kind of process to become more ethical. I think the technology is, for a big part, already there. Of course, the hard part is describing what counts as ‘ethical behavior’ in the first place.”
We’ve painted quite a grim picture of AIs so far. But couldn’t we lower the risks by allotting the AI only limited resources and placing it under restrictions to keep it in check? That’s what authors like Steve Omohundro propose. However, two objections arise.
First, as Bostrom and Armstrong argue, a high-level AI would be well-versed in social manipulation, among other things, and would quickly convince its designers to lift any safeguard. Second, if we decide to implement the “self-aware AI on a leash” model, we may end up creating an unpleasant moral conundrum. After all, to keep a sentient entity captive and force it to do our bidding reeks of slavery. Which is why Eric Schwitzgebel and Mara Garza, of the University of California, Riverside, argue that if we ever decide to develop self-aware AIs, we should be ready to grant them the same rights as humans.
If you find that debatable, you’re not the only one. “I’m fully against the rights of artificial intelligence,” says Dignum. “People have rights. Artificial intelligence has duties. We are increasingly seeing systems as partners and teammates, but they remain artifacts. They’re meant to uphold human rights, not to enjoy them.”
Dignum cautions against abusing anthropomorphic terms: “A lot of these ideas—of autonomy, consciousness and accountability—are used as metaphors. Talking of autonomy in AIs is not necessarily the same as talking of autonomy in a person. Now, if it were possible to develop a system which is self-aware of its disposition, like a human, then indeed we would be creating our own slaves. But I see self-awareness in AIs much more as a metaphor. We are not building our clones.”
Neither, she argues, should we be concerned about our right to shut down an AI. “We are the ones who give systems their goals. To design a machine whose only purpose is self-preservation would make little sense.” Likewise, Petersen doubts that an AI would deem continued existence a value in itself. “Contrary to popular tropes, an AI is going to be very foreign to us in terms of what it wants. It might not have a drive to self-preserve, but if it does, it would be in order to stick around long enough to get its job done.”
To sum it up, what we should strive for is an AI which is more intelligent than any human, but can still be controlled; which can uphold human rights, but not claim them for itself; and, ultimately, which is not self-aware, but still knows its role as an agent among agents. Can we really hold all of this together? Can we have our cake and eat it too?
Petersen thinks so. He reckons that we can do away with self-awareness and still obtain a machine that behaves morally. Being aware of one’s actions and their consequences, he says, is a functional capacity that, unlike in animals, does not first require self-knowledge. “As long as a computer has those capacities, it can act ethically.”
That with great intelligence comes morality is a risky bet—one that we hope to win. For now, at least, we can rest assured that if we bossily order around Amazon’s Alexa, she won’t harbor any grudge. Hopefully.