Automatic for the People

It always starts the same way.

I sit at my laptop to work on some meaningful bit of writing, plucking words from the keyboard, tappity-tap-tap. Next thing I know, it’s eerily silent. I’m no longer tapping. I look at my screen and see that I’m three page-scrolls deep into Facebook.

Gah! I have no memory of navigating here, let alone wanting to. How long have I been on Facebook? I don’t know.

I cannot tell you how many times this has happened, but each time has been so unsettling that I finally set up a sting operation to nab myself navigating to Facebook. After several attempts at self-surveillance, here’s what I found: At some point in writing, I would open my web browser to research a fact. As soon as the browser opened, though, I reflexively tapped the letter “f” into the URL field and voilà, with a flourish of autocomplete, I was on Facebook.

The whole process was swift and unconscious. My left index finger tapped the ‘f’ key seemingly of its own accord. All it took was the stimulus of an opened browser.
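The mechanics of that reflex are mundane. A browser’s URL-bar autocomplete typically matches the typed prefix against your history and ranks the candidates by how often you’ve visited them, so one habitual letter is enough to summon the habit’s destination. Here is a minimal sketch of that idea in Python; the matching rule and the visit counts are invented for illustration, not any browser’s actual algorithm:

```python
# Toy model of URL-bar autocomplete: match the typed prefix against
# browsing history, then rank matches by visit count (most-visited first).
# The history and counts below are hypothetical.

def autocomplete(prefix, history):
    """Return history URLs starting with `prefix`, most-visited first."""
    matches = [(url, visits) for url, visits in history.items()
               if url.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [url for url, _ in matches]

history = {
    "facebook.com": 412,          # sheer habit
    "factcheck.org": 3,           # the research I meant to do
    "news.ycombinator.com": 57,
}

print(autocomplete("f", history))  # → ['facebook.com', 'factcheck.org']
```

One keystroke, and the most-visited match wins: the ranking function rewards exactly the behavior you are trying to escape.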

Frankly, I felt betrayed that such an intimate part of me—my pointer finger—had summoned Facebook without my consent. This finger had long been a reliable indicator of my will. Before I could speak, I pointed to indicate my needs and alert the big people in my life to experiences I thought meaningful. Pointing was my first tenuous communication link with the outer world.

When I sit down at my computer—a portal to the wide world—I am often trying to point myself in the direction I want my life to go. Much of what I hope to achieve will be effected through this machine—whether through email, a word processor, or the internet. So I am deeply frustrated when that intention is re-circuited.

Unfortunately, computer technology is getting better at responding to these unconscious twitches. Already, the technology exists to track my eye movements and read my facial expressions. These capabilities are currently being developed to feed data to advertisers, but they can also be used for navigation. Within the past year, Google has patented an eye-tracking system that allows a user to unlock a phone with her eyes. This sort of optical navigation seems rather harmless. But most of our eye movements are rapid and unconscious. Once Google can detect a frustrated look at my essay or longing glance at my browser, Lord knows I could end up anywhere.

In fact, there is already a name for this deliberate technological bypass of human agency: “anticipatory computing.” The phrase was coined by San Francisco-based startup Expect Labs, which has pioneered the technology. “Instead [of] relying on ‘hard signals’ like text queries entered on a keyboard,” the company website explains, anticipatory computing interprets “‘soft signals’ like audio, video and location information.” In other words, computers act in response to context clues rather than to explicit human instructions.

To demonstrate the technology, Expect Labs created a group conferencing app called MindMeld, which deciphers the meaning of a conversation and retrieves relevant information in real time. Like a kind of psychic butler.

Om Malik, a long-time tech journalist, praised the rationale of this kind of anticipatory computing: “With more devices and more sensors coming into our lives, the amount of data being generated will reach a point where the machines need to start anticipating our needs. Search as a way to access information doesn’t and won’t work—mostly because search can only respond to questions we ask.”

Notably, in Malik’s observant language, this technological endeavor is driven by what “the machines need.” Anticipatory computing is deemed helpful because it removes the obstacle in their way: “the questions we ask.” So you have to wonder: How is this technology enriching human aims if it cannot wait long enough for humans to figure out what their own aims are?

Of course, startups like Expect Labs will have to pitch anticipatory computing as a solution to a human problem in order to sell these products to the public. Mobile devices have provided a seeming rationale. As we use the internet more and more on-the-go, rather than on a personal computer in our homes, initiating searches will become less convenient. How can you conduct a search as you’re ordering a coffee, for example, or boarding the subway? (I’ll leave aside, for a moment, the presumption that computing ought not pause for a cup of joe or a step onto a train.)

Foursquare has already harnessed the power of anticipatory computing. Last fall, it rolled out real-time recommendations that spontaneously suggest a nearby activity you’ll enjoy based on your location, your likes, and others’ reviews. Google has created something similar in Google Now, an enhanced feature of its mobile search application. Google Now analyzes a user’s history and real-time data to present relevant information before the user can even request it. Tech watchers expect Google Now will become a crucial component of Google Glass; information will pop into a user’s field of view so that he need not appear to be computing. As, in fact, he isn’t.
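Stripped of the product gloss, a suggestion engine of this kind is essentially a scoring function over context. A toy sketch of how location, stated likes, and reviews might be combined; the venues, weights, and scoring rule here are all invented, not Foursquare’s or Google’s actual method:

```python
# Toy anticipatory recommender: score each venue by affinity to the
# user's likes, its rating, and its distance, then suggest the best.
# All data and weights are hypothetical.
import math

venues = [
    {"name": "Blue Bottle", "tag": "coffee", "lat": 37.776, "lon": -122.423, "rating": 4.6},
    {"name": "City Lights", "tag": "books",  "lat": 37.798, "lon": -122.407, "rating": 4.8},
    {"name": "Taqueria",    "tag": "tacos",  "lat": 37.761, "lon": -122.421, "rating": 4.2},
]

def suggest(user_lat, user_lon, likes):
    def score(venue):
        distance = math.hypot(venue["lat"] - user_lat, venue["lon"] - user_lon)
        affinity = 1.0 if venue["tag"] in likes else 0.0
        return affinity + venue["rating"] / 5 - distance * 10  # arbitrary weights
    return max(venues, key=score)["name"]

print(suggest(37.775, -122.42, likes={"coffee"}))  # → Blue Bottle
```

Notice that the user never asks a question; the function volunteers an answer from context alone, which is the whole premise—and the whole problem—of anticipatory computing.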

And so we enter an unprecedented realm of automation, in which even the human initiation of automation is automated, and a new chicken-and-egg problem arises: If my computer knows what I want before I do, and I come to know my wants from what my computer tells me, where are these wants actually coming from? Who or what is deciding them?


This is precisely the bewilderment I feel when I find myself on Facebook. Did I take myself here, or did autocomplete? The answer, it turns out, is not so simple, as there are actually two me’s at work.

According to dual processing theory, a dominant framework in psychology, every human has two minds working in the same brain. One mental processing system, System 1, is all quick-and-dirty impulse, steering us through unconscious tasks like breathing and brushing our teeth. This mind is essential for all animals because it can rapidly attend to the million stimuli of life without taxing our brains. The other mental system, System 2, is slower, demanding, evolutionarily sophisticated, and unique to humans. This is the system of directed, conscious effort, the mind that chooses its words, weighs pros and cons, and makes plans for the day.

These minds exist in balance with one another, and to some extent responsibilities can be shifted back and forth between them. You can let System 1 handle breathing all day or take conscious control of the process, à la System 2, when your doctor asks you to take a deep breath. Sometimes it behooves us to transfer more of our brain-load to System 1. Concert pianists can effortlessly rip through arpeggios because they have shifted the process to System 1 through hours of practice. They no longer consciously think of each note; their fingers move of their own accord, like a certain someone’s finger tapping the “f” key.

But there are also good reasons not to cede too much control to System 1. This is a system of automatic impulse. It doesn’t weigh the merits of those impulses; it simply acts. When those are impulses to breathe, hooray, we live another day! When they are impulses to consume fat, salt, and sugar, we grow morbidly unhealthy. The System 2 mind that has read a medical study or two would do best to intervene.

This is where computer automation makes things tricky. Naturally, computers are becoming quite deft at interfacing with my System 1, which itself is rather computer-like. My quick, unconscious acts can prompt the computer’s quick, unconscious responses. If we are trying to generate speed for its own sake, this is quite an impressive accomplishment.

But there are at least two problems created by a computer-enabled System 1 mind. First, as computer automation becomes ever more integrated into our daily experience, greater chunks of our lives will be governed by our impulses. These impulses may be good or bad, but they are certainly not reasoned.

Second, self-control is an essential ingredient of the System 2 mind. Studies show that the more we exercise self-control, the better we get at it. If we use it less, the faculty atrophies. As we allow computers and our System 1 selves to automate more of our lives, not only will our impulses rule, but we will have less wherewithal to rein them in.

“Self-control” doesn’t have the sexiest ring, I know. But self-control is among the greatest powers humans possess; it is the power of self-definition. The work of meaningful living is to machete-chop our way through infinite short-term impulses toward the long-range desire for our lives.

Deep down, we know this to be true. Our ancient forebears understood it intuitively, retelling myths of self-mastery long before the articulation of dual process theory. Aeneas had to leave the arms of Dido, the beautiful Carthaginian queen, to fulfill his life’s mission and found the city of Rome. The Buddha had to ignore a demon’s urgings to claim his father’s throne in order to achieve enlightenment beneath the Bodhi tree. The world over, battling temptation is not an idle act; it is a crucial, defining step in the hero’s journey. Our choices mean nothing if they do not require an activation of will, if they are not choices at all.

Modern science confirms this ancient wisdom. Repeated experiments have shown that people who can delay gratification will fare better in life by virtually every measure of success, happiness, and self-actualization. Counter-intuitively, a University of Chicago study conducted last year found that these strong-willed ones are not merely exchanging present happiness for future benefit; they reported being happier in the present than those who remain captive to impulse. Immediate gratification apparently pales beside the real-time sensation of flexing a sense of self. There is joy in the simple act of choosing.

This knowledge does not necessitate an apocalyptic showdown between humans and computers. If consumers demanded it, our technologies could preserve small delays to give our conscious minds time to engage. Rather than anticipate our wants, computers could be designed to await our conscious signal—and then do their blindingly fast work.
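The design is easy to state in code. Here is a minimal sketch of such a consent gate, assuming a hypothetical `anticipate()` engine standing in for products like Google Now or MindMeld: the computer may guess all it likes, but it acts only on an explicit yes.

```python
# Sketch of a consent-gated assistant: surface the anticipated want,
# pause to give System 2 time to engage, then act only on explicit
# confirmation. The anticipation engine and names here are hypothetical.
import time

def anticipate():
    # Stand-in for a real anticipatory engine's output.
    return "open facebook.com"

def act_with_consent(suggestion, confirm, pause_seconds=2.0):
    """Show the suggestion, wait, and act only if `confirm` says yes."""
    print(f"Suggestion: {suggestion}")
    time.sleep(pause_seconds)        # deliberate breathing room
    if confirm(suggestion):
        return f"executing: {suggestion}"
    return "suggestion declined"

# Usage: the confirmation callback is where the conscious mind lives.
# Here it automatically declines, so nothing happens without us.
print(act_with_consent(anticipate(), confirm=lambda s: False, pause_seconds=0))
```

The pause is the point: a few seconds of enforced latency costs the machine almost nothing and returns the initiating act to the human.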

The key will be to make “human control” a desirable selling point in new technologies. Surely this is possible. I can’t be the only person who has blinked at their surroundings, wondering, “How did I end up here on Facebook?” If we learn to value technologies according to the breathing room they accord our conscious selves, products like MindMeld and Google Now may no longer sound so sexy. Even autocomplete will have the ring of absurdity. For as we become more automatic, we become less complete.