Warm and Fuzzy Logic

[Metropolis magazine, 2010]

When the movie 2001: A Space Odyssey was released in 1968, the notion of a computer that could recognize and react to human emotion seemed far-fetched and frightening. Filmgoers got a cold chill watching HAL, the homicidal supercomputer on board the spaceship Discovery, invent a malfunction in the ship’s communications link with Earth, engineer the deaths of four astronauts, and desperately try to talk the sole survivor out of dismantling its memory bank. "Look, Dave," HAL implores, "I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over."

More than four decades later, affective computers are more a promising reality than a sci-fi nightmare. Researchers at MIT, IBM, Sony, and other laboratories are developing technologies that may one day make it possible for your personal computer to sense whether you are happy or sad, anxious or relaxed, interested or bored, and to use that information to shape how it interacts with you.

The idea of giving computers personality or "emotional intelligence" may seem creepy, but technologists say such machines would offer important advantages. Despite their lightning speed and awesome powers of computation, today’s PCs are essentially deaf, dumb, and blind. They can’t see you, they can’t hear you, and they certainly don’t care a whit how you feel. Every computer user knows the frustration of nonsensical error messages, buggy software, and abrupt system crashes. We might berate the computer as if it were an unruly child, but, of course, the machine can’t respond. "It’s ironic that people feel like dummies in front of their computers, when in fact the computer is the dummy," says Rosalind Picard, a computer science professor at the MIT Media Lab in Cambridge.

It’s ironic that people feel like dummies in front of their computers, when in fact the computer is the dummy.
— Rosalind Picard, MIT Media Lab

A computer endowed with emotional intelligence, on the other hand, could recognize when its operator is feeling angry or frustrated and try to respond in an appropriate fashion. Such a computer might slow down or replay a tutorial program for a confused student, or recognize when a designer is burned out and suggest he take a break. It could even play a recording of Beethoven’s "Moonlight Sonata" if it sensed anxiety or serve up a rousing Springsteen anthem if it detected lethargy. The possible applications of "emotion technology" extend far beyond the desktop. A car equipped with an affective computing system could recognize when a driver is feeling drowsy and advise her to pull over, or it might sense when a stressed-out motorist is about to boil over and warn him to slow down and cool off.
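To make that concrete, here is a minimal sketch of the kind of rule table such a system might consult. The states and responses below are invented for illustration; they are not any lab's actual design.

```python
# Illustrative sketch: mapping a detected emotional state to a response.
# The state labels and actions are hypothetical.

RESPONSES = {
    "anxious":    "play Beethoven's 'Moonlight Sonata'",
    "lethargic":  "cue up a rousing Springsteen anthem",
    "confused":   "slow down and replay the tutorial",
    "burned_out": "suggest taking a break",
    "drowsy":     "advise the driver to pull over",
}

def respond_to(state: str) -> str:
    """Return the response for a detected state, or take no action."""
    return RESPONSES.get(state, "no action")

for state in ("anxious", "lethargic", "calm"):
    print(f"{state:>10} -> {respond_to(state)}")
```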

Beyond making computers more responsive to people’s feelings, researchers say there is another compelling reason for giving machines emotional intelligence. Contrary to the common wisdom that emotions contribute to irrational behavior, studies have shown that feelings actually play a vital role in logical thought and decision-making. Emotionally impaired people often find it difficult to make decisions because they fail to recognize the subtle clues and signals -- does this make me feel happy or sad, excited or bored? -- that help direct healthy thought processes. It stands to reason, therefore, that computers that can emulate human emotions are more likely to behave rationally, in a manner we can understand. "Emotions are like the weather," Picard observes. "We only pay attention to them when there is a sudden outburst, like a tornado, but in fact they are constantly operating in the background, helping to monitor and guide our day-to-day activities."

Picard, who is also the author of the groundbreaking book Affective Computing, argues that the same should be true of computers. "They have tremendous mathematical abilities, but when it comes to interacting with people, they are autistic," she says. "If we want computers to be genuinely intelligent and interact naturally with us, we must give them the ability to recognize, understand, and even to ‘have’ and express emotions." Imagine the benefit of a computer that could remember that a particular Internet search resulted in a frustrating and futile exploration of cyberspace. Next time, it might modify its investigation to improve the chances of success when a similar request is made.

So how does one build emotive computers? The first step, researchers say, is to give machines the equivalent of the eyes, ears, and other sensory organs that humans use to recognize and express emotion. To that end, computer scientists are exploring a variety of mechanisms including voice-recognition software that can discern not only what is being said but the tone in which it is said; cameras that can track subtle facial expressions, eye movements, and hand gestures; and biometric sensors that can measure body temperature, blood pressure, muscle tension, and other physiological signals associated with emotion.
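In code, combining those channels might look something like the sketch below: each sensor reports a normalized estimate, and the system fuses them into a single reading. The channel names and weights are assumptions made for illustration, not any published design.

```python
# Hypothetical multi-modal fusion: each channel reports an arousal score
# in [0, 1], and a weighted average combines them. Channel names and
# weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class Reading:
    channel: str   # e.g. "voice_tone", "facial_expression", "skin_temp"
    score: float   # 0 = calm, 1 = agitated
    weight: float  # how much this channel is trusted

def fuse(readings: list[Reading]) -> float:
    """Weighted average of the per-channel estimates."""
    total = sum(r.weight for r in readings)
    return sum(r.score * r.weight for r in readings) / total

readings = [
    Reading("voice_tone", 0.8, weight=0.3),
    Reading("facial_expression", 0.6, weight=0.5),
    Reading("skin_temp", 0.9, weight=0.2),
]
print(f"fused arousal estimate: {fuse(readings):.2f}")  # -> 0.72
```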

One product of this research is IBM’s Emotion Mouse, a computer mouse embedded with sensors to gauge skin temperature, heart rate, and minuscule hand movements -- key indicators of the user’s emotional state. At the MIT Media Lab, researchers are weaving biometric sensors into a variety of wearable items. An affective earring, for example, can measure the wearer’s blood pressure and wirelessly transmit that data to a computer. Another item for the fashionable digerati, Galvanic Skin Response Shoes, contains electrodes that can assess the electrical conductivity of the wearer’s skin (which reflects how much a person is sweating, an indicator of anxiety). One of the lab’s more intriguing inventions, Expression Glasses, is designed to determine the wearer’s level of interest or confusion by measuring the movement of muscles around the eyes. The data is then transmitted to a computer, which creates a graphical display showing two colored bar graphs (red for confusion and green for interest). Thus a video lecturer could gauge the reactions of students in remote locations.
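The two-bar display is simple enough to sketch. The snippet below renders hypothetical muscle readings as the confusion and interest bars described above; the signal names and scaling are assumptions, not MIT's actual implementation.

```python
# Sketch of the Expression Glasses display: two muscle signals rendered
# as text bar graphs (red = confusion, green = interest). Signal names
# and scaling are assumed for illustration.

def bar(label: str, value: float, width: int = 20) -> str:
    """Render a value in [0, 1] as a fixed-width ASCII bar."""
    filled = round(max(0.0, min(1.0, value)) * width)
    return f"{label:<10} [{'#' * filled}{'.' * (width - filled)}] {value:.2f}"

brow_furrow = 0.7  # hypothetical reading of brow muscles, taken as confusion
eye_widen = 0.3    # hypothetical reading around the eyes, taken as interest

print(bar("confusion", brow_furrow))
print(bar("interest", eye_widen))
```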

Of course, detecting the subtle clues that betray human emotions is only half the equation in developing effective emotion technology. Computers must also learn to interpret these signals and respond appropriately -- skills that many humans find difficult to master. IBM says its Emotion Mouse can accurately determine a user’s basic emotional state -- happy, angry, frustrated -- about 75 percent of the time. MIT researchers say their computers can recognize individual facial expressions better than 90 percent of the time, but the machines are far less successful at processing the emotional content of speech: they get it right only about six out of 10 times.

Research at MIT has shown that while computers are quite good at recognizing basic emotions such as excited and calm, they are not very adept at distinguishing between more subtle shades of feeling such as excited-happy and excited-angry. An example of what might happen if a computer fails to grasp such a distinction can be found in the first Star Wars film, when Luke Skywalker, Princess Leia, Han Solo, and Chewie are trapped inside a giant trash masher on the Death Star. As the rumbling walls close in on them, Luke frantically radios his droid companions C-3PO and R2-D2 for help, urging them to shut down the power supply to the masher. Moments later, the droids hear their human masters screaming and hollering over the radio and assume the worst. “Listen to them! They’re dying, R2! Curse my metal body!” C-3PO exclaims. But, of course, what the droids were hearing were screams of joy, not anguish; the walls had stopped moving and our heroes were saved.

While computers are good at recognizing basic emotions, they are not very adept at distinguishing between more subtle shades of feeling.

Most of this technology is in the very early stages of development, and it will be years, if not decades, before it results in commercial applications. Some devices, such as the Emotion Mouse, could be on the market within two or three years, but others will probably never make it beyond the laboratory walls. Picard believes it will take at least a decade of additional research and engineering before computers will have the skills to intelligently read and respond to human emotions.

Some technologists believe that teaching computers to be more aware of the user’s behavior, rather than her emotional state, may be a more effective and less intrusive way to improve relations between man and machine. That’s the philosophy behind the Blue Eyes Project at IBM’s Almaden Research Center in San Jose, which is developing "attentive computers" designed to perceive, interpret, and respond to the movements, sounds, and touch of the user. "Up until now we have interacted with computers through explicit commands. You type on a keyboard, push a button, click a mouse," says Myron Flickner, manager of the Blue Eyes Project. "We believe computing will move from explicit commands to implicit ones. Computers need to become more attentive to users’ needs. They should pay attention to you when you pay attention to them." In the future, Flickner says, ordinary household devices such as telephones, refrigerators, and ovens will do their jobs when we look at them and speak to them. A Blue Eyes-enabled TV set, for example, would become active when the viewer looks in its direction, at which point the television could respond to voice commands to tune in CNN and turn up the volume.
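A toy event loop conveys the idea. In the sketch below, a television stays idle until it detects the viewer's gaze, then listens for a voice command; the sensor functions are random stubs standing in for real gaze-tracking and speech-recognition hardware.

```python
# Toy "attentive TV" loop in the Blue Eyes spirit: wake on gaze, then
# accept voice commands. The sensors are stubs, not real hardware.

import random

def user_is_looking() -> bool:
    return random.random() < 0.5  # stub for a gaze tracker

def hear_command() -> str:
    return random.choice(["tune to CNN", "volume up", ""])  # stub for speech recognition

def attentive_tv(ticks: int = 5) -> None:
    for _ in range(ticks):
        if not user_is_looking():
            continue  # stay idle until the viewer looks this way
        command = hear_command()
        print(f"TV: executing '{command}'" if command else "TV: awake and listening")

attentive_tv()
```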

One of the more remarkable members of IBM’s Blue Eyes team is Pong, a disembodied robotic head with bushy eyebrows, bulging cartoon-character eyes, and a round nose that conceals a tiny camera. Pong sits on a desk and is equipped with infrared sensors that can determine the direction of a person’s gaze to within one degree of accuracy by tracking the movement of his irises. When someone walks into Pong’s field of view, the robot’s head will swivel in the direction of the visitor, its eyebrows will arch upward to convey mild surprise, and the surgical tubing that forms its mouth will bend into a Herman Munster-like smile. When the visitor looks at Pong, the robot’s eyes will return his gaze to show that it is paying attention. Pong is also capable of identifying individuals by listening to their voices or scanning their faces and comparing the sounds and images to databases stored in a connected computer. The technology could be used for security purposes (if an unauthorized person tried to use a computer that Pong was watching, the robot could lock up the keyboard) or to customize the computer interface to suit an individual’s preferences. Another Blue Eyes technology, called Suitor, takes note of the Web sites and software applications a person is accessing and uses that data to determine what additional information might be useful. Suitor then hunts down the information on the Internet or from another source and delivers it to the desktop. IBM says the technology could be used in a wide range of applications, from computer training and education programs to interactive entertainment and advertising.
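The Suitor idea -- watch what the user is reading, infer a topic, fetch more of it -- can be sketched in a few lines. The topic table and "fetch" step below are hypothetical stand-ins for whatever IBM actually built.

```python
# Rough Suitor-like sketch: tally topics inferred from what the user is
# viewing, then fetch material on the most frequent one. The topic map
# and fetch step are invented for illustration.

from collections import Counter

TOPIC_MAP = {
    "stocks.example.com": "finance",
    "spreadsheet_app": "finance",
    "recipes.example.com": "cooking",
}

def suggest(history: list[str]) -> str:
    topics = Counter(TOPIC_MAP.get(item, "misc") for item in history)
    top_topic, _ = topics.most_common(1)[0]
    return f"fetching more material on '{top_topic}' for the desktop"

history = ["stocks.example.com", "spreadsheet_app", "recipes.example.com"]
print(suggest(history))  # finance appears twice, so finance wins
```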

"A lot of the technological nightmares we wallow in would disappear if a device knew who we are and what our environment is," says Mark Smith, manager of appliances and media systems at Hewlett-Packard Labs in Palo Alto. "It could eliminate the need to remember pins and passwords. You could sit down at a computer and it would automatically configure itself to your preferences. Or you could sit in your car and it would adjust the seat, the mirrors, and the climate control to your taste."

Some of these technologies are already in the marketplace, or will be shortly. HP has developed an electronic badge that can identify the wearer and track his movements and activities with an array of sensors. The device, called a Secure Pad, was designed for a major health-care provider to track the activities of doctors and other personnel at large medical facilities. "It knows who you are and where you are, and it has a pretty good idea of what you are doing and when you are doing it," Smith says.

While that may sound ominous, Smith says the device will ultimately benefit patients by enhancing the security and accountability of medical facilities. For example, the device will know who has accessed a drug locker and what drugs were removed. It will also allow doctors to access confidential medical information without carrying around paper charts, which can be misplaced or read by unauthorized personnel.

"When a doctor wearing the Secure Pad enters a patient’s room, the patient’s medical records will automatically appear on a wall monitor when the doctor looks at it," Smith says. "When he looks away, or another person enters the room, the records will disappear." Another advantage of the Secure Pad is that it’s interchangeable; when the wearer removes the badge from his body, the device automatically deactivates and the slate is wiped clean until the next person puts it on.

Technologists at Xerox’s legendary Palo Alto Research Center are taking a slightly different tack in their efforts to make computers more user-friendly. "Calm computing," one of the guiding principles at PARC, strives to build computers that are simpler and more intuitive to use and that create a sense of calm rather than frustration. "Computer interfaces are far more complex than they need to be," says Roy Want, a leading Xerox researcher. "A lot of people find their computers exasperating because they are becoming the focus of their work. When you read a book, you don’t notice the font it’s printed with, you see right through that. The same philosophy should apply to computers."

One way Want and other Xerox researchers are trying to accomplish that is through the use of electronic tags, similar to bar codes, that can automatically summon information about a particular object from the Internet when the object is placed near a computer terminal. For example, an electronically tagged book could call up information about the book’s author and suggest related reading material. Similarly, a VCR with an electronic tag could summon the user guide.
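Conceptually, the tag is just a key into a directory of related material. A minimal sketch, with made-up tag IDs and URLs:

```python
# Minimal electronic-tag lookup: a tag ID read near the terminal maps to
# related material online. IDs and URLs are made up for illustration.

TAG_DIRECTORY = {
    "book:0451524934": "https://example.com/authors/orwell",
    "vcr:model-8200": "https://example.com/manuals/vcr-8200",
}

def on_tag_detected(tag_id: str) -> str:
    url = TAG_DIRECTORY.get(tag_id)
    return f"opening {url}" if url else f"unknown tag: {tag_id}"

print(on_tag_detected("book:0451524934"))
print(on_tag_detected("vcr:model-8200"))
```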

Judging from the halting progress made by other technologies designed to make computers easier to use, it may be a long time before we are able to develop warm, personal relationships with our computers. A decade ago, voice recognition was touted as the next new thing — a technology that might eventually supplant the keyboard and mouse as the primary interface between people and computers. But today’s voice-recognition programs, though dramatically better than they were, can still be maddeningly maladroit at recognizing the idiosyncrasies of individual speech patterns and filtering out background sounds.

If computers are to become tame and polite members of our world, we must give them the eyes and ears to interact with us on a deeper, more subtle level.
— Paul Saffo, director of the Institute for the Future

Taking emotion technology to the ultimate extreme, scientists are exploring the possibility of creating computers that can literally read our minds. Researchers have already achieved limited success in implanting electrodes in the brain to detect neurological activity and translating that information into identifiable emotional states. The Air Force, for example, has built a simulated cockpit that, in theory at least, will allow pilots to control the roll angle of a plane simply by thinking about it. Another device, similar to an MRI, can produce three-dimensional images of chemical distributions in the brain to locate where metabolic activity is happening. Under ideal circumstances, this information could be used to deduce something about what a person is thinking.

Previous attempts to endow personal computers with artificial intelligence have been rather dismal. In 1995, Microsoft introduced Bob, an avuncular "intelligent agent" designed to make inexperienced computer users feel more at ease. But Bob was a dork and Microsoft quickly showed him the door. He was replaced by Clippy, an animated paper clip in Microsoft Office, but it, too, tended to be more irritating than helpful.

Beyond the question of whether affective technologies will ever work effectively is a larger question: Do we really want our computers to know how we feel? In a world where we are already bombarded with a constant onslaught of electronically generated words, images, and sounds, do we want machines making smiley faces, shedding crocodile tears, or giving us pep talks? For many people, the answer is probably no.

Affective technologies also raise serious concerns about privacy. If computers were capable of reading and recording our emotions, that information could easily be misused by inquisitive employers or prying marketers — or, for that matter, unscrupulous divorce lawyers. And what happens if our computer has a nervous breakdown? Do we send it to a cyber-psychiatrist?

John Seely Brown, the chief scientist at Xerox, believes that emotion technology, if done sensibly, could be useful in improving human-computer relations. But he warns that if it’s crudely done — if computers become too intrusive, for instance, or if they misread the user’s emotional state and respond in inappropriate ways — it could be counterproductive. "There is nothing that pisses you off more than something that appears authentic and engaging, and a nanosecond later turns you off," he says. "The lack of authenticity could generate a mistrust of technology."

Paul Saffo, director of the Institute for the Future in Menlo Park, Calif., and a leading technology forecaster, agrees with that assessment. "We’re hunting for very big game," Saffo says. "This is hard stuff to do, and it will make the misunderstanding worse if it’s done badly."

Which brings us back to HAL. Could there be such a thing as a computer that’s too smart for our own good? A computer that values its own interests above those of its human creators? While it is highly unlikely that computers will ever achieve enough autonomy to go on a murder spree, technologists say they are well aware of the potential for computers with too much attitude to do more harm than good. "If we want digital assistants that are truly useful, we have to figure out a way to make them nonintrusive," says IBM’s Myron Flickner. "We have to do it in a way that the users are always in control."

Saffo is convinced that the day will come when computers "will slip elegantly and unobtrusively into our lives," but that day may be a long time coming. "Right now we live in two parallel universes. The physical world and the newer electronic construct," he says. "So far these two worlds have barely connected. If computers are to become tame and polite members of our world, we must give them the eyes and ears to interact with us on a deeper, more subtle level."