
Chaz Firestone

Works in Progress: Human Versus Computer

Illustration: © James Yang

Try zeroing in on an orange. Or a daisy. While the human brain may correctly identify the images, a machine might mistake them for a missile or jaguar, says Chaz Firestone, assistant professor in the Department of Psychological and Brain Sciences.

Those mistakes might seem comical at face value, but they could prove deadly if, for example, a self-driving car fails to recognize a person in its path, or if the automated radiology we increasingly rely on to screen for anomalies like tumors or tissue damage misses something.

“Most of the time, research in our field [of artificial intelligence] is about getting computers to think like people,” says Firestone. His recent work does the opposite: “We’re asking whether people can think like computers.” In a 2019 paper published in Nature Communications, Firestone and his co-author, Zhenglong Zhou ’19, then a senior majoring in cognitive science, write that “human intuition may be a surprisingly reliable guide to machine (mis)classification, with consequences for minds and machines alike.”

Firestone and Zhou conducted their research through a series of experiments in which they presented human participants with “adversarial” or “fooling” images, the kind of images that computers are known to have difficulty “seeing.” Such images are a serious problem: not only could they be exploited by hackers, creating security risks, but they also suggest that humans and machines see images in fundamentally different ways.
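
The fooling images in the study came from previously published work on deep networks, but the general recipe behind such attacks is easy to sketch. The toy example below is not the method used in the paper; it applies the fast gradient sign method to a made-up two-class classifier with random weights, nudging every pixel slightly in the direction that most increases the classifier's error. All of the sizes, seeds, and class names in it are invented for illustration.

```python
# Minimal sketch of an "adversarial" perturbation, using the fast gradient
# sign method (FGSM) on a toy logistic-regression classifier. This is an
# illustration of the general idea only, not the models or images from the
# Zhou & Firestone study.
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 16x16 grayscale patch, flattened to a 256-dim vector.
x = rng.uniform(0.0, 1.0, size=256)

# Toy classifier: fixed random weights for two hypothetical classes
# ("daisy" vs. "missile").
W = rng.normal(scale=0.1, size=(2, 256))
b = np.zeros(2)

def predict(v):
    """Return class probabilities from a softmax over the two logits."""
    logits = W @ v + b
    z = np.exp(logits - logits.max())
    return z / z.sum()

# Treat whatever the model currently predicts as the "correct" class.
p = predict(x)
true_class = int(np.argmax(p))

# Gradient of the cross-entropy loss for that class with respect to the input.
one_hot = np.eye(2)[true_class]
grad_x = W.T @ (p - one_hot)

# FGSM step: move every pixel a small amount in the direction that raises
# the loss, then keep pixel values in the valid [0, 1] range.
epsilon = 0.1
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print("original prediction:   ", predict(x).round(3))
print("adversarial prediction:", predict(x_adv).round(3))
```

A real attack would perturb an actual photograph against a trained deep network, but the core step, following the gradient of the loss with respect to the pixels, is the same.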

In these experiments, the human participants were given the same limited answer options the computer had and were asked to “think like a machine” as they pondered various fooling images. The researchers found that people chose the same answer as the computers 75 percent of the time. Perhaps even more remarkably, 98 percent of participants agreed with the machines’ choices more often than chance would predict.
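
For a concrete sense of what those agreement statistics measure, here is a small sketch on simulated data; the participant count, trial count, and agreement probability are made up, and only the published 75 percent figure is echoed in the simulation.

```python
# Hedged sketch of the agreement analysis described above, on made-up data.
# For each trial, a participant picks one of several candidate labels; we
# check how often that pick matches the machine's label, and what share of
# participants beat the 1-in-k chance rate.
import numpy as np

rng = np.random.default_rng(1)

n_participants, n_trials, n_choices = 200, 48, 2   # hypothetical sizes
chance = 1.0 / n_choices

# Simulate responses that match the machine's label about 75% of the time.
agrees = rng.random((n_participants, n_trials)) < 0.75

per_person_rate = agrees.mean(axis=1)               # each participant's agreement rate
above_chance = (per_person_rate > chance).mean()    # share of participants beating chance

print(f"mean agreement with the machine: {agrees.mean():.0%}")
print(f"participants above chance:       {above_chance:.0%}")
```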

These findings show that the machine’s errors are not completely mysterious and that people can, to a surprising degree, anticipate the kinds of errors a machine will make. That matters because with any new technology, it’s not enough to know when it will work; we also need to know when it is likely to get stumped, Firestone says.

“If suddenly we have automated radiology and self-driving cars, we’re going to have to learn, ‘When is my technology going to fail?’” Firestone says.
