Learning About Minds and Machines

Is the human mind a computer? Could a computer ever think like a human? From the 1940s through the 1960s, an influential group of computer engineers, psychologists, and philosophers argued for a resounding “yes” to these questions. Known as “computationalists,” these thinkers founded artificial intelligence (AI) and computational psychology, traditions of research that continue to the present day. 

For a while, funders invested heavily in the computationalist paradigm. This included AI researchers’ efforts to create models of human thinking in the form of computer programs. By the late 1970s, however, technical disappointments and powerful theoretical objections had cost both computationalism and AI much of their momentum, ushering in a period known as the “AI winter.” Since then, AI has gradually found its way back into the public spotlight. 

Philosophical Questions of AI

Phillip Honenberger, a faculty affiliate of the SNF Agora Institute, is fascinated by the philosophical questions: Is the mind a computer? To what extent can computers think? But he is also captivated by the historical ones. He wonders what the history of computationalism and AI research can teach us about their prospects today. Honenberger shared his curiosity with students in a class he taught over the summer, Minds and Machines. Students explored these topics through discussions, primary and secondary source readings, and interactive websites on neural networks and machine learning. 

“There have been at least two approaches to AI since the 1940s,” notes Honenberger. “Symbol processing approaches conceived of intelligence as symbol manipulation according to rules and heuristics, while artificial neural network (ANN) approaches modeled it on the structure of interconnected neurons in animal brains. AI’s pre-winter public profile was primarily symbol processing, but its post-winter resurgence has relied much more heavily on ANN.” 
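The contrast Honenberger draws can be made concrete with a toy sketch. The code below is illustrative only and is not drawn from the course: a miniature forward-chaining inference engine stands in for the symbol-processing tradition (explicit rules manipulating symbols), while a single perceptron trained on examples stands in for the ANN tradition (behavior learned from data rather than stated as rules).

```python
# Symbol processing: intelligence as rule-governed symbol manipulation.
# Rules have the form (premises, conclusion); we apply them until no new
# facts can be derived (a tiny forward-chaining inference engine).
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


# Artificial neural network: a single perceptron, loosely modeled on a
# neuron, learns the logical AND function from labeled examples instead
# of being handed an explicit rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred          # error-driven weight update
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b
```

In the first style, knowledge is written down explicitly (e.g., a rule deriving `socrates_is_mortal` from `socrates_is_human`); in the second, the “knowledge” ends up distributed across learned weights, with no single rule to point to, which is part of why the two traditions invite such different philosophical questions.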

One major goal of the course, Honenberger says, was for students to learn to distinguish between the technology itself, how it works, and what it is actually currently capable of, on the one hand, and the various abilities people have sometimes fantastically ascribed to it, on the other. 

“Futuristic prognostications are sometimes just science fiction-y, and involve philosophical leaps of inference—like, if we built a machine that acted indistinguishably from a human, it would have to be conscious; or, if we could somehow ‘upload’ our personality to a machine, we could live forever. Sometimes people just assume the technology will deliver something like ‘consciousness’ or ‘personality’ that there’s no clear understanding of the mechanisms of,” he says. Honenberger wants his students to notice and critically dissect those leaps in logic. 

Everything is Connected

The course closed with small groups of students producing reports on topics related to the mind/computer analogy. Honenberger designed the project to create “cross-pollination” as students tackled the interdisciplinarity of their topics. 

Incoming first-year student Alex Witzke was in a group that studied memory. He wrote a paper exploring how close AI could get to human memory from a philosophical perspective. 

“The class showed me how interconnected things are that I used to think were a lot more separate, like technology and philosophy,” Witzke says. “It showed that there’s definitely a connection between almost every single field you can think of, if you want there to be a connection, which is really interesting to me.”