
Can AI be rational? A discussion with Steven Pinker


The two-time Pulitzer Prize finalist, one of Time magazine’s 100 Most Influential People, and one of Foreign Policy‘s 100 Leading Global Thinkers, discusses his new book, Rationality: What It Is, Why It Seems Scarce, Why It Matters

Steven Pinker Photo by Rose Lincoln, courtesy Harvard University

Steven Pinker likes to talk. The Harvard professor and experimental psychologist has written around a dozen books and countless articles on language and the mind, focusing on everything from visual cognition to the way language reveals our thoughts and even social relationships. He’s well known for using evolution to explain how humans are hard-wired to communicate verbally, even before formal languages came into existence. (We’re looking at you, Music!) His theories have been lauded and derided, leading to public battles with the writer Malcolm Gladwell and even to an open letter from members of the Linguistic Society of America requesting Dr. Pinker’s removal as one of LSA’s fellows. None of his detractors have prevented the professor from speaking his mind (or chopping off his glorious and celebrated silver locks).

Not one to shrink from controversy, the New York Times bestselling author just released his latest book, Rationality: What It Is, Why It Seems Scarce, Why It Matters. He spoke during a virtual book tour* at the UW’s University Bookstore about teaching rationality, inequality in reasoning, and whether an AI can evolve to become perfectly rational.

*(The following interview has been edited for clarity and concision.)

Behavioral psychologists have long agreed that there are two cognition systems: fast, which is intuitive and gut-level, and slow, which is reflective and deliberative. You put rationality within the latter. But do you think that the intuitive system has anything to do with human rationality? Our intuitions, our snap judgments, can have some rationality packed into them because they are the result of our experience, learning through patterns that can’t be deduced by step-by-step logic. They emerge from statistical correlations between lots and lots of features. And they do some things well — like recognizing faces and voices — but very often they can lead to a confident, wrong conclusion.

Can rationality be taught? Why are they still teaching trigonometry when they should be teaching probability? It’s much more useful, and a lot of errors that people make in everyday life, like in risk assessment, come from faulty intuitions of probability. It’s not just a narrow subject, it’s a tool of thinking. So, I think we should make room in the curriculum not just for probability, but also for principles of critical thinking — like avoiding ad hominem arguments, arguments from authority, or arguments from anecdotes. These are all too common in our discourse. There’s even a subculture that calls itself the Rationality Community that tries to promote these values. Like, you shouldn’t dig your heels in and argue for a position to the bitter end. You should leave room for a level of credence, a probability — like, I’m 0.8 confident that this is true, but after listening to your argument, I’m going to ratchet it down to 0.7. That would be rationality in the sense that it is a kind of commitment to use the tools of reason more broadly.

If you educate people, do you think that being rational can only lead to good outcomes? Rationality is relative to a goal. You have to deploy reason in order to come to some conclusion, whether it be an objective truth or a deep explanation, or getting something done in the world. And there’s nothing that says that that goal can’t be deep human relationships and love and appreciation of beauty. So, actually, once you realize that rationality is not the opposite of pleasure, joy, beauty, or meaning, I think it is an unmitigated good. That isn’t to say that everyone who claims to be rational is rational.

Do you need language to be rational? I think language is not in itself the essence of rationality, because English [for example] is just too vague and ambiguous and incomplete and sloppy to capture rationality and reasoning. We think in a much more abstract medium. But once we have language, we can kind of exponentiate rationality, because we can criticize other people’s ideas. We can build on their tools and kind of lift ourselves up by our bootstraps to pull off the feats that we, as a civilization, can enjoy — like smartphones and vaccines.

What do you think about the role of diversity in rationality? Is it possible that hearing a diversity of opinions is kind of a spark plug for building rationality? Diversity of opinion is absolutely essential simply because none of us is infallible. None of us has been vouchsafed the truth through revelation. You know, we all bumble along, trying out hypotheses, hoping that some will be confirmed. But we’ve got to have other people that don’t believe those hypotheses who will hold our feet to the fire and point out the flaws.

Do we live in a more rational world today than we did 50 years ago, a hundred years ago…or is rationality decreasing? I tend to think there’s a lot of rationality inequality. We have the means to be more rational, and a lot of us still don’t use them, to my great disappointment. I like to plot charts of human progress, so I went back to survey data on astrology and various paranormal phenomena over the last 40 or 50 years — as long as we have had continuous data. And I’m disappointed to say that it’s pretty flat. I had thought that we would be far more rational than the generation before, but no.

Will greater rationality increase human happiness and flourishing? The data suggest that we have reduced disease, war, poverty, crime, and illiteracy. And it didn’t happen because nature smiled on us — quite the contrary. To the extent that progress happens, it’s because people have deployed their brain power with the goal of making other people better off. That’s really the only explanation for how progress could happen. Of course, it’s not enough to be rational, because you could apply your rationality to bigger and better nuclear weapons or to complicated financial instruments that make investors some money at the cost of taking down the whole economy. There still has to be the goal of improving welfare — humanism, as I call it. Although I think when we step back, human flourishing is what we end up with.

How does rationality align with artificial intelligence? I think they do. I’ve been influenced by the foundations of artificial intelligence, going back to the origin of the field, which required thinking about what we mean by intelligence or rationality. And I think of artificial intelligence — like logic, like probability, like statistics — as a source of normative models; that is, models of how one ought to reason in order to attain a certain goal in a particular environment. Artificial intelligence obviously is not by itself psychology, but it sets a kind of benchmark. It asks questions like, how could any system accomplish things like recognizing an object, retrieving a memory, coming to a sensible conclusion? I think that rationality is intimately related in the way that AI helps us clarify normative models of what rationality actually consists of.

Does rationality promote or hinder human imagination? Rationality is not the same as creativity. Although once you’ve set the goal of, ‘How do I come up with a solution to this problem?’, then you can deploy your rationality to achieve a new source of pleasure or enlightenment or transcendence. That is a rational way of pursuing a pure form of creativity.

How can parents, caregivers, or society raise future generations to foster rationality? Instead of digging in and defending your belief, like it’s a precious possession, you should always be asking: What if someone told me that I was wrong? A lot of rationality consists of submitting to the rules of a community that enforces norms of criticism and evaluation and peer review; fact-checking and adversarial processes allow the whole community to be more rational than any of the individual members. So, you’ve got to be able to say, it’s not about me. I’ve got to be willing to be part of a group that follows the rules that make us rational.

Do you believe AI will ever produce perfect rationality? There’s probably no such thing as perfect rationality because there are always trade-offs. And going back to one of the founders of artificial intelligence, Herbert Simon introduced the concept of bounded rationality. It may have been one of the ideas that earned him his Nobel prize. The idea is that any particular kind of rationality has costs in terms of resources: memory, CPU cycles, the acquisition of relevant data. All systems have to make various compromises and trade-offs. And probably another reason that perfect rationality can’t exist is when it comes to induction — that is, going from some observations to a general law. Induction is always inherently fallible. It depends on experimentation and on feedback from the world. So, the image of the all-powerful, perfectly rational AI is almost certainly impossible.

Rationality is intimately related in the way that AI helps us clarify normative models of what rationality actually consists of.
Steven Pinker