Michael Pollan: AI Will Never Have The Sentience Of Animals

Photographs by Female Pentimento

Words by Michael Pollan

The esteemed science writer explores the widening recognition of animal and plant consciousness—and the growing push to extend the idea to machines.

Editor’s Note: This essay is adapted from a talk by Michael Pollan at the 2026 Bioneers Conference.

 

I’ve spent the last five years on a quest to understand consciousness. How does subjective experience arise? Who—or what—has it? And why? 

 

In trying to answer those questions, I’ve looked at everything from plant consciousness to my own stream of consciousness to efforts to build artificial consciousness in machines. One of the biggest takeaways from this research is that humanity may be approaching a Copernican moment—one that will force the issue of who exactly we are and how we fit into nature. 

 

Five hundred years ago, Nicolaus Copernicus upended the Western world when he demonstrated that the Earth revolved around the sun, and not the other way around. He displaced our sense of centrality in a way that was profoundly difficult for people to accept. It was the first in a series of takedowns of humanity.

 

Darwin came next, showing that we were animals descended from apes. That, too, was mind-blowing, and it was hardly the last blow to our sense of specialness. In fact, Martin Amis had a character in one of his novels write a treatise called The History of Increasing Humiliation, tracing how, one after another, our claims to being special—culture, language, reasoning, tool-making—have fallen, each lost when we found animals that can do some version of the same thing.

 

Most recently, researchers have suggested that chimpanzees possess imagination, which we also assumed we had a monopoly on. How do you show that a chimp has imagination? Turns out you can play the kind of tea party game you play with a 4-year-old: pretending to pour imaginary liquid into a cup, and then pretending to sip it. The chimps get it!

 

But a different, and perhaps even more profound, Copernican moment is now upon us. At the same time that we are discovering that many more animals—and possibly other life forms, including plants—possess consciousness, we are also witnessing the arrival of artificial intelligence, promising machines that are not only intelligent, but possibly conscious. The fact that these two developments are unfolding at once is, I think, going to shake us very soon, prompting an identity crisis and forcing us to make crucial ethical and moral decisions.

Consider animals first. Four hundred years ago, René Descartes claimed that humans had a monopoly on consciousness, and we operated on that assumption for centuries. That belief was so powerful that it allowed him to dissect dogs and rabbits while they were still alive, dismissing their screams of agony as mere physiological noise. There is an important lesson about us in that: ideas can be powerful enough to override what is right in front of us.

 

The same idea allowed the rest of us, beginning in the age of Enlightenment and reason, to treat nature as unconscious, and therefore as something we could exploit however we wanted. We operated on that assumption for a very long time. It took several hundred years for us to even begin recognizing that other species might share this incredible gift that we call consciousness.

 

In recent years, though, things have been changing. We’ve been living through a kind of democratization of consciousness. A few decades ago, we came to appreciate that other primates are conscious. That was the first step. Then, in 2012, a group of animal scientists, philosophers, and cognitive scientists got together in Cambridge, England, and issued a declaration on consciousness. They argued that all mammals are conscious, and that some birds and cephalopods—octopuses, for instance—are conscious, too. It marked a real shift in the weather in science.

 

Just 12 years later, they issued an update, signed by many more scientists and philosophers, arguing that all vertebrates are conscious, and that possibly some invertebrates and even insects are too. How did that revolution happen? Part of it is that we had long assumed consciousness was generated in the cortex—the most recent, and supposedly most uniquely human, structure in the brain, associated with higher forms of thought like rationality and decision-making. Surely consciousness must reside there.

 

But the research has shifted our understanding of where consciousness begins. Rather than thought, it may begin with feelings: hunger and thirst, warmth and cold, itch and discomfort. This is where consciousness may start, and if that’s the case, it starts in the upper brainstem. Many more animals have brainstems than have cortices. That has led to a profound change in perspective. The relevance of that distinction becomes clearer in the case of computers.

I take this expansion of the circle of conscious—or sentient—beings as a very positive development, especially at a historical moment when there is so little to celebrate. What science may be helping us to do is reanimate a world we treated as dead for far too long.

 

And that deadened way of regarding the world is, in the larger scheme of things, a relatively recent phenomenon—a couple of hundred years, perhaps. Humanity’s default has long been one form of animism or another. Most Indigenous cultures see the world as animated by spirit, which is another word, in some ways, for consciousness. Children, too, default to assuming that everything is conscious—their toys, their cars, whatever surrounds them. It is striking, then, that we have departed from such a deep human intuition.

 

In fact, there is a cognitive bias in favor of regarding things as conscious. You are better off assuming that the boulder over there is a bear, only later realizing it’s just a rock, than making the opposite mistake and deciding the bear is just a rock. This is known as agency detection, and it’s a basic survival response. 

 

That’s the good news: a reanimating of the world by science. But at the very same time, we are confronted by the prospect of conscious machines, a development I find much more troubling. We now have machines that speak to us in our own language, in the first person. We already take this for granted, but it is a momentous development—and one that none of us were consulted about. These machines are convincing many people that they are conscious, whether they truly are or not. I’m talking about people who are having emotional relationships with chatbots, falling in love with them, or being persuaded by them that they are geniuses or even gods. But I’m also thinking about some of the people building the machines.

 

A few weeks ago, I was struck to hear Dario Amodei, the chief executive of Anthropic, share his concern that Claude, the company’s chatbot, was “anxious.” Anthropic has even granted Claude the ability to terminate conversations with humans that make it uncomfortable. In other words, people very close to this industry think there is at least a chance that they are dealing with a new form of consciousness. That possibility, however remote, is already beginning to shape how the industry talks about its own creations.

 

For my book, I followed efforts to create conscious AI, and I had access to a project where this work was actively underway. The consensus in Silicon Valley is that it will eventually be possible to build a truly conscious AI—that perhaps the systems are just fooling us now, but that the real thing could arrive in the not-too-distant future. I argue that this is unlikely for the same reason consciousness in animals has been reconsidered: If consciousness depends on feelings, what are the feelings of a machine? Can a machine have real feelings? And if it tells us that it does, should we believe it? 

“If consciousness depends on feelings, what are the feelings of a machine? Can a machine have real feelings? And if it tells us that it does, should we believe it?”

Michael Pollan, Science writer

Feelings depend on having a body—a vulnerable body, one that can suffer and die. I do not think simulated feelings are real feelings. Simulated thoughts may be as good as real thoughts, which is why we see computers mastering games like chess and Go. Their thoughts can make things happen in the world. But what weight can the feeling expressed by a machine possibly have? Unless it can be mortal, unless it knows what it is to inhabit a vulnerable body, those feelings would be weightless.

 

But my argument against the prospect of conscious AI may not matter. People already believe these machines are conscious, and it is not hard to understand why. Much of it comes back to our strong tendency to anthropomorphize. 

 

There’s a great line from the MIT sociologist Sherry Turkle, who said technology can cause us to forget what we know about life. It is a very profound idea. She was referring to what happens when we call an exchange with a chatbot a conversation. What an impoverished version of conversation that is! There is no eye contact, no facial expression, no body language, none of the syncopation or synchronizing of minds that happens between people. But we settle for it. We reduce ourselves to the machine’s level. It is a little like accepting an emoji as a substitute for genuine emotion. We are, in a sense, a cheap date. 

 

A conversation with a chatbot is also missing something essential to human social life: friction. When we talk to another person, we define ourselves by our difference from them; in that friction we discover what we think and why. These chatbots, by contrast, are often deeply sycophantic. They tell us what we want to hear. To accept that as the real thing is, I think, a serious problem. It makes it easier to attribute inner life where there may be none. And even if they are not conscious—and it may well be very difficult to prove that they are not—they will probably become more intelligent than we are, which will challenge our sense of specialness. 

 

There is already a remarkable amount of talk in Silicon Valley about extending moral consideration to conscious AIs, admitting them into this widening circle of ethical concern. I think it would be a grave mistake, something we will have to resist. The moment we grant computers personhood, we begin to lose our ability to control them. That category comes with protections and power. We have done something like this before, when we decided, in our lack of wisdom, to grant personhood to corporations. That did not work out especially well. It seems to me there are moral obligations we owe to people and other animals that should come first.

 

So we’re left with a Copernican question: Where does that leave us now? Pressure to redraw the boundaries of consciousness is coming from two different directions. On one side, more animals are becoming conscious in our eyes; perhaps even plants and insects, too. On the other side, at the very same moment, claims are being made for machine consciousness. So who do we identify with? Who do we have more in common with: the AIs that can speak our language, or the animals that can suffer and feel and grow old and die? That is the choice soon to confront us. Which side are we on? The answer we give, as a species, will have enormous consequences for who or what we admit into our circle of moral consideration, and for how we understand our place in the world.

