Stupid Questions: Consciousness
10 Jan 2026
The hard problem of consciousness is just a complicated debate with no real outcomes. It’s the behaviour that matters, not whether there’s ineffable qualia behind the curtain.
¶Show Notes
¶Further reading
- Panpsychism
- AI Consciousness
- Consciousness vs Conscious Access
- Mundane Cults
- The Placebo Effect
- Spirituality of the Mind
¶References
- John Locke: Primary-Secondary Quality Distinction
- Mary’s Room (Knowledge Argument)
- Qualia on Wikipedia
- Thomas Nagel: “What Is It Like to Be a Bat?”
- Animal Echolocation
- Hard Problem of Consciousness
- Philosophical Zombie
- Sam Harris: The Moral Landscape
- Sam Harris TED Talk: Science Can Answer Moral Questions
- New Mysterianism
- Brian Key: Fish Cannot Feel Pain
- Do Honey Bees Have Conscious Experience?
¶Pattern
- What seems to be X is actually non-X
¶Speechnotes
¶Intro
Welcome to the btrmt Lectures. My name is Dr Dorian Minors, and if there’s one thing I’ve learned as a brain scientist, it’s that there’s no instruction manual for this device in our head. But there are patterns to the thing. Patterns of thought, of feeling, and of action. That’s what brains do. So let me teach you. One pattern, one podcast. You see if it works for you.
So, we are officially past the introductory lectures and into the full swing of things. This is my shorter intro; I’ll be curious what you think.
Now, to get things on the straight and narrow, I’ll do a series of bits that I have on questions that seem important, but actually don’t really end up mattering for most people. Certainly not in the way they’re typically deployed.
Typically they’re deployed like stupid questions which make you seem smart.
Let me tell you about one.
¶Assumption
What is consciousness? Lots of people want to know. Lately this is largely because everyone is wondering if AI is conscious. But understanding whether something is conscious requires an understanding of what consciousness is. And this is where you start running into problems. Problems that I reckon don’t actually really matter.
The typical way people teach consciousness is to talk about the colour red. The ‘redness’ of red. I’ve no idea why. Maybe it’s because it’s such a good illustration of the thing, but it goes back to at least John Locke. So we’ll use “redness” too, so that when someone starts talking about the “redness” of red, you will know what’s coming and can make an excuse to leave.
¶What is consciousness?
For me, the cleanest example is Frank Jackson’s thought experiment, Mary’s Room:
Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specialises in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on… What will happen when Mary is released from her black and white room or is given a colour television monitor? Will she learn anything or not?
Television, because this essay is from the ‘80s, but you get the point. Mary’s never seen red, but obtains all there is to know about colour. Then one day, she sees red. Has she learned something new about the colour red? I think you’ll agree that she has. Jackson certainly thought so:[^6] there is some kind of knowledge about red beyond the physical properties we understand—its “redness”. Knowing about red is not the same as experiencing it.
This, whatever this is, is an example of what’s known as qualia. True to form, Wikipedia, at the time of writing, has a red colour patch with the caption:
The “redness” of red is an example of a quale.[^7]
Thomas Nagel puts it in an interesting way in his famous paper. He says that, though we can, in theory, understand everything there is to know about how bats echolocate—the physics of sound waves, the physiology, the behavioural responses, the information processing—we will never know what it’s like to experience the world through echolocation. There is “something it is like” to be a bat, and it doesn’t really seem like you can pass along that subjective, phenomenal character with a description.
¶Why is consciousness considered a ‘problem’?
Now. Here is the problem. Why does red have “redness”? Why is there “something it is like to be” at all? It’s a difficult question to answer.
Chalmers calls this the hard problem of consciousness to pose it against easy problems—related phenomena that we can, in theory, explain:
the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report
These are all processes that lend themselves to examination. They can be explained functionally and mechanistically. They are easy problems. The hard problem is explaining why these things are accompanied by a sense of experience—by qualia. Why does Mary learn something new when she sees red, beyond knowing all of its physical properties? Why can’t we know what it’s like to be a bat?
Chalmers uses the example of a sort-of automaton to illustrate: we can imagine a person who goes about behaving in all the ways you or I do, but with absolutely no experience attached. A zombie or a robot, mechanically acting and reacting to the world around it. There doesn’t seem, on the surface of it, any reason for it to also experience that stuff.
¶Subversion
¶The ‘solutions’ to the hard problem don’t really solve anything
Now, lots of people try to solve this problem in lots of ways. I have a whole article on solutions to the hard problem, but I will run through them here in brief:
- Non-materialists say that consciousness just isn’t a material, physical thing. Think of a soul or a mind.
- Emergentists and functionalists say that consciousness just emerges from certain configurations of neurons, like water emerges from a configuration of atoms.
- Illusionists and eliminativists say that thinking of consciousness as anything at all is a mistake. It’s some kind of illusion, or a category error. Like asking where the University is when you’ve been shown all the buildings and the faculty and so on.
- Panpsychists go particularly off-piste, and say that it’s the intrinsic nature of the physical stuff that physics describes: physics tells us what things do, not what things are, so maybe consciousness is what things are.
Are you getting tired yet? I am.
What’s annoying about all this is that it’s impossible to have a proper conversation about it with all these perspectives because you have to take whichever one of them you prefer on faith. They all suffer the same explanatory gap. Whether you think it arises from configurations of neurons, or a soul, or the space that physics leaves unexplained, you still have to explain how it actually interacts with the stuff we do know about—Chalmers’ easy problems. Nobody has managed this. No one is really even close.
The modal position in academia is the emergentist one—that consciousness sort of comes about with the right configuration of neurons or whatever. You walk around the lab I used to work at and this is what people would say. It’s what I would have said (and did). In fact, people would probably be confused to learn there were other perspectives on it, because we are well into the scientistic era and this feels like science. It also feels reasonable because consciousness sure seems like it’s dependent on our perceptions. You can’t really experience something without perceiving it first.
And so, there’s some optimism here that, if we study the brain hard enough, consciousness will turn from a hard problem into an easy one. We’ll make the thus-far-impossible jump from the perception of something to the experience of it.
And so you see debates about whether AI systems have genuine experiences, or whether honey bees are conscious, or Brian Key, in one of my favourite academic articles, writing in the equivalent of academic caps lock that fish cannot feel pain. They don’t have the neurobiology for it. The journal, Animal Sentience, invites responses, and there are tens of responses arguing that not only can fish feel pain, they can suffer too! They’re conscious!
¶Implication and outro
¶And more to the point, none of it matters!
And this is where I start to lose interest in the project, because so what? Under what realistic circumstances, precisely, would this matter? When would it actually matter whether something was truly conscious or only illusorily so? If things seem conscious, we already know how to respond.
Sam Harris has a nice essay and TED Talk about this. He says:
Why is it that we don’t have ethical obligations toward rocks? Why don’t we feel compassion for rocks? It’s because we don’t think rocks can suffer. And if we’re more concerned about our fellow primates than we are about insects, as indeed we are, it’s because we think they’re exposed to a greater range of potential happiness and suffering.
The secret hope, I suspect, is that, by working out what consciousness is, we can reduce suffering. But we can do that now. We can do it by caring about things that seem to suffer in a way that makes them seem to suffer less, and we can do all that without proving that they have qualia.
We’re not close to understanding the distinction. Some reckon we’ll never crack it, like an ant will never crack calculus. And even if we could, I can’t actually tell what would change. Would we stop caring about animal welfare if we proved they weren’t strictly conscious? Or treat rocks differently if we found out they were?
Of course not, because it’s not an interesting question. It’s the behaviour that matters. Whether there is some ineffable ‘what it’s like’ behind the curtain is practically irrelevant.
So why bother asking?
¶Edited Transcript
Below is a lightly edited transcript. For the article that inspired this one, see Stupid Questions.
Welcome to the btrmt. Lectures. My name is Dr Dorian Minors, and if there’s one thing I’ve learnt as a brain scientist, it’s that there’s no instruction manual for this device in our head. But there are patterns to the thing—patterns of thought, patterns of feeling, patterns of action. Because that’s what brains do. So let me teach you about them. One pattern, one podcast. You see if it works for you.
Now, this is another one in my series of bits that I have on questions that people ask me which seem important but actually don’t really end up mattering for most people, certainly not in the way that they’re typically deployed. When people are asking me questions about this, it’s because they’ve heard about an interesting problem in science. But often when I see it out in the wild, they’re deployed like stupid questions that make you seem smart.
So let me tell you about these irrelevant questions so you can avoid wasting mental energy on them.
¶The Redness of Red
What is consciousness? Everybody wants to know lately, largely because we wonder if AI is conscious. But understanding whether something is conscious means that we first have to understand what consciousness is. And this is where you start running into problems. And these are problems that I actually don’t really think matter.
Now, the typical way that people teach consciousness is to talk about the colour red—the redness of red. And I have no idea why that is. Maybe it’s because it’s a good illustration of the thing, because there aren’t actually many good illustrations of the thing.
So let me give you an example of this. And for me, the cleanest example is Frank Jackson’s thought experiment that he called Mary’s Room.
So you imagine this woman, Mary. She’s a brilliant scientist, just like me, and for whatever reason, she was forced to investigate the world from some kind of black-and-white room. And she’s looking into a computer screen that is also completely black-and-white. And what she specialises in is the neurophysiology of vision. And in the process of her studying the world from a black-and-white room through a black-and-white computer screen, she obtains everything there is to know—every physical fact there is to obtain about colour, about red, about ripe tomatoes or how we see the sky. She knows the terms red and blue. She can describe everything about how colour is processed in the brain.
But the question that we want to know is: what happens when Mary is released from a black-and-white room or her computer screen is transformed into a colour monitor? Does she learn something new?
Mary has never seen the colour red before, but she knows everything there is to know about it. And then one day she sees it. Has she learned something new about the colour red?
I think you’ll agree that she has. Certainly Frank Jackson thought so. Lots of people think so. There is some kind of knowledge beyond the physical properties that we understand about them. For red, it’s its redness. And that knowing about red isn’t the same as experiencing it.
Now, this—whatever this is, the redness of the colour red, the feeling of pain that we experience when we’re slapped across the face, the feeling of beauty that we experience when we’re looking out over a vista—whatever this is, is an example of what’s known as qualia.
And Thomas Nagel is another famous philosopher, famous for consciousness, who puts it in an interesting way. And I’ll link the paper, although honestly, it’s a bit impenetrable. But what he says is that although we can in theory understand everything there is to know about how bats echolocate—how they make their clicking sounds that allow them to see through their ears—we can understand the physics of sound waves, we can understand the physiology, we can understand the behavioural responses, the information processing that happens in the brain. We can understand all of this stuff, but we will never know what it’s like to experience the world through echolocation.
There is something that it is like to be a bat, and it doesn’t really seem like you can pass along that subjective phenomenal character of batness with a description. That is consciousness.
¶Chalmers and the Hard Problem
So now to the problem. And the problem is: why does red have redness? Why is there something it is like to be at all, never mind something it is like to be a bat? And it’s a difficult question to answer.
Chalmers famously called this the hard problem of consciousness. And he called it the hard problem of consciousness because he poses it against what he calls easy problems. There are phenomena that are related to qualia, to experience, that we can in theory explain. I’ll quote his book: “the performance of all the cognitive and behavioural functions in the vicinity of experience—perceptual discrimination, categorisation, internal access, verbal report.” That’s the quote.
You know, all of these are processes that lend themselves to examination. They can be explained functionally and mechanistically. To Chalmers, they are easy problems. The hard problem is explaining why these things are accompanied by a sense of experience, by qualia. Why does Mary learn something new when she sees red beyond knowing all its physical properties? And why can’t we know what it’s like to be a bat? That’s the question.
And Chalmers uses the example of a sort of automaton to drive this home. So you could imagine a person who goes about behaving in all the ways that you or I do, but they have absolutely no experience attached to that behaviour. They’re some kind of zombie or a robot, just mechanically acting and reacting to the world around them.
There doesn’t seem on the surface of it any reason to build that robot so that it also has to experience the stuff. If you were going to save money, you would save money on the experience. It doesn’t need it—at least in theory, conceivably. That is the hard problem of consciousness.
¶The Non-Solutions
And there are a lot of solutions to it. And I’m going to detail them briefly in a second. But what I really want to point out is that none of them seem to really matter very much. So let’s get into that.
So lots of people try and solve the hard problem of consciousness in lots of ways. And I have an entire article that details this. So I’ll link that in the show notes. But I will talk about them here kind of briefly—maybe even a little more than briefly, to be honest, because I think it is kind of interesting.
So the first solution to the hard problem of consciousness is the non-materialist view. Non-materialists say that there’s both material stuff, physical stuff, and there’s this separate kind of experience stuff. A classic example of this is dualism. So this is the idea that there is a distinction between the mind and the body or the body and the soul. And this is kind of sliding out of fashion in an increasingly secular world, but it can be a secular position.
Then there are the emergentists and the functionalists. And these people say that consciousness is just some kind of special property of the material world that emerges from specific configurations of material stuff. So in the same way that you get water when you put together two hydrogen atoms and an oxygen atom, you get this sort of property of liquidity. If you arrange neurones in a certain way, you get consciousness. That’s the sort of basic idea.
And then there’s people who treat it as a mistake. There are illusionists and eliminativists. And these people say that thinking of consciousness at all is a mistake.
Illusionists, the easier position to describe, they basically say that consciousness is an illusion. In the same way, I guess, that movies are a sort of illusion. Movies produce the illusion of motion by flashing still images so fast that we process them as moving. And in the same way, maybe consciousness, this experience, is produced by a bunch of snapshots of what the brain is doing at any given point in time. I think it was Daniel Dennett who called it an edited digest of all the events going on in the brain, like a general sense of the shape of things.
And then there’s the sort of last group, and these guys are called panpsychists. And what they say is that maybe consciousness sits in the space that physics doesn’t explain. So maybe it’s like the intrinsic nature of stuff—and I’m going to have to explain that a little bit, aren’t I?
So physics tells us what things do. It doesn’t tell us what things are. Physics can tell us that atoms have a certain mass, for example, but mass is characterised by behavioural properties. So you’ve got gravitational attraction or resistance to acceleration. I don’t know, I’m not a physicist. But these physical properties don’t actually have anything to say about their underlying nature. So maybe physics describes what things do, and consciousness is what things are.
¶Why None of It Matters
And at this point, I think we can stop, because if you’re getting tired, I’ve sort of made my point. Because what’s super annoying about all of this is that it’s impossible to have a conversation with all these different perspectives in the room, because you basically have to take whichever one you prefer on faith. All of them—and I’ll link to an article that describes this in more detail—all of them suffer from the same explanatory gap.
Whether you think that consciousness arises from the configurations of neurones, or it comes from a soul, or it comes from this sort of space that physics leaves unexplained, you still have to explain how it actually interacts with the stuff that we do know about—Chalmers’ easy problems. And nobody’s managed this. Nobody’s even close.
Now, the modal position in academia is the emergentist one—that consciousness sort of comes about with the right configuration of neurones or whatever. Back at my old lab, you ask any given brain scientist there, and this is basically what they would say. And in fact, they would probably be confused that there were other perspectives on this. I would have said this a while ago, even having studied consciousness as part of my academic trajectory.
And I think that this view is so popular because it feels like science, even though the explanatory gap is actually identical. It feels more scientistic, and that’s what we sort of value now. And maybe it also feels reasonable because consciousness certainly seems like it’s dependent on our perceptions. The hard problem seems related to the easy problem because you can’t really experience something without perceiving it first.
So I think there’s this sort of optimism that if we just study the brain hard enough, eventually consciousness will turn from a hard problem into an easy one. And so what you see is all these debates about whether AI systems have genuine experiences or whether honeybees are conscious. Or there’s one of my favourite articles out there by a guy called Brian Key, and he’s writing in the equivalent of academic caps lock, some of the most vehement academic writing I’ve ever seen, that fish cannot feel pain, followed by tens of people responding with articles talking about how fish do have pain, that they do have the neurobiology, that they can suffer, that they are conscious.
People love to talk about this stuff, even though none of them have actually got any closer to solving the hard problem of consciousness.
¶So What?
And more to the point, you don’t need to pay attention to any of this because none of it matters. Let me wrap up and show you why.
You know, these kinds of academic debates are precisely when I start to lose interest in the project, because so what? Under precisely what circumstances does any of this stuff matter? When would it actually matter whether something was truly conscious or illusorily conscious? Because if things seem conscious, we already know what to do about it.
And Sam Harris is a philosopher who has a nice bit on this, and I’ll link to it in the show notes. He’s got both an essay and a TED Talk. And what he says—and I’ll quote him—“Why is it that we don’t have ethical obligations towards rocks? Why don’t we feel compassion for rocks? It’s because we don’t think rocks can suffer. And if we’re more concerned about our fellow primates than we are about insects, as indeed we are, it’s because we think they’re exposed to a greater range of potential happiness and suffering.” That’s the quote.
So I think Sam’s pointing at this sort of secret hope that we have that by working out what consciousness is, we can reduce suffering. But we can do that now. We can do it by caring about things that seem to suffer in a way that makes them seem to suffer less. And we can do all of that without proving that they have qualia, because we’re just not close to understanding this distinction.
And some people reckon that we’ll never crack it. There’s a group that I didn’t talk about called mysterians, and they reckon that like an ant would never crack calculus, we’re never going to crack consciousness.
But I think, again, that’s sort of a distraction, because even if we could, I can’t actually tell what would change. Would we stop caring about animal welfare if we proved that they weren’t strictly conscious? Would we treat rocks differently if we found out that they were?
Of course not. Because it’s not an interesting question. Whatever consciousness rocks might have isn’t likely to change how we treat them, because what matters is the behaviour and how the behaviour expresses suffering. Whether there’s some ineffable “what it’s like” behind the curtain is practically irrelevant.
So why bother asking?
I’ll leave you with that.
¶Transcript
[00:00] Welcome to the btrmt
[00:11] Lectures. My name is
[00:13] Dr. Dorian Minors, and if there’s one thing I’ve learned as a brain scientist, it’s that there is no instruction manual for this device in our
[00:19] head. But there are patterns to the thing, patterns of thought, patterns of feeling, patterns of action, because that’s what brains
[00:25] do. So let me teach you about
[00:27] them. One pattern, one
[00:28] podcast. You see if it works for
[00:29] you. Now, this is another one in my series of bits that I have on questions that people ask me which seem important but actually don’t really end up mattering for most people, certainly not in the way that they’re typically
[00:44] deployed. When people are asking me questions about this, it’s because they’ve heard about an interesting problem in science, but often when I see it out in the wild, they are deployed like stupid questions that make you seem
[00:57] smart. So let me tell you about these irrelevant questions so you can avoid wasting mental energy on
[01:03] them. What is
[01:10] consciousness? Everybody wants to know lately, largely because we wonder if AI is
[01:16] conscious. But understanding whether something is conscious means that we first have to understand what consciousness
[01:22] is. And this is where you start running into
[01:25] problems. And these are problems that I actually don’t really think
[01:29] matter. Now, the typical way that people teach consciousness is to talk about the color red, the redness of
[01:36] red. And I have no idea why this is
[01:38] true. Maybe it’s because it’s a good illustration of the thing, because there aren’t actually many good illustrations of the
[01:46] thing. So let me give you an example of
[01:47] this. And for me, the cleanest example is Frank Jackson’s thought experiment that he called Mary’s
[01:53] room. So you imagine this woman, Mary, she’s a brilliant scientist, just like me, and for whatever reason, she was forced to investigate the world from some kind of black and white
[02:04] room. And she’s looking into a computer screen that is also completely black and
[02:09] white. And what she specializes in is the neurophysiology of
[02:15] vision. And in the process of her studying the world from a black and white room through a black and white computer screen is everything there is to know, every physical fact there is to obtain about color, about red, about ripe tomatoes or how we see the
[02:33] sky. You know, she knows the terms red and
[02:36] blue.
[02:37] She. She can describe everything about how color is processed in the
[02:40] brain. But the question that we want to know is what happens when Mary is released from a black and white room or her computer screen is transformed into a color
[02:52] monitor? Does she learn something
[02:56] new? Mary has never seen the color red before, but she knows Everything there is to know about
[03:02] it. And then one day she sees
[03:04] it. Has she learned something new about the color
[03:06] red? I think you’ll agree that she
[03:09] has. Certainly Frank Jackson thought
[03:11] so. Lots of people think
[03:12] so. There is some kind of knowledge beyond the physical properties that we understand about them,
[03:19] right? For red, it’s its
[03:20] redness. And that knowing about red isn’t the same as experiencing
[03:25] it. Now, this, whatever this is the redness of the color red, the feeling of pain that we experience when we’re slapped across the face,
[03:33] right? The feeling of beauty that we experience when we’re looking out over a vista,
[03:39] right? Whatever this is, is an example of what’s known as
[03:42] Qualia. And Thomas Nagel is another famous philosopher, famous for consciousness, who puts it in an interesting
[03:50] way. And I’ll link the paper, although honestly, it’s a bit
[03:53] impenetrable. But what he says is that although we can in theory understand everything there is to know about how bats echolocate,
[04:00] right? How they make their clicking sounds that allow them to see through their ears,
[04:05] right? We, we can understand the physics of sound waves, we can understand the physiology, we can understand the behavioral responses, the information processing that happens in the
[04:14] brain. We can understand all of this stuff, but we will never know what it’s like to experience the world through
[04:20] echolocation. There is something that it is like to be a bat, and it doesn’t really seem like you can pass along that subjective phenomenal character of batness with the description that is
[04:35] consciousness. So now to the
[04:37] problem. And the problem is, why does red have
[04:40] redness? Why is there something it is like to be at all, never mind something like it is to be a
[04:48] bat? And it’s a difficult question to
[04:51] answer. Chalmers famously called this the hard problem of
[04:55] consciousness. And, and he called it the hard problem of consciousness because he poses it against what he calls easy problems,
[05:02] right? There are phenomena that are related to Qualia, to experience that we can, in theory,
[05:08] explain. I’ll quote his book the performance of all the cognitive and behavioral functions in the vicinity of
[05:15] experience. Perceptual discrimination, categorization, internal access, verbal
[05:20] report. That’s the
[05:22] quote. You know, all of these are processes that lend themselves to
[05:25] examination. They can be explained functionally and
[05:27] mechanistically. To Chalmers, they are easy
[05:30] problems. The hard problem is explaining why these things are accompanied by a sense of experience, by
[05:36] Qualia. Why does Mary learn something new when she sees red beyond knowing all its physical
[05:42] properties? And why can’t we know what it’s like to be a
[05:46] Bat. That’s the
[05:46] question. And Chalmers uses the example of sort of automaton to drive this
[05:53] home. So you could imagine a person who goes about behaving in all the ways that you or I do, but they have absolutely no experience attached to that
[06:03] behavior. They’re some kind of zombie or a robot, just mechanically acting and reacting to the world around
[06:09] them. There doesn’t seem on the surface of it any reason to build that robot so that it also has to experience the
[06:18] stuff.
[06:18] Right? If you were going to save money, you would save money on the
[06:22] experience. It doesn’t need
[06:23] it. At
[06:24] least. At least in theory,
[06:25] right? Conceivably, that is the hard problem of
[06:29] consciousness. And there are a lot of solutions to
[06:32] it. And I’m going to detail them briefly in a
[06:34] second. But what I really want to point out is that none of them seem to really matter very
[06:39] much. So let’s get into
[06:41] that. So lots of people try and solve the hard problem of consciousness in lots of
[06:50] ways. And I have an entire article that details
[06:53] this. So I’ll link that in the show
[06:55] notes. But I will talk about them here kind of
[06:57] briefly. Maybe even a little more than briefly, to be honest, because I think it is kind of
[07:01] interesting. So the first solution to the hard problem of consciousness is the non materialist
[07:08] view. Non materialists say that there’s both, like, material stuff, physical stuff, and there’s this separate kind of experience
[07:16] stuff. A classic example of this is
[07:18] dualism. So this is the idea that there is a distinction between the mind and the body or the body and the
[07:25] soul. And this is kind of sliding out of fashion in an increasingly secular
[07:29] world. But it can be a secular
[07:32] position. Then there are the emergentists and the
[07:35] functionalists. And these people say that consciousness is just some kind of special property of the material world that emerges from specific configurations of material
[07:48] stuff. So in the same way that you get water, when you put together two oxygen molecules and a hydrogen molecule,
[07:54] right? You get this sort of property of
[07:56] liquidity. If you arrange neurons in a certain way, you get
[07:59] consciousness. That’s the sort of basic
[08:02] idea. And then there’s people who treat it as a
[08:07] mistake. There are illusionists and
[08:17] eliminativists. And these people say that thinking of consciousness at all is a
[08:22] mistake. Illusionists, the easier position to
[08:26] describe. They basically say that consciousness is an
[08:29] illusion. In the same way, I guess, that movies are a sort of
[08:32] illusion. Movies produce the illusion of motion by flashing still images so fast that we process them as
[08:39] moving. And in the same way, maybe consciousness, this experience is produced by a bunch of snapshots of what the brain is doing at any given point in
[08:49] time. I think it was Daniel Dennett who called it an edited digest of all the events going on in the brain, like a general sense of the shape of
[08:58] things. And then there’s the sort of last group, and these guys are called
[09:03] panpsychists. And what they say is that maybe consciousness sits in
[09:09] the space that physics doesn’t
[09:12] explain. So maybe it’s like the intrinsic nature of stuff, and I’m gonna have to explain that a little bit, aren’t
[09:19] I? So physics tells us what things
[09:23] do. It doesn’t tell us what things
[09:25] are. Physics can tell us that atoms have a certain mass, for example, but mass is characterized by behavioral
[09:33] properties. So you’ve got like, gravitational attraction or resistance to
[09:37] acceleration. I don’t know, I’m not a
[09:39] physicist. But these physical properties don’t actually have anything to say about their underlying
[09:46] nature. So maybe physics describes what things do, and consciousness is what things
[09:53] are. And at this point, I think we can stop, because if you’re getting tired, I’ve sort of made my
[09:59] point. Because what’s super annoying about all of this is that it’s impossible to have a conversation with all these different perspectives in the room, because you basically have to take whichever one you prefer on faith, and I’ll link to an article that describes this in more
[10:15] detail. All of them suffer from the same explanatory
[10:18] gap. Whether you think that consciousness arises from the configurations of neurons,
[10:23] or it comes from a soul, or it comes from this sort of space that physics leaves
[10:28] unexplained. You still have to explain how it actually interacts with the stuff that we do know
[10:34] about, Chalmers’s easy problems, and nobody’s managed
[10:38] this. Nobody’s even
[10:39] close. Now, the modal position in academia is the emergentist one, that consciousness sort of comes about with the right configuration of neurons or
[10:47] whatever. Back at my old lab, you ask any given brain scientist there, and this is basically what they would
[10:53] say. And in fact, they would probably be confused that there were other perspectives on
[10:58] this. I would have said this a while ago, even having studied consciousness as part of my academic
[11:05] trajectory. And I think that this view is so popular because it feels like science, even though the explanatory gap is actually
[11:12] identical. It feels more scientistic, and that’s what we sort of value
[11:17] now. And maybe it also feels reasonable because consciousness certainly seems like it’s dependent on our perceptions,
[11:24] right? The hard problem seems related to the easy problem because you can’t really experience something without perceiving it
[11:30] first. So I think there’s this sort of optimism that if we just study the brain hard enough, eventually consciousness will turn from a hard problem into an easy
[11:38] one. And so what you see is all these debates about whether AI systems have genuine experiences or whether honeybees are
[11:46] conscious. There’s one of my favorite articles out there by a guy called Brian Key, and he’s writing in the equivalent of academic caps lock, some of the most vehement academic writing I’ve ever seen, that fish cannot feel pain, followed by tens of people responding with articles arguing that fish do feel pain, that they do have the neurobiology, that they can suffer, that they are
[12:11] conscious. People love to talk about this stuff, even though none of them have actually gotten any closer to solving the hard problem of
[12:23] consciousness. And more to the point, you don’t need to pay attention to any of this because none of it
[12:31] matters. Let me wrap up and show you
[12:34] why. You know, these kinds of academic debates are precisely when I start to lose interest in the project, because so
[12:50] what? Under precisely what circumstances does any of this stuff
[12:55] matter? You know, when would it actually matter whether something was truly conscious or illusorily
[13:01] conscious? Because if things seem conscious, we already know what to do about
[13:06] it. And Sam Harris is a philosopher who has a nice bit on this, and I’ll link to it in the show
[13:11] notes. He’s got both an essay and a TED Talk, and what he says, and I’ll quote him: why is it that we don’t have ethical obligations towards
[13:20] rocks? Why don’t we feel compassion for
[13:23] rocks? It’s because we don’t think rocks can
[13:25] suffer. And if we’re more concerned about our fellow primates than we are about insects, as indeed we are, it’s because we think they’re exposed to a greater range of potential happiness and
[13:35] suffering. That’s the
[13:36] quote. So I think Sam’s pointing at this sort of secret hope that we have that by working out what consciousness is, we can reduce
[13:46] suffering. But we can do that now,
[13:48] right? We can do it by caring about things that seem to suffer in a way that makes them seem to suffer
[13:54] less. And we can do all of that without proving that they have qualia, because we’re just not close to understanding this
[14:01] distinction. And some people reckon we’ll never crack
[14:05] it. There’s a group that I didn’t talk about called mysterians, and they reckon that, just as an ant would never crack
[14:13] calculus, we’re never going to crack
[14:15] consciousness. But I think, again, that’s sort of a distraction, because even if we could, I can’t actually tell what would
[14:21] change. Would we stop caring about animal welfare if we proved that they weren’t strictly
[14:27] conscious? Would we treat rocks differently if we found out that there was something it was like to be
[14:32] them? Of course
[14:33] not.
[14:34] Right. Because it’s not an interesting
[14:35] question. Whatever consciousness rocks might have isn’t likely to change how we treat them, because what matters is the behavior and how the behavior expresses
[14:45] suffering. Whether there’s some ineffable what it’s like behind the curtain is practically
[14:50] irrelevant. So why bother
[14:52] asking? I’ll leave you with that,
[14:54] Sam.
Anthologies: Betterment, Thought Architecture, Animal Sentience, Narrative Culture, Neurotypica, AI, On Being Fruitful, On Ethics, On the Nature of Things, On Thinking and Reasoning