Values Don't Matter
24 Jan 2026
Values are virtue ethics in disguise—traits we’re expected to cultivate. But virtues are context-dependent (courage for a soldier isn’t courage for a teacher) and the situation overwhelmingly drives behaviour. The real task is designing the context, not listing the virtues.
¶Show Notes
Further reading:
- The btrmt. article that inspired this
- Overview of the ethical landscape
- On catastrophic leadership failure
- Everything is choice architecture
- Making strong group dynamics
- On motivation
- Belief-consistent information processing
References:
- MacIntyre’s After Virtue
- Situationist critique of virtue ethics
- Person-situation debate
- Milgram’s obedience experiments
- Stanford Prison Experiment
- John Doris on the ecological approach to ethics
- Maria Merritt on “humility ethics”
- Moore and Beadle on organisations as MacIntyrean practices
¶Audio
¶Pattern
- Pick one of Davies’ basic patterns:
- What seem to be individual phenomena are actually organisational/structural
- (Or: What seems to be X is actually non-X)
Values/virtues seem to be about individual character, but are actually about context and situation. The individual can’t reliably embody virtues; the situation drives behaviour.
¶Speechnotes
¶Intro
Welcome to the btrmt Lectures. My name is Dr Dorian Minors, and if there’s one thing I’ve learned as a brain scientist, it’s that there’s no instruction manual for this device in our head. But there are patterns. Patterns of thought, of feeling, and of action. That’s what brains do. So let me teach you. One pattern, one podcast. You see if it works for you.
Today’s pattern: values don’t matter. Or at least, not the way we think they do.
¶Assumption
What is the assumption from the pattern?
- Want to play a bit today
- Teach ethics, and it can get a little tricky
- A study of ethics should tell you how to be—what you should do. What’s “good”.
- Mostly, it just raises more questions than it answers, particularly at work. If your job involves going out and trying to work out who is a lawful target—shooters, women, children—you have an ethical framework that helps—the Law of Armed Conflict. But that’s not going to be particularly encouraging when you get the go-ahead to engage a child soldier.
- This is how moral injuries happen. More questions than answers.
- Now, this is more of a work conversation, and we consider these things very seriously there—but this isn’t Sandhurst, it’s just Dorian’s little podcast, so
- But let’s pivot away from that kind of heavy talk. I want to play, like I said. A little lighter.
- One thing I’ve noticed is that there’s one ethical framework that everyone loves—implicitly, or explicitly
- You’ll see it everywhere once I tell you about it
- And the best thing about it is that it’s almost as useless as it is ubiquitous as it’s ordinarily deployed
- I’m talking about values.
- Now, I noticed this when an old colleague of mine called me up for help—his start-up was big enough to start thinking about what their company values were.
- But as I thought about it, I realised this isn’t just about organisational values, but values anyplace people collect in a serious way to do things, down to house rules in a D&D game.
- When you set out what kind of person people should be trying to be, you’re establishing values—and I’ll talk more about this in a moment, but the point is
- People love values. They just have no idea how to implement them. They want them to work, but they don’t, and there don’t really seem to be good answers as to why. And most people don’t have the kind of cash my colleague does to get someone like me to help them fix the problem.
But, as I found out, ethics does actually have something to say.
Let me give you a couple of examples. The British Army has a good set, as you might imagine:
The British Army claims courage, discipline, respect for others, integrity, loyalty, and selfless commitment. The Australian Army has a similar set, but we add in excellence. Take that as you will.
This is the kind of standard case. Most organisations will do something like this. My colleague will do something like this, probably. If it’s a particularly snappy kind of organisation, one that notices that organisational values end up mostly just being decoration, they might make them into verbs—be courageous, be disciplined. You know. That kind of thing. The hope, if you read the Harvard Business Review or whatever, is that making them doing words will make them easier to do.
Tesco is an interesting counterpoint. And I like Tesco because it was once owned by Bermudians, and as a Bermudian myself, I have some loyalty. It’s a grocery store. Its values: “no one tries harder for customers”, “we treat people how they want to be treated”, and “every little help makes a big difference”.
This shows you what I mean about how everyone wants them but worries that no one will actually do them. Tesco has basically gone for SMART goals. Look how specific.
When you’re doing values in your non-organisational groups, you go more for the Tesco thing.
And you’ll go for something that’s closer to rules than values, if we’re being honest, but the intent is the same. House rules in a D&D group can include stuff like “don’t be a rules-lawyer” or “be a turn-taker”. Talking about how people should be. Talking about values.
People love values.
So why don’t they work?
And that’s what I want to talk about today.
¶Subversion
What is the subversion from the pattern?
Values are really just virtue ethics in disguise.
Ethics is one of the main branches of philosophy. I have a sort of ethics primer that explains things more substantively, but essentially, it’s the philosophy of how to be good.
We all want to be good, but what is good? What does good mean? And, as such, how should we go about achieving that goodness?
Virtue ethics are a particular approach to these questions—one that makes the most sense, at least to me, when I explain the other kinds first.
You might think “well, just do the least harm and the most good”. That’s consequentialism, and I reckon most people are intuitively consequentialist. But consequences aren’t all commensurate—they don’t have the same cash value so to speak. If some surgeon killed one person, harvested all their organs, and used those organs to save five people, then we probably wouldn’t really be that interested in her consequential calculus. Or for a more realistic example, I say elsewhere:
There this way of thinking that says, if we just increase overall economic wealth, everyone will be better off … You know, bring the average up, and that’ll bring everyone up, sort of thing … [but] there seems something quite odd about preferring future, hypothetical people over the suffering of real, current people
So you might resort to principles instead. Killing people might save lives, but maybe you just think you shouldn’t kill, because of the principle of the thing. You might sometimes feel it’s appropriate to cause harm, but you have a general duty to try not to harm people. Indeed, this is why we follow laws. Not because they’re always good, but because we believe in them on principle. This is deontology.
But obviously principles and rules and duties don’t always hold. Even as I described them, I hedged with the old “they’re not always good”. For example, if your grandma is sick, you’re not going to be focused on the speeding laws, or the consequences of missing your dinner date, you’re going to drive as fast as you can to help your grandma. That’s a care ethic—when we prioritise our loved ones over all else.[^8]
Virtue ethics are an attempt to:
shift the question from “what should I do?” to “what kind of person should I be?” The idea here is that understanding what the right ‘principles’ might be, or the extent of the consequences of our actions, is hard. We’re not likely to get it right all the time. So perhaps it’s better to try to become good people instead. We like good people, and we don’t mind when they make ethical errors because we know “their hearts are in the right place”. We think they’re much more likely to do good than bad. So perhaps it’s better to try and be one of these good people, rather than try to figure out what each of our actions should be, because then we’re more likely to do good than bad.
So rather than asking “what should I do”, you ask “what kind of person should I be”, and then hopefully you, as a logical consequence, just do more good stuff.
More or less. You get it.
And hopefully, you’ve made the leap now from virtues to values.
Like virtues, organisational values are often the desirable qualities of people. They’re aspirational about character development, just like virtues are. They assume these things can be cultivated in the organisational culture. They are explicitly about what “good” means. And they are presented in this manner because they are really difficult to codify—they aren’t meant to be rules or principles, it’s not a code of conduct—you’re meant to embody organisational values.[^7]
Values are virtues. At least in this specific form.
This is a big problem, because there are two massive issues with virtue ethics.
The first problem is the indeterminacy problem. Even back when Aristotle was formulating virtue ethics as we have them today, he pointed out that virtues sit between two extremes.
What’s courage? Hard to say, but we recognise cowardice, and we recognise recklessness. So not either of those.
What’s discipline? Well, it’s not chaos, and it’s not brittle rigidity either.
You get it?
Well if you get it, then you get the problem. At what point does cowardice get an upgrade to courage? What’s the line over which an act of bravery becomes negligent?
It’s not very clear
A recent, very influential virtue ethicist—MacIntyre—talks about this specifically in organisations. I’ll leave a link in the show notes to his book. For MacIntyre, virtues aren’t just different in degree, they’re different in kind. They have to be embedded in practices to make sense.
What courage means for a doctor has almost nothing to do with what courage means for a soldier or a teacher. When the British Army says it wants officers to be courageous, does it mean the courage of a frontline soldier? A logistics officer? The recruitment team?
These things aren’t the same! And these are all within the same organisation! Different ideas and different standards of excellence. Of virtue.
So that’s the first problem.
Ok, so let’s move our attention to the second problem.
There’s this thing called the situationist critique of virtue ethics, which comes out of the person-situation debate. I’ll put links in the show notes.
But essentially, these are a collection of people who pay particular attention to the fact that virtue ethics are all about character. That what we want to do is embody these virtues or values as traits of ourselves.
Then they notice the very troubling lack of evidence that traits are a thing. That the vast majority of empirical evidence points to the fact that there is very little, if anything, that is stable in the human, and rather, the situation seems to overwhelmingly drive behaviour.
And that’s not to say there are no traits. This could be an artefact of experimental design—how do you design a test of how people behave under different circumstances without changing the circumstance? It’s kind of like the nature vs nurture question (show notes link)—they’re so tightly intertwined that it’s hard to tease them apart.
And there is also some stability. Personality is kind of stable, as is IQ, so we aren’t out of options for stable human behavioural attributes.
But there isn’t much else, so it’s kind of hard to imagine how something like “moral character” might be found within the ones we do have on hand.
On the other hand, there is this handful of experiments from the ’60s and ’70s that demonstrate that the situation seems to annihilate the individual capacity for virtue. I’m thinking of Milgram’s electroshock experiments or the Stanford Prison Experiment: examples of catastrophic ethical leadership failure, where the situation led average people to shock someone ostensibly to death in the name of science, or led undergraduate students to brutalise each other while simulating a prison. And while most people get the basic facts of these wrong, it’s because they are actually simplifying over detail that makes the influence of the situation on the participants’ behaviour even more obvious.
While everyone is asking “what virtues comprise the best moral character?” and “what values should we inculcate in our organisation or our rowing club?”, the Situationists are asking “is there even such a thing as character?”
It’s not very heartening.
Thankfully, I will leave you with some answers.
¶Implication and outro
What are the implications for the audience?
We haven’t painted a very flattering picture of this particular project.
I’ll summarise. We’ve known from the start that virtues themselves are a little vague. Somewhere between two extremes. But they are also located differently on that spectrum depending on what practice you’re engaging in. It’s not a moral continuum, it’s a moral landscape with many peaks and valleys. Very easy to get lost.
But it’s worse than that, because, even if you manage to locate the peaks you care about, people won’t reliably display them. The situation overwhelmingly drives their behaviour, no matter how committed they are to embodying the virtues you want them to embody.
Now, this entire article was prompted by an old colleague. He called wanting my advice on how to help his executive team with their new ‘values initiative’. A start-up, large enough to start looking to embed values into things, as all organisations eventually do.
I spent about seven minutes describing all this before realising that he probably didn’t really care about the background. No one ever does. It’s leadership consulting after all, not brain science. They want sexy-sounding solutions.
And the sexy sounding solution here is:
If context is all there is, then just design the context.
This isn’t a new idea. MacIntyre, from before, makes it clear that institutions need to create structural opportunities for action; John Doris talks about empirically informed, context-sensitive approaches; and Maria Merritt talks about the role of contexts, and humility in the face of contexts, when it comes to virtue-ethical behaviour.
But we don’t need to be so high-falutin’
Practical advice:
- Come up with values if you want, but articulate what that means across different practices—it’s not enough to tell people to be courageous—it needs to be clear what that is.
- Google famously dropped “don’t be evil”—because this is silly. It might feel like stealing your attention is evil, but it doesn’t correspond to any obvious poor outcomes (link in the show notes—social media), and no one liked RSS—I bet most of you don’t even know what that is. So what does “evil” even mean for an engineer?
- In contrast, surgical teams have clear “speak up” protocols—anyone can call a halt, with an exact phrase—courage operationalised for practice.
- Another easy way to do this is to hijack the human tendency toward conformity under uncertainty—something I talk about elsewhere (show notes)—have influential people model the virtues you want.
- There’s a quote from an Aussie general that you hear around the halls of Sandhurst and Duntroon—the standard you walk past is the standard you accept. It’s true, though. If senior people cut corners, so does everyone else.
- Identify situational factors that motivate people and design the environment to encourage them. I’m critical of this elsewhere, but the choice architecture framework does work—put a cheap bottle of wine and an expensive bottle of wine on the menu next to the wine you want people to buy, and they’ll buy it. Few people take pride in being cheap, or in being blithely fancy, so they’re motivated to buy your bottle. There are heaps of models of motivation around—just use one to work out what will motivate people to play D&D properly. Open-plan offices, as it turns out, aren’t really that good, but they certainly force a certain kind of behaviour—force something other than private conversation. Or—putting hand sanitiser at eye level makes people use it heaps more.
- Or skip that, and concentrate on beliefs: people only pay attention to what they believe is important.
- The checklist culture in aviation is a good example. Pilots believe checklists save lives, so they use them even when cocky or tired.
- Alternatively, if salespeople only get messaging that numbers are the priority, they’re going to ignore “customer first” values.
- I talk about this elsewhere—show notes—but there’s a strong argument from the kind of work I used to do: attention is belief-shaped—there’s a lot of evidence that we just don’t even notice things we don’t believe are important.
Choose your virtues, sure. But don’t spend too long waiting for people to adopt them. Design the context to help people along. Otherwise, the virtues will hardly matter.
¶Edited Transcript
Below is a lightly edited transcript. For the article that inspired this one, see Values Don’t Matter.
Welcome to the btrmt Lectures. My name is Dr Dorian Minors, and if there is one thing I’ve learned as a brain scientist, it’s that there is no instruction manual for this device in our head. But there are patterns—patterns of thought, patterns of feeling, patterns of action—because that’s what brains do. So let me teach you about them. One pattern, one podcast, and you see if it works for you.
Now, I want to play a little bit today. I teach ethics—or at least the behavioural science of ethics—here at the Royal Military Academy, Sandhurst. And it can get a little bit tricky because a study of ethics should tell you how to be, what you should do, what you ought to do, what’s good. But often it seems like it raises more questions than it answers, particularly at work. If you consider that the people I teach, their job involves going out and trying to work out who is a lawful target—from shooters to women and children—you have an ethical framework that helps. They have the Law of Armed Conflict, which helps define what a lawful target is. But it’s not going to be particularly encouraging when you get the go-ahead to engage a child soldier. This is precisely how moral injuries happen, because this kind of ethical dilemma raises more questions than it answers.
Now, that is more of a work conversation, and we consider things very seriously there. But this isn’t Sandhurst—this is just Dorian’s little podcast. And like I said, I want to play a little bit today. So let’s move away from that heavy talk and concentrate on something that I’ve noticed in my time teaching the behavioural science of ethics here.
¶Everyone Loves Values
There is this one ethical framework I’ve noticed that everyone loves, either implicitly or explicitly. And once I tell you about it, I think you’re going to see it everywhere. And the thing that I like the most about it is it’s almost as useless as it is ubiquitous as it’s ordinarily deployed. What I’m talking about is the concept of values.
I noticed this when an old colleague of mine called me up for help. His startup was big enough now to start thinking about what their company values were. And as I thought about it, I realised that company values are actually something that all places where people collect seriously to do things have—from institutions and organisations to sporting clubs, or even the house rules in a D&D game. Anytime people come together to set out what kind of person people should be trying to be in a group, they’re establishing values. That’s the project they’re engaged in.
I’m going to give you a few examples of what that looks like. But the point is, across the board it seems like people really love values and they instinctively try to inculcate them in their groups. But repeatedly, it seems like people really struggle to implement them. They want them to work, but values don’t seem to work. And there aren’t a lot of easy answers as to why. Most people don’t have the kind of cash my colleague does to get someone like me to help them fix the problem. But interestingly, teaching ethics, I found out that ethics does have something to say—and it’s pretty low-hanging fruit.
So let me give you a few examples of values to really ground it, show you how they sort of fail, show you what ethics says, and then show you how, if you care how people coming together in a group behave, you can address that problem.
The British Army has a very good set of values, as you might imagine from an organisation like that. The British Army claims courage, discipline, respect for others, integrity, loyalty, and selfless commitment as their values. The Australian Army has an almost identical set, except that we add in excellence—you can take that as you will.
This is sort of a standard case. You won’t find that organisations stray too far from a set of values like this. My colleague will end up doing something like this, probably. If it’s a particularly snappy new kind of organisation who notices that organisational values end up mostly just being decoration, they might make them into verbs. So instead of courage and discipline, you’d end up with “be courageous” and “be disciplined.” The hope, if you’re the kind of leader who reads the Harvard Business Review or whatever, is that by making them doing words, you’re upgrading this historical project of value-making by making them easier for your people to do.
An interesting counterpoint is Tesco. I like Tesco because it was once owned by Bermudians, and as a Bermudian myself, I have some sort of loyalty there. It’s a grocery store here in the UK and it uses values that sound more like this: “No one tries harder for customers.” “We treat people how they want to be treated.” “Every little help makes a big difference.” This is sort of starting to evidence what I mean about how everybody wants values, but everybody also worries that nobody’s going to actually do them. Tesco’s basically gone for SMART goals here—look how specific these things are.
When you’re doing values in your non-organisational groups—your rowing club, your chess group—you’re going to go more for the Tesco kind of thing. You’re probably going to go for something that’s closer to rules than values, if we’re being honest. But the intent is the same. The house rules in a D&D group are going to include stuff like “don’t be a rules lawyer” or “be a turn-taker.” You’re talking about how people should be. You’re talking about values.
People love values. They put them everywhere they collect in groups. So the question becomes: if we have this instinct towards values, why don’t they work? And that is what I want to talk about today.
¶Values Are Just Virtue Ethics
Values are really just virtue ethics in disguise. Let me tell you what that means.
Ethics is one of the main branches of philosophy. I have sort of an ethics primer that explains things a bit more substantively, but essentially it’s the philosophy of how to be good. We all want to be good, but what even is good? What does good mean? And as such, how should we go about things to achieve that goodness?
Virtue ethics is a particular approach to these questions. And I think that virtue ethics actually make the most sense, at least to me, when I explain the other kinds first, to help you understand what kind of problem they’re trying to solve.
I think what you’d find is most people intuitively are consequentialists. We like to judge whether we’re being good or not by the consequences of our actions. You go about the place and you make decisions by deciding: is this going to hurt somebody? Is this going to help them? How do I do the least harm and the most good? Focus on the consequences. I think very close to our hearts, we hold this sort of consequential calculus.
But the problem with consequentialism is that consequences aren’t all commensurate. They don’t all have the same sort of cash value, so to speak. We’ll take one of the examples that you use in the lecture room—you’ll get this in an Ethics 101 course. Let’s say you’ve got a surgeon, and this surgeon just straight up murdered somebody, harvested their organs, and used those organs to save five other people. Numerically, this is a pretty good deal, right? One dead person, five people who would have died who are now alive. But we’re not really interested in the consequential calculus here. There’s something a little off about comparing the one to the five in that circumstance.
For a more realistic example, I like to think about the kind of effective altruism groups that pop up all over university campuses. There’s this sort of way of thinking called longtermism that says if we just increase overall economic wealth, everybody’s going to be better off. Sort of like if you bring the average up, then you’re going to bring everybody up. You just concentrate on raising GDP, then, just like the standards of living are better now than they were in the Middle Ages, in the future everybody’s going to be much better off. And the problem with this is there seems something very weird about preferring future hypothetical people with a really good economic profile over the suffering of real current people, when we have to make policy decisions that hurt now to achieve that future state.
So this is consequentialism. I think it’s intuitive right up until it’s not. And then it’s really hard to figure out what consequences actually matter.
So you might not just rely on consequences—and you almost certainly don’t. The next one that people will bring up in an Ethics 101 class is something that we could call principle-based ethics or rules-based ethics. Take our surgeon before. Killing people might save lives, one for five. But on the principle of things, it’s not really that sweet to kill people, so maybe we should just not kill people as a rule. And even in cases where killing somebody could save five people, we just treat it as a blanket rule that killing is not appropriate. This is a principle-based ethical approach to behaviour.
Another example of this is laws. We don’t follow laws because they’re always perfect. We don’t follow laws because they’re always right. We follow them because on principle we believe in lawful societies. You don’t speed not because you think speeding on this highway surrounded by nobody is going to harm anybody, but because you believe in the principle of the law. You believe in the rules. This is called deontology, and it’s another approach to ethics—one of the main three, along with virtue ethics, that people will teach you in a basic Ethics 101 course.
There are actually a lot of others and they all try to fill the gaps where the others fail. We’ve already talked about how consequences fail—not all consequences have the same value, so it’s hard to measure them against one another, particularly in edge cases. And littered throughout my example of principle-based ethics, we had how laws may or may not be right, but we follow them on principle, so we know that these things fall down.
People have come up with other approaches to try and fill the gaps. The one that I like to use here at Sandhurst is called care ethics. I like to use care ethics because it’s a feminist ethics, and I get a sort of satisfaction teaching feminist ethics at an institution like Sandhurst. But I think it’s very poignant because care ethics talks about the ethics of care. If your grandma’s sick, you’re not going to be focused on speeding laws. You’re not going to be focused on the consequences of ditching your dinner date to go and be with your grandma. You’re going to drive as fast as you can to help her. And that’s a care ethic—because here you are prioritising your loved ones over everything else. That is a type of ethic. It’s a value that you hold close.
Virtue ethics, back to our main topic, are an attempt to shift the question from what these other frameworks are trying to answer. Virtue ethics aren’t asking “what should I do?” What are the consequences of this? What do the laws say about what I should be doing? Do I love this person enough to take these actions? It’s shifting the question from focusing on the moment-to-moment decisions to the kind of person you should be trying to be.
The idea here is that understanding what the right principles might be or the extent of the consequences of our actions might be—that’s hard and we’re not likely to get it right all the time. So maybe we should try and focus on being good people instead. We like good people. We don’t mind when good people make mistakes because we know that their hearts are in the right place and we think they’re much more likely to do good than bad. So maybe it’s better to try and be one of these good people rather than try and figure out what each of our actions should be, because we’re much more likely to do good than bad.
So we’re not asking what we should do, like principle-based ethics or consequentialist ethics or even care ethics do. We’re asking: what kind of person should I be? And then hopefully you, as a logical consequence, are just going to do more good things. More or less. That’s virtue ethics. I think you get it.
And hopefully, if you get it, you’ve made the leap now from virtues towards values. Because like virtues, organisational values are often the desirable qualities of people. They’re aspirational about character development, just like virtues are. They assume that these things can be cultivated in the organisational culture and they’re explicitly about what good means. They’re presented in this manner because they’re difficult to codify. You can have codes of conduct that tell people what they should be doing—rules. You can have rewards and punishments that help people pay attention to the consequences. But it’s very difficult to account for every situation that’s going to make you act in the best interests of the customer. So maybe instead you should concentrate on trying to inculcate that as a value. The kind of people our employees should be, the kind of people our D&D players around the table should be trying to be.
Values are virtues, at least in this specific form. So there’s a little primer on ethics—virtue ethics trying to fill gaps, in fact trying to approach the whole problem of ethics from another angle. And this is the thing that we are so intuitively drawn to when we try and collect people together to do things. We want to put values in that help people understand what kind of people they should be in groups.
And this is a huge problem because there are two massive issues with virtue ethics.
¶The Indeterminacy Problem
The first problem with virtue ethics is what’s known as the indeterminacy problem. Even back when Aristotle was formulating virtue ethics as we know them today, he pointed out that virtues sit between two extremes.
What is courage? It’s kind of difficult to say what courage is, but we certainly recognise cowardice and we also recognise recklessness. So it’s not either of those. It’s somewhere in between the two things. Similarly with discipline—well, discipline isn’t just chaos, and it’s also not brittle rigidity. That’s not what we mean by discipline. It’s somewhere in the middle of these two things.
You get it, and if you get it, you might have already gotten the problem, which is: what is the point at which cowardice gets an upgrade into courage? What’s the line over which some act of bravery stops being brave and courageous and starts becoming negligent? It’s not very clear.
There was this recent—I mean, recent, the last 50 years or so—very influential virtue ethicist, Alasdair MacIntyre, who talks about this specifically in organisations. I’ll link to the book in the show notes because I think it’s interesting. For MacIntyre, virtues aren’t just different in degree, they’re also different in kind. So the problem I just identified is this sort of continuum—courage is somewhere between cowardice and recklessness. MacIntyre is saying that’s even worse because that differs depending on what it is that you’re doing. They have to be embedded in practices to make sense.
What courage means for a doctor, versus recklessness and cowardice, has almost nothing to do with what courage means for a soldier or for a teacher. When the British Army says it wants officers to be courageous—when I’m trying to teach them what that means—do we mean the courage of a frontline soldier, or do we mean the courage of a logistics officer? Or do we mean the courage of the officers that populate the recruitment team? Courage in these circumstances isn’t the same thing. And all of that is within even the same organisation. You have different ideas and different standards of excellence. You have these different standards of virtue based on the practices you’re engaged in.
So indeterminacy: a virtue is something that sits between two extremes, but those extremes and that middle differ depending on what it is that you’re doing. That’s the first problem, and it’s not the worst problem.
¶The Situationist Critique
I think the next problem is the worst problem, which is called the situationist critique of virtue ethics. This comes out of a broader area of behavioural science called the person-situation debate. I’ll put links to the Wikipedia in the show notes—I think this is one of the great Wikipedia reads.
Essentially, there are these people who pay particular attention to the fact that virtue ethics are all about character. What we want to do is embody virtues or values as traits of ourselves. And then they also notice that there’s this sort of troubling lack of evidence that traits are a thing. The vast majority of empirical evidence points to the fact that there’s very little, if anything, that is stable in the human. And rather, what seems to overwhelmingly drive human behaviour is the situation.
Now, that’s not to say that there are no traits. This could be something that’s an artefact of experimental design—how do you design a test to demonstrate how people behave under different circumstances without changing the circumstance? It’s very similar to the nature versus nurture question. The argument’s basically the same. These things are so tightly intertwined that it’s very hard to tease them apart.
And we also know that there is some stability. Personality is kind of a stable thing. And IQ is a pretty stable thing—not entirely stable, they can change, but they are stable enough that we like to measure them. They wouldn’t be interesting if they weren’t at least a little bit stable. So we’re not out of options for stable human behavioural attributes, but outside of these few things, there isn’t much else. And as a consequence, it’s kind of hard to imagine how something like moral character might be found nested within these stable kinds of traits, like personality or like IQ.
And then contrasted against that, there’s this handful of experiments started in the 60s and 70s—but they extend until now—that demonstrate that the situation can be made to annihilate the individual capacity for virtue. I’m thinking of Milgram’s electroshock experiments or the Stanford Prison Experiment. These are examples of catastrophic ethical leadership failings in which the situation led average people to—for example, in the Milgram electroshock experiments—shock somebody ostensibly to death in the name of science, or in the Stanford Prison Experiment, led undergraduate students to brutalise each other while simulating a prison.
And while most people—and I complain about this elsewhere—get the basic facts of these experiments wrong, it’s actually because they’re simplifying over details that make very clear just how influential the situation can be if we try really hard.
So a lack of stable traits in humans, measured against evidence that the situation really overwhelmingly drives human behaviour. While everybody is asking “what virtues comprise the best moral character?” or “what values should we be inculcating in our rowing club?”, the situationists are asking: is there even such a thing as character?
It’s not very heartening, and it should make you very worried if you’re the kind of person who’s trying to think about what values you want in your organisation. Thankfully, I wouldn’t be doing this little lecture if I didn’t have answers for you.
¶Design the Context
We haven’t painted a very flattering picture of the project of values or virtues or virtue ethics. I’ll summarise. We’ve known from the start that virtues themselves are a little vague—they sit somewhere between two extremes. But it’s not just that. They’re also located differently on that spectrum depending on what practice you’re engaging in. So it’s not this continuum from cowardice to recklessness. It’s this sort of moral landscape with many peaks and valleys where courage means different things depending on what you’re doing. And it’s very easy to get lost in this hilly terrain.
And it’s worse than that, because even if you manage to locate the peaks that you care about, people aren’t going to do anything about it. They’re not reliably going to display those peaks. The situation overwhelmingly drives their behaviour, no matter how committed they are to embodying the virtues you want them to embody. That’s what all the empirical evidence seems to suggest.
Now, like I said, this entire lecture was essentially prompted by an old colleague who called me asking how he could help his executive team with their new values initiative. And I spent about seven minutes describing what is looking like it’s going to be maybe 15 or 20 minutes for you before realising that he—possibly like you—probably didn’t really care about the background. Nobody ever does. It’s leadership consulting after all, not brain science. And you are probably listening to this on a drive to work. Instead you want sexy-sounding solutions that can help you in your group enterprises.
So I’ll give you the sexy-sounding solution, which is: if the situation is all that matters, if the environmental context is all there is, then just design that. The situation. The context.
This isn’t a new idea. MacIntyre, who I was telling you about before, makes it clear that institutions need to create structural opportunities for action. We have other ethicists too. John Doris talks about empirically informed, context-sensitive approaches to ethical behaviour. Maria Merritt talks to something similar, but she speaks more to how having an attitude of humility can help you identify that context dependency.
I don’t actually think that you need to be so highfalutin as all that. You could read those people—you’ll get a lot of good ideas—but you could also just be very straightforward about it.
Come up with values. But if you want people to do them, first of all, you have to solve the indeterminacy problem. You have to come up with values that you can then articulate to show people what it means. It’s not enough to tell people to be courageous. It needs to be clear to them what that means.
Google, for example, famously dropped “don’t be evil” from their manifesto. That seemed like a real problem, but actually having it wasn’t useful at all. If you’re a software engineer working at Google, you know that everybody’s sort of upset about all this attention-stealing algorithmic behaviour that’s going on, but it’s not really clear why it’s bad other than people don’t like it. It’s not corresponding to any particularly obvious trends and negative outcomes for people. So it seems like stealing your attention is evil. But equally, nobody likes the non-algorithmic alternatives. Nobody liked RSS, and that was around for as long as Google has been around. I bet most of you listening right now don’t even know what RSS is. So what does it mean for a software engineer working for Google to not be evil? It doesn’t make any sense to include it if it’s not clear what that means.
In contrast, surgical teams have made this very clear. They call them “speak up” protocols. Anybody in the surgical theatre can call a halt, often with an exact phrase. And it’s often operationalised at certain points—you’re supposed to call it at a certain point in the operation if you notice something’s wrong. So here courage is actually operationalised in practice. You have to show people what it means.
You can also do that by hijacking the human tendency for conformity. This is something that I teach at Sandhurst. People don’t like the idea of conformity, but it’s actually very useful because when we don’t know how to behave in groups, we conform to solve that problem. So just have influential people model those virtues.
There’s this sort of quote from an Aussie general that you hear around the halls of the military academy here at Sandhurst and also back home at Duntroon: “The standard you walk past is the standard you accept.” It sounds kind of trite, but it’s true. Because if senior people cut corners—if the leaders of a group cut corners—everybody else is going to as well. So figure out how you want people to behave and show people how to do it. They’ll conform to you if they don’t know how they’re supposed to behave.
That’s one thing you can do, but the other thing you can do is identify what situational factors need to be present to motivate people and design the environment to encourage this. Now this is classically called choice architecture, and I’m kind of critical about this elsewhere. But it does work in certain circumstances.
For example, if you put a cheap bottle of wine and an expensive bottle of wine on your wine menu next to the bottle of wine that you want people to buy, then they’re going to buy that—because most people don’t take any pride in being needlessly cheap and they don’t take any pride in being stupidly reckless with their money. So they’re motivated to buy the medium-priced bottle.
There are heaps of examples of this and heaps of models of motivation that you can use to help you work out what will motivate people to behave a certain way in your group. Open-plan offices are a good example—a good example executed poorly, because I think there is evidence to suggest that they’re not really very good for productivity or people’s wellbeing, but they certainly force a certain kind of behaviour. They force something other than private conversation. Or maybe a better example is putting hand sanitiser at eye level. It makes people use it heaps more. So you’ve got to put the structure in for people to behave how you want them to behave.
And then the last thing you can do is just skip all of that and concentrate on what people believe. Because people really only pay attention to what they believe is important. This comes out of work that I used to do before coming to Sandhurst as a brain scientist. Attention is fundamentally belief-shaped. There’s a lot of evidence that we just don’t even notice things—and it’s not that we’re ignoring them, it’s that we don’t even perceive them if we don’t believe they’re important, if we’re not expecting them to be important.
There’s a fantastic example I’ll link in the show notes. It’s a game where you have to watch basketball players play basketball and count the number of times they pass the ball. I don’t want to spoil it, but go watch it and you’ll see what I mean. So you can concentrate on their beliefs and help them pay attention to the things that you care about.
A good example of this is checklist culture in aviation, because pilots really believe that checklists save lives. So they use them even when they’re feeling particularly cocky or even when they’re tired, because they believe in it as an enterprise. Alternatively, it’s not going to have any effect having “customer first” values if the only messaging your salespeople get is that numbers are the priority. So you’ve got to help people understand and believe that the virtues on the table actually matter.
Design not so much the virtues but the context in which those virtues sit. Choose them. Choose your values. But don’t spend too long waiting for people to adopt them. You have to design the context to help people along. Otherwise your virtues, your values, they’re hardly going to matter.
I’ll leave it at that.
¶Transcript
[00:00] Welcome to the btrmt
[00:12] Lectures. My name is
[00:13] Dr Dorian Minors, and if there is one thing I’ve learned as a brain scientist, it’s that there is no instruction manual for this device in our
[00:20] head. But there are patterns, patterns of thought, patterns of feeling, patterns of action, because that’s what brains
[00:27] do. So let me teach you about
[00:29] them. One pattern, one podcast, and you see if it works for
[00:32] you. Now, I want to play a little bit
[00:34] today. I teach ethics, or at least the behavioural science of ethics, here at the Royal Military Academy,
[00:41] Sandhurst. And it can get a little bit tricky because, you know, a study of ethics should tell you how to be, you know, what you should do, what you ought to do, what’s
[00:51] good. But often it seems like it raises more questions than it answers, particularly at
[00:56] work. You know, if you consider that the people I teach, their job involves going out and trying to work out who is a lawful target, you know, from shooters to women and
[01:08] children. And you have an ethical framework that
[01:11] helps. They have the law of armed conflict, which helps define what a lawful target
[01:15] is. But it’s not going to be particularly encouraging when you get the go ahead to engage a child
[01:20] soldier.
[01:21] Right? This is precisely how moral injuries happen, because this kind of ethical dilemma raises more questions than it
[01:30] answers. Now, that is more of a work conversation, and we consider things very seriously
[01:36] there. But, you know,
[01:37] this. This isn’t
[01:38] Sandhurst. This is just Dorian’s little
[01:40] podcast. And like I said, I want to play a little bit
[01:43] today. So let’s move away from that heavy talk and concentrate on something that I’ve noticed in my time teaching the behavioural science of ethics
[01:54] here. And that’s that there is this one ethical framework I’ve noticed that everyone loves, either implicitly or
[02:00] explicitly. And once I tell you about it, I think you’re going to see it
[02:03] everywhere. And the thing that I like the most about it is it’s almost as useless as it is ubiquitous as it’s ordinarily deployed in the
[02:14] context. And what I’m talking about is a concept of
[02:18] values. Now, I noticed this when an old colleague of mine called me up for
[02:24] help. So his startup was big enough now to start thinking about what their company values
[02:29] were. And as I thought about it, I realised that company values actually are something that all places where people collect seriously to do things, from institutions and organisations to sporting clubs or, you know, even the house rules in a D and D game, anytime people come together to set out what kind of person people should be trying to be in a Group, they’re establishing
[02:59] values. That’s the project they’re engaged
[03:01] in. And I’m going to give you a few examples of what that looks
[03:04] like. But the point is, across the board it seems like people really love values and they instinctively try and inculcate them in their
[03:11] groups. But repeatedly, it seems like people really struggle to implement
[03:16] them. They want them to work, but values don’t seem to
[03:20] work. And there’s not a lot of easy answers as to
[03:23] why. And most people don’t have the kind of cash my colleague does to get someone like me to help them fix the
[03:28] problem. But interestingly, teaching ethics, I found out that ethics does have something to say and it, and it’s pretty low hanging
[03:36] fruit. So let me give you a few examples of values to really ground it, show you how they sort of fail, show you what ethics says, and then show you how, if you care how people coming together in a group behave, how you can sort of address that
[03:52] problem. So let me start by giving you a couple of
[03:55] examples. Now, the British army has a very good set of values, as you might imagine from an organisation like
[04:01] that. So the British army claims courage, discipline, respect for others, integrity, loyalty and selfless commitment as their
[04:09] values. The Australian army has an identical set almost, except that we add in
[04:14] excellence. You know, you can take that as you will, but this is sort of a standard
[04:18] case. You won’t find that organisations stray too far from a set of values like
[04:23] this. My colleague will end up doing something like this
[04:26] probably. If it’s a particularly sort of snappy new kind of organisation who notices that organisational values end up mostly just being decoration, they might make them into
[04:37] verbs. So instead of courage and discipline, you’d end up with be courageous and be
[04:43] disciplined. You know, this sort of
[04:45] thing. And the hope, if you’re the kind of leader who reads the Harvard Business Review or whatever, is that by making them doing words, you know, you’re upgrading this historical project of value making by making them easier for your people to
[04:59] do. They’re already doing
[05:00] words. An interesting counterpoint is
[05:04] Tesco. And I like Tesco because it was once owned by
[05:07] Bermudians. So as a Bermudian myself, I have some sort of loyalty
[05:10] there. But it’s a grocery store here in the UK and it uses values that sound more like
[05:17] this. So no one tries harder for
[05:19] customers. That’s one
[05:20] value. Or we treat people how they want to be treated, or every little help makes a big difference, you know, and this is sort of starting to evidence what I mean about how everybody wants values, but everybody also worries that nobody’s gonna actually do
[05:35] them. Tesco’s basically gone for smart goals here,
[05:38] right? Look how specific these things
[05:39] are. When you are doing values in your non organisational groups, your rowing club, for example, or your chess group, you’re gonna go more for the Tesco kind of
[05:49] thing. You’re probably gonna go for something that’s closer to rules than values, if we’re being
[05:53] honest. But the intent is the
[05:54] same. You know, the house rules in a D and D group is going to include stuff like don’t be a rules lawyer or be a turn
[06:01] taker. You’re talking about how people should
[06:05] be. You’re talking about
[06:05] values. People love
[06:08] values. They put them
[06:09] everywhere. They collect in
[06:11] groups. So the question becomes, if we have this instinct towards values, why don’t they
[06:16] work? And that is what I want to talk about
[06:20] today. Now, values are really just virtue ethics in
[06:30] disguise. I hinted at this before, but let me tell you what that
[06:32] means. So ethics is one of the main branches of
[06:35] philosophy. I have sort of an ethics primer that I’ll link to in my articles online that explains things a bit more
[06:43] substantively. But essentially it’s the philosophy of how to be
[06:46] good. We all want to be good, but what even is
[06:49] good? What does good
[06:50] mean? And as such, how should we go about things to achieve
[06:56] that? Goodness and virtue ethics is a particular approach to these
[07:00] questions. And I think that virtue ethics actually make the most sense, at least to me, when I explain the other kinds first, to help you understand what kind of problem they’re trying to
[07:10] solve. So I think what you’d find is most people intuitively are
[07:15] consequentialists. We like to judge whether we’re being good or not by the consequences of our
[07:21] actions. You know, you go about the place and you make decisions by deciding, is this going to hurt
[07:26] somebody? Is this going to help
[07:27] them? You know, how do I do the least harm and the most
[07:30] good? Focus on the
[07:32] consequences. I think very close to our
[07:35] hearts. We hold this sort of consequential
[07:37] calculus. But the problem with consequentialism is that consequences aren’t all
[07:42] commensurate. They don’t all have the same sort of cash value, so to
[07:45] speak. So if you, you know, we’ll take one of the examples that you use in the lecture
[07:50] room. You’ll get this in sort of an Ethics 101
[07:53] course. Let’s say you got a surgeon, and this surgeon just straight up murdered somebody, harvested their organs and used those organs to Save five other
[08:03] people. Numerically, this is a pretty good deal,
[08:06] right? One dead person, five people who would have died, who are now
[08:10] alive. But we’re not really interested in the consequential calculus here,
[08:15] right? There’s something a little off about comparing the one to the five in that
[08:20] circumstance. For a more realistic example, I like to think about the kind of ethical altruism groups that pop up all over university
[08:30] campuses. So you’ll hear people talk about this a lot in groups of people who are really trying to be very rational and reasonable about their ethical
[08:40] approaches. There’s this sort of way of thinking, it’s called long termism, that says, you know, if we just increase overall economic wealth, everybody’s going to be better
[08:48] off. You know, sort of like if you bring the average up, then you’re going to bring everybody
[08:53] up. You just concentrate on raising GDP then, just like the standards of living are better now than they were in the Middle Ages, in the future, everybody’s going to be much better
[09:02] off. And the problem with this is there seems something very weird about preferring future hypothetical people with a really good economic profile over the suffering of real current people that we have to make policy decisions that hurt now to achieve that future
[09:18] state. So this is
[09:20] consequentialism. I think it’s intuitive right up until it’s
[09:24] not. And then it’s really hard to figure out what consequences actually
[09:28] matter. So you might not just rely on consequences, and you almost certainly
[09:33] don’t. The next one that people will bring up in an Ethics 101 class is something that we could call principle based ethics or rules based
[09:42] ethics. So, you know, take our surgeon
[09:44] before. Killing people might save lives one for
[09:47] five. But, you know, on the principle of things, it’s not really that sweet to kill people, you know, so maybe we should just not kill people as a
[09:56] rule. And even in cases where killing somebody could save five people, we just treat it as a blanket rule that killing is not
[10:03] appropriate. This is a principle based ethical approach to
[10:08] behaviour. And so another example of this, and I hinted to it before, is
[10:13] laws. We don’t follow laws because they’re always
[10:17] perfect. We don’t follow laws because they’re always
[10:18] right. We follow them because on principle we believe in lawful societies, you don’t speed not because you think speeding on this highway surrounded by nobody is going to harm anybody, but because you believe in the principle of the
[10:34] law. And you should probably just try and do the right
[10:36] thing. You believe in the
[10:37] rules. So this is called deontology, and it’s another approach to
[10:41] ethics. It’s one of the sort of main three, along with virtue ethics that people will teach you in a basic Ethics 101
[10:49] course. There are actually a lot of others and they all try and fill the gaps where the others
[10:55] fail. So we’ve already talked about how consequences
[10:58] fail. Not all consequences have the same value, so it’s hard to measure them against one another, particularly in edge cases and littered throughout my example of principle based ethics, we had how laws may or may not be right, but we follow them on principle, so we know that these things fall
[11:17] down. So people have come up with other approaches to try and fill the
[11:20] gaps. And the one that I like to use here at Sandhurst is called care
[11:24] ethics. And I like to use care ethics because it’s a feminist
[11:26] ethics. And I get a sort of satisfaction teaching feminist ethics at a institution like
[11:33] Sandhurst. But I think it’s very poignant because a care ethics talks about the ethics of
[11:41] care. I’ll give you an
[11:42] example. If your grandma’s sick, you’re not going to be focused on speeding laws,
[11:48] right? You’re not going to be focused on the consequences of ditching your dinner date to go and be with your
[11:52] grandma. You’re going to drive as fast as you can to help
[11:55] her. And that’s a care ethic because here you are prioritising your loved ones over everything
[12:02] else. And that is a type of
[12:05] ethic. It’s a value that you hold
[12:06] close. So that’s another
[12:09] example. Virtue ethics, back to our main topic, are an attempt to sort of shift the question away from what these other frameworks are trying to
[12:20] answer. So virtue ethics aren’t asking what should I
[12:24] do? You know, what are the consequences of
[12:26] this? What does the law say about what I should be
[12:29] doing? You know, do I love this person enough to take these
[12:32] actions? It’s shifting the question from focusing on the moment to moment decisions to the kind of person you should be trying to
[12:40] be. So the idea here is that understanding what the right principles might be, or what the extent of the consequences of our actions might be, that’s hard, and we’re not likely to get it right all the
[12:50] time. So maybe we should try and focus on being good people
[12:53] instead. We like good
[12:54] people. We don’t mind when good people make mistakes because we know that their hearts are in the right place and we think they’re much more likely to do good than
[13:02] bad. So maybe it’s better to try and be one of these good people rather than try and figure out what each of our actions should be,
[13:11] right? So we’re not asking what we should do, like principle-based ethics or consequentialist ethics or even care ethics
[13:18] do. We’re asking what kind of person should I
[13:20] be? And then hopefully you, as a logical consequence, are just going to do more good things,
[13:25] right? More or
[13:26] less. That’s virtue
[13:27] ethics. I think you get
[13:28] it. And hopefully if you get it, you’ve made the leap now from virtues towards
[13:34] values. Because like virtues, organisational values are often the desirable qualities of
[13:39] people. They’re aspirational about character development, just like virtues
[13:43] are. They assume that these things can be cultivated in the organisational culture and they’re explicitly about what good means, you know, and they’re presented in this manner because they’re difficult to
[13:55] codify. You can have codes of conduct that tell people what they should be
[13:59] doing.
[13:59] Rules. You can have rewards and punishments that help people pay attention to the
[14:04] consequences. But it’s very difficult to account for every situation that’s going to make you act in the best interests of the
[14:12] customer. So maybe instead you should concentrate on trying to inculcate that as a
[14:16] value. The kind of people our employees should be. I keep going back to D&D because I’m looking forward to an upcoming
[14:25] session. But also the kind of people our D&D players around the table should be trying to be. Values are virtues, at least in this specific
[14:34] form. So there’s a little primer on ethics, with virtue ethics trying to fill the
[14:41] gaps. In fact, trying to approach the whole problem of ethics from another
[14:45] angle. And this is the thing that we are so intuitively drawn to when we try and collect people together to do
[14:53] things. We want to put values in that help people understand what kind of people they should be in
[14:59] groups. And this is a huge problem because there are two massive issues with virtue
[15:06] ethics. All right, so the first problem with virtue ethics is what’s known as the indeterminacy
[15:16] problem. Even back when Aristotle was formulating virtue ethics as we know them today, he pointed out that virtues sit between two
[15:24] extremes. What is
[15:26] courage? It’s kind of difficult to say what courage is, but we certainly recognise cowardice and we also recognise
[15:33] recklessness. So it’s not either of
[15:35] those. It’s somewhere in between the two
[15:37] things. Similarly, you know, with
[15:39] discipline. Well, discipline isn’t chaos,
[15:43] right? And it’s also not brittle
[15:45] rigidity. That’s not what we mean by
[15:46] discipline. It’s somewhere in the middle of these two things. And if you get it, you might have already gotten the problem, which is: what is the point at which cowardice gets an upgrade into
[15:59] courage? You know, what’s the line over which some act of bravery stops being brave and courageous and starts becoming
[16:06] negligent? It’s not very
[16:08] clear. There was this recent, and I mean recent as in the last 50 years or so, very influential virtue ethicist, Alasdair MacIntyre, who talks about this specifically in
[16:17] organisations. I’ll link to the book in the show notes. It’s interesting because, for MacIntyre, virtues aren’t just different in degree, they’re also different in
[16:26] kind. So the problem I just identified is this sort of
[16:29] continuum. Courage is somewhere between cowardice and
[16:34] recklessness. MacIntyre is saying it’s even worse than that, because the whole continuum differs depending on what it is that you’re
[16:40] doing. They have to be embedded in practices to make
[16:43] sense. So, for example, what courage means for a doctor, you know, versus recklessness and cowardice has almost nothing to do with what courage means for a soldier or for a
[16:55] teacher. You know, when the British army says it wants officers to be courageous, when I’m trying to teach them what that means, do we mean the courage of a frontline soldier, you know, or do we mean the courage of a logistics
[17:08] officer? Or do we mean the courage of the officers that populate the recruitment
[17:12] team? Courage in these circumstances isn’t the same
[17:16] thing. And all of that is within even the same organisation,
[17:19] right? You have different ideas and different standards of
[17:22] excellence. You have these different standards of virtue based on the practices you’re engaged
[17:26] in. So indeterminacy, a virtue is something that sits between two extremes, but those extremes and that middle differ depending on what it is that you’re
[17:37] doing. So that’s the first problem, and it’s not the worst
[17:41] problem. I think the next problem is the worst problem, which is called the situationist critique of virtue
[17:47] ethics. And this comes out of a broader area of behavioural science called the person situation
[17:54] debate. And I’ll put links to the Wikipedia in the show
[17:56] notes. I think this is one of the great Wikipedia
[18:00] reads. Very interesting because essentially there are these people who pay particular attention to the fact that virtue ethics are all about
[18:06] character. So what we want to do is we want to embody virtues or values as traits of
[18:14] ourselves. And then they also notice that there’s this sort of troubling lack of evidence that traits are a
[18:21] thing. The vast majority of empirical evidence points to the fact that there’s very little, if anything, that is stable in the
[18:29] human. And rather what seems to overwhelmingly drive human behaviour is the
[18:35] situation. Now, that’s not to say that there are no
[18:38] traits. You know, this could be something that’s an artefact of experimental
[18:41] design. You know, how do you design a test to demonstrate how people behave under different circumstances without changing the
[18:50] circumstance? It’s very similar to the nature versus nurture
[18:54] question. I’ll link to
[18:55] that. The argument’s basically the
[18:57] same. These things are so tightly intertwined that it’s very hard to tease them
[19:01] apart. And we also know that there is some
[19:03] stability. So we know that personality is a kind of a stable
[19:08] thing. And IQ is a pretty stable thing, you know, not entirely
[19:13] stable. They can change, but they are stable enough that we like to measure
[19:18] them.
[19:18] Right. They wouldn’t be interesting if they weren’t at least a little bit
[19:21] stable. So I’ll link to some articles that explain
[19:25] that. So we’re not out of options for stable human behavioural attributes, but outside of these sort of few things, there isn’t much
[19:33] else. And as a consequence, it’s kind of hard to imagine how something like moral character might be found nested within these stable kinds of constructs, like personality or like
[19:45] IQ. And then, contrasted against that, there’s this handful of experiments, starting in the 60s and 70s but extending until now, that sort of demonstrate that the situation can be made to annihilate the individual capacity for
[20:02] virtue. So I’m thinking of Milgram’s electroshock experiments or the Stanford Prison
[20:07] experiment. These are examples of sort of catastrophic ethical leadership failings in which the situation led average people astray. In the Milgram electroshock experiments, it led average people to shock somebody, ostensibly to death, in the name of science. And in the Stanford Prison Experiment, it led undergraduate students to sort of brutalise each other while simulating a
[20:32] prison. And while most people, and I complain about this elsewhere, get the basic facts of these experiments wrong, the details they simplify away actually make very clear just how influential the situation can be if we try really
[20:46] hard. So: a lack of stable traits in humans, measured against evidence that the situation really does overwhelmingly drive human
[20:57] behaviour. So while everybody is asking, you know, what virtues comprise the best moral character, what values should we be inculcating in our rowing club, the situationists are asking, is there even such a thing as
[21:10] character?
[21:12] It’s not very heartening, and it should make you very
[21:17] worried if you’re the kind of person who’s trying to think about what values you want in your
[21:21] organisation. Thankfully, I wouldn’t be doing this little lecture if I didn’t have answers for
[21:26] you. So I will leave you with some answers before I let you
[21:31] go. So we haven’t painted a very flattering picture of the project of values or virtues or virtue
[21:49] ethics. I’ll
[21:50] summarise. We’ve known from the start that virtues themselves are a little
[21:54] vague. You know, they sit somewhere between two
[21:56] extremes. But it’s not just
[21:58] that. They’re also located differently on that spectrum depending on what practice you’re engaging
[22:04] in. So it’s not this continuum from cowardice to
[22:08] recklessness. It’s this sort of moral landscape with many peaks and valleys where courage means different things depending on what you’re
[22:15] doing. And it’s very sort of easy to get lost in this hilly
[22:19] terrain. And it’s worse than that because even if you manage to locate the peaks that you care about, people aren’t going to do anything about
[22:27] it. They’re not reliably going to occupy those
[22:31] peaks.
[22:31] Right? The situation overwhelmingly drives their behaviour, no matter how committed they are to embodying the virtues you want them to
[22:38] embody. That’s what all the empirical evidence seems to
[22:42] suggest. Now, like I said, this entire lecture was essentially prompted by an old colleague who called me asking how he could help his executive team with their new values
[22:54] initiative. And I spent about seven minutes describing what is looking like it’s going to be maybe 15 or 20 minutes for you, before realising that he, possibly like you, didn’t really care about the
[23:12] background. Nobody ever does. You know, it’s leadership consulting, after
[23:15] all. It’s not brain
[23:16] science. And you are probably listening to this on a drive to
[23:20] work. Instead, you want sexy-sounding solutions that can help you in your group
[23:26] enterprises. So I’ll give you the sexy-sounding solution, which is: if the situation is all that matters, if the environmental context is all there is, then just design
[23:35] that. The situation, the
[23:37] context. This isn’t a new
[23:39] idea. You know, MacIntyre, who I was telling you about before, makes it clear that institutions need to create structural opportunities for
[23:46] action. We have other ethicists
[23:48] too. John Doris talks about empirically informed, context-sensitive approaches to ethical
[23:54] behaviour. Maria Merritt talks to something similar, but she speaks more to how having an attitude of humility can help you identify that context
[24:03] dependency. I don’t actually think that you need to be so highfalutin as all
[24:08] that. You could read those people and you’d get a lot of good ideas, but you could also just be very straightforward about it. You know, come up with
[24:16] values. But if you want people to actually live them, first of all, you have to solve the indeterminacy
[24:23] problem. You have to come up with values that you can then articulate, to show people what they
[24:28] mean. You know, it’s not enough to tell people to be
[24:31] courageous. It needs to be clear to them what that
[24:34] means. So Google, for example, famously dropped ‘don’t be evil’ from their
[24:38] manifesto. And that seemed like a real problem, but actually having it wasn’t useful at
[24:44] all. You know, if you’re a software engineer working at Google, you know that everybody’s sort of upset about all this attention-stealing algorithmic behaviour that’s going on, but it’s not really clear why it’s bad, other than that people don’t like
[24:59] it. You know, it doesn’t correspond to any particularly obvious trends in negative outcomes for
[25:05] people. I’ll link to an article that talks to that in more
[25:07] detail. But it’s really not clear what this is doing to us in terms of mental health outside of a couple of
[25:14] trends. So it seems like stealing your attention is
[25:18] evil. But equally, nobody likes the non-algorithmic alternatives,
[25:22] right? Nobody liked RSS and that was around for as long as Google has been
[25:27] around. I bet most of you listening right now don’t even know what RSS
[25:29] is. So what does it mean for a software engineer working for Google to not be
[25:34] evil? It doesn’t make any sense to include it if it’s not clear what that
[25:37] means. In contrast, surgical teams, for example, have something very
[25:42] clear. They call them, like, speak-up
[25:44] protocols. So anybody in the surgical theatre can call a halt, often with an exact
[25:50] phrase. And it’s often operationalised at certain
[25:53] points. You know, you’re supposed to call it at a certain point in the operation if you notice something’s
[25:58] wrong. So here courage is actually operationalised in
[26:01] practice. You have to show people what it
[26:03] means. You can also do that by hijacking the human tendency for
[26:07] conformity. And this is something that I teach at
[26:09] Sandhurst. People don’t like the idea of conformity, but it’s actually very useful because when we don’t know how to behave in groups, we conform to solve that
[26:18] problem. So just have influential people model those
[26:21] virtues. There’s this sort of quote from an Aussie general, actually, that you hear around the halls of the military academy here at Sandhurst and also back home at
[26:31] Duntroon. The standard you walk past is the standard you
[26:34] accept. And it sounds kind of trite, but it’s
[26:37] true. Because if senior people cut corners, if the leaders of a group cut corners, everybody else is going to as
[26:42] well. So figure out how you want people to behave and show people how to do
[26:47] it. And they’ll conform to you if they don’t know how they’re supposed to
[26:50] behave. That’s one thing you can do, but the other thing you can do is identify what situational factors need to be present to motivate people and design the environment to encourage
[27:02] this. Now, this is classically called choice architecture, and I’m kind of critical about it elsewhere, so I’ll link to an article on
[27:11] that. But it does work in certain
[27:12] circumstances. So, for example, if you put a cheap bottle of wine and an expensive bottle of wine on your menu either side of the bottle you want people to buy, then they’re going to buy that one, because most people don’t take any pride in being needlessly cheap, and they don’t take any pride in being stupidly reckless with their
[27:33] money. So they’re motivated to buy the medium-priced
[27:36] bottle. Now there are heaps of examples of this, and heaps of models of motivation that you can use to help you work out what will motivate people to behave a certain way in your
[27:49] group. You know, open-plan offices are a good
[27:52] example. Well, a good example executed poorly, because I think there is evidence to suggest that they’re not really very good for productivity or people’s well-being, but they certainly force a certain kind of behaviour,
[28:05] right? They force something other than private
[28:07] conversation. Or maybe a better example is putting hand sanitiser at eye
[28:12] level. It makes people use it heaps
[28:14] more. So you got to put the structure in for people to behave how you want them to
[28:17] behave. And then the last thing you can do is just skip all of that and concentrate on what people
[28:22] believe. Because people really only pay attention to what they believe is
[28:26] important. And this comes out of work that I used to do before coming to Sandhurst as a brain
[28:32] scientist. Attention is fundamentally
[28:36] belief-shaped. There’s a lot of evidence that we just don’t even notice
[28:40] things. And it’s not that we’re ignoring them, it’s that we don’t even perceive them if we don’t believe they’re important, if we’re not expecting them to be
[28:48] important. So there’s a fantastic
[28:51] example. I’ll link in the show
[28:53] notes. It’s a game where you have to watch basketball players play basketball and count the number of times they pass the ball. I don’t want to spoil it, but go watch it and you’ll see what I mean. So you can concentrate on people’s beliefs and help them pay attention to the things that you care
[29:14] about.
[29:14] Right? A good example of this is checklist culture in aviation because pilots really believe that checklists save
[29:21] lives. So they use them even when they’re feeling particularly cocky or even when they’re tired because they believe in it as an
[29:28] enterprise. Alternatively, having customer-first values is not going to have any effect if the only messaging your salespeople get is that numbers are the
[29:38] priority. So you got to help people understand and believe that the virtues on the table actually
[29:45] matter. Design not so much the virtues but the context in which those virtues
[29:52] sit. Choose
[29:53] them. Choose your
[29:54] values. But don’t spend too long waiting for people to adopt
[29:56] them. You have to design the context to help people
[29:59] along. Otherwise your virtues, your values, they’re hardly going to
[30:04] matter. I’ll leave it at
[30:05] that.