Leah Libresco has responded to my recent post on objective morality. I’ll pull out some of her comments that need responses.
What I don’t understand is why Bob sees his conscience as worth listening to.
Leah imagines that I have a choice. My mind is programmed to give much weight to the moral evaluation that comes from the conscience. It’s not the only input—for example, I might not aid that old person who dropped a package if I’m carrying something fragile or important myself and can’t risk dropping it—but it’s a major input.
Leah goes on to wonder about mechanical brain implants or drugs that would override or mimic the conscience. Sure, that’s increasingly possible.
Here’s the parallel that comes to mind for me. Suppose I’m communicating with Leah using public key cryptography. I get a message that’s signed with her private key, and the signature checks out against her public key. What else can I do but assume that it’s really from her? Once I hear of a security breach (say, a hacker has stolen her private key and is impersonating her), I will no longer trust messages signed like this. But until then, I have no choice but to believe that it’s from Leah.
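For the technically curious, here’s a rough sketch of that sign-and-verify exchange in Python, assuming the third-party cryptography package; the key names and message below are just illustrative:

```python
# A minimal sketch, assuming the third-party "cryptography" package
# (pip install cryptography). Names and message are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Leah generates a key pair and publishes only the public half.
leah_private = ed25519.Ed25519PrivateKey.generate()
leah_public = leah_private.public_key()

# Leah signs a message with her private (secret) key.
message = b"Hi Bob, it's really me."
signature = leah_private.sign(message)

# Bob checks the signature against Leah's public key. If it verifies,
# his only reasonable working assumption is that Leah sent it, at least
# until he hears that her private key has been compromised.
try:
    leah_public.verify(signature, message)
    print("Signature checks out: assume it's from Leah.")
except InvalidSignature:
    print("Signature fails: don't trust it.")
```

Note what the sketch shows: verification can tell Bob that the signature matches Leah’s public key, but it can’t tell him whether the key itself has been stolen. He trusts it until he has a reason not to.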
This brain-implant thought experiment would work the same way. “What’s that? My conscience says that I ought to hit that cute little baby? All righty!” If it looks and quacks like a conscience, I’ll assume that it’s a conscience. As you can imagine, I can’t see any way to verify what my conscience says against an external, objectively true answer. (But of course this comparison would be ridiculous. If I had access to an infallible source, I’d use that and not bother with my imperfect conscience.)
Maybe my view of how the mind works is more machine-like or more rigid than Leah’s. Am I missing how the brain is configured?
Leah imagines another experiment.
“Hey, Bob,” I say. “I’ve got a pretty nifty computer program here. It can give you advice about what to do when you’re not sure about a moral problem. In long-duration clinical trials conducted here in the present, people who did what the black box told them whenever they asked it a question were more likely to have children than people who ignored the black box’s advice, people who weren’t given a copy of my black box, and people who were just given a magic eight ball hidden in a black box. (I had a devil of a time getting an IRB to approve all those control groups, but I wanted to be thorough.) Would you like a black box of your own?”
I’m not sure why Bob should turn me down.
Meh. Having more children doesn’t have much appeal. My DNA may have more interest in your offer, but I don’t care what it thinks. What shapes DNA and what motivates the mind are different things.
The box I’m offering him is optimized according to pretty similar criteria as the conscience he trusts because it was shaped by evolution.
My conscience has my mind on a pretty short leash—it’s just how the brain is wired. My mind listens to my conscience but doesn’t much care where that conscience came from. And improving fertility has little appeal.
Leah responds to one of my points by referencing some of the words I used.
“Rise above” presupposes some dimension of height. “Hone” implies some form that we’re getting closer to by paring away extraneous material. If you have a sense that more is possible, then you must have some expectation that an external standard exists, and that you have some kind of access to it (even if it’s as limited as our access to physical laws, which we have to painstakingly deduce).
Hmm—am I appealing to an external standard? Let’s think about this.
Morality obviously changes—slavery was moral (that is, acceptance was widespread) and now it’s not, drinking alcohol was immoral (recall Prohibition) and now it’s not, and so on. But Leah asks whether I see not just change but improvement. Sure, morality changes, but can we claim that it’s improving?
Society always sees the change as improvement—otherwise, why would it make the change?—but by what standard do we claim it’s an improvement? We look back with mild horror at what passed for acceptable morality in past societies, but why think that what we see today is more than simply change?
Here’s another parallel. We’ve all seen jiggle puzzles (also called dexterity puzzles) like the one in the photo above. It’s a handheld box with a picture and a few small ball bearings. The picture has tiny wells that can each hold one ball bearing, and the goal is to carefully move the box to put certain ball bearings (they sometimes have different colors) into the correct wells.
Consider a popular model of morality that parallels a jiggle puzzle. Once we’ve correctly figured out a moral issue (say: concluding that slavery is wrong), we’ve placed that ball bearing in the correct well. That problem is resolved once and for all, the ball bearing isn’t going anywhere, and we can move on to worry about placing those other ball bearings.
But why imagine that this is a valid analogy? Why imagine that we were objectively wrong on slavery before and we’re right about it now? Sure, we think we’ve got it figured out … but societies in centuries past thought they had it figured out too, and they came to very different conclusions. “Morality” is a moving target.
My ongoing challenge to those who imagine objective morality: resolve an as-yet-unresolved moral conundrum (abortion, stem cell research, etc.). They can’t do it, and yet they hold on to their claim. One of us is missing something. Am I phrasing the challenge correctly?
The definition we’re using for objective morality is “moral values that are valid and binding whether anybody believes in them or not.” If these values exist and are reliably accessible to almost all adults, we should all be singing from the same songbook. Since we aren’t, I think the problem is that we’re not using the same definition of “objective.”
Any thoughts?
The true measure of a man
is how he treats someone
who can do him absolutely no good.
— Samuel Johnson