We’re responding to an imaginary dialogue that explores Soft Theism, which is basically Christianity without the unpleasant baggage. Can jettisoning Christianity’s crazy bits make it acceptable? Read part 1 here.
This is post 14 in this series, and the question is whether humans are equivalent to robots or whether a spiritual explanation is necessary.
Humans as moist robots
Atheist: Just because we can be reduced to molecules in motion, that doesn’t take anything away from our experience as human beings.
Soft Theist: I think it does! Philosophically, it totally sabotages the meaning of life if we are, in essence . . . just molecules.
Cross Examined Blog: Meaning in life for humans is what humans define it as. If you’re alluding to an objective meaning in life—that is, meaning grounded in something outside humans—then show that such a thing exists. (I respond to William Lane Craig’s confusion about this here.)
The brain is an organic computer. Nothing more. It is matter that processes data. We have the impressions we have because of the processes going on inside of our brains. There’s no need for there to be anything more than that in terms of explaining what we experience, and what our brains are capable of. As to the issue of matter having intelligence . . . yes, it does. I don’t see what the problem is.
The problem is . . . that if human beings are organic computers . . . “nothing more” than complex robots, then we are expendable, and replaceable. But human beings are not expendable. When a father loses a child, he doesn’t go to Best Buy to get a new one!
I agree that humans aren’t expendable, but I’d say that other life forms on earth aren’t expendable, either. And this belief that humans aren’t expendable, though deeply felt, is a shared feeling, not an objective fact. There’s no law in the universe or book in God’s library that says this. There’s no external grounding for this feeling or any other moral principles. Human worth is what we define it as, nothing more. But that’s enough.
Humans as computers
At some point we will create computers that have the same sort of intelligence that we have and they will also be . . . simply matter.
Ray Kurzweil wrote about this in The Age of Spiritual Machines (1999). He predicted a time in the near future when technology will be able to do two things. First, we’ll be able to scan a living brain with sufficient resolution to know the state and interconnections of every neuron. And second, we’ll have a brain simulation that can make sense of this data and continue the simulation in real time. From the standpoint of the simulation, “people” would suddenly wake up in the environment created by the computer. That environment might be like the one they’d been experiencing in their carbon-based bodies or be very different, like a permanent virtual reality simulation. These wouldn’t be clumsy mental caricatures but perfect copies. We see this in the TV series Upload (2020).
In writing my response to this question about humans as robots, I’ve used a few references to TV and film. Popular media is a great place to find speculative ideas about the future. If these ideas did not ring true for us, they wouldn’t be plausible in fiction.
Soft Theism denies that humans are computers, but a corollary of this claim, which I think Soft Theism must also defend, is that computers could never duplicate or be equivalent to humans. I found Kurzweil far too optimistic about when computers will be able to mimic authentic human intelligence, but that’s not the point here. Soft Theism must claim that re-creating (or creating) authentic humans as digital simulations would be impossible, not just twenty years in our future but also twenty million. That sounds unlikely.
Oh man! I think you are way wrong. There is a difference between humans and robots. We are not moist robots. We have feelings; robots do not have feelings! We give birth; robots do not give birth!
Oh, I can see robots replicating themselves someday.
Giving birth is a red herring. If we agree that mammals are conscious, the platypus isn’t excluded just because it comes from eggs.
Real emotions?
Aaaahhh, . . . yeaaah but [these robots] still will not have REAL emotions. They are not really alive.
Let’s judge whether these robots have real emotions based on whether they evoke emotions in the experts on human emotions: humans.
Consider the 2008 film WALL-E, a “computer-animated science fiction romantic comedy” (Wikipedia). The primary thread through the film is a romance between two robots. Ask these expert judges if the emotions are believable and if they trigger an authentic emotional response.
You . . . “believe” that humans are more than organic computers, but, you have no evidence.
I think you are severely stunted, philosophically. You’re so conditioned to the mechanistic mindset of science, you cannot think outside of that box.
Am I also stunted sartorially because I can’t appreciate the emperor’s new finery? Let’s first make sure that what you’re pointing to actually exists, whether it be diaphanous clothes or the uniqueness of human emotions.
Are humans more than molecules?
There is nothing irrational about facing up to the fact that we ARE simply a collection of molecules.
Well, yes, of course, physically that’s what we are, but I think it IS irrational to think that is ALL we are, with no spiritual component. Robots have no hearts, no intrinsic identity. . . . If, a thousand years from now, I had an amazing dog robot, I still couldn’t love him the way I would love an imperfect, but REAL . . . dog.
Yes, it would be irrational to overlook the spiritual side of reality if reality actually had a spiritual side. You must show that.
Your reference to a dog brought to mind the mechanical dog in the film Sleeper (1973) that could do little more than wag its tail and say, “Woof, woof. Hello, I’m Rags.” We’ve come a long way with AI (artificial intelligence) in the fifty years since then. I think your lovable AI-driven dog will exist in another fifty years, but you’re saying it won’t in a thousand years. I strongly doubt that.
Robots as sympathetic characters
I think you overestimate the complexity of human emotions or what our real triggers are. Why was the robot in WALL-E a sympathetic character? He had camera-ish eyes and scooper hands, and he moved with tank tracks instead of legs. But those eyes were big, like a baby’s, he made cute sounds, and he paused in admiration or wonder like we would. Would no one call him lovable by the end of the film? Or even five minutes into the film?
R2-D2 in Star Wars was even less anthropomorphic. Though it was little more than a cylinder, I’m sure some people found it lovable. It could convey emotions like excitement or despair with a limited range of movements and whistling sounds.
People convinced by artificial intelligence: ELIZA and the Turing Test
AI has a long way to go, but at times it is already hard to distinguish from a human. ELIZA was a program created by Joseph Weizenbaum in 1966 that mimicked a Rogerian psychologist, the kind who mostly asks questions to explore the patient’s emotions. He said,
I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.
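To give a sense of how simple ELIZA’s trick was, here is a minimal sketch in the same spirit (my own toy rules, not Weizenbaum’s actual script): match a few keyword patterns and reflect the patient’s own words back as questions.

```python
import random
import re

# A few illustrative ELIZA-style rules (hypothetical, not Weizenbaum's
# original script). Each rule pairs a keyword pattern with question
# templates that reflect the user's own words back at them.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

# Swap first and second person so the reflection reads naturally.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    return " ".join(REFLECT.get(word, word) for word in fragment.split())

def respond(statement):
    text = statement.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*map(reflect, match.groups()))

print(respond("I feel anxious about my job"))
# e.g. "Why do you feel anxious about your job?"
```

A handful of rules like these, with no understanding behind them, was enough to draw Weizenbaum’s secretary into a private conversation.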
People today are more aware of what computers can do and would be more skeptical of an online therapist, but AI is now vastly better at holding a conversation and is improving quickly.
The Turing Test is another example of computers pretending to be people. There are many variations of the test, but the general idea is that a human judge converses through a keyboard and screen with two entities, one of which is a person and the other a computer program. The test is run with many different judges, and the program wins if the judges correctly identify the hidden entities no better than chance.
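To make that win condition concrete, here is a small sketch with made-up numbers (nothing from an actual test): it asks whether the judges’ accuracy is statistically distinguishable from coin-flipping.

```python
from math import comb

def binom_two_sided_p(correct, trials, p=0.5):
    """Probability of a judge-accuracy result at least as extreme as
    `correct` out of `trials`, if every verdict were a fair coin flip."""
    probs = [comb(trials, k) * p**k * (1 - p)**(trials - k)
             for k in range(trials + 1)]
    observed = probs[correct]
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# Hypothetical numbers, purely for illustration: 100 verdicts, 54 correct.
trials, correct = 100, 54
p_value = binom_two_sided_p(correct, trials)

# The program "wins" if we cannot reject chance-level guessing.
print(f"{correct}/{trials} correct, p = {p_value:.3f}")
print("program passes" if p_value > 0.05 else "judges beat chance")
```

With 54 of 100 verdicts correct, the judges are doing no better than a coin flip, and the program passes.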
The goal of a Turing-winning program isn’t to be as quick and accurate as possible but as human-like as possible. Today it’s fairly easy to give the program traits that a century ago would have seemed quintessentially human, such as quick math answers and deep knowledge of every area of human inquiry. The problem is that being too good would give the program away. Humans make mistakes, defend their bruised egos, make typos, and have other frailties.
If a program that reliably wins a Turing test doesn’t already exist, it can’t be far into the future. But to return to the topic of this post, would such a program think? Is it really intelligent? That’s debatable, but that’s not the point. The point is that it fools humans into thinking it’s human.
Killing a robot
Neither of us feels that killing a human being is the same thing as terminating a robot. The fact that we have that kind of intuitive feeling is evidence to me that there IS a critical distinction.
Suppose someone had to die, and you had to choose between killing your favorite fictional robot (assume it really existed) and your cranky 80-year-old neighbor whose only interaction with you has been to criticize where you put your trash cans. You’re saying that because your intuitions tell you the neighbor is human, you’d terminate your robot without a second thought?
Next time: a little more on whether human brains are equivalent to computers
The Christian says, “Jews and Muslims are wrong.”
The Muslim says, “Jews and Christians are wrong.”
The atheist says, “You’re all correct.”
— seen on the internet by commenter RichardSRussell
Image from Andy Kelly (free-use license)