Moral machines?

 
By now you may be used to asking Siri, Alexa or some other assistant on your phone a question and expecting a reasonable answer. The 1950s dream of British mathematician Alan Turing that computers might one day be powerful enough to fool people into thinking they were human is realised every time someone calls a phone tree and takes the voice on the other end for a person when it's actually a computer.

The programmers setting up artificial-intelligence virtual assistants such as Siri and human-sounding phone trees aren't necessarily trying to deceive consumers; they are simply trying to make a product that people will use, and so far they've succeeded pretty well. Considering the system as a whole, the AI component is still pretty low-level, and somewhere in the back rooms there are human beings keeping track of things. If anything gets out of hand, the backroom folks stand ready to intervene.

But what if it were computers all the way up? And what if those computers were, by some meaningful measure, smarter overall than humans? Would you be able to trust what they told you if you asked them a question?

This is no idle fantasy. Military experts have been thinking for years about the hazards of deploying fighting drones and robots able to make shoot-to-kill decisions autonomously, with no human being in the loop. Yes, somewhere in the shooter robot's past there was a programmer. But as AI systems become more sophisticated, and even the task of developing software gets automated, some people think we will see AI systems doing the things that whole human organisations do now: buying, selling, developing, inventing and, in short, behaving in most respects the way humans behave. The big worrisome question is: will these future superintelligent entities know right from wrong?

Nick Bostrom, an Oxford philosopher whose book Superintelligence carries jacket blurbs from Bill Gates and Elon Musk, is worried that they won't. And he is wise to worry. In contrast to what you might call logic-based intellectual power, in which computers already surpass humans, whatever it is that tells humans the difference between right and wrong is something that even we humans don't have a very good handle on yet. And if we don't understand how we can tell right from wrong, let alone do right and avoid wrong, how can we expect to build a computer or AI being that does any better?

In his book, Bostrom considers several ways this could be done. Perhaps we could speed up natural evolution in a supercomputer and let morality evolve the way it did in human beings? Bostrom drops that idea almost as soon as he raises it, because, as he puts it, “Nature might be a great experimentalist, but one who would never pass muster with an ethics review board—contravening the Helsinki Declaration and every norm of moral decency, left, right, and centre.” (The Helsinki Declaration, adopted in 1964, sets out principles for ethical medical research involving human subjects.)

But to go any further with this idea, we need to get philosophical for a moment. Unless Bostrom is a supernaturalist of some kind (a Christian, Jew or Muslim, say), he presumably thinks that humanity evolved on its own, without help or intervention, as a product of random processes and physical laws. And if the human brain is simply a “wet computer”, as most AI proponents seem to believe, one has to say it programmed itself, or, at most, that later generations have been programmed (educated) by earlier generations and by life experience. However you look at it, on that view there is no independent source of ideal rules or principles against which Bostrom or anyone else could compare the way life is today and say, “Hey, there’s something wrong here.”

And yet he does. Anybody with almost any kind of conscience can read the news or watch the people around them and see things going on that they know are wrong. But how do we know that? And, more to the point, why do we feel guilty when we do something wrong, even as young children?

To say that conscience is simply an instinct, like the way birds know how to build nests, seems inadequate somehow. Conscience involves human relationships and society. The experiment has never been tried intentionally (thank the Helsinki Declaration for that), but a baby given adequate water, nutrition and shelter, yet reared in total isolation from other human beings, typically dies. Something close to this has happened by accident in large emergency orphanages. We simply can’t survive without human contact, at least right after we’re born.

Dealing with other people also opens up the possibility of hurting them, and I think that is at least the practical form conscience takes. It asks, “If you do that terrible thing, what will so-and-so think?” Yet a well-developed conscience keeps you from doing bad things even if you are alone on a desert island; it doesn’t let you live at peace with yourself if you’ve done something wrong. But if conscience is simply a product of blind evolution, why would it bother you to do something that never hurt anybody else but was wrong anyway? What’s the evolutionary advantage in that?

Bostrom never comes up with a satisfying way to teach machines how to be moral. For one thing, you would want to base a machine’s morality on some set of logical principles, which means a moral philosophy. And, as Bostrom admits, there is no system that most moral philosophers agree on (which means most moral philosophers are wrong about morality).

Those of us who believe that morality derives not from evolution, experience or tradition but from a supernatural source that we call God have a different sort of problem. We know where conscience comes from, but that doesn’t make it any easier to obey. We can ask for help, but the struggle to accept that help from God goes on every day of life, and some days it doesn’t go very well. As for whether God can teach a machine to be moral: well, God can do anything that isn’t logically contradictory. But whether He’d want to, or whether He’d just let things take their Frankensteinian course, is not up to us. So we’d better be careful.

 

Karl Stephan is a professor of electrical engineering at Texas State University in San Marcos, Texas. This article has been republished, with permission, from his blog Engineering Ethics.

Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies was published in 2014 by Oxford University Press.
