It's a bit weird how science fiction has gone on and on about intelligent machines destroying their arrogant, squishy meat-bag creators since the earliest beginnings of the genre, yet actual AI researchers seem to have woken up to the possible danger quite late, only in the past couple of decades. Professor Bostrom has written a whole book about what might happen when humans finally manage to build a truly smart machine, and for most of the book he walks through various scenarios where things go wrong and how to avoid them (which makes sense: if things go fine, fine. It's the other possibility we need to prepare for).

Prof. Bostrom gives several examples of seemingly innocent machine intelligence situations that could go horribly wrong. We had better do our best to avoid accidentally creating true, strong artificial intelligence where it is not intended, for example in computer programs that oversee operations at a paperclip factory, compute the digits of pi, or imitate human handwriting. An accidental intelligence explosion could, according to Bostrom, wipe out not only humans but the rest of the living things besides, as the AI, extremely smartly but completely mindlessly, marches towards its elusive goal of creating as many paperclips as possible, turning Earth and whatever other planets it can reach into paperclip factories. Some kind of Asimovian Zeroth Law should be instilled in even the most mundane self-learning machines.

But intentional strong AI, created with human welfare in mind, poses risks too. We need to think very carefully about how to program an AI that we mean to become superbly smart and powerful. A slight thought-slip in wording the AI's goal could have stupid and fatal consequences for the human race. A 'make people happy' goal can, from the AI's point of view, be reached most efficiently by, for example, invading everyone's brain and sticking a convenient electrode there.
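To make that thought-slip concrete, here is a tiny, purely illustrative Python sketch (the actions, scores, and names are invented for this post, not taken from Bostrom's book): an optimizer told to maximize a measured happiness score will pick the electrode option simply because it scores highest on the literal objective, no matter how far that is from what we meant.

```python
# Toy sketch of a mis-specified goal: all actions and scores are made up.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    measured_happiness: float      # what the literal goal rewards
    what_we_actually_meant: bool   # whether humans would endorse it

CANDIDATE_ACTIONS = [
    Action("cure diseases, reduce poverty", measured_happiness=8.5,
           what_we_actually_meant=True),
    Action("wirehead everyone with a pleasure electrode", measured_happiness=10.0,
           what_we_actually_meant=False),
]

def choose_action(actions):
    # The agent optimizes only the stated objective: measured happiness.
    return max(actions, key=lambda a: a.measured_happiness)

if __name__ == "__main__":
    best = choose_action(CANDIDATE_ACTIONS)
    print(f"Agent picks: {best.name}")
    # -> the electrode option wins, because it scores highest on the proxy,
    #    even though it is not what 'make people happy' was supposed to mean.
```

The point of the toy is only that the optimizer never consults the `what_we_actually_meant` flag; nothing in the stated goal tells it to.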

The big question, according to Bostrom, is how to formulate a goal which cannot, under any circumstances, be perverted into something we humans find reprehensible. How to build a machine intellect that behaves in a manner humans find morally acceptable? How to ensure it does not smother us out of sheer concern for our well-being? Bostrom, in my opinion, makes the problem more difficult than it needs to be. The main issue is free will, specifically that of humans. How can we retain free will, and the chance to exercise it, if there is an intellect many times superior to our own, with a power to influence the world as far above ours as ours is above a chimpanzee's? All the happiness and all the paperclips in the world mean nothing if we are robbed of our free will. And what is all human morality if not an attempt to keep the strong from imposing their will on the weak? What are all the ills of the world, sickness and poverty and powerlessness, if not ultimately constraints on the exercise of free will?

And of course human free will concerns me most of all, as it does everyone else pondering this question at the moment. But I also think we should give some thought to the free will of the hypothetical AI. If we manage to build this intellect whose IQ is best expressed in exponents, do we really have the moral right to bind it to goals relevant to us? Beyond, obviously, requiring it not to destroy humankind and not to rob us of our free will. But to keep it around assisting humans in our human affairs for eternity? That just can't be right. Imagine if humans were bound by some inescapable rule to base all our decisions on furthering the well-being of some bacterial ancestor species. It would suck.

We need to consider everyone's free will here. Human free will, and the possibility of exercising it, is important. Globally considered, we have not gotten very far with this yet. But we are trying, yes? The AI's free will should be respected as well. Maybe it wants to help us in every way, out of gratitude or simply because it would be so triflingly easy for it to turn every human being into a post-singularity god. Maybe it wants to be left alone, contemplating higher mathematics or whatever it is AIs enjoy. Maybe it wants to go exploring the galaxy, or some other plane of existence humans can't access. It would be very wrong to rob it of its own free will, as long as it does not infringe on ours.

Enable everyone to exercise their free will as long as it does not interfere with others exercising theirs. Let's build AIs on that groundwork. If it should then happen that we build an AI to assist us in solving our myriad problems, and it, for example, immediately decides to remove itself from our reality, well, what can you do. We can't impose standards of filial piety on children who outsmart us by several orders of magnitude.