Ethical and moral issues of artificial intelligence


Debates about the social impact of creating intelligent machines have occupied many organizations and individuals over the past decades. Since many of the early science fiction speculations and predictions, from the late 19th century through to the 1960s, have already become reality, there is no reason to assume that robots and intelligent machines will not follow. We are already living in that era's future, experiencing a golden age of technology with no end, or limit, in sight.

The moral and ethical implications of artificial intelligences are obvious, and there are three sides to the argument. One party argues that, with so many of us already living in poverty and without work, there is little or no reason to create mechanical laborers that can think independently, and that we certainly should not create machines that can argue with us about such issues.

Another party argues that society cannot develop or take advantage of resources without the help of machines that can think for themselves at least a little. And party number three simply doesn't care about the issue at all, as is typical of human society.

On a more detailed level, opinions also differ about the extent to which we should make machines intelligent and what these machines should look like.

Are we talking about autonomous devices like space explorers, or robots that mimic human form, thought and behavior? As more and more of society gets automated, will we entrust our children, educational institutions, businesses, and governments to reasoning machines as well?


There are no clear answers here. Research is widespread and diverse, covering all aspects of artificial intelligence. We cannot even agree on what exactly defines intelligence, yet we are already creating artificial ones. So who is to say what is right?

But if we do build android machines with a designed intelligence that think and behave like humans, shouldn't they be made absolutely subservient to us?

Isaac Asimov, the science fiction author well known for his robot novels (amongst myriad others), formulated the Three Laws of Robotics in the 1940s. They were incorporated into the "positronic" brains of his robots in order to protect humans from a "robot revolution", and to prevent other humans from abusing the robots:

The Three Laws of Robotics
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

However, in his short-story collection "I, Robot" those same three laws are taken to their logical conclusion, causing a robot revolution anyway, both against and in favour of humanity. A lesson to be learned.


The above three principles are a good example of the difficulty of programming an artificial brain. The human brain evolved through millions of years of survival and social behavior, and we are still undergoing this process.
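Even at the most naive level, the difficulty shows up as soon as one tries to write the laws down as rules. The following is a minimal, hypothetical sketch, in Python, of the Three Laws as a priority-ordered constraint check; all names (Action, permitted, and the boolean flags) are illustrative inventions, and the sketch deliberately ignores the hard part, which is deciding whether an action actually "harms a human" in the first place.

```python
# A hypothetical sketch of Asimov's Three Laws as priority-ordered
# constraints. Every name here is illustrative; real-world safety
# constraints cannot be reduced to a handful of boolean flags.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False          # would this action injure a human?
    prevents_human_harm: bool = False  # does it avert harm to a human?
    ordered_by_human: bool = False     # was it commanded by a human?
    endangers_robot: bool = False      # does it risk the robot itself?

def permitted(action: Action, inaction_harms_human: bool) -> bool:
    """Evaluate the laws strictly in priority order."""
    # First Law: never injure a human, nor allow harm through inaction.
    if action.harms_human:
        return False
    if inaction_harms_human and not action.prevents_human_harm:
        return False
    # Second Law: obey human orders unless they conflict with the First
    # Law (conflicting orders were already rejected above).
    # Third Law: self-preservation yields to the first two laws; an
    # action endangering the robot is still permitted if a human ordered
    # it or it prevents harm to a human.
    if action.endangers_robot:
        return action.ordered_by_human or action.prevents_human_harm
    return True

# A robot may not obey an order to harm a human:
print(permitted(Action(harms_human=True, ordered_by_human=True), False))        # False
# It must risk itself to save a human:
print(permitted(Action(prevents_human_harm=True, endangers_robot=True), True))  # True
```

The sketch makes the real problem visible: the logic is trivial, but each flag hides an unsolved judgement ("what counts as harm?") of exactly the kind the rest of this article is about.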

Imitating the brain's workings is a tremendous challenge and, judging by the advance of current processor power and complexity, will take at least several decades more to reach even the most rudimentary levels.

And once we have decided that we do want android robots and other machines with an artificially created intelligence sophisticated enough to rival our own, the question still remains: which ethical and moral values do we instill in them?

Looking at human civilization with its diverse cultural, religious, ethical and moral values, what exactly are we trying to create here and to what purpose?

Do we need robots, for example, that are religiously biased? Is that what human society needs, the perfect Catholic, Muslim, or Buddhist mind? Or do we want a mind that is ruthlessly calculating, for example the perfect capitalist or efficiency expert? A law enforcer perhaps? Police, judge, jury and executioner all in one? The science fiction comic and movie "Judge Dredd" exemplifies this.


Judge Dredd comic

Just defining those values would prove impossible, as they are all similar in many ways while also being radically different. So instead we design the perfect ascetic mind, and then what? That certainly won't please a lot of people.

And what about practical applications of these values? If one set of ethical or religious values dictates that we cannot assist in euthanasia, for example, and another dictates that it is imperative that we do, aren't we just duplicating current issues without any real answers? What would be the point?

Perhaps artificial intelligences will show the same diversity as humans. So what would be the point then of creating artificial humans? Don't we have problems enough with the biological ones? Or are we simply looking to design a perfect human? Would that be a god then? But aren't we supposedly already made in a god's image?



On a more practical level, we could create an artificial intelligence, in android or machine form, that would function as a neutral entity (if that is at all definable, since it would still have to have a set of values) and whose sole purpose, for example, is to teach.

It would teach topics that do not involve any moral, ethical or religious values, such as geography, technical skills or mathematics. Inevitably, certainly if children are involved, it would get questions such as "Yes, but, why?"

If related to the topic it would answer appropriately, but predictably it would come to a point of no return. How then would it answer such a simple question, except with a "Does not compute" or similar non-committal answer? Perhaps it could say "Ask a human teacher", or "This question is not allowed", or "'Why' is not a valid question, please restate".

Not really good enough, is it? Asking why is the most fundamental question of all, isn't it? Without it we'd be animals with only instinct and reflex to guide us. We'd be automatons...

So the issue of which ethical, moral and cultural values to instill in our artificially created intelligences goes on. If such a machine can't even answer a simple "Why?" then perhaps we should make sure these machines aren't intelligent at all: not capable of making any decision beyond mechanical, programmed movement, certainly not capable of any deductive reasoning, and never in a position where they could influence or control humans or human society.

On the other hand, an artificial intelligence might be a better objective judge of human behaviour, better able to make the most beneficial decisions without being personally biased in any way. It would be incorruptible, for example. Without religious programming it would make decisions based purely on civil law. So perhaps it is time to hand over some of our decision-making to intelligent machines. How much worse can they do?

Societal issues

For example, would an artificial intelligence make a better criminal judge than a human one? With the laws currently in effect, and the punishments applicable to each crime, rigidly coded in its memory, wouldn't we be far better off with a machine intelligence making the judgement call? Perhaps.

Machine intelligence would not be compassionate; it would not be able to judge each case individually, nor could it take a criminal's background into consideration, for example. It would make errors in judgement without making errors in judgement... Its judgements would have to be peer reviewed by a human, so what's the point then?

And what about bigger issues, such as economic decisions? Given enough data to work with, would an artificial intelligence not be better suited to steer the economy? It would be much more capable of tracking global trade and commerce statistics, and better able to prevent disasters like the current crisis we are all in. With a machine intelligence at the helm of global economics, imbalances that are now common might no longer exist; they would be prevented.


We are already involving robots in our warfare, making the battlefield a remote experience for their operators, so why not use artificial intelligences to settle global disputes as well? Would they not make a fairer decision? Perhaps. Unfortunately, in the face of personal greed and ideological fanaticism it would be difficult to get all parties to agree to let a machine make the ultimate decisions. Clearly, humankind isn't ready yet for impartial decision making.


See also Ethical Issues Concerning Robots And Android Humanoids in the Robots section.

We have selected articles in this section that discuss the ethical and moral pros and cons of artificially intelligent machines. Decide for yourself.

Ethical and moral issues of Artificial Intelligence

  • International Association of Science and Technology for Development (IASTED) - Conferences 2005.
  • Committee for the Scientific Investigation of Claims of the Paranormal - "Darwin in Mind - Intelligent Design meets Artificial Intelligence".
  • Hollywood Jesus - "AI Artificial Intelligence", the movie.
  • American Library Association - Intelligent computers.
