Robo-ethicist - new specialty for lawyers?

Wired Gadget Lab has an article entitled Robo-Ethicists Want to Revamp Asimov’s 3 Laws.  Seems that some think that Isaac Asimov's Three Laws of Robotics are too simplistic.  Those 3 laws say:

1.  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.  A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3.  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The article starts by saying:

"Two years ago, a military robot used in the South African army killed nine soldiers after a malfunction. Earlier this year, a Swedish factory was fined after a robot machine injured one of the workers (though part of the blame was assigned to the worker). Robots have been found guilty of other smaller offenses such as incorrectly responding to a request.

So how do you prevent problems like this from happening? Stop making psychopathic robots, say robot experts.

“If you build artificial intelligence but don’t think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath,” says Josh Hall, a scientist who wrote the book Beyond AI: Creating the Conscience of a Machine."

The article refers to a paper entitled Toward the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots, recently published in the International Journal of Social Robotics.

Take a look at the article to see what they have in mind. After all, we need to get this sorted out before we have a Cylon problem.