Assimilating the friendly machines, part 1: Killer Robots
Will robots ever be an accepted and integrated part of our society? The question is both complex and controversial. For the past 50 years the use of robots in manufacturing has been steadily increasing, and today some production lines are made up entirely of robots. We have become so dependent on industrial robots that we could not easily remove them, as we have built production environments and work activities around their capability, accuracy, durability and non-organic bodies. Their intelligence has been limited and basic, since their line of work has been repetitive and involved no complex problem solving. But there have always been scientists striving toward a more intelligent machine, one so intelligent it could finally be mistaken for a human and therefore integrated into our society.
Science fiction writers have been fantasizing about this for a long time. Isaac Asimov's lifelong fascination with robotics and its potential future impact on human life has been an outspoken inspiration for many scientists. But Asimov's obsession was not one-sided. His contribution was as much a philosophical approach to the complex issues awaiting us in a future society inhabited by humans and robots alike. He coined the Three Laws of Robotics, which were to be integrated into every robot to secure the power balance between man and machine:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
To some extent these laws would be sufficient, but as Asimov continually showed in his writing, they could also lead to very complex situations and give rise to confusion and loopholes.
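The Three Laws are, in effect, a strict priority ordering of rules, and that structure can be made concrete in code. The sketch below is purely illustrative, assuming a hypothetical action model with boolean flags; none of the names come from any real robotics system, and it deliberately ignores the ambiguities Asimov's stories exploit.

```python
# Hypothetical sketch: Asimov's Three Laws as a priority-ordered action filter.
# All class and function names are illustrative, not from any real robotics API.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False        # would this action injure a human?
    inaction_harm: bool = False      # would NOT acting allow a human to come to harm?
    ordered_by_human: bool = False   # was this action ordered by a human?
    endangers_self: bool = False     # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    """Evaluate the Three Laws in strict priority order."""
    # First Law: never injure a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harm:
        return True  # must act to prevent harm, overriding the lower laws
    # Second Law: obey human orders (First Law was already checked above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when the higher laws are silent.
    return not action.endangers_self

print(permitted(Action("fire weapon", harms_human=True)))     # False
print(permitted(Action("shut down", ordered_by_human=True)))  # True
```

Even this toy version hints at the loopholes: everything hangs on how the flags are assigned, and a situation where one action both prevents harm to one human and causes harm to another simply cannot be expressed.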
Unsurprisingly, one of the future applications of robots is military. Military robot development resulting in intelligent "killer robots" – armed robots that can select and kill targets without human command – has recently been debated intensely in the UN and has also given rise to a large organized campaign to ban them, similar to the earlier campaign against land mines. A United Nations expert has called for a global moratorium on the testing, production and use of armed robots that can select and kill targets without human command. The United States, Britain, Israel and South Korea (Samsung Techwin) already use technologies that are seen as precursors to fully autonomous systems, and the specific development of "killer robots" has been officially confirmed in the United States, Britain and Russia.
Is there hope for an enforceable ban on death-dealing robots, or have we gone so far that it will be hard to turn back? These new types of autonomous robotic weapons are so technically advanced that there are still many problems to overcome before they become available. The most crucial issue is trustworthiness: whether the weapons might do something they are not directly programmed to do. There are many factors and variables to weigh before making an accurate decision in combat. Noel Sharkey, Professor of Artificial Intelligence and Robotics and one of the more public figures behind the anti-campaign, has given examples where engaging or attacking self-selected targets could be disastrous, as the robot could never consider all aspects of an attack and the chain of events it might trigger. One moral example Sharkey has mentioned is taken from real-life combat in Afghanistan, where a US squad identified hostile armed troops apparently near a village and saw an opportunity to engage. The problem was that the hostiles were attending a funeral, so the squad decided not to engage, out of respect for the mourners – a decision a robot hardly would or could make.
With drones used in over 50 countries, is there really a difference between pulling a trigger by remote and pulling a trigger by setting rules within software? The difference is the human factor: the ability to feel compassion and to change one's mind if a decision was not correct. A machine will probably increase the accuracy of finding and neutralizing targets, but the question is whether they are the right targets, and at what cost. The major risk of an autonomous robotic weapon, however, is – as always with robots and programmed systems – the risk of malfunction, where a deadly killer robot could suddenly become an irrational psycho robot on a killing spree, with no directives to hold it in check. No three laws. And who will be accountable if there is an accident? The programmer? The manufacturer?