Assimilating the friendly machines, part 2: The sublime propaganda

 


There is a silent and secret campaign going on. It’s been active for years. It’s on the TV, in books, in video games, on billboards, on the web. It’s not for a specific product but rather for a whole line of products. I am of course talking about robots.

The theme of robots has been picked up by many science fiction writers, probably most notably Isaac Asimov. Early fiction containing robots mostly depicted them as either evil, weird or stupid, but this has gradually shifted towards describing them as intelligent, helpful and even compassionate. In today’s media landscape – with robotics more hyped than ever – robots can be found everywhere from children’s TV shows (Rob the Robot, Cars, Bob the Builder) to music (Daft Punk) to actual robots looking after our children to weaponized military drones.

 

”Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him. It would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine was the only one that measured up. In an insane world, it was the sanest choice.”

– Sarah Connor, Terminator 2

 

Why are so many of us suddenly embracing robots more than before? What made our mood change from paranoid skepticism (HAL 9000, Terminator, Ash in Alien, Blade Runner) to over-the-top optimistic depictions bordering on praise? As commercial media has claimed more power over public opinion than actual research, the news and commentary are often one-sided and without nuance. Yes, there are voices warning us about embracing new technology too quickly, but they easily disappear in the ocean of how cool, uncomplicated, time-saving (a huge factor) and convenient this technology is. The psychological or moral discussions seldom reach the surface.


It is strange that we apparently are trying to fulfill our own fictional prophecies almost word for word. A lot of science fiction has acted as inspiration for inventions, but the future strategies for robot development almost seem carbon-copied from a book by Asimov. Fiction is fiction, and we have to make up our own moral compass, rules and laws before trying to transform fiction into reality.

 

Assimilating the friendly machines, part 1: Killer Robots

 


 

Will robots ever be an accepted and integrated part of our society? The question is both complex and controversial. For the past 50 years the use of robots within manufacturing has been steadily increasing, and today some production lines are made up entirely of robots. We have become so dependent on industrial robots that we could not easily remove them, as we have created production environments and work activities based on their capability, accuracy, durability and non-organic bodies. Their intelligence has been limited and basic, as their line of work has been repetitious and contained no complex problem solving. But there have always been scientists striving towards a more intelligent machine, so intelligent it could finally be mistaken for a human and therefore integrated into our society.

Science fiction writers have been fantasizing about this for a long time. Isaac Asimov’s lifelong fascination with robotics and its potential future impact on human life has acted as an outspoken inspiration for many scientists. But Asimov’s obsession was not one-sided. His contribution was as much a philosophical approach to the complex issues awaiting us in a future society inhabited by humans and robots alike. He coined the Three Laws of Robotics, which were to be integrated into every robot to secure the power balance between man and machine:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

To some extent these laws would be sufficient, but as Asimov continually showed in his writing, they could also lead to very complex situations and cause confusion and loopholes.
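Read as a rule system, the three laws form a strict precedence hierarchy, which can be sketched as a toy check in a few lines of code. This is purely an illustration with hypothetical names – not code from Asimov’s fiction or from any real robotic system:

```python
from dataclasses import dataclass

# Hypothetical toy model of a proposed robot action -- not a real robotics API.
@dataclass
class Action:
    harms_human: bool = False       # would injure a human, or allow harm through inaction
    ordered_by_human: bool = False  # was ordered by a human being
    endangers_self: bool = False    # risks the robot's own existence

def permitted(a: Action) -> bool:
    """Check a proposed action against the Three Laws, in order of precedence."""
    # First Law outranks everything: never harm a human.
    if a.harms_human:
        return False
    # Second Law: a (harmless) order from a human must be obeyed,
    # even at the robot's own expense -- so it is permitted here
    # regardless of the risk to the robot itself.
    if a.ordered_by_human:
        return True
    # Third Law: otherwise the robot must protect its own existence.
    return not a.endangers_self

# First outranks Second: an ordered action that would harm a human is refused.
print(permitted(Action(harms_human=True, ordered_by_human=True)))    # False
# Second outranks Third: an ordered action may endanger the robot.
print(permitted(Action(ordered_by_human=True, endangers_self=True))) # True
# Third Law: absent orders, self-destructive actions are refused.
print(permitted(Action(endangers_self=True)))                        # False
```

Even this trivial encoding hints at the loopholes Asimov kept exploring: three boolean flags cannot express a dilemma such as harming one human to protect several others, or an order whose harmful consequences only emerge later.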

To no one’s surprise, one of the future applications of robots is military. Military robot development resulting in intelligent ”killer robots” – armed robots that can select and kill targets without human command – has recently been debated intensely in the UN and has also given rise to a huge organized campaign for a ban, reminiscent of the campaign to ban land mines. A United Nations expert has called for a global moratorium on the testing, production and use of armed robots that can select and kill targets without human command. The United States, Britain, Israel and South Korea (Samsung Techwin) already use technologies that are seen as precursors to fully autonomous systems, and the specific development of ”killer robots” has been officially confirmed in the United States, Britain and Russia.


Is there hope for an enforceable ban on death-dealing robots, or have we gone so far that it will be hard to turn back? These new types of autonomous robotic weapons are so technically advanced that there are still a lot of problems to overcome before they become available. The most crucial issue lies in trustworthiness: whether the weapons will do something they are not directly programmed to do. There are a lot of factors and variables to calculate before making an accurate decision in combat. Noel Sharkey, Professor of Artificial Intelligence and Robotics and one of the more public figures behind the anti-campaign, has given examples where engaging or attacking self-analyzed targets could be disastrous, as the robot could never consider all aspects of an attack and which chain of events it could trigger. One moral example Sharkey has mentioned is taken from real-life combat in Afghanistan, where a US squad identified hostile armed troops near a village and saw an opportunity to engage. The problem was that the hostiles were attending a funeral, so the squad decided not to engage out of respect for the mourners – a decision a robot hardly would or could make.

With drones used in over 50 countries, is there really a difference between pulling a trigger by remote and pulling a trigger by setting rules within software? The difference is the human factor: the ability to feel compassion and to change one’s mind if a decision was not correct. A machine will probably increase the accuracy of finding and neutralizing targets, but the question is whether they are the right targets and at what cost. The major risk of an autonomous robotic weapon is – as always with robots and programmed systems – the risk of malfunction, where a deadly killer robot could suddenly become an irrational psycho robot on a killing spree without any directives to hold it in check. No three laws. And who will be accountable if there is an accident? The programmer? The manufacturer?

More in depth:
Smart Drones (NY Times)
ICRAC (International Committee for Robot Arms Control)