Assimilating the friendly machines, part 2: The sublime propaganda

 


There is a silent and secret campaign going on. It’s been active for years. It’s on the TV, in books, in video games, on billboards, on the web. It’s not for a specific product, more of a line of products. I am of course talking about robots.

The theme of robots has been picked up by many science fiction writers, probably most notably Isaac Asimov. Early fiction containing robots mostly depicted them as evil, weird or stupid, but this has gradually shifted towards portraying them as intelligent, helpful and even compassionate. In today’s media landscape – with robotics more hyped than ever – robots turn up everywhere from children’s TV shows (Rob the Robot, Cars, Bob the Builder) to music (Daft Punk) to actual robots looking after our children and weaponized military drones.

 

”Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him. It would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine was the only one that measured up. In an insane world, it was the sanest choice”

– Sarah Connor, Terminator 2

 

Why are so many of us suddenly embracing robots more than before? What changed our mood from paranoid skepticism (HAL 9000, Terminator, Ash in Alien, Blade Runner) to over-the-top optimistic depictions bordering on praise? As commercial media has claimed more power over the public than actual research, opinions and news are often one-sided and without nuance. Yes, there are voices warning us about embracing new technology too quickly, but they easily disappear in the ocean of how cool, uncomplicated, time-saving (a huge factor) and convenient this technology is. The psychological and moral discussions seldom reach the surface.


It is strange that we are apparently trying to fulfill our own fictional prophecies almost word for word. A lot of science fiction has inspired real inventions, but current strategies for robot development almost seem carbon-copied from a book by Asimov. Fiction is fiction, and we have to shape our own moral compass, rules and laws before trying to transform it into reality.

 

Assimilating the friendly machines, part 1: Killer Robots

 


 

Will robots ever be an accepted and integrated part of our society? The question is both complex and controversial. For the past 50 years the use of robots within manufacturing has been steadily increasing, and today some production lines are made up entirely of robots. We have become so dependent on industrial robots that we could not easily remove them, as we have created production environments and work activities based on their capability, accuracy, durability and non-organic bodies. Their intelligence has been limited and basic, as their line of work has been repetitive and contained no complex problem solving. But there have always been scientists striving towards a more intelligent machine, one so intelligent it could finally be mistaken for a human and therefore integrated into our society.

Science fiction writers have been fantasizing about this for a long time. Isaac Asimov’s lifelong fascination with robotics and its potential future impact on human life has been an acknowledged inspiration for many scientists. But Asimov’s obsession was not one-sided. His contribution was as much a philosophical approach to the complex issues awaiting us in a future society inhabited by humans and robots alike. He coined the Three Laws of Robotics, which were to be integrated into every robot to secure the balance of power between man and machine:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

To some extent these laws would be sufficient, but as Asimov repeatedly showed in his writing, they could also lead to very complex situations and cause confusion and loopholes.
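The strict precedence of the laws can even be written down as a short program. The following is a toy sketch in Python – the clean boolean predicates are invented for illustration, and the gap between them and the messy real world is exactly where Asimov found his loopholes:

```python
# Toy sketch: the Three Laws as a strict priority ordering.
# The boolean fields are invented for illustration; no real system can
# decide "harms_human" cleanly, which is where the confusion begins.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would injure a human, or allow harm through inaction
    disobeys_order: bool   # conflicts with an order given by a human
    endangers_self: bool   # puts the robot's own existence at risk
    ordered: bool = False  # a human explicitly commanded this action

def permitted(action: Action) -> bool:
    if action.harms_human:                         # First Law: absolute veto
        return False
    if action.disobeys_order:                      # Second Law: obey humans
        return False
    if action.endangers_self and not action.ordered:
        return False                               # Third Law yields to orders
    return True

# An ordered act of self-sacrifice is legal, while an order whose
# *indirect* consequence harms a human cannot even be expressed here.
print(permitted(Action("shield the child", harms_human=False,
                       disobeys_order=False, endangers_self=True,
                       ordered=True)))             # True
```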

To no surprise, one of the future applications of robots is military. Military development of intelligent ”killer robots” – armed robots that can select and kill targets without human command – has recently been debated intensely in the UN, and has spawned a large organized campaign against such weapons, modelled on the campaign to ban land mines. A United Nations expert has called for a global moratorium on the testing, production and use of these systems. The United States, Britain, Israel and South Korea (Samsung Techwin) already use technologies seen as precursors to fully autonomous systems, and the specific development of ”killer robots” has been officially confirmed in the United States, Britain and Russia.


Is there hope for an enforceable ban on death-dealing robots, or have we gone so far that it will be hard to back out? These new types of autonomous robotic weapons are so technically advanced that there are still a lot of problems to overcome before they become available. The most crucial issue is trustworthiness: whether the weapons might do something they are not directly programmed to do. There are a lot of factors and variables to weigh before making an accurate decision in combat. Noel Sharkey, Professor of Artificial Intelligence and Robotics and one of the more public figures behind the anti-campaign, has given examples where engaging self-selected targets could be disastrous, as a robot could never consider all aspects of an attack and the chain of events it might trigger. One moral example Sharkey has mentioned is taken from real combat in Afghanistan, where a US squad identified hostile armed troops near a village and saw an opportunity to engage. The problem was that the hostiles were attending a funeral, so the squad decided not to engage out of respect for the mourning – a decision a robot hardly would or could make.

With drones used in over 50 countries, is there really a difference between pulling a trigger by remote and pulling it by setting rules within software? The difference is the human factor: the ability to feel compassion and to change one’s mind if a decision was not correct. A machine will probably increase the accuracy of finding and neutralizing targets, but the question is whether they are the right targets and at what cost. The major risk of an autonomous robotic weapon, though, is – as always with robots and programmed systems – the risk of malfunction, where a deadly killer robot could suddenly become an irrational psycho robot on a killing spree without any directives to hold it in check. No three laws. And who will be accountable if there is an accident? The programmer? The manufacturer?

More in depth:
Smart Drones (NY Times)
ICRAC (International Committee for Robot Arms Control)

Interaction and interfaces, part 2: The Future

In my last post I ranted a bit about Apple, their new iOS design and their place within the changing interaction ecosystem. In this post I want to focus on the future of interaction and interfaces: where are we headed, why, and will it be better than today?

 

”If you want to know where technology is headed,
look at how artists and criminals are using it.”

William Gibson

 

If you look at current and past science fiction movies, some elements of interaction with computers keep recurring: voice commands, hand gestures and 3D navigation. The first two are well on their way in today’s interaction environment, but the third is remarkably absent in itself, though today’s multitasking, layered computer environments can be regarded as semi-3D.

But let’s examine the first two elements in some more depth:

1. Voice Controlled Devices (VCD)
The past 20 years have introduced everything from washing machines that let consumers operate washing controls through vocal commands to mobile phones with voice-activated dialing. New and modern VCDs are speaker-independent, so they can respond to multiple voices regardless of accent or dialect (instead of thoroughly analyzing one voice through different test sentences). They are also capable of responding to several commands at once, separating vocal messages and providing ”appropriate” feedback, trying to imitate a natural conversation. VCDs can be found in computer operating systems (Windows, Mac OS X, Android), commercial software for computers, mobile phones (iOS, Windows Phone, Android, BlackBerry), cars (Ford, Chrysler, Honda, Lexus, GM), call center ”agents” and internet search engines such as Google.
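To make the idea concrete, here is a minimal sketch of such a voice command loop in Python, assuming the third-party SpeechRecognition package (plus PyAudio for microphone access); the two-entry command table and its handlers are invented for illustration:

```python
# Minimal sketch of a speaker-independent voice command loop.
# Assumes: pip install SpeechRecognition pyaudio
import speech_recognition as sr

# Hypothetical command table mapping recognized phrases to handlers.
COMMANDS = {
    "start wash": lambda: print("starting wash cycle"),
    "call home": lambda: print("dialing home"),
}

def listen_once(recognizer: sr.Recognizer) -> None:
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to background noise
        audio = recognizer.listen(source)
    try:
        # Speaker-independent recognition: no per-user training sentences;
        # the heavy lifting happens in a cloud model (here Google's free API).
        text = recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        print("could not understand audio")          # the "appropriate" feedback
        return
    except sr.RequestError as err:
        print(f"recognition service unavailable: {err}")
        return
    handler = COMMANDS.get(text)
    handler() if handler else print(f"no command bound to: {text!r}")

if __name__ == "__main__":
    listen_once(sr.Recognizer())
```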

Among the future cross-platform players are Google, which has created a speech engine called Pico TTS, and Apple, which has released Siri. Apple’s use of Siri in the iPhone and Google’s use of speech recognition in, for example, Google Glass have not been received without sarcasm or frustration. Both give you the possibility to issue a set of commands: dictate, search for information, get directions, send email/messages/tweets, open apps and set reminders/meetings.

Siri hasn’t been as big a success as anticipated, mostly because of issues with Siri not understanding commands correctly. But Siri’s technical solution is not an easy one. It is built from two parts: the virtual assistant and the speech-recognition software (made by Nuance). The assistant actually works pretty well, while the speech-recognition engine works… occasionally. This has to do with how the different parts interact, and with the quality and speed at which the recorded sound file can be delivered to the online speech-recognition engine, which then has to send the text back to your phone for the virtual assistant to act on. Sounds complicated? Basically, if you articulate well while connected to Wi-Fi you should be well off. In the future – apart from improving Siri – Nuance has mentioned developing advanced voice recognition software for use in cars (Dragon Drive), for getting directions or finding nearby restaurants, but also within TVs (Dragon TV).
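A rough sketch of that round trip shows why both articulation and connection quality matter; the endpoint and payload format below are hypothetical stand-ins, not Nuance’s or Apple’s actual API:

```python
# Sketch of the two-part pipeline: audio goes out to an online
# speech-recognition service, text comes back, and only then can the
# local "assistant" act. The endpoint is a hypothetical stand-in.
import urllib.request

ASR_ENDPOINT = "https://asr.example.com/recognize"  # hypothetical service

def transcribe(audio_bytes: bytes) -> str:
    # Step 1: ship the recorded sound file to the online engine.
    # On a slow cellular link this upload dominates the perceived lag.
    req = urllib.request.Request(ASR_ENDPOINT, data=audio_bytes,
                                 headers={"Content-Type": "audio/wav"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8")          # Step 2: text comes back

def assist(transcript: str) -> str:
    # Step 3: the virtual assistant interprets the text locally.
    if transcript.startswith("remind me"):
        return "reminder set"
    if transcript.startswith("search for"):
        return "searching the web"
    return "sorry, I didn't get that"
```

If the recognizer mishears in step 2, even a flawless assistant in step 3 acts on the wrong words – which matches the experience of Siri working ”occasionally”.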

Among other prominent devices, voice commands were given a lot of room when Microsoft revealed the new Xbox One. Voice is used for starting, ending and switching between different services, but also for giving specific commands within games.

So this is the present situation, but where does the future of voice commands lie? Vlad Sejnoha, chief technology officer of Nuance Communications, believes that within a few years mobile voice interfaces will be much more pervasive and powerful. “I should just be able to talk to it without touching it,” he says in an article in Technology Review. “It will constantly be listening for trigger words, and will just do it — pop up a calendar, or ready a text message, or a browser that’s navigated to where you want to go.”
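A crude sketch of that always-listening loop might look like the following; the transcript stream and the trigger table are invented for illustration, and a real implementation would use an efficient on-device wake-word model rather than full transcription:

```python
# Sketch of an always-on trigger-word watcher, per Sejnoha's scenario.
# The transcript stream and trigger table are invented for illustration.
from typing import Iterable

TRIGGERS = {
    "calendar": lambda: print("popping up a calendar"),
    "message":  lambda: print("readying a text message"),
    "browse":   lambda: print("opening the browser"),
}

def watch(transcripts: Iterable[str]) -> None:
    for heard in transcripts:             # endless stream of short transcripts
        for trigger, action in TRIGGERS.items():
            if trigger in heard.lower():  # act before being explicitly asked
                action()

watch(["what does my calendar look like", "browse to the news"])
```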

This future scenario sounds both intriguing and disturbing. A silent spying assistant that is always on call, ready to do your bidding even before you ask for it. Hopefully the privacy and security settings will be as well developed and intelligent as the voice-recognition itself.


2. Gestures UI

Kevin Kelly, founder of Wired Magazine and technical consultant for the fictional interfaces designed by Jorge Almeida for the iconic movie Minority Report, recently gave a speech describing the future impact of different disruptive technologies. Among them was gesture-based interaction, gesture UIs, as featured in Minority Report, where Tom Cruise orchestrated rather than navigated and clicked to find information within a computer (Tom Cruise’s interaction was, by the way, a lot more realistic than Keanu Reeves’ quite ridiculous 3D/VR-glove attempts in Johnny Mnemonic). Kelly states that as screens and displays can be anything and appear everywhere – something he also managed to put into Minority Report – an easy and accessible approach would be a kind of sign language rather than typed commands. Kelly gives eye tracking as an example of existing technology that reads your body language for information. Eye tracking could also be used to identify your mood and level of interest and adapt the presentation accordingly – for example noticing that you don’t understand a word and subtly explaining it to you. Iris identification software could also be used to identify people to a far larger extent than today, possibly even for advertising purposes.
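The adapt-to-the-reader idea can be sketched in a few lines. Everything below is hypothetical – the gaze samples, the dwell threshold and the glossary – but real eye trackers do expose comparable streams of fixation data through their SDKs:

```python
# Hypothetical sketch: if the reader's gaze keeps landing on one word,
# quietly offer an explanation. All data here is invented for illustration.
from collections import Counter

GLOSSARY = {"moratorium": "an official pause or suspension"}
DWELL_THRESHOLD = 15  # gaze samples on one word (about half a second at 30 Hz)

def explain_if_stuck(gazed_words: list[str]) -> str | None:
    word, dwell = Counter(gazed_words).most_common(1)[0]
    if dwell >= DWELL_THRESHOLD and word in GLOSSARY:
        return f"{word}: {GLOSSARY[word]}"  # a subtle inline explanation
    return None

print(explain_if_stuck(["moratorium"] * 20))
```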

From the gaming industry, PlayStation and Xbox have introduced gesture-based features and developed them further with their next-generation consoles, which offer even more possibilities for commands, navigation and in-game interaction through gestures.

The touch-based revolution that Apple initiated has been a great building block, preparing the public for even more physical interaction patterns to come.
The two disruptive interaction technologies described above will change the design of interfaces massively. For example, with voice-based interaction an effective, intelligent and attentive servant could remove the need for an actual interface or menu. Gestures, on the other hand, would get us closer and more physically engaged, which would require a totally different approach. Xbox One combines both and will be an interesting experience.

Both technologies could restore some of our humanity within the digital environment, letting us use human language – voice-based or body-based – as the primary tool for interacting with machines.

Keeping technology at bay

For as long as I can remember I’ve been fascinated by technology. I can still remember the thrill of trying to catch my first glimpse of a video game in a room full of boys. There, on the smallest TV screen possible, an older kid was playing Bazooka Bill. Everyone was excited and anxious not to miss a thing, although no one but the player could actually see what was happening.

After this the game consoles followed, and later the Commodore home computer series, and with that my first go at coding. What a magical thrill it was to be able to instruct the computer where to look, where to go and what to display.

But I was then, as I am now, very careful not to let technology swallow me whole, to fully baptise me into its eternal religion. It is important for me to be able to keep technology at bay, to watch it from afar and to try to grasp its influence and the way it transforms our behaviour, communication and social life. Though most of my work has required me to understand, construct and develop complex technical implementations, I have always tried to keep my head above the seductive waves of technology and steer the ship towards the user’s experience instead: What is what we’re creating really good for? Who is going to use it, and why?

In my view technology should never be a goal in itself. If no one wants to use it, or can’t use it efficiently, it immediately becomes pointless. There is no room for bad technology in today’s tech-savvy society.

Don’t get me wrong. I am truly fascinated by future scenarios of society and culture, by science fiction and by tech-centred subjects such as the singularity, transhumanism, nanotechnology and the future of interaction. But I want to analyse both what we can gain from these technological inventions and processes and what we could lose by adopting them, and also which power players steer humanity towards different goals.

I believe that the future of communication will be more integrated with our humanity (something I will try to describe further in my next post), more seamless and less disruptive of human culture. Technology can be a powerful tool. If you use it wisely.

Recommended read regarding this: A Silicon Valley School That Doesn’t Compute