The simple way

A lot of companies have trouble prioritizing which content/products/information to show the user when they arrive at their site. Everybody within the company wants to contribute: Sales, Marketing, PR, Product Owners.

But what it really comes down to is the following:

  • Can we identify our users/visitors/customer segments?
  • What do these identified users usually want to achieve?
  • What is the simplest way of offering them what they want?

You can find out by analyzing site data, customer segments, customer behavior, customer focus groups, usability testing, sales statistics, user flows and customer service input. Going through all of this information tells you what customers want to achieve when visiting your site or using your app. It is not unusual that this differs from what you as a company want the user to achieve. So how do you combine these two goals?

  1. Make it simple for users to achieve what they came to do (if they have a goal).
  2. If they don’t have a set agenda, guide them through a focused UI, without many distractions, towards your goal.

This means daring not to show every possible offer all the time and to deliberately focus on one thing at a time. This focus can of course be adjusted according to time of day, the broadcast time of commercials, or whether we have a logged-in user profile.

Dare to be simple.

Interaction and interfaces, part 2: The Future

In my last post I ranted a bit about Apple, their new iOS design and their place within the changing interaction ecosystem. In this post I want to focus on the future of interaction and interfaces: Where are we headed, why, and will it be better than today?


”If you want to know where technology is headed,
look at how artists and criminals are using it.”

William Gibson


If you look at current and past science fiction movies, some elements of the interaction with computers keep recurring: voice commands, hand gestures and 3D navigation. The first two elements are well on their way in today’s interaction environment, but the third is remarkably absent in itself, though today’s multitasking layer environment in computers can be looked upon as semi-3D.

But let’s examine the first two elements in some more depth:

1. Voice Controlled Devices (VCD)
The past 20 years have introduced everything from washing machines that allow consumers to operate washing controls through vocal commands and mobile phones with voice-activated dialing. The new and modern VCDs are speaker-independent, so they can respond to multiple voices, regardless of accent or dialectal influences (instead of thoroughly analyzing one voice through different test sentences). They are also capable of responding to several commands at once, separating vocal messages, and providing ”appropriate” feedback, trying to imitate a natural conversation. VCDs can be found in computer operating systems (Windows, Mac OSX, Android), commercial software for computers, mobile phones (iOS, Windows Phone, Android Phone, BlackBerry), cars (Ford, Chrysler, Honda, Lexus, GM), call centers ”agents”, and internet search engines such as Google.

Among the future cross-platform players are Google, which has created a text-to-speech engine called Pico TTS, and Apple, which has released Siri. Apple’s use of Siri in the iPhone and Google’s use of speech recognition in, for example, Google Glass have not been received without sarcasm or frustration. Both give you the possibility to issue a set of commands: dictate, google/search for information, get directions, send email/message/tweet, open apps and set reminders/meetings.

Siri hasn’t been as big a success as anticipated, mostly because of issues with Siri not understanding your commands correctly. But Siri’s technical solution is not an easy one. It is built from two parts: the virtual assistant and the speech-recognition software (made by Nuance). The assistant actually works pretty well, while the speech-recognition engine works… occasionally. This has to do with how the different parts interact, and also with the quality and speed at which the actual sound file can be delivered to the online speech-recognition engine, which then has to send the text back to your phone for the virtual assistant to act on. Sound complicated? Basically, if you articulate well while you’re connected to Wi-Fi you should be well off. In the future – apart from improving Siri – Nuance has mentioned developing advanced voice-recognition software for use in cars (Dragon Drive), for getting directions or searching for nearby restaurants, but also within TVs (Dragon TV).
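The two-part split is easier to see as a pipeline: audio goes to a remote recognition engine, text comes back, and only then can the assistant act. A minimal sketch in Python, where all function names and the recognized command are invented for illustration and stand in for the real (proprietary) components:

```python
# Illustrative sketch of a two-part voice-assistant pipeline:
# a (normally remote) speech-recognition step followed by a local
# "assistant" that maps the returned text to an action.

def transcribe(audio_bytes: bytes) -> str:
    """Stand-in for the online speech-recognition engine.

    In a real system the audio is uploaded over the network and the
    recognized text is sent back, so overall quality depends on both
    connection speed and how clearly the user articulates.
    """
    # Pretend the engine recognized this command.
    return "set reminder buy milk"

def assistant(text: str) -> str:
    """Stand-in for the virtual assistant: maps text to an action."""
    if text.startswith("set reminder "):
        return "Reminder created: " + text[len("set reminder "):]
    if text.startswith("search for "):
        return "Searching the web for: " + text[len("search for "):]
    return "Sorry, I didn't understand that."

def handle_utterance(audio_bytes: bytes) -> str:
    # The two parts interact strictly in sequence, which is why a slow
    # upload or a bad transcription delays or derails the whole exchange.
    return assistant(transcribe(audio_bytes))

print(handle_utterance(b"\x00\x01"))  # → "Reminder created: buy milk"
```

The point of the sketch is the bottleneck: however good the assistant logic is, it can only ever act on whatever text the recognition step hands it.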

Among other prominent devices, voice commands were given a lot of room when Microsoft revealed the new Xbox One. Voice is used for starting, ending and switching between different services, but also for giving specific commands within games.

So this is the present situation, but where does the future of voice commands lie? Vlad Sejnoha, chief technology officer of Nuance Communications, believes that within a few years mobile voice interfaces will be much more pervasive and powerful. “I should just be able to talk to it without touching it,” he says in an article in Technology Review. “It will constantly be listening for trigger words, and will just do it — pop up a calendar, or ready a text message, or a browser that’s navigated to where you want to go.”
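Stripped of the speech layer, “constantly listening for trigger words” is a small dispatch loop over transcripts. A hedged sketch, with trigger words and actions invented for illustration (Sejnoha’s calendar, text message and browser examples):

```python
# Minimal sketch of a trigger-word dispatcher in the spirit of the
# always-listening assistant described above. Trigger words and actions
# are invented for illustration.

TRIGGERS = {
    "calendar": lambda rest: "Opening calendar",
    "text": lambda rest: "Drafting text message: " + rest,
    "browse": lambda rest: "Navigating browser to " + rest,
}

def on_speech(transcript: str):
    """Scan one transcript for a trigger word and fire its action.

    Returns None when no trigger word occurs, i.e. the assistant keeps
    listening without reacting to ordinary conversation.
    """
    words = transcript.lower().split()
    for i, word in enumerate(words):
        action = TRIGGERS.get(word)
        if action:
            # Everything after the trigger word is the command payload.
            return action(" ".join(words[i + 1:]))
    return None

print(on_speech("please text mom running late"))  # → "Drafting text message: mom running late"
print(on_speech("just chatting about nothing"))   # → None
```

The privacy concern in the next paragraph follows directly from this structure: to find the trigger word at all, every transcript – trigger or not – has to pass through the listener.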

This future scenario sounds both intriguing and disturbing. A silent spying assistant that is always on call, ready to do your bidding even before you ask for it. Hopefully the privacy and security settings will be as well developed and intelligent as the voice-recognition itself.


2. Gestures UI

Kevin Kelly, founder of Wired Magazine but also technical consultant on the fictional interfaces designed by Jorge Almeida for the iconic movie Minority Report, recently gave a speech where he described the future impact of different disruptive technologies. Among them was gesture-based interaction, Gestures UI, something that was featured in Minority Report when Tom Cruise orchestrated rather than navigated and clicked to find information within a computer (Tom Cruise’s interaction was, by the way, a lot more realistic than Keanu Reeves’s quite ridiculous 3D/VR-glove attempts in Johnny Mnemonic). Kelly states that as screens and displays can be anything and everywhere – something he also managed to put into Minority Report – an easy and accessible way would be using a type of sign language rather than typing your commands.

Kelly gives eye tracking as an example of currently existing technology that scans your body language to get information. Eye tracking could also be used to identify your mood and level of interest and adapt the presentation accordingly, for example noticing that you don’t understand a word and subtly explaining it to you. Iris-identification software could also be used to identify persons to a larger extent than today, possibly even for advertising purposes.

In the gaming industry, PlayStation and Xbox have introduced some gesture-based features and developed them further with their next-generation consoles, which contain even more possibilities for commands, navigation and in-game interaction through gestures.

The touch-based revolution that Apple initiated has been a great building block in preparing the public for even more physical future interaction patterns.

The two disruptive interaction technologies described above will change the design of interfaces massively. For example: when using voice-based interaction, an effective, intelligent and attentive servant could remove the need for an actual interface or menu. Gestures, on the other hand, would get us closer and more active, which would require a totally different approach. Xbox One combines both features and will be an interesting experience.

Both technologies could restore some of our humanity within the digital environment as we come to use human language – voice-based or body-based – as the primary tool for interacting with machines.

Interaction and interfaces part 1: Apple

The launch of a redesigned interface always generates a lot of discussion, especially in fast-paced social media channels. For the past week the iOS7 interface (designed by Jonathan Ive) has caused quite a stir. The choice to move away from skeuomorphism towards a flatter, simpler and more modern-looking design has been seen as both controversial and thoughtless – harsh accusations to be based mostly on HD screenshots and a short introductory film. CoDesign’s John Pavulus’s excellent article, Cliff Kuang’s comment in Wired and developers’ direct access to iOS7 have provided, and will further provide, more nuance and insight into the process behind Apple’s choices. Apple is brave to dare to start moving towards new, unproven interface grounds, but I believe they could have been even braver. Why? Because besides the 200 new features iOS7 contains, it’s mostly just a change of design. Most of the familiar styles of interaction will remain – as they have proven to be extremely successful – unchanged.


”I believe I can see the future
Because I repeat the same routine”

– Nine Inch Nails, Every day is exactly the same


When Apple launched the iPhone it was the first step on a beautiful new journey for all mobile customers. It changed how and why we think about and use mobile phones. It was genuinely groundbreaking, as it managed to overcome the final problems and finally brought our phones over the smartness barrier. Most of the services Apple offered weren’t unique, but they were offered in a pristine, shining environment that, although new and maybe even frightening, immediately felt like home. Steve Jobs’s thoughts on skeuomorphism might not have been in line with a minimalist designer’s wet dream, but they gave the iPhone its human aura and feel. The real underlying reason it became successful, though, was that the iPhone managed to introduce a different way to interact with our phones. We came closer to them and closer to the information we now could easily consume and distribute, faster and more efficiently than ever. A closeness which translated into an even larger touch-based product – the iPad. Tablets might pretty soon overtake laptops in sales, and one can see in research report after research report how Apple has thoroughly changed the way we consume media, communicate with each other and interact with technology. So if they have our consumer loyalty, product worship and user experience in a firm grip, why won’t they dare to take an even greater risk when changing the iOS interface?

First of all, the landscape where Apple’s products exist has changed dramatically since 2007. There is a lot more serious competition within the digital ecosystem of mobile devices. Android has taken significant market share, but previously huge players like Microsoft should not be underestimated. And the competition is not only coming from within the mobility sphere. Next-generation game consoles, smart TVs and an increasing array of Internet of Things devices are affecting the position of the iPhone and iPad, as they introduce both other screens of consumption and new possible ways of interaction. At the digital game conference E3, smartphones and tablets have frequently been used as supplements to consoles and computer games, which has widened their ecosystem and range significantly (Read this for more info).

Apple has a given place within this ecosystem, but they must think more about the interaction environment within the ecosystem and less about the interaction between their own products. The optimum – for every producing company – would of course be for the complete ecosystem to contain products from only one company. Service and app developers think and work this way: the service should be recognizable and used in the same way independent of platform. A totally Apple-dominated product environment is a utopian dream, both wonderful and troublesome at the same time, and as competing players grow their market share Apple should focus on making a few friends and try living in fruitful symbiosis as a humble leader instead of an all-knowing pundit. For example, they could be better at meeting the demands and wishes of cross-platform apps and services – such as the new and improved Facebook Home – at least halfway.