INTIMATE COMPUTING: COMMUNICATION IN THE AGE OF WEARABLES AND SMARTGLASSES

By Nils Berger, CEO and owner of Viewpointsystem

As computers have become more ubiquitous and intelligent than even the most enthusiastic science fiction author could have imagined, our interactions with them have grown steadily more intimate. That makes sense: computers have become things we carry with us, wear close to our bodies, or even place within them.

We have entered an age not of personal computing, but of intimate computing. And just as the way we interacted with computers shifted when machines shrank from the size of rooms to the size of shoeboxes, the interaction model is changing once again. Computing devices are no longer set apart from our bodies. They are all around us and on us. For computers to become as invisible to operate as they have become physically, they need to understand our context and desires.

WANTED: THE INTERFACE OF THE FUTURE

The earliest computers – the ones that filled rooms and buildings – were programmed quite literally by running wires from one component to another. As computers became more flexible, the interface between people and computers was managed with cards punched on a keyboard-operated machine, reels of magnetic tape, and wide, noisy printers. Communicating with computers became a matter of flicking small switches to tell them what to do and watching flickering lights to see if they were responding correctly.

With the PC revolution came the keyboard, the mouse and later the touch pad. Although they allow you to give precise commands, interaction via buttons, touch pads or sweeping hand gestures is too cumbersome and impractical for an immersive user experience.

Voice, too, remains an immature interface. First, voice works best in a quiet environment, which can be difficult to find in industrial or private settings. Perhaps more important, voice-operated devices need to have a level of conversational awareness that systems have yet to accomplish. Too often, it is people who are forced to learn what a voice-powered device can understand, rather than the device learning what the person wants. That’s the exact opposite of intimate interaction.

IT STARTS WITH YOUR EYES

The best interface is no interface, one where your computer understands your surroundings and what you want to do without your explicit input. That intimate immersive interaction would be intuitive, portable, and invisible. It wouldn’t be so much an interface as it would be existence in an everyday reality.

I know this sounds very much like science fiction, maybe even a little hallucinogenic. Yet this sort of seamless mixed reality is actually quite close.

Poets have long recognized that “the eyes are the window to the soul.” And indeed, the key lies in our most important sense, the eye. Watching a person’s eyes is critical to understanding what they want. The gaze jumps around, then settles on something. A widening of the pupils indicates a visceral emotional reaction to our surroundings and mirrors basic emotions such as fear, anger, stress, excitement or uncertainty.

Precise eye hyper-tracking measures these small but tell-tale movements and lets us learn a great deal about the wearer’s level of attention and current state of mind. When eye tracking is combined with the right software, as we do with our “Digital Iris” technology, it will soon be possible to show users exactly the information they need on the display of their smart glasses in a particular situation – without their having to actively tell the device what they want via a touch pad or voice command.
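
To make the idea more concrete, here is a minimal, hypothetical sketch of what such gaze analysis could look like in code. It is not the Digital Iris implementation; the `GazeSample` structure, field names and thresholds are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class GazeSample:
    t: float         # timestamp in seconds
    x: float         # horizontal gaze position, degrees of visual angle
    y: float         # vertical gaze position, degrees of visual angle
    pupil_mm: float  # pupil diameter in millimetres


def detect_fixation(samples: List[GazeSample],
                    max_dispersion_deg: float = 1.0,
                    min_duration_s: float = 0.25) -> Optional[GazeSample]:
    """Return the centre of the latest fixation, or None while the gaze is still jumping.

    A fixation is approximated as a window of samples whose positions stay within a
    small dispersion for a minimum duration (a simple dispersion-based heuristic;
    production eye trackers use more robust algorithms).
    """
    if not samples or samples[-1].t - samples[0].t < min_duration_s:
        return None
    xs = [s.x for s in samples]
    ys = [s.y for s in samples]
    dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
    if dispersion > max_dispersion_deg:
        return None
    centre_x = sum(xs) / len(xs)
    centre_y = sum(ys) / len(ys)
    return GazeSample(samples[-1].t, centre_x, centre_y, samples[-1].pupil_mm)


def arousal_from_pupil(samples: List[GazeSample], baseline_mm: float) -> float:
    """Crude arousal score: relative pupil dilation versus a personal baseline (non-empty input)."""
    mean_pupil = sum(s.pupil_mm for s in samples) / len(samples)
    return max(0.0, (mean_pupil - baseline_mm) / baseline_mm)
```

A system along these lines could then decide, for instance, that a stable fixation on a machine part combined with a rising arousal score means “show the relevant instructions now”.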

GAZE-BASED MIXED REALITY

Even today, the digitization of eye movements makes the interaction between human and device more intuitive and natural. Two examples:

Eye gestures:
Smartglasses wearers can, so to speak, communicate with their eyes, selecting information on the display or in the physical world with an eye gesture instead of using their hands or voice (a sketch of how such a selection might work follows after these examples). When repairing a machine, for example, a technician has the required digital information displayed directly in front of the eye, leaving both hands free for the work.

Perception tracking:
Remote experts connected to the system can use the wearer’s eye movements to identify what he or she does and does not perceive. This lets them guide the wearer with great precision in the video stream during maintenance or repair work.
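
One common way to turn an eye gesture into a command is dwell-based selection: holding the gaze on an element for a short time counts as a “click”. The sketch below is a hypothetical illustration of that idea, not Viewpointsystem’s product code; the region names, coordinates and dwell time are assumptions.

```python
import time
from typing import Dict, Optional, Tuple

# Hypothetical display regions: element id -> bounding box (x0, y0, x1, y1)
# in normalised screen coordinates.
REGIONS: Dict[str, Tuple[float, float, float, float]] = {
    "next_step":      (0.70, 0.80, 0.95, 0.95),
    "wiring_diagram": (0.05, 0.80, 0.30, 0.95),
}


class DwellSelector:
    """Treat a sustained gaze on one region as a selection (an 'eye click')."""

    def __init__(self, dwell_time_s: float = 0.8):
        self.dwell_time_s = dwell_time_s
        self._current: Optional[str] = None
        self._since: float = 0.0

    def update(self, gaze_x: float, gaze_y: float,
               now: Optional[float] = None) -> Optional[str]:
        """Feed one gaze sample; return a region id once the dwell time is reached."""
        now = time.monotonic() if now is None else now
        hit = None
        for name, (x0, y0, x1, y1) in REGIONS.items():
            if x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1:
                hit = name
                break
        if hit != self._current:
            # Gaze moved to a new region (or off all regions): restart the dwell timer.
            self._current, self._since = hit, now
            return None
        if hit is not None and now - self._since >= self.dwell_time_s:
            self._current = None  # reset so the same dwell does not fire on every frame
            return hit
        return None
```

Fed with the same gaze stream, a remote-expert view could simply draw the wearer’s latest fixation point on top of the shared video, so the expert sees at a glance what the wearer is, or is not, looking at.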

MAKING THE RIGHT DECISIONS

If, in the near future, we combine eye movement measurements with individual biometric information and machine learning, the system will know what interests me and what my current rational and emotional state is in a given situational context.

An example: I’m in New York on my way to an important meeting. It’s hot, and I don’t find my way around big cities very well. My smartglasses recognize my flickering gaze and notice that my pulse is rising. But they also know that the walk takes only four minutes and that I still have 15 minutes left. Hence the suggestion: “Take off your jacket and walk slowly to the meeting.” I never get into a stressful situation in the first place, because my system advises me based on my personal parameters and indicators.
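
How might such a recommendation be assembled from the signals described above? The following sketch is purely illustrative and assumes made-up thresholds and a simple rule-based policy rather than a trained model; in practice the weighting of gaze, pulse and schedule data would be learned per user.

```python
from dataclasses import dataclass


@dataclass
class Context:
    arousal: float       # e.g. a pupil-based score as in the earlier sketch, 0 = calm
    pulse_bpm: int       # from a paired wearable
    walk_minutes: float  # estimated walking time to the destination
    minutes_left: float  # time remaining until the appointment


def suggest(ctx: Context) -> str:
    """Very simple rule-based coach: compare stress signals with the actual time budget."""
    stressed = ctx.arousal > 0.15 or ctx.pulse_bpm > 100
    slack = ctx.minutes_left - ctx.walk_minutes
    if stressed and slack >= 5:
        return "You have plenty of time. Take off your jacket and walk slowly to the meeting."
    if stressed and slack < 2:
        return "Time is tight. Here is the fastest route."
    return "You're on track."


# The scenario above: a four-minute walk, 15 minutes left, rising pulse and flickering gaze.
print(suggest(Context(arousal=0.2, pulse_bpm=105, walk_minutes=4, minutes_left=15)))
```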

Through our ability to track people’s gaze and perception, we open the door to their subconscious decision-making processes. That’s a powerful change in the way people interact with computers – an intimate connection suited to the move from personal computing to intimate computing.

Computers are smart enough to do what we want, if only we can tell them what that is. We are approaching the point where it’s easier than ever to do just that.