Apple's announcement on Monday regarding the iPhone 3G S, with voice control, represents more than just a way to manage your iPod state and dial phone calls hands-free. It's an important step toward bringing into the mainstream new flavors of user interface that are contingent on the intangible.
I'm not talking solely about Voice User Interfaces (VUIs), or the profession of Voice Interaction Design (VIxD), or the many small fiefdoms and associations currently blossoming around human-computer interaction governed by conversational speech systems. These are useful and important niches, but let's think big in our increasingly fractured and over-specialized profession of design.
Let me propose a somewhat radical alternative: roll Voice User Interfaces into a category that I'd like to dub the Intangible User Interface.
We have Graphical User Interfaces, which we know quite well from decades of struggling with operating systems. Our new friends, the Touch and Natural User Interfaces, rely on our physical bodies for operation beyond devices like mice and keyboards. Intangible User Interfaces, however, would be a branch of interface that relies on everything except your physical body in motion as an input mechanism. There are some wobbly semantics around the word "intangible," as it's often used to describe the attributes of a designed system that can't be visibly measured or quantified when observing users. But it's that specific quality I want to focus on: input and output contingent on what cannot be seen.
With those parameters, speech is only one of a wide variety of ways to interface with a computer. What if you were provided with a ballcap that had electrodes placed at the temples, so you could transmit your thoughts to the iPhone 9G in order to control your iPod? Can an interface prompt you with a scent when dinner's ready? If you're feeling sad, will the room brighten up and the coffee maker brew you a cup of tea?
This stuff isn't so crazy. We already have the technology, in a rough form, to move beyond linguistics and into simple interactions stimulated by brainwaves. Like any nascent innovation, this field will move from novelty to the stuff of science fiction within our lifetime.
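To make the "rough form" of that technology concrete: many early brain-computer interfaces reduce a noisy signal to a single power measurement and threshold it into a binary command. The sketch below is purely illustrative; the function names, numbers, and the idea of mapping power to play/pause commands are my own assumptions, not a real EEG pipeline.

```python
# Toy sketch (hypothetical): turning a noisy "brainwave"-style signal into a
# simple binary command, the way a rough thought-driven iPod control might.
# All thresholds, names, and sample values here are illustrative assumptions.

def band_power(samples):
    """Mean squared amplitude of a window of signal samples."""
    return sum(s * s for s in samples) / len(samples)

def to_command(samples, threshold=0.5):
    """Map a window of signal to a command by thresholding its power."""
    return "play" if band_power(samples) > threshold else "pause"

# A calm (low-amplitude) window versus a concentrated (high-amplitude) one.
calm = [0.1, -0.2, 0.15, -0.1]
focused = [0.9, -1.1, 1.0, -0.8]

print(to_command(calm))     # low power -> "pause"
print(to_command(focused))  # high power -> "play"
```

Crude as it is, this is roughly the level at which such interfaces operate today: not reading thoughts, but detecting coarse changes in signal energy and mapping them to one or two commands.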
And these experiences will need to be designed, not just dictated by scientists. Exploring device control beyond considerations of visual stimuli—the senses that inform our brain, and the signals our brains create that cause us to behave in certain ways—falls squarely into our domain as designers and communicators when it comes to how the experience unfolds. But we will have to partner with neuroscientists and technologists to try to understand such bold questions as: What does speech look like to a computer when you are voicing a thought? How does a computer understand the non-linguistic human mind without us simply recreating ourselves within the machine? (Though we may need to...) Will forcing ourselves to think in specific ways to interface with machines radically change the shape of human thought, for better or for worse? And my favorite question of the bunch: How do we manage detailed, unique thoughts and emotions across different cultures and languages?
This is the next level of radical exploration beyond the Voice User Interface. Light will always be faster than sound, and the speed of thought will always beat out the time necessary to transmit the verbal "handshake" that voice interfaces require. Systems that mimic human-grade communication, via technologies employed in such a humble manner as "Speak or press one for movie times in your area," have now tottered beyond baby steps into a happy adolescence. But this method of interaction is only one branch of a field that will move beyond the idea of the shared gesture, whether physical or verbal, and into a more esoteric realm.
Therein lies the rub for this next great frontier: Before we can design for the intangible experience, we must first dismantle ourselves. Understanding the human mind, to date, has been beyond the human mind's comprehension. And this whole discussion verges on the idea of people becoming bionic, which is a whole other area of inquiry. (Leading us to robotics, killer machines dominating the world, and so forth.)
Until that bright future, I'm fascinated to see how we can start thinking about designing for thought and emotional input in computing. Who knows—maybe an alien race will provide us with the details, thereby saving us thousands of years of research...