YOUser Experience

Phone Knows Best

Trust Me

Can a phone be your trusted best friend? Your personal trainer? Your confidante? Can it cheer you up when you're stressed? Can it know what you're feeling, and why you're feeling it? Can it go away when you just want to sit in the corner and cry?

If we're serious about pushing the utility of mobile devices to their absolute limit, then we will have to create software sophisticated enough to discern the difference between the perceived intent of our actions and the actual intent contained in our brains and bodies. Computers will need to make us feel like they're reading our minds, not just our words, where our eyeballs are pointing, and where our bodies are positioned in physical space. And when we behave in an irrational manner (meaning, like human beings), these same computers will need to withhold judgment on what does not compute.

We will call these design challenges "HAL 9000 problems," because this leap in technological evolution brings up some very gnarly dilemmas for designers and developers—though not because an AI in a spaceship is preparing to kill us. (Yet.)

Let me paint a culinary scenario of the near future...

A week ago, I installed an Augmented Reality Diet App. Since the installation, I've been eating healthy. Until yesterday. I was a bit peckish and went to Burger King and had a Double Whopper with extra pickles. Now my phone's going crazy.

Tonight, as I amble down the block, looking at my augmented reality Dinner Locator for what might meet my stomach's fancy, the application dims out places where I really shouldn't eat—that is, if I want to stay within my caloric intake for the week. I recently upgraded to the iPhone 7G SX, which is able to track how many calories I've eaten on a daily basis, by meal, and how many calories have been burned by my activity. It knows that I'm skinny and unlikely to binge out on a regular basis. But the app is making some assumptions based on the likelihood that I'll consume something that conflicts with my stated diet goals. (I filled those in when I set up the app, and it measured my fat levels by sending a little electrical current through my body when I was holding the phone in my hand.) A few stray variables have been screwing with my Diet App, which perceives my intent as follows: "Dinnertime + hungry + low blood sugar + surrounded by bad dietary options = likelihood of making bad choice."

If I start to walk towards one of the places the app has deemed a dangerous choice, my iPhone won't let me pay. (The app is connected to my credit card.) Thankfully, I'm carrying cash, so I just turn off my phone and buy some fried chicken. But my iPhone can detect how my stress level is rising because it's limiting the choices I want to make, and it throws a final wrinkle into my evil master plan. It lets me eat where I want, but whines and barks as I reach the upper limit of my possible caloric intake. And if I go over that limit, I risk being mildly shocked. After all, I did click "Agree" on those terms of use in the App Store...

*

I crafted this scenario to sound farfetched and silly, and I think I've succeeded. But most of the technology I referenced already exists, in one fashion or another. Give us time, and we'll be talking with our phones in hushed tones, asking for advice, rather than shouting at them in frustration when they crash.

Which brings us back to HAL 9000. Poor Dave had to shut HAL down because HAL thought that his mission was being jeopardized. HAL could detect human stress levels in a number of ways, intuiting stress and emotional upset with his auditory and visual sensors. "I can tell from your voice harmonics, Dave, that you're badly upset," speaketh the machine. Of course, HAL was overreacting just a tad, in his highly logical fashion. But isn't that what humans are supposed to do? Overreact?

I lifted that HAL quote from an article about researchers who fitted a Roomba with sensors to detect the stress level of people around it. Grow too tense, and the area around you stays dirty. (So even though I was worried about getting this blog post done, I did want the carpet under the coffee table to get clean. Damn you, Roomba!)

Discerning the intent of users is a tricky business, even with metrics tracking your every gesture online, people being paid to observe your actions as part of research studies, and the never-ending quest of e-commerce websites to ease you through purchasing flows to acquire that fancy blender you've always coveted. UX designers aren't going to be put out of business by system intelligence, at least not for another fifteen to twenty years.

But in the next few years, we'll be seeing technology that evaluates your emotions as part of how it relates to you. And at the same time, we'll be finding ways to make our interactions with our phones even more private. Imagine trying to decide where to eat with your A.R. diet app, pointing it at McDonald's, and then watching red, flashing warning lights appear on the screen, visible to everyone around you...

Technology is no longer the great enabler, but a potential source of public shame. We no longer have the luxury of the computer screen to shield us from our users, or each other. In the most advanced mobile technology, our intent can be observed and recorded in more ways than we can imagine.

It will be our task to provide the right shape to these kinds of experiences, so we do not reduce the humanity of those who use them. And it will be the role of users to smack down those artifacts of novel technology that reach far beyond the bounds of what a computer should provide a human being—emotionally, and maybe even spiritually.

Until the upcoming robot invasion. At that point, all bets are off.
