A few weeks back, I took a class taught by Dan Saffer and Bill De Rouchey about designing gestural and touch user interfaces. During the class, we had an hour to prototype a "music store interface of the future." While working on the exercise, I couldn't help but wonder what kinds of experiences we could create with similar technologies for people who are disabled.
Two weeks later, I'd crafted the following exercise for our 80 Works class:
"The Experience Music Project (EMP) would like you and your big brains to create a novel exhibit experience for their Music Lab that lets deaf people feel different kinds of music. You have 45 minutes working together as a team to create a paper prototype AT SIZE that defines this experience. We’ll test your interface in a walkthrough, and then you'll get 15 more minutes to refine the interface for a final test."
With only 45 minutes to get a first rev of a paper prototype into place, Mark Notermann, Meg Doyle, Claire Kohler, and Donnie Dinch jumped into action! Let's look at their interface and see how they revised it on the fly to make it sing.
In case you've never been to the EMP in Seattle, here's the skinny. The top floor of the museum houses a Music Lab, where interactive exhibits teach you how to play various musical instruments, both at public displays and in special booths that you can enter for a set period of time, giving you and your friends some privacy to jam out. The exhibit I'd tasked the class with creating would live in one of those rooms, perhaps with a frosted glass window to provide some privacy.
The class quickly decided to set a series of constraints around the exhibit. It would let visitors choose between full bands (by picking a genre of music) and specific instruments. Visitors would then see, hear, and feel the music: visuals on a screen, audio through speakers, and vibration transmitted through the floor and walls.
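None of this was actually built, of course, but if you're curious how the "feel" channel might work in software, here's a rough sketch (assuming a browser-based exhibit and the Web Audio API) that maps the music's moment-to-moment loudness onto vibration intensity. The sendToActuators function is a made-up stand-in for whatever haptic hardware the real exhibit would drive:

```typescript
// Hypothetical sketch: drive floor/wall vibration from the music's loudness.
// Assumes a browser context with the Web Audio API; sendToActuators() is a
// made-up stand-in for the exhibit's real haptic hardware interface.

declare function sendToActuators(intensity: number): void; // expects 0.0 - 1.0

async function startFeelingTheMusic(audioElement: HTMLAudioElement): Promise<void> {
  const ctx = new AudioContext();
  const source = ctx.createMediaElementSource(audioElement);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;

  source.connect(analyser);
  analyser.connect(ctx.destination); // still play the audio through the speakers

  const samples = new Uint8Array(analyser.frequencyBinCount);

  function tick(): void {
    analyser.getByteTimeDomainData(samples);

    // Compute a rough RMS loudness for this frame (0.0 - 1.0).
    let sumOfSquares = 0;
    for (const s of samples) {
      const centered = (s - 128) / 128;
      sumOfSquares += centered * centered;
    }
    const rms = Math.sqrt(sumOfSquares / samples.length);

    sendToActuators(rms); // louder music => stronger vibration
    requestAnimationFrame(tick);
  }

  await audioElement.play();
  tick();
}
```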
As this was a paper prototype, the class drew and refined it as they went, making decisions together as to what would make the best experience. I was then brought in as the deaf person to use the interface without any aural input. (Next time, I should definitely do it wearing earplugs, so sound wouldn't be part of how I experience the prototype...)
This is the first screen of the interface. When you walk in, instructions describe what'll happen when you use the exhibit, and prompt you to immediately choose a single instrument or a style of music. Once you've made a choice, this screen doesn't appear again unless you hit a "start over" or "back" button on the exhibit interface.
The user selects "Single Instrument" and then goes to the next screen.
Once you're beyond the "get started" page, the main exhibit interface has a large video player with a scrub bar so you can see the progress of the video. To the left of the video, an "about" panel displays text describing the music or instrument being played.
The first version of the interface had a large box that took up more than half of the exhibit screen; it held either single instruments or styles of music. In the next rev, those buttons were collapsed into a menu at the bottom of the screen. This kept the menu items from interfering with the exhibit experience and also helped reduce repetitive motions, since a user listening to 30-second sound samples would likely be selecting many different elements off the menu over the course of a visit.
You can see here how the "single instrument" / "entire band" buttons have been shifted to the bottom, and the instrument selection is now part of a flyout menu. Users are prompted to "feel the music" by placing their hands on the handprints. The entire surface may be vibrating, but the visual cues draw the user to start here.
The user sees footprints on the floor. If they stand there, vibrations from the music will carry up through their legs.
Donnie and I discussed the usability considerations of reading long-form text on touch screen interfaces. How could the user easily scroll the text without a mouse? Should they scroll by touching the screen and dragging the text up or down? That would put their arm in front of the display, hiding content. If instead there's an arrow at the bottom of the screen, nothing gets covered, and they won't strain their arm by repeatedly reaching up to keep the text moving...
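To make the arrow idea concrete, here's a rough sketch of how it might work on a touch screen: a button at the bottom of the panel keeps the text scrolling only while a finger is on it, so nobody's arm ever covers the content. This is purely illustrative (we only built it on paper), and the element names are assumptions:

```typescript
// Hypothetical sketch: scroll the "about" panel with an on-screen arrow button
// instead of dragging, so the reader's hand never covers the text.

function wireScrollArrow(panel: HTMLElement, downArrow: HTMLElement): void {
  const pixelsPerFrame = 2;
  let scrolling = false;

  function step(): void {
    if (!scrolling) return;
    panel.scrollTop += pixelsPerFrame; // nudge the text upward a little each frame
    requestAnimationFrame(step);
  }

  // Touching (or clicking) the arrow starts scrolling; lifting the finger stops it.
  downArrow.addEventListener("pointerdown", () => {
    scrolling = true;
    requestAnimationFrame(step);
  });
  downArrow.addEventListener("pointerup", () => { scrolling = false; });
  downArrow.addEventListener("pointerleave", () => { scrolling = false; });
}
```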
The user can skip to a different point in the video by scrubbing the video progress bar. In hindsight, we could have added a play/pause button.
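If the exhibit were built around an HTML5 video element, the scrub bar and the play/pause button we wished we'd included might look something like this sketch. Again, purely illustrative; the element ids are made up:

```typescript
// Hypothetical sketch: wire a scrub bar and a play/pause toggle to an HTML5 video.
const video = document.querySelector<HTMLVideoElement>("#exhibit-video")!;
const scrubBar = document.querySelector<HTMLInputElement>("#scrub-bar")!; // <input type="range">
const playPause = document.querySelector<HTMLButtonElement>("#play-pause")!;

// Keep the scrub bar in sync as the video plays.
video.addEventListener("timeupdate", () => {
  scrubBar.value = String((video.currentTime / video.duration) * 100);
});

// Dragging the scrub bar seeks to that point in the video.
scrubBar.addEventListener("input", () => {
  video.currentTime = (Number(scrubBar.value) / 100) * video.duration;
});

// The play/pause button we wished we'd added.
playPause.addEventListener("click", () => {
  if (video.paused) {
    video.play();
  } else {
    video.pause();
  }
});
```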
What if you aren't deaf? What if you don't want to feel the vibration? At the very end, we added controls for the vibration and the audio, since a mixture of deaf and hearing people will likely play with the EMP exhibit.
The class did all of this in just one hour!
Amazing work from everyone involved... and the foundation of an exercise we'll be doing next week, where we'll take this one step further...