The Glass interface is still “interruption-driven” and difficult to use for extended periods of time, Michal Levin, a senior user experience designer at Google who recently joined the Glass project, said today at a mobile technology conference in San Francisco.
“It’s not a device that you can use for a long time; it gets pretty tiring pretty fast,” she said. For example, the device wakes up to deliver alerts, interrupting whatever else the user is doing.
Levin also indicated that the interface charted new territory in human sensory processing and psychology. She initially didn’t understand why the device limited video playback to 10 seconds, but later concluded that “watching more than 10 seconds on this device is mentally exhausting; the brain is not even wired to do that.”
Levin’s self-critical take was in sharp contrast to the unbridled enthusiasm that other Google employees and beta testers have offered. Most criticism of Glass to date has come from people who aren’t using it themselves.
Levin, who will soon publish a book on designing user experiences across multiple devices, indicated that Google will first need to determine what role Glass will play in a multi-device ecosystem.
“What is this device about? What problem does it solve? Does it replace the phone?” are among the questions that Google will need to answer during field testing in order to establish a clear use case for the device.
“Users have to understand which role it plays, when it’s better than the phone and when it’s not,” she said.
Steph Habif, a behavioral psychologist who teaches in Stanford’s d.school, backed Levin’s view.
The best user experience for apps and interfaces that require new behaviors comes when accessing the new technology is easy and the technology makes it easier for the user to do something s/he already needs or wants to do, she said.
Users generally agree that Glass is somewhat difficult to use at first. Users can speak commands, beginning with “OK, Glass”; they can also swipe and tap a small touchpad along the side arm of the frame.
Levin said that the challenges presented by the interface aren’t limited to the small screen, nor are they resolved by the option to command the device by voice.
“There’s a lot of things that you have to learn, but the mental model behind [the different means of interaction] is not very clear,” she said.