Saturday 8th February, 2014
11:45am to 11:45am
When handed an iPad in landscape orientation, users in a Poynter study swiped through photo albums horizontally 90% of the time. When handed the same interface in portrait orientation, they still tried swiping horizontally more than 80% of the time. This behavior isn't random; it is learned, and it is shaped by the design of the interface.
With the rise of gestural interfaces and ubiquitous computing, users encounter systems with few physical affordances for interaction. Designers have tried to overcome these barriers with multi-page instruction screens at application startup, introductory tutorials for first-time device users, and prominent feedback for allowed and disallowed interactions.
How do you introduce users to new gestures and ways of interacting without extensive help modules or person-to-person assistance? How do people discover that a four-finger swipe is an interaction with purpose, not an accident? Where is the sweet spot between an overly assistive interface and one that leaves the user grasping for a lifeline?
This talk reviews some of the latest assistance methods in touch, gesture-based, and mediated interaction, with examples drawn from the introduction and refinement of gestures in Google Now, the trials of complex photo editing on phones, Apple's hidden gestural language, early Xbox discoveries, challenges faced by the Google Glass team, and the almost-ready-for-use Leap Motion controller.