Well, We've Done All This Research, Now What? Steve Portigal, "Design Research Luminary," and Julie Norvaisas show you how designers and researchers can work with user research data to create action for businesses. One of the most persistent factors limiting the impact of user research in business is that projects often stop at cataloging findings and implications rather than generating opportunities that act directly on those findings. As designers increasingly become involved in using contextual research to inform their design work, they may find themselves holding a trove of raw data with little sense of how to turn it into design.
The emphasis in this workshop (including a pre-work exercise in the days and weeks leading up to User Research Friday) will be on strengthening the creative link between "data" and "action." By the end, participants will have developed a range of high-level concepts that respond to a business problem and integrate a fresh, contextual understanding of that problem.
by Indi Young
Indi Young, author of Mental Models, presents the second half-day workshop on becoming an amazing facilitator. Ever fumbled for the next thing to ask when you’re interviewing someone? We have. Learn how to listen deeply, instead of just tracking a script, to really master non-directed interviews. Essentially, you want the person to tell you what's important to them, not find out how they respond to what you think is important. When conducting generative interviews (not evaluative interviews), it is important to relax and let the conversation flow, exploring sub-topics that your interlocutor brings up and seems to emphasize or hesitate about.
This workshop will reiterate the importance of a focus on behavior, reaction, and beliefs. It will teach you how to identify and avoid statements of fact and passive action. You will have a chance to conduct your own interviews and see how it feels to run a more relaxed session wherein you gather information that is more useful for your purposes.
Jared talks about unmoderated tools and gives a few example stories that illustrate why you should be *really* careful with interpreting data from these kinds of methods.
by Beverly Freeman
Many research methods enable a "snapshot in time" view into the user experience, whether it's a usability test of the first-encounter experience or a post-transaction satisfaction survey. People's interactions with and attitudes toward products and services, however, change over time. Longitudinal research involves collecting data from the same group of people at multiple points in time. Increasingly, online technologies are making it possible to conduct longitudinal studies that are scalable for researchers and engaging for participants. This talk will touch on a few case studies and best practices for conducting online longitudinal studies.
Admit it. Your organization wants to build in Social. But how do you measure whether it's doing what you want it to do? You know that conducting usability tests can tell you where people get frustrated with interactions in the user interface. What will it tell you about frustrations with interactions with other people online? The truth is, being online has always meant "social." But scale makes a huge difference. And ensuring that the user is in control has never been as important as it is when personally identifiable information is involved.
When interaction is fluid, ambient, and content-driven, how do you derive a task scenario? What are the success criteria? When you have large-scale social, user hacks turn into functionality and social norms. Etiquette evolves organically. What's that test look like?
Whether it's because of technological limitations or plain old inexperience, screwing up research data should be a mandatory part of every UX professional's career development. The more tools you use, the greater the variety of eff-ups. However, a lot of blunders can be quickly boiled down to a few practical morals. In 20 minutes, you'll hear about some pretty big mistakes I've made with a variety of tools, what I took away from those experiences, and how I turned those mistakes into wins for my team.
by Brian Krausz
A/B testing, interrupt studies, and similar tests are shifting from in-house software to more general, and increasingly powerful, online solutions. This talk will explore the surge in UX tools being offered as online services. Brian will discuss the advantages of this trend as more UX research tasks and tools are offloaded to external companies.
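The core mechanic behind most A/B testing services, whether in-house or hosted, is deterministic bucketing: hashing a user ID with an experiment name so each user always lands in the same variant. A minimal sketch, with the function name and log format being assumptions for illustration:

```python
import hashlib

def ab_bucket(user_id, experiment, ratio=0.5):
    """Assign a user to variant 'A' or 'B' deterministically.

    Hashing user_id together with the experiment name means the same
    user always sees the same variant, and different experiments
    bucket independently. (Hypothetical helper, not any vendor's API.)
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the 128-bit hash onto [0, 1) and compare to the split ratio.
    return "A" if int(digest, 16) / 16**32 < ratio else "B"

# The assignment is stable across calls:
print(ab_bucket("user-42", "new-signup-flow"))
```

Hosted tools wrap this idea in dashboards and significance testing, but the stable-assignment property is what makes results interpretable.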
by Derek Pearcy
What users say will generally be different from what they do -- this is true, but what's a good strategy when you can't get to enough of your users? What if you could answer some really big questions by performing simple research on ALL of your users? Companies like Google and Zynga take this approach to target the user research efforts that have made them what they are today. Log analysis, done well, can seem like mind-reading. If you haven't done it before: there's nothing to fear.
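At its simplest, the log analysis described above is just aggregation: parse each log line, then tally events across every user. A minimal sketch in Python, where the line format (`timestamp user_id event`) is an assumption for illustration, since real log formats vary widely:

```python
from collections import Counter

# Hypothetical log excerpt in "timestamp user_id event" form.
SAMPLE_LOG = """\
2010-11-18T09:00:01 u1 search
2010-11-18T09:00:05 u2 click
2010-11-18T09:01:10 u1 search
2010-11-18T09:02:00 u3 search
"""

def event_counts(log_text):
    """Tally how often each event type occurs across all users."""
    counts = Counter()
    for line in log_text.splitlines():
        _timestamp, _user_id, event = line.split()
        counts[event] += 1
    return counts

print(event_counts(SAMPLE_LOG))  # Counter({'search': 3, 'click': 1})
```

The same loop extends naturally to per-user counts or time-windowed tallies, which is where behavioral questions ("what do people actually do?") start to get answered at full scale.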
by Jono Xia
Test Pilot is a research program launched in 2009 by Mozilla Labs to learn about how people use Firefox. It's strictly anonymous, opt-in, and unlike most data-collection programs, it gives users the chance to see exactly what data has been collected before they choose whether to upload it. Aggregated and sanitized data samples are released to the public under a Creative Commons license to encourage community participation in analysis. Over half a million users have submitted log data about how they use tabs, menus, toolbars, bookmarks, searches, and other browser features. Jono will talk about how Test Pilot works, share interesting and surprising things we've learned so far, and answer questions about our methodology.
Most everything built at Twitter is designed by watching how people use the service in new and interesting ways and incorporating those ideas. #NewTwitter, Twitter.com's biggest change in its four-year history, called for iterative research that allowed designers and developers to quickly experiment with wide-ranging changes. Trammell & Karina will share how existing patterns affected the current design of Twitter, what worked (and what didn't work) with #NewTwitter's early research methodology, and what they're doing to better understand how folks discover what's happening.
18th–19th November 2010