Jared talks about unmoderated tools and gives a few example stories that illustrate why you should be *really* careful with interpreting data from these kinds of methods.
There are hundreds of new UX research tools on the web. Are any of them useful, and what categories do they fall into? This is what Nate & Cyd break down.
by Beverly Freeman
Many research methods enable a "snapshot in time" view into the user experience, whether it's a usability test of the first-encounter experience or a post-transaction satisfaction survey. People's interactions with and attitudes toward products and services, however, change over time. Longitudinal research involves collecting data from the same group of people at multiple points in time. Increasingly, online technologies are making it possible to conduct longitudinal studies that are scalable for researchers and engaging for participants. This talk will touch on a few case studies and best practices for conducting online longitudinal studies.
Admit it. Your organization wants to build in Social. But how do you measure whether it's doing what you want it to do? You know that conducting usability tests can tell you where people get frustrated with interactions in the user interface. What will it tell you about frustrations with interactions with other people online? The truth is, being online has always meant "social." But scale makes a huge difference. And ensuring that the user is in control has never been as important as it is when personally identifying information is involved.
When interaction is fluid, ambient, and content-driven, how do you derive a task scenario? What are the success criteria? When you have large-scale social, user hacks turn into functionality and social norms. Etiquette evolves organically. What's that test look like?
Whether it's because of technological limitations or plain old inexperience, screwing up research data should be a mandatory part of every UX professional's career development. The more tools you use, the greater the variety of eff-ups. However, a lot of blunders can be quickly boiled down to a few practical morals. In 20 minutes, you'll hear about some pretty big mistakes I've made with a variety of tools, what I took away from those experiences, and how I turned those mistakes into wins for my team.
by Susan Dybbs
When designing complex systems for highly specialized users, traditional research methods may not do the best job of uncovering details of the user's mental model and related information. In this talk, I'll discuss how I used participatory design methods with surgeons to design an intra-operative telemedicine system. I'll highlight best practices and offer a healthy dose of blood, guts, and gore (rated PG-13).
by Nick Finck
Nick Finck, Director of User Experience at Blink Interactive, will talk about his process for field user research through user interviews, contextual inquiries, and moderated guerrilla usability testing. Along the way, the presentation will cover examples from his work on Fandango, First Tech Credit Union, University of Washington, and Oprah.com.
by Brian Krausz
A/B testing, interrupt studies, and similar tests are shifting from in-house software to more general, and increasingly powerful, online solutions. This talk will explore the surge in UX tools being offered as online services. Brian will discuss the advantages of this trend as more UX research tasks and tools are offloaded to external companies.
by Derek Pearcy
What users say will generally be different from what they do. This is true, but what's a good strategy when you can't get to enough of your users? What if you could answer some really big questions by performing simple research on ALL of your users? This is the same approach taken by companies like Google and Zynga to target the user research efforts that have made them what they are today. Log analysis, done well, can seem like mind-reading. If you haven't done it before: there's nothing to fear.
by Jono Xia
Test Pilot is a research program launched in 2009 by Mozilla Labs to learn about how people use Firefox. It's strictly anonymous, opt-in, and unlike most data-collection programs, it gives users the chance to see exactly what data has been collected before they choose whether to upload it. Aggregated and sanitized data samples are released to the public under a Creative Commons license to encourage community participation in analysis. Over half a million users have submitted log data about how they use tabs, menus, toolbars, bookmarks, searches, and other browser features. Jono will talk about how Test Pilot works, share interesting and surprising things we've learned so far, and answer questions about our methodology.
Almost everything built at Twitter is designed by watching how people use the service in new and interesting ways and incorporating those ideas. #NewTwitter, Twitter.com's biggest change in its four-year history, called for iterative research that allowed designers and developers to quickly experiment with wide-ranging changes. Trammell & Karina will share how existing patterns affected the current design of Twitter, what worked (and what didn't) with #NewTwitter's early research methodology, and what they're doing to better understand how folks discover what's happening.
18th–19th November 2010