Usability testing is, quite simply, one of the most vital steps in product design. Observing a user’s behavior with an early prototype (or a live product, for that matter) is invaluable in the lean startup model of build-measure-learn. In a world filled with big data, we as designers sometimes become obsessed with “statistical significance,” insisting that a sample size must be large enough to uncover every possible point of failure. But sometimes patterns emerge right away. So why not respond?
Moderated Usability Testing
Recently we conducted a moderated usability test: a test in which a facilitator guides participants by asking questions and directing tasks. The goal was to validate that the primary task flow we had designed was easy for our target persona to execute. Our plan was to repeat the test with five individuals to be certain we had identified all of the pain points in our prototype before moving into visual design.
Working in real time with users allowed us to better understand their needs by talking to them directly rather than just taking our client’s word for it. More importantly, witnessing the exact points where they struggled revealed unarticulated opportunities to reduce friction. After completing just two tests we had learned enough that it was time to go back to the proverbial drawing board and change the UI to help users complete their tasks more quickly and with less frustration.
How We Conducted Our Usability Study
Digital Telepathy has found a sweet spot with tech companies like New Relic, Elasticsearch, and Vigor Systems. Our client in this case had built a SaaS application that allows network administrators to move software applications between clouds. So our first step was recruiting senior-level IT Managers familiar with provisioning servers and deploying apps to AWS and other cloud services. To do that, we tapped into our LinkedIn network to reconnect with people we had previously worked with; within a week we had five users lined up and ready to go!
Their task was to move an existing SaaS project management tool, built on Ruby on Rails, to the cloud. The participants were not located in San Diego, so we met via GoToMeeting. Video conferencing enabled us to monitor (and record) the participants’ screens and audio. This complete view provided a wealth of data for our project team to synthesize. After just two tests we called our client and shared the raw footage; without hesitation they approved postponing the remaining appointments so that we could iterate on the design before conducting any more usability tests.
Why Statistical Significance Didn’t Matter
The point of pursuing statistical significance is to confirm recognizable patterns in user behavior. In our case, the pattern was already unmistakable: the user flow we had designed didn’t work. Our first two tests failed in the same spot, and for the same reason. Perhaps we needed to move a button, or adjust a layout. We weren’t sure yet, but we knew there was no point in continuing without revisions; it would only lead to more failures in the same spot.
Instead, we postponed our remaining three tests, buying ourselves time to iterate on the design and, we hoped, resolve the issue at hand. This kept our schedule intact and ensured we didn’t waste time testing something that would ultimately fail. Postponing also meant we didn’t need to recruit new users to validate our revised hypothesis.
Don’t Waste Time
Time is a precious commodity; don’t waste it. The next time you conduct a usability test, put in the effort up front to ensure you’re testing the product with the target user. Don’t be afraid to revise your testing schedule if patterns become apparent early on.
Have you learned any lessons quickly from usability testing? Share them in the comments!