
Running in-house UX research

User experience (UX) or usability testing has a bad rap with in-house digital teams. It has a reputation for being hard and time-consuming, and therefore the domain of specialist agencies.

Running in-house research feels like it takes a lot of preparation, is challenging to run and requires a significant amount of post-session analysis. Unless your design team is a vast crack taskforce, it makes sense to outsource, right?

Wrong. Lean UX is the norm nowadays and gives digital teams the opportunity to own design and product development end to end. If you have the money to outsource, you could spend a large chunk of your budget on each study in exchange for some nicely formatted reports. But would that lead to fast improvements to your systems and lift customer satisfaction? Is it a sustainable model across large digital ecosystems where the customer touch points (and pain points) are many? Can you fund a nicely formatted report for every change your customers need?

Recently I got the chance to put our lean methods to the ultimate test. We discovered a significant flow problem in a customer recruitment system, and ideas for potential solutions came thick and fast during an emergency project meeting. The next morning I tested those ideas with customers in a lab environment, ultimately leading us to significant revisions that nobody had foreseen internally. What follows are some tips for running agile feedback and research as often as every week, even if you're a team of one.

Plan: Know your purpose

It is critical to have a clear research plan, stating exactly what it is you are trying to understand. At the heart of this could be a hypothesis about a problem or potential new feature. 

  • Create the test script – Write the script you will use to extract the information you need from customers. Don’t forget to include your standard experience measures (e.g. NPS and SUS; scoring is sketched after this list).
  • Prepare the experience – Create a simulation of the experience to be tested. The more realistic the prototype, the better the feedback you’ll get, but don’t overcook it and waste precious time. Tools like InVision are great for turning JPEG designs into clickable sequences.
  • Recruit participants – Get your recruiting going at least two days ahead of the testing, targeting the right customers. Five is the classic number needed for qualitative studies and you can run these back to back inside one day. 
  • Offer incentives – Small incentives, such as coffee vouchers and movie passes, are often all you need if you work in an environment where the customers are close at hand. Offer something proportional to the duration of the sessions being planned. I have run sessions as short as 15 minutes.
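
If you are automating your analysis, both measures are simple arithmetic. Here is a minimal Python sketch (the function names are mine; the formulas are the standard NPS and SUS definitions):

    def nps(ratings):
        """Net Promoter Score from 0-10 'how likely to recommend' ratings."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100 * (promoters - detractors) / len(ratings)

    def sus(responses):
        """System Usability Scale score from one user's ten 1-5 answers.

        Odd-numbered items are positively worded (response - 1); even-numbered
        items are negatively worded (5 - response). The summed contributions
        are scaled to the familiar 0-100 range.
        """
        total = sum((r - 1) if i % 2 == 0 else (5 - r)
                    for i, r in enumerate(responses))
        return total * 2.5

    print(nps([10, 9, 9, 7, 6]))  # 40.0 (60% promoters - 20% detractors)
    print(sus([3] * 10))          # 50.0 (all-neutral answers sit mid-scale)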

Get ready: Labs are simpler than you think

Technology has come a long way in the past five years. No longer do you need a custom-built lab space, complete with one-way viewing windows. If you have a decent-spec laptop, an external monitor and a quiet room, you’re good to go.

  • Filming the user – Use your laptop’s camera. Fire up something like Skype and place the image in the bottom right corner of the external monitor.
  • Recording the session – You may need to refer back to the sessions or want to create a short show reel of key findings. Use a tool like QuickTime to record the screen, capturing the interface, the user’s interactions, her face and her comments. QuickTime is built into every Mac and is a free download for PCs.
  • Broadcasting the session – Whether it’s for your observers or interested stakeholders, tools like Google Hangouts work a treat.

Get set: The team sport of observation and analysis

Avoid analysis paralysis by making this phase a team sport. Not cricket, though. Something more like table tennis. This effort needs to be focussed and timely. There’s no value in a nicely produced report that lands on desks weeks later.

  • The team – Invite one or two of your colleagues to help with note-taking and analysis. Including a stakeholder or product owner for the solution being tested is ideal. Build up those empathy hours.
  • The sport – This is about quiet solo observation followed by collaboration. Observers sit in a nearby room and take simple, clear notes of only the key observations – broken things, positive things, interesting quotes – posting them on a board (a grid with columns of users and rows of tasks, sketched below).
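
With five users and a handful of tasks, the observation board might look like this (the layout is the point; each cell holds the sticky notes for that user on that task):

              User 1    User 2    User 3    User 4    User 5
    Task 1    note(s)   note(s)   note(s)   note(s)   note(s)
    Task 2    note(s)   note(s)   note(s)   note(s)   note(s)
    Task 3    note(s)   note(s)   note(s)   note(s)   note(s)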

Go: Translating evidence into actions

Once the final session is complete, it’s important to close off the analysis as quickly as possible. Your observers will have made incremental progress on the observations wall during the day. Now you need to summarise the data and pull out key findings.

  • Cleanse data – Clean up each column of user data, removing duplicates and low-level feedback.
  • Pull out key findings – Work through each task as a team, looking for patterns. Discuss the insights that appear multiple times across the users. 
  • Summarise – Spend an hour or two writing a summary document that clearly points to the evidence but ultimately places the focus on actions. What needs to change (interface design, interaction design, content, IA)? What are the priorities? Share the recommendations widely and ensure that fixes make their way into your development backlog.

In the space of just a couple of days, this approach lets you move from hypotheses to actions, and it can be repeated weekly. It needs a leader/facilitator – someone to prepare and moderate the sessions – and two people to observe and take notes. The leader – typically someone from the design team – will need to devote about two days’ effort, and the observers a single day each. The end result is a tight set of clear recommendations for iterating the product being investigated, rather than a nicely formatted report that no-one has time to read. Having struck a blow for a better user experience, you can now move on to the next problem to solve.

