Testing digital products in a world where good user experience is not just expected, but mandatory for success.

From my experience, helping clients understand the value of user testing has been one of the most challenging aspects of working as a UX designer. It takes time, money, and effort to put together groups of testers and have them walk through a series of clickable prototypes or an existing website.

The most difficult part of convincing a client to let me conduct user testing is that it requires gathering resources that may be beyond my own reach. With a freelance client, I can often only access their user base with their permission and their help. Sure, there are workarounds; you could recruit free testers through social media, for example. The effort pays off, though: usability testing often surfaces fundamental issues with a digital product.

Jakob Nielsen, a Danish usability expert I have referenced before, says it best:

“there is value in fine tuning the details in the user interface, but the impact on the user experience is not as great as the impact from the fundamental changes made early in the design.”

Nielsen also recognizes the benefits for clients who choose to employ user testing early and often:

“the benefits from early usability data are at least ten times bigger than the benefits from late usability data; it is 100 times cheaper to make a change before any code has been written than if the same change has to be made after the code has been completed.”

While some UX designers conduct usability testing even at early stages, with paper sketches or basic wireframes, in my experience the most valuable feedback comes from high-fidelity clickable prototypes. A prototype that is as close to the real thing as possible gives users something close to the real experience of interacting with a fully developed digital product. Testing at this stage lets users give feedback not just on broad concepts and overall functionality, but on specific features, visual elements, and emotional responses.

One argument I have heard against using high-fidelity clickable prototypes in usability testing is that much of the feedback will concern visual design rather than functionality. But shouldn't a well-designed application's visuals aid the user and contribute to the overall experience in a neutral or positive way? Comments on visual design can also point to deeper issues with the application, such as a button that is not only hard to find but also rendered in a color with too little contrast for some users to read.


For a typical round of usability testing, I break the work into three parts, one per day. Every session should begin with the user giving consent to be recorded.

Day 1
The tester conducts interviews (remote or in person) with three representative users, each lasting about 15 minutes.

Users are asked to think aloud as they click through tasks given by the tester. Each session is screen-recorded to help capture the data.

The tester then uses the feedback collected to map the current user journey: a series of steps that represents how users currently interact with the system and that helps identify areas for improvement.

An example of a journey map.

Day 2
New mockups will be created to reflect user feedback during Day 1.

A clickable prototype will be created with the new designs.

The task list used on Day 1 will be modified to reflect the new designs.

Designs will be shown to developers to verify technical feasibility.

The team will walk through the new task list and clickable prototype to prepare for the second round of user testing.

Day 3
The tester conducts another round of interviews (remote or in person) with three representative users, each lasting about 15 minutes, this time using the clickable prototype.

As on Day 1, users think aloud as they click through the tasks, and each session is screen-recorded.

A new user journey will be created from this new set of data to reflect the improved user experience of the application. If the user experience has not improved to a satisfactory degree, Days 2 and 3 will need to be repeated.

Preliminary Questions

Preliminary questions give the tester a chance to collect data on the user’s background as it pertains to the application. A tester may ask about familiarity with technology, occupation, and age.


Testers then give the user a series of tasks to complete, each with a short written description. As the user works through a task, the tester records three quantitative measures: the level of success (0 = not completed, 1 = completed with difficulty or help, 2 = easily completed), the time to complete the task, and the number of errors. Qualitative data should be captured at the same time, particularly notes on why the user succeeded or struggled: wrong pathways, confusing page layouts, navigation issues, or unclear terminology. Testers should let users continue with a task for as long as it is providing valuable information about the product's usability.
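These per-task measures are easy to capture in a simple structure so they can be compared across users and rounds. Here is a minimal sketch in Python; the field names and summary metrics are my own illustration, not part of any standard template.

```python
# Recording and summarizing per-task usability metrics for one session.
# success uses the 0/1/2 scale described above.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskResult:
    task: str
    success: int      # 0 = not completed, 1 = completed with help, 2 = easily completed
    seconds: float    # time to complete the task
    errors: int       # wrong clicks, dead ends, backtracking
    notes: str = ""   # qualitative observations

def summarize(results):
    """Aggregate one user's session into headline numbers."""
    return {
        # average success, normalized so 1.0 means every task was easy
        "completion_rate": mean(r.success / 2 for r in results),
        "avg_seconds": mean(r.seconds for r in results),
        "total_errors": sum(r.errors for r in results),
    }

# Hypothetical session data for illustration.
session = [
    TaskResult("Find pricing page", 2, 18.0, 0),
    TaskResult("Create an account", 1, 95.0, 3, "confused by CTA label"),
    TaskResult("Cancel subscription", 0, 140.0, 5, "gave up in settings"),
]
print(summarize(session))
```

Keeping the qualitative notes next to the numbers makes it easier to explain a low completion rate when reporting back to the client.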

System Usability Scale

A valuable post-interview exercise a tester may give the user is a System Usability Scale (SUS). The user responds to a series of statements about the product, ranking each on a scale of 1 to 5, where 1 means they strongly disagree with the statement and 5 means they strongly agree. Typical statements a tester may present are provided in the following sample worksheet.
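If you use the standard ten-item SUS, the responses can be collapsed into a single 0-100 score using John Brooke's original scoring rule: odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5. A small sketch:

```python
# Standard SUS scoring: ten 1-5 ratings in questionnaire order -> 0-100 score.
def sus_score(responses):
    """responses: list of ten ratings, each 1-5, in question order."""
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("ratings must be between 1 and 5")
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive set of responses.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5
```

A single score is handy for comparing Day 1 against Day 3, though it is no substitute for the qualitative notes behind it.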


Creating a script to be used by the tester for each session not only helps the test stay on track, but also provides a structure for each test so that each user is given a similar testing environment and experience. A sample script I have written is below.


The goal of usability testing is to understand how real users interact with your digital product and to make changes based on the results. The earlier in the process you identify usability problems, the earlier they can be fixed, cutting a lot of unwanted cost along the way.


This usability testing process was created by studying the processes of usability experts such as Steve Krug (notably his book Rocket Surgery Made Easy) and Jakob Nielsen (notably his series of articles for the Nielsen Norman Group), as well as the UX design handbook Design. Think. Make. Break. Repeat.
