Improving Information Architecture Using Tree Testing


How Good Is Your Information Architecture?

The information architecture of a site or app is crucial to creating a good user experience. A good test of the information architecture is to evaluate how easy it is for visitors to find what they are looking for. However, many websites and apps have never tested the findability of items in their navigation structure (often called taxonomy) with real users.

This is a concern: how do you know whether your navigation structure is intuitive and easy to use unless you test it? Tree testing, also known as reverse card sorting, is a technique that can significantly reduce problems with an important element of your information architecture: your navigation structure.

What is Tree Testing?


Tree testing evaluates the findability, labelling and organisation of topics on a website. Most websites are organised into a hierarchy (a “tree”) of topics and subtopics. Tree testing is a way of identifying how easy it is for users to find individual items in this hierarchy so that the information architecture can be improved.

However, unlike normal usability testing, tree testing is not carried out on the website itself. Instead, users browse a simplified text version of the site structure. This removes the effects of the design, including visual cues, navigational aids (e.g. the internal search box) and other factors that might influence how quickly visitors find what they are looking for.

How Does Tree Testing work?

There are 6 steps to complete a tree test:

  1. Users are given a find-it task to complete (e.g. “find a portable DVD player for less than £20”).
  2. Participants are shown a text list of the top-level topics of the website.
  3. Users select a heading, and then are given a list of the subtopics to choose from.
  4. Participants continue choosing topics in the tree, backtracking if necessary, until they find a topic that achieves their aim, or they abandon the process because they can’t find what they are looking for.
  5. Users then repeat the process a number of times with different find-it tasks to test the findability of a range of items in the tree hierarchy.
  6. Test results will then be analysed once a sufficient number of users have completed the test.
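The scoring behind these steps can be sketched in code. The following is a minimal, illustrative Python sketch (the tree contents, the task and the `score_task` helper are my own assumptions, not any tool’s API): it records the path of topics a participant chose and scores the task the way tree-testing tools typically do.

```python
# Illustrative sketch of scoring a single tree-test task.
# The tree contents, tasks and helper names are all made-up assumptions.

# A navigation tree as nested dicts; leaves are None.
TREE = {
    "Electronics": {
        "TV & Home Cinema": None,
        "Portable DVD Players": None,
    },
    "Home & Garden": {
        "Kitchen": None,
        "Lighting": None,
    },
}

def score_task(path, correct_path):
    """Score one find-it task from the sequence of topics a user visited.

    Tree-testing tools typically distinguish direct success (straight to
    the right topic), indirect success (right topic after backtracking)
    and failure (wrong topic, or the task was abandoned with an empty path).
    """
    if not path or path[-1] != correct_path[-1]:
        return "failure"
    return "direct success" if path == correct_path else "indirect success"

# Straight to the correct leaf:
print(score_task(["Electronics", "Portable DVD Players"],
                 ["Electronics", "Portable DVD Players"]))   # direct success

# Wandered into the wrong top-level topic first:
print(score_task(["Home & Garden", "Electronics", "Portable DVD Players"],
                 ["Electronics", "Portable DVD Players"]))   # indirect success
```

Aggregating these per-task outcomes across participants is what produces the success rates analysed in step 6.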

Example of welcome screen for remote tree testing – Source: Userzoom.com

When Should You Use Tree Testing?

If you want to identify the root cause of navigation problems, tree testing may be the best solution. It removes the effect of your website’s design and other navigational tools and aids from the equation. With no internal search to assist the user, tree testing helps to isolate navigational deficiencies so that you can make the necessary improvements to your taxonomy. Tree testing is often used for:

  • Identifying which items, groups or labels are causing problems for your users, and setting a benchmark of “findability” before you update your navigation. This might then lead you to conduct a card-sorting exercise to improve the usability of your taxonomy.
  • Measuring the impact of an improvement or change in the findability of items in your navigation structure. This will allow you to validate if the change you are making helps improve findability, makes no difference or actually creates a new problem.

Which Elements Should You Test?

For a small website with fewer than a hundred items you may be able to test your whole navigational structure. However, for large ecommerce websites with literally thousands of items, this is not practical or cost-effective. In this instance you should use your web analytics to identify less common paths that can be removed from the testing process.

To decide what to test, start by defining users’ goals and the top tasks they need to accomplish to meet those goals. This normally involves getting both users and stakeholders to rank the main tasks so that you can identify what both groups agree on. It also helps identify any low-priority tasks that internal stakeholders wrongly believe are important to users. It may be useful to include some items that cross departments, as these create their own issues for users, and items that have been identified as problematic from open card sorting or Voice of Customer research.

What Sample Size Do You Need?

As Steve Krug points out, “Testing one user is 100% better than testing none.” Whilst this is true, we have to bear in mind that with tree testing we may be dealing with a complex navigation structure. It is important to conduct a reasonably robust test if we are to draw any reliable conclusions. The key outcome metric should be whether the user successfully found the item they were asked to locate. This simplifies the analysis to a “Yes/No” metric.

I have outlined below the sample size required to achieve a confidence level of 95%, assuming that 50% of users find the item. I have assumed 50% because it generates the highest possible margin of error and so represents the worst-case scenario.

Sample size required for specific margin of error at 95% confidence level.
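These figures follow from the standard sample-size formula for a proportion, n = z²·p(1−p)/e², where z ≈ 1.96 at a 95% confidence level, p is the expected success rate and e is the margin of error. A minimal Python sketch, assuming the worst case p = 0.5:

```python
import math

def sample_size(margin_of_error, p=0.5, z=1.96):
    """Sample size needed for a proportion at a given margin of error.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the
    worst case, since p * (1 - p) is maximised at 0.5.
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

for e in (0.10, 0.05, 0.03):
    print(f"±{e:.0%} margin of error: {sample_size(e)} participants")
# ±10% margin of error: 97 participants
# ±5% margin of error: 385 participants
# ±3% margin of error: 1068 participants
```

As the output shows, halving the margin of error roughly quadruples the required sample, which is why a pragmatic margin of error is usually chosen for tree tests.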

Generally you should limit the number of tasks each participant completes to around 10, depending upon how long each task takes on average. Otherwise participants may become fatigued, and they will also become experienced users of your site structure, which could influence the test results.

Should You Ask Participants Questions?

After each tree-test task it is useful to ask participants to rate the difficulty of the task. This can provide a guide to how easy the item is to find. Keep questions to a minimum, but understanding how users perceive a task can add context to the test data. It can be useful, for instance, to compare task-completion data with survey answers to identify any items where user perception does not align with task completion. This could highlight areas of particular concern.
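One simple way to run that comparison is to flag tasks with low completion but low perceived difficulty. The sketch below is illustrative Python with made-up data; the task names, the 50% success threshold and the rating threshold are assumptions you would tune to your own study.

```python
# Illustrative comparison of task completion vs perceived difficulty.
# All data, task names and thresholds below are made-up assumptions.
# Ratings are 1-5, where 1 = very easy.
results = {
    "Find a portable DVD player": {"successes": [1, 1, 0, 1], "ratings": [2, 1, 2, 1]},
    "Find the returns policy":    {"successes": [0, 0, 1, 0], "ratings": [1, 2, 1, 1]},
}

def mismatched_tasks(results, min_success=0.5, max_rating=2.0):
    """Return tasks with low completion that users nevertheless rated easy.

    Low success combined with 'easy' ratings suggests users were
    confidently wrong - a particular concern for navigation labels.
    """
    flagged = []
    for task, data in results.items():
        success_rate = sum(data["successes"]) / len(data["successes"])
        mean_rating = sum(data["ratings"]) / len(data["ratings"])
        if success_rate < min_success and mean_rating <= max_rating:
            flagged.append(task)
    return flagged

print(mismatched_tasks(results))  # ['Find the returns policy']
```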

3 Remote Tree Testing Solutions:

Tree testing may not be one of the most well-known forms of usability research. But it certainly offers the potential to help organisations resolve problems with their navigation structure and improve the overall information architecture. If you want to investigate tree testing further you can check out these solutions:

1. Treejack from Optimal Workshop:

One of the leaders in web-based usability testing for information architecture. Treejack is a popular solution for evaluating website navigation without the normal visual distractions.

2. Usability Sciences:

Offers a web-based solution and will analyse the findings to determine the effectiveness of your site structure. They will provide specific recommendations on changes to your labels, structure and placement of content within your navigation hierarchy.

3. UserZoom:

Provides a web-based service to identify navigational issues early in the design process. UserZoom will analyse any attempts where participants have trouble navigating, so issues can be resolved before your site goes live. It will also give you a measure of how well users can find items in your hierarchy.

Does Usability Research Reflect Real Behaviour?


Does Usability Research Measure Reality?

Usability research is essential for checking whether a site or app is intuitive and easy to navigate, and so for creating a great customer experience. It helps inform our decisions about the choice architecture. Remote usability research solutions and face-to-face user interviews identify the main usability problems. But do these methods of research reflect real behaviour?

How many usability research proposals acknowledge that the process of undertaking usability research can influence the behaviour we observe? We may have taken users out of their natural environment and set them objectives that lead them to behave in a certain way.

Behavioural scientists have found that many of our decisions are made automatically by our unconscious brain. The context and our underlying psychological goals heavily influence the choices we make. We also behave differently when we are aware that we are being observed.

Asking respondents direct questions is especially problematic as people over-think issues. They switch to their slow, rational brain when encountering a mentally demanding task. Unfortunately most of the time when we are browsing a website we rely on our fast, intuitive, unconscious brain to make decisions without really engaging our conscious thought process. The implication here is that we cannot even access the rationale behind much of our behaviour when interacting with a website.

“People don’t have reliable insight into their mental processes, so there is no point asking them what they want.”

– Daniel Kahneman, Thinking, Fast and Slow

UserZoom prototype testing methods – Source: UserZoom.com

Context is important:

Avoid taking people away from their natural environment if at all possible. Certainly don’t use focus groups, as this is about as far away from normal browsing behaviour as you can get. How often do you search the web with a group of people you have never met and discuss your likes and dislikes of a site?

This is why remote user testing methods have an advantage over some face-to-face methods. Participants can be in their normal environment, with their normal distractions and so their behaviour is less likely to be influenced by the testing process. Don’t get me wrong, there will still be some bias as a result of the testing method. But it may be substantially less than techniques which take the user out of their normal browsing environment.

Observe and listen rather than ask:

You will get more meaningful insights from simply observing and listening to your users during a usability test, as past behaviour is a more reliable indicator of future behaviour. Try to avoid verbal interventions as much as possible. People don’t like to admit when they do something wrong, and you are likely to influence how they behave in any subsequent tasks. If you do want some verbal feedback, just ask your testers to say what they are doing as they go through the task.

But always keep in the back of your mind that usability testing is about informing your judgement, not proving or disproving someone’s opinions. It is also an iterative process that should begin early in the development of a design.

5-second test – Source: UsabilityHub.com

Implicit Research Methods:

Most of our daily choices are made by our fast, intuitive brain, which means we don’t take time to rationalise why we are making those decisions. New implicit research techniques such as functional MRI, EEG, biometrics, eye tracking, facial decoding and implicit reaction time studies (IRTs) are allowing marketers to access the subconscious part of the brain to better understand how we respond to communications and designs.

Eye-tracking research helps identify which specific elements of a page or message attract our attention, as well as the communication hierarchy of messages. Heatmaps allow us to display this data to reveal the proportion of visitors who noticed each of the key elements on a page, plus the frequency and duration of gaze on each element.

Click and mouse-movement heatmaps from visual analytics solutions such as Hotjar and Decibel Insights can provide similar insights for existing pages. For true eye-tracking research, though, solutions from Affectiva and Sticky allow you to evaluate both new and existing web page designs.

Clicktale heatmaps – Source: Clicktale.com

A/B Test Usability Testing Results:

In the final analysis, the only way you will know whether a change identified through usability research improves your agreed success metrics is to conduct an online experiment in the form of an A/B test. It is only when visitors are acting on their own impulses, and with their own money, that you will see how they really behave.

Prioritise the insights you get from usability testing to decide which are worthy of A/B testing. A/B testing will give you the evidence to show exactly how much difference your usability improvements have made to your conversion success metrics.
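Whether an A/B test shows a real difference can be checked with a standard two-proportion z-test on the conversion counts. This is a minimal Python sketch; the visitor and conversion numbers are hypothetical.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on conversion counts from an A/B test.

    Returns the z statistic; |z| > 1.96 indicates a statistically
    significant difference at the 95% confidence level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: 200/5000 conversions on the control,
# 260/5000 on the variation with the improved navigation labels.
z = two_proportion_z(200, 5000, 260, 5000)
print(f"z = {z:.2f}")  # z = 2.86, so significant at the 95% level
```

In this made-up example the uplift from 4.0% to 5.2% would be significant; with smaller samples the same uplift might not be, which is why sample size matters for A/B tests just as it does for tree tests.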