In this project we aimed to redesign the Information Architecture (IA) to improve navigation of the AbbeyCats Adoptions (ACA) website. We used interviews, card sorts, and tree tests to identify issues with the IA, and suggested changes to navigation, organization, and labelling systems. Ultimately we produced a redesigned IA schematic diagram and a medium fidelity prototype of a new site with these improvements in mind.
Within this project I was primarily responsible for the user research. I created the questions for our semi-structured interviews and usability test, ran our card sorts, refined the tasks for our tree tests, documented and consolidated results, and conducted analysis to create actionable design recommendations for our design team.
This project was completed in collaboration with A. Sim, C. Li, K. Dewan, S. Qiu, and Y. Zhang.
In our initial assessment of the ACA website we identified two major task flows: Cat adoption, and Volunteering. We also found that the site overall had a number of content organization related issues, such as menu labels being vague and unclear, content within pages being disorganized and text-heavy, and the task flow for adoption forcing users to jump back and forth between multiple pages to submit an application.
This evaluation helped us determine more specific goals for our project:
Our initial research aimed to identify how users currently interacted with the IA on the ACA website. We started with a usability test, asking users to follow a think-aloud protocol when walking through adoption, volunteering, and donation task flows. We documented first impressions, task timings, issues with task completion, and user feedback at the end of each task.
Results generally aligned with our initial evaluation; users struggled most with the following:
The image below shows the bottom portion of a cat's information popup, with no option to proceed to adoption:
After the usability test we conducted a follow-up interview, asking users for their overall impressions, any particular elements they liked and disliked, and delving deeper into their actions during the usability test. Overall users said the site was lacking, and that they would not use the site again.
The interview reinforced that users had the most trouble with submitting an application to adopt a cat. Users cited issues with page labelling and said that information for adoption was disjointed and scattered across multiple pages, suggesting that the information hierarchy was not intuitive and content was not consolidated effectively.
One particularly useful insight concerned the sidebar menu: in our initial evaluation our group believed it was redundant and limited in scope, but users found it simple and quick to use. While a sidebar that lists every page may not scale to a more complex site, it works well for ACA, which has only a handful of pages to display.
The image below shows the sidebar and top navigation menus:
In order to redesign the labelling system for ACA we conducted a hybrid card sort: users were given the existing labels and asked to sort them into the existing categories, with the option to write in new categories as they saw fit. The sorts were unmoderated to reduce outside bias, and run online for convenience.
The image below shows an example of a user completing the card sort:
Results of the card sort show how users think the content should be organized based only on the available labels and categories, and this varied wildly from the existing labelling system.
We used three main tools to analyze our results: a standardization grid to see how frequently each label was placed in each category, agreement labels to see the level of agreement between users within each category, and a similarity matrix to see how frequently each pair of labels was grouped together regardless of category.
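To make the similarity matrix concrete, here is a minimal sketch of how one can be computed from card-sort results. The participant sorts, label names, and numbers below are illustrative placeholders, not our actual data; the idea is simply to count, for each pair of labels, the fraction of participants who placed both in the same category.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical card-sort results: each participant's sort maps a
# category name to the list of labels they placed under it.
sorts = [
    {"Adopt a Cat": ["How to Adopt", "Adoption Application"],
     "Get Involved": ["Volunteering", "Donations"]},
    {"Adopt a Cat": ["How to Adopt", "Adoption Application", "Donations"],
     "Get Involved": ["Volunteering"]},
]

# Count how often each pair of labels lands in the same category.
pair_counts = defaultdict(int)
for sort in sorts:
    for labels in sort.values():
        for a, b in combinations(sorted(labels), 2):
            pair_counts[(a, b)] += 1

# Similarity = fraction of participants who grouped the pair together.
similarity = {pair: count / len(sorts) for pair, count in pair_counts.items()}

print(similarity[("Adoption Application", "How to Adopt")])  # 1.0
print(similarity[("Adoption Application", "Donations")])     # 0.5
```

Pairs with high similarity are candidates for grouping under one category; pairs that are currently grouped on the site but score low are candidates for ungrouping.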
The results of the card sort are too extensive to discuss in detail, but our analysis helped us identify what labels should be grouped together, what category each label should be put under, and helped pinpoint which labels or categories might need to be renamed.
The image below shows our similarity matrix, which highlights how frequently labels were grouped together and which labels should be grouped or ungrouped:
Taking results from our card sort and usability test, we made changes to three categories in our IA schematic.
First, we changed the depth of the IA structure. Rather than making every page available from the main page, we added another level under subheadings. For example, we placed "How to Adopt" and "Adoption Application" together under the "Adoption" label, which in turn sits under the "Adopt a Cat" category.
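The deepened structure for this branch can be sketched as a small tree; only the "Adopt a Cat" branch named above is shown, and sibling branches are omitted.

```python
# A sketch of the deepened IA for the "Adopt a Cat" branch described
# above. Only this branch is modelled; other categories are omitted.
ia = {
    "Adopt a Cat": {                     # level 1: category
        "Adoption": [                    # level 2: label/subheading
            "How to Adopt",              # level 3: pages
            "Adoption Application",
        ],
    },
}

def depth(node):
    """Count levels in the IA tree; a list of page names is one level."""
    if isinstance(node, dict):
        return 1 + max(depth(child) for child in node.values())
    return 1

print(depth(ia))  # 3
```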
Second, we changed the navigation categories and labelling. We started by renaming ambiguous labels like "Exotics" and "Bonded Pairs," and consolidated categories like "Cats" with other labels into "Adopt a Cat."
Lastly, we separated static and interactive pages. Existing interactive pages were very text heavy and difficult to parse; moving the extraneous information onto separate pages helps users more easily understand the purpose of each page.
The image below shows our redesigned IA schematic:
In order to test the validity of our redesigned IA structure we ran a tree test. We gave users the categories and labels of our IA and asked where they would go to complete certain tasks: 4 under the "Adoption" task flow and 2 under "Volunteering." The results show whether our labels and categories are intuitive, and what difficulties users face trying to complete tasks using the new IA.
The image below shows the tasks we asked users to complete for the Tree Test:
The results of our tree test were positive overall. For each task we measured the success rate, how directly users navigated to the correct label, the time taken, and how many users selected the correct first-level category on their first click.
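These metrics can be computed straightforwardly from per-participant click paths. The paths, task, and figures below are invented for illustration and are not our actual tree-test data.

```python
# Hypothetical tree-test logs for one "Volunteering" task: each tuple is
# the sequence of labels a participant clicked before giving an answer.
CORRECT = ("Get Involved", "Volunteering")
paths = [
    ("Get Involved", "Volunteering"),              # direct success
    ("About Us", "Get Involved", "Volunteering"),  # indirect success
    ("About Us", "Articles and Tips"),             # failure
]

# Success: ended on the correct label, by any route.
success = [p[-1] == CORRECT[-1] for p in paths]
# Directness: took the correct path with no backtracking or detours.
direct = [p == CORRECT for p in paths]
# First click: opened the correct top-level category first.
first_click_ok = [p[0] == CORRECT[0] for p in paths]

print(f"success rate:      {sum(success) / len(paths):.0%}")
print(f"directness:        {sum(direct) / len(paths):.0%}")
print(f"first-click right: {sum(first_click_ok) / len(paths):.0%}")
```

Note how the sketch mirrors our actual finding: participants who opened the correct category reached the answer, so first-click accuracy tracks the success rate closely.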
Both "Volunteering" tasks had about a 70% success rate, and the most important takeaway was that users who failed the task did not look into the "Get Involved" category at all. Conversely, every user who did check "Get Involved" navigated to the correct label, so the issue is that users never open the category in the first place. One potential fix is renaming the category to make it clearer that volunteering tasks live under it.
This issue may only exist in our testing environment, as the software we used did not show what labels had sublabels, and users were unable to see contents of a label without clicking into it. As long as we ensure that our final prototype keeps the sidebar with clear differentiation between categories and labels, the problem might already be solved.
Looking at the "Adoption" tasks, every task but Task 5 had a 100% success rate. Task 5 showed the same pattern as the "Volunteering" tasks: every user who clicked into the "Adoption" category successfully selected "How to Adopt," so the same solutions could apply here.
More importantly, when looking at the options users incorrectly selected, we realized after the test that the task was ambiguously worded. Some users may not feel ready to jump straight to adoption and may want other information first, prompting them to look at "About Us" or "Articles and Tips" to learn more about caring for a cat, or about the organization in general.
The image below shows the PieTree for Task 5, showing where users went on their journey to selecting their answer:
Since the results from our tree test were mostly successful, we made only minor changes to our IA schematic before creating our medium fidelity prototype. The largest of these was renaming the "AbbeyCats Home" category on level 2 to "About us," and renaming the "About us" label on level 3 to "What we do." These changes aimed to reduce ambiguity about what sat under the "AbbeyCats Home" category, with the goal of disincentivising users from defaulting to this option when trying to complete other tasks.
Along with the IA schematic changes, we also aimed to fix some of the content issues within pages. This included consolidating the pages for the different types of available cats (exotics, cats, kittens) and reducing excessive text on other pages.
The image below shows a before and after of the "Available cats" page. On the left is the existing page, and on the right is our medium fidelity prototype:
The biggest challenge of this project was managing time: setting up tests, running them, analyzing results, producing insights, and then applying those insights to our redesigned IA. Most of my past research experience was in a formal setting, and to adapt to these tight timelines I needed to embrace the faster pace of UX research. For example, I often created a research proposal, presented it to my peers, and set up the test to be run later the same day.
If I were to do this project again, I would lean further into repeated cycles of testing as we worked through our IA redesign. As it stands, we don't have numbers to truly verify that the new IA is better than the old one: we don't know whether it performs better in a card sort or a tree test, or what new problems might crop up in a usability test that we did not foresee, and that bugs me.
I think this project has given me a new appreciation for the rapid, iterative UX cycle: there's no such thing as perfect research producing perfect ideas, and we can only aim to improve on what's in front of us.
As a thank you for reading this, here's another picture of my cat!