ARCH UX Testing: Designing for Users
Building new platforms, systems, and applications can be a daunting task, especially as we consider the real impact our development and design choices have on how users think, feel, and interact with our nascent product.
“Usability is about people and how they understand and use things, not about technology.” — Steve Krug
Earlier this spring, we shared our roadmap for building a robust cloud-based interface to support web archival research at scale and enhance access to web archives. Months of development have resulted in our fully functional prototype: ARCH (Archive Research Compute Hub). As we move towards an official launch in Spring 2022, we are working with users to ensure it meets all of their web archive analysis needs!
How does ARCH Work?
ARCH allows users to delve into the rich data within web archival collections for further research. Users can generate and download over a dozen datasets within the interface, including domain frequency statistics, hyperlink network graphs, extracted full-text, and metadata about binary objects within a collection. ARCH also provides several in-browser visualizations that present a glimpse into collection content.


The design process for ARCH has involved a variety of interconnected stages, from sketching wireframes, to connecting back-end processes with the user interface design, to conducting a multi-stage user testing process that continually assesses user sentiment as functionality and interface improvements are made.
This user testing has been vital to understanding the user journey, intuitive workflows, and the varying expertise and research needs of those working with web archives at scale.
Stages of ARCH UX Testing
At its core, User Experience (UX) testing seeks to understand the impressions, experience, and feelings a user expresses while interacting with a product prototype. These insights are critical because they bring creators and developers into closer alignment with their end users.
Conducting UX testing for ARCH has allowed our team to understand research behaviours and the user journey while assessing what works well, what challenges arise, and which needs aren't being met.
Testing encompassed five main stages:
- Define Objectives. To scope our UX testing, we determined evaluation criteria, methods, and testing protocols for each stage. The purpose of testing was to engage selected Archives Unleashed / Archive-It users who would provide feedback on their initial ARCH impressions. Ultimately, these insights surfaced issues that needed to be addressed regarding usability, workflow, and functionality.
- Recruit. We then identified and engaged with Archives Unleashed users and Archive-It partners. This stage also meant thematically grouping users based on their relationship to the Archives Unleashed Project (e.g. Concept Design Interviewees, Advisory Board members, and “Power Users”). There was also a conscious effort to ensure recruitment reflected a diverse range of institutional categories, as set by Archive-It (e.g. University & Colleges, National Institutions, Public Libraries & Local Governments, etc.).
- Test. We primarily tested ARCH through remote surveys, which collected qualitative and quantitative data to gauge satisfaction with several key indicators: intuitiveness, ease of navigation, terminology, visualizations, processing time, and applicability to user research.
- Analyze. A variety of methods were used to analyze the qualitative and quantitative data collected. Descriptive statistics provided a user profile covering geographic and institutional representation, professional role, and comfort level with data analysis software and tools. A five-point Likert scale measured satisfaction with intuitiveness, navigation, terminology, visualizations, processing time, and applicability. For qualitative feedback, thematic coding was applied to comments in the areas of user interface design choices, workflow, language, documentation, outputs, feature requests, and errors encountered.
- Implement Findings. Results were shared with the team, and feedback was translated into GitHub tickets to provide action-based tasks for future development cycles. Implementing user suggestions also provides a base for future iterative UX testing rounds.
- Verify with Users. As a multi-stage UX testing process, each subsequent round of testing served as another opportunity to review and refine impressions of prior development work, improving our accuracy and capacity to match user needs at each stage.
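As a hypothetical illustration of the "Analyze" stage above, the sketch below averages five-point Likert responses per indicator and sorts them lowest-first to surface areas needing improvement. The indicator names come from the survey; the response values are invented for demonstration, not actual ARCH results.

```python
# Sketch of the Likert-scale analysis described in the "Analyze" stage.
# Indicator names are from the survey; the scores below are invented examples.
from statistics import mean

# 5-point Likert responses (1 = very dissatisfied ... 5 = very satisfied)
responses = {
    "intuitiveness":   [4, 5, 3, 4, 4],
    "navigation":      [5, 4, 4, 5, 3],
    "terminology":     [3, 4, 2, 4, 3],
    "visualizations":  [4, 4, 5, 4, 5],
    "processing time": [5, 5, 4, 5, 4],
    "application":     [4, 3, 4, 4, 5],
}

# Average satisfaction per indicator, printed lowest-first so the
# weakest areas (candidates for improvement) appear at the top.
averages = {indicator: mean(scores) for indicator, scores in responses.items()}
for indicator, score in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{indicator:15s} {score:.2f}")
```

A heat map over the same per-indicator scores, as described below, is a natural next step for spotting high and low scores at a glance.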
A Snapshot of ARCH UX Round 3 Results

The most recent round of UX testing was conducted throughout August and September 2021, connecting with past concept design interviewees, our project advisory board members, and selected Archives Unleashed and Archive-It “power” users. Participants shared their experience and feedback through a survey.
Tester Profiles
Profiling participants revealed users were primarily from North America and representative of two main institution categories: colleges/universities and national libraries. In addition, testers can be categorized into four main professional roles: researcher/professor, librarian/archivist, technologist, and managerial/leadership.
The survey asked testers to rate their comfort with data analysis software and tools, to help gauge their experience with the technical aspects of analyzing data. The majority of participants reported high confidence, while 25% of respondents described themselves as slightly comfortable, meaning they can use tools and software but need assistance.
Areas of Satisfaction
A second set of quantitative questions used a five-point Likert scale to measure satisfaction with intuitiveness, navigation, terminology, visualizations, processing time, and applicability.
Overall, testers reported a positive ARCH experience, noting the benefits of being able to access collections and conduct initial analysis. Combining a heat map visualization with averaged satisfaction scores made it easy to identify the highest and lowest scores and to diagnose areas for improvement. Statements that drew neutral or disagreement scores were echoed in the open comments.
Constructive Feedback
Open-ended survey comments offered testers a chance to express their thoughts, feelings, and experience of using ARCH in their own words.
Overall, users were impressed with the new interface, noting that the integration between Archives Unleashed and Archive-It provides a familiar and dedicated environment for working with web archives, and offers opportunities for new research use cases.
Testers also appreciated the quick processing speeds, the variety of dataset and output options, and that no technical set-up was required (e.g. running a Spark shell).
Analysis of 94 comments identified six themes that conveyed detailed suggestions for improvements and considerations. These areas included interface design choices, workflow, language, accessibility, documentation, and output usability. Participants also identified errors encountered.
The majority of suggestions related to improving the UI workflow and navigation, supplemented by recommendations around language, design choices, and visualization features to improve accessibility and usability.
Implementing UX Feedback
Our team uses GitHub to manage development and version control for the ARCH platform's source code, with tickets used to track improvements and the roadmap. As a result, 27 GitHub tickets were created from the open comments, providing actionable items and prompts for team discussion.
Lessons Learned
Carrying out UX testing has afforded an opportunity to learn more about our users and carry lessons learned forward into future testing cycles.
Here are some of our insights:
- People are genuinely excited to help out! When you build a community, you also build a system of support, encouragement, and connection. In reaching out to individuals who are familiar with our project — both by following progress and using our tools — we found there was general enthusiasm and responsiveness to UX testing!
- Keep it simple for the user. Understanding that time commitments can be a big ask, we purposely kept our survey short and simple. We were conscious to ensure UX testing didn't become a burdensome task, but rather an exciting opportunity for participants to directly impact platform development.
- Responses were higher for interviews than surveys. It was a (pleasant) surprise that participants were willing to schedule a longer call to discuss their thoughts and impressions. In comparison, the feedback form yielded a 41% response rate with very thoughtful input. We had anticipated that the asynchronous survey form would yield a higher response rate, but the human interaction of discussing development with members of the product team may have felt more approachable and carried a more personal, appealing touch.
- No matter how much you prepare, there will always be glitches. As they say, prepare for everything! During our UX testing, some unforeseen technical issues took our prototype offline. As a team, we triaged the issue: some members focused on the technical fix while others communicated with testers. Although all errors were remedied, the outage did disrupt our testing process. The experience reinforced that, no matter what issues arise, communication is critical!
Next Steps
Our final two rounds of UX testing will be conducted in early 2022. Participants will have a chance to interact with the latest version of ARCH, including the implementations from our most recent testing. These final rounds will also provide an opportunity to stress test ARCH, monitoring the back-end for any areas that become overwhelmed or processes that fail.
We look forward to sharing ARCH with the public in the Spring of 2022!