Lessons Learned:  
A Primo Usability Study 

Kelsey Brett, Ashley Lierman, and Cherie Turner
 

INFORMATION TECHNOLOGY AND LIBRARIES | MARCH 2016   
  


ABSTRACT 

The University of Houston Libraries implemented Primo as the primary search option on the library 
website in May 2014. In May 2015, the Libraries released a redesigned interface to improve user 
experience with the tool. The Libraries took a user-centered approach to redesigning the Primo 
interface, conducting a “think-aloud” usability test to gather user feedback and identify needed 
improvements. This article describes the method and findings from the usability study, the changes 
that were made to the Primo interface as a result, and implications for discovery-system vendor 
relations and library instruction.  

INTRODUCTION 

Index-based discovery systems have become commonplace in academic libraries over the past 
several years, and academic libraries have invested a great deal of time and money into 
implementing them. Frequently, discovery platforms serve as the primary access point to library 
resources, and in some libraries they have even replaced traditional online public access catalogs. 
Because of the prominence of these systems in academic libraries and the important function that 
they serve, libraries have a vested interest in presenting users with a positive and seamless 
experience while using a discovery system to find and access library information. Libraries 
commonly conduct user testing on their discovery systems, make local customizations when 
possible, and sometimes even change products to present the most user-friendly experience 
possible.  

The University of Houston Libraries has adopted new discovery technologies as they have become 
available in an effort to provide simplified discovery and access to library resources. As a first 
step, the Libraries implemented Innovative Interfaces’ Encore, a federated search tool, in 2007. 
When index-based discovery systems became available, the Libraries saw them as a way to provide an 
improved and more intuitive search experience. In 2010, the Libraries implemented Serials Solutions’ 
Summon. After three years and a thorough process of evaluating priorities and investigating 
alternatives, the Libraries decided to move to Ex Libris’ Primo, which was implemented in May 2014. 

The Libraries’ intention was to continually assess and customize Primo to improve functionality 
and user experience. The Libraries conducted research and performed user testing, and in May 
2015 a redesigned Primo search results page was released. One of the activities that informed the 
Primo redesign was a “think-aloud” usability test that required users to complete a set of two 
tasks using Primo. This article will present the method and results of the testing as well as the 
customizations that were made to the discovery system as a result. It will also discuss some 
broader implications for library discovery and its effect on information literacy instruction.  

Kelsey Brett (krbrett@ua.edu) is Discovery Systems Librarian, Ashley Lierman 
(arlierman@uh.edu) is Instructional Design Librarian, and Cherie Turner (ckturner2@uh.edu) is 
Chemical Sciences Librarian, University of Houston Libraries, Houston, Texas. 

LESSONS LEARNED: A PRIMO USABILITY STUDY | BRETT, LIERMAN, AND TURNER 
doi: 10.6017/ital.v35i1.8965 

LITERATURE REVIEW 

There is a substantial body of literature discussing usability testing of discovery systems. In the 
interest of brevity, we will focus solely on studies and overviews involving Primo implementations, 
from which several patterns have emerged. 

Multiple studies have indicated that users’ responses to the system are generally positive; even in 
testing of very early versions by a development partner, users responded positively overall.1 
Interestingly, some studies found that in many cases users rated Primo positively in post-testing 
surveys even when their task completion rate in the testing had been low.2 Multiple studies also 
found evidence that, although users may struggle with Primo initially, the system is learnable over 
time. Comeaux found that the time it took users to use facets or locate resources decreased 
significantly with each task they performed,3 while other studies saw the use of facets per task 
increase for each user over the course of the testing.4 

User reactions to facets and other post-limiting functions in Primo were divided. In one of the 
earliest studies, Sadeh found that users responded positively to facets,5 and some authors found 
users came to use them heavily while searching,6 while others found that facets were generally 
underused.7 Multiple studies found that users tended to repeat their searches with slightly 
different terms rather than use post-limiting options.8 Thomsett-Scott and Reese, in a survey of 
the literature on discovery tools, reported evidence of a trend that users reacted more positively 
to post-limiting in earlier discovery studies,9 while the broader literature shows more negative 
reactions in more recent studies. This could indicate that shifts in the software, user expectations, 
or both may have decreased users’ interest in these options. 

A few specific types of usability problems seem common across tests of Primo and other discovery 
systems. Across a large number of studies, it has been found that users—especially undergraduate 
students—struggle to understand library and academic terminology used in discovery. Some 
terminology changes were made after users had difficulty in the earliest usability tests of Primo,10 
but users continued to struggle with terms like “hold” and “recall” in item records.11 Users also failed 
to understand the labels of limiters,12 and they also failed to recognize the internal names of 
repositories and collections.13 Literature reviews on discovery systems have found terminology to 
be a common stumbling block for searchers across a wide number of individual studies.14 

Similarly, users often struggle to understand the scope of options available to them when 
searching and the holdings information in item records. Users failed in multiple tests to 
distinguish between the article level and the journal level,15 could not interpret bibliographic 
information sufficiently to determine that they had found the desired item,16 and chose incorrect 
options for scoping their searches.17 Many studies found that users were unable to distinguish 
between multiple editions of a held item when all item types or editions were listed in the 
record.18 In other cases, users had difficulty interpreting locations and holdings information for 
physical items.19 

Among the needs and desires expressed by and for Primo users in the literature, two in particular 
stand out. First, many users expressed a desire for more advanced search options; some wanted 
more complexity in certain facets and the ability to search within results,20 while other users 
simply wanted an advanced search option to be available.21 Secondly, a large number of studies 
indicated that instruction on Primo or other discovery systems was needed for users to search 
effectively. In some cases this was the conclusion of the researchers conducting the study,22 while 
in other cases users themselves either suggested or requested instruction on the system.23 

It is also worth noting that it has been questioned whether usability testing as a whole is a 
sufficient mechanism for evaluating discovery-system functionality. Prommann and Zhang found 
that usability testing has focused almost exclusively on the technical functioning of the software 
and not adequately revealed the ability of discovery systems like Primo to successfully complete 
users’ desired tasks.24 They proposed hierarchical task analysis (HTA) as an alternative, to 
examine users’ most frequent desires and the capacity of discovery systems to meet them. 
Prommann and Zhang acknowledged, however, that as HTA is completed by an expert on the 
system rather than by an actual user, some of the valuable information derived from usability 
testing (including terms and functions that users do not understand, however well-designed) is 
lost in the process; they concluded that a combination of the two testing methods is ideal to 
retain the best of both. 

BACKGROUND 

At the University of Houston Libraries, the Resource Discovery Systems department (RDS) is 
responsible for the maintenance and development of Primo. However, it is important to RDS to 
gather feedback and foster buy-in from stakeholders in the Library before making changes to the 
system. To that end, RDS works with two committees to assess the system and make 
recommendations for its improvement. The Discovery Usability Group and the Discovery Advisory 
Group include members from public services, technical services, and systems; each member brings 
a unique perspective on discovery. The Discovery Usability Group is charged with assessing the 
discovery system through a variety of methods including usability testing, focus groups, and user 
interviews. The Discovery Advisory Group reviews results of user testing and makes 
recommendations for improvement. All changes to the discovery system are reviewed by the 
Groups before they are released for public use.  

In fall 2014, several months after the Primo implementation, the Discovery Usability Group 
conducted a focus group with student workers from the library’s information desk (a dual 
reference and circulation desk) to solicit feedback about the functionality of Primo and 
suggestions for its improvement. In the meantime, the Discovery Advisory Group was testing 
Primo and evaluating Primo sites at peer and aspirational institutions. The groups used the 
information collected through the focus group and research on Primo to make recommendations 
for improvement. RDS has access to a Primo development sandbox, and many of the 
recommended changes were made in the sandbox environment and reviewed by the two groups 
prior to public release.  

Changes to the search box can be seen in figure 1. Rarely used tabs were replaced with a drop-
down menu to the right of the search box to allow users to limit to “everything,” “books+,” or 
“digital library.” To increase visibility, links to “Advanced Search” and “Browse Search” were made 
larger and more spacing was added.  

Figure 1. Search Box in Live Site (Above) and Development Sandbox (Below) at Time of Testing 

Changes were also made to create a cleaner and less cluttered search results page (see figure 2). 
More white space was added, and the links (or tabs) to “View Online,” “Request,” “Details,” etc., 
were redesigned and renamed for clarity. For example, the “View Online” link was renamed to 
“Preview Online” because it opens a box within the search results page that displays the item. The 
groups believed “Preview Online” more accurately represents what the link does.  

Figure 2. Search Results in Live Site (Above) and Development Sandbox (Below) at Time of Testing 

The facets were also redesigned to look cleaner and larger to attract users’ attention (see figure 3). 

Figure 3. Facets in Live Site and Development Sandbox at Time of Testing 

Both groups were happy with the changes to the Primo development sandbox but wanted to test 
the effect of the changes on user search behavior before updating the live site. The Discovery 
Usability Group conducted a usability test within the development sandbox. The goal of the test 
was to find out if users could effectively complete common research tasks using Primo. With that 
goal in mind, the group developed a usability test and conducted it during the spring semester of 
2015.  

METHODOLOGY 

The Discovery Usability Group developed a usability test using a “think-aloud” methodology, 
where users were asked to verbalize their thought process as they completed research tasks 
through Primo. Four tasks were designed to mirror tasks that users are likely to complete for class 
assignments or for general research. To minimize the testing time, each participant completed two 
tasks, with the facilitators alternating between two sets of tasks from one participant to the next.  

Test 1 

Task 1: You are trying to find an article that was cited in a paper you read recently. You have the 
following citation:  

Clapp, E., & Edwards, L. (2013). Expanding our vision for the arts in education. Harvard 
Educational Review, 83(1), 5–14. 

Please find this article using OneSearch [the public-facing name given to the Libraries’ Primo 
implementation]. 

Task 2: You are doing a research project on the effects of video games on early childhood 
development. Find a peer-reviewed article on this topic, using OneSearch. 

Test 2 

Task 1: Recently your friend recommended the book The Lighthouse by P. D. James. Use 
OneSearch to find out if you can check this book out from the library. 

Task 2: You are writing a paper about the drug cartels’ influence on Mexico’s relationship with the 
United States. Find a newspaper article on this topic, using OneSearch.  

Two facilitators set up a table with a laptop in the front entrance of the library. They alternated 
between the facilitator and note-taker roles. Another group member took on the role of “caller” 
and recruited library patrons to participate in the study. The caller set up a table visible to those 
passing by with library-branded T-shirts and umbrellas to incentivize participation. The caller 
explained what would be expected of the potential participant and went over the informed-
consent document. After signing the form, the participant performed two tasks. After the test the 
participant received a library T-shirt or umbrella, and snacks.  

The facilitators used Morae Usability Software to record the screen and audio of each test. 
Participants were asked for permission to record their sessions, but could opt out. During the 
three-hour testing period, fifteen library patrons participated in the study, and fourteen sessions 
were recorded. Of the fifteen participants, thirteen were undergraduate students (four freshmen, 
one sophomore, seven juniors, and two seniors), one was a graduate student, and one was a post-
baccalaureate student. The majority of the participants were from the sciences, along with two 
students from the College of Business and two from the School of Communications. There were no 
participants from the humanities. 

The facilitators took notes on a rubric (see table 1) that simplified the processes of coding and 
reviewing the recordings. After the usability testing, the facilitators reviewed the notes and 
recordings, coded them for common themes and breakdowns, and prepared a report of their 
findings and design recommendations. The facilitators sent the report, along with audio and 
screen recordings, to the Discovery Advisory Group, who reviewed them along with RDS. The 
Discovery Advisory Group made additional design recommendations, and RDS used the 
information and recommendations to implement additional customizations to the Primo 
development sandbox. 

Preliminary Questions 

ASK: What is your affiliation with the University of Houston? Year? Major? 
ASK: How often do you use the library website? For what purpose(s)? 

Task 1 

Describe the steps the participant took to complete the task (S/U) 
ASK: How did you feel about this task? What was simple? What was difficult? 
ASK: Is there anything that would make completing this task easier? 

Task 2 

Describe the steps the participant took to complete the task (S/U) 
ASK: How did you feel about this task? What was simple? What was difficult? 
ASK: Is there anything that would make completing this task easier? 

Follow-up Question 

ASK: What can we do to improve the overall experience using OneSearch? 

Table 1. Task Completion Rubric for Test 1 

 



 

 


RESULTS 

Test 1, Task 1 

You are trying to find an article that was cited in a paper you read recently. You have the following 
citation:  

Clapp, E., & Edwards, L. (2013). Expanding our vision for the arts in education. Harvard 
Educational Review, 83(1), 5–14. 

Please find this article using OneSearch. 

Participant   Time on Task   Task Completion 
1             1m 54s         Y 
2             4m 13s         Y 
3             1m 26s         Y 
4             1m 17s         Y 
5             1m 26s         Y (required assistance) 
6             1m 43s         Y 
7             1m 27s         Y 
8             1m 5s          Y 

Table 2. Results for Test 1, Task 1 

All eight participants successfully completed this task, although sophistication and efficiency 
varied between participants. Some searched by the authors’ last names, which was not specific 
enough to return the item in question. Four participants attempted to use advanced search or the 
drop-down menu to the right of the search box to pre-filter their results. Two participants viewed 
the options in the drop-down menu, which were “everything,” “books+,” and “digital library,” and 
left it on the default “everything” search. When prompted, the participants explained that they 
were expecting the drop-down to contain title and/or author limiters. Similarly, participants 
expected an author limiter in the advanced search.  

The citation format seemed to confuse participants, and they tended to search for the piece of 
information that was listed first—the authors—rather than the most unique piece of 
information—the title. If the first search did not return the correct item in the first few results, the 
participant would modify their search by searching for a different element of the citation or 
adding another element of the citation to the initial search until the item they were looking for 
appeared as one of the first few results. Participant 5 thought they had successfully completed the 
task, but the facilitator had to point out that the item they chose did not meet the citation exactly, 
and on the second try they found the correct item. 

Participant 2 worked on the task for more than four minutes, significantly longer than the other 
seven participants. They immediately navigated to advanced search and filled out several fields in 
the advanced search form with the elements of the citation. If the search did not return their item, 
they added more elements until they finally found it. Simply searching the title in the citation 
would have returned the item as the first search result. Filling out the advanced search form with 
all of the information from the citation does not necessarily increase a user’s chances of finding 
the item in a discovery system, though it might do so when searching in an online catalog or 
subject database.  

The Discovery Advisory and Usability Groups made two recommendations to address some of the 
identified issues: include an author search option in the advanced search, and add an “articles+” 
option to the drop-down menu on the basic search. RDS implemented both recommendations. The 
Discovery Usability Group identified confusion around citations as a common breakdown during 
this task. The groups recommended providing instructional information about searching for 
known items to address this breakdown; however, RDS is still working on an effective method to 
provide this information in a simple and visible way.  

Test 1, Task 2 

You are doing a research project on the effects of video games on early childhood development. Find 
a peer-reviewed article on this topic, using OneSearch. 

Participant   Time on Task   Task Completion 
1             3m 44s         Y 
2             2m 21s         Y 
3             5m 23s         Y (required assistance) 
4             2m 5s          Y 
5             3m 32s         Y 
6             2m 45s         Y 
7             3m 8s          Y 
8             3m 1s          Y (required assistance) 

Table 3. Results for Test 1, Task 2 

All eight participants successfully found an article on this topic, but were less successful in 
determining whether the article was peer-reviewed. Only one participant used the “Peer-reviewed 
Journals” facet without being prompted. Three users noticed the “[Peer-reviewed Journal]” note in 
the record information for search results, and used it to determine if the article was peer-reviewed. 
One participant went to the full text of an article, and said it “seemed” like it was peer-reviewed 
and considered the task complete. The resource type facets were more heavily used during this 
task than the “Peer-reviewed Journals” facet, despite its being promoted to the top of the list of 
facets. Two participants used the “Articles” facet, and two participants used the “Reviews” facet, 
thinking it limited to peer-reviewed articles. Participants 3 and 8 needed help from the facilitator 
to determine whether a source was peer-reviewed. There was an overall misunderstanding of 
what peer-reviewed means, which affected participants’ confidence in completing the task. 

The design recommendations based on this task included changing the “Peer-reviewed Journals” 
facet to “Peer-reviewed Articles” or simply, “Peer-reviewed.” RDS changed the facet to “Peer-
reviewed Articles” to help alleviate confusion. Additionally, the groups recommended emphasizing 
the “[Peer-reviewed Journal]” designations within the search results and providing a method for 
limiting to peer-reviewed materials before conducting a search. Customization limitations of the 
system have prevented RDS from implementing these design recommendations yet. A way to 
address the breakdowns caused by misunderstanding terminology also has yet to be identified. It 
was disheartening that participants did not use the “Peer-reviewed Journals” facet despite its 
being purposefully emphasized on the search results page.  

Test 2, Task 1 

Recently your friend recommended the book The Lighthouse by P. D. James. Use OneSearch to find 
out if you can check this book out from the library. 

Participant   Time on Task   Task Completion 
1             1m 7s          Y 
2             56s            Y 
3             No recording   Y 
4             2m 21s         Y 
5             1m 8s          Y 
6             2m 14s         Y 
7             1m 15s         Y 

Table 4. Results for Test 2, Task 1 

All seven participants were able to find this book using Primo, but had difficulty in determining 
what to do once they found it. For this task every participant searched by title and found the book 
as the first search result. Four users limited to “books+” before searching using the drop-down 
menu, while the other three remained in the default “everything” search. Only one participant 
used the locations tab within the search results to determine availability; the others clicked the 
title and went to the item’s catalog record. All participants were able to determine that the book 
was available in the library, but there was an overall lack of understanding about how to use the 
information in the catalog to check out a book. Participant 1 said that they would write down the 
call number, take it to the information desk, and ask how to find it, which was the most 
sophisticated response of all seven participants. Participant 4 spent nearly two minutes clicking 
through links in the OPAC expecting to find a “Check Out” button and only stopped when the 
facilitator stepped in.  

A recommended design change based on this task was to have call numbers in Primo and the 
online catalog link to a stacks guide or map. This is a feature that may be developed in the future, 
but technical limitations prevented RDS from implementing it in time for the release of the 
redesigned search interface. Like the previous tasks, some of the breakdowns occurred because of 
a lack of understanding of library services. Users easily figured out that there was a copy of the 
book in the library, but had little sense of what to do next. None of the participants successfully 
located the stacks guide or the request feature that would put the item on hold for them. Steps 
should be taken to direct users to these features more effectively.  

Test 2, Task 2 

You are writing a paper about the drug cartels’ influence on Mexico’s relationship with the United 
States. Find a newspaper article on this topic, using OneSearch.  

Participant   Time on Task   Task Completion 
1             4m 45s         Y (required assistance) 
2             59s            Y 
3             No recording   N 
4             7m 47s         Y 
5             2m 52s         Y 
6             1m 33s         Y 
7             1m 30s         Y 

Table 5. Results for Test 2, Task 2 

This task was difficult for participants. Two users limited their search initially to “digital library” 
using the drop-down menu, thinking it would be a place to find newspaper articles; their searches 
returned zero results. Only two users used the “Newspaper Articles” facet without being prompted, 
and users did not seem to readily distinguish newspaper articles as a resource type. Participants 
did not notice the resource type icons without being prompted. Several participants needed to be 
reminded that the task was to find a newspaper article, and not any other type of article. With 
guidance, most participants were able to complete the task. Participant 4 remained on the task for 
almost eight minutes because of their dissatisfaction with the relevancy of the results to the 
prompt. Interestingly, they found the “Newspaper Articles” facet and reapplied it after each 
modified search, suggesting that they learned to use system features as they went.  

One of the recommendations based on this task was to remove “digital library” as an option in the 
drop-down menu on the basic search. It was evident that “digital library” did not have the same 
meaning to end users as it does to internal users. This recommendation was easily implemented. 
Another recommendation was to emphasize the resource type icons within the search results, but 
we have not determined a way to do so effectively. One suggestion from the Discovery Usability 
Group was to exclude newspaper articles from the search results as a default, but no consensus 
was reached on this issue.  

LIMITATIONS  

The Discovery Usability Group identified limitations to the usability test that should be noted. 
Testing was done in a high-traffic portion of the library’s lobby, which is used as study space by a 
broad range of students. Participants were recruited from this study space, and we chose not to 
screen participants. The fifteen participants in the study did not constitute a representative 
sample. Almost all participants were undergraduate students, and no humanities majors 
participated. The outcomes might have been different if our participants had included more 
experienced researchers or students from a broader range of disciplines. Adding screening 
questions or choosing a more neutral location, however, would have limited the number of 
participants we could recruit. 

Another limitation was that the participants started the usability test within the Primo interface. 
Because Primo is integrated into the Libraries’ website, users would typically begin searching the 
system from within the library homepage. The goals of the study required testing of our Primo 
development sandbox, which was not yet available to the public, and therefore could not be 
accessed in the same way. This gave participants some additional options from the initial search 
pages that are not usually available through the main search interface. While testing an active 
version of the interface would be preferable, one of our goals was to understand how our 
modifications affected user behavior, so testing the unmodified version was not an acceptable 
substitute. Additionally, the usability study presented tasks out of context and did not replicate a 
true user-searching experience. Despite the limitations, we learned valuable lessons from the 
participants in this study. 

DISCUSSION 

Users successfully completed the tasks in this usability study. Unfortunately, they did not take 
advantage of many of the features that can make such tasks easier—particularly facets. This was 
especially apparent when we asked users to find a peer-reviewed journal article (Test 1, Task 2). 
Primo has a facet that will limit a search to only peer-reviewed journal articles, and only one out of 
eight participants used this facet during this task. Participants appreciated the pre-search filtering 
options, and requested more of them (such as an author search), while post-search facets were 
underutilized.  

Similarly, participants almost uniformly ignored the links, or tabs, within the search results, which 
would provide users with more information, a preview of the full text, and additional features 
such as an email function. Users bypassed these options and clicked on the title instead. The 
Discovery Usability Group theorized that users clicked on the title of the item because that 
behavior would be successful in a more familiar search interface like Google. The team customized 
the configuration so that a title click would open either the full text of electronic items or the 
catalog record for physical items to accommodate users’ instinctive search behaviors. The tabs, 
though a prominent feature of the discovery system, have proved to have little value for users.  

Throughout the implementation of discovery systems in academic libraries, both research studies 
and anecdotal evidence have suggested that users do not find end-user features like facets 
valuable; however, discovery system vendors have made no apparent attempt to reimagine the 
possibilities for search refinements. Indeed, most of the findings in this study will present few 
surprises to anyone familiar with the discovery usability literature, which is itself concerning. As 
our literature review has shown, many of the same general usability issues have repeated 
throughout studies of Primo since 2008, and most are very similar to usability issues in other, 
competitor discovery systems. This raises some concerns about the pace of innovation in the 
discovery field, and whether discovery vendors are genuinely taking into account the research 
findings about the needs of our users as they refine their products. In a recent article, David 
Nelson and Linda Turney identified many issues with discovery facets in their current form that 
may be barriers to usage, particularly labeling and library jargon; we join them in urging vendors 
and libraries to collaborate more closely for deep analysis of actual facet usage by users, and to 
address those factors that have negatively affected facets’ value.25 

During our usability study, a common barrier to the successful completion of a task was not the 
technology itself but a lack of understanding of the task. Participants had difficulty deciphering a 
citation, which may have led to their tendency to search for a journal article by author and not by 
title. Many participants struggled with using call numbers and with finding and checking out 
books in the library. Peer review also proved to be a difficult or unfamiliar concept for many; when looking 
for peer-reviewed articles, some participants clicked on the “Reviews” facet, which limited their 
searches to an inappropriate resource type. Additionally, participants did not differentiate 
between journal articles and newspaper articles, which may indicate a broader inability to 
differentiate between scholarly and nonscholarly resources. This effect may be exacerbated by the 
high percentage of science students who participated, as these students may not have frequent 
need for newspaper articles. All of these challenges, however, are indicative of a deeper problem 
with terminology. Regardless of how simple it is to limit a search to peer-reviewed articles, a user 
who does not understand what peer review means cannot complete the task with confidence or 
certainty.  

Librarians struggle with presenting understandable language and avoiding library terminology; as 
we discovered, academic language, like “peer-reviewed” and “citation,” presents a similar problem. 
These are not issues that can be resolved with a technological solution. Rather, we join previous 
authors in suggesting that instruction may be a reasonable way to address many usability issues in 
Primo. From our findings and from those in the wider literature, we conclude that general 
instruction in information literacy is a prerequisite for effective use of this or any research tool, 
particularly for undergraduates. Nichols et al. “recommend studying how to effectively provide 
instruction on Primo searching and results interpretation,”26 but instruction on the use of a single 
tool is of limited utility to students in their academic lives. Instead, libraries could bolster 
information literacy instruction on key concepts around the production and storage of 
information, scholarly communications, and differences in information types. Teaching these 
concepts effectively should help to alleviate the most common user issues, including 
understanding terminology and different types of information, as well as helping students to 
understand key elements of research in general. This is a particularly important point to note for 
librarians working as advocates for information literacy instruction, especially in cases where 
administrators or faculty may feel that more advanced tools, like discovery systems, should make 
instruction obsolete. 

CONCLUSION 

Several changes were made to the Primo interface in response to breakdowns identified during 
the usability study. Resource Discovery Systems (RDS) first implemented the changes in the Primo 
development sandbox. After the Discovery Usability and Advisory Groups agreed on the changes, 
they were made available on the live site (see figures 4–6). The redesigned search results page 
became available to the general public between the spring and summer academic sessions of 2015. 
In addition to the changes made because of the usability study, RDS also updated the look and 
feel to make the search results interface more aesthetically pleasing and more in line with the 
University of Houston brand. 

LESSONS LEARNED: A PRIMO USABILITY STUDY | BRETT, LIERMAN, AND TURNER    
doi: 10.6017/ital.v35i1.8965  

Before (live site): 

 
Figure 4. Primo Interface before Usability Testing 

During (development sandbox): 

 
Figure 5. Primo Interface during Usability Testing 


After (live site):

 

Figure 6. Primo Interface after Usability Testing 

Many of this study’s larger assertions, encompassing implications both for instruction and for 
libraries’ needs from discovery vendors, will require further study to address. The authors intend 
to continue investigating these issues as additional usability testing is conducted and to use the 
data to support future discussions of vendor relations and instructional curriculum development. 

REFERENCES 

1. Tamar Sadeh, “User Experience in the Library: A Case Study,” New Library World 109, no. 1/2 
(2008): 7–24, doi:10.1108/03074800810845976. 

2. Aaron Nichols et al., “Kicking the Tires: A Usability Study of the Primo Discovery Tool,” Journal 
of Web Librarianship 8, no. 2 (2014): 172–95, doi:10.1080/19322909.2014.903133; Scott 
Hanrath and Miloche Kottman, “Use and Usability of a Discovery Tool in an Academic Library,” 
Journal of Web Librarianship 9, no. 1 (2015): 1–21, doi:10.1080/19322909.2014.983259. 

3. David J. Comeaux, “Usability Testing of a Web-Scale Discovery System at an Academic Library,” 
College & Undergraduate Libraries 19, no. 2–4 (2012): 189–206, 
doi:10.1080/10691316.2012.695671. 

4. Kylie Jarrett, “FindIt@ Flinders: User Experiences of the Primo Discovery Search Solution,” 
Australian Academic & Research Libraries 43, no. 4 (2012): 278–300; Nichols et al., “Kicking 
the Tires.” 

5. Sadeh, “User Experience in the Library.” 



 
 


6. Jarrett, “FindIt@ Flinders”; Nichols et al., “Kicking the Tires.” 

7. Xi Niu, Tao Zhang, and Hsin-liang Chen, “Study of User Search Activities with Two Discovery 
Tools at an Academic Library,” International Journal of Human-Computer Interaction 30, no. 5 
(2014), doi:10.1080/10447318.2013.873281; Hanrath and Kottman, “Use and Usability of a 
Discovery Tool in an Academic Library.” 

8. Rice Majors, “Comparative User Experiences of Next-Generation Catalogue Interfaces,” Library 
Trends 61, no. 1 (2012): 186–207, doi:10.1353/lib.2012.0029; Niu, Zhang, and Chen, “Study of 
User Search Activities with Two Discovery Tools at an Academic Library.” 

9. Beth Thomsett-Scott and Patricia E. Reese, “Academic Libraries and Discovery Tools: A Survey 
of the Literature,” College & Undergraduate Libraries 19, no. 2–4 (2012): 123–43, 
doi:10.1080/10691316.2012.697009. 

10. Sadeh, “User Experience in the Library.” 

11. Comeaux, “Usability Testing of a Web-Scale Discovery System at an Academic Library.” 

12. Jessica Mahoney and Susan Leach-Murray, “Implementation of a Discovery Layer: The 
Franklin College Experience,” College & Undergraduate Libraries 19, no. 2–4 (2012): 327–43, 
doi:10.1080/10691316.2012.693435. 

13. Joy Marie Perrin et al., “Usability Testing for Greater Impact: A Primo Case Study,” Information 
Technology & Libraries 33, no. 4 (2014): 57–67. 

14. Majors, “Comparative User Experiences of Next-Generation Catalogue Interfaces”; Thomsett-
Scott and Reese, “Academic Libraries and Discovery Tools.” 

15. Jarrett, “FindIt@ Flinders”; Mahoney and Leach-Murray, “Implementation of a Discovery 
Layer.” 

16. Jarrett, “FindIt@ Flinders”; Mahoney and Leach-Murray, “Implementation of a Discovery 
Layer”; Nichols et al., “Kicking the Tires.” 

17. Jarrett, “FindIt@ Flinders”; Mahoney and Leach-Murray, “Implementation of a Discovery 
Layer”; Perrin et al., “Usability Testing for Greater Impact.” 

18. Jarrett, “FindIt@ Flinders”; Nichols et al., “Kicking the Tires”; Hanrath and Kottman, “Use and 
Usability of a Discovery Tool in an Academic Library”; Majors, “Comparative User Experiences 
of Next-Generation Catalogue Interfaces.” 

19. Comeaux, “Usability Testing of a Web-Scale Discovery System at an Academic Library”; 
Thomsett-Scott and Reese, “Academic Libraries and Discovery Tools.” 

20. Jarrett, “FindIt@ Flinders.” 

21. Mahoney and Leach-Murray, “Implementation of a Discovery Layer”; Perrin et al., “Usability 
Testing for Greater Impact.” 



 

 


22. Mahoney and Leach-Murray, “Implementation of a Discovery Layer”; Nichols et al., “Kicking 
the Tires”; Niu, Zhang, and Chen, “Study of User Search Activities with Two Discovery Tools at 
an Academic Library.” 

23. Thomsett-Scott and Reese, “Academic Libraries and Discovery Tools.” 

24. Tao Zhang and Merlen Prommann, “Applying Hierarchical Task Analysis Method to Discovery 
Layer Evaluation,” Information Technology & Libraries 34, no. 1 (2015): 77–105, 
doi:10.6017/ital.v34i1.5600.  

25. David Nelson and Linda Turney, “What’s in a Word? Rethinking Facet Headings in a Discovery 
Service,” Information Technology & Libraries 34, no. 2 (2015): 76–91, 
doi:10.6017/ital.v34i2.5629. 

26. Nichols et al., “Kicking the Tires,” 184. 
