Conducting Small-Scale Usability Studies | Field Reports

We knew there were problems with our library website at Fitchburg State University (FSU). Users either couldn’t find what they wanted or were unaware of the site’s existence. This was a particular problem given the limited number of librarians available to assist. While there was some consensus among librarians about these design problems, there was little agreement on how they could be addressed. We decided that usability testing was needed before making changes, but we didn’t have the budget to develop an expensive usability lab with one-way mirrors, sophisticated eye-movement testing devices, and the like. Still, with a little creativity, we were able to design a solid and reliable usability study with limited resources.

Usability testing does not require large user samples to surface useful information. Most kinds of research call for far more subjects, but with usability testing you can get by with a fraction of that number. Our goal was to identify any problem areas on our site, and as research by usability expert Jakob Nielsen has demonstrated, tests involving only five users can identify most of a site’s biggest obstacles. Any place any user has a problem provides valuable information, and even one test subject is better than none.
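
For readers who wonder where the five-user figure comes from, the sketch below works through Nielsen and Landauer’s published discovery model. The formula and the 31 percent per-user discovery rate are theirs, not numbers from our study, and the snippet is purely illustrative.

```python
# Nielsen and Landauer model the share of usability problems found by n test
# users as 1 - (1 - L)^n, where L is the probability that a single user
# uncovers any given problem. L = 0.31 is Nielsen's published average across
# studies, not a figure measured at FSU.

def problems_found(n_users, discovery_rate=0.31):
    """Expected fraction of usability problems uncovered by n test users."""
    return 1 - (1 - discovery_rate) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n} users: ~{problems_found(n):.0%} of problems found")
# With five users the model predicts roughly 84% of problems found,
# which is the basis for the familiar "test with five users" guideline.
```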

How did we do it?

The first step was identifying the areas that needed evaluation. These fell into two distinct groups: the resources librarians felt users needed to be able to find and the items students themselves wished to find. The librarians agreed that a questionnaire would be a good beginning. Questions were divided into sections covering database and catalog-based research, finding research help, and finding library policies and services.

There are two ways this type of survey can be implemented. One approach is to let users explore the site and fill out a form at their leisure. The advantage of this method is that users experience no anxiety over being “watched,” and their answers are not influenced by an awareness of being observed. However, this approach makes it harder to gather accurate data about how effective the site really is: while subjects may be more comfortable, they may treat the questionnaire as a test and believe they need to “get it right,” regardless of instructions, which skews the results.

The other approach is an observation model. The advantage here is that the tester can provide clear instructions to the subject, observe the subject’s actions in real time, and gain more accurate data about how users actually work with the site. However, the subject might be nervous with an observer watching over his or her shoulder. Despite this concern, we opted for this model.

To lessen the impact of being observed, instead of having observers stand behind subjects to watch them navigate the site, we seated each subject at a computer at one end of a table, attached a second monitor to that computer for the observers, and placed it at the other end of the table, facing the opposite direction. This way, the subject’s mouse movements could be followed with less interference. Subjects were assured that they were not being tested and that it was the website being evaluated. Each subject was asked to indicate when a question was completed, and if the observers saw that a subject was having clear difficulty, he or she was told to move on to the next question.

Card sorting

The above model worked well for discovering whether users could find the items librarians wished them to find. The only way to identify what users themselves wanted to find was an open-ended section at the end of the questionnaire that asked subjects to note areas of the website they liked or felt needed improvement. To better understand how users would approach a library website, we added a second stage to the study: a card sorting exercise.

Our existing website headings were written onto several sets of index cards, with no distinction made between major headings and subgroupings on the site. Users were then asked to organize the cards into piles reflecting their preferred organization and could choose the top-level headings themselves. We then collated the results, noting which items were chosen for the top level and which items were grouped together, so that patterns could be pinpointed.
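
If you repeat this exercise with more participants, the tallying step can also be done in a few lines of code. The sketch below is hypothetical: our study used paper index cards and hand tallying, and the card names shown are invented for illustration. It simply counts which cards were chosen as top-level headings and which pairs of cards were grouped together.

```python
# Hypothetical collation of card-sort results; card names are invented.
from collections import Counter
from itertools import combinations

# One dict per participant: the card chosen as a top-level heading maps to
# the cards the participant grouped beneath it.
sorts = [
    {"Databases": ["Find Articles", "Subject Guides"],
     "Services": ["Library Hours", "Interlibrary Loan"]},
    {"Research Help": ["Find Articles", "Subject Guides", "Databases"],
     "Services": ["Library Hours", "Interlibrary Loan"]},
]

top_level = Counter()   # how often each card was chosen as a top-level heading
paired = Counter()      # how often two cards landed in the same pile

for sort in sorts:
    top_level.update(sort.keys())
    for heading, cards in sort.items():
        pile = [heading] + cards
        paired.update(frozenset(pair) for pair in combinations(sorted(pile), 2))

print(top_level.most_common())   # candidate top-level headings
print(paired.most_common(5))     # card pairs most often grouped together
```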

Results

The testing process, while somewhat time-consuming, produced valuable results. There is a strong tendency to get stuck in “library think,” and what we learned from both stages of the test was striking. In short, it is far better to do some small-scale usability testing than none at all and be stuck with a website that librarians like but no one uses.

Jason C. Simon is a Technology & Serials Librarian at Fitchburg State University, MA, and a consultant providing database development and information management services with J. Simon Consulting (jsimonconsulting.com).
