The value of a research library traditionally has been measured in terms of size
indicators. Size of collections, budget, expenditures, and staff, for example, are input
measures that describe the library’s effort and potential to meet user
needs. These input measures do not, however, assess how well user needs are met. The impact of
the library must in some way be measured in terms of the user’s interaction with the
library’s resources and its services. A critical judge of the impact is the user. One topic
the New Measures Group seeks to address is the user’s judgement as a component of
evaluating a library, focusing on user satisfaction and service quality assessments.
This background “white paper” will frame the topic through 1) clarifying definitions
of user satisfaction and service quality, 2) suggesting the need for this focus, 3)
summarizing existing practices for data gathering, 4) posing questions for further
research, and 5) suggesting next steps.
1. Definitions
1. Rachel Applegate, “Models of Satisfaction,” in Encyclopedia of Library and Information Science 60, supplement 23, edited by Allen Kent (New York: Marcel Dekker, 1997), 200.
2. Mary Jo Bitner and Amy R. Hubbert, “Encounter Satisfaction Versus Overall Satisfaction Versus Quality: The Customer’s Voice,” in Service Quality: New Directions in Theory and Practice, edited by Roland T. Rust and Richard L. Oliver (Thousand Oaks, CA: Sage, 1994), 76-77.
A related but distinct concept is service quality, developed by the marketing research team of Parasuraman, Berry, and Zeithaml. They define service quality in terms of reducing the gap between customers’ expectations for excellent service and their perceptions of the services delivered.3
Since it is personal to an individual’s experience with a specific encounter or series of
experiences, satisfaction may or may not be related to the performance of a library. One
user may be satisfied, while another is not, with the same library service. Service quality
aims to describe a global judgment or attitude, a distinction that Hernon and Altman also draw.4
3. A. Parasuraman, Leonard L. Berry, and Valarie A. Zeithaml, “A Conceptual Model of Service Quality and Its Implications for Future Research,” Journal of Marketing 49 (Fall 1985), 41-50.
4. Peter Hernon and Ellen Altman, Assessing Service Quality: Satisfying the Expectations of Library Customers (Chicago: American Library Association, 1998), 8.
2. The Need for This Focus
At least three types of indicators of the library’s impact have been identified: the
number of service interactions, user satisfaction, and service quality. ARL has included
counts of service interactions such as reference transactions, library presentations to
groups, circulation, and interlibrary loans in its ARL Statistics questionnaire since 1994-
95. From 1974 until 1994-95, interlibrary loans provided and received were included in
the ARL Statistics questionnaire and, in 1984, external circulation, reserve circulation,
total circulation, and reference transactions were added.
As recently as five years ago, 68 percent of the responding ARL libraries indicated
they had performed a user survey in the past five years and the majority reported their
impetus was to evaluate services.5 Respondents also indicated most frequently that what
was being measured in their surveys was “attitudes.”6 In more recent years, user
satisfaction measures have become an important aspect of library user surveys in ARL
Libraries. In 1996, for example, the University of California at San Diego conducted
user surveys in conjunction with their goal of having “90 percent of our primary users
(faculty, students, and staff) rate the libraries’ collections, services, programs, staff, and
facilities as either ‘outstanding’ or ‘excellent’ by 1998.”7 Despite these individual efforts,
however, there is not currently a coordinated effort underway to develop common user
satisfaction measures or compare user satisfaction results across ARL institutions.
Several individual libraries have conducted independent measures of user satisfaction
and characteristics of library use, but there are no systematic reporting mechanisms for
the results among research libraries. Only a few libraries are exploring methods to
assess service quality, distinguished as a gauge of users’ perceptions of the library’s
performance in relation to their expectations.
There is recognition that research libraries share the need and desire to incorporate
assessment methods to understand user satisfaction and its relation to service quality and
library value. What methods have been successfully used in research library settings? Is
there a conceptual framework for describing user satisfaction and its relation to service
management that applies to research libraries? Is there a compelling need to focus
attention on user satisfaction and is the need shared among research libraries? Can
satisfaction measures be comparatively ranked among research libraries? Are there
common contributors to foster user satisfaction? The pursuit of these and related
questions is being undertaken by the New Measures Group in the belief that there are
common needs for alternative assessment methods and that there is benefit from
developing them collectively. Even if there is no benefit in trying to rank levels of
satisfaction comparatively across libraries, there is obvious benefit in exploring how best
to determine user satisfaction, how it is solicited, and what affects it.
5. Elaine Brekke, User Surveys in ARL Libraries, SPEC Kit 205 (Washington, D.C.: Association of Research Libraries, 1994), 4.
6. Ibid., 7.
7. GraceAnne A. DeCandido, After the User Survey, What Then? SPEC Kit 226 (Washington, D.C.: Association of Research Libraries, 1997), 15.
3. Data Gathering Practices
There are many ways to judge satisfaction, though typically it is measured after the
experience has occurred, as the user’s recollection of the reaction to a specific service or
product. In library surveys, user satisfaction is often considered in response to a specific
service encounter or series of experiences. Sometimes, the respondents are asked to
characterize their satisfaction with the library based either on their projections from one
encounter or on their reflections on multiple experiences. Satisfaction sometimes is
examined in terms of user reactions to staff, library settings, or the information provided.8
A drawback of using satisfaction alone as a measure of library performance is that it
provides managers little insight into what contributes to dissatisfaction, or what problems
in the organization or services require improvement. Satisfaction surveys with a broader
focus therefore also gather information about the user’s behavior and the specifics of the
experience with the library. Coupled with indications of satisfaction, such information begins to
offer general directions to librarians seeking to improve library performance.
User satisfaction in academic libraries has typically been measured by selecting a
representative sample of library users and administering a survey instrument to the
sample population. Alternatively, when the population of library users has been small
(e.g., the faculty within a specific academic department), a census approach to surveying
the entire population has been employed. Surveys are usually administered by mail or
electronic mail, or distributed in randomly selected classes to undergraduate students.
Response rates typically vary considerably among user groups and institutions.
Data gathering is commonly based on the use of a five-point Likert scale, where a low
score represents low satisfaction and a high score represents high satisfaction. The
questions aimed at measuring user satisfaction typically address specific items related to
three major categories: library collections, library services and library
facilities/equipment (e.g., Northwestern University, 1991; Rice University, 1992;
University of Rochester, 1993).9 More recent user surveys conducted at ARL libraries
have also focused on these three categories but have added a question to
gauge a user’s overall satisfaction with the library (e.g., University of Virginia, 1993;
Ohio State University, 1996; University of Connecticut, 1996).10
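To make the typical tabulation concrete, the short Python sketch below averages five-point Likert responses within the three categories named above and for a single overall-satisfaction item. All response values are invented for illustration; real surveys would of course involve far larger samples.

```python
from statistics import mean

# Invented five-point Likert responses (1 = low, 5 = high satisfaction),
# grouped under the three categories most ARL surveys have used, plus a
# single overall-satisfaction item.
responses = {
    "collections": [4, 5, 3, 4],
    "services": [5, 5, 4, 4],
    "facilities/equipment": [3, 2, 4, 3],
}
overall = [4, 4, 3, 4]

# Mean satisfaction per category -- the figure usually reported.
category_means = {category: mean(scores) for category, scores in responses.items()}
print(category_means)
print(mean(overall))
```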
Another approach to measuring user satisfaction was introduced by Andaleeb and
Simmonds in 1998.11 Their study, a variation on the SERVQUAL model, tested a five-
factor model to explain user satisfaction at three academic libraries in Pennsylvania. The
five factors Andaleeb and Simmonds tested as influencing user satisfaction were: the
perceived quality of the library’s resources; the responsiveness of the library staff; the
perceived competence of the library staff; the demeanor of the library staff; and the perceived
overall physical appearance of the library facilities. Their study determined that
demeanor, competence, quality of resources and physical appearance (but not
8. Hernon and Altman, 182-187.
9. Brekke, 24-51.
10. Association of Research Libraries, User Surveys in Academic Libraries, April 11, 1997, Nashville, Tenn. (workshop notebook).
11. Syed Saad Andaleeb and Patience L. Simmonds, “Explaining User Satisfaction with Academic Libraries: Strategic Implications,” College and Research Libraries 59 (March 1998), 156-167.
responsiveness) had an effect on user satisfaction. They also determined that physical
appearance had a considerably lower impact than the other significant variables.
Assessment of service quality has been an active topic of research for over a decade,
led by the pioneering work of Parasuraman, Berry and Zeithaml. They identified five
universally important dimensions of service quality: reliability, assurance, tangibles,
empathy, and responsiveness. They developed the SERVQUAL instrument to measure
customer assessment of service quality.12
The SERVQUAL instrument is a questionnaire that is distributed following basic
mail survey principles. It consists of 22 pairs of statements. The first set of these
statements measures the library user’s expectations by asking each respondent to rate, on
a 7-point scale, how essential each item is for an excellent library. The second set of 22
statements measures the respondent’s perceptions of the level of service given. The
differences between the ratings for each statement are averaged to calculate the
SERVQUAL score, an indicator of the library service’s quality as perceived by its users.
In addition, the questionnaire includes a section in which participants are asked to
allocate 100 points among descriptions of the five dimensions to indicate how important
each is when they evaluate the quality of a library’s service. A set of overall and
comparative service quality questions and a set of demographic questions are included on
most adaptations of the SERVQUAL to library settings.
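The scoring arithmetic just described can be sketched in a few lines of Python. The paired ratings below are invented, and the statements are paraphrased illustrations rather than the actual SERVQUAL wording; only the computation (perception minus expectation, averaged across statements) follows the description of the instrument.

```python
from statistics import mean

# Each tuple pairs one respondent's (expectation, perception) ratings,
# both on the 7-point scale, for a single statement.
paired_ratings = [
    (7, 5),  # reliability: service performed at the promised time
    (6, 6),  # assurance: staff are consistently courteous
    (5, 4),  # tangibles: equipment is modern and well maintained
]

def servqual_score(pairs):
    """Mean of (perception - expectation) gaps; a negative score means
    perceived service falls short of expectations."""
    return mean(p - e for e, p in pairs)

print(servqual_score(paired_ratings))  # -1.0 for the sample data above
```

In a full administration, the same gap would be computed over all 22 statements, and the section allocating 100 points among the five dimensions could supply weights for a weighted average.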
Nitecki reviewed the published reports of research utilizing the SERVQUAL in eight
library settings, involving a total of over 1,240 library users. Response rates were
typically high: in most applications of the instrument, the majority of mailings were
returned. The instrument lends itself well to inexpensive transmission by electronic mail,
thereby lowering survey costs and delays. Asked whether the
instrument identified important expectations for quality service, the vast majority of
respondents affirmed that it did. From applications in a variety of service settings,
Parasuraman, Berry, and Zeithaml identified that reliability consistently ranks as most
important to the delivery of service quality and tangibles as least important. Adapting the
SERVQUAL instrument to at least eight library settings, researchers have consistently
reached similar conclusions. Other observations about expectations for quality service are
suggested in the research, but require further application to draw any conclusive patterns.
Shared exploration of the applicability of the SERVQUAL may foster understanding of
user satisfaction and service quality in research libraries.13
The SERVQUAL has also stimulated debate in the literature, among both library
and marketing researchers. Issues of the instrument’s reliability are periodically raised.
Some criticize the tool for assuming that assessment factors from a business environment
apply in an academic setting; others are concerned that individual libraries may not share
the same set of priorities for service and thus require different sets of factors to assess
quality. Hernon and Altman propose two alternatives: comparing user expectations
either to objective indicators of service quality or to library staff’s and management’s
perceptions of service performance. Hernon, working with Altman and Calvert on
12. Parasuraman, Berry, and Zeithaml (1985).
13. Danuta A. Nitecki, “Assessment of Service Quality in Academic Libraries: Focus on the Applicability of the SERVQUAL,” in Proceedings of the 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services (Newcastle upon Tyne, England: Department of Information and Library Management, University of Northumbria at Newcastle, 1998), 181-196.
different projects, developed a list of more than 100 statements about library user
expectations. He continues to explore areas to increase the list’s comprehensiveness.
The statements cover three general areas: resources, the organization, and service
delivery. He encourages staff to review the statements and identify those regarded as of
highest priority for meeting user expectations for excellent service. The most important 30
or so statements are then used in a user survey to rank their importance as indicators of
high-quality service.14 Hernon and Nitecki are exploring a combination of their two
approaches, in which the local library’s customization of statements and the SERVQUAL
gap measure will be merged.
4. Questions for Further Research
User satisfaction and service quality measures at ARL libraries to date might be
characterized as individual efforts to measure user attitudes and expectations at particular
institutions’ libraries. The primary issues at this juncture are whether a more
standardized approach to assessing user satisfaction and service quality can be developed
and, if so, whether such a standardized approach might yield comparable data that would
be useful to ARL libraries.
ARL’s role to date has been to collect member libraries’ experiences into SPEC kits
and conduct workshops that introduce libraries to user survey techniques. There may be
a role for ARL to play in recommending or designing standardized sampling plans,
survey instruments, and data analysis techniques similar to what is currently done for
ARL Statistics. Most user surveys at ARL Libraries, for example, have measured user
satisfaction with specific items under the headings of services, collections, and
facilities/equipment. Could a standard set of assessment variables be developed and then
offered for application at specific libraries? Would ARL support its membership and the
profession through facilitating dialog and collaborative, systematic exploration among
persons with appropriate expertise and interest in issues related to user satisfaction and
service quality? Can user satisfaction and user-based judgements of library service
quality contribute to our understanding of library impact or value?
ARL libraries have employed different sampling plans when conducting user surveys,
and the different methods may themselves influence the satisfaction levels measured and
the trends they suggest. While some libraries have treated faculty,
graduate students, and undergraduate students as three distinct samples, other libraries
have sub-sampled specific groups within these samples (e.g., underclassmen,
upperclassmen, biology faculty, business graduate students). Are there trends among
ARL libraries related to user satisfaction within these sub-groups? Are there libraries
that are particularly successful in achieving high levels of user satisfaction either overall
or in specific activities or with particular constituencies? If there are examples of
successful libraries, are there best practices that can be identified as directly correlated to
high levels of user satisfaction?
With regard to data analysis, individual libraries have sought to correlate satisfaction
with specific collections, services, or facilities/equipment with overall satisfaction. Other
studies have sought to measure the influence of particular variables, such as staff
competence or quality of resources, on overall user satisfaction. Are there insights to be gained by
14. Hernon and Altman, 101-116.
extending these analyses to a broader group of libraries to see if generalizations can be
made?
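One prerequisite for extending such analyses across libraries is agreement on the statistic used; a Pearson correlation between item-level and overall satisfaction ratings is a common choice. The sketch below, with invented per-respondent ratings, illustrates the computation.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-respondent ratings: satisfaction with services vs. overall.
services_ratings = [5, 4, 3, 5, 2]
overall_ratings = [5, 4, 3, 4, 2]
print(round(pearson(services_ratings, overall_ratings), 3))
```

For real data, a library would more likely rely on a statistical package, but the arithmetic, and therefore the comparability question, is the same.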
Finally, what roles do ARL libraries envision for themselves with regard to
coordinated user satisfaction and service quality efforts? Will ARL libraries provide
leadership for changing the way library performance and impact are measured? Would
libraries engage in studies and share results over time to produce longitudinal
comparative data? What value will be gained to justify the effort?
5. Next Steps
• What are libraries doing toward assessing user satisfaction and service quality?
• What are other service institutions doing that might be applicable to academic
libraries?
• What theoretical models and data-gathering instruments are either tested or
emerging that might direct thinking about user satisfaction and service quality in
libraries? Where do we turn for innovative thinking in these areas?
• What trends can be identified about what library users value most and what they
expect most from an excellent library?
• Do any patterns emerge that inform us on where to focus quality improvement
efforts?
• To what extent are conclusions drawn from localized user satisfaction studies
applicable to other libraries?
• What value exists in collaborations in this area?
• What support would be helpful to expand exploration in these areas?
The symposium should conclude with a set of commitments on what might be done
across ARL institutions, a mechanism for systematic communication and sharing of
insights in this field, and identification of possible topics for continued practice and
focused research.