Defining and Measuring the Library’s 
Impact on Campuswide Outcomes 

Bonnie Gratch Lindauer 

Accreditation agencies, higher education institutions, and professional 
organizations all emphasize the importance of measuring and assess­
ing the impacts or effects of teaching, learning, and other valued institu­
tional activities. Academic libraries, one of the key players in providing 
and structuring instructional resources and services, also are expected 
to document how their performance contributes to institutional goals and 
outcomes. Using accreditation and ACRL sectional standards/criteria, 
higher education outcomes assessment research findings and recent 
findings from performance effectiveness studies, this article identifies 
important institutional outcomes to which academic libraries contribute; 
describes specific performance indicators whose measures of impacts 
and outputs provide evidence about progress and achievement; and 
offers a conceptual framework of assessment domains for the teach­
ing–learning library. 

An increasingly important concern for academic librarians is how to document and measure the ways that the library, learning resources, and computer services
units make a real difference in the aca­
demic quality of life for students and fac­
ulty. This concern was expressed clearly 
by Sarah M. Pritchard: 

The future vitality of libraries in 
academia will be dependent on 
whether they can dynamically and 
continually prove their value to the 
overall educational endeavor. This 
value must be documented at a level 
that transcends specific formats of 
information, locations of collections 
and location of users, and that 
clearly links the investment in campuswide information resources
to the effectiveness of particular dis­
ciplinary programs.1 

Generally, academic librarians face two 
problems when trying to describe the 
impact of their services and resources on 
desired institutional outcomes and goals. 
First, they are not sufficiently strategic or 
externally focused when determining 
which measures to use as evidence of how 
the library affects educational outcomes. 
Second, they often do not organize their 
data and other supporting documenta­
tion in ways that are accessible or mean­
ingful to academic administrators and 
accreditation teams, nor do they use lan­
guage that reflects what is used in 
campuswide planning documents. Typi­
cally, all sorts of data are presented in 

Bonnie Gratch Lindauer is Reference and Instruction Librarian at City College of San Francisco; e-mail: 
bgratch@ccsf.cc.ca.us. The author acknowledges the California Academic and Research Librarians Asso­
ciation for supporting this project with a research award. 

annual reports and program reviews, but 
they do not explicitly address how the 
library’s resources and services make a 
qualitative difference to student learning, 
staff development, faculty scholarly ac­
tivity, and other campuswide goals. 

Motivated by the desire to improve the 
measurement and documentation of the 
impact of academic libraries’ services and
programs on institutional outcomes, the 
author seeks to (1) identify key institu­
tional outcomes to which academic librar­
ies contribute; (2) specify library perfor­
mance indicators whose measures can 
provide a culture of evidence to document 
progress and contributions toward the 
realization of campuswide outcomes and 
goals; (3) offer a framework of assessment 
categories that emphasizes the teaching– 
learning role; and (4) build on the library 
effectiveness/quality knowledge base by 
pinpointing useful publications for mea­
suring inputs and outputs. To accomplish 
these purposes, this article: 

• summarizes some of the character­
istics of the library effectiveness/quality 
literature; 

• summarizes findings derived from
reviews and analyses of a variety of au­
thoritative sources that identify contem­
porary and emerging campuswide per­
formance expectations for academic li­
braries; 

• presents a framework of assess­
ment categories that reflects a primary 
teaching–learning role; 

• identifies specific key institutional
outcomes and outputs, along with corre­
sponding performance indicators, that 
academic libraries can use to describe and 
assess their impact; 

• offers some practical concluding
comments to assist librarians who want 
to connect their programs and services to 
broader campus educational goals and 
desired outcomes for self-studies, pro­
gram reviews, and other assessment ac­
tivities. 

Thus, this article’s contribution is 
threefold. First, taking a campuswide 
perspective, it advocates that the assess­
ment of library performance should be 

defined and shaped by its connections 
and contributions to institutional goals 
and desired educational outcomes. Sec­
ond, it proposes that assessment efforts 
and results be focused on the primary role 
of the teaching–learning library. And 
third, it identifies specific performance 
indicators for measuring and document­
ing the library’s impact on key 
campuswide outcomes. The article does 
not include assessment issues related to 
making institutional comparisons be­
cause the author agrees with a recent 
Middle States Association of Colleges and 
Schools’ publication that states that ev­
ery institution must be considered within 
its own setting and not by comparison 
with general patterns or norms.2 Because 
this article’s perspective is organization-
level assessment, it excludes library 
suborganizational approaches. Also ex­
cluded are criteria and indicators dealing with cost, cost-efficiency, and cost-effectiveness. Readers are referred to Charles R. McClure and Cynthia L. Lopata for network efficiency indicators and to Paul B. Kantor and Frederick W. Lancaster for guidance and methods for evaluating traditional library suborganizational services
and resources.3 

Literature Review 
A great deal of literature exists on the 
models, performance criteria, measures, 
methods, and results of evaluation stud­
ies related to academic library effective­
ness, efficiency, and quality of perfor­
mance. In summary, the literature is 
devoted to two major concerns that are 
often combined in publications. The first 
concern centers on efforts to describe the 
determinants of effectiveness or quality— 
that is, what is conceptually meant by 
quality or effectiveness so that it can be 
operationalized into performance criteria 
or other types of criteria to use for mea­
surement purposes. The second concern 
focuses on the numerous attempts to de­
scribe specific measures and methods of 
collecting data. Indeed, there are several 
good publications that offer field-tested 
measures and data-gathering techniques
to provide guidance in all aspects of mea­
suring and evaluating inputs, processes, 
and outputs.4 However, almost none of 
these publications provides measures or 
methods for assessing the impact of aca­
demic libraries on campuswide educa­
tional outcomes. Overwhelmingly, the lit­
erature is internally focused, looking at 
the academic library as an overall orga­
nization or at one or more of its compo-
nents or services. Except for the literature 
that looks at evaluating libraries as part 
of accreditation or as part of planning ef­
forts, most of the literature on academic 
library effectiveness or quality does not 
take a campuswide view in relating li­
brary programs to campus outcomes. In­
stead, it has been concerned primarily 
with measuring and evaluating the quan­
tity, effectiveness/quality, and efficiency 
of traditional academic library inputs 
(staff, budget, collections, facilities), pro­
cesses (collection development, catalog­
ing, management practices), and outputs 
(reference service, OPAC use, ILL/docu­
ment delivery service). Most of this lit­
erature is well reviewed in publications 
by Rosemary DuMont and Paul F. 
DuMont, Deborah L. Goodall, Joseph A. 
McDonald and Linda B. Micikas, Sarah 
M. Pritchard, and Nancy A. Van House.5 

Pritchard’s review is particularly recommended for its coverage of fundamental concepts, its focus on assessment in higher education as a whole, and its discussion of ways that determinants of library quality should be linked to educational outcomes.

There is no shortage of writers who have decried the redundancy of this forty-year-plus litera­
ture, particularly the lack of objective 

ways to measure and incorporate library 
value into processes such as academic ac­
creditation and educational assessment. 
The common observation made in numerous publications is that what are most needed are performance indicators that demonstrate the academic library’s impact on desired educational outcomes, along with methods to measure them.

However, some notable exceptions 
have looked at the academic library’s con­
nection to institutional outcomes such as 
student academic performance and fac­
ulty productivity. Ronald R. Powell’s 
summary of these works includes the 
impact studies of several earlier research­
ers.6 The types of impacts discussed in 
these works are measures of academic li­
brary use and library skills instruction 
correlated to lower attrition rates, higher 
grades, higher GRE scores, student per­
sistence, and savings in faculty time. 
Powell provides a list of several perfor­
mance indicators of impact derived from 
his literature review (test scores, course 
evaluations, course grades, quality of pa­
pers) and recommends user panels for 
data collection because “they share some 
of the strengths of focus group interviews 
but go beyond them by being more lon­
gitudinal and comprehensive.”7 

More recent is the major influence of 
the outcomes assessment movement. Fu­
eled by state legislatures, this movement 
requires higher education institutions to 
provide evidence for what students have 
learned and how much, sometimes along 
with the costs of doing so. The primary 
change that outcomes assessment has 
caused, it seems, is to place responsibil­
ity on all institutional units for provid­
ing evidence of their contributions to de­
sired educational outcomes and to incor­
porate outcomes assessment into organi­
zational planning and improvement. 
Ralph A. Wolff, executive director of the 
Senior Colleges and Universities Com­
mission of the Western Association of 
Schools and Colleges (WASC), calls for a 
“culture of evidence” in his writings de­
scribing a stronger instructional role for 
libraries.8 He stressed that assessment
must reflect the library’s relationship to 
the teaching and learning functions of the 
institution. He also has provided much 
useful guidance about improving library 
accreditation self-studies and has sug­
gested needed measures that demonstrate 
library impact, such as usage data orga­
nized by academic programs; the role of 
the library in curricular development; 
evaluation of what students learn from 
bibliographic instruction programs; and 
the relationship of the library to campus 
information systems development.9 Thus, 
both the library effectiveness/quality lit­
erature and the higher education litera­
ture reflect the need for assessing out­
comes. Although academic libraries con­
tribute to various institutional outcomes, 
it is the impact of their instructional pro­
gram that has been typically connected 
to student learning outcomes.10 

Teaching library and information lit­
eracy skills is viewed as directly affect­
ing student outcomes because these skills 
support such general/liberal education 
outcomes as critical thinking, computer 
literacy, problem-solving, and lifelong 
learning. In fact, although the teaching– 
learning role is not a new one for aca­
demic librarians, it has taken on a re­
newed importance, in part because of the 
effects of information technology on 
higher education and the leadership at 
national, state, and local levels of the in­
formation literacy movement. Indeed, it 
seems that the common denominator in 
the many publications describing the new 
and/or reshaped roles of the academic 
library is that of the teaching–learning li­
brary, defined by Carla J. Stoffle and 
Karen Williams as follows: 

. . . it focuses on teaching as both a 
direct activity and a support activ­
ity for other disciplines; creates new 
knowledge packages and access 
tools; provides a physical environ­
ment that facilitates student and fac­
ulty research and collaboration; and 
provides access to resources that are 
the necessary underpinnings of the 
new learning environment.11 

The teaching–learning role of academic 
libraries is well established, as are the 
expectations of accreditation agencies that 
libraries connect their evaluation of col­
lections, resources, and services to edu­
cational outcomes. Although there are 
some useful suggestions from accredita­
tion, outcomes assessment, and academic 
library effectiveness publications on how 
to assess the impact of libraries, what is 
lacking is the identification of a more com­
prehensive set of performance indicators 
linked to valued higher education out­
comes. However, before presenting the 
findings from the content analyses and 
reviews, it is necessary to define some ter­
minology. 

Review of Terminology 
What is the difference between perfor­
mance measures and indicators? What is 
meant by “valued institutional out­
comes”? Is the evaluation of library effec­
tiveness the same as library quality or 
performance? Although some writers 
define performance measures more nar­
rowly, the author has adopted the following definition by McClure and
Lopata: 

Performance measures are a broad, 
managerial tool that encompass 
measurement of inputs (indicators 
of the resources essential to provide 
service); outputs (indicators of the 
services resulting from the use of 
those resources); and impacts (the 
effects of these outputs on other 
variables or factors) . . . .12 

Although the terms performance criteria, 
performance indicators, and performance fac­
tors are sometimes used interchangeably, 
the author uses criteria to mean guidelines 
or standards operationally employed as 
the basis for making a judgment or deci­
sion. Typically, these criteria are identi­
fied from a literature review or a survey 
of various user groups, and as a result of 
some selection process they represent 
traits or characteristics of libraries/librar­
ians presumed to be desirable or impor­

http:environment.11
http:outcomes.10


550 College & Research Libraries November 1998 

tant. Thus, they can be called perfor­
mance indicators because their mea­
sures indicate something desired or im­
portant. 

Throughout this article, the word out­
comes is reserved for the realized goals 
valued by various campus constituents, 
also called stakeholders, and the word 
impact(s) is used for those direct effects 
the library has on institutional outcomes, 
or if more indirect, the enabling effects 
that contribute to these outcomes. Some 
researchers have further defined out­
comes by involving higher education con­
stituent groups to identify important out­
comes. McDonald and Micikas explain 
that “valued institutional outcomes” are 
those that are perceived to be important 
by the key stakeholders of colleges and 
universities (students, faculty, and aca­
demic staff and administrators) as well 
as by the external professional culture 
(accreditation agencies and profes­
sional organizations) and society at 
large (political bodies and the market-
place).13 The terms evaluation and assess­
ment are used interchangeably in this 
article, as are library effectiveness and li­
brary quality. 

Methods 
To identify valued institutional outcomes 
from key stakeholder groups, several lit­
erature reviews and content analyses 
were completed of the latest editions of 
regional accreditation standards, ACRL 
sectional standards, higher education 
outcomes assessment research, and aca­
demic library literature dealing with in­
formation literacy and changing roles. 
The author searched these various docu­
ments for language describing or imply­
ing enabling library services and re­
sources that contribute to the achievement 
of expected or desired educational out­
comes, thus making them prime candi­
dates for developing performance indi­
cators. She assumed that in addition to 
the higher education outcomes assess­
ment sources, the regional accreditation 
standards would be authoritative for ex­
posing institutional outcomes and ex­

pected/required services and resources of 
academic libraries. Several research re­
ports employing the Delphi survey tech­
nique were particularly useful because 
they reflected the views of large groups 
of key stakeholders on topics such as what 
college students should know and be able 
to do; what core information literacy skills 
are; and what instructional good practice 
criteria are. In addition to the sources 
mentioned above, performance indicators 
also were derived from reviews of library 
effectiveness research and performance 
evaluation manuals. Moreover, the author 
extracted ideas from publications discuss­
ing the emerging and future roles of 
largely digital libraries. For example, roles 
and functions that should be reflected in 
a library’s performance criteria and indi­
cators include:14 

• creation and support of “holistic
computing environments” that make the 
technology work for all users, regardless 
of location; 

• delivery of around-the-clock refer­
ence and instructional services over the 
network; 

• partnering across administrative
lines for the improvement of services and 
resources; 

• provision of improved electronic
integrated library systems that emphasize 
direct user access to both full-text and bib­
liographic resources for resident and dis­
tance education learners; 

• high-quality document delivery
service; 

• instructional design and produc­
tion of teaching materials; 

• creation of new knowledge pack­
ages and new access tools; 

• improvement of campus under­
standing of, and participation in, local in­
formation policy development. 

The results of the various reviews and 
analyses are presented next, organized by 
grouping selective findings from the re­
gional accreditation and ACRL standards; 
higher education research on teaching– 
learning outcomes; and library effective­
ness evaluation research and performance 
evaluation manuals. 

Findings from the Regional 
Accreditation and ACRL Standards 
Five of the seven regional accreditation 
commissions’ standards were revised 
within the past three to four years, and 
three were revised in 1996.15 With the ex­
ception of the North Central Regional 
Association, these documents contain 
separate sections dealing with library and learning resources, and all but one define
the “library” section’s scope broadly to 
include learning resources such as in­
structional media centers, computer cen­
ters, museums, language labs, networks 
and telecommunications facilities. The 
regional accreditation standards do not 
explicitly describe many of the institu­
tional outcomes to which academic librar­
ies directly contribute, but they do con­
tain statements in various sections that 
relate to library performance criteria, as 
well as clear statements about expected 
or required library outputs and inputs 
that support educational outcomes. Al­
though the standards in the designated 
library section relate primarily to inputs 
such as collections, facilities, and staff, 
there are standards in most of the docu­
ments that relate to contemporary issues 
such as access versus ownership, distance 
education, information literacy, and the 
availability of suitable and sufficient in­
formation technology. One overall theme 
is the importance of use over resource 
acquisition. For example, “the size of col­
lections and the amount of money spent 
do not ensure adequacy. Of more impor­
tance are the quality, relevance, accessi­
bility, availability and delivery of re­
sources and services, and their actual use 
by students, regardless of location.”16 The 
following themes reflect institutional ex­
pectations of libraries, thus making them 
key areas for the identification of perfor­
mance measures that can generate data 
to be part of the culture of evidence. 

Access, availability, and use: All six 
documents have a section heading or at 
least one entire paragraph of text devoted 
to access and availability, frequently mak­
ing connections to use. Statements deal­
ing with off-site programs, remote access, 

or distance learning support are often in­
cluded in the access and availability sec­
tions. For example, “Because adequate 
library and other learning resources and 
services are essential to teaching and 
learning, each institution must ensure 
that they are available to all faculty and 
enrolled students wherever the programs 
or courses are located and however they 
are delivered.”17 

Collections and learning resources: All 
six documents connect collections to the 
library’s primary goal of supporting 
teaching and learning, and all use lan­
guage to include a broad understanding 
of “collections.” Several directly connect 
resources to access and use. For example: 
“Library/learning resources must be in 
reasonable proportion to the needs to be 
served, but numbers alone are no assur­
ance of excellence. Of more importance 
are the quality, accessibility, availability 
and delivery of resources on site and else­
where; their relevance to the institution’s 
current programs; and the degree to 
which they are actually used.”18 

Information literacy: All the docu­
ments contain some type of statement 
about orientation, instruction, and/or 
training within the library section of the 
standards, usually connecting the value 
of this service to students becoming ef­
fective and/or independent learners and 
increasing their use of library and net­
work resources. But only one document 
contains language in both library and 
educational program sections. The fol­
lowing text from the educational program 
section connects information technology 
facility to educational outcomes: “The 
general education program provides the 
opportunity for students to develop the 
intellectual skills, information technology 
facility, affective and creative capabilities, 
social attitudes, and an appreciation for 
cultural diversity that will make them ef­
fective learners and citizens.”19 

Information technology: Academic 
computing is included within the scope 
of the library section of the standards in 
all but two of the documents (Middle 
States and Southern), where there are
separate sections in the standards for aca­
demic computing and information tech­
nology. All the documents make refer­
ences to having appropriate and sufficient 
information technology available, usually 
connecting it to improving or extending 
access. An example of a strong statement 
follows: “Institutions must provide the 
means by which students may acquire 
basic competencies in the use of comput­
ers and related information technology 
resources . . . reliable data networks 
should be available so that faculty, stu­
dents and staff may become accustomed 
to electronic communication and famil­
iar with accessing national and global in­
formation resources. There must be pro­
visions for ongoing training of faculty and 
staff so that they may make skillful use 
of appropriate application software.”20 

Outcomes assessment: All seven of the 
regional accreditation commissions have 
included in their standards, and/or in 
more recent supplemental publications, 
statements about the importance of stu­
dent outcomes assessment. Common to 
most of the documents are statements re­
quiring documentation of an assessment 
plan based on the institutional mission 
and academic program goals; ongoing 
assessment that involves various stake­
holders; use of a variety of assessment 
measures and methods, including making 
use of already collected institutional data; 
and evidence that the results of assessment 
are being used for program improvement. 

Collaboration with faculty and other 
academic staff: All six documents include 
statements in the library section and/or 
in another section of the standards about 
librarian collaboration with faculty or 
other academic staff. The most common 
type of collaboration specified is for col­
lection development. 

Staff: All six documents have a section 
or paragraph of the standards about pro­
fessional staff, and some also discuss sup­
port staff. For example, “Librarians and 
other resources center staff must demon­
strate their professional competence on 
the basis of criteria comparable to those 
for other faculty and staff. . . .”21 

Just as important is the text related to 
libraries and information technology in 
other sections, such as the educational 
program section. All but one association’s 
standards (New England) include some 
reference to libraries in this section, stat­
ing that either library resources must be 
sufficient to support the academic pro­
grams or the use of resources is required 
or expected. Other text in this section sup­
porting the library’s direct role in teach­
ing and learning refers to academic pro­
grams that demonstrate innovative teach­
ing methods by the use of library and 
media resources and text about coopera­
tive relationships, such as: “Librarians 
must work cooperatively with faculty and 
other information providers in assisting 
students to use materials.”22 

Comparing the three ACRL sections’ 
latest standards/criteria documents (Uni­
versity Libraries Section, 1989; College 
Libraries Section, 1995; and the Commu­
nity and Junior Colleges Libraries Section, 
1994) to the regional accreditation stan­
dards reveals many commonalities and
some significant differences. All three 
documents parallel the regional accredi­
tation standards by including a section 
heading and/or at least one full paragraph 
of text devoted to bibliographic instruction 
or information literacy; cooperative rela­
tionships on and off campus and resource-
sharing agreements; access and availabil­
ity issues related to collections and re­
sources; and assessment or evaluation. The 
following examples illustrate how some of 
these ideas are represented. 

The three ACRL section documents 
contain several statements relating to col­
laboration and cooperation with disci­
plinary faculty and other academic staff, 
but only the University Libraries Section
(ULS) document has an entire section 
devoted to this theme. Like some of the 
regional accreditation documents, it calls 
for the establishment of a relationship 
between the library and the computer and 
telecommunications services. Also, all 
three documents contain language about 
the importance of evaluation, but only the 
Community and Junior Colleges Librar­
ies Section (CJCLS) and the ULS docu­
ments describe the need for different mea­
sures. In fact, the ULS document provides 
useful guidance about the process of evalu­
ation and offers specific criteria that aca­
demic libraries might use. The CJCLS text 
makes the strongest connection between 
libraries and institutional impact: 

If institutional effect is measured in 
terms of student success in grades, 
credit and completion and transfer 
rates, then learning resource stan­
dards based on circulation statistics, 
book counts and other traditional 
measures may not be relevant be­
cause they are limited in detailing 
the direct impact of learning re­
source programs in effecting suc­
cessful learning outcomes. Learning 
resource effectiveness measures 
should rely on the relational at­
tributes of the program which di­
rectly impact learning attained by 
students.23 

The analysis suggests that although there is overlap with the regional accreditation standards, overall the ACRL section standards reflect a more internal ori­
entation. That is, they focus primarily on 
the inputs, processes, and outputs con­
sidered necessary for high-quality librar­
ies and learning resources, with little text 
devoted to broader roles or connections 
of library use to student learning or other 
institutional outcomes. The major excep­
tions to this observation are the ULS and 
CJCLS documents that include some lan­
guage about broader roles in the role and 
purpose sections. 

Also lacking is the prominence given 
to the kinds of statements found in sev­
eral of the regional accreditation docu­
ments which express the importance of 
access and delivery of resources in all for­
mats to all locations, on-site or elsewhere. 
Only the ULS document has a separate 
section labeled “Access,” which “implies 
the delivery of information, whether in 
printed or electronic format, by the library 

to the user at the user ’s location.”24 How­
ever, there are statements in the College 
Libraries Section and CJCLS documents 
about providing access to off-campus pro­
grams. In general, the accreditation stan­
dards seem to do a better job than the ACRL 
standards of connecting the use of library, 
learning, and network resources and ser­
vices to student educational outcomes. 

Findings from Higher Education 
Research on Teaching–Learning 
Outcomes 
Many of the teaching–learning outcomes 
included in this section come from major 
higher education researchers and centers, 
such as Alexander Astin; Peter Ewell; the 
National Center on Postsecondary Teach­
ing, Learning and Assessment; and the 
National Center for Higher Education 
Management Systems. Perhaps not sur­
prisingly, the author found that in most 
cases empirical research connecting col­
lege students’ experiences and outcomes 
with specific campus services and re­
sources did not include any mention of 
libraries/learning resource centers. One 
has to look closely to find outcomes that, 
though not directly stated, suggest the 
involvement of academic libraries. 

Alexander Astin’s longitudinal re­
search on freshmen and undergraduates 
is well known and frequently cited. His 
book, What Matters in College? Four Criti­
cal Years Revisited, focuses on student out­
comes and how they are affected by the 
college environment. Several findings 
reported in Astin’s review article in Lib­
eral Education are noteworthy: 

The single most powerful source of 
influence on the undergraduate 
student’s academic and personal 
development is the peer group. . . . 
The amount of interaction among 
peers has far-reaching effects on 
nearly all areas of student learning 
and development. . . . Time spent 
studying and doing homework had 
significant effects on more than two-
thirds of the eighty-two outcome 
measures (e.g. retention, graduating
with honors, enrollment in gradu­
ate school, and standardized test 
scores).25 

In addition to time spent studying and peer group interaction, Astin reports other factors highly associated with student academic development. Many of these can be connected to librar­
ies, such as participating in college intern­
ship programs; participating in racial/ 
cultural awareness workshops; doing in­
dependent research projects; making class 
presentations; and taking essay exams.26 

Several government-sponsored re­
search studies connected to National Edu­
cation Goal Six (formerly numbered Goal 
Five) offer significant findings of rel­
evance to the design of student informa­
tion literacy performance outcomes and 
faculty good practice instructional crite­
ria. The National Assessment of College 
Student Learning has produced several 
studies, but the one of most interest is the 
1995 report Identifying College Graduates’ Essential Skills in Writing, Speech, Listen­
ing and Critical Thinking. What follows are 
examples of specific skills or expected 
performance outcomes of relevance to 
academic librarians which a consensus of 
stakeholder groups considered to be im­
portant or very important for college 
graduates to possess. Although these 
skills are not new to librarians involved 
with information literacy instruction, the 
fact that they have been articulated at a 
national level increases their credibility 
at all levels and their importance for in­
structional development and assess­
ment.27 For example: 

Pre-writing skills: College gradu­
ates should be able to research their 
subject and identify problems to be 
solved that their topic suggests. 1.a. 

Locate and present adequate sup­
porting material. 

Critical Thinking Skills: Evaluation 
Skills: The ability to evaluate the 
credibility, accuracy and reliability 
of sources of information was cited 
as extremely important. 

Critical Thinking Skills: Inference 
Skills: Collecting and Questioning 
Evidence: Determine what is the 
most significant aspect of a problem 
or issue that needs to be addressed, 
prior to collecting evidence. Deter­
mine if one has sufficient evidence 
to form a conclusion.28 

Another research project connected to 
the same National Education Goal ex­
plored the feasibility and utility of estab­
lishing good practice indicators in under­
graduate instruction as supplementary 
data to student outcomes data. Findings 
from this project are important to librar­
ians and others who teach or conduct 
training because they focus on assessing 
the instructional planning and delivery 
practices of librarians, which is after all 
the other side of the coin of assessing stu­
dent learning and performance. The re­
port synthesizes an extensive review of 
the empirical research in this area. Al­
though there are many useful findings of 
interest to instructional services librar­
ians, what follows are selected findings 
based on robust and strong data correla­
tions thought to be highly relevant for se­
lecting performance measures and types 
of methods to assess teaching practices: 

Indicators based on student behav­
iors and active learning instruc­
tional processes gathered through 
student and faculty questionnaires 
are most promising for develop­
ment as potential national indica­
tors, supplemented by transcript 
studies and assessments of typical 
college examinations and assign­
ments. . . . Overall the bulk of evi­
dence appears to support the util­
ity and consistency of data obtained 
from student self-reports.29 

For broader critical thinking and 
problem-solving abilities . . . three 
distinct kinds of in-class activities 
made a difference in promoting 
thinking skills—student discussion, 
an explicit emphasis on problem-
solving procedures and applica­
tions, and stressing the use of ver­
balization and modeling strategies 
in which students think through a 
problem.30 

Library Effectiveness Criteria and 
Performance Indicators from 
Research and Evaluation Manuals 
The following sources dealing with orga­
nization-level evaluation of academic li­
braries are briefly profiled because even 
though few impact measures were dis­
covered from the author’s careful review, 
they illustrate improved approaches for 
defining essential performance criteria. 
The selection of sources that follow was 
based on two criteria: (1) empirical stud­
ies using a multiple stakeholder or con­
stituent satisfaction approach; and (2) 
performance evaluation manuals that 
have been field-tested or developed with 
broad professional input. 

More recent studies of library effective­
ness at the organizational level have em­
ployed different approaches to measur­
ing effectiveness and service quality, 
which are based on the research of Nancy 
A. Van House and Thomas Childers and 
others.31 Although none of these multiple-stakeholder or constituent-satisfaction studies addresses the impact of academic
libraries, they seem to offer improve­
ments over previous approaches to deter­
mining what should be measured. For ex­
ample, user validation of the most impor­
tant indicators of effective performance 
is provided; the methods are theoreti­
cally grounded, yielding reliable and 
valid test instruments; and librarians can 
select locally appropriate performance 
indicators for replication at their institu­
tions. 

Rowena Cullen and Philip Calvert’s 
work, carried out in all New Zealand aca­
demic libraries, included random samples 
of all key stakeholder groups to rate the 
importance of ninety-nine performance 
indicators. Figure 1 presents those indi­
cators rated 3.8 or higher by all six and 
by five of the stakeholder groups, using 
a scale where five is “very important.”32 
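
For a library gathering similar stakeholder ratings locally, the selection rule behind figure 1 is straightforward to apply: compute each group’s mean importance rating for an indicator and count how many groups reach the 3.8 cutoff. The sketch below (in Python) is illustrative only; the two indicator labels are borrowed from figure 1, but the stakeholder group names and the ratings are invented, and nothing here is drawn from Cullen and Calvert’s instrument.

```python
# Hypothetical importance ratings (1-5 scale) collected from six stakeholder
# groups for two indicators; group names and scores are invented.
ratings = {
    "Match of hours open with user needs": {
        "undergraduates": [5, 4, 5], "postgraduates": [4, 5], "academic staff": [4, 4],
        "library staff": [5, 4], "library managers": [4], "senior administrators": [4, 5],
    },
    "Total amount of library budget": {
        "undergraduates": [3, 4], "postgraduates": [4], "academic staff": [4, 5],
        "library staff": [5, 4], "library managers": [5], "senior administrators": [4, 4],
    },
}

THRESHOLD = 3.8  # cutoff used to build figure 1

for indicator, groups in ratings.items():
    means = {group: sum(scores) / len(scores) for group, scores in groups.items()}
    groups_at_or_above = sum(1 for m in means.values() if m >= THRESHOLD)
    if groups_at_or_above == len(means):
        marker = "+"  # all six groups reach the threshold
    elif groups_at_or_above == len(means) - 1:
        marker = "#"  # five of the six groups reach the threshold
    else:
        marker = " "
    print(f"{marker} {indicator}: {groups_at_or_above} of {len(means)} group means at or above {THRESHOLD}")
```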

Tobin de Leon Clarke’s research and 
resulting Output Measures Manual for 
California community college library/ 
learning resource centers was based on a 
survey of all California community col­
lege libraries/learning resource centers. 
Using resource center administrators to 
rate potential output measures, she iden­
tified twelve useful measures and ob­
tained data on how such measures would 
be used at a local level. Only one impact 
measure was included, which could be 
used at institutions where separate infor­
mation literacy/library research courses 
are offered: “library skills course comple­
tion rate in proportion to the FTE student 
population.”33 
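
The arithmetic behind such a measure is simply a ratio of completions to FTE enrollment. A minimal sketch follows, using invented local figures; Clarke’s manual specifies the measure itself, not any particular implementation.

```python
def library_skills_completion_rate(completions: int, fte_students: float) -> float:
    """Library skills course completions as a proportion of the FTE student population."""
    if fte_students <= 0:
        raise ValueError("FTE student population must be positive")
    return completions / fte_students

# Invented example: 420 completions at a college with 9,500 FTE students
# gives a rate of roughly 0.044, i.e., about 4.4 completions per 100 FTE.
print(round(library_skills_completion_rate(420, 9500), 3))
```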

McClure and Lopata’s qualitative re­
search on assessing academic networking 
resulted in the publication of Assessing the 
Academic Networked Environment: Strate­
gies and Options. This much-needed 
manual provides strategies, performance 
measures, and procedures to document 
the extent, effectiveness, efficiency, and, 
to a lesser degree, the effects of the aca­
demic networked environment. Derived 
from their research findings is a descrip­
tion of the constituent elements of the aca­
demic networked environment and this 
conclusion: “An adequate network infra­
structure is believed to be essential to at­
tract and retain high quality faculty and 
students.”34 Depending on local educa­
tional goals, the following are potentially 
useful impact measures stated in, or re­
phrased from, this source: 

1. Teaching–learning indicators:
• percent of all students enrolled in distance learning classes in a given semester and distance learning student GPA compared to non-distance-learning student GPA;

FIGURE 1
Cullen and Calvert Study: Performance Indicators for Which Group Means Were 3.8 or Higher*

I. Management/Administration
+ Match of goals and objectives to user group needs (4.18)
+ Competence of library management (4.32)
# Total amount of library budget (4.06)

II. Collections and Learning Resources Adequacy
# Provision of multiple copies of items in high use (4.12)
+ Currency of library materials (4.15)
+ Flexibility of budget to respond to new subject areas (4.01)
+ Speed of acquisition of new materials (4.01)
+ Adequacy of library collection compared with other institutions (3.84)
# Frequent evaluation of collection (3.82)

III. Access, Availability, and Use
+ Match of hours open with user needs (4.33)
+ Proportion of library materials listed on computer catalog (4.33)
+ Proportion of items wanted by user finally obtained (4.24)
+ Access to library catalogues, via networks throughout the campus (4.06)
+ Ease of use of public catalogs (4.09)
# Speed and accuracy of reshelving of materials (4.1)
# Provision made for disabled users (4.17)
# Access to CD-ROMs, databases, via networks throughout campus (3.94)
# Speed of recall of items out on loan requested by other users (3.95)
# Speed of recall of reserved items (3.85)
# Availability of periodical indexes on CD-ROM (3.85)

IV. Instructional and Research Services
+ Expert staff assistance to users available when needed (4.53)
+ Helpfulness, courtesy of staff (4.50)
+ Expertise of reference staff (4.38)
+ Availability of reference staff when needed (4.18)
+ Success in answering reference questions (4.15)

V. Facilities/Infrastructure
+ Quietness of study environment (4.18)
+ Number of seats per full-time student equivalent (4.06)
+ Equipment (e.g. photocopiers) kept in service by good maintenance (4.16)

* Source: Cullen and Calvert academic libraries data. A + before an indicator indicates that all six groups had means of 3.8 or higher out of a possible 5; a # before an indicator shows that five of the six groups’ means were 3.8 or higher.

• percent of all student, faculty, and
librarian respondents who rate the use­
fulness of having access to the OPAC and 
other online library services from outside 
the library as “very useful” for complet­
ing their class assignments, research, 
teaching and design of class assignments, 
job-related activities, and professional de­
velopment; 

• percent of student, faculty, and librarian respondents who indicate that student use of networked information resources has positively affected the quality of student papers/projects;

• percent of students who indicate to what extent and how the use of aspects of particular network resources has af­

FIGURE 2
Assessment Domains for the Teaching–Learning Library

Infrastructure: Human Resources, Collections, and Equipment/Facilities
Access, Availability, and Use of Teaching–Learning Resources
Institutional Viability and Vitality
Faculty/Academic Staff Teaching Effectiveness, Scholarly Productivity, and Professional Development
Learning Outcomes and Enabling Instructional Outputs

fected their learning of specific information literacy skills;

• longitudinal data for the same sample of students comparing freshman and senior ratings of their confidence in their ability to be independent electronic information seekers and evaluators;

• percent of classes within academic programs requiring network-based assignments.

2. Faculty productivity indicators: Per­
cent of faculty indicating that in the past 
two years their use of networked infor­
mation resources has resulted in desirable 
outcomes such as obtaining funding and 
publishing research/scholarly articles. 

3. Recruitment indicators: Percent of new faculty, professional staff hires, and new students, and their ratings of the extent to which specific network applications affected their decision to come to the institution.35
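
Most of the indicators above reduce to the share of a respondent group selecting a particular rating on a survey item. The fragment below is a rough sketch of that calculation; the survey item, respondent records, and function name are hypothetical and are not taken from McClure and Lopata’s instruments.

```python
# Hypothetical responses: (respondent group, rating of remote access to the
# OPAC and other online library services), on a 5-point scale where 5 = "very useful".
responses = [
    ("student", 5), ("student", 4), ("student", 5), ("student", 5),
    ("faculty", 5), ("faculty", 3),
    ("librarian", 5), ("librarian", 4),
]

def percent_rating(records, group, rating=5):
    """Percent of one respondent group giving the specified rating."""
    group_ratings = [r for g, r in records if g == group]
    if not group_ratings:
        return 0.0
    return 100.0 * sum(1 for r in group_ratings if r == rating) / len(group_ratings)

for group in ("student", "faculty", "librarian"):
    print(f"{group}: {percent_rating(responses, group):.0f}% rated remote access 'very useful'")
```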

Performance Evaluation Manuals 
In the past ten years, there have been vari­
ous initiatives to advance the measure­
ment of library and network effective­
ness/quality by the publication of several 
manuals. The books by Van House, Weil, 
and McClure; McClure and Lopata; and 
Roswitha Poll were sponsored by major 
professional organizations, and were de­
signed for use in all academic libraries 
and field-tested.36 They describe and pro­
vide detailed guidance on data collection 
for a variety of measures of effectiveness 

(and cost-benefit/efficiency measures in 
the McClure and Lopata manual), but 
only the McClure and Lopata manual in­
cludes some impact measures. All include 
user satisfaction measures, materials 
availability and use measures, facilities 
use measures, and some public service 
measures. A comparison of the types of 
performance measures included in these 
manuals with those identified by Cullen 
and Calvert’s stakeholder research re­
veals that there are several equivalent in­
dicators, as well as unique ones from 
Cullen and Calvert’s findings, such as 
match of goals and objectives to user 
group needs; proportion of library mate­
rials listed on OPAC; access to library 
catalogues and other databases via net­
work; provision made for disabled users; 
expert staff assistance to users available when 
needed; and quietness of study environment. 

The findings described above from 
multiple stakeholder research studies and 
field-tested manuals reveal what aca­
demic library constituent groups and re­
spected colleagues perceive to be impor­
tant input and output performance indi­
cators. As such, they are candidates for 
key enabling indicators: those that make possible the achievement of valued institutional outcomes such as
faculty scholarly and research productivity 
and student development of information lit­
eracy skills. Moreover, it is clear that accredi­
tation standards require or expect good per­
formance on these types of measures. 

Conceptualizing Outcomes-Based 
Assessment 
The next section provides both a concep­
tual framework to structure assessment 
efforts and a listing of specific outcomes, 
outputs, and inputs connected to perfor­
mance indicators that will generate evi­
dence of progress toward and/or accom­
plishment of valued campuswide out­
comes. Figure 2 presents a framework of five assessment domains that the author suggests all academic libraries should include in their assessment plans. It depicts the
foundational role that infrastructure in­
puts play, as opposed to the primary role 
they have been known for in past library 
effectiveness evaluation studies. It also 
illustrates the priority of student learn­
ing outcomes by placing them at the top. 
Another intent of this schema is to com­
municate that each layer depends on the 
layers under it, although in reality there 
is an interplay of performance indicators 
represented by the categories. 

Connecting Institutional Outcomes 
and Outputs to Performance 
Indicators 
Using the findings from all the sources 
analyzed, figures 3 through 5 (pp. 564– 
570) reflect the five assessment domains 
and contain outcome/output statements 
and salient performance indicators. 
McClure and Lopata’s publication, Assess­
ing the Academic Networked Environment, 
is credited for many of the network-re­
lated performance indicators listed in 
these figures. Although a library’s selec­
tion of outcome statements and perfor­
mance indicators must be based on its 
institution’s mission, goals, and planning 
documents, all libraries might find use­
ful some of the outcome statements and 
performance indicators in figures 3 
through 5. They reflect what has been 
documented as valued by academic stake­
holders in three or more sources. Many 
variables other than the indicators listed 
in these figures lead to the accomplish­
ment of the specified campuswide out­
comes, but this article, understandably, is 

limited to the identification of only library 
and network contributions. 

The usefulness and benefits of figures 
3 through 5 are to be found in the sys­
tematic linkages of specific performance 
indicators to important campuswide out­
comes and outputs, but, admittedly, the 
author has not tried to be comprehensive. 
The emphasis of this article has been on 
identifying teaching–learning outcomes, 
outputs, and performance indicators be­
cause they are at the heart of how the li­
brary makes a difference to its institution. 
Of equal value is the inclusion of good 
practice criteria for teaching effectiveness, 
reminding the reader that librarian-in­
structors need to be evaluated using 
many of the same performance indicators 
as disciplinary faculty because good 
teaching enables effective learning. In 
addition, representative performance in­
dicators are included that were men­
tioned in standards and multiple stake­
holder studies for access, use, infrastruc­
ture, and institutional vitality. Both quan­
titative and qualitative types of measures 
are included because building a culture 
of evidence involves a cumulative, 
multimethod approach. Although data 
collection methods are not specified, the 
performance indicators suggest certain 
methods and readers can find guidance 
in several of the previously referenced 
performance evaluation manuals. 

The following example illustrates and 
clarifies how these figures might be used.
A library preparing a self-study for an 
upcoming accreditation visit might begin 
by comparing the outcome statements 
found in figures 3 and 4 with outcomes 
articulated in its campus strategic plan or 
educational goals and priorities docu­
ments. As a result, additional outcome statements might be identified, and/or the language used in the figures might be modified to reflect local preferences. The
idea is to develop library outcome state­
ments that reflect desired institutional 
outcomes and priorities as closely as pos­
sible, as well as drafting those that are 
important to librarians and accreditation 
teams (e.g., “All graduates are informa­



The Library’s Impact on Campuswide Outcomes 559 

tion literate,” or “The campus environ­
ment is conducive to learning”), but may 
not be contained in local documents. 
Within these broad outcome statements, 
librarians then begin to specify the en­
abling outputs and inputs (resources, 
materials, facilities, services) that their li­
braries provide and to determine perfor­
mance indicators that will actually be used 
for data collection and documentation. As 
figure 3 illustrates for some student learn­
ing outcomes such as #1, the library’s con­
tributions can be captured by performance 
measures that directly indicate progress 
(or lack of progress) and extent of accom­
plishment of the desired student learning 
outcome. However, for other outcomes, 
such as #2, the linkage is indirect, and 
enabling outputs and inputs must first be 
identified for which measurable library 
support can be documented that makes 
essential contributions to the accomplish­
ment of the outcome. This process should 
be carried out collaboratively with in­
volvement, or at least input, from aca­
demic administrators, faculty, and students. 
Perhaps the members of a library advisory 
committee could be used for this task. 
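
One lightweight way to keep this collaborative work organized is to record each broad outcome statement together with the enabling outputs the library provides and the performance indicators actually chosen for data collection. The structure below is only a sketch of that idea; the outcome wording echoes an example mentioned above, while the specific outputs, indicators, and data sources are placeholders that a local group would replace.

```python
# A minimal, hypothetical structure linking one campus outcome to the library's
# enabling outputs and the indicators chosen to document its contribution.
assessment_plan = [
    {
        "outcome": "All graduates are information literate",
        "enabling_outputs": [
            "course-integrated information literacy instruction",
            "around-the-clock access to networked library resources",
        ],
        "performance_indicators": [
            "percent of courses with a library instruction component, by academic program",
            "pre/post scores on a locally developed information literacy assessment",
        ],
        "data_sources": ["instruction statistics", "student self-report surveys"],
    },
]

for entry in assessment_plan:
    print(entry["outcome"])
    for indicator in entry["performance_indicators"]:
        print("  indicator:", indicator)
```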

Clearly, outcome statement #2 in figure 
3 is extremely broad (and could be re­
phrased for graduate programs), yet it is 
predictably part of every institution’s de­
sired outcomes. The value of the eight “en­
abling instructional outputs and good 
practice criteria” is that, taken together, 
they define the library-related compo­
nents of necessary resources, services, and 
facilities that make possible the accomplishment of this broad outcome. In­
deed, the challenge to librarians is to make 
this connection clear by explicitly linking 
the enabling outputs and inputs to the 
desired outcome and to document the 
amount, quality, and effects of use of these 
(or other) essential factors. Throughout 
this process, care should be taken to use 
language from campus or higher educa­
tion documents, not library jargon. More­
over, another challenge is to organize and 
present the data meaningfully for the tar­
get audience of faculty and academic ad­
ministrators. Typically, library statistics 

are not kept or organized by academic programs, but to document the effects on students in particular academic divisions, data should be organized in that way whenever possible. Use of the performance indicators will generate data and
other types of documentation (such as 
copies of assessment plans) that describe 
to what extent and with what results the 
library’s inputs and outputs (collections, 
electronic resources, services) have con­
tributed to progress and/or accomplishment 
of these enabling outputs and practices. 

Conclusions 
Academic libraries, computer/informa­
tion technology units, and their staffs do 
make a significant difference in the qual­
ity and outcomes of learning and teach­
ing. Sometimes, though, librarians are so 
involved in daily operations that assess­
ment and providing evaluative informa­
tion on a regular basis for themselves and 
their constituents take a low priority. 
Only a scheduled accreditation visit or 
campus program review causes a change 
in priorities. One of the main points to 
be drawn from this article is that the as­
sessment of library performance should 
be defined and shaped by its connections 
and contributions to institutional goals 
and desired educational outcomes. Thus, 
rather than continuing to generate poten­
tially irrelevant data, librarians, in col­
laboration with faculty in the disciplines 
and other academic staff, need to define 
for their institutions the key functions 
and resources perceived to be directly (or 
indirectly) linked to valued outcomes, 
such as student learning, teaching, and 
scholarly activity. Moreover, librarians 
need to specify indicators of performance 
that would generate needed and accept­
able data and other forms of documenta­
tion. Although it is always desirable to 
obtain data that attempt to “prove” that 
such and such an effect resulted from such 
and such cause or intervention, the au­
thor agrees with many higher education 
research findings uncovered in her litera­
ture review that confirm the reliability of 
student self-report data and other types 
of qualitative data that together can be 
used to demonstrate impact. 

Working cooperatively to define the performance indicators and key outputs need not occur only as a separate activity at program review or accreditation times but, rather, can be part of all ongoing com­
mittee work, such as collection develop­
ment, curriculum planning, and informa­
tion technology planning. Finally, this 
type of dialogue should result in pre­
ferred groupings and presentation of both 
quantitative and qualitative data that will 
form the culture of evidence for all stake­
holders. 

This article has tried to facilitate such 
a process by offering a framework of as­
sessment categories for academic librar­
ies that reflects valued institutional out­
comes found in a variety of publications 
and by providing examples of library per­
formance indicators whose measures 
form part of a culture of evidence that 
documents progress and contributions to­
ward the realization of desired outcomes. 
Although it is expected that institutional 
differences will lead to the development 
of other performance criteria and indica­
tors, the assessment domains, or their 
equivalents, are important for all institu­
tions. Particularly important is the em­
phasis on the teaching–learning outcomes 
to which the library contributes, such as 
teaching students to be information liter­
ate, training staff to use technology, and 
the library’s role in the development of 
new knowledge or information-retrieval 
products. These are active roles, some­
times unique leadership roles, for which 
the results of performance assessment can 
be directly linked to undergraduate and 
graduate learning outcomes. By identify­
ing those indicators found to be connected 

to institutional outcomes, the author 
hopes to have clarified how library ser­
vices, human and material resources, and 
facilities can be defined and measured to 
document contributions to these out­
comes. Also, by recommending specific 
performance evaluation manuals, the au­
thor has attempted to build on the exist­
ing library and network effectiveness 
knowledge base. 

In summary, these points remain es­
sential to the improvement of assessing 
impacts and outputs: 

• In collaboration with other key con­
stituent groups, a library assessment plan 
should be developed that focuses on per­
formance indicators which contribute to 
valued institutional outcomes and out­
puts. This may require that decisions be 
made to stop certain types of data collec­
tion so that time and resources are avail­
able for new data collection. It also means 
that changes may be needed in the types 
of questions asked of users and in the 
unobtrusive ways that computerized sys­
tems can document use. 

• Other assessment instruments and opportunities on campus should be sought where aspects of library performance can be included, such as senior or alumni surveys or focus groups.

• Benchmark data should be established for appropriate performance indicators, and progress, change, and achievement documented as part of the culture of evidence. Lack of progress also can be connected to fiscal or other deficiencies in essential inputs so that unmet or unsatisfied demand/needs might be better explained.

• Relevant, available institutional and library data should be inventoried and used to complement other data collection methods so that a multimethod approach is used.

• Data and findings should be organized and presented in ways that are more meaningful to faculty and academic administrators, such as grouping data items by academic program or broad disciplines and describing how students benefit; a brief illustrative sketch of such grouping against locally chosen benchmarks follows this list.
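As a concrete illustration of the last two points, the short Python sketch below groups hypothetical performance-indicator values by academic program and flags each against a locally chosen benchmark. All program names, indicator names, and figures are invented for demonstration only and are not drawn from any study cited here; an actual implementation would read counts from the institution's own systems.

# Illustrative sketch only: every name and value below is hypothetical.
# Groups observed performance-indicator values by academic program and
# compares each against a locally agreed-upon benchmark.
from collections import defaultdict

# Each record: (academic program, indicator, observed value for the year).
observations = [
    ("Biology", "circulation per FTE student", 14.2),
    ("Biology", "instruction participation rate", 0.62),
    ("History", "circulation per FTE student", 22.7),
    ("History", "instruction participation rate", 0.48),
]

# Benchmarks would be negotiated with campus stakeholders; placeholders here.
benchmarks = {
    "circulation per FTE student": 18.0,
    "instruction participation rate": 0.55,
}

report = defaultdict(list)
for program, indicator, value in observations:
    target = benchmarks[indicator]
    status = "meets" if value >= target else "below"
    report[program].append(f"{indicator}: {value} ({status} benchmark of {target})")

for program in sorted(report):
    print(program)
    for line in report[program]:
        print("  " + line)

Grouping the output by academic program, rather than by library department, is what makes the same counts legible to deans, program review committees, and accreditation teams.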



 


On a broader, more external level, academic librarians can provide leadership to further improve the assessment of their libraries. They can become more familiar with accreditation standards and be part of their revision so that explicit language might be included about the library’s instructional role and the need for information literacy as an undergraduate outcome.37 Librarians also can work to revise ACRL standards so that they reflect the emphases found in current accreditation standards and other higher education publications, such as demonstrating library/network use and its connection to teaching–learning outcomes, and documenting the instructional and technology development roles that librarians contribute to valued institutional outcomes.

Clearly, the time is ripe for on-campus and broader professional initiatives to emphasize measuring and describing the extent and effects of library activities on teaching–learning outcomes. Such activities include information literacy programs; course/curricular development practices and teaching methods; the creation of learning opportunities via programming, physical or virtual exhibits, online tutorials, and customized Web-based information resources; and collaborations with disciplinary faculty and other academic staff in technology planning and the development of instructional innovations and new knowledge or access products. Assessing impact becomes a way of organizational thinking about how academic libraries are linked to the overall educational enterprise. The resulting linkages, relationships, and benefits to the institution strengthen and help transform the library for the twenty-first century.

NOTES 

1. Sarah M. Pritchard, “Determining Quality in Academic Libraries,” Library Trends 44 (winter 1996): 591.
2. Guidelines for Librarian Evaluators (Philadelphia: Commission on Higher Education, Middle States Association of Colleges and Schools, 1997), 1.
3. See Paul B. Kantor, Objective Performance Measures for Academic and Research Libraries (Washington, D.C.: ARL, 1984); Charles R. McClure and Cynthia L. Lopata, Assessing the Academic Networked Environment: Strategies and Options (Washington, D.C.: Coalition for Networked Information, 1996); Frederick W. Lancaster, If You Want to Evaluate Your Library, 2nd rev. ed. (London: Library Association Publishing, 1993).
4. For example, see Steve Morgan, Performance Assessment in Academic Libraries (New York: Mansell, 1995); Roswitha Poll, ed., Measuring Quality: International Guidelines for Performance Measurement in Academic Libraries (München: K.G. Saur, 1996); Patricia A. Sacks and Sara L. Whildin, Preparing for Accreditation: A Handbook for Academic Librarians (Chicago: ALA, 1993), 39–74; Nancy A. Van House, Beth T. Weil, and Charles R. McClure, Measuring Academic Library Performance: A Practical Approach (Chicago: ALA, 1990).
5. For example, see Rosemary DuMont and Paul F. DuMont, “Measuring Library Effectiveness: A Review and an Assessment,” in Advances in Librarianship, vol. 9, ed. Michael H. Harris (New York: Academic Pr., 1979), 103–141; Deborah L. Goodall, “Performance Measurement: A Historical Perspective,” Journal of Librarianship 20 (1988): 128–144; Joseph A. McDonald and Lynda B. Micikas, Academic Libraries: The Dimensions of Their Effectiveness (Westport, Conn.: Greenwood Pr., 1994); Pritchard, “Determining Quality in Academic Libraries,” 572–594; Nancy A. Van House, “Output Measures in Libraries,” Library Trends 38 (fall 1989): 268–279.
6. For example, see Jane E. Hiscock, “Does Library Usage Affect Academic Performance?” Australian Academic and Research Libraries 17 (1986): 207–14; Patricia B. Knapp, The Monteith College Library Experiment (New York: Scarecrow Pr., 1966); Lloyd A. Kramer and Martha B. Kramer, “The College Library and the Drop-out,” College & Research Libraries 29 (1968): 310–12; Felix Snider, “The Relationship of Library Ability to Performance in College” (Ph.D. diss., Univ. of Illinois, Urbana-Champaign, 1965).
7. Ronald R. Powell, “Impact Assessment of University Libraries,” Library and Information Science Research 14 (1992): 254.
8. Ralph A. Wolff, “Using the Accreditation Process to Transform the Mission of the Library,” New Directions for Higher Education 90 (summer 1995): 79.
9. ———, “Rethinking Library Self-Studies and Accreditation Visits,” in The Challenge and Practice of Academic Accreditation: A Sourcebook for Library Administrators, ed. Edward D. Garten (Westport, Conn.: Greenwood Pr., 1994), 125–138.
10. For example, see D. W. Farmer and Terrence F. Mech, eds., “Information Literacy: Developing Students as Independent Learners,” New Directions for Higher Education 78 (1992); Arlene Greer, Lee Weston, and Mary Alm, “Assessment of Learning Outcomes: A Measure of Progress in Library Literacy,” College & Research Libraries 52 (Nov. 1991): 549–557.
11. Carla J. Stoffle and Karen Williams, “The Instructional Program and Responsibilities of the Teaching Library,” New Directions for Higher Education 90 (summer 1995): 64.
12. McClure and Lopata, Assessing the Academic Networked Environment, 5.
13. McDonald and Micikas, Academic Libraries, 36.
14. These roles and functions have been drawn primarily from Chris D. Ferguson and Charles A. Bunge, “The Shape of Services to Come: Values-based Reference Service for the Largely Digital Library,” College & Research Libraries 58 (May 1997): 252–265; Stoffle and Williams, “The Instructional Program and Responsibilities of the Teaching Library,” 63–74.
15. Although there are six regions, there are seven accrediting higher education commissions because the Western Association of Schools and Colleges (WASC) has one for community colleges and one for senior colleges and universities. The North Central Association lacks a designated section for libraries/learning resources within the 1996 “Criteria for Accreditation,” but it does include library requirements in the “General Institutional Requirements” that describe threshold-level requirements for affiliation with the commission. Therefore, in most cases, the analysis was based on these six documents: Characteristics of Excellence in Higher Education: Standards for Accreditation (Philadelphia: Commission on Higher Education of the Middle States Association of Colleges and Schools, 1994); Standards for Accreditation (Winchester, Mass.: Commission on Institutions of Higher Education, New England Association of Schools and Colleges, 1992); Accreditation Handbook (Seattle: Commission on Colleges, Northwest Association of Schools and Colleges, 1996); Criteria for Accreditation (Decatur, Ga.: Commission on Colleges, Southern Association of Colleges and Schools, 1996); Handbook of Accreditation (Oakland, Calif.: Accrediting Commission for Senior Colleges and Universities, Western Association of Schools and Colleges, 1988); Handbook of Accreditation and Policy Manual (Santa Rosa, Calif.: Accrediting Commission for Community and Junior Colleges, Western Association of Schools and Colleges, 1996).
16. Southern Association of Colleges and Schools, Criteria for Accreditation, 5.1.1.
17. Ibid.
18. Middle States Association of Colleges and Schools, Characteristics of Excellence in Higher Education, 15.
19. Western Association of Schools and Colleges, Commission for Community and Junior Colleges, Handbook of Accreditation, 4.C.3.
20. Southern Association of Colleges and Schools, Criteria for Accreditation, 5.3.
21. Middle States Association of Colleges and Schools, Characteristics of Excellence in Higher Education, 16.
22. Southern Association of Colleges and Schools, Criteria for Accreditation, 5.3.
23. “Standards for Community, Junior and Technical College Learning Resources Programs,” College & Research Libraries News 55 (Oct. 1994): 572–73.
24. “Standards for University Libraries: Evaluation of Performance,” College & Research Libraries News 50 (Sept. 1989): 684.
25. Alexander Astin, “What Matters in College,” Liberal Education 79 (fall 1993): 7.
26. Ibid.
27. In fact, several other significant publications have been penned by librarians that identify desired student outcomes and skills related to information literacy. For example, see Christina S. Doyle, Outcome Measures for Information Literacy within the National Education Goals of 1990: Final Report to the National Forum on Information Literacy—Summary of Findings 1992, ERIC, ED 351033; Jeremy J. Shapiro and Shelley K. Hughes, “Information Technology as a Liberal Art: Enlightenment Proposals for a New Curriculum,” Educom Review 31 (Mar./Apr. 1996): 31–35; Susan C. Curzon and Work Group on Information Competence, “Information Competence in the California State University System: A Report” (Commission on Learning Resources and Instructional Technology, California State University System, Dec. 1995), photocopy.
28. Elizabeth A. Jones, National Assessment of College Student Learning: Identifying College Graduates’ Essential Skills in Writing, Speech and Listening, and Critical Thinking: Final Project Report (Washington, D.C.: U.S. Department of Education, 1995), 39, 136, 142.
29. A Preliminary Study of the Feasibility and Utility for National Policy of Instructional “Good Practice” Indicators in Undergraduate Education: Contractor Report (Washington, D.C.: Office of Educational Research and Improvement, U.S. Department of Education, 1994), 1, 24.
30. Ibid., 17.
31. For example, see Nancy A. Van House and Thomas Childers, “Dimensions of Public Library Effectiveness II: Performance,” Library and Information Science Research 12 (1990): 131–152; Rowena Cullen and Philip Calvert, “Stakeholder Perceptions of University Library Effectiveness,” Journal of Academic Librarianship 21 (Nov. 1995): 445; McDonald and Micikas, Academic Libraries; Danuta A. Nitecki, “Changing the Concept and Measure of Service Quality in Academic Libraries,” Journal of Academic Librarianship 22 (May 1996): 181–190.
32. The author is indebted to Rowena Cullen for providing the mean scores from each of the six stakeholder groups for all ninety-nine indicators, which allowed the author’s analysis and presentation of data for figure 1. A cutoff of 3.8 was selected because it reveals more than the top twenty ranked indicators from the Cullen and Calvert article but still honors what they found about the small differences in ranking not being significant in the middle of the ranked lists.
33. Tobin de Leon Clarke, “Output Measures for Evaluating the Performance of Community College Learning Resources Programs: A California Case Study,” in Advances in Librarianship, vol. 17, ed. Irene P. Godden (New York: Academic Pr., 1993), 199.
34. McClure and Lopata, Assessing the Academic Networked Environment, 2.
35. Ibid., 23–54.
36. Van House, Weil, and McClure, Measuring Academic Library Performance; McClure and Lopata, Assessing the Academic Networked Environment; Poll, Measuring Quality.
37. The California Academic and Research Librarians Association Task Force to Recommend an Information Literacy Standard to WASC submitted its work to both of WASC’s higher education accrediting commissions in fall 1997. As of July 17, 1998, these two documents can be accessed at http://www.carl-acrl.org/Reports/rectoWASC.html.



FIGURE 3 
Student Learning Outcomes and Enabling Outputs 

with Selected Performance Indicators 

Student Learning Outcomes
1. All graduates are information literate, prepared to be lifelong learners able to effectively
identify, access, and use a variety of information resources; proficient with appropriate infor­
mation technologies; and able to evaluate and apply information to meet academic, personal,
and job-related needs.
Performance Indicators
a. Description of, number of student participants, and their perceptions of effectiveness and benefits of independent learning opportunities related to information literacy (e.g., locally produced tutorials/instructional software; reference transactions involving substantive teaching; term paper or other individual research advising sessions; training videos; Web-based instruction; printed guides).
b. Documentation of the extent and effects of the integration of library and network resources use within academic programs and across the curriculum. For example, the number, type, and results of information literacy-related degree requirements, course requirements, and assignments in each academic program. Results might include the number of students successfully completing assignments or courses, and actual student performance measures, such as grades, student self-evaluations, search logs/journals, course portfolio scores, and tests.

c. Longitudinal data for the same sample of students comparing freshmen's or sophomores' ratings of their level of confidence about being able to perform specific information literacy skills to those of seniors or recent graduates.

d. Perceptions of recent graduates about how their information literacy skills training/experi­
ence from undergraduate study contributes to their success in graduate/professional programs.

e. Success in applying information literacy skills on the job as perceived by alumni and employers.
f. Description of the information literacy program's reach and effects, including measures such as participation rate in formal and informal instruction/orientation; information literacy course completion rate and average grade per FTE student population; and an analysis of curricular penetration based on BI program statistics, student transcript analysis, or syllabi analysis.

g. Student (and for some items faculty and librarian) perceptions of the effects of network use
on becoming information literate and academic performance, such as: Has student use of the
network affected the quality of papers and projects? If yes, how? Have specific network
resources or tools improved one's ability to succeed academically? If yes, which tools/re­
sources, and how? 

2. All graduates possess the skills, abilities, attitudes, and knowledge specified in their
academic programs.
Enabling Instructional Outputs and Good Practice Criteria
a. Undergraduate programs, specifically the general education program, and graduate programs require students to become information literate.
Performance Indicators
i. See 1.b.-d. and f. above.
ii. Copy of undergraduate and graduate catalog or other program documents that specify the information literacy requirement.
b. Sufficient and appropriate library, network, and other information and learning resources, equipment, and services are provided and/or made accessible regardless of format or learner's physical location, and integrated into educational programs by required usage in courses across the curriculum.
Performance Indicators 



 

i.	 Number of hours students spend studying in the library and/or doing library/network­
based assignments. Data could be organized by academic program and correlated with GPA.

ii. To document access, use, and library infrastructure indicators, see selected perfor­
mance measures in figure 5.

c. A computing environment supporting direct-user access for all academic staff and students,
regardless of location or time, is effectively operating and reflects service linkages among
complementary units providing library, computing, and network services.
Performance Indicators
i. Data on perceptions of students, faculty, and staff of the effects of network services, such as:
• Has the network changed the way you study, teach, or do your job?
• Has the use of the network affected the quality of learning in the classroom?
• Has the network affected the quality of your mentoring/advising relationship?
• Has the network affected the way you do information retrieval, conduct research, or publish? If yes, how?
ii. Description, use of, and faculty and student evaluations of the benefits of Web-based instruction/training that librarians and others have produced/coproduced.
iii. Perceptions of all campus stakeholders of whether and how the network affects institutional image.

iv.	 Also see selected network access and use measures in figure 5, sections I and II.
d. New knowledge products and other instructional and information technology innovations to

improve distance education and on-campus independent and course-related learning have been
acquired and/or created locally by collaborations between library and other academic units.
Performance Indicators
i. See 1.a. above.
ii. Data on the number of products, use statistics, description of relationship to educational goals, and student/faculty perceptions of benefits of electronic or multimedia programs acquired or produced in collaboration with library departments.

iii. Quantitative and qualitative summary of the results of librarian memberships on instructional development/innovation committees and their collaborations with disciplinary faculty and other academic staff, particularly describing products or outcomes related to teaching and learning.


iv.	 See figure 5, sections I and II for selected access and use measures.
e. The academic environment is conducive to learning and promotes an awareness and appre­

ciation of multicultural diversity.
Performance Indicators
i. Data on the number of, description of the relationship to institutional goals (e.g., multicultural diversity, study skills), and student/faculty perceptions of benefits of exhibits, programs (lectures or films), multimedia, and Web-based programs acquired or produced/coproduced by the library.
ii. Number of minority staff and student workers employed in the library/learning resources units.


iii. Number of hours group study and work spaces are used by students for peer learning
and interaction.

iv. See figure 5, sections III.B and C, for indicators to document collections, facilities, and space. Particularly important to students are C.2 and C.3.

f. Effective instructional practices are employed, such as peer group interaction, problem­solving
assignments, appropriate use of instructional technology, and other active learning methods
that increase the extent and quality of student involvement in learning. 



 

Performance Indicators
i.	 Data from syllabi analysis of types of assignments involving library/Internet research.
ii. Student course evaluation ratings of the use and quality of active learning strategies such as required use of library and network resources, Web-based interactive tutorials, group projects, problem-solving assignments, etc.
iii. Student and faculty ratings of librarian teaching effectiveness.
iv. User survey data on effectiveness of independent learning programs such as audiovisual, multimedia, and Web-based instruction.
v. Number of hours group study and work spaces are used by students for peer learning and interaction.
g. Instructional objectives and student outcomes are clearly specified in academic programs and services so that what students are expected to know and do is evident.
Performance Indicators
i. Copy of the information literacy assessment plan, which includes a description of expected information literacy outcomes/competencies for general education and other academic programs and how competency or proficiency is determined.

ii. Summary of learning objectives for various levels of information literacy instruction,
including examples of lesson plans and assignments that specify and illustrate these
objectives.

h. Assessment plans, procedures, and processes are in place to evaluate and improve the qual-
ity and effectiveness of learning and teaching.
Performance Indicators
i. Copy of the information literacy assessment plan, which includes performance indicators for measuring student progress and achievement from college entrance/transfer-in to graduation; demonstrated application of good assessment practices, such as faculty involvement in developing the plan; use of multiple methods to gather data; and a statement of how assessment results are used for program improvement.

ii. Copy of the library's assessment plan, as well as examples of questionnaire items
included in other units' evaluation instruments related to library/network resources
and services. 

3. Graduates pursuing postbaccalaureate study possess the knowledge and skills to succeed
in graduate/professional programs.
Performance Indicators
a. Survey data of samples of recent graduates about how their information literacy skills training and experience from undergraduate study contribute to their success in graduate/professional programs.
b. Self-report data from graduating seniors rating their perceived ability to apply information literacy skills to graduate study and research.

c. Data from analysis of senior seminar and capstone experiences and portfolios used in spe­
cific academic programs. 

4. All graduates have the knowledge and skills to conduct an effective job search.
Performance Indicators
a. Survey data of recent graduates' perceptions of the usefulness of job-seeking library and network resources and library-sponsored or cosponsored workshops, exhibits, and services.
b. Number, description of, and student perceptions about the benefits of library, computer, and related information technology work experience programs, and internships.
c. Number of hits library-maintained Web pages receive dealing with careers and job hunting.




FIGURE 4
 
Other Institutional Outcomes and Outputs to Which Libraries Contribute 

I. Faculty/Academic Staff Research and Scholarly Productivity and Professional
Development Outcomes
1. Faculty/academic staff are active professionally and contribute to research, scholarly/
creative works, and community service.
Performance Indicators
a. Data documenting faculty/academic staff perceptions and experiences regarding the effects of network services and resources, such as:
• Has the use of the network affected the quality of teaching material used in the classroom?
• If you depend on the network for your work, for what types of uses do you depend on it?
• Has the network affected the way you do literature searches, conduct research, communicate, or publish? If yes, how?


b. Description, use of, and faculty and student evaluations of the benefits of Web-based and other instructional innovations in which librarians have been involved in the planning, development, or production.

c. Total number of grants secured, publications, presentations, creative works, instructional
development projects, and community service projects, with the number and percentage
of those benefiting from the use of library and/or network resources and services, such
as document delivery/ILL, and reference/research-support services.

d. Summary data of promotion, tenure, and merit awards for librarians, faculty, and other
academic staff. 

2. Faculty, librarians, and other academic staff maintain excellence in teaching and
equivalent academic support roles.
Performance Indicators
a. Student course evaluation ratings, including the use of active learning strategies, such as required use of library and network resources, group projects, and problem-solving assignments.
b. Student and faculty class evaluations of librarian teaching effectiveness, especially use of active learning strategies.
c. Data on effects of network services on performance, such as selected data for 1.a. above.
d. Description of the faculty/academic staff development program that includes data on the number, type, and attendee perceptions of the quality and benefits of training and other staff development opportunities.

II. Institutional Viability and Vitality Outcomes and Outputs
1. Student and staff recruitment and retention rates meet institutional targets and staffing needs.
Performance Indicators
a. Percentage of new faculty, librarian, and staff hires and new students who indicated that specific library resources and network services affected their decision to come to the institution.
b. Qualitative data describing reasons for student attrition and faculty/staff resignations.
c. Report organized by colleges or academic programs (including the library), summarizing number, percentage, and race/ethnicity of students (student workers for the library), faculty, librarians, and staff, as well as staffing needs in relation to enrollments and program needs.




2. The campus environment and morale promote operational excellence and effectively
support institutional goals.
Enabling Inputs and Outputs
a. Campus revenue is sufficient to support educational programs and other operations.
Performance Indicators
i. Description of the library's success in fund-raising activities and grants.
ii. Data on expenditures connected to academic program benefits, possibly also including costs of unmet demands/needs.
b. The campus governance structure includes appropriate staff and students in its committee memberships and contributes significantly to campus programs and services.
Performance Indicator
i. Quantitative and qualitative summary of the results of librarians' memberships on campus committees and their collaborations with faculty and other academic staff, particularly describing products or outcomes relating to teaching, student services, collection development, information technology planning, and assessment.
c. Institutional units cooperate and the institution collaborates, as appropriate, with neighboring K-12 schools, community colleges, and other organizations to improve education at all levels.
Performance Indicator
i. Copies of cooperative resource-sharing agreements and contracts, as well as other cooperative agreements, including documentation of the benefits to students and the cooperating units.
d. A computing environment supporting direct-user access for all academic staff, regardless of location or time, is effectively operating and reflects formalized service linkages among complementary units to support teaching, research, and administrative functions.
Performance Indicators
i. Perceptions of all campus stakeholders of how the network affects campus operations and institutional image.
ii. Description of the computing/network environment focusing on use statistics, benefits, and service linkages.
e. Campuswide assessment plans and procedures, developed by appropriate segments of the institution, are effectively employed to advance institutional goals and objectives.
Performance Indicator
i. Copy of the library's assessment plan(s), which includes demonstrated application of good assessment practices as well as examples of questionnaire items included in other campus units' evaluation instruments related to library and network resources and services.
f. The faculty and staff professional development program is operating effectively.
Performance Indicator
i. Description of the faculty/staff development program, which includes data on the number, type, and attendee perceptions of the quality and benefits of training and other staff development opportunities.




FIGURE 5
 
Access, Availability, Use and Infrastructure Measures
 

(Input and Output Measures)
 

I. Access and Availability Measures
(Where applicable, provide data separately for off-campus access/distance learners.)
1. Proportion of collections/materials listed in OPAC.
2. Extent and ease of access to library catalogs and databases for all campus constituents, but particularly distance learners.
3. User satisfaction/success rate in finding and obtaining materials.
4. ILL/document delivery fill rate and turnaround time.
5. Description and results of cooperative resource-sharing agreements and contracts with external information and document providers, including the benefits to students and cooperating units.
6. Speed of recall for items on loan requested by other users.
7. Match of hours open and electronic resources and services availability with user needs.
8. Speed and accuracy of reshelving of materials.
9. Description of how disabled users can access library resources.
9. Description of how disabled users can access library resources. 
II. Use Measures
(Includes reference and other user assistance services that facilitate use.)
1. Number of remote and nonremote log-ins to OPAC and other networked resources per capita.
2. Number of searches from remote and nonremote terminals per capita.
3. Number of hits library-maintained Web pages receive.
4. Number and/or percentages of faculty, students, and staff visiting library-produced parts of the CWIS.
5. Entrance gate counts per FTE and/or number of sign-ups for group study rooms.
6. Number of courses and students by academic program requiring use of library and network resources.
7. Circulation and in-house use of collections per FTE user, organized by academic program/major and user category.
8. Number of instructional software items delivered or charged out to faculty for classroom use in proportion to faculty population by academic program.
9. User satisfaction with use of selected materials, services, and facilities.
10. User satisfaction with availability and quality of reference assistance.
III. Infrastructure Measures
A. Human & Fiscal

1. Number of professional staff and how they are deployed to support campus and
library mission and goals (could include comparisons to peer institutions and/or
ACRL standards).

2. Ratio of reference and instruction/training services staff to users and/or potential
users (could include data documenting unfilled training demand).

3. Expenditures connected to academic program benefits, documenting costs of unmet
demand/need. 



 

B. Quality of Collections and Learning Resources (See also relevant measures in I and II
above)
1. Statistics, organized wherever possible by academic program or broad discipline, of the number of locally held or accessible collections and learning resources, and a description and assessment of how these support curricular, cocurricular, and faculty research needs.
2. Currency of materials for specific academic programs.
3. Provision of multiple copies of high-use items.
4. Flexibility of budget to respond to new subject areas.
5. Adequacy of library collections compared to peer institutions.
6. Description and results of periodic collection evaluations. 

C. Facilities/Equipment
1. Description of extent of campus network and its components; number of public access stations; number and percentage of classrooms, student labs, residence halls, and faculty offices that have access to the campus network; and number of dial-in and other access modes for off-campus students and staff.
2. User perceptions of the quietness of the study environment.
3. Number of seats and group study/workrooms per FTE and stakeholder perceptions about adequacy of such space.
4. Documentation on equipment (e.g., photocopiers, computer workstations) replacement and maintenance, along with user satisfaction ratings.
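To make the arithmetic behind a few of the figure 5 measures explicit, the Python sketch below derives OPAC log-ins per FTE student (section II.1), the ILL/document delivery fill rate, and mean turnaround time (section I.4) from raw counts of the kind most integrated library systems can export. Every figure and variable name is hypothetical and is shown only to illustrate the calculations, not to report data from any institution discussed in this article.

# Illustrative sketch only: hypothetical annual counts for one institution.
fte_students = 8500
opac_logins_remote = 41200
opac_logins_onsite = 96500
ill_requests_received = 2100
ill_requests_filled = 1840
ill_turnaround_days = [4, 7, 3, 10, 5, 6, 2]  # sample of per-request turnaround times

# Figure 5, section II.1: log-ins to the OPAC and other networked resources per capita.
logins_per_fte = (opac_logins_remote + opac_logins_onsite) / fte_students

# Figure 5, section I.4: ILL/document delivery fill rate and turnaround time.
fill_rate = ill_requests_filled / ill_requests_received
mean_turnaround = sum(ill_turnaround_days) / len(ill_turnaround_days)

print(f"OPAC log-ins per FTE student: {logins_per_fte:.1f}")
print(f"ILL/document delivery fill rate: {fill_rate:.0%}")
print(f"Mean ILL turnaround (days): {mean_turnaround:.1f}")

Comparable ratios in figure 5, such as circulation per FTE user or gate counts per FTE, follow the same pattern: a raw count divided by the FTE population, reported by academic program wherever the system can supply that breakdown.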