ANALYTIC TEACHING AND PHILOSOPHICAL PRAXIS Vol. 30 No.1

Learning Assessment: Hyperbolic Doubts versus Deflated Critiques

Andrew N. Carpenter & Craig Bach

ABSTRACT: Arguments against outcomes assessment often provide powerful portrayals of assessment as anathema to quality teaching and learning in higher education. However, we two philosophers, with extensive experience designing, implementing, and managing outcomes assessment, find these arguments to be less than convincing. In this paper, we present a philosophical analysis of some of these arguments with the goal of unpacking their exact strengths and weaknesses. In doing so, we are more interested in discussing these arguments in the context of assessment (or conceptions of assessment) well done and well managed rather than reading these arguments as attacks on poorly implemented versions of assessment. In short, we aim to get at the realistic possibilities of using assessment as a tool for improving instruction, curricula, and student learning. We also advocate scholarship of teaching and learning that aims to improve theories of learning assessment and to develop new models and methods of assessment.

INTRODUCTION

Arguments against learning outcomes assessment are too frequently framed in hyperbolic and overly emotional terms: “outcomes-assessment practices in higher education are grotesque, unintentional parodies of both social science and ‘accountability’” (Fendrich, 2007); assessment is the death of the humanities at the hands of the social sciences; assessment is a direct, offensive attack on the authority of the faculty and academic freedom.1 These and similar published, sometimes whispered, arguments against learning outcomes assessment portray the practice as an external, overly bureaucratic and compliance-focused process that has little or no regard for the academic disciplines, the faculty who practice within them, and their students’ true learning needs.
The authors of this article—two philosophers with extensive experience designing, implementing, and managing learning outcomes assessment—find these arguments to be less than convincing as attacks on the enterprise of learning assessment. Indeed, the authors have found that most of these arguments contain important insights that inform new and more nuanced approaches to learning assessment. The sustained resistance to learning outcomes assessment seems to have three main sources. First, and most importantly, is the focus on assessment as a compliance and accountability issue with little or no recognition of its possible benefits. For example, while the American Philosophical Association’s current statement on learning assessment is an improvement over the defensive posturing contained in the previous version, it still focuses on accountability and basic definitions, omitting discussion of the transformative possibilities of doing assessment (APA, 2008). Second, implementing learning assessment drives a shift in cultures. Understanding whether, and how well, our students are learning both in courses and across their programs of study often requires changes in how we teach our courses and how we collaborate with our colleagues. Both involve long- and strongly-held, often implicit, beliefs about the role and responsibilities of the faculty. Third, almost all of the regional accreditors develop the conversation on assessment in terms of building a culture of assessment, as if doing assessment were an end in itself. Of course, it is not. Building a culture of effective learning (if one wants to speak in terms of culture building) is most likely a more accurate statement of the end goal of learning assessment. The focus on mechanism and method, instead of the goal of learning, places assessment outside the broader context of all the efforts we already undertake to support improvements in teaching, learning, and curricula.
The conceptual isolation of assessment is inappropriate and damaging, and not dissimilar from asking the field of astronomy to set as its main goal the development of a culture of telescopes (McEachron & Bach, 2010). The shift of focus, perhaps small, has a distinct influence on how learning assessment is presented and leads to many of the current problems found in its implementation.2 Reinforcing the impact of these sources of resistance is the fact that persons identified to lead institutional assessment efforts are often hired in the midst of a looming accreditation visit and respond by managing the assessment effort as a compliance exercise. It is not surprising that learning assessment has not been broadly and energetically embraced. At the heart of the push to better articulate and evaluate student learning is a seemingly simple value proposition: institutions should be able to demonstrate the value of the investments made in them. Here there is nothing specifically about learning assessment – value can be articulated in terms of better job prospects, contribution to the economy, or improved social cohesion. Learning assessment enters the conversation when the claim is further refined by the addition that institutions should be able to demonstrate their value in terms of their core purpose. In the case of colleges and universities, that core purpose is learning. On the face of it, neither of these propositions is unexpected or untoward. The former pushes us to communicate the value of an education and the latter to do so in more discrete terms related to the breadth and depth of learning within a curriculum. The problems arise in the specific manner in which the propositions are interpreted and the ways institutions decide to address them. The shift in culture coincident with learning assessment does not impact all departments or institutions evenly.
Departmental and institutional differences regarding disciplinary autonomy and authority, academic freedom, and faculty autonomy and satisfaction, as well as differences in mission, play key roles in the perceived threat and encumbrances of implementing learning assessment. Some of these threats are real, but they are not threats from learning assessment; rather, they are threats from learning assessment implemented without appropriate respect for disciplinary and methodological differences across areas of study. In fact, even in the most silo-structured departments (should any exist), where faculty would not think to discuss their teaching with other faculty members, where sections of the same course are incommensurable, and where all courses are independent of, and not intentionally connected to, a programmatic vision, a robust learning assessment model that meets accountability goals and informs departmental members in meaningful ways about student learning could still be implemented. Additionally, the authors have found that the discussions that are part of implementing learning assessment are often a catalyst for re-evaluating and better articulating a range of assumptions about how departments work, the responsibility of faculty to their students, the level of expected collaboration among faculty, and the roles and responsibilities of students. In this paper, we identify six key arguments that either directly attack outcomes assessment or attempt to marginalize its role in higher education. For each argument, we present its full exaggerated and impassioned form and then pull out its core elements to reconstruct the argument into a moderated form. We then analyze each reconstructed argument with the goal of unpacking its strengths and weaknesses.
We argue that these emotionally deflated critiques, while raising significant and meaningful concerns, pose no decisive objections to the enterprise of assessment but rather provide useful insights and criticisms that can inspire and inform the development of effective, faculty-driven assessment. In moving the conversation forward, we are more interested in discussing these arguments in the context of assessment (or conceptions of assessment) well done and well managed than in reading these arguments as attacks on poorly implemented versions of assessment. The authors are not interested in being boosters for learning assessment in a war of rhetoric. We aim to get at the realistic possibilities of rational, thoughtful work on learning assessment to improve instruction, curricula, and student learning—most especially in our home discipline of philosophy and in other disciplines that may not be well served by existing assessment practices.

SIX FUNDAMENTAL CRITIQUES OF ACADEMIC ASSESSMENT

Below we address six arguments that present significant and fundamental critiques of academic assessment. Our analysis of these arguments rests on two assumptions: 1) assessing the strengths and weaknesses of these critiques provides conceptual resources that can improve the theory and practice of academic assessment, and 2) exaggerated, hyperbolic presentations of the critiques block constructive dialogue and so retard the development of effective assessment of student learning. It is our aim, therefore, to inspire our readers to reject hyperbolic forms of these critiques in favor of more moderate variants about which both opponents and proponents of current assessment practices can engage in constructive dialogue. We are confident that such dialogue would lead to significant improvements in the theory and practice of learning assessment.
The Nature of Higher Education Argument

The first argument we consider maintains that learning assessment is a misguided venture that misunderstands the nature of student learning. According to this objection, there exist vital aspects of higher education that either are not captured by assessment efforts or, worse, are disrupted or otherwise harmed by those efforts. For example, perhaps assessment is incompatible with the holistic nature of a college education: student learning is not reducible to discrete bits of knowledge of the sort that assessment methods seek to measure. Or, it may be the case that assessment practices undermine the mentor/protégé relationship that is important in some higher education classrooms. Finally, and perhaps most powerfully, the project of academic assessment flies in the face of the vital facts that, first, higher education is an ongoing process that extends from one classroom to another throughout students’ academic careers and, second, that much of the benefit and value of higher education is revealed years after graduation and so in principle is immeasurable while students are enrolled. In its strongest form, this objection seeks to demonstrate not only that assessment practices do not measure what is important about student learning, but that they systematically devalue important pedagogical techniques and learning activities and thus may serve to delegitimize and undermine the use of some of the most important aspects of teaching and learning. If this line of thought is correct, then there exists a good reason for educators actively to resist assessment. A weaker variant of this argument maintains that attempts to assess learning will inevitably fail to capture the most important aspects of students’ college education.
Both versions assume—rightly, we think—that much of what matters the most about education does not occur until years after graduation, that the most important forms of learning cannot be reduced to discrete knowledge bits, that the most significant learning often involves close, personal, and unique relationships between students and faculty, and that the deepest and most important way to view learning within higher education is as an ongoing process of education rather than a series of specific destinations corresponding to each class, term, or academic year. We submit that these points present significant challenges to meaningful learning assessment, and we acknowledge that there exist some types of learning that are difficult, perhaps in some cases impractical or even impossible, to assess. Yet we need not make a strong impossibility claim about some types of learning to move this argument forward. We can merely recognize that any system of assessment leaves things out, whether by necessity, intention, or accident—one cannot capture all learning that occurs in a class or program. This conclusion, however, does not justify abandoning attempts to assess learning that can be measured well. In fact, the conclusion points to the importance of clearly distinguishing the kinds of learning that can be meaningfully and efficiently measured from other, perhaps more important areas that are not readily amenable to assessment. This effort has two benefits: 1) it creates a space for faculty to work together to define what learning is important, what learning can be assessed with available methods, and what learning cannot be assessed (e.g., because assessing it is too resource intensive, difficult, or impractical), and 2) it helps faculty to develop a learning narrative that ensures that efforts to assess the learning that can be measured will neither delegitimize learning that cannot be measured nor undermine educators’ attempts to help their students to secure that learning.
With respect to understanding exactly which learning is assessable, we see two reasons for optimism about the prospect of educators developing sophisticated methods to measure learning in some of what now appear to be “hard cases.” First, we note that assessment need not always be in the aggregate: measures of student learning can attempt to capture unique cases (student-faculty interaction) and learning across courses and years (program-level assessment). Second, we also note that it is possible to measure some “down the road” learning: these assessments are not impossible to conduct, merely difficult, because it is hard to secure good access to students 5, 10, or 20 years after they have completed their studies in order to understand what learning remains important over those intervals. There is another way to approach the nature of higher education argument. Through the very acts of conducting discussions about which student learning is easily measured and which important learning is impractical or perhaps impossible to measure; by articulating clearly the various types of learning that are expected of their students in specific classes, across programs, and past graduation; and by discussing what learning is most meaningful to their students, faculty engage in an important aspect of learning assessment that takes seriously the lessons to be learned from the moderated nature of higher education criticism.
If these educators further determine what information would count as evidence of student achievement of any learning priorities that currently seem difficult or impossible to assess, and discuss possible ways of gathering that information and determining how well students have achieved those priorities, then they are engaged in work that advances assessment practices in ways that respect the pluralistic conception of the types of significant learning that we believe constitutes an important lesson to be drawn from the moderated critique. Finally, if faculty members articulate those areas of student learning that cannot be assessed, then it follows that conversations can occur about what learning of these areas might look like and how the assessable areas of student learning relate to them. These discussions can then support a coherent learning narrative that integrates assessed activities and collected evidence with other areas of importance to the faculty – creation of this collaboratively-developed learning narrative is an example of very good and very sophisticated learning assessment. If the optimism we express above is correct—that some significant learning that cannot be measured by current assessment practices will become measurable as higher education faculty develop new assessment methods—it follows that the educators designing assessment processes, first, need to ascertain exactly which types of learning can adequately be measured using existing methods and, second, need to explore creatively and thoughtfully new methods of assessing the important areas of student learning that cannot be measured using currently available methods. 
Perhaps most importantly, those responsible for planning and conducting assessment need to become proactively engaged in creating productive scholarly conversations about these issues and need to publicize their investigations in ways that prevent external agencies from mandating reporting that devalues learning that is not easily measured or that cannot be measured at all.

Social Scientism

The next objection we consider focuses on the limitations of a methodology that some critics assume assessment typically or even necessarily adopts. According to this objection, the practice of academic assessment is mired in assumptions and methods of social science that are inadequate for measuring student learning. The most direct and emotionally laden presentation of this argument is the remark noted at the beginning of this essay, “assessment is the death of the humanities at the hands of social science.” This sentiment is further, but no less emotionally, articulated by Fendrich (2007): “what’s currently practiced as outcomes assessment may have a place in the fields of mathematics and the hard sciences (emphasis on the word may), it’s a destructive blunderbuss when applied to the arts and humanities.” In its strongest form, this objection first maintains that the impact and value of one’s own work as an educator can only be properly understood by methods of inquiry located within that discipline and then rejects the idea of trans-disciplinary methods of assessment. Typically, this critique takes the form of an argument against a form of social scientism that is unable to comprehend the most important and distinctive elements of disciplinary teaching; those elements, these critics urge, cannot be understood by generic social science assessment methodologies.
A weaker version of this argument maintains that those assessment methodologies can measure some important aspects of learning, but cannot address the disciplinary core of what those outside of the social sciences do as educators. If either of these objections is true, then measurement of learning based on social science methodologies is so limited that it would be harmful to make important decisions about education and learning on the basis of those assessments. This argument, for example, underlies Andrew Davis’s (1988) skepticism about “making judgments about rich cognitive achievement on the basis of limited samples of behavior” (p. 61)—the substantive heart of the critique is that, for many disciplines at least, the data gathered and analyzed through the methodologies of social science are unable to support meaningful conclusions about the forms of student learning about which faculty in many disciplines care the very most. We agree that the objects of knowledge of the social sciences are very different from the things that members of other disciplines would want to understand about the learning that occurs in their classes; this is the case, for example, in our own academic discipline of philosophy. From this, it follows that methods of inquiry designed around the objects of knowledge studied in the social sciences are unlikely to tell us everything we want to know about learning in our philosophy courses and programs. However, we also maintain 1) that there are some important aspects of learning in our discipline that are well captured by the methods of social science, and 2) that assessment of learning need not be based solely on social science methods, and educators are free to use or reject the research methodologies of social science as they develop discipline-appropriate assessment methods.
The arguments to support the first point are overwhelming and frequent, and we won’t spend time on them.3 However, the second point gets very little, if any, sustained attention in the literature. During our work on assessment across disciplines, we have developed a working hypothesis related to the social scientism argument that provides a framework for our second response to the argument. The hypothesis goes something like the following: while there are aspects of student learning that are amenable to assessment using common methodologies (including social science methods), for each discipline methods of learning assessment can be developed that are aligned to how that discipline investigates and engages the world. For example, good models of assessment have been developed in engineering departments that bring key engineering concepts (e.g., process improvement, controls, systems analysis) to bear on developing meaningful learning assessment. Moreover, these discipline-based approaches can find meaningful use in other contexts, including the development of new, cross-disciplinary assessment models and methods. Consider the case of the authors’ home discipline, philosophy. Philosophers too need to develop and integrate assessment methods that meet their needs as educators and that are appropriate for the learning they hope their students will secure. But this effort is not just about defining appropriate methods for our own discipline: the methods of philosophy can contribute to many aspects of learning assessment, including the refinement of assessment theories and the development of new assessment practices. The following two examples will illustrate the point:

Learning Outcomes

A foundational activity of learning assessment is the development of learning outcomes.
The outcomes are usually developed within a larger hierarchy of outcomes and may use a range of theoretical frameworks (e.g., general-specific, course-to-program-to-institution, Bloom’s taxonomy, Maslow’s needs hierarchy, Kolb’s learning cycle) or categorical structures (e.g., Gardner’s multiple intelligences, Perry’s categories of knowing) to organize them. These are common practices; however, there are many areas of imprecision that regularly occur in these structures that impact their validity. As Bach and Carpenter (2006) point out, “there are inconsistencies within many implementations concerning how program/course/lesson outcomes are defined: What are the appropriate levels of generality in which to express them? How do we identify when one outcome falls under another outcome of greater generality? To address both these concerns, we would expect to see a set of criteria, definitions, and necessary and sufficient conditions to refine and delineate the concepts and relationships at issue.” The fact is one rarely sees this level of rigor. An appropriate philosophical (i.e., conceptual) analysis addressing these questions would add a great deal to the literature and current practice.

Learning Goal Development

One method of developing a set of learning goals for a program is through a faculty-led conceptual analysis of current and historical syllabi, assignments, and student work as primary texts. The process focuses on what faculty and students are actually doing in their classrooms and can make use of the methodology of philosophical conceptual analysis.
This conceptual analysis would be intended to identify the range of learning occurring in a program, ways of supporting student success (e.g., assignments, instructional support), and methods of evaluating student work and prioritizing the learning that is evidenced in the documents, in ways that extend the rigorous analysis of concepts related to learning outcomes discussed in the preceding paragraph. This extended conceptual analysis would thus use a philosophical methodology to facilitate conversations among faculty in any discipline and could serve to create the kind of scholarly community that we identify above as a hallmark of sophisticated assessment. Simply put, the social scientism critique overlooks the possibilities of constructing diverse assessment methods that are appropriate to the learning that occurs in different subjects and disciplines. Revisiting Fendrich’s comments from the start of this section, we note that instead of proposing that she work with her colleagues to develop new assessment methods suited for their own discipline, Fendrich gloomily concludes that “we in higher education—especially those of us who teach fine arts, drama, dance, literature, history, religion, and philosophy—have, of course, brought this plague of pedagogical bean counters upon ourselves” (2007). We maintain that creative disciplinary and cross-disciplinary approaches to developing assessment theory and practice can go a long way towards keeping those bean counters at bay.

Reductionism

In its strongest form, the reductionism argument maintains that assessment of student learning is an illegitimate enterprise because the external authorities who demand that assessment take place erroneously assume that assessment is to be equated solely with objective testing, and that assumption embodies a problematically narrow conception of education and learning.
According to this objection, the assessment enterprise seeks to reduce and redefine learning in a way that does not capture the sophisticated and rich activities that matter the most to educators and their students. A more moderate version of this objection acknowledges that there exist other methods of assessing student learning, but notes that they at best play a marginal role due to their conflict with the strong orthodoxy that maintains objective tests are the “gold standard” to which all assessment should aspire. Since this argument centers on a fundamental misunderstanding of the nature of learning in higher education, one can view it as an example of the first criticism we discuss above, the nature of higher education argument—and, indeed, the points we lay out above apply to the specific line of thought upon which we focus here. Even so, we would like to make additional points about the anxieties common in higher education about the misuse of objective tests in learning assessment. We see three reasons to be anxious about the use of objective tests to assess student learning. First, we agree that objective testing cannot capture much of what educators do (this is certainly true of our work with our philosophy students). Second, we agree that there is growing pressure from accreditors and the federal government to use standardized tests (e.g., ETS Proficiency Profile, CLA4) to measure student learning. Third—and even though we believe that testing can adequately measure some aspects of student learning in any discipline—we maintain that objective testing is not a sufficient tool for capturing all learning that is important to any discipline. However, we reject the charge that the assessment enterprise is hopelessly reduced to objective testing.
There exist many other methods of assessing and reporting student learning that can meet accreditation/accountability goals and provide meaningful information to educators and students. There is no reason why educators cannot develop those methods in ways that satisfy the demands of external parties. We further suggest that there is great benefit in educators doing this as a means of ensuring that policy work by the federal Department of Education and regional accreditors is informed by more complex and meaningful concepts of learning, theories of assessment, and assessment practices. Indeed, we submit that a significant reason why we are assailed by inappropriate and reduced conceptions of assessment is that higher education has not responded consistently or sufficiently, with creativity and scholarly sophistication, to calls for accountability. Further, this lack of innovation in response to calls for accountability is not solely about how assessment is regularly conducted (e.g., testing, portfolios, rubric-based performance assessment) – there exist significant opportunities for theorists and practitioners to build upon existing academic literature that discusses ways to respond innovatively and creatively to calls for accountability and transparency (Astin, 1990; Hernon & Dugan, 2006; Mentkowski & Associates, 2000). It is also about a lack of innovation in our conceptions of institutions of higher education, academic freedom, curricular governance, faculty autonomy, the role of the learner, and many other core beliefs about the enterprise of postsecondary teaching and learning.

The Perfectionist Fallacy

In its starkest form, this objection maintains that, because it is difficult or impossible to assess all learning, assessment of student learning is absurd.
In its more moderate forms, this criticism maintains that the most important aspects of learning are not measurable in ways that will meet accreditation or accountability demands and so assessment is fundamentally a bankrupt enterprise. We have already expressed our agreement with the claim that many of the goals of education are difficult to measure, and we have already identified many reasons why this is so: the time frame in which this learning is demonstrated (in many cases, perhaps, extending years beyond graduation), the lack of consensus among academics about what this learning entails, the individual nature of much of this learning, the abstractness and complexity of some learning, and the growing pressure to meet accountability demands through objective tests. As we have also already discussed, each of these difficulties strikes us as tractable. First, there exist many other methods of assessment than objective testing that can meet accreditation and accountability goals. Second, these methods already provide meaningful information to educators and students about significant types of student learning. Third, we are optimistic that thoughtful work by educators can create new assessment models and methods that may be able to measure additional significant forms of learning. To those points we additionally submit that the two claims, 1) that no single method of assessment is perfect, and 2) that not all important learning is easy or even possible to measure, do not imply that we should give up on the enterprise of assessment; instead, they are more meaningfully interpreted as a call for practitioners and scholars to creatively identify opportunities to gain whatever meaningful information they can about student learning using methods they deem appropriate. This pragmatic approach strikes us as a useful counter-model to objections that are based on what strike us as an objectionable perfectionism. 
Similarly objectionable is the perfectionist demand that assessment be undertaken only after all practical and theoretical problems have been solved, and then only by “experts”—the development of faculty-driven assessment methods and models strikes us as a much more constructive response to the difficulty of assessing the various forms of learning that are significant to us as educators and to our students. We also note with relief that almost all of the regional accrediting bodies embrace a pragmatic rejection of perfectionism insofar as they identify the importance of efficiency, usefulness, moderation, and reasonable accuracy in their criteria of good assessment and insofar as they promote incremental implementation and measured continuous improvement of assessment practices at the institutions they accredit.

The Argument from Authority

The fifth criticism we consider maintains that academic assessment constitutes a breach of trust that disrespects academics’ professionalism. At its most extreme, this line of thought becomes an angry assertion along the lines of ‘the world should trust that I teach well, that my students learn what I say they learn, and that what I teach is good to be taught’ and then concludes—with resentment and rancor—that assessment is a pointless attack on the authority of the faculty. A less extreme version of this critique focuses on professional credentials. A more moderate—but still angry—critic might maintain ‘I have earned my authority in my area of study by my standing in the field and by my Ph.D., so my expert grading of student work should be a sufficient measure of their learning and is sufficient for purposes of accountability—anything more than this constitutes a waste of my time.’ We agree that most college and university faculty are expert educators who have a strong intuitive understanding of their students’ learning.
We also agree that these faculty members are uniquely positioned to understand student learning. However, we maintain that these facts demonstrate that learning assessment should place faculty at the core of the development of assessment models and methods and of the analysis and use of resulting data; far from constituting a challenge to doing systematic learning assessment, educators’ professional authority highlights the importance of faculty-centered and faculty-driven assessment efforts of the sophisticated sort that we advocate in this paper, including collaborative efforts to advance our theoretical understanding of learning assessment and to develop new assessment models and methods. With that said, and taking nothing away from the importance of faculty being at the center of the assessment process, focusing solely on instructional inputs (e.g., degrees held, publications, funded research dollars) neither guarantees good outputs (e.g., student learning) nor provides much information about the best methods to teach specific topics, the best materials to use, or the kinds of support students need to succeed in a particular area. In order to become collectively smarter about educating students—all kinds of students of different ages, backgrounds, life experiences, motivations, and abilities—we first need a better grasp of what students actually learn and how well they learn it. Without this information, most of our efforts to improve teaching and learning will be hit-or-miss exercises.
To add to what we have already said about the development of learning assessment by productive scholarly communities, we maintain that faculty should lead efforts to determine the most meaningful consensus levels (e.g., fields of study, departments, cross-institutional, national) and the most meaningful kinds of learning to measure and report, and should work to develop efficient methods that are meaningful to faculty and students and meet administrative, regulatory, and social needs (noting that the latter usually fall out of the former).

The Pigeon Hole Argument

The final criticism we address maintains, in its strongest form, that assessment is only about accountability and compliance and therefore is not about improved learning and teaching. Fendrich (2007) offers an example of the pigeon hole argument that is striking for its rhetorical ferocity:

Outcomes-assessment practices in higher education are grotesque, unintentional parodies of both social science and “accountability.” No matter how much they purport to be about “standards” or student “needs,” they are in fact scams run by bloodless bureaucrats who, steeped in jargon like “mapping learning goals” and “closing the loop,” do not understand the holistic nature of a good college education... Whatever their purpose, outcomes-assessment practices force-march professors to a Maoist countryside where they are made to dig onions until they are exhausted, and then compelled to spend the rest of their waking hours confessing how much they’ve learned by digging onions.

A more moderate variation of this criticism laments that assessment is being driven by administrators concerned about accreditation and accountability requirements and concludes from this that educators have few or no opportunities to connect learning assessment to their concerns about student learning and teaching.
According to both lines of thought, assessment is a bankrupt, time-wasting, and energy-draining enterprise that claims in bad faith to exist to improve learning and teaching. It is true that most assessment efforts across higher education have been prompted by accreditation requirements, legislated reporting requirements, or ranking competitions/peer comparisons (e.g., U.S. News and World Report, NASULGC’s College Portrait, Presidents’ Forum Transparency by Design), and we also agree that this sometimes means that persons leading assessment are more connected with those concerns than with student learning and good teaching. So, to this extent these criticisms have merit. However, and as our prior comments about scholarly communities of faculty advancing the theory and practice of learning assessment suggest, we believe that what is correct about this line of criticism does not show that assessment is a bankrupt enterprise. To the contrary, it is clear to us that assessment can provide meaningful information about student learning and teaching that can help improve both. That external compliance issues constitute a key motivation for assessing student learning in the first place does not detract from the benefits of assessment for higher education faculty and students. In fact, looking at the argument from the other direction, most of the success the authors have had in implementing learning assessment programs at several institutions is due to a focus on what is meaningful and useful to individual faculty and the departments in which they teach. By responding to what is most important to the faculty and departments, and by developing assessment around their priorities, the odds are much higher that assessment data will inform curricular, instructional, and budgetary decisions. And, by “closing the loop,” responses to compliance and accountability concerns fall out as a byproduct.
Fendrich’s argument also points to the downside of the backwards approach promulgated by most accrediting agencies and discussed in the introduction of this paper. To revisit, the focus on building a culture of assessment (rather than focusing on effective learning) has helped create several of the most pressing problems with which the accreditors are now dealing. These include the lack of documented use of assessment data (i.e., closing the loop) compared to the amount of effort spent on creating data collection systems, the lack of sustainability of most assessment systems in the absence of a looming accreditation visit, and the simple observation that not many institutions are doing assessment well. In short, the overriding purpose of assessment is to support and improve teaching and learning, and we submit that focusing on that priority meets learning and teaching goals, organizational development goals, and accountability goals. However, assessment efforts that focus on the compliance aspects of assessment usually fail to meet learning and teaching goals and, therefore, are unsustainable and ultimately cannot meet the very compliance goals upon which they focus. Treating assessment primarily as a compliance matter is perhaps the least effective model of assessment practice (Astin, 1990; Maki, 2004), and for critics to pigeon-hole assessment as being only about accountability and compliance is to make a correspondingly large intellectual mistake.

CONCLUSIONS AND RECOMMENDATIONS

The main goal of this discussion has been to respond thoughtfully to several common and powerful arguments against learning assessment. These arguments, though often hyperbolic and blustery, contain insights important for the development of learning assessment methodologies and (although this is the topic for a different paper) new approaches to evidence-based teaching and learning. We wish to highlight four conclusions emerging from our responses to these arguments.
First, we conclude that the field of learning assessment can be informed by incorporating rational responses to legitimate critiques considered in their deflated, non-hyperbolic forms—avoiding the over-emotionalism and destructive rhetoric of the hyperbolic forms is crucial for this to be possible. Second, we conclude that assessment can be done well and meaningfully with good effects, and need not address all learning in order to be a fruitful exercise. Third, we conclude that disciplinary approaches are an important part of learning assessment, as are (following our example of the use of philosophical analysis and critique to improve thinking about learning outcomes) cross-disciplinary efforts to support and influence the development of learning assessment. Fourth, we conclude that the success of learning assessment is dependent upon its focus on improving teaching and learning.

We end by expressing our hope that readers will be inspired to join communities of scholarship that will advance the theoretical and practical development of learning assessment. As we have argued above, thoughtful and creative scholarship of teaching and learning is perhaps the most important means of addressing the challenges of assessing the many types of learning that are important to us and to our students, of ensuring that assessment is done well and in ways that benefit us, our students, and our institutions, and of repelling the specters of the reductionist bean counters and bureaucrats that too frequently inflame the overheated rhetorical passions of published and whispered criticisms of learning assessment.

Endnotes

1. The second statement was communicated to one of the authors during a departmental assessment conversation; the original attribution is as yet undetermined. The last comment is a synthesis of statements both authors have heard repeated over the past six years.
2. The claim being made here is an existence claim about a specific kind of behavior that is prevalent across institutions, accrediting agencies, and, most pointedly, in the field of philosophy. In making the claim, the authors are informed by the rich literature connecting assessment to learning (Astin, 1993; Maki, 2004; Dwyer, 2006).

3. To note just a few examples: 1) a range of surveys provide meaningful information about important areas of student learning across disciplines (e.g., the IDEA Center’s student survey tools and methodologies, the National Survey of Student Engagement), 2) targeted use of standardized testing (e.g., the Collegiate Learning Assessment, ETS major field tests), and 3) longitudinal, mixed-method studies and content analysis of learning documentation can provide meaningful insights into areas of student learning.

4. The Measure of Academic Proficiency and Progress (MAPP) assessment has been renamed the ETS® Proficiency Profile. The CLA is the Collegiate Learning Assessment, initially developed through RAND’s Value Added Assessment Initiative.

References

American Philosophical Association. (2008). APA statements on the profession: Outcomes assessment. Retrieved from http://www.apaonline.org/documents/governance/APA_Outcomes_2008.pdf

Astin, A. W. (1990). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. American Council on Education Series on Higher Education. New York: Maxwell Macmillan International.

Astin, A. W. (1991). The philosophy and practice of assessment and evaluation in higher education. New York: American Council on Education/Macmillan Publishing.

Bach, C., & Carpenter, A. (2006). The benefits of philosophical analysis for criminological research, pedagogy and practice. Professional Issues in Criminal Justice: A Professional Journal, 2(1), 3-16.

Bach, C., & Carpenter, A. (2008, August). A philosophical assessment of the strongest arguments against academic assessment.
Paper presented at the American Association of Philosophy Teachers Seventeenth International Workshop-Conference on Teaching Philosophy, Guelph, Ontario.

Davis, A. (1998). The limits of educational assessment. Journal of the Philosophy of Education, 32(1), 1-192.

Dwyer, P. (2006). The learning organization: Assessment as a change agent. In P. Hernon, R. Dugan, & C. Schwartz (Eds.), Revisiting outcomes assessment in higher education (pp. 165-180). Westport, CT: Libraries Unlimited.

Ehrmann, S. C. (2009). Frequently made objections to assessment and how to respond. The TLT Group Flashlight Evaluation Handbook. Retrieved from http://www.tltgroup.org/Flashlight/Handbook/FMO.html

Ewell, P. (2009). Assessment, accountability, and improvement: Revisiting the tension. National Institute of Learning Outcomes Assessment Occasional Paper, 1, 1-24. Urbana, IL. Retrieved from http://www.learningoutcomeassessment.org/documents/PeterEwell.pdf

Fendrich, L. (2007). A pedagogical straitjacket. The Chronicle of Higher Education. Washington, DC. Retrieved from http://chronicle.com/article/A-Pedagogical-Straitjacket/20446

Finkelstein, M. (2003). The morphing of the American academic profession. Liberal Education, 89(4), 6-15.

Gosling, D. (2000). Using Habermas to evaluate two approaches to negotiated assessment. Assessment and Evaluation in Higher Education, 25(3), 293-304.

Hernon, P., & Dugan, R. (2006). Future directions in outcomes assessment. In P. Hernon, R. Dugan, & C. Schwartz (Eds.), Revisiting outcomes assessment in higher education (pp. 367-396). Westport, CT: Libraries Unlimited.

Mackenzie, J. (2008). Conceptual learning in higher education: Some philosophical points. Oxford Review of Education, 34(1), 75-87.

Maki, P. L. (2004). Assessing for learning: Building sustainable commitment across the institution. Sterling, VA: Stylus.

McEachron, D., & Bach, C. (2010).
Drexel EdApps: Freeing faculty for innovative teaching. Forthcoming in the Proceedings of the 8th International Conference on Education and Information Systems, Technologies and Applications, Orlando, FL.

Mentkowski, M., & Associates. (2000). Learning that lasts: Integrating learning, development, and performance in college and beyond. San Francisco, CA: Jossey-Bass.

Romer, T. (2003). Learning and assessment in postmodern education. Educational Theory, 53(3), 313-327.

Shumar, W. (1997). College for sale: A critique of the commodification of higher education. Bristol, PA: Falmer Press, Taylor & Francis, Inc.

Slaughter, S., & Rhoades, G. (2004). Academic capitalism and the new economy: Markets, state, and higher education. Baltimore: The Johns Hopkins University Press.

Address correspondence to:

Andrew N. Carpenter
Professor of Philosophy
Director, Center for Teaching and Learning
Ellis University
acarpenter@ellis.edu

and

Craig Bach
Teaching Professor, Goodwin College
Associate Vice Provost, Office of Curriculum and Assessment
Drexel University
bachcn@drexel.edu