Fall 2002 • Volume 10, Number 1

CAMPUS NOTES
Denise L. Rode, Senior Associate Editor

Evaluating Evaluations
Jeb Branin

About three years ago I realized that in coordinating the orientation program at Southern Utah University I was spending so much time on pre-program preparation that I was neglecting post-program reflection. Specifically, I wasn’t learning from the program evaluations administered to all students and parents participating in orientation. Simply put, the evaluations didn’t tell me what I wanted to know. They needed to be re-examined and retooled so that they were of greater value in program assessment. This re-examination led to a realization that I needed to expand my thinking about evaluations. There was more to a usable and valuable evaluation than trying to determine whether students and parents thought orientation was “good.”

Identifying the Primary Indicators

In working with colleagues to re-examine program evaluations, I determined that there are three primary indicators a participant’s evaluation of orientation should examine. On the surface, the evaluation should indicate whether the participant felt the program was effective, an indicator I call program quality. This indicator reflects the aforementioned question about orientation being “good”; in other words, did participants see the program as being of high quality? A deeper indicator is perception of usefulness. This indicator assumes that program evaluations should reflect whether students or parents thought the orientation was useful. This is necessary because students may feel some aspect of orientation was good but not feel it was useful to them; for example, an entertaining skit may be “good,” but the message delivered may not be perceived as “useful” by the participants. The third indicator is defining learning outcomes through mission alignment.
This assumes an evaluation should indicate whether students and parents are meeting the learning outcomes defined in the program’s mission statement. In my experience this proves to be the most difficult type of analysis, but it may also be the most important. Current trends in university accreditation standards, including those of the Commission on Colleges & Universities of the Northwest Association of Schools and Colleges (CCNASC) as quoted in the Institutional Self-Study at Southern Utah University (2001), state that, “Using the various methods of assessment, namely those appropriate to the activity or program, the institution and its units and programs must be engaged in assessing on a regular basis how well they are attaining their stated goals and objectives....” Or, in simple terms, “Are we doing what we say we’re doing, and how do we know?” This type of indicator is mission driven, as described by CCNASC’s Standards (2002), which state, “...evaluation proceeds from the institution’s own definition of its mission and goals.”

Jeb Branin is the Orientation Coordinator/Academic Advisor at Southern Utah University.

Practical Ways to Incorporate Each Primary Indicator in Participant Evaluations

In this article I look individually at each of the three indicators (program quality, perception of usefulness, and defining learning outcomes through mission alignment) and, based on what I have learned in our orientation program, explore practical ways they can be used in designing a participant’s program evaluation.

Program quality attempts to determine how “good” a program, or any aspect of a program, is. It examines quality as perceived by program participants. Questions for this indicator are simple inquiries as to whether participants “liked” what they experienced.
Either a forced-choice or an odd-numbered Likert scale can be used, and a place for “additional comments” after each question can generate suggestions for improvement. A forced-choice scale is even-numbered and forces the respondent to lean either positive or negative because no option evenly splits the scale. An odd-numbered scale allows for neutral responses; for example, on a five-point scale a response of three “splits the scale” and does not indicate a leaning in either direction. Either scale can be effective, depending on the questions asked and the goals of the assessment.

In evaluation questions with this type of indicator, we discovered that we got more helpful responses if we sought feedback after each individual session at orientation. Students and parents were so exhausted and overwhelmed at the end of orientation that they were apt to rush through their evaluations just to get them over with. I recommend either administering individual session evaluations that can be compiled (not unlike the evaluation method used at most conferences) or giving participants the complete evaluation instrument at the beginning of orientation and then providing time at the end of each session for them to fill out the appropriate section.

Program quality questions are often important components of a participant evaluation, especially for determining information like whether participants liked the type of food served, but I learned the hard way that an evaluation consisting mostly, or entirely, of questions of quality is not very useful and does not lend itself well to comprehensive program assessment.

The perception of usefulness indicator is a good complement to questions of program quality. It helps determine whether participants found orientation useful. A good idea in framing evaluation questions of this sort is to combine them with questions of quality.
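The per-session compilation described above can be sketched in a few lines of code. This is only an illustrative sketch: the session names, the five-point scale, and the response values are invented for demonstration, not drawn from our actual evaluation data.

```python
# Illustrative sketch: compiling Likert responses gathered after each session.
# Session names, scale size, and responses are invented for demonstration.

from statistics import mean

# Responses collected at the end of each individual session, on a
# five-point (odd-numbered) scale where 3 is the neutral midpoint.
session_responses = {
    "Campus Tour": [4, 5, 3, 4, 2],
    "Advising Session": [3, 3, 4, 2, 3],
}

NEUTRAL = 3  # midpoint of a 5-point scale; a forced-choice (even) scale has none

for session, scores in session_responses.items():
    avg = mean(scores)
    neutral_share = scores.count(NEUTRAL) / len(scores)
    print(f"{session}: mean={avg:.2f}, neutral responses={neutral_share:.0%}")
```

Tallying how often respondents choose the neutral midpoint is one way to judge whether an odd-numbered scale is hiding leanings that a forced-choice scale would surface.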
For example, pair a question like “How good was the campus tour?” or “Did you like the campus tour?” (with either a forced-choice or odd-numbered Likert scale) with the accompanying question, “How useful was the campus tour?” followed by a matching scale. It is advisable to explain the distinction between these two questions to participants to offset confusion.

In our orientation program we evolved from program quality indicator questions to perception of usefulness indicator questions about three years ago and were pleased by the results. However, we discovered that although we could now better use our evaluations to make program improvements, we still could not use them very effectively when asked for program assessment that was mission driven.

As part of our university’s accreditation preparation, all programs on campus were assessed using defining learning outcomes through mission alignment indicators. This was necessary because it was the standard by which CCNASC assessed the university. This type of indicator requires a clear institutional mission and a clear department or program mission that supports it. The department or program mission should state or imply specific learning outcomes. These learning outcomes will vary from institution to institution, although the Council for the Advancement of Standards recommends some guidelines. With clear learning outcomes, it is possible to evaluate whether a program is doing what it says it is doing.

The first step in creating evaluation questions of this type is to clearly understand the mission of the university. We held department meetings to closely examine the university’s mission and make sure that we all understood what it was saying. We then decided to rewrite our department and program mission statements to better align them with the university’s mission.
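One lightweight way to keep this alignment explicit is to record each learning outcome next to the evaluation item intended to measure it, so every question traces back to the mission. The sketch below is hypothetical; the outcome names and question wording are invented examples, not Southern Utah University’s actual mission content.

```python
# Illustrative sketch: tracing each evaluation item back to a stated
# learning outcome. The outcomes and questions below are invented examples.

learning_outcomes = {
    "know_campus_resources": {
        "outcome": "Students can locate key campus services.",
        "evaluation_item": "I know where to go for academic advising. (1-5)",
    },
    "understand_registration": {
        "outcome": "Students can register for their first-semester courses.",
        "evaluation_item": "I feel prepared to register for classes. (1-5)",
    },
}

# Every evaluation item maps to exactly one stated outcome, which makes
# mission-driven reporting straightforward to assemble.
for key, entry in learning_outcomes.items():
    print(f"{key}: '{entry['evaluation_item']}' measures '{entry['outcome']}'")
```

Keeping the mapping in one place also makes gaps visible: an outcome with no evaluation item, or an item with no outcome, signals a question the evaluation should add or drop.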
We added to our mission strategic priorities, along with specific plans for how we would measure each priority and, where applicable, the learning objective for each priority. We then designed evaluation questions that addressed each learning outcome. The end result was an evaluation that indicated what orientation participants perceived they had learned. As part of the overall program assessment, we could then identify the program’s strengths and weaknesses in teaching participants what our mission claimed they would learn. Although we are still fine-tuning our evaluations using defining learning outcomes through mission alignment indicators, they are proving far more useful to us as both assessment tools and catalysts for programmatic improvement.

Conclusion

In the process of evaluating participant evaluations, we have explored our orientation program more deeply and thoroughly than ever before. That in itself has been exciting and rejuvenating, and it has helped us implement new programmatic components. We have been able to provide better end-of-program assessment reports to administrators who, as a result, seem to have a better understanding of what we do in our program. Most importantly, we are developing a much clearer picture of what orientation participants are learning, and their learning is always our bottom line.

References

Commission on Colleges & Universities of the Northwest Association of Schools and Colleges. (2002). Standard One: Institutional Mission and Goals, Planning and Effectiveness. Available at: http://www.cocnasc.org/policyprocedure/standards/standards1.html

Southern Utah University. (2001). Institutional Self-Study: Questions and Answers about Accreditation. Available at: http://www.suu.edu/ad/accreditation/questions.html