Australasian Journal of Educational Technology 2009, 25(5), 627-644

Examining barriers in faculty adoption of an e-portfolio system

Gerry Swan
University of Kentucky

This paper is a report on the findings of a study conducted on the implementation of a portfolio system at the University of Kentucky. Interviews were conducted with faculty members and university supervisors about the use of a portfolio management tool that had been implemented in the teacher education program. Factors such as small program size and the ability to engage in frequent interpersonal communication decreased the perceived need for a management information system. Faculty placed more value on high resolution data sources, but tended to use low resolution data when program size increased.

Introduction

Portfolios have gained popularity in teacher education over the last two decades as a means of assessing pre-service teachers. Many researchers believe that portfolios are well suited to the task of documenting the general competencies required of teachers, including knowledge of subject matter, intellectual ability and problem solving, pedagogical skills, curriculum knowledge, knowledge of learning and learners, and attitudes and dispositions considered appropriate for teachers (Anderson & DeMeulle, 1998; Naizer, 1997; Reis & Villaume, 2002; Simmons, 1996). Moreover, national accreditation agencies for teacher education programs have become increasingly rigorous in their evidence requirements. These recent changes in standards require colleges of education to systematically collect, analyse, and use data from a wide variety of sources across the duration of a student's experience in a teacher preparation program (National Council for Accreditation of Teacher Education, 2003).
Teacher preparation programs, in addition to evaluating preservice teachers' use of standards-based curriculum, academic rigor and classroom performance, must also use these data to inform their own practice, program quality and outcomes. Many colleges of education have employed portfolio assessment as a means to comply with the NCATE standards. While researchers have documented numerous benefits associated with the use of conventional hard copy portfolios in teacher education, an inescapable issue faced in portfolio construction is the amount of time needed to create portfolios and the space needed to store them (Anderson & DeMeulle, 1998; Reis & Villaume, 2002; Snyder, Lippincott & Bower, 1998; Stone, 1998). In addition to filing, storing, and indexing artifacts, considerable time must be spent formatting the portfolio so that reviewers can navigate through the voluminous materials. Participation of multiple persons such as mentoring teachers, faculty, and peers in the construction and subsequent evaluation of a portfolio is time intensive because of the difficulty associated with distributing and sharing a portfolio. The difficulties associated with the management and assessment of conventional, text based portfolios, coupled with the changes in accreditation, have led to the development and use of electronic portfolio systems that have become prevalent in the teacher preparation world (Gibson & Barrett, 2002).

Electronic portfolio systems

The availability of computer technologies such as the world wide web in educational settings has made the use of electronic portfolios highly feasible. Electronic portfolios are digital collections of artifacts used to provide a picture of a pre-service teacher's potential efficacy in the classroom. For the discussions in this paper, these artifacts can best be thought of as being either high resolution or low resolution types of data.
Low resolution sources of data give a sense of student performance but do not really differentiate between students. A course in which most students get the same grade might be considered a low resolution artifact. High resolution data sources, on the other hand, give more detail of a student's performance. Lesson plans could be considered a high resolution data source because they can vary in quality not just between students, but within the collection of one student's planning materials. This variation could be used to help demonstrate professional growth, or lack thereof, by a student in relation to some overall standard or in comparison with other students. The difference between high and low resolution types of data is a function of granularity, which ultimately leads to a better ability to discriminate one student from another. Scaled up electronic portfolio systems used for robust assessment applications differ from the first generation of digital portfolio 'collections' in that they are database-driven applications delivered through either the Internet or an intranet. The defining characteristic of this kind of portfolio system is that all portfolios are built around a central database (Batson, 2002). Students upload digitised artifacts to a web server and the artifacts are tagged and recorded in a database according to some scheme for storage and retrieval. Reflections and feedback from various sources can be added to the original artifact. These artifacts can then be filtered and sorted according to their 'data tags' to generate completed portfolios comprised of all related artifacts for an individual student, course, or an entire program. Gibson and Barrett (2003) offered a general description for such portfolio systems, which they referred to as "customized systems (CS)":

CS application maximizes cross-portfolio comparisons and, in general, seems more "top-down," controlled by an educational program.
Much of the process in a CS system - the flow of work and creation of linkages and documentation - takes place within the application, with most of the work products being made externally to the application using generic tools. (p. 1)

Trent Batson (2002), director of Information and Instructional Technology Services at the University of Rhode Island, cites three reasons for the emergence of electronic portfolio systems: 1) most student work (at the collegiate level) is now available in electronic form; 2) the web is rapidly becoming ubiquitous; and 3) building dynamic, database-driven web sites, as opposed to HTML link-driven sites, has become a more common skill among web developers. In addition to the technological advances that have put development of these tools within the reach of schools of education, he describes the potential that database-driven systems bring to the portfolio assessment process:

We seem to be beginning a new wave of technology development in higher education. Freeing student work from paper and making it organized, searchable, and transportable opens enormous possibilities for re-thinking whole curricula: the evaluation of faculty, assessment of programs, certification of student work, how accreditation works. In short, ePortfolios might be the biggest thing in technology innovation on campus. Electronic portfolios have a greater potential to alter higher education at its very core than any other technology application we've known thus far. (Batson, 2002, p. 1)

Rationale for study

One need only look at the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE) conference proceedings from the past five years to see that these types of systems are of increasing interest to the field: 2004 (0 presentations), 2005 (2 presentations), 2006 (3 presentations), 2007 (3 presentations) and 2008 (11 presentations).
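The database-driven mechanism described earlier, in which uploaded artifacts are recorded with 'data tags' and portfolios are generated by filtering on those tags for a student, course, or entire program, can be illustrated with a minimal sketch. The schema, table, and all names below are invented for illustration and do not describe the actual system discussed in this paper:

```python
# Hypothetical sketch of a tag-driven portfolio store: artifacts live in
# one central table and "portfolios" are just filtered views of it.
# All names and data are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE artifacts (
    student TEXT, tag TEXT, title TEXT, feedback TEXT)""")
conn.executemany(
    "INSERT INTO artifacts VALUES (?, ?, ?, ?)",
    [("student_a", "lesson_planning", "Unit plan: fractions", ""),
     ("student_a", "assessment", "Quiz rubric", "Good alignment"),
     ("student_b", "lesson_planning", "Unit plan: cells", "")])

# One student's complete portfolio...
portfolio = conn.execute(
    "SELECT title FROM artifacts WHERE student = ?",
    ("student_a",)).fetchall()

# ...or a program-wide slice for a single evaluation area
program_slice = conn.execute(
    "SELECT student, title FROM artifacts WHERE tag = ?",
    ("lesson_planning",)).fetchall()
print(portfolio, program_slice)
```

The point of the sketch is the design choice the literature describes: because every artifact sits in one central store, cross-portfolio comparison is a query rather than a manual collation exercise.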
Since the initial implementations of electronic portfolio (e-portfolio) systems in the early 2000s, there have been a number of publications related to e-portfolio systems. The student's experience is a common theme among articles focusing on electronic portfolios (Bartlett, 2002; Bartlett & Sherry, 2004; McKinney, 1998; Smith, Wickersham & Chambers, 2006; Wright, Stallworth & Ray, 2002). More recently, the Canadian Journal of Learning and Technology (Fall 2008, volume 34(3)) ran a theme issue on e-portfolios. Of the four empirical studies in the issue, three were specifically focused on student perceptions or use of the e-portfolio. The fourth included findings related to student use, but also had a secondary focus on instructor use of the e-portfolio. Of the other two manuscripts in the issue, one described a model for implementing e-portfolios (i.e. a program description) and the other was published in French. Publications describing implementations and frameworks for designing e-portfolios are also found in the literature. Evans and Powell (2007) offer cautions about using communities of practice as a metaphor in the discussion of e-portfolio implementations. Meeus, Questier and Derks (2006) describe the creation of a portfolio system at the Vrije Universiteit Brussel, but offer no empirical data for evaluating the implementation. Lambert and Corrin (2007) describe the design process and protocol for the campus-wide rollout being undertaken at the University of Wollongong. The authors note that as programs begin to "integrate the ePortfolio into new programs… subjects there will be further opportunity to evaluate the tool". This speaks to the emergent status of e-portfolio practice and research and the need for critical analysis of projects that are happening across the globe, including examination of how these tools interact with the existing practices of faculty and organisations.
Understanding the schisms created by the intersection of faculty practice and system design will inform not only the use of e-portfolio tools, but other MIS projects in general. Empirical studies of e-portfolio system implementations at several institutions are available, but findings from these studies focus primarily on the organisational level and do not detail the specifics of the implementation at the user-utility, program assessment level (Beishuizen, Van Boxel, Banyard, Twiner, Vermeij & Underwood, 2006; Strudler & Wetzel, 2005; Strudler & Wetzel, 2008; Wetzel & Strudler, 2005; Wilhelm, Puckett, Beisser, Wishart, Merideth & Sivakumaran, 2006). While the findings in these studies provide important heuristics for guiding implementation, the details of e-portfolio system implementations will help leaders who are spearheading electronic portfolio and database deployment to be more cognisant of the specific nature of the issues that will inevitably arise. For example, Wetzel and Strudler (2005) identify the need for sufficient training and support, strong commitment from teacher and technology educators, and clarity of purpose. In addition, there needs to be research that guides programs as they try to operationalise these broad reforms into practice. This study complements and elaborates the evolving body of literature by delving into the faculty side of the implementation equation and how the use of such systems intersects with and informs faculty practice. The study presented in this article is a qualitative analysis of the implementation of an e-portfolio management tool at a college of education based in a research institution in the southeastern United States.
An examination of the conditions, processes, and consequences that have influenced this implementation will inform a more complex understanding of how to optimise use and reduce barriers related to the use of online portfolios in educational environments.

Conceptual framework

E-portfolio systems are a specific example of what is termed a management information system (MIS). While there is little research in education on the implementation of these systems, researchers in business have been studying the implementation of MIS in non-academic settings for some time. One model of MIS implementation that has been described in this literature is the emergent perspective (Robey & Markus, 1984). The emergent perspective is guided by the assumption that people and technologies interact in symbiotic and unpredictable ways. The emergent perspective of MIS implementation is tied to interaction resistance theory, which is rooted in the belief that implementation problems arise at the intersections of designer intention and user perception. Resistance occurs when there are differences between the two. In accordance with interaction theory, "systems that centralize control over data are resisted in organizations with decentralized authority" and "systems that alter the balance of power in organizations will be resisted by those who lose power and accepted by those who gain it":

Resistance arises from the interaction of technical design features of systems with the social context in which the systems are used. New information systems may prescribe a division of roles and responsibilities that are at odds with the prevailing organizational culture. The greater the implied change, the greater the resistance. (Markus, 1983, p. 431)

This model of implementation requires continual adjustment to uncovered and often unpredictable barriers. Adaptation is necessary by both the developer of the system and the members of the organisation to facilitate adoption.
The key to adoption in this model hinges on the initial deployment of a system, followed by orchestrated monitoring and adaptation. When implementing MIS in environments that follow patterns predicted by this model, a high level of conflict management is crucial to achieving success. With the high degree of autonomy that comes with faculty practice and the multiple purposes that e-portfolios serve, the emergent perspective seems to be an appropriate scholarly lens through which to view system implementation.

The e-portfolio system and research setting

The portfolio system that was used in this study had grassroots origins in the teacher education program for which it was designed and initially used. The faculty member in charge of field placements for student teachers wanted to move to an electronic medium to help her manage the hundreds of reports that are processed for those field placements each year. During the student teaching placement, midterm and end of term evaluations are completed for each student. This documentation system results in the processing of over 300 evaluation reports each semester. When including the reporting requirements from the other major field experience, this number increases to over 450. Additionally, each sub unit or content area in the teacher preparation unit (e.g. science, mathematics, English, social studies, foreign languages, etc.) utilises some sort of portfolio. While each sub unit has its own procedures for handling student work, the underlying mechanisms of collection, sharing and feedback are common to any portfolio related process. It is important to note that the Director of Teacher Education at the study site supported the project. The data collection for this study took place after the two-year exploratory implementation.
The online portfolio system

The following sections describe the interface of the tool so that the reader is familiar with its features, what users do within the system and the layout of the online environment. To begin, the online portfolio system is designed to compile individual electronic portfolios from various materials that fit in one of two major components: field placement evaluation reports and student teacher artifacts.

Field placement evaluation reports

Evaluations from the student's field experiences are composed and stored within the system. The evaluation summary page is set up as a split screen: reports that are still being written are shown in the top part of the screen, while reports that have been completed are shown in the bottom portion (see Figure 1).

Figure 1: Evaluation authoring screen showing both in progress and completed evaluations

When the user is ready, he or she can submit the report so that it becomes available for inclusion within the student's portfolio. The report will disappear from the "in progress" section and appear in the "completed" section of the layout. Once a report has been submitted, it becomes accessible to the other persons who have privileges to view it, such as the university supervisor or faculty members. Faculty can access summaries of the field placement reports to perform cross group comparisons and assess student growth over time. This new system allowed faculty members to do potentially in minutes what previously would have taken weeks of clerical work.

Figure 2: A sample portfolio. Student loaded artifacts are shown on the right side of the screen. Faculty and supervisor reports and a student's reflections on them are on the left.

Artifact construction

Artifacts are created in a manner similar to composing an email. Reflections and comments about the artifact are entered into a text area.
The student then attaches digitised materials that are associated with the artifact. Examples of artifacts include items such as lesson plans, PowerPoint presentations, ancillary classroom materials, scanned or photographed certificates, etc. - evidence that the student teacher has fulfilled the requirements of the teacher education program. The artifact remains in an "in progress" state until the student marks it as "complete". The e-portfolio construction page is also set up as a split screen similar to the evaluation summary page, where artifacts that are still being built are shown in the top part of the screen while artifacts that have been completed are shown in the bottom portion. The student teacher uses a series of pull down menus to control which of the available artifacts are included in the portfolio and how they fit into the overall structure provided by the teacher education program. The classification scheme that students use to organise their artifacts correlates with the areas on which they are formally evaluated during their field placement experiences. Once the student completes this, she or he may hit the update button to automatically generate the portfolio (see Figure 2). A person viewing the portfolio gets an overall picture of the student and/or can drill down to see specific artifacts and the descriptions and feedback that accompany them (see Figure 3).

Figure 3: A sample artifact with feedback

In the United States, student teaching typically involves a student teacher and a triad of individuals who play both mentoring and supervisory roles. The student teacher is the pre-service teacher who is completing a field based practicum before graduating from the program and earning a teaching credential.
The support triad is made up of a faculty member; a cooperating teacher, to whose classroom the student is assigned; and a university supervisor, a role often filled by a doctoral student or retired teacher, who acts as an extension of the faculty member by visiting the classroom and meeting with the student teacher. In the context of this implementation, the student uploads artifacts, which are accessed primarily by the university supervisor and faculty members to gauge performance and provide the student teacher with guidance for improvement. The cooperating teacher completes midterm and end of term evaluations of the student teacher, which can be reviewed by the student teacher, university supervisor and faculty members.

Methodology

After completion of the two-year implementation, interview data were collected from faculty members, as well as written feedback from university supervisors in the teacher education program, regarding their experiences with the portfolio tool and their perceptions about using an electronic medium rather than a paper-based one. The research questions that framed this study were: Was the e-portfolio system used by faculty? If so, what were the perceived benefits and drawbacks? If not, what were the barriers to implementation and use? Interviews were selected as the primary data source because the focus of this study is to examine how the tool intersects with faculty practices. Interviews allow the researcher to follow up on occurrences relevant to the area of study that might be missed by surveys. University supervisors were surveyed because they serve as extensions of the faculty during the field experiences. Analytic induction techniques (Patton, 2002) were employed to review, code and understand patterns in the data.
Participants

For this study, eight of the ten faculty members involved in the undergraduate level field placement coursework experiences were interviewed. The sample was restricted to faculty directly involved in these field experiences because the initial functionality of the tool was devised to collect evaluation data from the field placements. Of the two faculty members who did not participate in the interviews, one did not wish to be interviewed but offered his thoughts via email, while the other was unavailable during the scheduled site visit. Of the faculty participants, three were non-tenure track general faculty. Of the six tenure track participants, three were tenured (the faculty member who offered his thoughts by email is included in this number). Three of the faculty participants were male and six were female. Additionally, the faculty members who were interviewed included the Coordinator of Field Placements and the Director of Teacher Education. Fourteen university supervisors were also invited to participate in the study. Each supervisor was sent an email outlining specific questions to address. They were also invited to provide any additional feedback on how the system was affecting their supervision responsibilities. Six of the fourteen university supervisors responded to these email requests. Table 1 provides a summary of the study participants.

Interviews with faculty

One extensive semi-structured interview lasting between 45 and 75 minutes was conducted with each faculty member who participated in the study. Patton (2002) outlines the combination of a conversational interview approach with a more guided style. This hybrid approach ensures that all participants address the same key issues, while still allowing flexibility in asking probing questions and capturing the unique context of each interview.
Faculty members were asked to describe their use or non-use in a discussion structured around the following framework of questions: 1) How do you currently prepare for accreditation? 2) What are the reasons that caused you to use or defer use of the portfolio system? 3) Are there any conditions under which you might envision that an electronic portfolio system could enhance the teacher education program? and 4) If so, are there any substantive aspects of the teacher education program that might change if the portfolio system is fully integrated into the teacher education program?

Table 1: Participant demographics: role, gender, rank, years at institution and tenure status

Participant  Role           Gender   Rank       Yrs. at institution  Tenure track
1            Faculty        F        Assistant  <5                   Y
2            Faculty        F        Assistant  <5                   Y
3            Faculty        M        Associate  >10                  Y
4            Faculty        F        Assistant  <5                   Y
5            Faculty        F        Associate  >10                  Y
6            Faculty        M        Assistant  <5                   Y
7            Faculty        M        Associate  >10                  Y
8            Faculty        F        Associate  >10                  N
9            Faculty        F        Associate  >10                  N
10           U. Supervisor  Unknown  N/A        N/A                  N/A
11           U. Supervisor  Unknown  N/A        N/A                  N/A
12           U. Supervisor  Unknown  N/A        N/A                  N/A
13           U. Supervisor  Unknown  N/A        N/A                  N/A
14           U. Supervisor  Unknown  N/A        N/A                  N/A
15           U. Supervisor  Unknown  N/A        N/A                  N/A

Depending on the initial responses each participant gave, follow up questions were offered. For example, Participant 8 talked about her use of the tool to create a shared repository of lesson plans; follow up questions were then asked about comparisons with previous attempts to do this. Notes were taken during the interview sessions as the conversation proceeded. With the participants' permission, each interview was audio recorded to supplement the field notes. The notes and audio recordings from each interview session were used to create summaries of the session, which were given to the participant as a form of member checking.
Email feedback from university supervisors

At the end of the semester during which the faculty were interviewed, university supervisors were asked to give feedback on the lesson plans submitted by student teachers using the e-portfolio system. The message sent to university supervisors was as follows:

I am doing a study on the implementation of [the e-portfolio tool] as well as generating a list of revisions for the next version. Because the University Supervisor is the lynchpin for making all of this work, I am interested in your perceptions on the use of the tool this past semester, both positive and negative. Please feel free to address any component of the use of the tool (policy, training, support, interface, working with faculty, etc.). Of particular interest: 1. What is the added benefit, if any, in picturing student performance? 2. Was giving the students feedback online useful? 3. How often did you meet with your TA's? If you could reply to this message with any feelings, comments, suggestions or complaints, I would be greatly appreciative of your input. If you do not wish to have your data included in this study, your input is still valued as it will help produce a better product for use in the teacher education program. 15-20 minutes of your time will help tremendously in producing a far superior product.

This message was sent to the supervisors twice and six university supervisors provided commentary on their use of the e-portfolio tool. All of the university supervisors who responded consented to having their comments used in the study. The email requests were sent after the semester had ended, which was most likely a factor affecting the response rate. Additionally, the narrative response format, requiring the composition of an email, could also have contributed to a lower than desired response rate.
Because the data were being used to complement the faculty interview data, these limitations were considered acceptable.

Findings

Several key themes associated with the research questions emerged. First, and perhaps not surprisingly with a newly implemented system, faculty members relied on previously established 'interpersonal' systems of communication to obtain data on students' performance. However, as the data show, the number of students supervised limited the practicality of this method. Second, university supervisors found the storage capacity of the e-portfolio useful, though they noted that redundancy of documentation could be problematic. Finally, longitudinal data collection at the program level, required for accreditation and in theory an ongoing process, is in fact for most faculty a just-in-time endeavour. The interview excerpts below elaborate these assertions, and give a greater and more detailed awareness of what took place during implementation of the e-portfolio database system.

Faculty tend to rely on existing interpersonal networks and conversation: Evaluation utility increases as the number of student teachers increases

The e-portfolio system replaced the previous paper-based system that was used to evaluate student teachers during their field experiences. University supervisors and cooperating teachers now submitted their midterm and end of term evaluations using the e-portfolio system. In order to access these evaluations, faculty would have to log onto the e-portfolio system. During the first year fall semester, there were no requests from faculty for access to the system (i.e. passwords, user ids, etc.), suggesting that many faculty members did not utilise this source of data. This inference was supported by interview data from the teacher education faculty.
For example, Participant 5 described what she perceived as the attitude of faculty members towards the end of year evaluations:

You know the unfortunate thing is that faculty are very busy and they get to the point where it's the end of the semester and these students have done well and there are no complaints. They think, 'It's a pass/fail class and I think I know the student… so why would I spend time going in and looking at them [the evaluations]'.

However, Participant 1 mentioned that she thought that interesting information could be gained from the student teaching evaluations, although she had not used the system for analytical purposes because she was not in the habit of using the evaluations. The classroom observation reports, the data sources she primarily employed as a measure of students' performance, were not yet integrated into the portfolio system because there was no departmental standard on the protocol. The following excerpt is from her interview, wherein she describes the ways in which she assesses the performance of student teachers in her program.

Researcher: Have you ever used the field evaluations in the past?

Participant 1: No, I haven't. I've relied more on the actual feedback and emails from the Cooperating Teacher. These are people that I know. Most of the Cooperating Teachers, I knew them when I was a teacher in the area. If there were something I needed to know about they would let me know. I think for me the biggest thing is that I am just not in the habit of using it. I think because I know these people because our communication has been more person-to-person. Their observations are paper, hard copy observations; that's the information that I am accustomed to dealing with as opposed to the information that is currently in [the system].

Researcher: The observations. Would you say that is the key data source you probably pay attention to?

Participant 1: If you mean right now, yes.
I'm getting those observations from the Cooperating Teacher on a weekly basis and I read them. So I can tell a lot from that. What the Cooperating Teacher is seeing, what the University Supervisor is seeing.

Researcher: So you like formative feedback on the day-to-day classes? [This] is the most important to get a pulse on how the students are doing?

Participant 1: Yes. And like I said at this point I think this is largely a question of habit. I have not been accustomed to getting this kind of information on [the system] and it takes a jump.

Participant 1 had worked in local school systems for fourteen years before joining the faculty. Because of the small number of students participating in student teaching at a given time and her strong professional relationship with the cooperating teachers in the local schools, she believed she had access to much more detailed performance data on her students than that available through the midterm and end of term evaluations. Importantly, this participant also had the fewest student teachers during the field placement term. Participant 6 had similar comments. He noted that he did not use the evaluations in the past because he could get a richer picture of a student's performance through conversations with university supervisors. He indicated that he knew his students well enough that the evaluation would not add anything new. This participant had weekly meetings with his university supervisors to discuss student progress. However, he also noted that as the size of his program increased, it had become harder to keep track of student teachers. As a result, he had begun looking up evaluations on the portfolio system during the most recent term. Participant 5 echoed the discrimination problem with the end of term field evaluations, a problem that surfaced during the last accreditation visit:

We did go back and aggregate all the evaluations and it was very labor intensive to do that.
I had doctoral students and secretaries working on it. We went back over and did three years and we got very little out of it. … it was so lumped and it was so hard to separate meaningful items out because we never did an item analysis or factor analysis.

The experience of Participant 5 provides important insights into the problems with the evaluation system. First, while the evaluations can be aggregated, the manual process of doing so is time prohibitive. Second, this clerical overhead prevents the meaningful analysis needed to refine the instrument and increase its utility.

Table 2 provides an overview of the nine faculty participants and the number of students they oversaw during the field placement. The faculty members dealing with larger numbers of students (Participants 2, 3, 4, 8 and 9) were more likely to use the midterm and end of term student teaching evaluations than those with smaller numbers of students (Participants 1, 5, 6 and 7).

Table 2: Program faculty use of midterm/end of term student teaching evaluations versus program size

Participant      Use of student evaluation    No. of student teachers
Participant 1    N                            8
Participant 7    N                            11
Participant 5    N                            13
Participant 6    N                            13
Participant 4    Y                            24
Participant 8    Y                            50
Participant 9    Y                            70
Participant 3    Y                            64
Participant 2    Y                            >100

As the data show, when there were 13 students or fewer, the respondents in this sample could rely on tacit, non-documented exchanges of information, and this form of evaluation was prevalent among this group. As the numbers increased, however, the need to document, store and manipulate documented reports became evident.
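Participant 5's account of hand aggregating three years of evaluations suggests the kind of clerical work an MIS could absorb. The following is a minimal sketch, not drawn from the system studied here, of automating that aggregation and taking a first step toward the item analysis the evaluations never received. The record shape, item names and the 1-4 rating scale are hypothetical assumptions.

```python
# Hypothetical illustration: automating the evaluation aggregation that was
# done manually. Record format, item names and the 1-4 scale are assumptions.
from collections import defaultdict
from statistics import mean, stdev

# Each record: (student_id, evaluation_item, rating on an assumed 1-4 scale)
evaluations = [
    ("st01", "classroom_management", 3), ("st01", "lesson_planning", 4),
    ("st02", "classroom_management", 2), ("st02", "lesson_planning", 3),
    ("st03", "classroom_management", 4), ("st03", "lesson_planning", 4),
]

def item_summary(records):
    """Group ratings by evaluation item and report count, mean and spread.

    A low spread on an item (it is "lumped") is a hint that the item does not
    discriminate between students and may need refinement."""
    by_item = defaultdict(list)
    for _student, item, rating in records:
        by_item[item].append(rating)
    return {
        item: {
            "n": len(ratings),
            "mean": round(mean(ratings), 2),
            "sd": round(stdev(ratings), 2) if len(ratings) > 1 else 0.0,
        }
        for item, ratings in by_item.items()
    }

summary = item_summary(evaluations)
```

Even this simple per-item summary, run routinely rather than reconstructed every accreditation cycle, would remove the overhead that made Participant 5's retrospective effort yield "very little".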
University supervisors in the field: Storage capabilities a plus, redundancy of documentation a drawback

All six university supervisors who provided comments about the system and the effect it had on their responsibilities mentioned the data storage capabilities of the portfolio system as being advantageous.

The portfolio system worked very well — great place for record keeping and tracking progress. (University supervisor 1)

It was extremely useful to have all of their lessons and responses in one electronic, paperless space. (University supervisor 3)

The record of student work on [the system] was beneficial to me in that it kept all submitted lesson plans in one place for me to view and review. I didn't have to keep paper copies of all lesson plans and my responses to them in increasingly thick file folders, as I had in the previous two years. In posting the lessons and lesson responses, the students and I could access the information - and check on changes to it - when it was convenient for us rather than always having to meet to exchange information. (University supervisor 2)

I found it helpful to view lessons online and add feedback. (University supervisor 4)

For those supervisors with conflicting schedules and time constraints, I think that the system is much more beneficial. (University supervisor 5)

I do see the need [for] a place to collect some of the Student Teachers' work. It is a good way to collect documents and show a student's progress. It's hard to coordinate schedules, so I don't see any other way of putting my hands on a lesson plan and responding to it. (University supervisor 6)

Four of the six university supervisors expressed some level of frustration with the use of the tool for documenting feedback on student teacher lesson plans. University supervisors did not object to giving feedback to student teachers.
In fact, their comments indicated that they found this part of their position rewarding, but they preferred less formal means, opting for verbal communication and face to face meetings rather than documentation. Again, there seemed to be a perception that the online system was less personal, and therefore less attractive to use. Moreover, for two of the university supervisors, documenting feedback through the portfolio system was seen as a redundant process.

I did meet with my students on a regular basis - typically once a week even if it was just to stick my head in the door and ask how it was going. In this way we were able to talk about concerns with the lessons face-to-face, which I actually prefer. (University supervisor 2)

I just found myself "transcribing" the conversations with the student teachers onto [the system] to meet this [giving feedback] requirement. So, in my situation it was redundant to me but that's because I kept in close contact with my student teachers and we communicated often by email and phone in addition to meetings. (University supervisor 5)

The other two university supervisors felt that the number of times they were required to give feedback on student teachers’ lesson plans yielded diminishing returns. In other words, they felt that they should not have to comment on every lesson plan; intermittent feedback was sufficient. The evidence suggests that both faculty and university supervisors preferred personal communication when feasible. As the feasibility of personal communication decreased, in the cases of higher numbers of student teachers, the perceived benefit of the portfolio system increased.

Accreditation is a just in time process for many faculty

As described earlier in the paper, the potential to facilitate processes such as accreditation is one of the reasons that e-portfolio systems have garnered so much attention in teacher preparation programs.
Accordingly, the study participants were asked to describe how they were preparing, or had prepared, for accreditation. Participant 1 and Participant 4 had both been at the university for at least three years and were the sole faculty members in their programs. Both indicated that they were not presently preparing for accreditation data compilation and had given it little thought. They had received neither training nor expectations regarding the process of accreditation data aggregation. These statements clearly indicate that accreditation data aggregation has a limited effect on the day to day practices of these teacher educators. They also suggest that there is no significant effort by the leadership to keep accreditation issues front and center during the day to day operation of the program.

When asked about the accreditation process and what types of data were used, faculty indicated the use of low resolution data, such as student teaching evaluations and course syllabi. The lack of student artifacts from field placements in part reflects the labour intensive process associated with the collection and storage of these documents. During her interview, Participant 2 described the use of course based materials and the difficulty in gathering student artifacts.

Well, it would be your paper evaluations. We would go through after the fact and get students to put together their portfolios. With it not being online we were [retroactively] collecting all sorts of lesson plans or unit plans… We were all trying to do that after the fact, which made it challenging, and you didn’t get a nice representative sample.

Participant 2 was involved in the accreditation process as a graduate assistant and as a faculty member.
She noted that the types of artifacts typically used in accreditation, such as lesson plans and students’ philosophy of teaching statements, were often hard to locate because there was no way of methodically storing and retrieving this type of student performance data. Searching for these artifacts was time consuming and thereby weakened the accreditation process. Participant 3 stated, “Students give a wealth of information that is punishing to use due to the tremendous clerical overhead involved with processing it”. The participant described the data collected in the program as “available, but not accessible”.

Comments from several of the faculty members interviewed during this study indicated that faculty members are often frustrated with the accreditation preparation process. Participant 6 also indicated during his interview that there is a wide range of opinions about the usefulness of accreditation. While he did not denounce the practice, he did not express any supportive comments regarding the process. Participant 2 felt there was “no institutional learning” associated with the process; rather, “You just go through and give them the information they want”. Whether because of the significant amount of clerical work involved in the accreditation process or an ideological chasm between the faculty and the accrediting body, it is clear that accreditation is not viewed as an integral part of routine practice by this group of faculty. Instead, it is viewed, as Participant 4 put it, as “a lot of hoops you have to jump through”.

Discussion and implications for departmental and college leadership

While the assessment of individual students is vital for any program, an equally important goal for any data collection or assessment system should be the improvement of the unit. Niguidula (1997) observes that school culture is the most crucial element “for making the digital portfolio a reform tool, rather than an electronic filing cabinet” (p. 26).
However, the findings in this study illustrate that the culture of assessment in this research setting is far from ideal for meeting the loftier goals of an e-portfolio system. While each department and university has its own characteristics, the structural similarities of academic institutions make it unlikely that the findings in this study are unique. In this section, the policy implications and a series of recommendations for the implementation of e-portfolio information systems in colleges of education are discussed.

Consensus building is one of the most common startup problems with portfolios (Anderson & DeMeulle, 1998; Reis & Villaume, 2002; Strudler & Wetzel, 2008; Zidon, 1996). Just as portfolio assessment requires faculty consensus to be successfully implemented on a program wide basis, the use of systems such as the one used in this study would presumably be strengthened by greater faculty consensus about the elements of supervision. However, leadership must be willing to provide the centripetal force to bring the faculty together and work through what could be a contentious process. As seen in the variability in how faculty members approach the student teaching experience, there is no uniform definition of how work is performed in the unit. Leadership must take a proactive and potentially authoritative role in connecting these different facets of the program. As stated earlier, the emergent perspective is largely about managing the conflict that arises with the implementation of an MIS. Project leadership must not shy away from conflict; indeed, it may need to be courageous enough to initiate conflict and see it through to resolution.

Warning: Formalising tacit procedures may cause tensions within programs

The example of end of year evaluations and their perceived limited utility by faculty in this study points to a challenge for developers.
Existing practices in the program may not yet be optimised to take full advantage of the robust capabilities of an MIS application such as the e-portfolio system described here. There are clearly parallel systems of how faculty “do the work” of assessing performance and how the department documents performance. There is a much higher clerical overhead associated with managing the high resolution data, such as lesson plans, digital movies or presentations and classroom observations, that faculty often use in the course of field placements. While these are clearly more valued, a global evaluation form fits better into the department’s previous data infrastructure. Additionally, past methods of meeting accreditation requirements did not demand the types of evidence required in recent years. Accrediting bodies, such as the National Council for Accreditation of Teacher Education (NCATE) and the Teacher Education Accreditation Council (TEAC), are increasingly asking colleges of education to assess the developing clinical practice of their students along a continuum and to use these data to refine their programs.

From Rogers’ theory of Diffusion of Innovations (2003), we know that people tend to use new tools in ways that conform to existing practices; that is, people do not reinvent practice in the short term. A first step may be the automation of existing practices as a way to ease faculty into a new medium. This may be a realistic goal for helping a program become familiar with the software environment being deployed, though it will probably not provide the “killer application” that is often sought with new technologies. In this study, for instance, faculty and university supervisors used the student teaching evaluations; they all understood the purpose for having them, as this was simply an automation of existing practice. The tension in the implementation was generated by the artifact collection and documentation of feedback.
On the surface, this seemed like a logical extension of what was already occurring in the program. However, this transition represented not just a change in medium, but also a change in defining how work was done, or at the very least documented, in the program. A task that had been totally decentralised and left to the discretion of the individual was now being prescribed by an “administrative system”. While some supervisors saw a benefit from the system, others saw it as a “paper trail” for an external audience. Although there are shifts in practice that need to occur if teacher education is to utilise MIS for program level decision making, project leadership needs to be mindful of existing norms and values to make the adoption of such MIS tools a smoother process; one that enhances, rather than erodes or competes with, professional identity and practice.

Challenge: Explore ways to make high resolution data more accessible

The extensive use of undocumented data, such as personal communication with university supervisors and student teachers, suggests that faculty focus on individual student assessment rather than on program level assessment. This shift from evaluating individuals at a specific point in time to monitoring the health and vitality of the program over a number of years is clearly unfamiliar, or perhaps a daunting task, to faculty, as the interview data in this study show. Several of the participants, both faculty and university supervisors, were accustomed to using high resolution but undocumented forms of assessment and data. These non-documented sources, while effective at keeping track of individual students, would be very difficult to employ in any sort of formal assessment or inter-program evaluation.
Wetzel and Strudler (2005) found in the programs with e-portfolios they studied that, “from an administrative perspective, an important next step was to move forward with efforts to aggregate data, both for accreditation purposes and for program review and improvement” (p. 242). From the faculty perspective, “important next steps were to streamline the process, making it more ‘do-able’ and sustainable” (p. 242). While these two perspectives can certainly be antithetical, they need not be mutually exclusive. Active development and involvement from academic institutions in both the design and evaluation of these tools is needed. Inevitably these tools will evolve, and research is needed to ensure that changes make practice more robust and not just different.

Conclusion

The use of e-portfolios can introduce expectations that significantly change the dynamics of a professional community. As faculty begin to negotiate these new expectations, it will be important to facilitate the changes so that parallel, redundant practices (which would simply mean more work) are not created. Rather, new policies and technological tools must support those involved as they develop new dimensions to their professional identities as data documenters, analysts, program assessors and communicators in the online digital world. These new roles, however, must not obscure the personal interactions and judgments that are prevalent and valued in this field based, extended community of practice in the schools.

Figure 4: A summary of high resolution classroom observations

To this end, based on the data collected for this study, the open source e-portfolio system was modified to include higher resolution data, including classroom observations. As part of an ongoing research agenda, we are exploring how to make the use of such high resolution sources more viable, especially in larger programs.
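To illustrate the kind of processing that can make high resolution observation data usable at program scale, the sketch below summarises a student × item matrix of observation scores and flags low-scoring rows and columns, the logic behind a conditional colour coded display. It is a hypothetical sketch only: the item names, the 1-4 scale and the flagging threshold are assumptions, not the studied system's actual design.

```python
# Hedged sketch of summarising classroom observation scores as a "heat map".
# Students are rows and observation scale items are columns; item names,
# the assumed 1-4 scale and the threshold are hypothetical.

THRESHOLD = 2  # ratings at or below this would be rendered red

# student -> {item: most recent observation rating}
observations = {
    "st01": {"engagement": 3, "pacing": 1, "assessment": 3},
    "st02": {"engagement": 2, "pacing": 2, "assessment": 3},
    "st03": {"engagement": 4, "pacing": 2, "assessment": 4},
}

def flag_rows_and_columns(obs, threshold=THRESHOLD):
    """Return the students (rows) and items (columns) dominated by low scores.

    A mostly-red row suggests an individual student who may need additional
    support; a mostly-red column suggests a cohort level intervention."""
    red_students = [
        student for student, items in obs.items()
        if sum(rating <= threshold for rating in items.values()) > len(items) / 2
    ]
    all_items = {item for row in obs.values() for item in row}
    red_items = [
        item for item in sorted(all_items)
        # Missing ratings are treated as not-low (threshold + 1).
        if sum(obs[s].get(item, threshold + 1) <= threshold for s in obs) > len(obs) / 2
    ]
    return red_students, red_items

students, cohort_items = flag_rows_and_columns(observations)
```

Because each new observation simply updates the matrix, a summary of this sort can be regenerated continuously, which is what lets a display distinguish an individual student needing support from an item on which the whole cohort is struggling.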
Figure 4 shows a reporting screen, currently being studied, that takes scale items from classroom observations and uses conditional colour coding to create a “heat map” of student performance in the field. Rows of red represent individual students who may need additional support, while columns of red indicate areas for cohort level intervention. As new observations are entered, they are incorporated into the summary. Initial research has shown that the processing of this high resolution data can provide the type of separation and granularity that is valued (Swan, 2009). This example illustrates one perspective on how to address the issue of making high resolution data more useable. Similar efforts are underway at other project sites to design procedures and interfaces that focus on student artifacts. Interfaces that provide a benefit to the everyday workflow could help e-portfolios thrive as an integral part of the instructional process rather than another thing for instructors to do. A statement from one of the participants of this study speaks to this need. As he noted, “students give a wealth of information that is punishing to use” and data in programs is “available, but not accessible”. The challenge of addressing this issue will be critical to making e-portfolios a transformational practice rather than merely a more robust filing cabinet.

References

Anderson, R. S. & DeMeulle, L. (1998). Portfolio use in twenty-four teacher education programs. Teacher Education Quarterly, 25(1), 23-31.

Bartlett, A. (2002). Preparing preservice teachers to implement performance assessment and technology through electronic portfolios. Action in Teacher Education, 24(1), 90-97.

Bartlett, A. & Sherry, A. (2004). Non-technology-savvy preservice teachers' perceptions of electronic teaching portfolios. Contemporary Issues in Technology and Teacher Education, 4(2), 225-247.

Batson, T. (2002). The electronic portfolio boom: What's it all about? Syllabus, December.
[viewed 4 June 2008, verified 18 Oct 2009] http://campustechnology.com/articles/39299

Beishuizen, J., van Boxel, P., Banyard, P., Twiner, A., Vermeij, H. & Underwood, J. (2006). The introduction of portfolios in higher education: A comparative study in the UK and the Netherlands. European Journal of Education, 41(3/4), 491-508.

Evans, M. & Powell, A. (2007). Conceptual and practical issues related to the design for and sustainability of communities of practice: The case of e-portfolio use in preservice teacher training. Technology, Pedagogy and Education, 16(2), 199-214.

Gibson, D. & Barrett, H. (2002). Directions in electronic portfolio development. Contemporary Issues in Technology and Teacher Education, 2(4). [viewed 8 Sep 2007, verified 17 Oct 2009] http://www.citejournal.org/vol2/iss4/general/article3.cfm

Lambert, S. & Corrin, L. (2007). Moving towards a university wide implementation of an ePortfolio tool. Australasian Journal of Educational Technology, 23(1), 1-16. http://www.ascilite.org.au/ajet/ajet23/lambert.html

Lawton, S. B. (2001). The ideal educational information system: Is it possible... or desirable? School Business Affairs, 67(6), 14-18.

Markus, M. L. (1983). Power, politics and MIS implementation. Communications of the ACM, 26(6), 430-444.

McKinney, M. (1998). Preservice teachers' electronic portfolios: Integrating technology, self assessment, and reflection. Teacher Education Quarterly, 25(1), 85-103.

Meeus, W., Questier, F. & Derks, T. (2006). Open source eportfolio: Development and implementation of an institution wide electronic portfolio platform for students. Educational Media International, 43(2), 133-145.

Naizer, G. L. (1997). Validity and reliability issues of performance-portfolio assessment. Action in Teacher Education, 18(4), 1-9.

Niguidula, D. (1997). Picturing performance with digital portfolios. Educational Leadership, 55(3), 26-29.

Patton, M. Q. (2002).
Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage Publications.

Potthoff, D. (1996). Striving for integration: A portfolio content analysis. Action in Teacher Education, 18(1), 48-58.

Reis, N. K. & Villaume, S. K. (2002). The benefits, tensions, and visions of portfolios as a wide scale assessment for teacher education. Action in Teacher Education, 23(4), 10-17.

Robey, D. & Markus, M. L. (1984). Rituals in information system design. MIS Quarterly, 8(1), 5-15.

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.

Simmons, J. (1996). Control the purpose, not the contents: Coaching the creation of teaching portfolios. Action in Teacher Education, 18(1), 71-81.

Snyder, J., Lippincott, A. & Bower, D. (1998). The inherent tensions in the multiple uses of portfolios in teacher education. Teacher Education Quarterly, 25(1), 45-60.

Stone, B. A. (1998). Problems, pitfalls, and benefits of portfolios. Teacher Education Quarterly, 25(1), 105-114.

Strudler, N. & Wetzel, K. (2005). The diffusion of electronic portfolios in teacher education: Issues of initiation and implementation. Journal of Research on Technology in Education, 37(4), 411-433.

Strudler, N. & Wetzel, K. (2008). Costs and benefits of electronic portfolios in teacher education: Faculty perspectives. Journal of Computing in Teacher Education, 24(4), 135-141.

Swan, G. (2009). Tools for data-driven decision making in teacher education: Designing a portal to conduct field observation inquiry. Journal of Computing in Teacher Education, 25(3), 107-113.

Wenger, E. (1998). Communities of practice: Learning as a social system. Systems Thinker. [viewed 20 May 2008 at http://www.co-i-l.com/coil/knowledgegarden/cop/lss.shtml, verified 17 Oct 2009 at http://www.open.ac.uk/ldc08/sites/www.open.ac.uk.ldc08/files/Learningasasocialsystem.pdf]

Wetzel, K. & Strudler, N. (2005).
The diffusion of electronic portfolios in teacher education: Next steps and recommendations from accomplished users. Journal of Research on Technology in Education, 38(2), 231-243.

Wickersham, L. E. & Chambers, S. (2006). ePortfolios: The first semester. Education, 127(1), 738-746.

Wilhelm, L., Puckett, K., Beisser, S., Wishart, W., Merideth, E. & Sivakumaran, T. (2006). Lessons learned from the implementation of electronic portfolios at three universities. TechTrends: Linking Research & Practice to Improve Learning, 50(4), 62-71.

Wright, V. H., Stallworth, J. B. & Ray, B. (2002). Challenges of electronic portfolios: Student perceptions and experiences. Journal of Technology and Teacher Education, 10(2), 49-61.

Zidon, M. (1996). Portfolios in preservice teacher education: What the students say. Action in Teacher Education, 18(1), 59-70.

Gerry Swan, Assistant Professor of Instructional Systems Design, Curriculum & Instruction/Mechanical Engineering, University of Kentucky, Lexington, KY 40506-0017, USA.
Email: gmswan3@email.uky.edu
Web: http://www.otisonline.org/ http://education.uky.edu/EDC/content/faculty#GSwan