Australasian Journal of Educational Technology, 2014, 30(5). 533 ascilite

An interactive digital platform for preservice secondary English teachers’ reading: First iteration findings

Judine Ladbrook
University of Auckland, New Zealand

Preservice teachers of secondary English need sustained and confident experiences of the pedagogical affordances of information and communication technologies (ICTs) if they are to overcome the constraints perceived by their secondary teaching colleagues. They also need to rapidly develop an extensive knowledge of adolescent fiction titles for progressing the reading engagement and success of their future students. Building on the students’ acceptance of ICTs, this study examines the impact of using an interactive digital platform within preservice secondary English courses for adolescent reading requirements. This paper reports results of the first iteration of the innovation. Data were collected via a questionnaire, and results show that using an interactive digital platform with social networking characteristics for writing and reading book summaries augmented knowledge of titles, increased motivation for reading, developed a repository of titles for use in the first year of teaching, and demonstrated how a digital platform might be successfully used in schools. Recommendations for the second iteration of the innovation are also reported.

Introduction

Recommending young adult fiction titles for adolescent readers can be a challenge for early career English teachers who have not had years to read a range of titles. Many secondary school English departments have long supported students’ wide or extensive reading via initiatives such as Sustained Silent Reading (SSR) programmes, wide reading programmes, “Hooked on Books” units and reading room provisions.
In addition, English Achievement Standards at Levels One and Two of the National Certificate of Educational Achievement (NCEA) in New Zealand (that is, the first 2 years of assessment for national qualifications in the secondary school system, when students are typically between the ages of 15 and 17) include Standards where students independently read texts. It is, therefore, important for preservice subject English teachers to rapidly increase their knowledge of titles that will appeal to a variety of interests and abilities, and help to enhance the engagement and reading success of their future students. For this reason, a reading log has been a long-standing requirement in the secondary preservice English courses at the institution where this research is taking place. Students must meet the requirement in order to complete the courses, even though it does not earn them marks. The rationale for this is that they should be able to recommend titles to students but do not necessarily need the same depth of knowledge of these texts as of those selected for classroom study. The log requires extensive rather than intensive reading, and encourages quantity over quality in terms of responses. Not giving marks for the log reinforces the idea that this extensive reading is basic to the role of being an English teacher, that not doing it would put them at a disadvantage when they enter the profession, and that it is an enjoyable activity. If their future role in schools includes fostering a positive attitude to reading and enthusing their students about the value of reading for leisure, then having a log as a requirement rather than a course assessment in their preservice year is congruent with this. Doing the reading is the reward.

Context

In the Faculty of Education where this research has begun, there are three courses in the preservice secondary teacher education one-year programme for subject English.
The first two courses are co-requisites, concentrate on the first 3 years of secondary subject English (Years 9–11) and are delivered in the first semester. The third course is focussed on Years 12 and 13 and delivered in the second semester. Historically each course has had two streams, each having its own lecturer for 36 hours of face-to-face work. Recently, delivery has changed. Since 2011, the two streams in each of the first semester courses have become one stream in the second semester course. In 2013 the two first semester courses have had both streams coming together in an Interactive Large Class Teaching (ILCT) initiative for half of the time (36 hours), with the two course lecturers alternating between the sessions, and workshop sessions with their stream lecturer for the other half of the time.

The Faculty undertook the ILCT initiative for several reasons. Moving to more lecture-style sessions with larger numbers can increase student independence and teaching efficiency. But, in a faculty tasked with teacher education, it is paramount that sound pedagogical practices are associated with innovation. Many past initiatives into ILCT have narrowed interactivity to a variety of student response devices such as clickers, electronic voting systems and answering multiple-choice questions or giving feedback via handheld devices (Boyle & Nicol, 2003; Draper & Brown, 2004; Liang et al., 2005). However, small group interaction and collaboration, the ability to quickly change group sizes depending on learning activity needs, the capacity for small groups to present to each other, and differentiating the work of small groups all contribute to recognising student needs and the social nature of learning. Also, in large teaching spaces it is essential that everyone sees and hears all that is happening.
The ILCT spaces are, therefore, flat-floored, with mobile tables in hexagonal groups that are easily halved into groups of three, and provided with a large data show screen that mirrors onto smaller screens around the room; the mirroring can cover all screens, or half of them when alternative information is displayed. Recent technological affordances have seen schools purchasing iPads for school student use. To facilitate our students’ interactions, as well as provide them with practical learning experiences using recent technology, the spaces are equipped with iPads at a ratio of 1:2. These can be pushed via AirPlay to the data show and the small screens, which enables small group report-backs using iPad tools.

Despite significant resourcing to schools, there are still assertions that the uptake of information and communication technologies (ICTs) is uneven (Bate, 2010; Ertmer & Ottenbreit-Leftwich, 2010; Hurd, 2009; Levin & Wadmany, 2008; Somekh, 2008; Wachira & Keengwe, 2011) and it can be argued that “the key to assisting teachers with effective technology integration is to make meaningful changes in teacher preparation” (Hur, Cullen, & Brush, 2010, p. 162). There is also evidence that our tertiary students may not have the ICT and information literacy skills to ready them for the classroom (Doiron & Asselin, 2011; Kennedy & Judd, 2011; Kumar & Vigil, 2011) and research has suggested models that are successful for preservice teacher ICT development (Cavin, 2008; Lambert & Gong, 2010; Steketee, 2005). The ILCT initiative, the suggested skills of tertiary students, the uptake of ICT in schools and the role preservice teacher education can take in this, coupled with the need to increase our students’ familiarity with adolescent fiction and create a collaborative community across two teaching streams, provided the incentive to seek an online tool for the reading requirement.
Peerwise (http://peerwise.cs.auckland.ac.nz) was chosen as the online tool because it alerted the English students to some of the affordances of using technologies for similar activities with their future students, provided an opportunity to use technology at an individual pace, used some features of social networking platforms with which many students are familiar (for instance, following a contributor and giving ratings), was easy to access, required no technical skills and, unlike a blog, provided a tool where students answer each other’s questions and, therefore, read each other’s contributions. This latter aspect was seen as a positive when considering ways to increase preservice students’ knowledge of adolescent reading titles. Further ICT and collaborative leverage happened when students initiated a Facebook page to discuss content, assignments and social activities. Course lecturers chose not to be involved in these conversations so that between-student collaborations could be totally in the students’ control.

The Peerwise Tool

Developed at the University of Auckland by Paul Denny, Peerwise is a web-based interactive tool where students (anonymously to peers) generate multiple-choice questions based on the content of their courses and write an explanation for their correct answer. They can also answer, rate and comment on the questions submitted by peers. Contributors can alter questions based on reader feedback, upload images, follow other contributors, give their submissions subject tags and earn a score based on their authoring, answering and rating. They can also earn a variety of 22 badges for such things as being a good question author, asking and answering questions, following others, being followed, and so on. The main purpose of using the site for reading logs was to provide a tool with which students could write log entries and read the entries of others.
The course requirement asked students to write and read 6 logs per course (equating to 12 logs written and 12 read over the first semester). The site did not allow for this without students generating a multiple-choice question for each piece they wrote. For the multiple-choice question to be answerable it had to be based on what students wrote in their log, rather than on the book they reviewed (because their peers would not necessarily have read it) or on course content. The purpose was not related to an ability to write questions, distractors and answers, but the site made it necessary for students to do this and to rate each other’s questions. The site’s tracking system allowed lecturers to monitor the number of entries students had written (and therefore books read) and the number of multiple-choice questions they had answered (and therefore the number of logs they had read). This provided the means for lecturers to check that course log requirements were met. This was less time consuming than in previous years when, over the two courses, lecturers had read each log entry (for 2013 this would have meant reading 492 log entries). The requirement, which was written in the course booklets, was explained to students at the first lecture and Peerwise was demonstrated on the data show. Examples of writing, generating questions and responding had been entered into the site for this demonstration. In addition, the tool’s “define course messages” instructor option was used to reiterate what should be in student logs. Students also had the site’s online student instructions to which they could refer. They were asked to look at the instructions and log in after the first lecture.
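The completion check that the site’s tracking system made possible can be sketched in a few lines. The sketch below is hypothetical: Peerwise’s actual reporting format is not described in this paper, so each student’s activity is assumed to be available as a simple pair of counts.

```python
# Hypothetical sketch of the lecturers' completion check. Peerwise's real
# reporting format is not documented here, so each student's activity is
# assumed to be available as a (logs_written, questions_answered) pair.
REQUIRED_WRITTEN = 12   # log entries written over the two first-semester courses
REQUIRED_READ = 12      # peer logs read, evidenced by answering their questions

def incomplete_students(counts):
    """Return the students who have not yet met the reading log requirement."""
    return [name for name, (written, answered) in counts.items()
            if written < REQUIRED_WRITTEN or answered < REQUIRED_READ]

counts = {
    "student_a": (12, 14),   # met both parts of the requirement
    "student_b": (6, 12),    # short on written entries
    "student_c": (13, 150),  # well beyond the requirement
}
print(incomplete_students(counts))  # ['student_b']
```

Checking counts in this way, rather than reading every entry, is what reduced the lecturers’ workload relative to reading all 492 entries individually.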
Literature review

Preservice student teachers and ICT

Although most preservice student teachers have grown up surrounded by ICTs, there is evidence that framing them as ‘digital natives’ (Prensky, 2001) – that is, ubiquitous and savvy users of digital technologies – is erroneous. They are not a homogeneous group (Jones, Ramanau, Cross, & Healing, 2010; Kennedy & Judd, 2011; Valtonen et al., 2011), nor necessarily proficient ICT users (Hammond, Reynolds, & Ingram, 2011; Lei, 2009). There is evidence that today’s tertiary students need assistance to increase their information and critical literacy skills (Doiron & Asselin, 2011), that they rely heavily on Google (Rowlands & Nicholas, 2008) and that their research is driven by “efficiency and predictability” (Head & Eisenberg, 2010, p. 1). Kumar and Vigil (2011) assert that even when preservice students are adept users of Web 2.0 and social networking sites, their skills do not automatically transfer to their professional lives. In fact, there is some evidence that in this regard they are reluctant to merge their personal and professional abilities (Jones & Lea, 2008; McNeill, Diao, & Gosper, 2011). However, an argument can be made that their acceptance of ICTs should be built on, because there is clear evidence that a positive attitude to ICTs does affect what happens in classrooms with school students (Abbitt, 2011; Bai & Ertmer, 2008; Ertmer & Ottenbreit-Leftwich, 2010).

The first year of teaching and the school context

Despite the significant ICT outlay in schools, there is still evidence of teacher-perceived constraints in utilising ICTs in the classroom. Access, identified by Ertmer (1999) as a first-order constraint, is still problematic for some (Bate, 2010; Hew & Brush, 2007; Hurd, 2009), as is teacher confidence (Ladbrook & Probert, 2011; Levin & Wadmany, 2008).
Recent research illustrates that the limited time allocated for sustained professional learning on the pedagogical affordances of ICTs (Cowie, Jones, & Harlow, 2011; Ertmer & Ottenbreit-Leftwich, 2010; Wachira & Keengwe, 2011), teacher beliefs around what their subject area entails (Belland, 2009; Ertmer & Ottenbreit-Leftwich, 2010; Ladbrook, 2009) and their teaching and learning beliefs (Bate, 2010) still act as barriers to the widespread use of ICTs with students. This is the context into which the first year teacher moves. The early years of teaching have been characterised as ones of anxiety, frustration and a lack of agency (Nias, 1989; Wolters & Daugherty, 2007), of seeking advice from and imitating colleagues (Sikes, 1985), and of associating “good practice with that of the veteran teacher, whose practice and cache of resources they seek to emulate” (Allen, 2009, p. 647). It is therefore important that preservice teachers have a sustained and confident experience of the affordances that ICTs can offer in their future classrooms, because the “reinforcement of salient beliefs, such as ease of use, usefulness and facilitation, will enhance preservice teachers’ intentions to adopt new technologies” (Cheon, Song, Jones, & Nam, 2010, p. 53).

Preservice ICT models

It has been posited that there are four main models of teaching ICT in preservice teacher education (Steketee, 2005). These are ICT skills based courses, ICT pedagogical courses, ICT practice-driven, process-to-product courses, and ICT integrated into curriculum subject courses. According to Steketee the first might increase user confidence, but learning how to use technology is not sufficient for successful implementation in the classroom.
The pedagogical model does not take into account that “different disciplines have different organizational frameworks, established practices, ways of acknowledging evidence and proof, and approaches for developing knowledge” (Harris, Mishra, & Koehler, 2009, p. 395), and the practice-driven courses (such as those in which students prepare teaching portfolios), while increasing motivation, lack a pedagogical focus (Steketee, 2005). However, the subject-specific courses, which have students learning “about, through and with technology-based media” (Cervetti, Damico, & Pearson, 2006, p. 383), and where “subject-specific pedagogy can exert a lasting positive influence on beginning teachers during induction” (Wang, Odell, & Schwille, 2008, p. 143), seem to address the shortcomings of the other three models. Mishra and Koehler (2006), building on Shulman’s (1986) pedagogical content knowledge (PCK) model, have described in their T[technological]PCK model (more recently called TPACK) what this type of course might encompass. It is one that considers “the dynamic and complex relationship among content, technology, pedagogy, and context … [which are] interdependent aspects of teachers’ knowledge necessary to teach content-based curricula effectively with educational technologies” (Harris et al., 2009, p. 393).

The literature above was instrumental in the decision to utilise Peerwise for the preservice English students’ reading logs. Not only does the platform enable students to write their own logs, but it also facilitates students’ reading of the logs of others. Its application is embedded in the curriculum courses, is student driven, can be completed off site in student-chosen time, reflects many of the aspects of social networking with which students are familiar, and provides a model that the students can replicate in their own classrooms when they begin teaching.
It was also decided that there would be an ongoing evaluation of the innovation at the end of each semester in 2013 and 2014.

Research questions

The study is guided by two overarching questions:
1. How do students use the Peerwise tool?
2. What are the affordances and limitations of Peerwise?

The subquestions that make up the second overarching question are:
• Does Peerwise enable students to comment on their reading without greatly reducing the time available for reading, and does it increase their knowledge of young adult fiction titles?
• Does the provision of an authentic audience and a peer-driven tool increase student motivation for the reading log requirement and depth of thinking about their reading?
• Do students want to use Peerwise in their first year of teaching?
• Does the use of a collaborative digital tool contribute to the development of a community of learners?

Overarching question 1 will be answered by analysis of anonymised statistics generated by the Peerwise site. Overarching question 2 will be answered using the site statistics and an anonymous questionnaire.

Ethical considerations

The study has the approval of the university’s Ethics Committee and the Dean of Faculty for 2013 and 2014. The major ethical consideration for the study is that a course lecturer is also the researcher and there is, therefore, a power relationship with the respondents. To protect students, the questionnaire is anonymous and administered by an independent third party. In addition, the third party keeps the student consent forms so that the researcher does not know who participated. The questionnaires are not given to the researcher until all student work is marked and grades are entered into the university system at the end of each semester.

Method

Site statistics

The site gives participation dates for when students write their logs and when they read the contributions of their peers.
It also gives data on how many entries, responses to entries (through the answering of the multiple-choice questions) and comments individuals contribute. The number of badge rewards earned for different responses is also reported. The participation dates will show whether students completed their entries only when the course requirement was due for completion or whether they were writing and responding over a length of time. A Likert-scaled question in the questionnaire will also add to these data. The number of logs completed and responses to the entries of their peers will give information about whether students were motivated to read more than the required number of books and peer logs (the requirement for both courses over the semester being a total of 12 of each).

The questionnaire

The questionnaire contains both open and closed questions (the questionnaire is available from the author upon request). It has been asserted that using open questions in questionnaires “should be avoided wherever possible” (Bethlehem, 2009, p. 51), that “open-ended questions are best used when people can answer verbally rather than in writing” (de Vaus, 2002, p. 129) and that they can “produce responses that may be ambiguous, wide ranging and difficult to categorise” (Simmons, 2008, p. 193). However, so as not to escalate any frustration the students might feel at having their responses shortened and limited by only having closed response opportunities, and taking into account Cargan’s (2007) warning that “the ease of simply checking of a response may lead to a lack of thinking through the issue or conditioning the response” (p. 93), three open questions are included. These ask about likes and dislikes for the log tool, with a section for respondents to add any other comments. Closed questions are in the form of 5-point agreement scales.
While the issue of the number of points on a rating scale seems to be unresolved, a trial of 2 to 11 response categories (Preston & Colman, 2000) suggests that anything less than a 5-point scale performs poorly. The scale for each question is identical (strongly disagree to strongly agree), ensuring “pattern recognition” (Fanning, 2005, p. 3), and questions on similar aspects of the topic are grouped together. Questions 1 and 2 ask whether it is important for English teachers to have a wide knowledge of adolescent fiction titles and whether participating in the digital tool took time away from reading. Questions 3, 4 and 5 are designed to ascertain the impact of reading other people’s summaries, while questions 8, 9 and 10 seek the impact of having others read the respondent’s summaries. Questions 6 and 7 ask whether the innovation contributes to a community of learners. Questions 8 and 11 seek feedback on whether an authentic audience and being able to collect participation badges contribute to motivation; taken together, they also give feedback on the impact of having an authentic readership rather than a lecturer audience. Respondent perceptions of how useful the innovation will be to their future teaching are measured by questions 14 and 15.

Respondent numbers

All students in the two first semester courses used the Peerwise site (N=41). Out of the possible 41 respondents, 37 consented to be part of the questionnaire research. The remaining 4 were absent from the lecture when the questionnaire was administered and they were not followed up. Defining the response rate as the number of eligible sample members who complete a questionnaire divided by the total number of eligible sample members (American Association for Public Opinion Research, 2008; Czaja & Blair, 2005), this gave a 90.2% response rate.
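That definition reduces to simple arithmetic, which can be checked directly:

```python
# Response rate as defined above: completed questionnaires divided by
# eligible sample members, expressed as a percentage (37 of 41 students).
completed, eligible = 37, 41
response_rate = 100 * completed / eligible
print(f"{response_rate:.1f}%")  # 90.2%
```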
With such a high response rate, it is assumed that there is no non-response bias and, therefore, no reason for the non-respondents to differ from the respondents on any dimensions relevant to the study. It might be interpreted that the high response rate (100% of the students who attended the last lecture) reflected the salience of the Peerwise innovation to the participants and also, taking into account that “response rates are an important measure of survey quality” (Czaja & Blair, 2005, p. 38), the quality of the questionnaire itself.

Analysis

The 16 agreement-scaled questions were coded and entered into the Statistical Package for the Social Sciences (SPSS). A Cronbach’s alpha coefficient for internal consistency reliability was calculated for the first 15 Likert-type scales. The result was α = .803, with a scale mean of 58.57 (SD = 7.56). According to George and Mallery (2003) this is between good (.8) and excellent (.9), and Gliem and Gliem (2003) posit that “an alpha of .8 is probably a reasonable goal” (p. 87). A constant comparative method, similar to that described by Glaser and Strauss (1967) and Lincoln and Guba (1985), was used to analyse the three open questions. This inductive and comparative process consolidated, reduced and aided interpretation (Merriam, 2009). Category construction began with each comment being given a number and a note as to a possible category name, with key words that would help define the category and establish rules for the allocation of further comments. This process was not arduous, as the students had written very brief comments, if any at all. Coding reliability was cross-checked by an independent coder. Six randomly chosen questionnaires (16.2%) were used. The independent coder used the already constructed categories and allocated them to respondent comments. Because of the similarities in respondent comments, there was 100% agreement in the category allocations.
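The internal consistency statistic reported above is straightforward to compute. The snippet below is a minimal sketch using fabricated Likert responses, since the study’s raw data are not published.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n respondents x k items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance per item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated 5-point Likert responses (5 respondents x 3 items), illustration only.
responses = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(responses), 3))  # 0.904
```

On the study’s 15-item scale the same calculation yielded α = .803.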
Findings and discussion

How do students use the Peerwise tool?

Students were asked to complete their reading requirement by two different dates for the two courses, 19 April and 21 June. The site graphs show when students wrote their summaries of their reading (Figure 1) and when they read and commented on peers’ summaries (Figure 2).

Figure 1. Dates students wrote their reading summaries

Figure 2. Dates students read and commented on peer summaries

Both graphs show peaks in activity around the dates the requirement was to be completed. However, there is more even activity in reading peers’ summaries over the semester than in writing summaries. The students were on their teaching practicum between 6 May and 14 June, and this is reflected in the lack of activity at this time. The courses started on 4 March, and activity prior to then is a course lecturer posting examples for the first lecture demonstrating Peerwise. In the questionnaire, only 16.2% (6, N=37) of students indicated that they went into Peerwise only when the work was due for completion. A comparison of the two graphs makes it obvious that students went into the site more frequently to read and comment on their colleagues’ work than to write their own summaries. This might indicate an interest in what their peers were reading and also show the time individuals needed to read books before posting summaries. All students completed their log requirements for the two courses. All did the minimum 12 reading entries, with one doing 13. However, the picture was markedly different for reading the logs of others. Site generated statistics showed how many logs individual students read. The requirement to read 12 was exceeded by 85.4% (35, N=41) of students.
Eighteen students (43.9%) read between 13 and 20 logs of others, 6 (14.6%) read between 21 and 30, 2 (4.9%) read between 51 and 60, 3 other individuals read 75, 101 and 113 respectively, and 3 students (7.3%) read between 141 and 150. That is, nearly 20% of the students read over 50 log entries written by peers, more than 4 times the requirement. The questionnaire asked respondents whether they were motivated to earn badges and icons. This was not a motivational factor for 48.6% of the students (18, N=37), while 35.1% (13) indicated it was. Students were more motivated by the content of each other’s summaries than by the extrinsic reward of badges.

Questionnaire results

Does Peerwise enable students to comment on their reading without greatly reducing time available for reading and does it increase their knowledge of young adult fiction titles?

All of the students agreed that it was important for English teachers to develop a wide knowledge of adolescent fiction, with 89.2% strongly agreeing. However, 10.8% (4 students) thought that participating in Peerwise reduced their time for reading. Twenty-seven students (73%) thought it did not reduce reading time, and over 50% strongly agreed that it did not. Over 80% were interested in other people’s summaries and title recommendations, and 70.3% (26 students) thought that other people’s summaries increased their repertoire of adolescent fiction titles; 13.5% neither agreed nor disagreed with this. Responses to the open questions, discussed later in this paper, indicate that the anonymity aspect may have contributed to this. However, there is a positive impact on knowledge of adolescent fiction titles from reading peer summaries. Figure 3 shows this impact.

Figure 3.
The impact of reading other people’s summaries

Does the provision of an authentic audience and peer driven tool increase student motivation for the reading log requirement and depth of thinking about their reading?

Having an authentic audience increased motivation for the reading log for 54% of the students; it did not for 27%. Responses showed 64.8% thought that Peerwise supported them to think more deeply about their reading, with nearly 20% disagreeing. There was an indication in the open-ended responses that students felt that some of their peers did not put a lot of effort into their entries. This perception that peers were not giving their best might be a reason for the lack of motivation to please their reading audience. The fact that entries were anonymous may have also contributed: anonymity may have meant that students did not feel they had to do their best work. Results also showed 67.5% of the students liked that the log was driven by students rather than by lecturers, with 27% neither agreeing nor disagreeing. The results of responses to questions relating to peers reading the logs can be seen in Figure 4.

Figure 4. The impact of peers reading log entries

A Pearson product-moment correlation coefficient was computed to assess the relationship between student contributions driving the reading log (rather than lecturers being the drivers) and the importance of multiple peers reading the logs rather than receiving one comment from a lecturer. The two variables (related to the questionnaire statements “having multiple peers read my posts was more important than one comment from a lecturer” and “I liked that the reading log was driven by student contributions rather than by my lecturers”) had a weak relationship with no statistically significant correlation (r = -.029, n = 37, p = .866).
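The reported statistics are internally consistent, which can be checked from r and n alone; the response vectors below are fabricated for illustration, since the raw questionnaire data are not published.

```python
import math
import numpy as np

# Check that r = -.029 with n = 37 is consistent with the reported p = .866:
# the t statistic for a Pearson correlation is t = r * sqrt(n - 2) / sqrt(1 - r^2).
r, n = -0.029, 37
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
print(round(t, 2))  # -0.17, far from any conventional significance threshold

# Pearson's r itself, for a pair of illustrative (fabricated) response vectors.
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
print(round(float(np.corrcoef(x, y)[0, 1]), 3))  # 0.8
```

A t of about -0.17 on 35 degrees of freedom does indeed correspond to a two-tailed p near .87, matching the reported value.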
It seems, then, that students want the communal aspect of the logs and want to drive them themselves, but also want lecturer input on their contributions.

Does the use of a collaborative digital tool contribute to the development of a community of learners?

When asked whether they discussed their reading summaries with each other, 54% indicated that they did. In the comments sections, however, some said they did not do this because entries were anonymous. Using Peerwise helped 67.5% of students feel part of an English teaching community, with 16.2% neither agreeing nor disagreeing. Whether requiring entries to be named in the second iteration changes this will be shown at the end of the second semester.

Do students want to use Peerwise in their first year of teaching?

Thirty-four respondents (92%, N=37) thought that the log entries on Peerwise would be a useful resource for their first year of teaching. Only 2 students thought they would not. In addition, 35 students (95%) thought that using Peerwise had demonstrated how their school students’ reading logs could be managed in a digital environment. This support for the initiative shows the value of the Peerwise innovation, that access to peer logs is perceived as a useful tool for making recommendations to adolescent readers, and that these students are now familiar with how they might begin to utilise ICTs in their own classrooms.

Open question responses

What do students like most about using an online interactive tool for their reading requirement?

Seven interconnecting themes emerged from 70 different comments. Themes and the number of comments can be seen in Table 1.
Table 1
Themes and number of comments about most liked aspects of using an online interactive tool for the reading requirement (N=70)

Using an interactive digital platform with social networking characteristics: 20 comments
Motivation: 18 comments
Peer summaries and opinions about different young adult fiction titles: 14 comments
The competitive nature of collecting badges and comments: 9 comments
Using beyond the courses: 7 comments
Anonymity: 1 comment
No lecturer participants: 1 comment

The students liked the ease of use, the ability to comment on and reply to posts, the convenience of a whenever-wherever platform, the interactivity, and the flexibility of being able to return to and edit their posts. One student commented that it was “easy to monitor progress and obtain feedback for further submissions before completion” while another liked “interacting with others and getting nice feedback.” They found the reading log requirement a good motivator to read adolescent fiction. Several commented on their enjoyment: “writing and reading responses was fun,” “a fun course requirement,” and “it’s great and I think it should be used again.” Motivation and encouragement were mentioned in comments: “it encouraged me to read and find really helpful and interesting reading,” “it gave me motivation to get a good amount of reading completed and recorded,” and “Peerwise encouraged me to read and often in the past I would read books only if I needed to. I’m very proud to see that I have read a lot of novels this semester.” The ability to read peers’ book summaries was also well received. The “exposure to so much reading material,” “the new books I discovered,” “the variety of texts,” and the observation that “some summaries were extremely well structured and worded which helped model good use of a reading log” were among these positive comments.
Knowing that their entries were going to be read by their peers also meant that many “took care” over their entries and had to “think deeper about texts.” The ability to collect badges was also a motivational factor for some students. One summed this up: “I loved the badges! It made it fun and interactive. The competition was intense.” There were seven comments about being able to use the logs beyond the courses. These comments were in addition to those that were allocated to the digital platform theme about the permanent nature of the log during the semester. One student said “I incorporated some things I had read into my teaching practice on practicum,” another that “it helped make a database of literature I would recommend to my future students,” and there were several pleas to be able to access the entries after they have secured teaching positions.

What do students like least about using an online interactive tool for their reading requirement?
Five themes emerged from 34 comments, as illustrated in Table 2.

Table 2
Themes and number of comments about least liked aspects of using an online interactive tool for the reading requirement

Theme (number of comments, N=34):
Anonymity and contributions of others: 14
Question authoring and rating: 10
No marks given: 8
The competitive nature of collecting badges and comments: 1
Ease of using tags: 1

More than 40% of the comments about the least liked aspect concerned the anonymity of contributions and how this led to a lack of quality in some summaries. Students said that some entries “were hardly worth reading,” “were done without thought or consideration for the point of Peerwise,” and “were blatantly cut and pasted from the Internet.” Multiple-choice question authoring and rating were also unpopular for several reasons. Some writers forgot that their readers had read only the summaries and not the book, so their questions were unanswerable. Some found the questions were not challenging, and two comments were about the harshness of ratings when writers had tried “very hard to write challenging questions.” Eight respondents thought that having marks for the course reading requirements would increase peers’ effort and also reward those who had made thoughtful and valuable entries.

Changes for the next round of using a digital tool for the course requirement

There are three aspects of using this interactive web-based tool for reading logs that the initial evaluation has shown to be either unpopular or in need of improvement: anonymity and its impact on the quality of contributions; writing and rating multiple-choice questions; and participation and completion not contributing to course marks. The next iteration of using Peerwise for the reading requirement in the English curriculum course will ask students to put names on their entries. This might increase the professional usefulness of all log entries and reader interest in them. With entries no longer anonymous, students might think more deeply about their reading, and having an authentic audience to whom they are responsible might increase motivation to write thoughtfully. Student perceptions that others did not put effort into their entries because they were unidentifiable will no longer be valid. On the other hand, being identifiable and therefore publicly responsible for their entries might affect questionnaire responses on whether the log takes time away from reading, because of the time necessary for making thoughtful entries.
With entries no longer anonymous, there may also be more face-to-face discussions amongst the students about the reading and entries, which could have a positive impact on feeling part of a community of learners. Generating multiple-choice questions was problematic from the beginning, even though the Peerwise site was constructed around this idea. It was necessary to remind students that their questions needed to be on their written summaries rather than on the book, because their readers would not have read the book. This has proved challenging. In addition, rating other people’s questions and receiving ratings has not been popular with all of the students. Unfortunately, this is an important part of the site. To try to alleviate any frustration around the ratings, in the second iteration students will be asked to rate the summaries rather than the questions. This too might contribute to an increase in the quality of some of the summaries and also take pressure off the students to generate quality questions.

Should participation in the course requirement contribute to student marks?
All students agree that having a broad knowledge of adolescent fiction is important for English teaching. All see a value in reading other people’s summaries. Not everyone, at the end of their first semester in preservice teacher education, sees the innovation contributing to a community of learners. In many ways the preservice year is a student journey from being solely responsible for their own learning to also being responsible for the learning of others, and it is the beginning of the long journey in the development of an autonomous professional life. There is currently a tension between the Peerwise initiative encouraging the formation of a collaborative community of learners and a student demand for their individual performance to be recognised by lecturers.
In past years students’ reading logs have not been assessed, and there would be an irony if, by initiating the Peerwise innovation, assessment was an outcome – an irony summed up by Fahser-Herro and Steinkuehler (2009-10) when they assert that “post-secondary institutions embrace the rhetoric of diversity and collaboration yet continue to consistently reward individual, lone accomplishments” (p. 60). Whether being unconstrained by lecturers (and assessment) is more welcomed as students reach the end of their year of preservice education, having had an additional semester to build their community of learners, develop their professional autonomy and experience the changes to Peerwise use planned for the second iteration, remains a question for the next stage in this project. For this reason, and also because of the rationale for not giving marks (as outlined in the introductory section of this paper), the second iteration will continue with the log as a course requirement and not as an assessed activity.

Limitations

Four students were not available to participate in the evaluation. However, with 100% of the students who attended the last lecture session participating, and a 92.5% response rate, it might be assumed that most views are reflected in the results. Not all respondents wrote in the open question boxes, and some preferred to write “all good” or similar. These students might have assumed that their Likert responses gave ample feedback. However, for the next evaluation at the end of the second iteration, respondents will be encouraged to give details.

Conclusions

Students are very positive about using a digital platform with social networking characteristics. They like the interactive nature, are motivated by this, and find it easy to use with its anytime-anywhere opportunities for participation.
In addition, 95% think that using Peerwise has demonstrated how their future students’ reading logs could be managed in a digital environment and how they might begin to use ICTs in their own classrooms. There is no doubt that it has contributed to their growing knowledge of adolescent fiction. Nearly 90% of the students see Peerwise as a useful platform for the reading requirement in the two courses in this first semester. With ongoing changes, it is hoped that the effectiveness of this innovation increases in the next iteration.

Acknowledgements

I would like to acknowledge the student participants who are contributing to the use, evaluation and changes to the way the digital platform is used in the courses.

References

Abbitt, J. T. (2011). An investigation of the relationship between self-efficacy beliefs about technology integration and technological pedagogical content knowledge (TPACK) among preservice teachers. Journal of Digital Learning in Teacher Education, 27(4), 134-143. Retrieved from http://www.editlib.org/p/54211
Allen, J. M. (2009). Valuing practice over theory: How beginning teachers re-orient their practice in the transition from the university to the workplace. Teaching and Teacher Education, 25, 647-654. doi: 10.1016/j.tate.2008.11.011
American Association for Public Opinion Research. (2008). Response rate: An overview. Retrieved from http://www.aapor.org/Response_Rates_An_Overview.htm
Bai, H., & Ertmer, P. A. (2008). Teacher educators' beliefs and technology uses as predictors of preservice teachers' beliefs and technology attitudes. Journal of Technology and Teacher Education, 16(2), 93-112. Retrieved from http://vnweb.hwwilsonweb.com/hww/jumpstart.jhtml?recid=0bc05f7a67b1790ea50545894de012dc81c8af92c3ac962b90e7569ccf9459c3779471e64048594a&fmt=H
Bate, F. (2010). A bridge too far? Explaining beginning teachers' use of ICT in Australian schools.
Australasian Journal of Educational Technology, 26(7), 1042-1061. Retrieved from http://www.ascilite.org.au/ajet/ajet26/bate.html
Belland, B. R. (2009). Using the theory of habitus to move beyond the study of barriers to technology integration. Computers & Education, 52, 353-364. doi: 10.1016/j.compedu.2008.09.004
Bethlehem, J. (2009). Applied survey methods: A statistical perspective. Hoboken, NJ: John Wiley & Sons.
Boyle, J., & Nicol, D. (2003). Using classroom communication systems to support interaction and discussion in large class settings. Research in Learning Technology, 11(3), 43-57. doi: 10.3402/rlt.v11i3.11284
Cargan, L. (2007). Doing social science research. Lanham, MD: Rowman & Littlefield.
Cavin, R. (2008, March). Developing technological pedagogical content knowledge in preservice teachers through microteaching lesson study. Paper presented at the Society for Information Technology & Teacher Education International Conference (SITE), Las Vegas, Nevada. Retrieved from http://www.editlib.org/p/28106
Cervetti, G., Damico, J., & Pearson, P. D. (2006). Multiple literacies, new literacies, and teacher education. Theory into Practice, 45(4), 378-386. doi: 10.1207/s15430421tip4504_12
Cheon, J., Song, J., Jones, D. R., & Nam, K. (2010). Influencing preservice teachers' intention to adopt Web 2.0 services. Journal of Digital Learning in Teacher Education, 27(2), 53-64. Retrieved from http://www.iste.org/store/product.aspx?ID=1727
Cowie, B., Jones, A., & Harlow, A. (2011). The distribution of leadership as an influence on the implementation of a national policy initiative: The example of the Laptops for Teachers scheme. School Leadership and Management, 31(1), 47-63. doi: 10.1080/13632434.2010.540561
Czaja, R., & Blair, J. (2005). Designing surveys: A guide to decisions and procedures. Thousand Oaks, CA: Sage.
de Vaus, D. A. (2002). Surveys in social research (5th ed.). Crows Nest, NSW: Allen & Unwin.
Doiron, R., & Asselin, M. (2011).
Exploring a new learning landscape in tertiary education. New Library World, 112(5/6), 222-235. doi: 10.1108/03074801111136266
Draper, S., & Brown, M. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94. doi: 10.1111/j.1365-2729.2004.00074.x
Ertmer, P. (1999). Addressing first- and second-order barriers to change: Strategies for technology integration. Educational Technology Research and Development, 47(4), 47-61. doi: 10.1007/BF02299597
Ertmer, P., & Ottenbreit-Leftwich, A. T. (2010). Teacher technology change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education, 42(3), 255-284.
Fahser-Herro, D., & Steinkuehler, C. (2009-10). Web 2.0 literacy and secondary teacher education. Journal of Computing in Teacher Education, 26(2), 55-62.
Fanning, E. (2005). Formatting a paper-based survey questionnaire: Best practices. Practical Assessment, Research and Evaluation, 10(12). Retrieved from http://parkdatabase.org/files/documents/2005_Formatting-a-paper-based-Survey-Questionnaire_E-Fanning.pdf
George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference, 11.0 update (4th ed.). Boston: Allyn & Bacon.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.
Gliem, J. A., & Gliem, R. R. (2003, October). Calculating, interpreting, and reporting Cronbach's alpha reliability coefficient for Likert-type scales. Paper presented at the Midwest Research to Practice Conference in Adult, Continuing, and Community Education, The Ohio State University, Columbus, OH. Retrieved from https://scholarworks.iupui.edu/bitstream/handle/1805/344/Gliem%20&%20Gliem.pdf
Hammond, M., Reynolds, L., & Ingram, J. (2011). How and why do student teachers use ICT?
Journal of Computer Assisted Learning, 27(3), 191-203. doi: 10.1111/j.1365-2729.2010.00389.x
Harris, J., Mishra, P., & Koehler, M. J. (2009). Teachers' technological pedagogical content knowledge and learning activity types: Curriculum-based technology integration reframed. Journal of Research on Technology in Education, 41(4), 393-416.
Head, A. J., & Eisenberg, M. B. (2010). Truth be told: How college students evaluate and use information in the digital age (Project Information Literacy Progress Report). doi: 10.2139/ssrn.2281485
Hew, K. F., & Brush, T. (2007). Integrating technology into K-12 teaching and learning: Current knowledge gaps and recommendations for future research. Educational Technology Research and Development, 55, 223-252. doi: 10.1007/s11423-006-9022-5
Hur, J. W., Cullen, T., & Brush, T. (2010). Teaching for application: A model for assisting pre-service teachers with technology integration. Journal of Technology and Teacher Education, 18(1), 161-182.
Hurd, S. (2009). Why has computer assisted learning made so little impact in secondary education? Lessons from an economics and business subject case-study. Curriculum Journal, 20(2), 139-159. doi: 10.1080/09585170902948780
Jones, C., Ramanau, R., Cross, S., & Healing, G. (2010). Net generation or digital natives: Is there a distinct new generation entering universities? Computers & Education, 54(3), 722-732. doi: 10.1016/j.compedu.2009.09.022
Jones, S., & Lea, M. R. (2008). Digital literacies in the lives of undergraduate students: Exploring personal and curricular spheres of practice. The Electronic Journal of e-Learning, 6(3), 207-216.
Kennedy, G. E., & Judd, T. S. (2011). Beyond Google and the "satisficing" searching of digital natives. In M. Thomas (Ed.), Deconstructing digital natives (pp. 119-136). New York: Routledge.
Kumar, S., & Vigil, K. (2011). The net generation as preservice teachers: Transferring familiarity with new technologies to educational environments.
Journal of Digital Learning in Teacher Education, 27(4), 144-153.
Ladbrook, J. (2009). Teachers of digikids: Do they navigate the divide? Australian Journal of Language and Literacy, 32(1), 68-82.
Ladbrook, J., & Probert, E. (2011). Information skills and critical literacy: Where are our digikids at with online searching and are their teachers helping? Australasian Journal of Educational Technology, 27(1), 105-121. Retrieved from http://www.ascilite.org.au/ajet/ajet27/ladbrook.html
Lambert, J., & Gong, Y. (2010). 21st century paradigms for pre-service teacher technology preparation. Computers in the Schools, 27(1), 54-70. doi: 10.1080/07380560903536272
Lei, J. (2009). Digital natives as preservice teachers: What technology preparation is needed? Journal of Computing in Teacher Education, 25(3), 87-97.
Levin, T., & Wadmany, R. (2008). Teachers' views on factors affecting effective integration of information technology in the classroom: Developmental scenery. Journal of Technology and Teacher Education, 16(2), 233-263.
Liang, J. K., Liu, T. C., Wang, H. Y., Chang, B., Deng, Y. C., Yang, J. C., . . . Chan, T. W. (2005). A few design perspectives on one-on-one digital classroom environment. Journal of Computer Assisted Learning, 21, 181-189. doi: 10.1111/j.1365-2729.2005.00126.x
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
McNeill, M., Diao, M., & Gosper, M. (2011). Student uses of technology in learning: Two lenses. Interactive Technology and Smart Education, 8(1), 5-17. doi: 10.1108/17415651111125478
Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054. doi: 10.1111/j.1467-9620.2006.00684.x
Nias, J. (1989).
Primary teachers talking: A study of teaching as work. London: Routledge.
Prensky, M. (2001). Digital natives, digital immigrants, Part 1. On the Horizon, 9(5), 1-6. Retrieved from http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf
Preston, C. C., & Colman, A. M. (2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104, 1-15. doi: 10.1016/S0001-6918(99)00050-5
Rowlands, I., & Nicholas, D. (2008). Understanding information behaviour: How do students and faculty find books? The Journal of Academic Librarianship, 34(1), 3-15. doi: 10.1016/j.acalib.2007.11.005
Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4-14.
Sikes, P. J. (1985). The life cycle of the teacher. In S. J. Ball & I. F. Goodson (Eds.), Teachers' lives and careers (pp. 27-58). London: The Falmer Press.
Simmons, R. (2008). Questionnaires. In N. Gilbert (Ed.), Researching social life (3rd ed., pp. 183-204). London: Sage.
Somekh, B. (2008). Factors affecting teachers' pedagogical adoption of ICT. In J. Voogt & G. Knezek (Eds.), International handbook of information technology in primary and secondary education (pp. 449-460). New York: Springer.
Steketee, C. (2005). Integrating ICT as an integral teaching and learning tool into pre-service teacher training courses. Issues in Educational Research, 15(1), 101-113.
Valtonen, T., Pontinen, S., Kukkonen, J., Dillon, P., Vaisanen, P., & Hacklin, S. (2011). Confronting the technological pedagogical content knowledge of Finnish net generation student teachers. Technology, Pedagogy and Education, 20(1), 3-18. doi: 10.1080/1475939X.2010.534867
Wachira, P., & Keengwe, J. (2011). Technology integration barriers: Urban school mathematics teachers' perspectives. Journal of Science Education and Technology, 20, 17-25. doi: 10.1007/s10956-010-9230-y
Wang, J., Odell, S.
J., & Schwille, S. A. (2008). Effects of teacher induction on beginning teachers' teaching: A critical review of the literature. Journal of Teacher Education, 59(2), 132-152. doi: 10.1177/0022487107314002
Wolters, C. A., & Daugherty, S. G. (2007). Goal structures and teachers' sense of efficacy: Their relation and association to teaching experience and academic level. Journal of Educational Psychology, 99, 181-193. doi: 10.1037/0022-0663.99.1.181

Corresponding author: Judine Ladbrook, j.ladbrook@auckland.ac.nz

Australasian Journal of Educational Technology © 2014.

Please cite as: Ladbrook, J. (2014). An interactive digital platform for preservice secondary English teachers' reading: First iteration findings. Australasian Journal of Educational Technology, 30(5), 533-546.