COUNS-EDU The International Journal of Counseling and Education Vol.6, No.2, Month 2021, pp. 44-54 | p-ISSN: 2548-348X- e-ISSN: 2548-3498 http://journal.konselor.or.id/index.php/counsedu DOI: 10.23916/0020210635020 Received on 07/14/2021; Revised on 08/16/2021; Accepted on 08/25/2021; Published on: 11/14/2021 44 The construction of a quick TPACK evaluation tool and comparison of an integrative and transformational model Ratnawati Susanto *)1 1 Pendidikan Guru Sekolah Dasar, Fakultas keguruan dan dan Ilmu Pendidikan, Universitas Esa Unggul, Jakarta, Indonesia *) Corresponding author, e-mail: ratnawati@esaunggul.ac.id Abstract It is well recognized that the model of Technological Pedagogical Content Knowledge (TPACK) is one of the most prominent frameworks for describing teachers' skills to successfully educate students via the use of technology. Self-report questionnaires are often used in TPACK assessment, limiting the measures' validity, reliability, and practical application. The TPACK framework's underlying structure also causes confusion among participants. An integrated or transformational picture of how the TPACK knowledge domains interact was determined by the framework's inherent linkages. Methods and findings One hundred and seventeen pre-service elementary school teachers were issued a 42-item pilot questionnaire. Reliability analysis and confirmatory factor analysis were used to reduce the number of items on each subscale and to ensure that the model was well- fitting. Structural equation modeling was used to analyze the internal connections that existed between the components. In conclusion, the 28-item final TPAC questionnaire is a feasible and reliable tool for assessing pre-service teachers' knowledge, skills, and attitudes toward learning (TPACK). The intrinsic links between knowledge components in the TPACK paradigm also allow for a transformational interpretation to be applied. Keywords: TPACK, Pre-service teachers, Educational technology. 
How to Cite: Susanto, R. (2021). The construction of a quick TPACK evaluation tool and comparison of an integrative and transformational model. COUNS-EDU: The International Journal of Counseling and Education, 6(2). doi:http://dx.doi.org/10.23916/0020210635020

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. ©2021 by author.

Introduction
Pedagogical content knowledge, Shulman (1986, 1987) argued, has dominated the discussion of teacher expertise for the last three decades: it is the combination of pedagogical and content knowledge that makes a good teacher. Mishra and Koehler (2006) expanded this approach by adding technological knowledge as a third crucial component of successful teaching in the digital world. The resulting TPACK framework comprises three fundamental components, content knowledge (CK), pedagogical knowledge (PK), and technological knowledge (TK), together with four hybrid components formed at their intersections: pedagogical content knowledge (PCK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), and technological pedagogical content knowledge (TPCK). Even though numerous adaptations (Lee & Tsai, 2010) and extensions (Porras-Hernández & Salinas-Amescua, 2013) have been proposed, the original framework remains the consistent core for representing teacher knowledge today. TPACK has inspired a tremendous amount of research, but how its many knowledge components are linked, and how they can be measured, is still unclear, which has sparked debate in recent years. Each of these themes is explored in further depth in the following sections.
Contrasting integrative and transformative views on TPACK
The TPACK framework is one of the most widely used concepts in educational technology research (e.g., Angeli & Valanides, 2009; Brantley-Dias & Ertmer, 2013), yet it has been criticized for its "fuzzy boundaries" (Graham, 2011; Kimmons, 2015). A wide range of TPACK definitions and interpretations has emerged from these considerations (e.g., Voogt, Fisser, Pareja Roblin, Tondeur, & van Braak, 2013; Petko, 2020). In particular, the TPACK components have been linked in different ways, resulting in two opposing viewpoints (e.g., Angeli & Valanides, 2009; Graham, 2011). In the first, the integrative view, the central component TPCK emerges from the integration of all the other components of teacher knowledge and is thus tied to each domain: high levels of TK, PK, and CK, and of the hybrids TPK, TCK, and PCK, make high TPCK likely. The transformational view, in contrast, holds that the knowledge components interact to produce unique bodies of knowledge that are greater than the sum of their parts. In other words, the components cannot simply be combined; TPCK is a distinct kind of knowledge that transcends the components that provide its basis. On this view, TK, PK, and CK have no direct effect on TPCK, but TPK, TCK, and PCK do.
Although Mishra and Koehler (2006) theoretically proposed TPACK from the transformative perspective, only a few researchers have attempted to test their assumptions empirically. A small number of studies have examined this question using structural equation modeling, but the results have been inconclusive. Rather than establishing the fundamental TPACK model, the ambiguities have led to multiple extensions of the original model, further obscuring the fundamental difficulties (Barbar & Abourjeili, 2012). Empirical data are needed to reconcile these competing interpretations with the paradigm proposed by Mishra and Koehler (2006). For TPACK research, it is imperative that assessment methodologies are both credible and easy to administer: only then can TPACK be evaluated across many settings and in conjunction with other critical factors (e.g., beliefs, self-efficacy), which could yield significant advantages.

Developing valid, reliable, and economical TPACK measures
When thinking about the integration of technology into educational institutions, it is critical to establish a theoretical framework that can be traced back to, and makes sense of, the real world (Frigg & Hartmann, 2018; Grønfeldt Winther, 2015). An important part of TPACK research is therefore determining the validity and reliability of the instruments used to connect theory and practice (Koehler, Shin, & Mishra, 2012; Niess, 2012). Numerous theoretical perspectives on TPACK are available, and empirical evidence from TPACK data is critical for developing consensus and bridging the gaps between them (Fisser, Voogt, van Braak, & Tondeur, 2015).
Self-report and performance-based instruments are the two main categories of instruments used so far to evaluate TPACK (Fisser et al., 2015). Among self-report approaches, one can distinguish surveys (with open-ended or closed-ended questions) and interviews; performance-based assessments include lesson planning, classroom performance, and specific task performance. Self-report approaches are currently the most frequently used methodology for TPACK assessment (Koehler et al., 2012; Willermark, 2018). When properly administered, self-report instruments are an effective way to collect large amounts of quantitative data that support generalization (Demetriou et al., 2015). This approach has a number of disadvantages, however, and current self-report methods must be improved if TPACK is to be measured accurately (Abbitt, 2011). The proliferation of self-report instruments is a cause for concern given the lack of standard criteria and the imprecise boundaries between constructs; most lack proof of reliability and validity. Koehler and colleagues (2012) revealed that roughly two-thirds of TPACK studies using self-report lacked evidence of validity and reliability. Empirical support for a seven-factor structure is accordingly inconsistent in the literature: although several studies have successfully confirmed TPACK's seven-factor structure (Jang & Tsai, 2012), others have found the components to be highly correlated and have therefore distinguished different factor structures (e.g., Deng, Chai, Qian, & Chen, 2017; Pamuk et al., 2015; Sahin et al., 2011; Schmidt et al., 2009). These findings have raised concerns about the framework's construct and discriminant validity. Partly for this reason, many current self-report tools do not evaluate TPACK holistically, but only TK or the T-dimensions (Scherer, Tondeur, & Siddiq, 2017; Archambault & Barnett, 2010). One of the most widely used self-report instruments in teacher training is Schmidt et al.'s (2009) TPACK knowledge assessment survey. Several authors have validated the survey, either in its original form or in adaptation, and reported high reliability (Cronbach's alpha > .80; see, for example, Chai, Koh, & Tsai, 2010; Chai, Koh, Tsai, & Tan, 2011). The survey is unique in that it evaluates all seven components. Like other TPACK self-report instruments, however, it has three drawbacks. First, TPACK self-report measures can be lengthy and, as a result, inconvenient for use in the field (Valtonen et al., 2017). Second, because the items are distributed asymmetrically across subscales, these instruments measure the subscales with differing degrees of accuracy (for an overview, see Pamuk et al., 2015). Third, some of these tools can only be used by specific types of teachers, which limits their usefulness as general tools. The questionnaire of Schmidt et al. (2009), for example, can only be completed by (pre-service) teachers who teach all four of the subjects it covers (mathematics, literacy, social studies, and science). More specific examples include Jimoyiannis (2010), Doering and Veletsianos (2008), and Archambault and Barnett's (2010) "online habitats".
The utility of Schmidt et al.'s survey can therefore be greatly increased by addressing these three shortcomings, providing the TPACK research community with a long-needed valid, reliable, and practical tool. Beyond that, the debate over an integrative versus a transformational view has implications for how assessment tools are constructed (Graham, 2011). Mishra and Koehler's (2006) transformational model of TPACK has been the subject of a few studies aiming to prove the validity of the idea (Angeli & Valanides, 2009; Jang & Chen, 2010; Jin, 2019); although their approaches were diverse, they all produced supportive results. Nevertheless, no study has examined whether a transformational model fits better than its integrative counterpart.

The present study
TPACK is one of the most well-known models of teachers' knowledge of digital technology in the classroom, yet TPACK research still faces a wide range of theoretical and methodological challenges. The first aim of this project is therefore to create and validate a self-report TPACK questionnaire in light of these considerations: a succinct instrument that measures all seven components of TPACK while taking parsimony and practicality into account. A shorter scale that maintains accuracy and reliability at acceptable levels while decreasing respondent fatigue makes it easier to integrate TPACK into large-scale research (Rammstedt & Beierlein, 2014; Schweizer, 2011). The second aim of this research is to examine the internal structure of the TPACK framework and the relationships between its components.
Method

Sample
In this research, pre-service elementary school teachers took a compulsory teaching methods course at Esa Unggul University. Participation in the study was only possible if the participants had given their informed consent. With a 54.2 percent response rate, the final sample comprised 117 participants from two cohorts (63 women, 52 men, and two who did not provide gender information; fall semester 2018: n = 49; spring semester 2019: n = 68). The average age of the sample was 31.8 years (standard deviation: 14.3 years; age range: 22–56). Before being accepted into the teacher training program, all prospective teachers had to hold (or be completing) a bachelor's or master's degree in the subject area in which they wanted to specialize. In all, 17 different educational disciplines were represented in the sample. Regarding prior teaching experience, 70 (59.8%) of the pre-service teachers had none, 31 (26.5%) had one to two years, 11 (9.4%) had three to six years, and 5 (4.3%) had more than six years. Only seven pre-service teachers (6.0%) had completed an optional educational technology module.

Measures
A 42-item pilot questionnaire was administered, comprising six items for each of the seven TPACK knowledge components.
Data analysis
To answer the first research question, an initial reliability analysis was carried out, followed by a confirmatory factor analysis (CFA) to test whether the data matched the theoretically predicted structure and to build a short-scale questionnaire (Schmitt, 2011). In the first phase, the reliability of the full set of items for each of the seven subscales was calculated. In addition to Cronbach's alpha, the most commonly used reliability measure, McDonald's omega (McDonald, 1999) was computed as a complement, since alpha has been criticized for underestimating internal consistency in certain circumstances (Deng & Chan, 2017). In the next step, a CFA was conducted on the complete scale to assess its structural and internal coherence.
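As a minimal illustration of this first phase, Cronbach's alpha can be computed directly from an item-score matrix. The sketch below uses Python with hypothetical data (the study itself used the psych package in R) and implements the standard formula based on item variances and total-score variance.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: perfectly consistent items yield alpha = 1.0
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(round(cronbach_alpha(scores), 3))  # 1.0
```

McDonald's omega additionally requires factor loadings from a one-factor model, which is why the study reports both coefficients side by side.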
In the reliability analysis, items were eliminated on the basis of item discrimination and factor loadings until each subscale comprised just the minimum number of items required to properly reflect all significant properties of each knowledge component (i.e., face validity). A second round of testing was then carried out to confirm the final model's reliability, again using Cronbach's alpha and McDonald's omega, along with a CFA that permitted certain item residuals within subscales to correlate where reasonable (Schmitt, 2011). To address the second research question, structural equation modeling (SEM) was used to examine the interactions between the TPACK components. The likelihood ratio test was used to create and evaluate models reflecting both an integrative view (i.e., core components and first-level hybrids predicting TPCK) and a transformational view (only the first-level hybrids predicting TPCK). A mediation analysis was then carried out to determine the indirect effects of the core components on TPCK via their respective first-level hybrids. The packages psych (version 1.8.12; Revelle, 2018), lavaan (version 0.6–3; Rosseel, 2012), and semPower were used to conduct all analyses in the R software environment (version 3.6.0; R Core Team, 2019). Goodness of fit of the CFA and SEM models was assessed using the chi-square statistic (χ2), the Bentler Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), the Steiger-Lind Root Mean Square Error of Approximation (RMSEA), and, due to the small sample size (N < 150), the Standardized Root Mean Square Residual (SRMR) (Hooper, Coughlan, & Mullen, 2012). The cut-off criteria were CFI > 0.95, TLI > 0.95, RMSEA < 0.05 with a confidence range of 0.05–0.10, and SRMR < 0.08 (Schreiber et al., 2006; Hooper et al., 2012). All analyses were conducted at a 0.05 level of statistical significance.
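The cut-off criteria above can be encoded as a simple screening helper. The function below is a hypothetical illustration, not part of the R pipeline actually used in the study; it merely applies the listed thresholds to a set of fit indices.

```python
def acceptable_fit(cfi: float, tli: float, rmsea: float, srmr: float) -> bool:
    """Screen a model against the study's cut-offs:
    CFI > 0.95, TLI > 0.95, RMSEA < 0.05, SRMR < 0.08."""
    return cfi > 0.95 and tli > 0.95 and rmsea < 0.05 and srmr < 0.08

# Hypothetical fit indices for two candidate models
print(acceptable_fit(cfi=0.96, tli=0.96, rmsea=0.04, srmr=0.07))    # True
print(acceptable_fit(cfi=0.83, tli=0.82, rmsea=0.068, srmr=0.084))  # False
```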
Recent CFA recommendations show that even small sample sizes may yield adequate models provided the number of variables per factor is not too small and internal consistency is strong (Wolf, Harrington, Clark, & Miller, 2013). In addition, a post-hoc power analysis was performed on each structural equation model to assess its adequacy.

Results and Discussions
One of the primary goals of this study was to develop a brief questionnaire for assessing TPACK economically. All scales were reliable; however, a CFA using the whole set of 42 items, divided into seven subscales, did not result in a reasonably well-fitting model (χ2(798) = 1223.8, p < .001; TLI = 0.819; CFI = 0.832; RMSEA = 0.068, 90% confidence interval [0.060, 0.074]; SRMR = 0.084). Items were therefore eliminated from each subscale on the basis of item discrimination and factor loadings, as well as theoretical concerns about the construction of facets (wordings leading to item redundancy or limitations of generalizability). For example, although item pck4 had lower factor loadings than item pck6, it was retained because addressing student assessment (pck4) was considered more comprehensive than identifying student errors (pck6). Ultimately, the model comprised seven subscales of four items each, for a total of 28 items.
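The loading-based part of this item-reduction step can be sketched as follows. The loadings are hypothetical, and, as the pck4/pck6 case shows, the study also overrode pure loading rank on theoretical grounds; this sketch captures only the mechanical selection of the highest-loading items per subscale.

```python
def reduce_subscale(loadings: dict, k: int = 4) -> list:
    """Keep the k items with the highest absolute factor loadings."""
    ranked = sorted(loadings, key=lambda item: abs(loadings[item]), reverse=True)
    return sorted(ranked[:k])

# Hypothetical standardized loadings for a six-item PCK subscale
pck = {"pck1": 0.72, "pck2": 0.55, "pck3": 0.81, "pck4": 0.60,
       "pck5": 0.48, "pck6": 0.66}
print(reduce_subscale(pck))  # ['pck1', 'pck3', 'pck4', 'pck6']
```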
Acceptable fit indices for the final model were obtained by allowing five residuals from independent subscales to correlate with one another (χ2(324) = 391.2, p = .006; TLI = 0.951; CFI = 0.950; RMSEA = 0.042, 90% confidence interval [0.025, 0.056]; SRMR = 0.071). Figure 2 depicts the loadings of items onto their respective subscales, as well as the correlations between latent variables and correlated residuals. The number of items on each subscale can be further reduced to three while maintaining acceptable reliabilities (Cronbach's alphas between 0.75 and 0.90; McDonald's omegas between 0.76 and 0.91) and CFA model fit (χ2(166) = 208.4, p = .014; TLI = 0.955; CFI = 0.964). In the mediation analysis (Model 2), both first-level hybrid components mediated the indirect effect of PK on TPCK: PCK (β = 0.30, p = .04) and TPK (β = 0.57, p = .02). For TK, there was just one substantial mediation, via TPK (β = 0.38, p < .01), whereas TCK did not mediate in any relevant manner (β = 0.04, p = .30). As for CK, neither PCK (β = 0.09, p = .15) nor TCK (β = 0.15, p = .33) emerged as a relevant mediator. To summarize, the first goal of this study was to develop a simple questionnaire that measures TPACK cheaply and realistically. Across the reliability analysis, the CFA, and the SEM models, the seven TPACK knowledge components could be evaluated with four items per subscale. Cronbach's alphas ranged from .77 to .91 and McDonald's omegas from .79 to .92 for the seven subscales. The CFA showed sufficient differentiation between the subscales. At the same time, there were large correlations between PK and PCK, PCK and TPCK, and TPK and TPCK, indicating substantial links between the subscales.
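In a product-of-coefficients mediation analysis, each indirect effect is the product of the path from the core component to the hybrid (a) and the path from the hybrid to TPCK (b). The sketch below illustrates that arithmetic with hypothetical standardized coefficients; the study reports only the resulting indirect effects, not the individual paths.

```python
def indirect_effect(a: float, b: float) -> float:
    """Indirect effect of X on Y through mediator M:
    product of the X -> M path (a) and the M -> Y path (b)."""
    return a * b

# Hypothetical paths for a TK -> TPK -> TPCK mediation
print(indirect_effect(0.60, 0.50))  # 0.3
```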
All of these patterns are in agreement with earlier results and can be explained conceptually (Valtonen et al., 2019). Overall, the short scale is a valid and reliable method for assessing teachers' TPACK, and it makes TPACK measures easier to include in studies with limited questionnaire space. It is also a generic scale whose wording is appropriate to a wide range of fields rather than subject-specific, which will help bring together study data from different subjects and grade levels. The second goal, investigating the internal connections between the TPACK components, produced clear conclusions. Structurally, the integrative and transformational models became indistinguishable: because the direct paths from the core components to TPCK were not significant, those ties were severed, notwithstanding the integrative model's characterization of TPCK as the point at which core and first-level hybrid components meet. The structure of the integrative model thus automatically converged on that of the transformational model. These results are congruent with Mishra and Koehler's (2006) basic notion of TPACK as well as with the existing body of evidence supporting a transformational viewpoint (Jin, 2019). They suggest that when TPCK is assessed in its current form, the hybrid components TPK and PCK have the most influence, with a particularly strong link between TPK and TPCK. Contrary to the theoretical model, no significant impact of TCK on TPCK was observed. Comparing these results to previous TPACK structural equation models reveals clear differences.
According to Pamuk et al. (2015), TPK, TCK, and PCK were all significant predictors of TPCK. Dong et al. (2015) and Koh et al. (2013) found that although PCK was not a good predictor of TPCK, TPK and TCK were. Celik and colleagues (2014) reported that PCK and TCK had a positive influence on TPCK but TPK did not. In the present study, TCK had no significant impact on TPCK, whereas TPK and PCK did. How can these divergent outcomes be explained? One possibility is that the TCK items in the present study were revised to be more clearly distinguishable from TPCK. Another is that the TPACK knowledge components interact differently in different contexts.

Conclusion
Teacher education and professional development can benefit from the outcomes of this study. When TPCK is viewed as transformative, an increase in TK or PK does not necessarily imply an increase in TPCK (Angeli, Valanides, & Christodoulou, 2016). As a result, teacher preparation programs focusing on TK alone will likely not translate into TPCK right away; instead, the transformation of knowledge into TPCK must be deliberately fostered. Teacher preparation programs must therefore provide students with many opportunities to learn and practice the various components of knowledge and, more importantly, the combinations of these components. As this study's findings confirm, TPCK development relies heavily on high-quality technology experiences throughout teacher preparation.

Limitations and future research
Future research will need to overcome some of the shortcomings of this study, the most prominent of which concern the sample, the survey instrument, and the cross-sectional design. The sample comprised pre-service elementary school teachers in the initial stages of their teacher training.
This suggests that, apart from those already well-versed in the subject matter, these teachers have only a limited grasp of TPACK's many components (Koehler et al., 2014). Additional research is needed in larger samples with better statistical power, across cultures and teacher demographics, to establish the questionnaire's overall validity and the validity of its specific subscales. Further research on the precise structure of TPACK, which is still under investigation, may also benefit from larger samples. Other limitations concern the survey instrument. Contextual knowledge was not examined in this research, as has been the case in other TPACK investigations: few empirical studies have tried to examine context as part of teachers' knowledge, even though it has been considered an important body of knowledge in some research (Mishra et al., 2019). According to Porras-Hernández and Salinas-Amescua (2013), the context and its many levels (micro, meso, macro) should be included in future studies to better understand the structure and practical application of this knowledge. Beyond the lack of contextual references, a further issue with the instrument's content is that evaluating teacher knowledge at the topic level might be overly broad, as knowledge can differ between disciplines; future research should therefore seek a more fine-grained way to measure TPACK. Finally, this instrument relies only on pre-service teachers' self-reported knowledge, which raises the question of how reliably they can describe their own knowledge (Drummond & Sweeney, 2017). Future research could compare self-reports with other TPACK measures, such as lesson observations or performance evaluations, to counteract these biases (Koehler et al., 2012).
Such comparisons could also provide important evidence of the convergent validity of self-reported TPACK (Jung & Baser, 2014). To further investigate the complicated links between TPACK and self-efficacy, self-regulation, beliefs, or attitudes toward educational technology, the interactions between TPACK and these constructs will need to be examined; Krauskopf and Forssell (2018), for instance, showed that beliefs influence the links between self-reported TPACK and other factors. Understanding how these factors interact may help increase the effectiveness of technology integration in teacher education and the classroom. Our findings support a transformational interpretation of TPACK; however, the reported effects are correlational rather than causal, and the precise interaction of the TPACK components and their reciprocal effects on one another will require longitudinal examination. Even with these limitations, this study may represent a first step toward a simpler but still trustworthy survey assessment of TPACK that takes into account the transformational character of the model's mix of knowledge domains. The brevity and efficiency of the instrument should make it easier to include TPACK evaluations in larger studies with diverse teacher populations.

References
Abbitt, J. (2011). Measuring technological pedagogical content knowledge in preservice teacher education: A review of current methods and instruments. Journal of Research on Technology in Education, 43(4), 281–300. https://doi.org/10.1080/15391523.2011.10782573
Angeli, C., & Valanides, N. (2005).
Preservice elementary teachers as information and communication technology designers: An instructional systems design model based on an expanded view of pedagogical content knowledge. Journal of Computer Assisted Learning, 21(4), 292–302. https://doi.org/10.1111/j.1365-2729.2005.00135.x
Angeli, C., & Valanides, N. (2009). Epistemological and methodological issues for the conceptualization, development, and assessment of ICT-TPCK: Advances in technological pedagogical content knowledge (TPCK). Computers & Education, 52(1), 154–168. https://doi.org/10.1016/j.compedu.2008.07.006
Angeli, C., Valanides, N., & Christodoulou, A. (2016). Theoretical considerations of technological pedagogical content knowledge. In M. C. Herring, P. Mishra, & M. J. Koehler (Eds.), Handbook of technological pedagogical content knowledge for educators (2nd ed., pp. 11–32). New York, NY: Routledge.
Archambault, L., & Barnett, J. H. (2010). Revisiting technological pedagogical content knowledge: Exploring the TPACK framework. Computers & Education, 55(4), 1656–1662. https://doi.org/10.1016/j.compedu.2010.07.009
Archambault, L., & Crippen, K. (2009). Examining TPACK among K–12 online distance educators in the United States. Contemporary Issues in Technology and Teacher Education, 9(1), 71–88.
Bilici, S. C., Yamak, H., Kavak, N., & Guzey, S. S. (2013). Technological pedagogical content knowledge self-efficacy scale (TPACK-SeS) for pre-service science teachers: Construction, validation, and reliability. Eurasian Journal of Educational Research, 52, 37–60.
Brantley-Dias, L., & Ertmer, P. A. (2013). Goldilocks and TPACK. Journal of Research on Technology in Education, 46(2), 103–128. https://doi.org/10.1080/15391523.2013.10782615
Celik, I., Sahin, I., & Akturk, A. O. (2014). Analysis of the relations among the components of technological pedagogical and content knowledge (TPACK): A structural equation model. Journal of Educational Computing Research, 51(1), 1–22.
https://doi.org/10.2190/ec.51.1.a. Chai, C. S., Koh, J. H. L., & Tsai, C.-C. (2010). Facilitating preservice teachers’ development of technological, pedagogical, and content knowledge (TPACK). Journal of Educational Technology & Society, 13(4), 63–73. Chai, C. S., Koh, J. H. L., & Tsai, C.-C. (2011a). Exploring the factor structure of the constructs of technological, pedagogical, content knowledge (TPACK). The Asia- Pacific Education Researcher, 20(3), 595–603. Chai, C. S., Koh, J. H. L., Tsai, C.-C., & Tan, L. L. W. (2011b). Modeling primary school pre-service teachers’ technological pedagogical content knowledge (TPACK) for meaningful learning with information and communication technology (ICT). Computers & Education, 57(1), 1184–1193. https://doi.org/10.1016/j. compedu.2011.01.007. Cho, E., & Kim, S. (2015). Cronbach’s coefficient alpha: Well known but poorly understood. Organizational Research Methods, 18(2), 207–230. https://doi.org/ 10.1177/1094428114555994. https://doi.org/10.1080/15391523.2011.10782573 https://doi.org/10.1111/j.1365-2729.2005.00135.x https://doi.org/10.1111/j.1365-2729.2005.00135.x https://doi.org/10.1111/j.1365-2729.2005.00135.x https://doi.org/10.1016/j.compedu.2008.07.006 https://doi.org/10.1016/j.compedu.2008.07.006 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref4 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref4 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref4 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref4 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref4 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref4 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref4 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref4 https://doi.org/10.1016/j.compedu.2010.07.009 https://doi.org/10.1016/j.compedu.2010.07.009 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref6 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref6 http://refhub.elsevier.com/S0360-1315(20)30165-2/sref6 
Demetriou, C., Uzun Ozer, B., & Essau, C. A. (2015). Self-report questionnaires. In R. L. Cautin, & S. O. Lilienfeld (Eds.), The encyclopedia of clinical psychology (pp. 1–6). New York, NY: John Wiley & Sons. https://doi.org/10.1002/9781118625392.wbecp50
Deng, F., Chai, C. S., So, H.-J., Qian, Y., & Chen, L. (2017). Examining the validity of the technological pedagogical content knowledge (TPACK) framework for preservice chemistry teachers. Australasian Journal of Educational Technology, 33(3), 1–14. https://doi.org/10.14742/ajet.3508
Deng, L., & Chan, W. (2017). Testing the difference between reliability coefficients alpha and omega. Educational and Psychological Measurement, 77(2), 185–203. https://doi.org/10.1177/0013164416658325
Doering, A., & Veletsianos, G. (2008). An investigation of the use of real-time, authentic geospatial data in the K–12 classroom. Journal of Geography, 106(6), 217–225. https://doi.org/10.1080/00221340701845219
Dong, Y., Chai, C. S., Sang, G.-Y., Koh, J. H. L., & Tsai, C.-C. (2015). Exploring the profiles and interplays of pre-service and in-service teachers' technological pedagogical content knowledge (TPACK) in China. Journal of Educational Technology & Society, 18(1), 158–169.
Drummond, A., & Sweeney, T. (2017). Can an objective measure of technological pedagogical content knowledge (TPACK) supplement existing TPACK measures? British Journal of Educational Technology, 48(4), 928–939. https://doi.org/10.1111/bjet.12473
Fisser, P., Voogt, J., van Braak, J., & Tondeur, J. (2015).
Measuring and assessing TPACK (technological pedagogical content knowledge). In J. Spector (Ed.), The SAGE encyclopedia of educational technology (pp. 490–492). Thousand Oaks, CA: SAGE. https://doi.org/10.4135/9781483346397.n205
Foulger, T. S., Graziano, K. J., Schmidt-Crawford, D., & Slykhuis, D. A. (2017). Teacher educator technology competencies. Journal of Technology and Teacher Education, 25(4), 413–448.
Frigg, R., & Hartmann, S. (2018). Models in science. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy (Summer 2018 Ed.). Retrieved from https://plato.stanford.edu/archives/sum2018/entries/models-science
Gagne, P., & Hancock, G. R. (2006). Measurement model quality, sample size, and solution propriety in confirmatory factor models. Multivariate Behavioral Research, 41(1), 65–83. https://doi.org/10.1207/s15327906mbr4101_5
Graham, C. R. (2011). Theoretical considerations for understanding technological pedagogical content knowledge (TPACK). Computers & Education, 57(3), 1953–1960. https://doi.org/10.1016/j.compedu.2011.04.010
Grønfeldt Winther, R. (2015). The structure of scientific theories. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy (Winter 2016 Ed.). Retrieved from https://plato.stanford.edu/entries/structure-scientific-theories/
Hew, K. F., Lan, M., Tang, Y., Jia, C., & Lo, C. K. (2019). Where is the "theory" within the field of educational technology research? British Journal of Educational Technology, 50(3), 956–971. https://doi.org/10.1111/bjet.12770
Hooper, D., Coughlan, J., & Mullen, M. (2012). Structural equation modelling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53–60. https://doi.org/10.21427/D7CF7R
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. https://doi.org/10.1080/10705519909540118
Jang, S.-J., & Chen, K.-C. (2010).
From PCK to TPACK: Developing a transformative model for pre-service science teachers. Journal of Science Education and Technology, 19, 553–564. https://doi.org/10.1007/s10956-010-9222-y
Jang, S.-J., & Tsai, M.-F. (2012). Exploring the TPACK of Taiwanese elementary mathematics and science teachers with respect to use of interactive whiteboards. Computers & Education, 59(2), 327–338. https://doi.org/10.1016/j.compedu.2012.02.003
Jimoyiannis, A. (2010). Designing and implementing an integrated technological pedagogical science knowledge framework for science teachers professional development. Computers & Education, 55(3), 1259–1269. https://doi.org/10.1016/j.compedu.2010.05.022
Jin, Y. (2019). The nature of TPACK: Is TPACK distinctive, integrative or transformative? In Society for information technology & teacher education international conference (pp. 2199–2204). Association for the Advancement of Computing in Education (AACE).
Kabakci Yurdakul, I., Odabasi, H. F., Kilicer, K., Coklar, A. N., Birinci, G., & Kurt, A. A. (2012). The development, validity and reliability of TPACK-deep: A technological pedagogical content knowledge scale.
Computers & Education, 58(3), 964–977. https://doi.org/10.1016/j.compedu.2011.10.012
Kimmons, R. (2015). Examining TPACK's theoretical future. Journal of Technology and Teacher Education, 23(1), 53–77.
Koehler, M. J., & Mishra, P. (2008). Introducing TPCK. In AACTE Committee on Innovation and Technology (Ed.), Handbook of technological pedagogical content knowledge (TPCK) for educators (pp. 2–29). New York, NY: Routledge.
Koehler, M. J., Mishra, P., Kereluik, K., Shin, T. S., & Graham, C. R. (2014). The technological pedagogical content knowledge framework. In J. M. Spector, D. M. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 101–111). New York, NY: Springer.
Koehler, M. J., Shin, T. S., & Mishra, P. (2012). How do we measure TPACK? Let me count the ways. In R. N. Ronau, C. R. Rakes, & M. L. Niess (Eds.), Educational technology, teacher knowledge, and classroom impact: A research handbook on frameworks and approaches (pp. 16–31). Hershey, PA: IGI Global.
Koh, J. H. L., Chai, C. S., & Tsai, C.-C. (2010). Examining the technological pedagogical content knowledge of Singapore pre-service teachers with a large-scale survey. Journal of Computer Assisted Learning, 26, 563–573. https://doi.org/10.1111/j.1365-2729.2010.00372.x
Koh, J. H. L., Chai, C. S., & Tsai, C.-C. (2013). Examining practicing teachers' perceptions of technological pedagogical content knowledge (TPACK) pathways: A structural equation modeling approach. Instructional Science, 41(4), 793–809. https://doi.org/10.1007/s11251-012-9249-y
Kopcha, T. J., Ottenbreit-Leftwich, A., Jung, J., & Baser, D. (2014). Examining the TPACK framework through the convergent and discriminant validity of two measures. Computers & Education, 78, 87–96. https://doi.org/10.1016/j.compedu.2014.05.003
Krauskopf, K., & Forssell, K. (2018). When knowing is believing: A multi-trait analysis of self-reported TPCK. Journal of Computer Assisted Learning, 34, 482–491. https://doi.org/10.1111/jcal.12253
Lee, M.-H., & Tsai, C.-C. (2010). Exploring teachers' perceived self-efficacy and technological pedagogical content knowledge with respect to educational use of the World Wide Web. Instructional Science, 38, 1–21. https://doi.org/10.1007/s11251-008-9075-4
McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Lawrence Erlbaum. https://doi.org/10.4324/9781410601087
McDonald, R. P., & Ho, M.-H. R. (2002). Principles and practice in reporting structural equation analyses. Psychological Methods, 7(1), 64–82. https://doi.org/10.1037/1082-989X.7.1.64
Mishra, P. (2019). Considering contextual knowledge: The TPACK diagram gets an upgrade. Journal of Digital Learning in Teacher Education, 35(2), 76–78. https://doi.org/10.1080/21532974.2019.1588611
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. https://doi.org/10.1111/j.1467-9620.2006.00684.x
Moshagen, M. (2018). semPower: Power analyses for SEM (R package version 1.0.0). https://CRAN.R-project.org/package=semPower
Moshagen, M., & Erdfelder, E. (2016). A new strategy for testing structural equation models. Structural Equation Modeling: A Multidisciplinary Journal, 23(1), 54–60. https://doi.org/10.1080/10705511.2014.950896
Niess, M. L. (2012). Teacher knowledge for teaching with technology: A TPACK lens. In C. R. Rakes, R. N. Ronau, & M. L. Niess (Eds.), Educational technology, teacher knowledge, and classroom impact: A research handbook on frameworks and approaches (pp. 1–15). Hershey, PA: IGI Global.
Pamuk, S. (2012). Understanding preservice teachers' technology use through TPACK framework. Journal of Computer Assisted Learning, 28, 425–439. https://doi.org/10.1111/j.1365-2729.2011.00447.x
Pamuk, S., Ergun, M., Cakir, R., Yilmaz, H. B., & Ayas, C. (2015). Exploring relationships among TPACK components and development of the TPACK instrument. Education and Information Technologies, 20, 241–263. https://doi.org/10.1007/s10639-013-9278-4
Petko, D. (2020). Quo vadis TPACK? Scouting the road ahead. In Proceedings of EdMedia + Innovate Learning (pp. 1277–1286). The Netherlands: Association for the Advancement of Computing in Education (AACE). Retrieved from https://www.learntechlib.org/primary/p/217445/
Porras-Hernández, L. H., & Salinas-Amescua, B. (2013). Strengthening TPACK: A broader notion of context and the use of teacher's narratives to reveal knowledge construction. Journal of Educational Computing Research, 48(2), 223–244.
R Core Team. (2019). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/
Rammstedt, B., & Beierlein, C. (2014). Can't we make it any shorter? Journal of Individual Differences, 35(4), 212–220. https://doi.org/10.1027/1614-0001/a000141
Revelle, W. (2018). psych: Procedures for personality and psychological research (Version 1.8.12). Evanston, IL: Northwestern University. https://CRAN.R-project.org/package=psych
Rosenberg, J. M., & Koehler, M. J. (2015). Context and technological pedagogical content knowledge (TPACK): A systematic review. Journal of Research on Technology in Education, 47(3), 186–210. https://doi.org/10.1080/15391523.2015.1052663
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. http://www.jstatsoft.org/v48/i02/
Saad, M. M., Barbar, A. M., & Abourjeili, S. A. R. (2012). Introduction of TPACK-XL: A transformative view of ICT-TPCK for building pre-service teacher knowledge base. Turkish Journal of Teacher Education, 1(2), 41–60.
Sahin, I. (2011). Development of survey of technological pedagogical and content knowledge (TPACK). The Turkish Online Journal of Educational Technology, 10(1), 97–105.
Scherer, R., Tondeur, J., & Siddiq, F. (2017). On the quest for validity: Testing the factor structure and measurement invariance of the technology-dimensions in the technological, pedagogical, and content knowledge (TPACK) model. Computers & Education, 112, 1–17. https://doi.org/10.1016/j.compedu.2017.04.012
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK). Journal of Research on Technology in Education, 42(4), 123–149. https://doi.org/10.1080/15391523.2009.10782544
Schmitt, T. A. (2011). Current methodological considerations in exploratory and confirmatory factor analysis. Journal of Psychoeducational Assessment, 29(4), 304–321. https://doi.org/10.1177/0734282911406653
Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. Journal of Educational Research, 99, 323–338. https://doi.org/10.3200/JOER.99.6.323-338
Schweizer, K. (2011).
Some thoughts concerning the recent shift from measures with many items to measures with few items. European Journal of Psychological Assessment, 27(2), 71–72. https://doi.org/10.1027/1015-5759/a000056
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. https://doi.org/10.3102/0013189X015002004
Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–22. https://doi.org/10.17763/haer.57.1.j463w79r56455411
Valtonen, T., Sointu, E., Kukkonen, J., Kontkanen, S., Lambert, M. C., & Mäkitalo-Siegl, K. (2017). TPACK updated to measure pre-service teachers' twenty-first century skills. Australasian Journal of Educational Technology, 33(3), 15–31.
Valtonen, T., Sointu, E., Kukkonen, J., Mäkitalo, K., Hoang, N., Häkkinen, P., et al. (2019). Examining pre-service teachers' technological pedagogical content knowledge as evolving knowledge domains: A longitudinal approach. Journal of Computer Assisted Learning, 35, 491–502. https://doi.org/10.1111/jcal.12353
Valtonen, T., Sointu, E., Mäkitalo-Siegl, K., & Kukkonen, J. (2015). Developing a TPACK measurement instrument for 21st century pre-service teachers. Seminar.net, 11(2), 87–100.
Voogt, J., Fisser, P., Pareja Roblin, N., Tondeur, J., & van Braak, J. (2013). Technological pedagogical content knowledge – a review of the literature. Journal of Computer Assisted Learning, 29, 109–121. https://doi.org/10.1111/j.1365-2729.2012.00487.x
Wang, W., Schmidt-Crawford, D., & Jin, Y. (2018). Preservice teachers' TPACK development: A review of literature. Journal of Digital Learning in Teacher Education, 34(4), 234–258. https://doi.org/10.1080/21532974.2018.1498039
Willermark, S. (2018). Technological pedagogical and content knowledge: A review of empirical studies published from 2011 to 2016. Journal of Educational Computing Research, 56(3), 315–343. https://doi.org/10.1177/0735633117713114
Wolf, E. J., Harrington, K. M., Clark, S. L., & Miller, M. W. (2013). Sample size requirements for structural equation models: An evaluation of power, bias, and solution propriety. Educational and Psychological Measurement, 73(6), 913–934. https://doi.org/10.1177/0013164413495237