International Journal of Interactive Mobile Technologies (iJIM) – eISSN: 1865-7923 – Vol. 16, No. 04, 2022

Online Competency-Based Assessment (OCBA): From Conceptual Model to Operational Authoring System

https://doi.org/10.3991/ijim.v16i04.28373

Mohammed Amraouy1(), Mostafa Bellafkih1, Abdellah Bennane2, Mohammed Majid Himmi3
1National Institute of Posts and Telecommunications, Rabat, Morocco
2Inspectors Training Center for Education (CFIE), Mohammed V University, Rabat, Morocco
3Faculty of Science, Mohammed V University, Rabat, Morocco
amraouy.mohamed1@gmail.com

Abstract—Online learning has grown continuously over the last decade, accelerated by the COVID-19 pandemic, in order to offer learners new possibilities for learning and to guarantee pedagogical continuity. But some aspects, including learning assessment, are still in development and require profound changes to meet the demands of the 21st century. However, by taking advantage of artificial intelligence techniques and new learning theories, it is possible to create an assessment system in accordance with the active learning approach. This paper illustrates the models, algorithms and design used to create a formative and summative assessment system that will be mobilized to evaluate learners' knowledge, skills and competencies.

Keywords—assessment, competency-based approach, authoring system, e-learning, feedback

1 Introduction

ICTs and e-learning are becoming more and more integrated into education and training processes. This integration has been accelerated by the COVID-19 pandemic. Digital technologies make the teaching and learning process available regardless of time restrictions or geographic proximity, and permit free interactions between learners and teachers or among learners [1], [2].

There are numerous commercial and open-source e-learning platforms with various functionalities and services.
Despite these technological advances, certain aspects, including learning assessment, remain in the classic test format. Assessment is central and an integral part of any teaching and learning situation [3]–[5]. In an online situation, assessment becomes a complex, even complicated, and time-consuming process because of the temporal and spatial distances that separate the pedagogical staff and the learner [6], [7]. Teachers are invited to invest more effort in developing assessment activities and providing effective feedback.

Competency-based assessment is still insufficiently implemented, or even not addressed at all [8]–[12]. In most e-learning platforms, the assessment process is primarily oriented toward verifying knowledge acquisition. We are therefore invited to rethink the way we design the assessment process so that it can meet the demands of 21st century skills [13] and reflect learners' competencies.

In this contribution, we describe a model of formative and summative online learning assessment adapted to the competency-based approach. This article is also an opportunity to report the main results in terms of pedagogical and ergonomic relevance following the implementation of this model.

2 Competency-based assessment

In terms of pedagogical design and practice, most education systems adopt the competency-based approach as a framework of reference for all educational and training practice [12]. This approach pushes the educational actor to adopt active, efficient and learner-centered methods, in order to promote learners' autonomy and facilitate their social and economic integration. The constructivist, socio-cognitivist and connectivist conceptions are the three theoretical frameworks that best meet these requirements [14], [15].
In fact, competency-based assessment is a complex process, which inherits its complexity from assessment using complex situations that confront the learner with the unexpected and lead him/her to produce unfinished and complex responses [16].

We are facing a profound educational change: the shift from a paradigm oriented toward teaching to a paradigm focused on the activity of the learner [17]. In this reflection: (1) the participation of learners in a so-called "authentic" assessment becomes a source of meaningful learning; in this sense, Laveault asserts the importance of tolerating "asymmetric" training forms where not all students learn the same content at the same time [18]; (2) the ways of conceptualizing evaluation are diversifying and becoming more precise according to the pedagogical and didactic conception of learning and the content to be taught [19]–[21]; (3) learning assessment must be included in any pedagogical and didactic act while mobilizing its formative dimension, since reducing evaluation to a single figure cannot fully capture its complexity; and (4) the gradual rejection of the behaviorist conception, which privileges evaluation methods whose goal is to increase learners' performance against fixed objectives, becomes a necessity to promote in-depth learning [16], [22], [23]. Throughout the design process of our authoring system, we have tried to respect the principles of this reflection.

3 Assessment in a digital learning environment

Assessment is a fundamental and determining aspect of any teaching-learning process [24]. It is the only way to better understand learners' knowledge and competencies [3], [25]. This is particularly true in the case of online learning.

The communication that exists in a face-to-face classroom does not occur in an online classroom [26], [27].
For this reason, the learner in such a learning environment needs special attention, so that his/her real needs are taken into consideration in the choice of learning situations and feedback. According to Charles Juwah [28], assessment must: (1) be motivating for learners, (2) encourage sustained learning activity, (3) contribute to learner progress and achievement, and (4) require little human effort and be easily maintainable.

It is generally accepted that the types of assessment in an online learning environment aim at detecting and assessing the quality of learners' interactions [29], with the main goal of motivating learners and facilitating their learning process [30]. Referenced practices in these types of environments cover the main classic assessment categories (prognostic, diagnostic, formative and summative) [31], [32].

Online assessment, as a learning process, should provide activities that facilitate self-assessment, self-regulation, peer assessment and learner autonomy [33]. For this, assessment tools and methods can be very different. Arend [34] reviewed 60 online courses, which allowed him to identify assessment methods and tools that include online discussions, tests, written assignments, projects, quizzes, presentations and e-portfolios [8], [35].

We distinguish two types of tools or authoring systems for creating assessment activities: those based on a pedagogical approach and those that are performance-oriented [36]. The first category focuses on how to segment and teach content to facilitate the learning process. The second category is mainly interested in making the teaching-learning environment relatively rich, so that learners can acquire knowledge by practicing it and receiving feedback [37].

4 A formal model of competency-based assessment

Traditional assessment generally aims to verify the degree of knowledge acquisition through objective tests.
However, in the active pedagogical vision, the assessment goal is oriented toward verifying competency development, by mobilizing criteria and indicators linked to projects, productions, simulations or e-portfolios [38]. In this sense, each competency is verified by a test that includes a set of weighted elements. In the same way, to provide appropriate feedback, the test uses a criteria-based assessment grid and rules.

In Figure 1 below, we have tried to highlight the different elements and components of the proposed model.

Fig. 1. A formal model of online competency-based assessment (OCBA model)

In the OCBA model, each learner is characterized by a competency profile [8] and may take an assessment to validate, verify and gauge his/her level of competency development and knowledge acquired. An assessment therefore serves as evidence for examining students' progression in the development of one or more competencies.

Each assessment is made up of one or more questions (exercises). Each question has a weight. The weight is the number of points that will be added to the final score, in order to calculate the competency acquisition degree. In this case, the exercise plays the role of a mastery indicator for one or more criteria. Each response to a question can have feedback for both correct and incorrect answers.

A competency can be linked to one or more assessment tests. In this assessment learning process, the validation level is determined by the teacher or tutor. With this model, we have tried to reduce the complexity of competency-based assessment.
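To make the model concrete, the relationships described above (competency, weighted questions, per-answer feedback, teacher-defined validation level) can be sketched as a minimal data model. This is an illustrative sketch only: the class names, fields and default threshold below are our assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of the OCBA model (Figure 1); all names are assumptions.

@dataclass
class Question:
    text: str
    weight: float                 # points added to the final score
    feedback_correct: str = ""    # feedback shown for a correct answer
    feedback_incorrect: str = ""  # feedback shown for an incorrect answer

@dataclass
class Assessment:
    competency: str                           # competency verified by this test
    questions: List[Question] = field(default_factory=list)
    validation_level: float = 0.7             # threshold set by the teacher/tutor

    def acquisition_degree(self, answers: Dict[int, bool]) -> float:
        """Weighted share of points earned over the total question weight."""
        total = sum(q.weight for q in self.questions)
        earned = sum(q.weight for i, q in enumerate(self.questions) if answers.get(i))
        return earned / total if total else 0.0

    def validated(self, answers: Dict[int, bool]) -> bool:
        """The competency is validated when the degree reaches the set level."""
        return self.acquisition_degree(answers) >= self.validation_level
```

For example, an assessment with two questions weighted 2 and 3 where only the first is answered correctly yields an acquisition degree of 2/5 = 0.4, below the default validation level in this sketch.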
5 The OCBA platform

The present OCBA platform admits a triple function: (1) creation and editing of assessment activities, (2) visualization and simulation of the assessment process in order to make the necessary adjustments (internal evaluation), and (3) publishing, exporting or archiving assessments. Figure 2 is a simplified flowchart illustrating the major steps of the algorithm for creating assessment activities.

Fig. 2. Creating assessment activities algorithm

Point 2 concerns the initialization of the assessment process. Its main goal is to determine the intended competencies. Each teacher or tutor in charge identifies the competencies to be assessed through the OCBA platform.

In point 3, the teacher can make a suitable pedagogical configuration regarding the content unit's design, according to his/her teaching choices and learners' needs. In point 4, teachers can create a competency-based assessment or simply create classic assessment activities. In point 5, the teacher must add correction criteria and indicators: each competency is split up into a series of criteria, and each criterion will be verified using some indicators.

Figure 3 shows the assessment process followed by the learner.

Fig. 3. Learner assessment algorithm

After a double identity check (classic method and facial recognition) and test presentation, point 3 concerns the initialization of the learner's model, with the main goal of estimating the learner's initial competency acquisition level. In point 4, the system aims to select the optimal question taking into account:

• Difficulty index: indicates the proportion of learners who answered the item correctly (also called the P index or item difficulty).
P_index = n / N, (1)

where: (1) P_index = difficulty index, (2) n = number of learners answering the item correctly, and (3) N = total number of learners who answered the test.

• Discrimination index (DI): refers to the ability of an item to differentiate among learners on the basis of how well they know the material being tested. In fact, it is a variant of the point-biserial (Pearson) correlation coefficient r_pbis:

DI = (H − L) / (N / 2), (2)

where: (1) H = number of correct answers in the high group, (2) L = number of correct answers in the low group, and (3) N = total number of students in both groups.

• Cognitive level: based on a modified version of Bloom's taxonomy. For ease of classification, the six cognitive domains described by Bloom's taxonomy (knowledge, comprehension, application, analysis, synthesis, and evaluation) have been collapsed into four: (1) knowledge, including assessment activities (questions, exercises, situations …) that emphasize the remembering of ideas, material, or phenomena; (2) understanding, which refers to assessment activities carried out to verify whether the learner is able to translate, interpret or extrapolate; (3) application, which requires the learner to apply a theory, principle, idea or method to a new situation; and (4) higher mental processes, which include assessment activities related to analysis, synthesis and evaluation.

In point 6, the OCBA system mobilizes three correction strategies depending on the type of question: (1) automatic correction in the case of closed questions; (2) semi-automatic correction for project exercises, where the system allows the teacher to consult the learner's response and check off the indicators present in his/her work, while displaying the following information: competency acquisition level, learner achievement level, competency validated or not, and real-time score in the form of badges and points; and (3) self-assessment: learners are provided with self-assessment forms to reflect on their own projects.
They can check off the indicators and see their results synchronously and automatically. This assessment method helps learners to understand the assessment process and to develop their transversal and disciplinary skills. In this sense, several researchers claim that self-assessment increases involvement, independence and assertiveness, and improves learners' thinking processes [25], [39]–[41]. In addition, self-assessment allows the teacher to save time and quickly assign badges or skill points [38].

For the stopping rule, in point 8, a threshold value of the competency acquisition level (AL) is used as a validation criterion. In the situation of non-validation, a time limit and a number of items are fixed in advance. As long as the stopping rule is not verified, a new item is selected and administered [42].

6 Results and discussion

Two types of information concern the evaluation of educational software: one pedagogical [43] and the other ergonomic [44]. Indeed, the objective of this quality evaluation was twofold: on the one hand, to check the pedagogical relevance of the authoring system, verified via pedagogical staff feedback concerning the structure, quality, consistency and services provided by the application; on the other hand, to examine the ergonomic quality, especially the usability of the features and services provided (competency management, creation of tests and projects, correction process, etc.).

Several practical tests and simulations were carried out in order to determine the effectiveness of the proposed model. The purpose of these experiments was to produce assessment activities on a theme that the participants, 20 teachers and 11 pedagogical inspectors, chose with reference to school curricula.
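Returning to the learner assessment algorithm of Section 5, the item statistics of equations (1) and (2) and the stopping rule of point 8 can be sketched as follows. This is a hedged sketch, not the OCBA implementation: the function names, the difficulty target of 0.5, and the default threshold and item budget are our assumptions.

```python
def difficulty_index(n_correct: int, n_learners: int) -> float:
    """Equation (1): proportion of learners who answered the item correctly."""
    return n_correct / n_learners

def discrimination_index(high: int, low: int, n_total: int) -> float:
    """Equation (2): DI = (H - L) / (N / 2), comparing high- and low-scoring groups."""
    return (high - low) / (n_total / 2)

def select_item(pool, target=0.5):
    """Pick the item whose difficulty is closest to the target (an illustrative
    heuristic; the OCBA system also weighs DI and cognitive level)."""
    return min(pool, key=lambda item: abs(item["difficulty"] - target))

def run_assessment(items, answer_fn, threshold=0.7, min_items=2, max_items=10):
    """Stopping-rule sketch (point 8): administer items until the acquisition
    level (AL) reaches the threshold, or the fixed item budget runs out."""
    pool = list(items)
    earned = asked_weight = 0.0
    n_asked, al = 0, 0.0
    while pool and n_asked < max_items:
        item = select_item(pool)
        pool.remove(item)
        n_asked += 1
        asked_weight += item["weight"]
        if answer_fn(item):          # learner's response, corrected automatically
            earned += item["weight"]
        al = earned / asked_weight   # acquisition level so far
        if n_asked >= min_items and al >= threshold:
            break                    # competency validated: stop early
    return al, al >= threshold
```

For instance, if 30 of 40 learners answer an item correctly, equation (1) gives 0.75; if the high group scores 18 correct and the low group 6 out of 40 students in total, equation (2) gives 0.6.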
After this pedagogical experience, we used an opinion grid containing closed, semi-closed and open questions, allowing us to collect data concerning the two dimensions targeted by this quality evaluation process. Figures 4 and 5 show the feedback expressed.

Fig. 4. Pedagogical relevance of the OCBA system

Figure 4 shows that pedagogical satisfaction for all the proposed activities presents a homogeneous distribution, with an average of 75.20% and a dispersion of 0.05. Teachers and pedagogical inspectors generally appreciated the pedagogical quality of the activities offered. They confirmed the effectiveness of the educational structure adopted by the OCBA authoring system (Avg = 84%) as well as the consistency of the design with the vision of the competency-based approach (Avg = 83%). In fact, 71% state that the elements taken into account by each of the proposed evaluation activities are largely sufficient.

The relevance of the correction process for learning projects ranks slightly above 70% satisfaction. Likewise, 64.52% consider that the correction data generated automatically by the authoring system is broadly sufficient.

Fig. 5. Ergonomic quality of the OCBA system

Reading the figure above, we observe that only half of the pedagogical actors expressed a high degree of satisfaction with the main menu structure (Avg = 51.61%). In addition, 35.48% believe that this structuring needs to be improved, while 12.90% moderately disagree with the main menu organization. These perceptions are logical and understandable because we did not give much attention to main menu ergonomics, since our authoring system is intended to be integrated into another platform. However, almost all of the respondents confirmed the readability of the content (assessment activities, feedback, corrections, etc.)
and its ability to adapt to the visitor's screen (Avg = 90.32%). Along with this readability satisfaction, it is interesting to observe that 80.65% of the experimenters had no difficulty using the functionalities and services provided by the OCBA system.

7 Conclusion

The quality of online learning can be enhanced by adapting the content, assessment process and learning resources to the principles of active learning pedagogies, thus promoting meaningful learning. In this sense, we have tried to model and develop an online authoring system for formative and summative assessment, whose main objective is to facilitate the process of creating competency-oriented assessment activities. The experimentation of this authoring system by pedagogical experts has proven, on the one hand, its pedagogical and ergonomic relevance; on the other hand, it allowed us to determine the dimensions to be improved. As a result, we are now planning to improve our OCBA system by adding the following functionalities and modules:

1) Peer assessment, in order to give learners the possibility to receive peers' feedback on their production or performance, and
2) An improved feedback process, enriched with various resources such as images, videos and links.

8 References

[1] M. G. Violante and E. Vezzetti, “Implementing a new approach for the design of an e-learning platform in engineering education,” Comput. Appl. Eng. Educ., vol. 22, no. 4, pp. 708–727, 2014, https://doi.org/10.1002/cae.21564
[2] T. A. R. Abukhalil, S. M.-F. Halawani, and W. M. Daher, “School principals’ evaluation of the effectiveness of employing distance learning tools by teachers,” Int. J. Interact. Mob. Technol., vol. 15, no. 19, p. 64, 2021, https://doi.org/10.3991/ijim.v15i19.24837
[3] J. A. Baird, D. Andrich, T. N. Hopfenbeck, and G.
Stobart, “Assessment and learning: fields apart?,” Assess. Educ. Princ. Policy Pract., vol. 24, no. 3, pp. 317–350, 2017, https://doi.org/10.1080/0969594X.2017.1319337
[4] J. Baird, P. E. Newton, T. N. Hopfenbeck, and G. Stobart, “Assessment and learning: state of the field review,” Knowl. Cent. Educ., Jul. 2014.
[5] M. Mogapi, “Examinations wash back effects: challenges to the criterion referenced assessment model,” J. Educ. e-Learning Res., vol. 3, no. 3, pp. 78–86, 2016, https://doi.org/10.20448/journal.509/2016.3.3/509.3.78.86
[6] R. Beebe, S. Vonderwell, and M. Boboc, “Emerging patterns in transferring assessment practices from F2f to online environments,” Electron. J. e-Learning, vol. 8, no. 1, pp. 1–12, 2010.
[7] L. R. Kearns, “Student assessment in online learning: challenges and effective practices,” vol. 8, no. 3, pp. 198–208, 2012.
[8] M. Ilahi-Amri, L. Cheniti-Belcadhi, and R. Braham, “A framework for competence based e-assessment,” Interact. Des. Archit., vol. 2017, no. 32, pp. 189–204, 2017.
[9] G. Paquette, D. Rogozan, and O. Marino, “Competency comparison relations for recommendation in technology enhanced learning scenarios,” CEUR Workshop Proc., vol. 896, pp. 23–34, Sep. 2012.
[10] J. S. Magdaleno-Palencia, M. Garcia-Valdez, M. Castanon-Puga, and L. A. Gaxiola-Vega, “On the modelling of adaptive hypermedia systems using agents for courses with the competency approach,” Commun. Comput. Inf. Sci., vol. 181 CCIS, no. part 3, pp. 624–630, 2011, https://doi.org/10.1007/978-3-642-22203-0_53
[11] J. Najjar et al., “A data model for describing and exchanging personal achieved learning outcomes (PALO),” Int. J. IT Stand. Stand. Res., vol. 8, no. 2, pp. 87–104, 2010, https://doi.org/10.4018/jitsr.2010070107
[12] T. Alsinet, D. Barroso, R. Béjar, and J. Planes, “A formal model of competence-based assessment,” Front. Artif. Intell. Appl., vol. 202, no. 1, pp. 428–436, 2009, https://doi.org/10.3233/978-1-60750-061-2-428
[13] N.
Rungrangtanapol and J. Khlaisang, “Development of a teaching model in virtual learning environment to enhance computational competencies in the 21st century,” Int. J. Interact. Mob. Technol., vol. 15, no. 13, pp. 93–107, 2021, https://doi.org/10.3991/ijim.v15i13.21791
[14] O. Jerez, N. Baloian, and G. Zurita, “Authentic assessment between peers in online courses with a large number of students,” Proc. IEEE 17th Int. Conf. Adv. Learn. Technol. (ICALT 2017), pp. 235–237, 2017, https://doi.org/10.1109/ICALT.2017.160
[15] A. A. Al-Hattami, “E-assessment of students’ performance during the e-teaching and learning,” Int. J. Adv. Sci. Technol., vol. 29, no. 8, pp. 1537–1547, 2020.
[16] M. James, “Assessment, teaching and theories of learning,” Assess. Learn., pp. 47–60, Mar. 2006, https://doi.org/10.13140/2.1.5090.8960
[17] B. Collis and J. Moonen, Flexible Learning in a Digital World. London: Kogan Page, 2001.
[18] D. Laveault, Les pratiques d’évaluation en éducation [Assessment practices in education], L’ADMEE. Montréal, 1992.
[19] D. Bain, “L’évaluation formative fait fausse route” [Formative assessment is on the wrong track], Mes. évaluation en éducation, vol. 10, no. 4, pp. 23–32, 1988.
[20] D. Bain and B.
Schneuwly, “Pour une évaluation formative intégrée dans la pédagogie du français: de la nécessité et de l’utilité de modèles de référence” [For formative assessment integrated into the pedagogy of French: on the necessity and utility of reference models], Neuchâtel: Delachaux et Niestlé, pp. 51–79, 1993.
[21] L. M. Lopez and D. Laveault, “L’évaluation des apprentissages en contexte scolaire: développements, enjeux et controverses” [The assessment of learning in the school context: developments, issues and controversies], Mes. évaluation en éducation, vol. 31, no. 3, pp. 5–34, 2008, https://doi.org/10.7202/1024962ar
[22] P. Boghossian, “Behaviorism, constructivism, and socratic pedagogy,” Educ. Philos. Theory, vol. 38, no. 6, pp. 713–722, 2006, https://doi.org/10.1177/1469787408100194
[23] R. W. Tyler, Basic Principles of Curriculum and Instruction. Chicago: University of Chicago Press, 1950.
[24] J. P. Dintilhac and I. Rak, “Evaluation de la technologie en collège” [Evaluation of technology in middle school], 2005.
[25] G. Ellis, “Looking at ourselves—self-assessment and peer assessment: Practice examples from New Zealand,” Reflective Pract., vol. 2, no. 3, pp. 289–302, 2001, https://doi.org/10.1080/1462394012010303
[26] N. A. Bakar, S. Rosbi, and A. A. Bakar, “Evaluation of students performance using fuzzy set theory in online learning of islamic finance course,” Int. J. Interact. Mob. Technol., vol. 15, no. 7, pp. 202–209, 2021, https://doi.org/10.3991/ijim.v15i07.20191
[27] Z. L. Berge, “Barriers to communication in distance education,” Turkish Online J. Distance Educ., vol. 14, no. 1, pp. 374–388, 2013, https://doi.org/10.17718/tojde.66881
[28] C. Juwah, “Using peer assessment to develop skills and capabilities,” USDLA J., pp. 39–50, 2003.
[29] J. Bull, “Computer-assisted assessment: impact on higher education institutions,” J. Educ. Technol. Soc., vol. 2, no. 3, pp. 123–126, 1999.
[30] M. Amraouy, A. Bennane, M. Majid Himmi, M. Bellafkih, and B. Aziza, “Detecting learner’s motivational state in online learning situation towards adaptive learning environments,” 13th Int. Conf. Intell. Syst. Theor. Appl. (SITA’20), Assoc. Comput. Mach., New York, NY, USA, Article 22, pp.
1–6, 2020, https://doi.org/10.1145/3419604.3419760
[31] S. Chartouf, M. Amraoui, and M. M. Drissi, “L’encadrement pédagogique à l’ère numérique” [Pedagogical supervision in the digital era], Rabat, Maroc, 2017.
[32] L. Audet, “Les pratiques et défis de l’évaluation en ligne” [The practices and challenges of online assessment], Montréal: REFAD, 2011.
[33] S. Vonderwell, “Asynchronous discussions and assessment in online learning,” vol. 39, no. 3, pp. 309–328, 2007, https://doi.org/10.1080/15391523.2007.10782485
[34] B. Arend, “Course assessment practices and student learning strategies in online courses,” J. Asynchronous Learn. Networks, vol. 11, 2006, https://doi.org/10.24059/olj.v11i4.1590
[35] J. Gaytan and B. C. McEwen, “Effective online instructional and assessment strategies,” Am. J. Distance Educ., vol. 21, no. 1, pp. 117–132, 2007, https://doi.org/10.1080/08923640701341653
[36] T. Murray, “Expanding the knowledge acquisition bottleneck for intelligent tutoring systems,” Int. J. AI Educ., vol. 8, no. 3, 1997.
[37] T. Murray, “Authoring intelligent tutoring systems: an analysis of the state of the art,” Int. J. Artif. Intell. Educ., 1999.
[38] N. Alruwais, G. Wills, and M. Wald, “Advantages and challenges of using e-assessment,” Int. J. Inf. Educ. Technol., vol. 8, no. 1, pp. 34–37, 2018, https://doi.org/10.18178/ijiet.2018.8.1.1008
[39] N.
Falchikov, “Product comparisons and process benefits of collaborative peer group and self assessments,” Assess. Eval. High. Educ., vol. 11, no. 2, pp. 146–166, 1986, https://doi.org/10.1080/0260293860110206
[40] S. J. Hanrahan and G. Isaacs, “Assessing self- and peer-assessment: The students’ views,” High. Educ. Res. Dev., vol. 21, no. 1, pp. 53–70, 2001, https://doi.org/10.1080/07294360123776
[41] D. Boud, “Enhancing learning through self-assessment,” Biochem. Educ., vol. 24, no. 3, p. 183, 1996, https://doi.org/10.1016/0307-4412(96)82523-4
[42] C. Schumacher, Linking Assessment and Learning Analytics to Support Learning Processes in Higher Education. 2020, https://doi.org/10.1007/978-3-319-17727-4_166-1
[43] A. Bennane, “Système d’Information Orienté vers la Formation (SIOF): édition de didacticiels adaptatifs” [Training-Oriented Information System (SIOF): editing adaptive tutorials], Rabat, Maroc, 2010.
[44] N. M. S. Araújo and F. R. R. Freitas, “Pedagogic software evaluation protocol: analyzing a digital educational game for portuguese language teaching,” Alfa Rev. Linguística (São José do Rio Preto), vol. 61, no. 2, pp. 381–408, 2017, https://doi.org/10.1590/1981-5794-1709-6

9 Authors

Mohammed Amraouy is a pedagogical inspector in computer science, a part-time trainer at the Regional Center for Education and Training Professions, Oujda, Morocco, and a PhD student at the National Institute of Posts and Telecommunications, Rabat, Morocco. His research interests focus on human-computer interaction, artificial intelligence and online learning assessment. Email: amraouy.mohamed1@gmail.com

Mostafa Bellafkih has been a research professor at the National Institute of Posts and Telecommunications (INPT) in Rabat, Morocco, since 1995. He received his PhD in Computer Science from the University of Paris 6, France, in June 1994 and a Doctorate of Science in Computer Science (networks option) from Mohammed V University in Rabat, Morocco, in May 2001.
His research interests include network management, knowledge management, AI, data mining and databases. Email: bellafkih@inpt.ac.ma

Abdellah Bennane is a research professor at the Training Center of Teaching Inspectors, Rabat, Morocco. He is a professional in applied informatics in education. His recent research covers e-learning, teaching software and the use of machine learning techniques. Email: abdellah.bennane@gmail.com

Mohammed Majid Himmi is a research professor in the LIMIARF Laboratory, at the physics department of the Faculty of Science, Rabat. He is a PhD holder and team leader for PhD students in DSP and DIP. He also teaches various subjects such as programming languages, algorithmics and DSP. He is currently the coordinator of the Master program “Informatique System Telecommunications IST”. Email: m.himmi@um5r.ac.ma

Article submitted 2021-11-19. Resubmitted 2021-12-25. Final acceptance 2021-12-26. Final version published as submitted by the authors.