Australasian Journal of Educational Technology, 2021, 37(2). 1

Enabling adaptive, personalised and context-aware interaction in a smart learning environment: Piloting the iCollab system

Eduardo Araujo Oliveira, Paula de Barba
University of Melbourne

Linda Corrin
Swinburne University of Technology

Smart learning environments (SLEs) provide students with opportunities to interact with learning resources and activities in ways that are customised to their particular learning goals and approaches. A challenge in developing SLEs is providing resources and tasks within a single system that can seamlessly tailor the learning experience in terms of time, place, platform, and form. In this paper we introduce the iCollab platform, an adaptive environment in which learning activities are moderated through conversation with an intelligent agent that can operate across multiple web-based platforms, integrating formal and informal learning opportunities. Fifty-eight undergraduate computer science students were randomly assigned to either an intervention or control group for the 12 weeks of the pilot study. Learning analytics were used to examine their interactions with iCollab, while their course performance was used to investigate the impact of using iCollab on learning outcomes. Results from the study showed a high level of interaction with iCollab, especially social interaction, indicating an interweaving of formal learning within students' informal network spaces. These findings open up new possibilities for ways that SLEs can be designed to incorporate different factors, improving the ability of the system to provide adaptive and personalised learning experiences in relation to context and time.

Implications for practice or policy:
● This system illustrates features that can be implemented in smart learning environments to enable adaptive, context-based, and personalised learning.
● Smart learning environments can combine formal and informal learning contexts to promote student engagement through the provision of flexibility of the platforms in which learning occurs.
● Conversational intelligent tutoring systems can adapt the form of learning resources/activities so students can interact using natural language as they would with a teacher in the classroom.

Keywords: smart learning environments, intelligent tutoring systems, context awareness, adaptive learning, personalised learning, learning analytics

Introduction

Technology has played an important role in advancing higher education, making it widely available to a broader range of students (e.g., MOOCs) and supporting learning in new ways (e.g., cognitive tools). However, the potential of technology to truly transform education can only be realised through the provision of personalised and adaptive learning experiences to students at scale. Moving beyond a simple application of technology involves exploring new contexts for learning, while addressing issues related to increasing student diversity as a result of the widening of participation in higher education (Gros, 2016; Kinshuk et al., 2016). This means embracing informal learning opportunities and adapting for the diversity of students' prior knowledge and learning skills found in these settings. Recently, the term smart learning environments has been coined to refer to such endeavours in educational technology. A smart learning environment "not only enables learners to access digital resources and interact with learning systems in any place and at any time, it also actively provides the necessary learning guidance, hints, supportive tools or learning suggestions in the right place, at the right time and in the right form" (Hwang, 2014, p. 2).
These learning environments differ from intelligent tutoring systems as they are focused on applying and embedding learning activities in the real world, as opposed to creating stand-alone environments with limited connectivity with other digital environments. According to Hwang (2014), the three main features of smart learning environments are adaptive support, adaptive interfaces, and context-awareness, which come together to create a personalised learning experience for students that enables them to meet their learning goals. Such ambitious scope comes with many challenges. These include the degree to which these environments are customisable, the ability to scale their ubiquity, and how learning-related data can be integrated (Boulanger et al., 2015). In addition, it is expected that smart learning environments will provide seamless connection (continuous service as any device or service connects), have a natural interaction with students, and be highly engaging (Zhu et al., 2016). Overcoming these challenges would result in a seamless student learning experience, in which the boundaries between formal and informal learning settings are blurred. This means formal learning initiatives would permeate settings where informal learning usually happens, and vice versa. Social media has been considered an ideal platform for such opportunistic learning (Kinshuk et al., 2016). By incorporating social media, smart learning environments would be supporting students to learn in their informal contexts rather than waiting for them to sign into a formal digital environment.
One benefit of this distributed nature is that it creates opportunities (such as being available across platforms) and removes obstacles (such as the need to switch systems) for students to connect knowledge that is being learnt in a course with new or prior knowledge that they may encounter in varied digital environments (McCombs, 2017). However, the multitude of platforms in this space, and the different characteristics of each, adds to the challenge of any tutoring system becoming ubiquitous. In this study many of these challenges were addressed in the design, development, and implementation of a smart conversational tutoring system – iCollab. iCollab combines the adaptive and personalised structure of a conversational intelligent tutoring system (Rus et al., 2013) across multiple social media platforms (e.g., Twitter, GTalk, Skype). The distributed nature of iCollab allows students to converse with the system via a diverse range of web systems anytime, anywhere, using the system most convenient to them. In addition, iCollab is proactive, initiating contact with students instead of relying on their self-regulatory skills or motivation to either seek help or interact more with the course. Moreover, iCollab predicts behaviour and adapts responses to each student by taking into consideration aspects of their personality and their context (i.e., preferred web system, time of contact, current performance, and emotional state). Students' personality traits are used to determine the format of the presented content, while students' context is used to determine when and where students can be most effectively contacted. These unique features of iCollab both position it closer to a human tutor's ability to provide personalised learning and expand its scalable and distributed potential. This study contributes to a better understanding of students' patterns of interaction and learning across multiple contexts using a smart learning environment.
Developing effective smart learning environments requires an in-depth understanding of how students interact with these tools when learning in informal settings (Gros, 2016). This is particularly relevant for social media platforms, where not much is known about how, when, and for what purpose students would interact with tutoring systems in this context (Kumar & Gruzd, 2019). This study also contributes as a proof of concept as to how an intelligent tutoring system framework can be adapted to include context-awareness and be embedded in the real world in a potentially scalable manner (Spector, 2014).

Literature review

In this section we explore work related to the elements of adaptive, context-aware, and personalised learning which were central to the development of iCollab as a smart learning environment. In doing so, we include research conducted across a number of disciplinary fields including conversational intelligent tutoring systems, context-based learning, informal learning via social media, and personalisation. Adaptive learning systems are a type of personalised learning environment that provides support customised to each student's learning needs. According to Paramythis and Loidl-Reisinger (2003, p. 182):

[A] learning environment is considered adaptive if it is capable of: monitoring the activities of its users; interpreting these on the basis of domain-specific models; inferring user requirements and preferences out of the interpreted activities, appropriately representing these in associated models; and, finally, acting upon the available knowledge on its users and the subject matter at hand, to dynamically facilitate the learning process.

An example of an adaptive learning environment is an intelligent tutoring system (ITS). The four key architectural elements of an ITS are: user interface, pedagogical model, knowledge base, and student model (Richardson, 1988).
Students interact with an ITS through the user interface, while the pedagogical model determines how the learning session is constructed based on the knowledge base (i.e., domain content) and the student model (i.e., learner profile). Conversational intelligent tutoring systems (CITS) are an extension of traditional intelligent tutoring systems that integrate natural language interfaces rather than traditional menus of options for students. CITS enable students to explore topics through conversation and discussion, constructing knowledge as they would in the classroom. A CITS uses a chat-based user interface to power the other three key architectural elements of an ITS, providing adaptive and personalised interaction over time. This is different to chatbots (or virtual assistants), which mainly include chat as a user interface and a knowledge base to be consulted, but are not able to provide adaptation and personalisation over time (Hobert, 2019). Following this distinction, only a few CITS exist at present, due to the complexity and time-consuming nature of their development (Cai et al., 2019). Well-known examples of CITS include AutoTutor (Graesser et al., 2004) and Oscar (Latham et al., 2014) (for a more detailed review of CITS see Paladines & Ramírez, 2020). Their impact on learning has been found to be as effective as human tutors (Cai et al., 2019). iCollab is similar to these previous CITS as it uses natural dialogue to answer questions and provide additional resources to students, provides personalised material based on students' personal factors, adapts this over time based on ongoing interactions with students, and has been used in real-life settings. iCollab differs from these previous CITS due to its distributed nature. Instead of being based in a highly structured environment, following the traditional self-contained ITS approach, iCollab can be accessed from any social media platform or embedded as a widget in websites, such as the LMS.
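The interplay between the four ITS elements described above can be sketched in a few lines of code. This is a Python sketch under stated assumptions: the class names, the "pick the first unseen topic" policy, and the sample content are all illustrative inventions, not the paper's (Java-based) implementation.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal stand-ins for the four ITS architectural elements.
@dataclass
class StudentModel:          # learner profile
    name: str
    mbti: str                # e.g., "ISTJ"
    topics_seen: list = field(default_factory=list)

@dataclass
class KnowledgeBase:         # domain content
    content: dict            # topic -> learning resource

class PedagogicalModel:
    """Decides what to present, given the student model and knowledge base."""
    def next_resource(self, student: StudentModel, kb: KnowledgeBase) -> str:
        # Placeholder policy: offer the first topic the student has not seen yet.
        for topic, resource in kb.content.items():
            if topic not in student.topics_seen:
                student.topics_seen.append(topic)
                return resource
        return "All topics covered - try a revision quiz."

class UserInterface:
    """The chat front end: relays the pedagogical model's choice to the student."""
    def __init__(self, pedagogy, kb):
        self.pedagogy, self.kb = pedagogy, kb
    def respond(self, student: StudentModel, message: str) -> str:
        return self.pedagogy.next_resource(student, self.kb)

kb = KnowledgeBase({"loops": "Reading: iteration in C", "arrays": "Reading: arrays in C"})
ui = UserInterface(PedagogicalModel(), kb)
alice = StudentModel("alice", "ISTJ")
print(ui.respond(alice, "help me study"))   # -> "Reading: iteration in C"
```

The point of the sketch is the direction of the dependencies: the user interface consults only the pedagogical model, which in turn reads the knowledge base and reads/updates the student model.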
This means that, unlike other CITS in which students are expected to use the system during class time, iCollab is available to assist and guide students anywhere and at any time. This allows students to have meaningful engagements with course content across a diverse range of platforms, which can open up a rich informal learning context. Informal learning can occur in social media as a self-directed learning experience or as a consequence of encouragement from formal learning courses for students to make connections with their peers and beyond (Gruzd et al., 2020). Previous research has found that social media has mainly been used to provide resources and seek information in cases of informal learning (Gruzd & Conroy, 2020; Gruzd et al., 2020). CITS have the potential to be more proactive, such as initiating contact with students when noticing they are seeking information or suggesting alternative resources. This can directly address one of the main challenges of ITS: deciding when to contact and direct students so that they receive support only when needed (D'Mello et al., 2010). Formal higher education courses have slowly started to extend their learning activities to include social media, although there is a call to intensify such initiatives (Mpungose, 2020). In this study, iCollab was used as an additional support to students available through social media, where they could access course content and formative learning activities, such as quizzes. The use of CITS in higher education therefore represents a great opportunity to connect formal and informal learning through the promotion of a smart integration with the real world through social media; that is, using an adaptive and personalised approach while being aware of the context. Context-aware learning systems extract, interpret and use contextual information to adapt the system's behaviour and responses to students (Byun & Cheverst, 2004; Oliveira, 2008; Oliveira et al., 2015).
Contextual information can include aspects of the learning design, learner profile, learner temporal information, people, place, artifact, time, and physical conditions (Zervas et al., 2011). In this study, contextual information related to the learner profile (personality, knowledge), their temporal personal information (emotional state), artifact (properties of social media), and time (availability status, when they accessed social media and the learning management system [LMS]) was used to adapt the iCollab system. These adaptations can be related to both learning resources and learning activities (Gómez et al., 2014). In iCollab, both types of adaptation occurred. Not only were learning resources selected to be delivered to students according to their interaction, but how these learning resources were delivered as learning activities was also adapted. For example, depending on the student's personality, the same learning topic could be delivered as a challenge (e.g., a quiz), a reference (e.g., a text), or an invitation to contact a peer.

Different ITSs consider different psychological dimensions in their student model to guide how they personalise their interaction with students. Commonly used personal characteristics include learning goals, learning strategies, interest, and/or personality (Xie et al., 2019). In this study we focused on students' personality. Personality is an individual's internal factor that provides consistency over time for their behaviour (Child, 1968). iCollab was created for the current study using the Myers-Briggs Type Indicator (MBTI), a personality measure commonly used in computer-based learning (Tlili et al., 2016), as a proof of concept due to the resources available at the time of development (i.e., access to personality researchers with knowledge of this model).
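The idea of delivering the same topic in a different form (challenge, reference, or peer invitation) depending on personality can be sketched as a simple dispatch. This is a hedged Python illustration: the trait-to-form rules below are invented for the example and are not the rules used in iCollab.

```python
# Hypothetical mapping from a coarse personality trait to an activity form.
# The dispatch rules below are illustrative, not taken from the paper.
ACTIVITY_FORMS = {
    "challenge": "Quiz: {topic}",
    "reference": "Reading: {topic}",
    "peer": "Why not discuss {topic} with a classmate?",
}

def activity_for(mbti: str, topic: str) -> str:
    """Choose how to deliver a topic based on an MBTI code (illustrative rules)."""
    if mbti.startswith("E"):          # extraverts: nudge toward peers
        form = "peer"
    elif "J" in mbti:                 # judging types: a structured challenge
        form = "challenge"
    else:                             # otherwise: a reference to read
        form = "reference"
    return ACTIVITY_FORMS[form].format(topic=topic)

print(activity_for("ISTJ", "loops"))   # -> "Quiz: loops"
print(activity_for("ENTP", "loops"))   # -> "Why not discuss loops with a classmate?"
```

Note that only the delivery form changes; the underlying topic from the knowledge base stays the same, which is exactly the distinction the paragraph draws between adapting resources and adapting activities.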
The MBTI is a self-report personality test to determine a person's psychological type which is drawn from the work of Carl Jung. There are 16 types of personality based on a combination of four dimensions: (1) concentration and attention (extraversion or introversion); (2) perception and information processing (sensing or intuition); (3) problem solving and decision-making processes (thinking or feeling); and (4) dealing with tasks (judging or perceiving) (Myers et al., 1985). Differences in MBTI have been found to influence how people interact in online learning environments (Bolliger & Erichsen, 2013). For example, although introverts tend to prefer the online context more than face-to-face environments and participate more in online discussion (Ellis, 2003; Harrington & Loffredo, 2010), their overall participation is still less than that of extravert students (Daughenbaugh et al., 2002). Additionally, a rich communication mode, including high levels of feedback, language variety, personal focus, and multiple cues, was found more appropriate for students who identified themselves as feelers than for intuitive students (Daft et al., 1987). These intersections between personality type and learning approach are incorporated into an ITS student model to enable the personalisation of interactions between the system and the student.

Current study

The current study had two aims. The first was to understand the role of context and how it relates to learning by examining students' behavioural patterns when using iCollab across contexts. We investigated three research questions related to this aim:

RQ1: How did students access iCollab across different contexts throughout the semester?
RQ2: What was the content of students' interaction with iCollab in these contexts?
RQ3: Who initiated interaction between students and iCollab throughout the semester?

The second aim was to evaluate the impact of iCollab on learning outcomes.
The following research question was related to this aim:

RQ4: What was the impact of using iCollab on students' final grade?

Method

An experimental research design was used to answer these questions. Participants were recruited from an undergraduate computer science course at the Catholic University of Pernambuco (Brazil), and then randomly assigned to two groups: one with access to iCollab during the semester (intervention group) and the other with access to the same resources and quizzes via the learning management system, but without the iCollab system responding to or prompting them (control group). This study was approved by the university's human research ethics committee and all participants provided informed consent to participate in this study.

Participants

The participants included 58 students enrolled in the face-to-face course Introduction to Programming I. Participants were 81% male and 19% female, within an age range of 17 to 26. From the 58 students enrolled in the course, 29 were chosen randomly and invited to use iCollab. None of the participants had prior experience with programming.

The course

The duration of the course was 18 weeks. Participants in the intervention group had access to iCollab for 12 weeks of the course (from weeks 7 to 18). The course involved two 1-hour lectures per week focused on software programming fundamentals. Students were expected to attend the weekly lectures, to read the references provided on the LMS, and to work on practical programming homework exercises. The course assessments consisted of two 2-hour closed book examinations which took place in week 9 (40%) and week 18 (60%). Both exams involved the development of C algorithms to solve various problems with different complexity levels.
iCollab was used as an additional support tool for the course that participants could access via social media and/or a widget in the LMS, to ask questions related to the course content. This allowed them to access course material whenever they were engaging with potentially informal learning situations in social media, without the need to change digital environments. The knowledge base used to provide answers and/or formative assessment (i.e., quizzes) consisted of course materials available to all students via the LMS and during lectures, including to participants who did not have access to iCollab. In cases where these sources were not sufficient to clarify students' questions, iCollab then used web search (i.e., Google) to provide students with URLs to potential answers (e.g., YouTube videos, Wikipedia pages). The course material did not have any explicit mention of, or incentive to interact with, iCollab.

Measures

All interactions between students and iCollab were stored in the format of audit logs, which included a timestamp, the platform used, and the content of each interaction. A sequence of continuous interactions between the student and iCollab was considered a session. Considering the resources available in the course, if no interaction happened for 10 minutes, that was considered the end of a session (de Barba et al., 2020). Such data treatment allowed for the identification of the number of sessions across the course and who initiated each session - the student or iCollab. In addition, students' final grade in the course was used as a measure of their learning outcome.

Procedure

Students were invited to interact with iCollab using their preferred social media service and/or LMS widget. The first suggested task for each student after connecting to the iCollab system was to submit their answers to the MBTI survey, which were then used to initialise their profiles in the student model.
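The 10-minute inactivity rule used to split audit logs into sessions can be sketched as follows. This is a Python sketch under stated assumptions: the log is represented as (timestamp, actor) pairs, and the field names in the session records are invented for the example.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=10)   # inactivity threshold from the study

def split_sessions(events):
    """Group (timestamp, actor) log entries into sessions.

    A new session starts whenever more than 10 minutes pass without
    interaction. Each session records who initiated it - the first
    actor in the session - and how many interactions it contains.
    """
    sessions = []
    for ts, actor in sorted(events):
        if not sessions or ts - sessions[-1]["end"] > SESSION_GAP:
            sessions.append({"start": ts, "end": ts, "initiator": actor, "n": 1})
        else:
            sessions[-1]["end"] = ts
            sessions[-1]["n"] += 1
    return sessions

log = [
    (datetime(2021, 3, 1, 9, 0), "student"),
    (datetime(2021, 3, 1, 9, 4), "iCollab"),
    (datetime(2021, 3, 1, 9, 30), "iCollab"),   # >10 min gap: new session
]
print(len(split_sessions(log)))                  # -> 2
print(split_sessions(log)[1]["initiator"])       # -> iCollab
```

Sessionising this way yields the two quantities the study reports on directly: the number of sessions per student and the initiator of each session.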
Students could use GTalk, Twitter, Skype, and/or a chat widget embedded in the LMS as the user interface for the iCollab system.

The iCollab system

iCollab is designed to provide a personalised learning experience for students which adapts to their preferred learning contexts and in response to their behaviour. The system deploys an intelligent conversation agent as the main interface with students in the form of an avatar that simulates a human tutor. Learners communicate with the avatar in the same way as with any other contact in the matching social media platform. The system takes this communication and applies a unique combination of advanced text mining, context, and prediction algorithms to build a model of individual learners. The resulting rule-based prediction model then drives the presentation of personalised interactions. iCollab was designed to allow individual profiles, encapsulated by the student model, to adapt over time. The pedagogical model is robust and flexible, providing advanced text-mining, pattern recognition, and prediction capabilities, which, combined with the student model, determine learning content and activities for students from the knowledge base. These features and components are shown in Figure 1.

Figure 1. The high-level view of the architectural components in the iCollab framework

What differentiates iCollab from other conversational ITSs is that the user interface can be any technology with public services/APIs on the web, including a learner's preferred social media platform. The user interface is responsible for creating the natural language communication bridge between students and the system, which subsequently provides input for the pedagogical model. Any web-based system that provides public APIs for sending (output) and receiving (input) text can play the role of the user interface in the iCollab system. This could include social media platforms (e.g.,
Facebook, Twitter, Skype) or systems such as Moodle, Blackboard, Canvas, and Yammer. In this study the platforms that were used as part of the user interface included Twitter, Skype, GTalk, and a private LMS used by the Catholic University of Pernambuco. Students were able to converse with the iCollab conversational agent, named Chico, via their preferred platform about matters related to the course. Chico was developed using the ProgramD library (Bush, 2013), a fully functional AIML bot engine implemented in Java. This agent provided scaffolding for interactions with the student (via their chosen user interface) to determine access to content within the knowledge base and subsequently to make updates to the student model. It processes students' inputs in natural language and consults the knowledge base to provide personalised and immediate responses. An image of Chico was included in his profile across each platform to make him more relatable for the students. Figure 2 shows a scenario of a student conversing with Chico across different platforms.

Figure 2. A scenario of interaction between a student and the conversational agent through the use of different user interfaces (social media)

The data from the user interface is fed into the pedagogical model which determines how the learning session will be conducted, that is, "what expertise to give, the size of the knowledge to package, and the best way to present such material in the dimensions of time and space" (Burns & Parlett, 2014, p. 5). In order to do so, the pedagogical module in iCollab makes reference to the student model (for the particular student) and the knowledge base. The pedagogical model consists of six components designed in a modular and flexible manner to allow individual components to be added, reused or replaced as necessary.
In addition to the conversational agent component (profiled above) the other components include:

● Controller: responsible for managing data flows (by communicating with all the other components) in the framework, instantiating objects and variables.

● Data collector: responsible for collecting data (text and events) from students' interactions with the iCollab CITS in heterogeneous web-based systems. In order to integrate with various web systems, iCollab needs to implement their public APIs or services. By implementing a web system's services, iCollab receives an instance of an application that is running on that system. Essentially, iCollab connects to different web-based systems as a client. There is an instance of iCollab running as an application in various web-based systems (e.g., a contact on Skype, a user on Twitter, and a conversational agent in LMSs or MOOCs). For instance, after implementing the Twitter service, iCollab gets integrated with the microblog as an application that runs with a name and avatar on Twitter. If any student mentions, sends a message to, or starts following this created application, these actions can be identified in iCollab by the conversational agent that monitors all of these inputs. The same happens with other web systems. For students, there is no difference between the applications created by iCollab and other social media users. All they see is a normal social media account with a profile and photo that can communicate with people. Students must follow or add the iCollab conversational agent as a contact in the web-based system in which they would like to access iCollab, as they would with any other contact. Once that is done, students can communicate with the conversational agent.

● Conversational agent: responsible for processing the input data and classifying inputs: text messages, commands, and/or events (Figure 3).

Figure 3.
Conversational Agent I/O flow

As shown in Figure 3, after receiving an input, the controller delivers it to the conversational agent module, which will process it and identify what type of input it is. In the case of a text input, the conversational agent will consult the student model and knowledge bases to provide quick responses to the student based on his/her MBTI personality and on that particular web system. In the case of events (e.g., a student is now online on GTalk), the conversational agent will check event rules designed in Drools and will receive instruction on what to do (e.g., get in touch with the student, do nothing, and so on). Lastly, commands will be processed by the conversational agent, which will provide feedback to students after executing them. For instance, students can ask the conversational agent to integrate two different web systems by using the command #addEnvironment. When a student enters the command "#addEnvironment Gtalk mylogin@gmail.com" in Skype, GTalk will be set as a new integrated environment for that student. Thus, it is possible for iCollab to match unique individuals based on login names in different web-based systems (regardless of their various logins or e-mails). If a student interacts with the conversational agent through GTalk, by referring to the historical database of the student the conversational agent will know that the student has already communicated with it through Skype and demonstrated interest in studying a particular content. The use of special #hashtag commands helps students to perform tasks such as: (i) undertaking subject exams or quizzes, (ii) answering the MBTI personality test, (iii) integrating distributed web systems (Figure 3).

● Text mining: responsible for processing input texts from students. The first task is to reduce the complexity of the input text to allow for sentiment analysis to occur (Liu, 2010).
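The three-way input classification and the #addEnvironment command described above can be sketched as follows. The #addEnvironment command and its arguments come from the paper; the parsing details, the "EVENT:" marker, and the environment registry are illustrative assumptions in this Python sketch.

```python
import re

# Input classification for the conversational agent: text, command, or event.
COMMAND_RE = re.compile(r"^#(\w+)(?:\s+(.*))?$")

def classify(raw: str) -> str:
    """Sort a raw input into the three categories shown in Figure 3."""
    if raw.startswith("#"):
        return "command"
    if raw.startswith("EVENT:"):      # assumed marker for platform events
        return "event"
    return "text"

def handle_add_environment(args: str, environments: dict) -> str:
    """#addEnvironment <platform> <login>: link another platform to this student."""
    platform, login = args.split(maxsplit=1)
    environments[platform.lower()] = login
    return f"{platform} linked to {login}."

environments = {"skype": "student42"}
msg = "#addEnvironment Gtalk mylogin@gmail.com"
if classify(msg) == "command":
    name, args = COMMAND_RE.match(msg).groups()
    if name == "addEnvironment":
        print(handle_add_environment(args, environments))
# environments now maps both "skype" and "gtalk" to the same student,
# which is how cross-platform identity matching becomes possible.
```

The final comment captures the paper's point: once two platform logins map to one student record, interaction history can follow the student across web systems.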
The text mining component makes use of natural language processing (NLP) techniques to reduce the complexity of the input text and to support efficient data manipulation and representation. iCollab uses a Multinomial Naive Bayes model (Manning et al., 2008; Pak & Paroubek, 2010; Pang et al., 2002) and represents input texts as bags of words, since the frequencies of words do not play a key role in our classification. This method treats each word completely separately from any other word. The resulting low-dimension output vector is subsequently used for sentiment analysis to determine the satisfaction level of students with the content provided by the conversational agent, which is based on the current state of the student model. This is represented in this paper as students' emotions, which can be positive or negative. This component supports MBTI profile updates (as detailed below).

● Context awareness: responsible for maintaining real-time contextual elements used to characterise an entity in a given domain (Oliveira, 2013). The context of the interaction between the student and the iCollab CITS encapsulates elements from the student model including: the web-based system used to communicate with the iCollab CITS, the date and time of the interaction, the student's personality profile, and curriculum tasks/activities (from the knowledge base). The context awareness component receives original text inputs from students (messages with headers, add-ons, and extra data) or event notifications (e.g., the student was online/offline, the student finished a quiz, the student completed the personality survey) from the user interface via the controller. This component cleans and processes input texts and event notifications in order to format a corresponding vector of contextual elements (derived from each input text or notification) that is then passed to the prediction component.
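The text mining step - bag-of-words features fed into a Multinomial Naive Bayes classifier for positive/negative sentiment - can be sketched from scratch. This Python sketch uses toy training data and whitespace tokenisation as assumptions; the study's actual corpus, preprocessing, and (Java) implementation are not reproduced here.

```python
from collections import Counter
import math

# A from-scratch Multinomial Naive Bayes sentiment sketch over bag-of-words
# features, with Laplace (add-one) smoothing.
def train(docs):
    """docs: list of (text, label). Returns per-class word counts and doc counts."""
    counts = {"pos": Counter(), "neg": Counter()}
    class_docs = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        class_docs[label] += 1
    return counts, class_docs

def predict(text, counts, class_docs):
    vocab = set(counts["pos"]) | set(counts["neg"])
    scores = {}
    for label in counts:
        # log prior + sum of smoothed log likelihoods, one term per word
        score = math.log(class_docs[label] / sum(class_docs.values()))
        total = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

docs = [
    ("thanks this really helped", "pos"),
    ("great explanation thanks", "pos"),
    ("this is confusing and wrong", "neg"),
    ("still confused did not help", "neg"),
]
counts, class_docs = train(docs)
print(predict("thanks that helped", counts, class_docs))   # -> pos
```

Each word contributes an independent likelihood term, which is exactly the "each word treated separately" property the paragraph describes.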
● Prediction: a rule-based model which receives input from the text mining and context awareness components. It is responsible for updating the student's MBTI profile. Personalised recommendations are supported in iCollab by the MBTI (Myers et al., 1985). Depending on the current MBTI profile of the student in the student model and the corresponding contextual element vector, the prediction component makes a specific recommendation that is subsequently passed to the student through the conversational agent. This recommendation, consisting of specific learning activities or tasks appropriate to the student's current MBTI profile, is generated based on a set of rules designed and written in JBoss Drools (Browne, 2009), following an IF-THEN format (Figure 4).

Figure 4. Example of decision rules developed for managing personalisation in iCollab

The rule set was generated by the iCollab developer, who was also a subject coordinator with content and design knowledge, in collaboration with personality researchers from the School of Psychology Sciences, Federal University of Pernambuco, Brazil. The researchers provided a summary of each personality type to the developer, who used them as a guide to create the MBTI rules for iCollab considering the subject content available (knowledge base and quizzes) and the iCollab features. One example of a recommendation provided by the involved researchers is shown below.

MBTI Profile: ISTJ
Personality: Perform tasks quickly and on time; they are skilled with details and careful when managing them; responsible; they honour commitments and finish what has been started; they prefer to work alone; they like orderly, task-oriented work that provides privacy to work around the clock; reserved and rational, logical and analytical.
How to personalise content for them within iCollab? As they like to work alone, they should be kept quiet, without referring them to other users.
The ideal would be for the conversational agent to inform them about an existing item in the discussion board which is linked to their current studies (where perhaps they would find extra motivation to answer questions about their current task or to participate in discussions with other users). Note. The conversational agent can also encourage ISTJ students by recommending tasks that they can perform in a short time, alone, and that obtain results quickly (context-dependent suggestion). By studying the recommendations from the psychologists and understanding the subject's intended learning outcomes and content, the developer interpreted and translated the recommendations into rules. For instance, “the conversational agent can also encourage ISTJ students by recommending tasks that they can perform in a short time, alone, and that obtain results quickly (context-dependent suggestion)” could be coded as a rule in which the conversational agent would ask the student to write a short program in C to identify all the prime numbers between 1 and 100, for example, depending on the context of the ISTJ student (quiz scores, topics of interest). The rule would then be validated with the researchers over a few meetings and finally translated to Drools.

Rule R1 “CommunicationGap”
IF the student has not communicated with the conversational agent within the last two weeks
AND the MBTI profile equals ISTJ
THEN send the student a small challenge from a learning topic (identified based on previous interactions with the conversational agent via the context awareness component).

A few challenges were classified in terms of complexity and made available in the system database to be randomly selected by the conversational agent. Each rule was also correlated with events captured from the web systems integrated with iCollab.
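The IF-THEN structure of rules such as R1 can be illustrated with a short sketch. The following is not the Drools implementation used in iCollab but a hypothetical Python rendering: the field names (mbti, last_contact, recent_topics) and the challenge bank are invented for illustration.

```python
import datetime as dt
import random

def communication_gap_rule(student, challenges, now):
    """Hypothetical rendering of rule R1 ("CommunicationGap"):
    IF the student has not communicated with the agent within the last
    two weeks AND the MBTI profile equals ISTJ, THEN return a small
    challenge from a topic identified by the context awareness
    component; otherwise return None (the rule does not fire)."""
    gap = now - student["last_contact"]
    if gap >= dt.timedelta(weeks=2) and student["mbti"] == "ISTJ":
        topic = student["recent_topics"][-1]      # from context awareness
        return random.choice(challenges[topic])   # challenge bank in the DB
    return None

# Invented example data.
challenges = {"pointers": ["Write a C function that swaps two ints using pointers."]}
student = {"mbti": "ISTJ",
           "last_contact": dt.datetime(2014, 4, 1),
           "recent_topics": ["pointers"]}
print(communication_gap_rule(student, challenges, dt.datetime(2014, 4, 20)))
```

In the actual system the conditions live in Drools rule files and the THEN branch hands the selected challenge to the conversational agent for delivery rather than returning it.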
iCollab monitored events such as: excessive periods of time offline, students adding the iCollab conversational agent as a contact, students finalising a quiz, and new student responses to the personality test. In order to keep students’ MBTI profiles updated in the student model component (explained below), and thereby provide more accurate personalised content to each student, the prediction component considers the historical sentiment analysis results (positiveness and negativeness levels) from each student as part of a sequential data analysis. Depending on the probability of how happy or sad the student has been over the last sessions, this component can suggest answers from the student's MBTI profile or from a different one, and may update the student's MBTI profile to a similar or completely different personality group and type (Figure 5). Of the 50 rules written as part of this component, 18 were exclusively designed to manage students’ MBTI updates, while the other 32 were used to provide personalised recommendations to students. Figure 5. Prediction component strategy to update MBTI profiles Two examples of rules designed to update students' MBTI profiles are listed below:

Rule R34
IF the student's happiness probability in the last 5 recommendations is greater than 0.7
THEN keep the student's MBTI profile.

Rule R41
IF the student's happiness probability in the last 5 recommendations is smaller than 0.5 AND the MBTI equals ENTP
THEN perform the next 3 recommendations from the INFP profile (another MBTI, specified in Drools, from a different group) and reassess happiness scores (Figure 5).

A scenario presenting a complete interaction flow in iCollab is detailed in Figure 6. The scenario shows what happens in iCollab when a text message from an ESTP student is received by the conversational agent.
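The profile-update logic of rules R34 and R41 can be sketched as follows. This is a hedged illustration, not iCollab's Drools code: the happiness probability is taken here as the mean of the sentiment-derived scores of the last five recommendations, and only the ENTP-to-INFP switch named in R41 is encoded.

```python
def next_profile(mbti, happiness_last5):
    """Sketch of rules R34/R41. happiness_last5 holds the sentiment-derived
    happiness scores (0..1) of the student's last five recommendations;
    their mean stands in for the 'happiness probability'."""
    p = sum(happiness_last5) / len(happiness_last5)
    if p > 0.7:
        return mbti                   # R34: keep the current MBTI profile
    if p < 0.5 and mbti == "ENTP":
        return "INFP"                 # R41: trial recommendations from INFP
    return mbti                       # no matching update rule fires

print(next_profile("ENTP", [0.7, 0.8, 0.9, 0.8, 0.7]))  # -> ENTP (kept)
print(next_profile("ENTP", [0.2, 0.4, 0.3, 0.5, 0.4]))  # -> INFP
```

In iCollab the INFP recommendations would only be trialled for the next three interactions before the happiness scores are reassessed, as Figure 5 shows; that feedback loop is omitted from this sketch.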
The knowledge base component is responsible for managing static repositories in the CITS (e.g., reference material, lecture notes, quiz questions). The repositories consist of tagged content for each personality profile and the corresponding web-based system where the public API services have been implemented. Given the limited number of characters on Twitter, it was necessary to separate Twitter records from those of other social media networks. The repositories were developed in Artificial Intelligence Markup Language (AIML), an XML dialect for creating natural language software agents (Mitrovic, 2003). AIML is used in the CITS as the mechanism to provide learning content and general responses to students based on their student model. When the conversational agent receives a text message query from any student using the iCollab system, the response is extracted from the static AIML files created for each MBTI profile. This means, for example, that for students of one personality profile who ask about pointers in the C programming language, the knowledge base will reply: “Pointers ‘point’ to locations in memory. Think of a row of safety deposit boxes of various sizes at a local bank. Each safety deposit box will have a number associated with it so that you can quickly look it up. More text ...”. For another personality profile, sample code will be provided instead of text messages. The student model maintains current student profiles based on their personality, which are used to generate recommendations, lessons, problems, feedback and guidance in a personalised manner. To populate the student model, new student users are asked to complete the MBTI profile survey (Myers et al., 1985). If they decide not to complete the survey, they are assigned a default personality type. This default (and generic) MBTI profile combines two different MBTI profiles (INTP and ESTP) to initially populate their student model.
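The per-profile lookup performed by the knowledge base, and the fallback to a default profile when a student skips the survey, can be sketched as below. In iCollab the repositories are AIML files, one per MBTI profile; here a plain dictionary stands in for them, the profile keys are illustrative, and the entries are abbreviated versions of the examples in the text.

```python
# Dictionary stand-in for the per-profile AIML repositories: the same
# query about C pointers yields prose for one profile and sample code
# for another. Keys and entries are illustrative only.
KNOWLEDGE_BASE = {
    ("ISTJ", "pointers"):
        "Pointers 'point' to locations in memory. Think of a row of "
        "safety deposit boxes at a bank, each with a number on it.",
    ("ESTP", "pointers"):
        "int x = 5;\nint *p = &x;  /* p holds the address of x */",
}

DEFAULT_PROFILE = "ESTP"  # stand-in for the combined INTP/ESTP default

def answer(profile, topic):
    """Return the response tagged for this profile, falling back to the
    default profile's repository when no tagged entry exists."""
    entry = KNOWLEDGE_BASE.get((profile, topic))
    if entry is None:
        entry = KNOWLEDGE_BASE.get((DEFAULT_PROFILE, topic), "No entry found.")
    return entry

print(answer("ISTJ", "pointers")[:8])  # -> Pointers
```

The dictionary lookup mirrors the behaviour described above; the real AIML files additionally pattern-match free-form student queries rather than exact topic keys.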
Regardless of how an MBTI profile is assigned to a student, it is continually reviewed and potentially updated in response to students’ behaviour in iCollab and their interactions with the conversational agent. Over time, iCollab processes text input from students to learn/understand their behaviour while studying (as explained above in the pedagogical model) and updates the personality type if required. As presented in Figure 6, iCollab was designed to have efficient communication between the modules in order to avoid system overheads, thus enabling scalability. iCollab can be used in different scenarios, either using a single type of data collection or many others from third parties, all integrated through the CITS. Figure 6. Complete interaction flow diagram in iCollab after receiving a text from an ESTP student One additional feature supported by iCollab was designed to suggest that two students connect and collaborate – hence the name of the system (intelligent collaboration, or iCollab). As with all the other recommendations, this would be based on students’ behaviours, level of knowledge and personality type. Context-based and personalised recommendations for peer collaboration would happen mostly after quizzes taken with the support of the conversational agent. Students would be recommended to work together depending on their MBTI personalities, scores (high/low) on the quizzes and preferred online platform. This feature, however, is not the focus of this paper. Data analysis The data was analysed using an iterative analysis method, which moves from a general high-level analysis to more specific ones (Kennedy & Judd, 2003).
In the current study, the examination of the audit logs moved from a general analysis of usage patterns to more specific analyses of the interactions relevant to the research questions: overall access to iCollab, access to iCollab across different contexts, content of students’ interaction with iCollab in these contexts, and session initiation between students and iCollab. A series of plots was created to examine the distribution of the number of sessions, or interactions, between students and iCollab across the semester for the first three research questions, which revealed patterns of session behaviour across the course. A t-test was conducted to compare mean differences in final grade between the intervention and control groups. Results The results are presented in five sections: (1) students’ overall interaction with iCollab, (2) access to iCollab across different contexts, (3) content of students’ interaction with iCollab in these contexts, (4) session initiation between students and iCollab, and (5) group comparison on final grade. Students’ overall interaction with iCollab Students interacted with iCollab across a period of 3 months. Overall there were 518 sessions (n) across the course, with a total interaction time of 3011 minutes (t) between students and iCollab (M = 5.81, SD = 4.67). During the first month there were a total of 226 sessions and the total time of interaction was 1287 minutes (M = 5.69, SD = 4.75). During the second month there were 137 sessions and a total of 749 minutes (M = 5.47, SD = 4.36). During the third month there were a total of 155 sessions and the total time of interaction was 975 minutes (M = 6.29, SD = 4.43). Figure 7 presents a distribution of these interactions. Figure 7. Number of interactions between students and iCollab across the course In the first month of the case study there was a peak of sessions on April 15.
On this date the lecturer demonstrated iCollab to students during a tutorial. Students then interacted with iCollab during the 40 minutes of this tutorial. More than 80 sessions were registered on that day. The lecturer had made iCollab available on the LMS on April 1, allowing students to interact with it prior to its official launch. Apart from the students’ engagement in the first month of use of iCollab, their interest in communicating with the intelligent agent waned after their first subject exam on April 25. The number of interactions unsurprisingly increased again in the lead-up to their second exam on June 2 and their final exam on June 17. These findings show that even though there was a decline in the use of iCollab across the semester, students continued to access it throughout the course. How did students access iCollab across contexts throughout the semester? Many students opted to connect to iCollab using different platforms. Nine students chose to integrate all of the supported platforms during the research period. The other 12 students elected to use two of the social media platforms to connect to iCollab. Initially, some students experienced difficulties connecting to iCollab. The lecturer provided the appropriate support to solve these difficulties. He identified that the main reason for the difficulties was that these students were unfamiliar with writing computational commands to integrate iCollab with the different platforms. Skype represented the majority of all interactions (50.97%), followed by Gtalk (21.43%) and Twitter (15.44%). Only 12.16% of the interactions were carried out through the dedicated widget embedded in the LMS. Figure 8 presents these distributions across the three months of the course. During the in-class demonstration of iCollab, most sessions were conducted using Skype, which was the platform chosen by the lecturer to be used on that day. However, on that same day, some students started to use iCollab via Twitter.
As the course continued, different platforms were used at different points. Overall, students used Skype and Twitter frequently across the course, while the use of Gtalk increased towards the end of the course. The LMS widget was more frequently used at the beginning of the course, with just two sessions of usage in June. These were the days the lecturer provided new content on the LMS to help students prepare for their final assignment. These findings show that even though students initially used the platform suggested by the lecturer, they also used their platform of choice across the semester, with the LMS being their least used platform. Figure 8. Number of interactions between students and iCollab across the course per platform The next iteration of analysis investigated the use of the different platforms across the day over the course. This was important information used by iCollab to predict when and where to contact students. Gtalk was frequently accessed at night, with peaks during late hours towards the end of the trial period. Skype was mostly accessed in the evening during the first month, especially around 6 to 7pm, which could be explained by the time of the in-class iCollab demonstration. During the following months access via Skype became more dispersed, with the peak shifting to the middle of the day between 12 and 2pm, and around 5 and 6pm. Twitter had the most spread-out pattern of access throughout the day, with some peaks of access at 2am during the first month, at 11am and 4pm during the second month, and at both 2am and 4pm during the third month. The LMS widget had the greatest number of accesses at dawn and during the morning of the first month, with fewer accesses during the second and third months at night. Figure 9 presents these data, with stronger shading representing higher frequency of sessions.
These findings show that students had different patterns of access to iCollab across platforms throughout the course, indicating the relevance of this information when making decisions on when and where to initiate contact with students. Figure 9. iCollab’s most accessed hours per platform per month What was the content of students’ interaction with iCollab throughout the semester? A closer inspection of the content of the conversations between students and iCollab revealed that there were three main categories: course content, social interaction, and CITS functionality. Specifically: (i) 39.6% of the interactions were related to issues surrounding the virtual course (i.e., questions about content, reference textbooks, external web links, e-quiz, and exams); (ii) 54.2% of the interactions were related to social exchanges (i.e., questions about the conversational agent, and students expressing their particular tastes and opinions); and (iii) 6.2% of the interactions related to queries about setting up iCollab and how it worked (i.e., questions about how the web technologies were integrated and how the MBTI survey was used to generate recommendations). Figure 10 presents a distribution of the content of students’ interaction with iCollab across the course considering these three categories. Findings show that setup interactions were higher at the beginning of the course, as expected. Surprisingly, interactions for social purposes were higher than for content purposes throughout the course. Figure 10. Overall distribution of the content of students’ interaction with iCollab per month In the next iteration of analysis, we examined whether there were differences in the content of interactions across platforms (Figure 11).
Findings show that Skype was mainly used to initially set up iCollab (probably during class), although setup interactions with iCollab can also be found in all the other platforms. With the exception of the LMS, which had a drastic decrease in interactions with students, all other platforms had a balanced type of content of interaction across the course between social and subject content (with a slightly higher number of social interactions, as expected from the results in the previous sub-section). The only exception was the second month of interactions with Gtalk, which had a much higher number of social interactions than subject content ones. These findings suggest that students' interaction with iCollab for social and content purposes was similar across platforms and across the course. Figure 11. Distribution of the content of students’ interaction with iCollab per month per platform Who initiated interaction between students and iCollab throughout the semester? Of the total of 518 sessions across the course, 99 were initiated by iCollab, while 419 were initiated by students. Sessions initiated by iCollab had an average duration of 6.04 minutes, while sessions initiated by students had an average duration of 5.76 minutes. Figure 12 presents a distribution of sessions across the course according to who initiated each session. Overall, there was an increase in the number of sessions initiated by the agent over time. In the first month, the agent initiated a total of 12 sessions, followed by 37 sessions during the second month and 50 sessions during the third month. On the other hand, there was initially a decrease in the number of sessions initiated by students, from 214 in the first month to 100 in the second month. That number then remained almost the same in the third month, with 105 sessions initiated by students. Across the course, there were in total 5 days on which only the agent initiated sessions.
Figure 12. Distribution of who initiated sessions per month When this distribution was considered separately per platform (Figure 13), results revealed that in the first month all sessions were initiated by students in Gtalk and Skype. In the LMS and Twitter, on the other hand, there were already a few sessions initiated by iCollab in the first month. Gtalk was the only platform where there was an increase from the second to the third month in student-initiated sessions, while these decreased in all other platforms. Interestingly, there were no sessions initiated by iCollab in the LMS in the second and third months. These findings are related to the adaptability of iCollab in relation to its context-aware features, as it would choose to contact students on platforms where they were more likely to be online and respond to iCollab. Figure 13. Distribution of who initiated sessions per month per platform In relation to the content of the sessions, iCollab initiated more sessions related to social content than setup or subject content (Figure 14). The number of subject content sessions initiated by iCollab increased across the course. A similar pattern was found for social sessions. As the iCollab conversational agent considers historical data when starting to chat with students, this can be related to the fact that over 54% of the interactions were related to 'social' exchanges (i.e., questions about the conversational agent, and students expressing their particular tastes and opinions), as discussed before. The iCollab conversational agent used this approach to stimulate engagement with students. Figure 14. Distribution of who initiated sessions per month per content What was the impact of using iCollab on students’ final grade? Overall, students’ average final grade in the course was 4.39 (SD = 2.81) out of 5.
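The group comparison in this study is a two-sample t-test; the fractional degrees of freedom reported suggest an unequal-variances (Welch) variant, although the paper does not name one, so that is an assumption here. The computation can be sketched from scratch with the standard library; the grades below are invented for illustration and are NOT the study's data.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom, computed without assuming equal group variances."""
    va, vb = variance(a), variance(b)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb                      # squared standard error
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented illustrative grades out of 5, NOT the study's data.
icollab = [4.8, 5.0, 4.2, 4.9, 5.0, 4.6]
control = [3.9, 4.1, 3.5, 4.4, 3.8, 4.0]
t, df = welch_t(icollab, control)
print(round(t, 2), round(df, 1))
```

The statistic would then be compared against the t distribution with the computed (fractional) degrees of freedom to obtain the p value.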
A t-test was conducted to compare students’ final grades between the iCollab and the control groups. Although there was no significant difference between the groups (t(58.87) = 1.60, p = 0.115), Figure 15 shows that participants in the iCollab group (M = 4.99, SD = 2.81) descriptively performed better than participants in the control group (M = 3.87, SD = 2.73). Figure 15. Final grade distribution for iCollab and control groups Discussion The overall number of interactions with iCollab revealed a peak at the start of its use, potentially related to its setup and novelty effect, which then normalised during the second and third months. The initial high use of one of the platforms, Skype, due to it being the one used by the teacher in the classroom, was soon distributed over the other two social media platforms: Gtalk and Twitter. Interestingly, when given the choice, students moved out of the LMS to interact with iCollab. This suggests an acceptance of blurring their formal and informal learning contexts, and supports previous research on the benefits of embracing informal learning contexts as part of formal higher education (Kinshuk et al., 2016). The large amount of social interaction students had with iCollab was surprising. In the Gtalk platform, for example, most of the interactions in the second month were social ones, mainly initiated by the students. This finding suggests that there may be a place for smart learning environments to foster a sense of belonging and community with students when used in social media contexts.
This was most likely related to iCollab being able to provide a natural and engaging interaction with students through a seamless connection (Zhu et al., 2016), in addition to the use of an avatar (Rus et al., 2013), supporting previous research suggesting that when using conversational tutoring systems “learners interpret their relation with the computer as a social one involving reciprocal communication” (Moreno et al., 2001, p. 179). This finding was shared across all social media platforms, as they all had a similar distribution of type of content across the course. Such an approach helps address a common problem in ITS identified as “the cold start problem” (Pian et al., 2020, p. 376), in which students do not interact with the system due to its complex nature. Although sessions initiated by iCollab increased over the course, the majority of sessions were initiated by students across the course. These findings suggest that the adaptive nature of iCollab was successful in creating an interaction that was engaging enough for students, at the right time and place, to stimulate them to continue initiating dialogues across the course. However, considering the content of these interactions (mainly social rather than subject content) and the outcomes in relation to final grades, iCollab seemed to be more effective on the social aspect than in relation to the learning outcomes. This may be related to how we only considered personality as the students’ psychological characteristic for personalisation. Current literature in educational psychology suggests that other personal factors, such as the use of self-regulated learning strategies, may be more effective in impacting academic achievement, particularly in online learning environments (Broadbent & Poon, 2015).
However, iCollab users performed better than the control group, which could be explained by an indirect effect of personality-based personalisation on final grades through the use of self-regulated learning strategies (Komarraju et al., 2011; Stajkovic et al., 2018). Future studies using conversational adaptive systems would benefit from extending their student model to include both personality (for an engaging experience) and self-regulated learning strategies (for a potentially more effective learning experience) to personalise their interaction with students (Stajkovic et al., 2018). We identify six main limitations in our study. First, measures of learning outcomes were not the focus of this research; however, it is important to establish whether the iCollab proof of concept effectively contributes to students’ learning gain. Future research could include measures of learning gain with pre- and post-tests, and also measure the impact of iCollab on other aspects of learning, such as social-emotional and relational aspects (Krämer & Bente, 2010). Second, the sample size of the study was relatively small. A larger scale study with more variations and changes in students' contextual and personal factors would provide a richer testing ground for the iCollab framework. Third, the study was applied in only one course. Future studies would benefit from including two or more domains to expand iCollab's capability to deal with more than one knowledge base simultaneously. Fourth, the personality model used in this study, the MBTI, has been criticised for its lack of validity and reliability (e.g., McCrae & Costa, 1989). We recommend the use of other personality models as part of future learner models, such as the Big Five (Poropat, 2009). This limitation does not invalidate the current study, as previous research has found correlations between the MBTI and the Big Five (e.g., Furnham, 1996), but it does emphasise the need for replication studies.
Fifth, the rules created to guide the personality adaptation were not evidence-based, due to the lack of available research at the time of development linking the MBTI with online learning. The creation of rules based on personality in combination with learners’ activities, considering the context to inform the system's personalisation and adaptation (such as the use of self-regulated learning strategies as suggested in the previous paragraph), can help CITS overcome the challenge of falling into a pigeon-hole approach that bluntly categorises students (Kirschner, 2017). Moreover, the use of open learner models, in which students have access to the system’s learner model, can also contribute to a more inclusive and ethical use of artificial intelligence in education (Bull & Kay, 2007). Thus, more research is needed on the creation of rules (or frameworks to inform the creation of rules) to be used in artificial intelligence initiatives in education, such as the research recently conducted by Sedrakyan et al. (2020), and on how students can be involved when using CITS in real-life settings. The findings of this research have several implications for educational practice. Educators could use existing online platforms to communicate and promote a sense of belonging among students. This is particularly relevant to fully online courses, where face-to-face interaction with the teacher and other students is often less accessible. In addition, the relation between students' online behaviour and their personalities may be used as a starting point for personalised feedback on students’ academic performance, to improve their learning experiences. Moreover, this research has implications for the design and development of smart learning environments as it combines CITS and heterogeneous online systems to promote adaptive, contextualised and personalised learning outside the conventional LMS.
Conclusion This paper presented an adaptive, personalised, and context-aware smart learning environment combining key features of a conversational intelligent tutoring system with social media platforms. Results showed that, once given the option, students preferred to interact with the smart learning environment outside the LMS, while they were learning informally on social media platforms. Students also had a high level of interaction with the smart learning environment for social purposes, suggesting this combination of CITS and social media platforms, using personalisation based on personality factors, may be a good option to foster a sense of belonging. Moreover, the findings suggest that personalisation on the basis of personality may impact students’ acceptance and usage of a smart learning environment, but not necessarily impact their learning outcomes. Future studies would benefit from combining both personality and other personal factors, such as self-regulated learning, as part of their student model. Acknowledgements The authors would like to thank Dr Patricia Tedesco (CIn/UFPE) and Professor Michael Kirley (CIS/UOM) for their comments on an early version of the paper. References Bolliger, D., & Erichsen, E. (2013). Student satisfaction with blended and online courses based on personality type. Canadian Journal of Learning and Technology/La revue canadienne de l’apprentissage et de la technologie, 39(1). https://www.learntechlib.org/p/178006/ Boulanger, D., Seanosky, J., Kumar, V., Kinshuk, Panneerselvam, K., & Somasundaram, T. S. (2015). Smart learning analytics. In G. Chen, V. Kumar, Kinshuk, R. Huang, & S. C. Kong (Eds.), Emerging issues in smart learning (pp. 289–296). Springer. Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education, 27, 1-13. https://doi.org/10.1016/j.iheduc.2015.04.007 Browne, P. (2009).
JBoss Drools business rules. Packt Publishing Ltd. Bull, S., & Kay, J. (2007). Student models that invite the learner in: The SMILI:() open learner modelling framework. International Journal of Artificial Intelligence in Education, 17(2), 89-120. https://content.iospress.com/articles/international-journal-of-artificial-intelligence-in-education/jai17-2-02 Burns, H., & Parlett, J. W. (2014). The evolution of intelligent tutoring systems: Dimensions of design. In H. Burns, C. A. Luckhardt, J. W. Parlett, & C. L. Redfield (Eds.), Intelligent tutoring systems: Evolutions in design (pp. 1–13). Psychology Press. Bush, N. (2013, January 27). Programd. https://code.google.com/archive/p/ainotebook/wikis/programd_aiml.wiki Byun, H. E., & Cheverst, K. (2004). Utilizing context history to provide dynamic adaptations. Applied Artificial Intelligence, 18(6), 533-548. https://doi.org/10.1080/08839510490462894 Cai, Z., Hu, X., & Graesser, A. C. (2019). Authoring conversational intelligent tutoring systems. In R. Sottilare, & J. Schwarz (Eds.), Adaptive instructional systems. HCII 2019. Lecture notes in computer science (Vol. 11597). Springer. https://doi.org/10.1007/978-3-030-22341-0_46 Child, I. L. (1968). Personality in culture. In E. F. Borgatta, & W. W. Lambert (Eds.), Handbook of personality theory and research. Rand McNally. Daft, R. L., Lengel, R. H., & Trevino, L. K. (1987). Message equivocality, media selection, and manager performance: Implications for information systems. MIS Quarterly, 11(3), 355-366.
https://doi.org/10.2307/248682 Daughenbaugh, R., Ensminger, D., Frederick, L., & Surry, D. (2002, April 7-9). Does personality type effect online versus in-class course satisfaction [Paper presentation]. Seventh Annual Mid-South Instructional Technology Conference on Teaching, Learning, & Technology. https://eric.ed.gov/?id=ED464631 de Barba, P. G., Malekian, D., Oliveira, E. A., Bailey, J., Ryan, T., & Kennedy, G. (2020). The importance and meaning of session behaviour in a MOOC. Computers & Education, 146. https://doi.org/10.1016/j.compedu.2019.103772 D’Mello, S., Lehman, B., Sullins, J., Daigle, R., Combs, R., Vogt, K., Perkins, L., & Graesser, A. (2010, June 14-18). A time for emoting: When affect-sensitivity is and isn’t effective at promoting deep learning [Paper presentation]. International Conference on Intelligent Tutoring Systems (pp. 245-254). Springer. https://link.springer.com/chapter/10.1007/978-3-642-13388-6_29 Ellis, A. (2003). Personality type and participation in networked learning environments. Educational Media International, 40(1-2), 101-114. https://doi.org/10.1080/0952398032000092152 Furnham, A. (1996). The big five versus the big four: The relationship between the Myers-Briggs Type Indicator (MBTI) and NEO-PI five factor model of personality. Personality and Individual Differences, 21(2), 303-307. https://doi.org/10.1016/0191-8869(96)00033-5 Gómez, S., Zervas, P., Sampson, D. G., & Fabregat, R. (2014). Context-aware adaptive and personalized mobile learning delivery supported by UoLmP. Journal of King Saud University-Computer and Information Sciences, 26(1), 47-61. https://doi.org/10.1016/j.jksuci.2013.10.008 Graesser, A. C., Lu, S., Jackson, G. T., Mitchell, H. H., Ventura, M., Olney, A., & Louwerse, M. M. (2004). AutoTutor: A tutor with dialogue in natural language. Behavior Research Methods, Instruments, & Computers, 36(2), 180-192. https://doi.org/10.3758/BF03195563 Gros, B. (2016). The design of smart educational environments.
Smart Learning Environments, 3(1), 1- 11. https://doi.org/10.1186/s40561-016-0039-x Gruzd, A., & Conroy, N. (2020). Learning analytics dashboard for teaching with Twitter. Proceedings of the 53rd Hawaii International Conference on System Sciences. http://hdl.handle.net/10125/64072 Gruzd, A., Kumar, P., Abul-Fottouh, D., & Haythornthwaite, C. (2020). Coding and classifying knowledge exchange on social media: A comparative analysis of the# Twitterstorians and AskHistorians communities. Computer Supported Cooperative Work, 29(6), 629-656. https://link.springer.com/article/10.1007/s10606-020-09376-y Harrington, R., & Loffredo, D. A. (2010). MBTI personality type and other factors that relate to preference for online versus face-to-face instruction. The Internet and Higher Education, 13(1-2), 89- 95. https://doi.org/10.1016/j.iheduc.2009.11.006 Hobert, S. (2019, December 15-18). Say hello to ‘Coding Tutor’! Design and evaluation of a chatbot- based learning system supporting students to learn to program [Paper presentation]. 40th International Conference on Information Systems, Munich. https://aisel.aisnet.org/icis2019/learning_environ/learning_environ/9/ Hwang, G. J. (2014). Definition, framework and research issues of smart learning environments-A context-aware ubiquitous learning perspective. Smart Learning Environments, 1(1), 4. https://doi.org/10.1186/s40561-014-0004-5 Kennedy, G. E., & Judd, T. S. (2003). Iterative analysis and interpretation of audit trail data. Proceedings of ASCILITE Conference, Adelaide, (Vol. 1, pp. 273-282). https://www.ascilite.org/conferences/adelaide03/docs/pdf/273.pdf Kinshuk, Chen, N. S., Cheng, I. L., & Chew, S. W. (2016). Evolution is not enough: Revolutionizing current learning environments to smart learning environments. International Journal of Artificial Intelligence in Education, 26(2), 561-581. https://doi.org/10.1007/s40593-016-0108-x Kirschner, P. A. (2017). Stop propagating the learning styles myth. 
Computers & Education, 106, 166- 171. https://doi.org/10.1016/j.compedu.2016.12.006 https://doi.org/10.1007/978-3-030-22341-0_46 https://doi.org/10.2307/248682 https://eric.ed.gov/?id=ED464631 https://doi.org/10.1016/j.compedu.2019.103772 https://link.springer.com/chapter/10.1007/978-3-642-13388-6_29 https://doi.org/10.1080/0952398032000092152 https://doi.org/10.1016/0191-8869(96)00033-5 https://doi.org/10.1016/j.jksuci.2013.10.008 https://doi.org/10.3758/BF03195563 https://doi.org/10.1186/s40561-016-0039-x http://hdl.handle.net/10125/64072 https://link.springer.com/article/10.1007/s10606-020-09376-y https://doi.org/10.1016/j.iheduc.2009.11.006 https://aisel.aisnet.org/icis2019/learning_environ/learning_environ/9/ https://doi.org/10.1186/s40561-014-0004-5 https://www.ascilite.org/conferences/adelaide03/docs/pdf/273.pdf https://doi.org/10.1007/s40593-016-0108-x https://doi.org/10.1016/j.compedu.2016.12.006 Australasian Journal of Educational Technology, 2021, 37(2). 22 Komarraju, M., Karau, S. J., Schmeck, R. R., & Avdic, A. (2011). The Big Five personality traits, learning styles, and academic achievement. Personality and Individual Differences, 51(4), 472-477. https://doi.org/10.1016/j.paid.2011.04.019 Krämer, N. C., & Bente, G. (2010). Personalizing e-learning. The social effects of pedagogical agents. Educational Psychology Review, 22(1), 71-87. https://doi.org/10.1007/s10648-010-9123-x Kumar, P., & Gruzd, A. (2019). Social media for informal learning: A case of# Twitterstorians. Proceedings of the 52nd Hawaii International Conference on System Sciences. https://hdl.handle.net/10125/59691 Latham, A., Crockett, K., & McLean, D. (2014). An adaptation algorithm for an intelligent natural language tutoring system. Computers & Education, 71, 97-110. https://doi.org/10.1016/j.compedu.2013.09.014 Liu, B. (2010). Sentiment analysis and subjectivity. In N. Indurkhya, & F. Damerau (Eds.), Handbook of natural language processing (2nd ed.). Chapman & Hall. Manning, C. 
D., Schütze, H., & Raghavan, P. (2008). Introduction to information retrieval. Cambridge University Press. McCombs, B. L. (2017). Historical review of learning strategies research: strategies for the whole learner—A tribute to Claire Ellen Weinstein and early researchers of this topic. Frontiers in Education, 2, 2-44. https://doi.org/10.3389/feduc.2017.00006 McCrae, R. R., & Costa Jr, P. T. (1989). Reinterpreting the Myers‐Briggs type indicator from the perspective of the five‐factor model of personality. Journal of personality, 57(1), 17-40. https://doi.org/10.1111/j.1467-6494.1989.tb00759.x Mitrovic, A. (2003). An intelligent SQL tutor on the web. International Journal of Artificial Intelligence in Education, 13(2-4), 173-197. https://content.iospress.com/articles/international-journal-of-artificial- intelligence-in-education/jai13-2-4-03 Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001). The case for social agency in computer- based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19(2), 177-213. https://doi.org/10.1207/S1532690XCI1902_02 Mpungose, C. B. (2020). Are social media sites a platform for formal or informal learning? Students’ experiences in institutions of higher education. International Journal of Higher Education, 9(5), 300- 311. https://doi.org/10.5430/ijhe.v9n5p300 Myers, I. B., McCaulley, M. H., & Most, R. (1985). Manual, a guide to the development and use of the Myers-Briggs type indicator. Consulting Psychologists Press. Oliveira, E. A. (2008). i-Collaboration: Um modelo de colaboração inteligente personalizada para ambientes de EAD (Master's thesis). Universidade Federal de Pernambuco. Oliveira, E. A. (2013). i-Collaboration 3.0: Um framework de apoio ao desenvolvimento de Ambientes Distribuídos de Aprendizagem Sensíveis ao Contexto (Doctoral dissertation). Universidade Federal de Pernambuco. Oliveira, E. A., Koch, F., Kirley, M., & dos Passos Barros, C. V. 
G. (2015, May). Towards a middleware for context-aware health monitoring. Proceedings of the International Workshop on Multiagent Foundations of Social Computing (pp. 19-30). Springer. Pak, A., & Paroubek, P. (2010). Twitter as a corpus for sentiment analysis and opinion mining. European Language Resources Association, 10, 1320-1326. https://lexitron.nectec.or.th/public/LREC- 2010_Malta/pdf/385_Paper.pdf Paladines, J., & Ramírez, J. (2020). A systematic literature review of intelligent tutoring systems with dialogue in natural language. IEEE Access, 8, 164246-164267. https://doi.org/10.1109/ACCESS.2020.3021383 Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. Proceedings of the Conference on Empirical Methods in Natural Language Processing. https://arxiv.org/abs/cs/0205070 Paramythis, A., & Loidl-Reisinger, S. (2003). Adaptive learning environments and e-learning standards. Proceedings of the Second European Conference on e-Learning (Vol. 1, No. 2003, pp. 369-379). https://files.eric.ed.gov/fulltext/EJ1099144.pdf Pian, Y., Lu, Y., Huang, Y., & Bittencourt, I. I. (2020). A gamified solution to the cold-start problem of intelligent tutoring system. Proceedings of the International Conference on Artificial Intelligence in Education (pp. 376-381). https://doi.org/10.1007/978-3-030-52240-7_68 Poropat, A. E. (2009). A meta-analysis of the five-factor model of personality and academic performance. Psychological Bulletin, 135(2), 322–338. https://doi.org/10.1037/a0014996 Richardson, J. R. J. (1988). Foundations of intelligent tutoring systems. Psychology Press. 
https://doi.org/10.1016/j.paid.2011.04.019 https://doi.org/10.1007/s10648-010-9123-x https://hdl.handle.net/10125/59691 https://doi.org/10.1016/j.compedu.2013.09.014 https://doi.org/10.3389/feduc.2017.00006 https://doi.org/10.3389/feduc.2017.00006 https://doi.org/10.1111/j.1467-6494.1989.tb00759.x https://content.iospress.com/articles/international-journal-of-artificial-intelligence-in-education/jai13-2-4-03 https://content.iospress.com/articles/international-journal-of-artificial-intelligence-in-education/jai13-2-4-03 https://doi.org/10.1207/S1532690XCI1902_02 https://doi.org/10.5430/ijhe.v9n5p300 https://lexitron.nectec.or.th/public/LREC-2010_Malta/pdf/385_Paper.pdf https://lexitron.nectec.or.th/public/LREC-2010_Malta/pdf/385_Paper.pdf https://doi.org/10.1109/ACCESS.2020.3021383 https://arxiv.org/abs/cs/0205070 https://files.eric.ed.gov/fulltext/EJ1099144.pdf https://doi.org/10.1007/978-3-030-52240-7_68 https://doi.org/10.1037/a0014996 Australasian Journal of Educational Technology, 2021, 37(2). 23 Rus, V., D’Mello, S., Hu, X., & Graesser, A. (2013). Recent advances in conversational intelligent tutoring systems. AI Magazine, 34(3), 42-54. https://doi.org/10.1609/aimag.v34i3.2485 Sedrakyan, G., Malmberg, J., Verbert, K., Järvelä, S., & Kirschner, P. A. (2020). Linking learning behavior analytics and learning science concepts: Designing a learning analytics dashboard for feedback to support learning regulation. Computers in Human Behavior, 107, 105512. https://doi.org/10.1016/j.chb.2018.05.004 Spector, J. M. (2014). Conceptualizing the emerging field of smart learning environments. Smart Learning Environments, 1(1), 1-10. https://doi.org/10.1186/s40561-014-0002-7 Stajkovic, A. D., Bandura, A., Locke, E. A., Lee, D., & Sergent, K. (2018). Test of three conceptual models of influence of the big five personality traits and self-efficacy on academic performance: A meta-analytic path-analysis. Personality and Individual Differences, 120, 238-245. 
https://doi.org/10.1016/j.paid.2017.08.014 Tlili, A., Essalmi, F., Jemni, M., & Chen, N. S. (2016). Role of personality in computer-based learning. Computers in Human Behavior, 64, 805-813. https://doi.org/10.1016/j.chb.2016.07.043 Xie, H., Chu, H. C., Hwang, G. J., & Wang, C. C. (2019). Trends and development in technology- enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017. Computers & Education, 140. https://doi.org/10.1016/j.compedu.2019.103599 Zervas, P., Ardila, S. E. G., Fabregat, R., & Sampson, D. G. (2011, July 9-11). Tools for context-aware learning design and mobile delivery [Paper presented]. IEEE 11th International Conference on Advanced Learning Technologies (pp. 534-535). https://doi.org/10.1109/ICALT.2011.164 Zhu, Z. T., Yu, M. H., & Riezebos, P. (2016). A research framework of smart education. Smart Learning Environments, 3(1), 4. https://doi.org/10.1186/s40561-016-0026-2 Corresponding author: Eduardo A. Oliveira, eduardo.oliveira@unimelb.edu.au Copyright: Articles published in the Australasian Journal of Educational Technology (AJET) are available under Creative Commons Attribution Non-Commercial No Derivatives Licence (CC BY-NC- ND 4.0). Authors retain copyright in their work and grant AJET right of first publication under CC BY- NC-ND 4.0. Please cite as: Oliveira, E. A., de Barba, P., & Corrin, L. (2021). Enabling adaptive, personalised and context-aware interaction in a smart learning environment: Piloting the iCollab system. Australasian Journal of Educational Technology, 37(2), 1-23. 
https://doi.org/10.14742/ajet.6792 https://doi.org/10.1609/aimag.v34i3.2485 https://doi.org/10.1016/j.chb.2018.05.004 https://doi.org/10.1186/s40561-014-0002-7 https://doi.org/10.1016/j.paid.2017.08.014 https://doi.org/10.1016/j.chb.2016.07.043 https://doi.org/10.1016/j.compedu.2019.103599 https://doi.org/10.1109/ICALT.2011.164 https://doi.org/10.1186/s40561-016-0026-2 https://creativecommons.org/licenses/by-nc-nd/4.0/ https://creativecommons.org/licenses/by-nc-nd/4.0/ https://doi.org/10.14742/ajet.6792 Introduction Literature review Current study Method Participants The course Measures Procedure The iCollab system Data analysis Results Students’ overall interaction with iCollab How did students access iCollab across context throughout the semester? What was the content of students’ interaction with iCollab throughout the semester? Who initiated interaction between students and iCollab throughout the semester? What was the impact of using iCollab on students’ final grade? Discussion Conclusion Acknowledgements References