Experiment assisting system with local augmented body (EASY-LAB) in dual presence environment

ACTA IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, 1 - 6

Ahmed Alsereidi1, Yukiko Iwasaki1, Joi Oh2, Vitvasin Vimolmongkolporn2, Fumihiro Kato2, Hiroyasu Iwata3
1 Waseda University, Graduate School of Creative Science and Engineering, Tokyo, Japan
2 Waseda University, Global Robot Academic Institute, Tokyo, Japan
3 Waseda University, Faculty of Science and Engineering, Tokyo, Japan

Section: RESEARCH PAPER
Keywords: VR/AR; hands-free interface; telecommunication; teleoperation
Citation: Ahmed Alsereidi, Yukiko Iwasaki, Joi Oh, Vitvasin Vimolmongkolporn, Fumihiro Kato, Hiroyasu Iwata, Experiment assisting system with local augmented body (EASY-LAB) in dual presence environment, Acta IMEKO, vol. 11, no. 3, article 3, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-03
Section Editor: Zafar Taqvi, USA
Received February 26, 2022; In final form July 21, 2022; Published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Ahmed Alsereidi, e-mail: ahmed@akane.waseda.jp

ABSTRACT
This paper introduces a system designed to support subject experiments when the situation does not allow the experimenter and the subject to be in the same place, as during the COVID-19 pandemic, when everyone relied on video conferencing applications that have clear limitations: it is difficult to direct a subject using video and voice alone. The system we developed allows an experimenter to actively watch and interact with the subject, so experiments can be conducted even when operating from a distant location. Another important aspect this study focuses on is the case in which several subjects are required and the experimenter must be able to guide all of them equally well. The proposed system uses a 6-DoF robotic arm with a camera and a laser pointer attached to it on the subject side. The experimenter controls it with a head-mounted display, and it moves to follow the experimenter's head movement, allowing easy instruction of, and intervention on, the subject side. A comparison with other similar research is also covered. The study focuses mainly on which viewing method is the easiest for the experimenter to use, and on whether teaching one subject at a time gives better results than teaching two subjects simultaneously.

1. INTRODUCTION

There has been a notable amount of research and development in telepresence robotics, and since the outbreak of COVID-19, telepresence robots have also been used to support subject experiments. In such studies, researchers and engineers frequently ask individuals to complete a task, observe their behaviour, or evaluate the usability of new technologies. As researchers, we often need to observe how a subject acts in an experiment and provide guidance and instructions while it is running. The EASY-LAB system was designed to allow such experiments to be carried out from a remote location. The system uses a 6-DoF robotic head that utilizes differential gears to imitate human head-waist motion. The neck, the waist, and a detachable mechanism make up the mechanical design of the detachable robotic head. The maximum latency between the user and the robot is 25 ms, which is low enough not to be noticeable to a human operator [1]. This was verified by comparing the human head motion and the robot head motion side by side; both moved identically, as shown in Figure 1.

Figure 1. Detachable head robot used for testing.
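As a rough illustration of how such a latency figure could be checked in software, the sketch below timestamps each head-pose command when it is sent and matches it with the robot's reported pose to estimate the command-to-motion delay. This is only a minimal sketch under our own assumptions (the pose source and the robot feedback callback are hypothetical); it is not the measurement procedure used in [1].

```python
import time
from collections import deque

class LatencyProbe:
    """Estimates command-to-motion latency by matching sent head poses
    with the robot's reported poses (hypothetical feedback channel)."""

    def __init__(self):
        self.pending = deque()   # (sequence_id, send_time)
        self.samples = []        # measured delays in seconds

    def on_head_pose_sent(self, seq_id: int) -> None:
        # Called right before a head-pose command is transmitted.
        self.pending.append((seq_id, time.perf_counter()))

    def on_robot_pose_reported(self, seq_id: int) -> None:
        # Called when the robot reports it has applied the pose with seq_id.
        while self.pending and self.pending[0][0] <= seq_id:
            sent_seq, sent_time = self.pending.popleft()
            if sent_seq == seq_id:
                self.samples.append(time.perf_counter() - sent_time)

    def max_latency_ms(self) -> float:
        # Worst-case observed delay, comparable to the 25 ms figure above.
        return max(self.samples) * 1000 if self.samples else float("nan")
```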
The system can also intervene using a laser pointer to point at the object being worked on. The joint angles of the robot are calculated by inverse kinematics (IK) from the experimenter's acquired head movements and reflected on the real robot. Photon Unity Networking was used to synchronize the motion of the experimenter and the robot remotely [2]; it is a package usually used for multiplayer games because of its flexible matchmaking, in which objects can be synced over the network. Figure 2 shows a system diagram of how the system sends and receives data. Another point to consider is that the system must be user-friendly enough for most people to use. Teleoperated robots have been making advances in this field, as have full-body immersion systems that imitate human motion [3], [4]. Those systems focus on immersion and are lacking in terms of usability, as they are heavy and difficult to manoeuvre. There are cases where multiple subjects are required for a certain experiment; in such cases multiple robots are needed, at least one robot per subject.

There are several requirements to fulfil for this system. First, instructions given to the subjects should be transmitted correctly in as little time as possible. Second, the experimenter must have clear visibility of both subjects' surroundings. Third, the system must allow the experimenter to be present in several locations at the same time (dual presence), i.e. it must let the user freely switch and reallocate attention. In telepresence systems, to fully immerse a user in a remote environment, it is preferable that the user devotes his or her undivided attention to it, and teleoperation work efficiency improves when the system delivers a greater sensation of immersion and presence [5]. In this study, we focused on creating a dual presence system, so the experimenter needs to pay attention to two remote environments simultaneously and be able to shift attention between them as needed.

In this research, we aim to develop a system that allows experimenters to achieve dual presence and monitor both subjects. We propose two types of visual environment presentation, evaluate them in a set of experiments, and then compare and discuss the results. The methods are as follows:
a) Split screen: the experimenter's head-mounted display (HMD) screen is split horizontally in the middle and shows subject A's environment on the top and subject B's environment on the bottom.
b) Superimposed screen: the experimenter's HMD screen shows a single image: either environment A, environment B, or both overlaid with 50 % transparency applied to each.

The subjects' feeling towards the robot must also be taken into consideration when conducting this experiment. Therefore, the system was also designed to operate in three different modes (a minimal sketch of this mode logic is given after the list below):
a) Mode 0: the robots in both environments always follow the experimenter's motion.
b) Mode 1: the robot in environment A moves according to the experimenter's motion, while the robot in environment B is static.
c) Mode 2: the robot in environment B moves according to the experimenter's motion, while the robot in environment A is static.
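To make the mode behaviour concrete, the following is a minimal sketch of how the experimenter's head pose could be dispatched to the two robots depending on the current mode. The class and function names are our own illustrative assumptions (the actual implementation is built in Unity with Photon Unity Networking and is not reproduced here); only the mode semantics and the trigger-button cycling follow the description in the text.

```python
from enum import Enum

class Mode(Enum):
    BOTH = 0      # Mode 0: both robots follow the experimenter
    ONLY_A = 1    # Mode 1: only the robot in environment A follows
    ONLY_B = 2    # Mode 2: only the robot in environment B follows

class ModeDispatcher:
    """Routes head-pose updates to robot A and/or robot B based on the mode."""

    def __init__(self, send_to_a, send_to_b):
        self.mode = Mode.BOTH
        self.send_to_a = send_to_a   # callable that forwards a pose to robot A
        self.send_to_b = send_to_b   # callable that forwards a pose to robot B

    def on_trigger_pressed(self) -> None:
        # One click of the VIVE controller trigger cycles mode 0 -> 1 -> 2 -> 0.
        self.mode = Mode((self.mode.value + 1) % 3)

    def on_head_pose(self, pose) -> None:
        if self.mode in (Mode.BOTH, Mode.ONLY_A):
            self.send_to_a(pose)
        if self.mode in (Mode.BOTH, Mode.ONLY_B):
            self.send_to_b(pose)
```

A robot that is not receiving updates simply holds its last pose, which corresponds to the "static" behaviour described for modes 1 and 2.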
The goal of this paper is to study the idea of dual presence, how well it can be implemented, and how it can be improved. Experiments were conducted to evaluate the system, compare the methods used, and finally suggest a way to improve remote experimenting and advance multi-presence research.

2. PROPOSED METHOD

When using EASY-LAB, all participants improved their task performance and recorded higher scores in the subjective evaluation. This result suggests EASY-LAB's effectiveness, at least in tasks that require pointing and observation from multiple sides [2]. There are three main objectives to achieve with this experiment. First, fulfil the given task successfully. Second, switch between the two robots with ease and be able to exist in two different places at the same time. Finally, give the subjects the feeling that the robot, or at least the person operating it, is human.

As for the experiment itself, the same task is used for all conditions. The robot head has a laser pointer attached to the camera, and it follows the head movement of the user, as shown in Figure 3. The local operator's head motion is measured by a VIVE Pro HMD (HTC). The local software is built with the Unity VR simulator (version 2019.4.17f1, Unity Technologies). For TCP/IP communication between the local PC and the remote PC, the ROS# library was used. For this research, the system was tested over the same network in one building; in the future we would like to broaden the scale, operate it from a different town or country, and measure the difference in delay. Technology-wise, however, it is possible to operate from any distance as long as there is internet access.

Figure 2. EASY-LAB system diagram.
Figure 3. EASY-LAB system and experimenter.

There are two subjects in this experiment, and the second robot head is identical to the one shown in Figure 3 but placed in front of the second subject. The input required from the experimenter is kept simple to improve usability: the VIVE controller's trigger button is used to choose which robot to control, as explained in detail in the next section. On the subject's side, each subject is given a piece of paper with holes in it, as shown in Figure 4, while on the experimenter's side there is an answer sheet that can be used as a reference. The pattern of the answer sheet is generated randomly every time so that the same pattern is never repeated. The experimenter points the laser pointer at hole #1 and waits for 3 seconds; after 3 seconds have passed, the subject inserts a thread into that hole, and so on until hole #6. The answer sheet is also colour coded to make it easier for the experimenter to read. Once all the holes have been connected, the subject stops working on the task, and the experimenter must confirm through the camera that all holes are connected properly, completing the task. A timer is started as soon as the task starts; when the experimenter visually confirms task completion, he presses a button to stop the timer.

Figure 4. Paper used to connect the dots in the experiment.
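As an illustration of the task protocol described above, the sketch below generates a random, non-repeating order of the six holes for the answer sheet and measures the task time from start until the experimenter confirms completion. It is a minimal sketch under our own assumptions: the six-hole count and the timer behaviour come from the description above, while the function names and data structures are hypothetical.

```python
import random
import time

NUM_HOLES = 6  # holes #1..#6, as in the experiment task

def generate_answer_sheet(previous=None):
    """Return a random hole order, re-drawing if it repeats the previous pattern."""
    while True:
        order = random.sample(range(1, NUM_HOLES + 1), NUM_HOLES)
        if order != previous:
            return order

class TaskTimer:
    """Started when the task begins, stopped when the experimenter confirms completion."""

    def __init__(self):
        self.start_time = None
        self.elapsed = None

    def start(self):
        self.start_time = time.perf_counter()

    def stop_on_confirmation(self):
        self.elapsed = time.perf_counter() - self.start_time
        return self.elapsed

# Example of one trial: draw a fresh answer sheet and start the timer.
sheet = generate_answer_sheet()
timer = TaskTimer()
timer.start()
# ... experimenter points at sheet[0], waits 3 s, the subject threads the hole, etc. ...
```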
3. SYSTEM CONFIGURATION

3.1. Experimenter System Configuration

The outcome studied on the experimenter side is the effect of changing the way the environment visuals are shown, and the usability of the proposed interface in each mode. The basic requirement for presenting the detachable head's vision is that the experimenter can become familiar with the remote environments and be aware of the state of the subject and the task he/she is performing. Another important requirement is that the user can allocate attention to the two environments at will while using this interface to perform the co-presence tasks. In this section, we design both the concurrent vision presentation system that relays environment information and the control modes.

Regarding the environment information, the easiest way for most humans to become familiar with an environment is to see it; therefore, we designed a system that presents images of the two environments simultaneously from a first-person view, since this provides a high level of immersion [6]. Recent studies have proposed several visual presentation systems, such as a split screen that arranges several images on one screen [7], and screen superimposition, which overlays half-transparent images [8]. In this research, we use both methods: the screen superimposition viewing method is used because it can provide two first-person-view images simultaneously, while the second viewing method splits the two environments into two screens.

The other requirement of this system is to allow users to easily switch or reallocate their attention. Researchers have proposed several methods for switching attention between two transparent images, such as changing the transmission ratio of the two images with a foot pedal [9] or with the user's gaze point [10]. For this research we aimed for a simpler method: pressing a button on the VIVE controller (the HMD used is also a VIVE). With every button click, the mode changes in the sequence mode 0 -> mode 1 -> mode 2.

4. EXPERIMENTER VIEWING METHOD

The first viewing method uses two superimposed screens, created by placing two depth-separated planes in different positions, as shown in Figure 5 a). The experimenter can see both subjects at the same time; a transparency of 50 % is applied to both images. Because we want to test the difference between superimposed and split screens, the 50 % transparency effect is used in all viewing methods. In the second viewing method, the experimenter sees both images in a split screen (Figure 5 b).

Figure 5. Overview of the viewing methods. a) superimposed. b) split screen.
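To illustrate how the two viewing methods could be composed into a single HMD image, the sketch below blends two environment camera frames either by superimposing them at 50 % opacity or by stacking them in a vertical split. This is a minimal sketch using NumPy arrays as stand-in camera frames; the actual system composes the views on depth-separated planes inside Unity, which is not reproduced here.

```python
import numpy as np

def superimposed_view(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Overlay the two environment images with 50 % transparency each."""
    return (0.5 * frame_a.astype(np.float32)
            + 0.5 * frame_b.astype(np.float32)).astype(np.uint8)

def split_view(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Environment A on the top half, environment B on the bottom half.
    Here each frame is simply cropped to half height as a stand-in for
    rendering each feed in one half of the HMD screen."""
    h = frame_a.shape[0]
    top = frame_a[: h // 2]
    bottom = frame_b[h // 2 :]
    return np.vstack([top, bottom])

# Example with dummy 480x640 RGB frames standing in for the two camera feeds
a = np.full((480, 640, 3), 80, dtype=np.uint8)
b = np.full((480, 640, 3), 200, dtype=np.uint8)
hmd_image = superimposed_view(a, b)   # or split_view(a, b)
```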
4.1. Experimenter operation modes

Ideally, the system should be evaluated with several tasks to verify its usability and to avoid dependence on the task itself as much as possible. However, this single task should be sufficient for evaluating the dual presence viewing and control methods, and the evaluation is based on meeting the following conditions:
a) Point at the correct holes as given in the answer sheet.
b) Instruct the subjects as quickly as possible; depending on the viewing and control method, the time is expected to vary.
c) Make sure the subjects perform the task correctly and, in addition, notice as soon as possible when a subject makes a mistake.
For every experiment, the experimenter was given time to test each mode and train for about 3 minutes to familiarize themselves with the system.

The task was performed four times, with the conditions changing every time, as follows:
A) Superimposed screens / mode 0: the user sees both environments at the same time and both robots move all the time. When changing from mode 0 to mode 1 in this case, mode 1 only shows the answer sheet, and the user can return to operating the robots by pressing the same button again to go back to mode 0.
B) Superimposed screens / modes 1&2: the user sees only environment A when in mode 1 and controls the robot in that environment, while the robot in environment B stops; vice versa in mode 2. In this case, mode 0 shows the answer sheet.
C) Split screen / mode 0: the user sees both environments at the same time and both robots move all the time. When changing from mode 0 to mode 1 in this case, mode 1 only shows the answer sheet, and the user can return to operating the robots by pressing the same button again to go back to mode 0.
D) Split screen / modes 1&2: the user sees both environments when in mode 1 but controls only the robot in environment A, while the robot in environment B stops; vice versa in mode 2. In this case, mode 0 shows the answer sheet.

4.2. Results and discussion on Experimenter operation

In this section, we verify the usability of the developed system through a user study. For this experiment, we asked for the cooperation of six robotics researchers (mean age 24, SD 1.26; 5 male, 1 female) who had previously been involved in subject experiments. Each experimenter instructed two subjects at the same time; the 12 participants who acted as subjects are also researchers familiar with robotics (mean age 23, SD 1.31; 10 male, 2 female). The experimenter and the two subjects were in locations where they could not see each other, and the only way to communicate and guide the subjects through the task was the laser pointer attached to each robot. The experimenter points at the holes in the correct order as given in the answer sheet provided when the task starts. Since there are two subjects, all the experimenters chose to instruct subject A to connect the first hole, then immediately moved to guide subject B, and repeated this until both finished the task (6 dots connected).

Depending on the method used, the number of errors changed, as shown in Table 1; an error was counted whenever a thread was inserted into the wrong hole. The total number of tries, which is also the maximum possible number of errors, is 12 for each method. Table 2 shows the average instruction time in seconds for each method.

Table 1. Average instruction error.
  Method                                    Instructional error (times)
  (a) superimposed screens / mode 0         3
  (b) superimposed screens / modes 1&2      0
  (c) split screen / mode 0                 2
  (d) split screen / modes 1&2              1

Table 2. Average instruction time.
  Method                                    Instructional time (s)
  (a) superimposed screens / mode 0         198.7
  (b) superimposed screens / modes 1&2      179.3
  (c) split screen / mode 0                 187.4
  (d) split screen / modes 1&2              157.1

The outcome for each method is as follows:
a) Three errors were made in total, the highest of any operation method. This result was expected: when the two screens are superimposed, the view is confusing at times. Some users reported that one of the subjects connected a hole without being instructed to; this is because both robots move at the same time, so while the experimenter was instructing subject B, subject A's robot was also moving. Another experimenter reported that it was difficult to distinguish between the two laser pointers. As for time, this method had the longest operation time; it took the users longer to distinguish between the two environments, which increased the operation time more than necessary.
b) No errors were made with this method. This is because the user can focus entirely on one environment and ignore the other until the instruction is finished, as one of the users reported. The average time was 179 seconds, 19 seconds faster than method (a); even though the viewing method was the same, the time improved because it was faster to instruct one person at a time, even though switching between environments took some time.
c) With this method, the total number of errors is 2; again, the reason is that when guiding subject B, subject A's robot also moves and sometimes points the subject to the wrong place. The average time taken is 187 seconds, faster than method (a) but slower than method (b).
d) This method resulted in one error, which is most likely due to human error. The average time for this method is 157 seconds, the fastest of all. One user reported that the operation was very smooth: checking the answer sheet in mode 0, instructing subject A in mode 1, instructing subject B in mode 2, and repeating.

We can see from the results above that the best method for the experimenter is method (d) if the task's main concern is time, as it gave the best instruction time with only one error. However, method (b) is a better candidate if the task allows no errors to be made, as it is the easiest to focus on.

After the experiment was over, a questionnaire, shown in Table 3, was given to the experimenters; the answers were given on a linear scale from 1 to 7 for all questions.

Table 3. Experimenter evaluation questionnaire.
  Qe1  Can you see both environments A & B easily?
  Qe2  Do you feel that you exist in two different places at the same time, or do you feel that both environments exist in the same place?
  Qe3  Were you able to instruct the two subjects equally well?
  Qe4  Did you get confused with the instructing when you switched environments?
  Qe5  Did you feel that the time taken to instruct the subjects was too long?

The results of the questionnaire for each method are as follows:
a) In this method, users reported that it was hard to see most of the time. They also felt that the two environments existed in the same place. The average answer was in the middle of the scale, as it also was for question 4. They also felt that the time taken to instruct was too long.
b) For this method, most users reported that it was easy to see, and they felt as if they existed in two different locations at the same time. It was easy to instruct both subjects in this method, and almost no one got confused when instructing in this setting. Users reported the time taken to be a little lower than in method (a), but it was still considered long.
c) In this method, users reported that it was easier to see the environments. Some users felt that they existed in two different places while others felt that they existed in the same place, with most answers falling in the middle of the scale. Most users were able to instruct very well. Question 4 was also in the middle of the scale, while all users reported that the instruction time was short.
d) All the results are exactly the same as for (c) except for question 4: in this method, no one got confused.

Overall, the questionnaire results show that the users had the best experience when operating with method (d).

4.3. Results and Discussion on Subject Operation

The focus of the research on the subject side is to study the effect of the experimenter having to instruct two people at once and how the subjects react to it, especially when the operation method changes. The requirements for this part are as follows:
a) Fulfil the given task successfully, quickly, and with minimum errors.
b) Be able to feel the presence of the experimenter giving the instructions.
Before starting the task, only the experimenter knows which operation mode is being used. Nevertheless, depending on the mode used, the subjects reacted differently to the task, so a survey of four questions was conducted after each task to investigate further. The answers to the questionnaire (Table 4) were given on a linear scale from 1 to 7, as in the experimenter survey.

Table 4. Subject evaluation questionnaire.
  Qe1  Were you able to fulfil the task successfully?
  Qe2  Was the instruction of the robot easy to follow?
  Qe3  Were you able to feel the presence of the person instructing you?
  Qe4  Did you feel that it was the instructor, and not someone else, who was always watching you?

The methods used are the same as in Table 2, and the results are as follows:
a) In this method, some users did not complete the task successfully and had a harder time following the instructions. Most users also felt that the instructor was always watching, and they felt his presence most of the time.
b) Most users had no issues completing the task in this mode. On the other hand, they felt the presence of the instructor less than in method (a).
c) The results of this method are the same as for (a).
d) The results of this method are the same as for (b).

From the answers to the survey, it can be seen that the experimenter's viewing method has no effect on the subjects' performance. The operation mode, on the other hand, did make a difference: users felt more at ease when the robots were moving all the time, as this made them feel the presence of the instructor more. To further examine this, a Wilcoxon signed-rank test, shown in Figure 6, was used to verify which mode was easier to follow, with 7 being hard to follow and 1 being easy to follow. The score improved significantly when modes 1&2 were used compared to mode 0; since the test statistic is lower than the critical value (5 < 8), we reject H0. This is sufficient evidence that there is a difference between the two modes in terms of which one is easier to follow.

Figure 6. Results of the Wilcoxon signed-rank test comparing how easy the robot was to follow in mode 0 and in modes 1&2.
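For reference, the following is a minimal sketch of how such a paired comparison could be computed from the 1-7 ease-of-following ratings of the 12 subjects using SciPy's Wilcoxon signed-rank test. The rating arrays are made-up placeholder values, not the data collected in this study, and the test reported above was evaluated against a tabulated critical value rather than with this code.

```python
from scipy.stats import wilcoxon

# Placeholder 1-7 ratings (1 = easy to follow, 7 = hard to follow) for the
# 12 subjects; these are NOT the study's data, only example values.
ratings_mode0  = [5, 4, 6, 5, 3, 4, 5, 6, 4, 5, 4, 6]   # robots always moving
ratings_mode12 = [2, 3, 3, 2, 2, 2, 4, 3, 2, 3, 3, 2]   # one robot moves at a time

# Two-sided Wilcoxon signed-rank test on the paired ratings.
statistic, p_value = wilcoxon(ratings_mode0, ratings_mode12)
print(f"W = {statistic}, p = {p_value:.4f}")
# The study instead compared the statistic with the tabulated critical value
# (5 < 8) and rejected H0 at the chosen significance level.
```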
5. DISCUSSION ON THE PRACTICAL APPLICATION OF EASY-LAB

In this section, the practical application of the method proposed in this study is discussed. The advantages of using EASY-LAB in dual presence settings, and of the methods used, can be clarified by comparing this method with other manipulation methods. Following the comparison, concerns about using this interface in real life are discussed.

5.1. Comparison with other similar methods

Based on the results of the previous section, the proposed method was compared with other similar methods:

a. GestureCam: a video communication system for sympathetic remote collaboration. In the SharedView system, the operator wears the SharedView device. The SharedCamera's image is sent to a display within the instructor's sight, and the instructor gestures in front of the display. The display and the gestures are captured by a camera and sent back to the operator's HMD; in this way, the instructor can give instructions with gestures [11]. EASY-LAB provides a more modern use case and adds the ability to increase the number of robots as needed; in this research, two robots were required.

b. Use of gaze and hand pointers in mixed reality remote collaboration. This mixed reality remote collaboration system supports hand gesture, sketch, hand pointer, and gaze pointer visual cues for communication during collaboration. The system tracks the remote expert's hands to drive the visual cues and employs a 360-degree camera to share the task space [12]. This system requires more practice to get used to its operation and takes more time than EASY-LAB to instruct someone.

c. TELESAR VI: Telexistence Surrogate Anthropomorphic Robot VI. TELESAR VI is a newly developed telexistence platform for the ACCEL Embodied Media Project. It was designed and implemented with a mechanically unconstrained full-body master cockpit and a 67 degrees-of-freedom (DoF) anthropomorphic avatar robot. The avatar robot can operate in a sitting position, since the main area of operation is intended to be manipulation and gestures. The system provides a full-body experience of our extended "body schema," which allows users to maintain an up-to-date representation in space of the positions of their different body parts, including their head, torso, arms, hands, and legs [13]. While this system provides more precise movements, it is too expensive for most researchers to use and requires experience to operate; its heavy and large build also makes it hard to move.

5.2. Limitations

In this study, only a laser pointer was used to give instructions; the purpose was to keep the operation method simple so that it would not interfere with verifying which viewing method or operation mode is better. A disadvantage of this is that it is hard to transmit information other than pointing. A button to stop the laser pointer in one environment should be added to reduce confusion as much as possible. Another useful feature would be a way to transmit the experimenter's audio to the subjects being instructed. One more restriction was the small number of experimenters; to provide more concrete results and findings, the number of participants should be increased.

6. CONCLUSIONS

In this research, we proposed using EASY-LAB to perform dual presence operation control. The task was performed under four different settings. First, two different viewing methods were selected and tested: superimposed screens and split screens. It was found that the split screen method provided better results; the time taken to complete the task was shorter, with a minimal number of errors. Two operation modes were used in this system: one mode moved all robots at the same time, while the other allowed the experimenter to choose one robot to control at a time. The best control method in terms of ease of use and fewest errors is to control one robot at a time.
Furthermore, on the subject side, users had an easier time following the robot's instructions when one robot at a time was being controlled. In the future, an intuitive posture instruction method will be developed that allows more information to be transmitted and provides a stronger sense of being present in multiple places at the same time. Beyond the control method, the next step is to increase the number of robots and subjects to evolve the system from dual presence to multi-presence. We have yet to test the limit of how many subjects a single person can instruct using EASY-LAB; as the number of subjects increases, the control method must also be revised to accommodate such a system. Finally, this system has the potential to be used widely in education, conferences, and similar settings. Further testing is therefore needed in these environments, as this study might suggest a new way of working with other people from remote places.

ACKNOWLEDGEMENT

This research is supported by Waseda University Global Robot Academic Institute, Waseda University Green Computing Systems Research Organization and by JST ERATO Grant Number JPMJER1701, Japan.

REFERENCES

[1] V. Vimolmongkolporn, F. Kato, T. Handa, Y. Iwasaki, H. Iwata, Design and development of 6 DoFs detachable robotic head utilizing differential gear mechanism to imitate human head-waist motion, 2022 IEEE/SICE International Symposium on System Integration (SII), Narvik, Norway, 9-12 January 2022, pp. 467-472. DOI: 10.1109/SII52469.2022.9708793
[2] Y. Iwasaki, J. Oh, T. Handa, A. A. Sereidi, V. Vimolmongkolporn, F. Kato, H. Iwata, Experiment Assisting System with Local Augmented Body (EASY-LAB) for Subject Experiments under the COVID-19 Pandemic, ACM SIGGRAPH 2021 Emerging Technologies, Virtual, 9-13 August 2021, pp. 1-4. DOI: 10.1145/3450550.3465345
[3] I. Yamano, T. Maeno, Five-fingered Robot Hand using Ultrasonic Motors and Elastic Elements, Proc. of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18-22 April 2005, pp. 2673-2678. DOI: 10.1109/ROBOT.2005.1570517
[4] J. Butterfass, M. Grebenstein, H. Liu, G. Hirzinger, DLR-Hand II: next generation of a dextrous robot hand, Proc. of the 2001 IEEE Int. Conference on Robotics and Automation (ICRA), Seoul, Korea (South), 21-26 May 2001, vol. 1, pp. 109-114. DOI: 10.1109/ROBOT.2001.932538
[5] W. Zhou, J. Zhu, Y. Chen, J. Yang, E. Dong, H. Zhang, X. Tang, Visual Perception Design and Evaluation of Electric Working Robots, IEEE Int. Conference on Mechatronics and Automation, Tianjin, China, 4-7 August 2019, pp. 886-891. DOI: 10.1109/ICMA.2019.8816366
[6] H. Debarba, E. Molla, B. Herbelin, R. Boulic, Characterizing embodied interaction in First and Third Person Perspective viewpoints, IEEE Symposium on 3D User Interfaces (3DUI), Arles, France, 23-24 March 2015, pp. 67-72. DOI: 10.1109/3DUI.2015.7131728
[7] R. Sato, M. Kamezaki, J. Yang, S. Sugano, Visual Attention to Appropriate Monitors and Parts Using Augmented Reality for Decreasing Cognitive Load in Unmanned Construction, Proc. of the 6th Int. Conference on Advanced Mechatronics, No. 15-210, December 2015, p. 45. DOI: 10.1299/jsmeicam.2015.6.45
[8] T. Miura, Behavioral and Visual Attention, Kazama Shobo, Chiyoda, Japan, 1996, ISBN 978-4-7599-1936-3.
[9] S. Iizuka, Y. Iwasaki, H. Iwata, Research on the Detachable Body - Validation of transparency ratio of displays for the co-presence dual task, The Robotics and Mechatronics Conference, Hiroshima, Japan, 5-8 June 2019, paper no. 2A2-L04 (in Japanese).
[10] M. Y. Saraiji, S. Sugimoto, C. L. Fernando, K. Minamizawa, S. Tachi, Layered telepresence: simultaneous multi presence experience using eye gaze based perceptual awareness blending, ACM SIGGRAPH 2016 Posters, Anaheim, USA, 24-28 July 2016, pp. 1-2. DOI: 10.1145/2945078.2945098
[11] H. Kuzuoka, T. Kosuge, M. Tanaka, GestureCam: a video communication system for sympathetic remote collaboration, Proc. of the 1994 ACM Conference on Computer Supported Cooperative Work (CSCW '94), Chapel Hill, North Carolina, USA, 22-26 October 1994, pp. 35-43. DOI: 10.1145/192844.192866
[12] S. Kim, A. Jing, H. Park, S. H. Kim, G. Lee, M. Billinghurst, Use of Gaze and Hand Pointers in Mixed Reality Remote Collaboration, 9th Int. Conference on Smart Media and Applications (SMA), Jeju, Republic of Korea, 17-19 September 2020, pp. 1-6.
[13] S. Tachi, Y. Inoue, F. Kato, TELESAR VI: Telexistence Surrogate Anthropomorphic Robot VI, Int. Journal of Humanoid Robotics, vol. 17, no. 5, 2020, 2050019. DOI: 10.1142/S021984362050019X
[14] HTC VIVE, 2011. Online [Accessed 26 February 2022] https://www.vive.com/eu/product/vive/
[15] Arduino. Online [Accessed 26 February 2022] https://www.arduino.cc/
[16] C. Zaiontz, Wilcoxon Signed-Ranks Table, 2020. Online [Accessed 26 February 2022] http://www.real-statistics.com/statistics-tables/wilcoxon-signed-ranks-table/
[17] Unity Technologies Japan/UCL, Unity-chan!, 2014. Online [Accessed 26 February 2022] https://unity-chan.com/