International Journal of Interactive Mobile Technologies (iJIM) – eISSN: 1865-7923 – Vol. 13, No. 3, 2019

An Interactive Mixed Reality Ray Tracing Rendering Mobile Application of Medical Data in Minimally Invasive Surgeries

https://doi.org/10.3991/ijim.v13i03.9893

Samir A. El-Seoud (*), Amr S. Mady, Essam A. Rashed
The British University in Egypt (BUE), Cairo, Egypt
samir.elseoud@bue.edu.eg

Abstract—Visualization of the patient's anatomy is the most important pre-operative step in surgery; minimally invasive surgeries in particular rely entirely on medical visualization before operating on a patient. Today, however, clinicians must page through many two-dimensional (2D) scan slices to mentally reconstruct a patient's three-dimensional (3D) anatomical structure. Mixed Reality (MR) can make medical visualization much easier and create a better environment for surgery. It reduces the effort and time surgeons spend locating a problem: instead of inspecting stacks of 2D slices, they see the patient's body in 3D, augmented into their own view, and interact with it as they wish. Moreover, it can reduce the number of scans physicians request, resulting in a lower harmful X-ray dose for both the patient and the radiologist. Medical visualization is an active research topic, as it provides physicians with clinically feasible tools for diagnosis, follow-up, and decision making throughout the course of a disease. Current clinical imaging facilities can provide 3D images that guide various interventional procedures. The main challenge is mapping the information presented in the digital image onto the real object, a task commonly performed mentally and requiring considerable skill from the physician. This paper addresses this problem with a mixed reality system that merges the digital image of the patient's anatomy with the visual image of the patient: anatomical images obtained from Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) are mapped over the patient's body using a virtual reality (VR) head-mounted display (HMD).

Keywords—Mixed reality, volume rendering, medical imaging, ray-casting, ray-tracing.

1 Introduction

Medicine and technology should grow at the same pace; medicine, in turn, should take advantage of the rapid development of technology. One significantly important application of new technologies in medicine is the visualization of human anatomy [1, 2]. Interventional radiology procedures guided by imaging such as CT/MRI do not yet fully satisfy surgeons. In current practice, radiologists must scan the patient from different positions, after which surgeons and radiologists investigate the scanned images to locate the problem. Consequently, doctors and patients are exposed to heavy radiation. However, some medical imaging systems provide a series of scans that can be viewed as a 3D model using appropriate software, which can guide interventional clinical procedures.
Augmented reality (AR) has long been used to address this medical problem [3]. The core challenge in such applications is to offer a real-time representation of human organs that is accurate enough for the surgeon to proceed with the clinical procedure [4]. The developed system must consider the data acquisition device (e.g. video camera, human eye), the data registration that maps the digital data onto the real visualized object, and finally the motion handling and calibration.

In this research, we consider the following scenario. A patient is undergoing a minimally invasive surgery in which the surgeon needs to perform a procedure such as an injection of medication or a biopsy. We also assume that a CT or MRI image of the patient is available. However, it is difficult for the surgeon to map the 3D anatomical image onto the real patient. This happens frequently when the surgeon lacks experience with similar procedures, as it requires a mental process to imagine how the 2D slices presented on the screen correspond to the patient in the operating room. The target of the developed system is to map the anatomical image over the patient's real body, making the process easy and comfortable. Our developed system should be one step forward in solving the problem of visualizing human bodies.

Volume rendering of a patient's 3D image data from multiple slices is a revolution in imaging the human body [5]. A voxel is the 3D equivalent of a pixel and the smallest element in a 3D object [6]. Voxels are used to build 3D objects, mostly in computer graphics applications such as computer games, but also to render volumes. Volume rendering applications have taken a large part in interventional and minimally invasive surgeries over the past years. Before the volume rendering concept was adopted, other techniques concentrated on visualization via surface shading: the volumetric data are transformed into geometric primitives, which are then rasterized to screen pixels. This represents the object well, but is not the best choice for visualization. Volume rendering, in contrast, displays the information inside the object's volume: it is a direct display that transforms the volumetric data to screen pixels directly and uses transparency to see through volumes.

The presented study aims at the development of MR software to be used in minimally invasive surgeries and interventional procedures. Several groups of researchers and scientists have worked in this same field of technology, but none went on to develop such software in MR, despite its vast importance in medicine; some of these works are discussed in Section 3.3. Our goal is to reduce the heavy load of scan visualization while saving time and effort. This approach is also much cheaper than previous methods: our system requires only a smartphone and an MR-ready headset. Mixed Reality is the combination of Virtual Reality (VR) and AR [7]. This combination brings the real world and the digital one together in a single reality [8, 9]. It allows users to interact with both physical and virtual items, making it more practical than previous methods.

2 Materials and Methods

2.1 System overview

The introduced system visualizes medical images (CT, MRI) as a 3D object. First, we use the developed software to visualize the medical images with a volume rendering ray-casting technique. The term volume rendering describes techniques that allow the visualization of 3D data; it visualizes sampled functions of three spatial dimensions by computing 2D projections of a colored, semi-transparent volume. The technique works as follows:

Step 1:

• Trace a ray from each pixel into object space.
• Compute and accumulate color/opacity values along the ray in the process of pixel compositing.
• Assign the obtained value to the pixel.

Figures 1 and 2 illustrate the process of ray marching and the compositing of pixels.

Fig. 1. Ray-marching process [10, 11].

Fig. 2. Compositing of pixel color/opacity along the ray [11, 12], where c is the pixel color and α (alpha) refers to the opacity.

Step 2:

In this step, we use a compositing technique named alpha blending, i.e. the iterative computation of the discretized volume rendering integral. Figure 2 illustrates how alpha blending works as each ray passes through the object along its direction.
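To make the two steps concrete, the sketch below implements the per-ray loop on the CPU with orthographic rays and front-to-back alpha blending, C ← C + (1 − A)·αᵢ·cᵢ and A ← A + (1 − A)·αᵢ. It is a minimal illustration of the technique under our own assumptions (a grayscale transfer function, rays marching along the z axis); it is not the shader code of the developed software.

```python
import numpy as np

def transfer_function(v):
    """Map a normalized scalar sample v in [0, 1] to (color, opacity).
    Illustrative grayscale ramp; real systems use an editable lookup table."""
    return np.array([v, v, v]), 0.05 * v  # low per-sample opacity

def render_orthographic(volume):
    """Step 1: trace one ray per (x, y) pixel into object space, here
    simply along the z axis. Step 2: alpha-blend the samples front to back."""
    nz, ny, nx = volume.shape
    image = np.zeros((ny, nx, 3))
    for y in range(ny):
        for x in range(nx):
            color = np.zeros(3)  # accumulated color C
            alpha = 0.0          # accumulated opacity A
            for z in range(nz):  # march along the ray
                c_i, a_i = transfer_function(volume[z, y, x])
                color += (1.0 - alpha) * a_i * c_i  # front-to-back compositing
                alpha += (1.0 - alpha) * a_i
                if alpha > 0.99:  # early ray termination
                    break
            image[y, x] = color  # assign the obtained value to the pixel
    return image

# Tiny synthetic test volume: a solid sphere inside an empty cube.
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
volume = (((zz - 16)**2 + (yy - 16)**2 + (xx - 16)**2) < 100).astype(float)
image = render_orthographic(volume)  # 32x32 RGB projection of the sphere
```

On the headset, a loop like this would run per screen pixel on the GPU. Because the compositing is front-to-back, rays through an opaque volume saturate at the first surface samples they hit; internal structures only become visible once the per-sample opacity is lowered.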
The developed software runs on Samsung Gear VR headsets and uses the headset's pass-through camera feature, which enables the software to augment the 3D object of the medical scans into real-world space. Interaction with the augmented object is performed using the Gear VR controller. Users can manipulate the viewed 3D object generated from the medical image slices, hiding parts of the object or viewing it in different ways, with GUI features that help the user interact with it more easily, such as:

• Increasing visibility
• Increasing and decreasing opacity
• Clipping (removing parts of the object) on the X, Y and Z axes
• Rotation and translation

Briefly, the considered scenario may be summarized as follows:

• Obtain volumetric medical data in Digital Imaging and Communications in Medicine (DICOM) [13] or RAW file format.
• Preprocess the data into the best possible lossless, usable form (a sketch of this step follows Section 2.2).
• Store the data on a smartphone and mount the phone on a VR headset that has a pass-through camera feature.
• The software renders the preprocessed data as a 3D object into reality, using AR technology through the virtual reality headset.
• The user interacts with the 3D object via the Gear VR controller.

Using this scenario, surgeons and radiologists can see the scanned slices of the patient as a real 3D object in front of them and interact with it through a controller. They can zoom in or out, make parts of the object transparent, and clip parts of it away.

2.2 Image acquisition

CT or MRI scanners first scan the patient. The measured data are then sent to an online archive to be stored and registered. Thereafter, the data are sent to the smartphone via wireless communication for processing and visualization. In this study, we used CT data provided by Suez Canal University Hospital with blinded patient information. An example of the image slices is shown in Fig. 3. Moreover, we used freely available online CT data to confirm the validity of the proposed method with different resources.
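The paper does not spell out its preprocessing pipeline, so the following is only a plausible sketch of the "obtain DICOM, preprocess" steps above: it stacks a folder of CT slices into one normalized voxel array using the pydicom library (our assumption; the authors' actual toolchain is not named).

```python
import numpy as np
from pathlib import Path

import pydicom  # assumed DICOM reader, not named in the paper

def load_ct_volume(dicom_dir):
    """Stack single-slice DICOM files into one (z, y, x) float array,
    ordered by slice position and normalized to [0, 1] for rendering."""
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    # ImagePositionPatient[2] is the slice's z coordinate in patient space.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    # Apply the stored linear rescale to recover Hounsfield units, if present.
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    volume = volume * slope + intercept
    # Normalize so the transfer function receives samples in [0, 1].
    volume -= volume.min()
    volume /= max(volume.max(), 1e-6)
    return volume

# volume = load_ct_volume("ct_slices/")  # e.g. 300 slices -> (300, 512, 512)
```

Sorting by ImagePositionPatient rather than by file name avoids mis-ordered slices, a common source of garbled volumes; the affine rescaling is invertible, so no intensity information is lost.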
3 Results and Discussion

3.1 Samples and results

In this section, we demonstrate results obtained from an experimental study using the developed system. Fig. 3 shows a sample of abdominal CT slices from the CT data used in this experiment. These images demonstrate the anatomical structure of human internal organs.

Fig. 3. Sample slices from the first CT dataset.

Fig. 4. (left) A 3D object rendered with ray marching using the first dataset, full opacity, no clipping, front facing, (center) rotated 90 degrees on the Y-axis and (right) rotated 90 degrees on the X-axis.

Fig. 5. (left) Same 3D object, 0.03 opacity, no clipping, front facing, (center) rotated 90 degrees on the Y-axis and (right) rotated 90 degrees on the X-axis.

Fig. 6. (left) Same 3D object, 0.03 opacity, clipped 50% on the X-axis, front facing, (center) rotated 90 degrees.

Fig. 7. Sample slices from the second CT dataset.

Fig. 8. (left) A 3D object rendered with ray marching using the second dataset, full opacity, no clipping, front facing, (center) rotated 90 degrees on the Y-axis and (right) rotated 90 degrees on the X-axis.

Fig. 9. (left) Same 3D object, 0.03 opacity, no clipping, front facing, (center) rotated 90 degrees on the Y-axis and (right) rotated 90 degrees on the X-axis.

Fig. 10. (left) Same 3D object, 0.05 opacity, clipped 50% on the Y-axis, front facing, (center) rotated 90 degrees on the Y-axis and (right) rotated 90 degrees on the X-axis.

The proposed method was applied to the volume image shown in Fig. 3; the surface of the patient's body, rendered as a 3D object and viewed from three different angles, is shown in Fig. 4. At full opacity the volume rendering displays only the surface of the 3D object, and it is not possible to view internal structures with this visualization setup. The internal structures can be viewed from three different angles after decreasing the opacity value, as shown in Fig. 5. Several organs can then be viewed with higher quality: the spinal cord, liver and kidneys appear accurately as 3D structures. Figure 6 shows only half of the rendered object from the first dataset, viewed from different angles. The same experiment was repeated for the second dataset, and the results are shown in Figs. 7-10.
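As a hedged sketch of how the opacity and clipping controls behind Figs. 4-10 could enter the rendering loop, the function below scales every sample's opacity by a global slider value and zeroes out samples on one side of an axis-aligned clipping plane. The names and default values are illustrative (chosen to echo the 0.03 opacity of Fig. 5 and the 50% X-axis clip of Fig. 6), not taken from the developed software.

```python
def controlled_sample_alpha(a_i, x, nx,
                            opacity_scale=0.03,    # global opacity slider
                            clip_x_fraction=0.5):  # clipped portion on X
    """Per-sample view controls applied during ray marching: samples on
    the clipped side of the plane contribute nothing to any ray, so that
    half of the object simply disappears; the rest are made translucent."""
    if x < clip_x_fraction * nx:   # axis-aligned clipping on X
        return 0.0
    return a_i * opacity_scale     # uniform translucency

# Inside the compositing loop of the Section 2.1 sketch, replace a_i with:
#   a_i = controlled_sample_alpha(a_i, x, nx)
```

Because compositing is front-to-back, scaling every sample's opacity down lets rays penetrate past the skin surface before saturating, which is what reveals internal organs such as the spinal cord, liver and kidneys in Fig. 5.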
3.2 Discussion

The proposed software helps reduce the burden of medical image visualization, saving time and effort for surgeons and radiologists, with a relatively fast run time: it requires only a few minutes to render a dataset of 300 images. The method has the potential to be adopted in several minimally invasive surgeries where the surgeon needs to view internal structures mapped onto the patient's body in real time. The results indicate that the proposed method supports the rendering and 3D visualization of CT volumes in a very short time, leading to exact recognition of large objects. However, it remains challenging to observe small structures such as blood vessels and veins; further development is required to improve the accuracy towards a better visualization of objects smaller than 10 mm.

3.3 Related work

In 2010, a group of researchers from the University of München, Germany, started a project that maps acquired CT scans onto the patient's body. Their AR system, built on optical tracking and a video see-through HMD for visualization, was developed to keep track of the objects in the scene. This is carried out by two separate optical tracking systems: four infrared ARTtrack2 cameras mounted on the room's ceiling form an outside-in optical tracking system, while an infrared (IR) camera mounted directly on the HMD serves as an inside-out optical tracking system. The video image is used as the context layer, while the focus layer is rendered with volume rendering; occlusion handling is shown for instruments and hands [14]. Images of their work are shown in Fig. 11.

Fig. 11. "(a) Illustration of the occlusion problem. (b, c, d) Render pipeline for correct occlusion handling, (b) video texture, (c) hit texture for the skin, (d) final composition of (b) and (c). (e) like (d) with in-body MPR. (f) Focus and Context rendering with shaded volume rendering for the focus layer (bone), virtual mirror and instrument." [14]

Similarly, in 2018, another group of researchers and scientists developed a VR imaging technique that displays and interacts with optical coherence tomography (OCT) data [15]. Their application was installed on a high-end notebook (Windows 10 Home, 64-bit, NVIDIA GeForce GTX 980 with 8192 MB GDDR5, 32 GB RAM, Intel Core i7-6700K CPU @ 4.00 GHz, 4 cores). As in most VR applications, they used headsets that connect to a powerful personal computer (PC) during their development phase. As a result, the hardware delivers very high frame rates when rendering high-quality OCT data: they report that their application reaches an average of 180 frames per second (fps) while rendering high-quality volumetric data. They used an HTC Vive VR headset to render the data in a virtual reality environment [15]. Pictures of their work are shown in Figs. 12 and 13.

Fig. 12. "Stereoscopic illustration of the VR environment displaying volume OCT data of a peripheral retinal tear" [15]

Fig. 13. "VR CT of a skull with soft tissue rendering and corresponding original CT data with intensity display" [15]

Their development approach renders the original point-cloud data rather than polygons or meshes, which enhances the level of detail and preserves complexity rather than reducing it. The relation between their work and ours may be summarized as follows: both works image medical data in virtual environments. However, a direct comparison between the two is not meaningful, since the projects use different types of hardware: our work was tested on a smartphone, while the work of [15] was tested on a high-end PC. Nevertheless, our work has more potential for future medical applications, offering more interaction with the real world, since we implemented the volume rendering technology in MR rather than VR.

4 Conclusion and Future Work

4.1 Conclusion

In this study, we presented the developed system and software as a new method for the visualization of medical images. The software can deliver better visualizations to surgeons and radiologists, helping to create a better environment for surgeries.

4.2 Future work

The research presented in this study also provides a strong basis for future work in awareness and in volume rendering technologies. One area of future work is uniting the knowledge gained about mixed reality with knowledge about medicine.
Another direction is applying the results studied here to the many real-world situations in which the reconstruction of 3D medical data is an important problem.

5 References

[1] El-Seoud, S. A., Mady, A. S., and Rashed, E. A. "An Interactive Mixed Reality Imaging System for Minimally Invasive Surgeries," in Proceedings of the 7th International Conference on Software and Information Engineering (ICSIE '18), 2018. https://doi.org/10.1145/3220267.3220290
[2] Rowe, S. P. and Fishman, E. K. (2017). Image Processing from 2D to 3D. Springer. https://doi.org/10.1007/174_2017_136
[3] Vavra, P. et al. (2017). Recent development of augmented reality in surgery: a review. Journal of Healthcare Engineering, 2017, 4574172. https://doi.org/10.1155/2017/4574172
[4] De Paolis, L. T. and Aloisio, G. Augmented reality in minimally invasive surgery. In: Advances in Biomedical Sensing, Measurements, Instrumentation and Systems. Springer; 2010, pp. 305-320. https://doi.org/10.1007/978-3-642-05167-8_17
[5] Udupa, J. K. and Goncalves, R. J. (1993). Medical image rendering. American Journal of Cardiac Imaging, 7(3), 154-163.
[6] "What is a Volume Pixel (Volume Pixel or Voxel)? - Definition from Techopedia," Techopedia.com. https://www.techopedia.com/definition/2055/volume-pixel-volume-pixel-or-voxel
[7] "Virtual Reality Vs. Augmented Reality Vs. Mixed Reality," Intel. https://www.intel.com/content/www/us/en/tech-tips-and-tricks/virtual-reality-vs-augmented-reality.html
[8] Ohta, Y. and Tamura, H. (2014). Mixed Reality: Merging Real and Virtual Worlds. Springer.
[9] Billinghurst, M. and Kato, H. (1999). Collaborative mixed reality. In: Proceedings of the First International Symposium on Mixed Reality. https://doi.org/10.1007/978-3-642-87512-0_15
[10] "Volume ray casting," En.wikipedia.org, 2017. https://en.wikipedia.org/wiki/Volume_ray_casting
[11] Möller, T. Direct Volume Rendering. University of Vienna.
[12] Komura, T. (2008). Volume Rendering. Visualization - Lecture 10, The University of Edinburgh.
[13] Bidgood, W. D. et al. (1997). "Understanding and Using DICOM, the Data Interchange Standard for Biomedical Imaging." Journal of the American Medical Informatics Association, 4(3), 199-212. https://doi.org/10.1136/jamia.1997.0040199
[14] Wieczorek, M. et al. (2010). GPU-accelerated rendering for medical augmented reality in minimally-invasive procedures. In: Bildverarbeitung für die Medizin, 574, 102-106.
[15] Maloca, P., de Carvalho, J., Heeren, T., Hasler, P., Mushtaq, F., Mon-Williams, M., Scholl, H., Balaskas, K., Egan, C., Tufail, A., Witthauer, L. and Cattin, P. (2018). High-Performance Virtual Reality Volume Rendering of Original Optical Coherence Tomography Point-Cloud Data Enhanced with Real-Time Ray Casting. Translational Vision Science & Technology, 7(4), 11.

6 Authors

Samir A. El-Seoud is a Professor at the British University in Egypt (BUE). He joined BUE in 2012. Currently, he is the Basic Science Coordinator at the Faculty of Informatics and Computer Science. He has expertise in Algorithms, Computer Architecture and Computer Graphics.

Amr S. Mady currently works at the Department of Computer Science, The British University in Egypt. Amr does research in Artificial Neural Networks, Artificial Intelligence and Algorithms.
Essam A. Rashed is with the Faculty of Informatics and Computer Science at the British University in Egypt (BUE).

Article submitted 2018-11-21. Resubmitted 2019-02-23. Final acceptance 2019-02-24. Final version published as submitted by the authors.