Mobile-based Deep Learning Models for Banana Disease Detection

Sophia L. Sanga, Nelson Mandela African Institute of Science and Technology, Arusha, Tanzania, sophiasanga66@gmail.com
Dina Machuve, Nelson Mandela African Institute of Science and Technology, Arusha, Tanzania, dina.machuve@nm-aist.ac.tz
Kennedy Jomanga, International Institute of Tropical Agriculture, Arusha, Tanzania, k.jomanga@cgirr.org

Corresponding author: Sophia L. Sanga

Abstract—In Tanzania, smallholder farmers contribute significantly to banana production, and Kagera, Mbeya, and Arusha are among the leading regions. However, pests and diseases are a threat to food security. Early detection of banana diseases is important in order to identify them before extensive damage is done to the plants. In this paper, a tool for the early detection of banana diseases using a deep learning approach is proposed. Five deep learning architectures, namely Vgg16, Resnet18, Resnet50, Resnet152, and InceptionV3, were used to develop models for banana disease detection, all achieving high accuracies, ranging from 95.41% for InceptionV3 to 99.2% for Resnet152. InceptionV3 was selected for mobile deployment because it demands much less memory. The developed tool was capable of detecting diseases with a confidence of 99% on leaf images captured in the real environment. This tool will help smallholder farmers conduct early detection of banana diseases and improve their productivity.

Keywords-deep learning; banana diseases; smartphones; early detection; Android-based application

I. INTRODUCTION

Deep learning is a machine learning method in which an artificial neural network is trained to perform a task by repeatedly evaluating its performance and adjusting the internal parameters of the network between evaluations. Deep Convolutional Neural Networks (CNNs) have been used in many fields such as computer vision, speech recognition, face recognition, and natural language processing [1]. In computer vision, deep learning has proved particularly effective for image recognition, object recognition, self-driving automobiles, object detection, and image segmentation. Deep learning has also been introduced as a new method to ensure semantic interoperability in the case of concrete concepts such as images [3]. Applications such as affordable home robots and more intelligent personal assistants will arrive when deep learning technologies become ubiquitous on smartphones [4]. Modern smartphones have high-resolution displays and extensive built-in accessories, such as advanced HD cameras and considerable computing power [5]. Using deep learning for crop disease detection and smartphone applications is an active research area [4, 5], and deep learning has proved to be a successful way for smartphones to assist in disease detection based on leaf images.

In order to improve banana yield, various efforts have been made to provide information regarding disease management and to develop an early warning system that helps smallholder farmers and extension officers around the Arumeru district make the right decisions regarding disease management and control [8]. Deep learning techniques were used in [9] for the detection of cassava diseases using images captured from the real environment in Tanzania.
In that study [9], a deep CNN was used to train a model identifying two types of pest damage and three diseases. The best accuracies of the trained model were 98% for Brown Leaf Spot (BLS), 96% for Cassava Mosaic Disease (CMD), 96% for Red Mite Damage (RMD), 98% for Cassava Brown Streak Disease (CBSD), and 95% for Green Mite Damage (GMD). The best model scored an accuracy of 93% on a dataset that was not used during the training process. Moreover, the method was validated in the field with a mobile phone running the TensorFlow Android Inference Interface.

Mobile penetration in Tanzania had reached 90% by June 2019, with more than 8 million people using mobile phones. This massive use of smartphones is one of the advantages that will help smallholder farmers in early disease detection. Our aim was to develop a tool that would help smallholder farmers and extension officers in the real-time detection of banana diseases, because the tools developed in Tanzania so far are limited to the detection of cassava diseases only. In order to obtain an optimal yield at harvest, disease control and management are necessary, hence the need for the application of artificial intelligence systems for disease detection, control, and management [10]. Regarding the early detection of banana diseases, there is a need for a supportive tool that is inexpensive, automatic, user-friendly, fast, and operates in real time. Such a tool will be of great importance in banana production [11]. The developed system is simple to use and easily available through a simple mobile application. It is thus a valuable tool for smallholder farmers and extension officers in Tanzania and elsewhere, who lack proper tools for the detection of banana diseases.

To train our model, we evaluated the applicability of transfer learning from a deep CNN [12]. Due to its complexity, a CNN can take weeks to train fully on millions of image samples. To avoid this challenge, transfer learning techniques were used, in which a model developed for one task is reused as the starting point for a model on a second task [13, 14]. Previous research has shown that transfer learning is effective for diverse applications and has much lower computational requirements than learning from scratch [15, 16]. The InceptionV3 architecture was therefore selected as the deep learning tool because it combines very good performance with a low computational cost [12]. The software proposed for mobile development was an Android application. Android is a mobile operating system based on the Linux kernel, developed by Google primarily for touchscreen mobile devices such as tablets and smartphones, with specialized user interfaces for wristwatches (Android Wear), televisions (Android TV), and cars (Android Auto) [17].

II. MATERIALS AND METHODS

A. Deep Convolutional Neural Network Models

Deep CNNs have been used in many fields such as computer vision, speech recognition, face recognition, and natural language processing [1]. In computer vision, various CNN architectures have been applied effectively to complicated visual imagery tasks [5]. Five CNN architectures were tested on the problem of early detection of banana diseases from images of banana leaves [12, 18].
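For illustration, a minimal Keras (TensorFlow backend) sketch of this transfer learning setup is given below. It is not the authors' published implementation: the pooling layer, dense layer size, and dropout rate are assumptions, while the ImageNet-pretrained InceptionV3 base, the 299×299 input size, and the three-class output follow the descriptions in this paper.

# Transfer learning sketch (illustrative): reuse an ImageNet-pretrained InceptionV3
# as a frozen feature extractor and attach a new 3-class classification head.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

NUM_CLASSES = 3          # black Sigatoka, Fusarium wilt race 1, healthy
IMG_SIZE = (299, 299)    # InceptionV3 input size used in this work

# Convolutional base with ImageNet weights, without the original 1000-class classifier.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze pre-trained weights; only the new head is trained

# New classification head (layer sizes and dropout rate are illustrative assumptions).
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation="relu")(x)
x = Dropout(0.5)(x)
outputs = Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs=base.input, outputs=outputs)
model.summary()

The same pattern applies to Vgg16 and the Resnet variants by swapping the base model and adjusting the input size to the value listed in Table I.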
To train our pre-trained models we used TensorFlow, as it allows users to train models on both Graphics Processing Units (GPUs) and Central Processing Units (CPUs) [19]. To train Vgg16, Resnet18, Resnet50, and Resnet152 we used a laptop with the following specifications: Windows 10 Pro N 64-bit, Intel(R) Core(TM) i5-4300U CPU @ 1.90 GHz 2.50 GHz, and 4 GB RAM. The language used during training was Python. The implementation was carried out with the Keras library, which handles neural network models well and achieves high performance in numerical computations, with TensorFlow as the backend. Training was run on Google Colab with the following specifications: Python 3 runtime type, GPU hardware accelerator, and 20 MB notebook size. To train InceptionV3, a computer lab machine with the following specifications was used: device name Cocselab, 251.8 GB memory, Intel Xeon(R) CPU E5-2620 0 @ 2.00 GHz × 12, NVD9 graphics, GNOME 3.32.1, 64-bit OS, and a 1.0 TB disk.

B. Dataset Training and Testing

A Python script was used to augment our dataset to 3000 images from each class. The augmentation techniques applied by the image data generator were horizontal flipping, a zoom range of 0.2, rotation (right-bottom and left-bottom), a shear range of 0.1, cropping, and rescaling each image by a factor of 1/255. In total, 27,000 banana leaf images of three classes (black Sigatoka, Fusarium wilt race 1, and healthy leaves) were acquired (see Figure 1) from images collected in the Arusha and Mbeya regions of Tanzania. The leaf images were taken under real conditions in the cultivation fields and were captured with a simple camera, which also captured effects such as shade and varying backgrounds. To avoid bias in our pre-trained models, the data were randomly shuffled before splitting. The image dataset was then split into three groups: 80% for training, 15% for validation, and 5% for testing. The training and validation sets were used during model selection, while the test set was used to evaluate the generalization performance (performance verification stage) of the selected model. A sketch of this augmentation and training configuration is given below, after the results in Section III.A.

Fig. 1. Dataset samples: (a) Fusarium wilt race 1, (b) black Sigatoka, (c) healthy plant

III. RESULTS

A. Model Results

The selected models were trained with the parameters presented in Table I. These values were selected after several trials because they gave the best results during training. Table II shows the classification results from the validation, testing, and training of the three classes in our dataset. Regarding the performance of our models on the test set, Resnet152 achieved the highest accuracy of 99.8%, while InceptionV3 achieved the lowest, which, however, was still quite high. All models achieved equal or better performance when tested on the original banana leaf images. Figure 2 shows that there were no overfitting problems during the training process.

TABLE I. PARAMETERS USED
Parameter                | Value
Optimizer                | SGD
Batch size               | 4
Learning rate            | 1e-3
Weight decay             | 1e-9
Model architecture       | Resnet, InceptionV3
Momentum                 | 0.9
Image size (Resnet)      | 244×244 pixels
Image size (InceptionV3) | 299×299 pixels

TABLE II. MODEL PERFORMANCE COMPARISON
Model       | Validation accuracy (%) | Test accuracy (%) | Epochs | Time (/epoch) | Loss
VGG16       | 98.4                    | 98.7              | 35     | 30 min 13 s   | 0.197
Resnet18    | 98.8                    | 98.8              | 50     | 8 min 35 s    | 0.256
Resnet50    | 98.8                    | 98.9              | 50     | 8 min 35 s    | 0.180
Resnet152   | 99.2                    | 99.8              | 50     | 8 min 35 s    | 0.0539
InceptionV3 | 95.4                    | 95.5              | 1500   | 2 min 39 s    | 0.1351

Fig. 2. Validation and training graph for the InceptionV3 model
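As a concrete illustration of the data pipeline of Section II.B and the training parameters of Table I, a Keras sketch is given below. It is not the authors' implementation: the directory layout, rotation amount, epoch count, and the compact model rebuild are assumptions, while the batch size, learning rate, momentum, image size, 1/255 rescaling, horizontal flip, zoom range, and shear range are taken from the text and Table I.

# Illustrative sketch of the augmentation and training configuration described in
# Section II.B and Table I (paths, rotation amount, and epoch count are assumptions).
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.preprocessing.image import ImageDataGenerator

BATCH_SIZE = 4          # Table I
IMG_SIZE = (299, 299)   # InceptionV3 input size (Table I)

# Model as sketched in Section II.A, rebuilt compactly here so the example is self-contained.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False
model = Model(base.input, Dense(3, activation="softmax")(GlobalAveragePooling2D()(base.output)))

# Augmentation as described in Section II.B: 1/255 rescaling, horizontal flip,
# zoom range 0.2, shear range 0.1 (the rotation amount here is an assumption).
train_gen = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True,
                               zoom_range=0.2, shear_range=0.1, rotation_range=20)
val_gen = ImageDataGenerator(rescale=1.0 / 255)  # validation images are only rescaled

train_data = train_gen.flow_from_directory("dataset/train",        # hypothetical path
                                           target_size=IMG_SIZE,
                                           batch_size=BATCH_SIZE,
                                           class_mode="categorical")
val_data = val_gen.flow_from_directory("dataset/validation",       # hypothetical path
                                       target_size=IMG_SIZE,
                                       batch_size=BATCH_SIZE,
                                       class_mode="categorical")

# Optimizer settings from Table I: SGD with learning rate 1e-3 and momentum 0.9.
# Table I also lists a weight decay of 1e-9; depending on the Keras version this can be
# passed to the optimizer (weight_decay=) or applied through kernel regularizers.
model.compile(optimizer=SGD(learning_rate=1e-3, momentum=0.9),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(train_data, validation_data=val_data,
                    epochs=50)  # per-architecture epoch counts are listed in Table II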
B. Mobile Deployment

InceptionV3 was chosen for mobile deployment because it demands less memory and has a lower computational cost when deployed in mobile apps than the other tested deep learning models [12, 20]. Even though those models achieved higher accuracy, they require larger memory space for mobile deployment, whereas the performance of InceptionV3 was satisfactory for our needs, so it was chosen as the solution combining low computational requirements with a high accuracy of 95.5%. TensorFlow was used because it is a deep learning framework that offers APIs for mobile deployment on Android- and iOS-based smartphones [21], and it was used to train the InceptionV3 model for banana disease detection. Figure 3 shows how the different activities are conducted by users during disease detection, while Figures 4 and 5 show screenshots taken during the validation of our model. The InceptionV3 model showed good results in the early detection of banana diseases, with an accuracy of 99% for images captured in the real environment, as seen in Figure 5, and an accuracy of at least 71% for images uploaded from the internet. When tested again in the real environment, it achieved an accuracy of 95%, as shown in Figure 4.

Fig. 3. Activity diagram of users in disease detection

Fig. 4. Screenshots of the mobile application for images uploaded and images captured from the real environment, with confidence percentage

Fig. 5. Screenshots of the mobile application for images captured from the real environment only, with confidence percentage

IV. CONCLUSION

The capability of deep learning models and transfer learning techniques was evaluated by deploying an offline mobile application for the early detection of banana diseases. The performance of the deployed tool in detecting banana diseases was assessed on the three datasets that were our main target. A rule was set that results must have a detection confidence greater than 70%; otherwise, the app recommends capturing a clearer image of the leaf. Real-time early detection of banana fungal diseases is very important in improving banana yields, which are largely affected by such diseases. It was shown that, with the use of machine learning techniques via a mobile phone application, these diseases can be detected easily. The developed mobile application is called the FUSI (Fusarium Sigatoka) Scanner, it works under Android technology, and it detects the presence of the two most common banana fungal diseases.

REFERENCES
[1] A. Voulodimos, N. Doulamis, A. Doulamis, E. Protopapadakis, "Deep learning for computer vision: A brief review", Computational Intelligence and Neuroscience, Vol. 2018, Article ID 7068349, 2018
[2] W. Liu, Z. Wang, X. Liu, N. Zeng, Y. Liu, F. E. Alsaadi, "A survey of deep neural network architectures and their applications", Neurocomputing, Vol. 234, pp. 11-26, 2017
[3] N. E. A. Amrani, O. E. K. Abra, M. Youssfi, O. Bouattane, "A novel deep learning approach for semantic interoperability between heterogeneous multi-agent systems", Engineering, Technology & Applied Science Research, Vol. 9, No. 4, pp. 4566-4573, 2019
[4] H. A. Pierson, M. S. Gashler, "Deep learning in robotics: A review of recent research", Advanced Robotics, Vol. 31, No. 16, pp. 821-835, 2017
[5] S. P. Mohanty, D. P. Hughes, M. Salathe, "Using deep learning for image-based plant disease detection", Frontiers in Plant Science, Vol. 7, Article ID 1419, 2016
[6] Y. Deng, "Deep learning on mobile devices: A review", Proceedings of the SPIE, Vol. 10993, Article ID 109930A, 2019
[7] K. P. Ferentinos, "Deep learning models for plant disease detection and diagnosis", Computers and Electronics in Agriculture, Vol. 145, pp. 311-318, 2018
[8] K. Ramadhani, D. Machuve, K. Jomanga, "Identification and analysis of factors in management of banana fungal diseases: Case of Sigatoka (Mycosphaerella fijiensis Mulder) and Fusarium (Fusarium oxysporum f. sp. cubense (Foc)) diseases in Arumeru district", Journal of Biodiversity and Environmental Sciences, Vol. 11, No. 1, pp. 69-75, 2017
[9] A. Ramcharan, K. Baranowski, P. McCloskey, B. Ahmed, J. Legg, D. Hughes, "Using transfer learning for image-based cassava disease detection", Frontiers in Plant Science, Vol. 8, Article ID 1852, 2017
[10] N. C. Eli-Chukwu, "Applications of artificial intelligence in agriculture: A review", Engineering, Technology & Applied Science Research, Vol. 9, No. 4, pp. 4377-4383, 2019
[11] H. A. Hiary, S. B. Ahmad, M. Reyalat, M. Braik, Z. ALRahamneh, "Fast and accurate detection and classification of plant diseases", International Journal of Computer Applications, Vol. 17, No. 1, pp. 31-38, 2011
[12] C. Szegedy, S. Ioffe, V. Vanhoucke, A. A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning", Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, USA, February 4-9, 2017
[13] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, C. Liu, "A survey on deep transfer learning", in: Artificial Neural Networks and Machine Learning - ICANN 2018, Vol. 11141, Springer, 2018
[14] K. Gopalakrishnan, S. K. Khaitan, A. Choudhary, A. Agrawal, "Deep convolutional neural networks with transfer learning for computer vision-based data-driven pavement distress detection", Construction and Building Materials, Vol. 157, pp. 322-330, 2017
[15] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, F. F. Li, "Large-scale video classification with convolutional neural networks", IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, June 23-28, 2014
[16] J. Shijie, J. Peiyi, H. Siping, S. Haibo, "Automatic detection of tomato diseases and pests based on leaf images", Chinese Automation Congress, Jinan, China, October 20-22, 2017
[17] K. Divya, S. V. Krishnakumar, "Comparative analysis of smart phone operating systems Android, Apple iOS and Windows", International Journal of Scientific Engineering and Applied Science, Vol. 2, No. 2, pp. 432-438, 2016
[18] K. He, X. Zhang, S. Ren, J. Sun, "Deep residual learning for image recognition", IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, June 27-30, 2016
[19] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, X. Zheng, "TensorFlow: A system for large-scale machine learning", 12th USENIX Symposium on Operating Systems Design and Implementation, Savannah, USA, November 2-4, 2016
[20] C. Meng, M. Sun, J. Yang, M. Qiu, Y. Gu, "Training deeper models by GPU memory optimization on TensorFlow", 31st Conference on Neural Information Processing Systems, Long Beach, USA, December 4-9, 2017
[21] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, "TensorFlow: Large-scale machine learning on heterogeneous distributed systems", available at: https://arxiv.org/pdf/1603.04467.pdf, 2016

AUTHORS PROFILE

Sophia Sanga is an MSc student in Information and Communication Science and Engineering at the Nelson Mandela African Institute of Science and Technology (NM-AIST). She completed her BSc of Engineering in Computer Science and Engineering at St. Joseph University, Tanzania. Her focus is on helping smallholder banana farmers, and she is currently researching the development of early detection tools for banana diseases.

Dina Machuve is a lecturer and researcher at the Nelson Mandela African Institute of Science and Technology (NM-AIST) in Tanzania. She holds a PhD in Information and Communication Science and Engineering from NM-AIST (2016), an MSc in Electrical Engineering from Tennessee Technological University, USA (2008), and a BSc in Electrical Engineering from the University of Dar es Salaam (2001). Her research interests are Data Science, Bioinformatics, Agriculture Informatics on food value chains, and STEM education in schools.

Kennedy Elisha Jomanga holds an MSc degree in Plant Protection and a BSc degree in Agronomy. He is currently working with IITA as a Plant Pathologist and Senior Research Associate. He worked as an Assistant Area Manager and Agriculture Manager for 5 years and is a former World Vision food security coordinator for seven regions in Tanzania. He has also worked as an Assistant Lecturer for more than 4 years at St. John's University of Tanzania, where he founded and tutors Certificate and Diploma courses in General Agriculture. He has attended several trainings on management skills with the Robert Antonio Sugarcane Research Institute of Mauritius and was involved in a survey of plant disease-causing pathogens in Kilimanjaro, Arusha, and Mbeya.