Journal of Applied Engineering and Technological Science Vol 4(2) 2023: 885-894

AUTOMATIC CLASSIFICATION OF DESMIDS USING TRANSFER LEARNING

Rajmohan Pardeshi1*, Prapti Deshmukh2
Dept of Computer Science and Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, Maharashtra, India1
GY Pathrikar College of Computer Science and Information Technology, MGM University, Aurangabad, Maharashtra, India2
madhurajmohan1@gmail.com
Received: 26 March 2023, Revised: 09 May 2023, Accepted: 09 May 2023
*Corresponding Author

ABSTRACT
This research paper presents a novel approach to classifying microscopic images of desmids using transfer learning and convolutional neural networks (CNNs). The purpose of this study was to automate the tedious task of manually classifying microscopic algae and to improve our understanding of water quality in aquatic ecosystems. To accomplish this, we used transfer learning to fine-tune 13 pre-trained CNN models on a dataset of five categories of desmids. We evaluated the performance of our models using several metrics, including accuracy, precision, recall, and F1-score. Our results show that transfer learning can significantly improve the classification accuracy of microscopic images of desmids, and that efficient CNN models can further enhance performance. The practical implications of this research include a more efficient and accurate method for classifying microscopic algae and assessing water quality. The theoretical implications include a better understanding of the application of transfer learning and CNNs in image classification. This research contributes to both theory and practice by providing a new method for automating the classification of microscopic algae and improving our understanding of aquatic ecosystems.

Keywords: Deep Learning, Transfer Learning, Algae Classification, Desmids Classification, CNN Comparative Study

1.
Introduction
Freshwater wetlands constitute vital ecosystems, where benthic, attached microbial communities, such as desmids, contribute to primary productivity, nutrient cycling, and substrate stabilization (Domozych & Domozych, 2008). Desmids are unicellular, photosynthetic algae found in freshwater environments. This complex group of microorganisms, which thrives in aquatic ecosystems worldwide, has long intrigued biologists (Prescott et al., 1948). These microorganisms display an extensive variety of forms and hold potential applications across numerous sectors. Researchers continue to investigate ways to leverage these microscopic organisms for human welfare, including as alternative fuel sources, food, and more. The traditional taxonomy of desmids, based on morphology, suffers from an overabundance of synonyms and a high rate of splitting. Unfortunately, sexual reproduction, essential for applying the biological species concept, is relatively rare in this algal group, and for many species no sexual stages have been observed (Coesel et al., 1996). Several methods are employed to monitor algal blooms, including the use of aircraft, satellites, or drones to acquire hyperspectral or multispectral images, which facilitate the identification of algal bloom events over expansive areas (Park et al., 2019; Goldberg et al., 2016; Kudela et al., 2015; Lekki et al., 2019). Continuous surveillance of undesirable algal blooms in ponds, water channels, or freshwater reservoirs is crucial to ensure timely and appropriate action to maintain drinking water quality. The conventional method of visually analyzing algal blooms via a microscope is labor-intensive, economically impractical, and cumbersome. An automated system leveraging state-of-the-art object detection algorithms provides a more efficient solution for real-time monitoring of algal blooms in water bodies (Ali et al., 2022).
In recent years, computer vision techniques, especially those grounded in deep learning, have been applied to image classification and object detection problems (LeCun et al., 2015). Deep learning approaches often require large datasets to train classifiers, but acquiring sufficient data is not always possible. In such instances, transfer learning can be utilized to apply knowledge gained from one task to solve another (George Karimpanal & Bouffanais, 2018). A significant number of studies in the literature have focused on assessing individual CNN architectures. Considering the rapid development of their many variants, a comparative study of these CNN architectures is both timely and essential. This paper presents a comparative analysis of 13 state-of-the-art CNN architectures for desmid classification. These initial findings aim to guide emerging researchers in devising innovative algorithms, thereby fostering research and development in the domain of automated microscopic algae classification, including desmids.

2. Literature Review
A comprehensive search of scientific databases, including PubMed, ScienceDirect, and Google Scholar, was performed to identify relevant articles published between 2020 and 2023. The search was conducted using keywords such as algae, classification, machine learning, image processing, and deep learning. The inclusion criteria were articles published in English that focused on the classification of algae using techniques including machine learning and image processing. Exclusion criteria were articles not relevant to the research question, articles not published in English, and articles lacking authentic data sources.

Machine Learning Approaches for Microalgae Classification: Several studies have employed machine learning approaches for microalgae classification.
Agarwal et al. (2023) used morphological features extracted from blobs with a Random Forest classifier to classify two species of microalgae imaged by flow cytometry, achieving an accuracy of 95%. Adejimi et al. (2023) used hyperspectral signature-based features with a Support Vector Machine (SVM) classifier to classify seven genera of algae. Gerdan Koc et al. (2023) utilized cultivation parameters for the classification of three algae types, achieving an accuracy of 93.11%. Pardeshi et al. (2021) classified four classes of Scenedesmus based on coenobium, obtaining an accuracy of 96%.

Deep Learning Approaches for Microalgae Classification: In recent years, deep learning techniques, particularly Convolutional Neural Networks (CNNs), have gained popularity in microalgae classification due to their ability to learn complex features from image data. Gaur et al. (2023) compared CNN architectures, including MobileNet, VGG, AlexNet, and ResNext, for the classification of 15 categories of microalgae, achieving accuracies ranging from 40% to 99%. Yuan et al. (2023) developed the Algal Morphology Deep Neural Network (AMDNN) model to classify 25 algal species, obtaining an accuracy of 99.87%. Several studies have employed the YOLO (You Only Look Once) framework for algae classification, with accuracies ranging from 41% (Park et al., 2021) to 91% (Tangsuksant & Sarakon, 2023; Ali et al., 2022).

Comparisons and Evaluations of Techniques: Various studies have conducted comparative evaluations of different classification techniques. Chong et al. (2023) provided a review of various techniques for microalgae classification, while Yang et al. (2021) evaluated different CNN models for harmful versus harmless algae classification, achieving an accuracy of 94.8%. Gong et al. (2023) compared various YOLO architectures for the detection of 54 genera of algae, reporting a mean average precision (mAP) of 50.5%.
Khan et al. (2022) detected harmful algal blooms with the VGG-16, AlexNet, GoogleNet, and ResNet-18 CNN architectures, achieving an accuracy of 97.10%.

Review Studies and Other Techniques: Some studies have focused on reviewing existing methods and techniques for microalgae classification. Barsanti et al. (2021) provided a review of water monitoring using digital microscopy for the identification and classification of microalgae. Chong et al. (2022) reviewed image processing algorithms for automatic microalgae classification. Guterres et al. (2022) developed an image data integration pipeline for phytoplankton classification. Other studies have explored different techniques, such as the use of Inception V3 for categorizing eight bloom-forming algae species (Gaur et al., 2021), obtaining an accuracy of 99%. Liu et al. (2021) classified red tide algae images of eight different types with a reported accuracy of 99%. Guo et al. (2021) used Imaging FlowCytobot (IFCB) images of 15 categories of harmful algal blooms for classification, achieving an accuracy of 98%. Additional studies have focused on specific applications and techniques: Piazza et al. (2021) classified marine coralline algae using scanning electron microscopy (SEM) images with a CNN, obtaining an accuracy of 80%. Liao et al. (2022) analyzed Scenedesmus quadricauda cultures using an Internet of Things (IoT) approach. Reimann et al. (2020) used fluorescent image features to classify living and dead microalgae of Chlorella sp., achieving an accuracy of 95%. Pant et al. (2020) classified seven Pediastrum species using the ResNext model, obtaining an accuracy of 98.45%. Qian et al. (2020) employed a Faster R-CNN-based detection technique for nine genera of algae, reporting a mAP of 74.64%. Sonmez et al. (2022) compared AlexNet and other architectures for the classification of two species of microalgae, achieving an accuracy of 99.66%.
Gaur et al. (2022) focused on the classification of blue-green algae, obtaining an accuracy of 99.16%. Xu et al. (2022) applied a CNN architecture for the classification of 13 categories of algae, achieving an accuracy of 93%. Luo et al. (2023) developed a new technique using Landsat imagery for the classification of algal blooms in eutrophic shallow lakes, obtaining an accuracy of 84.49%. The classification of microalgae has evolved significantly in recent years, with machine learning and deep learning techniques showing great potential in achieving high classification accuracies. While CNN architectures, particularly YOLO, have demonstrated promising results, there is still room for improvement and for the exploration of other techniques. Further research is required to optimize and compare these approaches for different microalgae classification tasks, particularly to address the challenges of varying image quality and the vast diversity of microalgae species. Additionally, the integration of different data types, such as hyperspectral signatures and cultivation parameters, could further enhance classification performance. Future research should continue investigating novel techniques and refining existing methods to advance the field of microalgae classification. In this paper, we conducted a comparative analysis of 13 different CNN architectures for the classification of desmids into five categories. This study contributes to the growing body of research on microalgae classification by specifically targeting desmids, a group of microalgae known for their intricate cell shapes and unique morphological features. By evaluating the performance of various CNN architectures in classifying desmids, this study provides valuable insights into the effectiveness of deep learning techniques in addressing the challenges posed by the morphological diversity of desmids.
The results of this study can serve as a reference point for future research aimed at improving classification performance and developing more robust techniques for the analysis of desmids and other microalgae species.

3. Research Methods
Convolutional Neural Networks (CNNs): Convolutional Neural Networks (CNNs) are a class of deep neural networks that have become popular for image recognition and classification tasks. CNNs consist of convolutional layers, pooling layers, and fully connected layers that work together to extract relevant features from input images. The convolutional layers use filters to extract features from the input images, while the pooling layers downsample the feature maps and reduce computational complexity. Fully connected layers classify the images based on the features extracted by the convolutional and pooling layers. CNNs are known for their ability to learn complex feature representations from raw image data and have achieved state-of-the-art results on several benchmark datasets for image recognition. Examples of CNN architectures include AlexNet, VGG-16, and ResNet. We chose transfer learning for desmid image classification because it improves performance with limited data, accelerates training, enhances generalization, and reduces overfitting, making it a highly effective and efficient approach for our research. Transfer learning (Yang et al., 2022) is a widely employed machine learning technique that utilizes the knowledge acquired by a pre-trained Convolutional Neural Network (CNN) model to enhance the performance of a new model on a related task, such as the classification of microscopic images of desmids. In this process, the features learned by the pre-trained model are reused and fine-tuned on a new dataset to adapt the model to the new task. The mathematical formulation for transfer learning is concisely presented in the following paragraph.
Let M be a pre-trained model with parameters θ, trained on a source task Ts. The model M comprises a feature extractor f(.) and a classifier g(.), such that for an input x, the output produced by the pre-trained model M is given by:

M(x; θ) = g(f(x; θ_f); θ_g)

where θ_f and θ_g represent the parameters of the feature extractor and the classifier, respectively. Consider a new target task Tt, which is the classification of a desmids dataset. Although different from the source task Ts, Tt shares certain common features with it. The objective is to train a new model M′ with parameters θ′ that performs well on the target task Tt. Transfer learning initializes the new model M′ with the pre-trained model M and fine-tunes it on the target task Tt, as follows:

M′(x; θ′) = g′(f′(x; θ_f′); θ_g′)

Here, θ_f′ and θ_g′ denote the parameters of the feature extractor and the classifier of the new model M′, respectively. Transfer learning initializes the parameters of M′ with those of the pre-trained model M, i.e., θ_f′ = θ_f and θ_g′ = θ_g. Subsequently, the new model M′ is fine-tuned on the target task Tt using the desmids dataset. The process can be visualized through Fig. 1. The procedure for classification of microscopic images of desmids using transfer learning involves the following steps:
• Preprocessing: In our research, we first preprocess the image data by resizing, cropping, and normalizing the images to enhance the quality and consistency of the data. This step is crucial for training a transfer learning model, as it requires input images in a specific format.
• Feature Extraction: In the second step, we utilize a pre-trained convolutional neural network (CNN) model to extract high-level features from the preprocessed images of desmids.
Having already learned to extract relevant features from images in a general context, the pre-trained model can be employed for the new classification task. The CNN model's output consists of feature maps that represent the crucial features of each microscopic image of desmids.
• Fine-tuning: In the third step, we fine-tune the pre-trained CNN model for the specific image classification task at hand, the classification of microscopic images of desmids. We use the feature maps obtained from the pre-trained model as input to a new classifier, which is trained to classify the images into different categories of desmids. In our case, we employ a fully connected neural network as the classifier.
• Evaluation: In the final step, after fine-tuning the pre-trained model on the new dataset of microscopic images of desmids, we evaluate the performance of the transfer learning model using a test dataset. To determine the model's accuracy, we compare the predicted class labels to the true class labels of the test images in the desmids dataset.

Fig. 1. Transfer Learning Applied to Desmids Classification

4. Results and Discussions
Dataset: We used five types of desmid genera, with a total of 88 microscopic images of varied angles, sizes, and shapes, for automatic desmid classification, namely Closteriaceae (27), Desmidiaceae (23), Gonatozygon (13), Mesotaeniaceae (33), and Peniaceae (19). This versatile collection of microscopic images, obtained from http://www.digicodes.info/, was used for the experiments and evaluation of our method. Samples of the microscopic images are shown in Fig. 2. To enlarge the dataset, we applied the image augmentation techniques given in (Wang et al., 2022).

Fig. 2.
Samples of microscopic images of desmids from our dataset: a) Closteriaceae, b) Desmidiaceae, c) Gonatozygon, d) Mesotaeniaceae, e) Peniaceae

Experiments: In this study, we evaluated the performance of 13 different deep learning models on the Desmids Dataset, a collection of images featuring desmids, a group of single-celled algae. The models assessed include AlexNet (Krizhevsky et al., 2017), DenseNet (Huang et al., 2017), EfficientNet (Tan & Le, 2019), GoogleNet (Szegedy et al., 2015), MnasNet (Tan et al., 2019), RegNet (Radosavovic et al., 2020), ResNet (He et al., 2016), ResNext (Xie et al., 2017), ShuffleNet (Zhang et al., 2018), SqueezeNet (Iandola et al., 2016), SwinNet (Liu et al., 2022), VGGNet (Simonyan & Zisserman, 2014), and ViTNet (Kim et al., 2022). Our goal was to determine the effectiveness of each model for classifying desmids based on their morphological features. We trained each model using the same dataset and calculated their performance metrics, including precision, recall, F1-score, accuracy, and training time. The results obtained are given in Table 1. This paper thereby provides a broad evaluation of major convolutional neural network (CNN) models presented in the literature up to 2022.
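The transfer-learning recipe of Section 3 — keep the pre-trained feature extractor f(·; θ_f) frozen and fine-tune only the classifier head g′(·; θ_g′) — can be illustrated with a minimal NumPy sketch. This is not our torchvision training pipeline: the frozen random projection merely stands in for pre-trained convolutional features, and the data, dimensions, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor f(x; theta_f): weights are FROZEN,
# standing in for convolutional features learned on a source task.
W_f = rng.normal(size=(64, 32)) / 8.0       # theta_f (kept fixed)

def f(X):
    return np.maximum(X @ W_f, 0.0)         # ReLU feature maps

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

# Toy target task: 5 desmid-like classes, 200 synthetic 64-dim "images".
X = rng.normal(size=(200, 64))
y = rng.integers(0, 5, size=200)
Y = np.eye(5)[y]                            # one-hot labels

# New classifier head g'(.; theta_g'), fine-tuned on the target task only.
W_g = np.zeros((32, 5))                     # theta_g' (trainable)
H = f(X)                                    # features computed once: f is frozen

losses = []
for _ in range(200):                        # gradient descent on cross-entropy
    P = softmax(H @ W_g)
    losses.append(-np.log(P[np.arange(200), y]).mean())
    W_g -= 0.1 * H.T @ (P - Y) / 200        # only theta_g' is updated

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because θ_f stays fixed, only the small head is optimized, which is why fine-tuning remains feasible on a dataset of our size.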
Table 1 - Results based on transfer learning for classification of microscopic images of desmids

Sl.No  Model Name    Precision  Recall  F1-Score  Accuracy  Training Time
01     AlexNet       100%       99%     99%       100%      14M 41S
02     DenseNet      100%       100%    100%      100%      38M 15S
03     EfficientNet  97%        97%     97%       97%       16M 2S
04     GoogleNet     97%        97%     97%       97%       15M 56S
05     MnasNet       96%        94%     94%       95%       15M 38S
06     RegNet        99%        99%     99%       99%       38M 32S
07     ResNet        96%        96%     96%       97%       40M 2S
08     ResNext       92%        87%     89%       91%       43M 49S
09     ShuffleNet    97%        96%     97%       97%       24M 11S
10     SqueezeNet    100%       100%    100%      100%      39M 44S
11     SwinNet       98%        98%     98%       99%       25M 38S
12     VGGNet        99%        98%     98%       99%       33M 32S
13     ViTNet        100%       99%     99%       100%      46M 11S

The contribution of this work lies in its thorough analysis and comparison of the performance of these models on the desmids dataset, which allows for an informed understanding of their strengths and weaknesses. Overall, this study serves as a valuable resource for researchers and practitioners in the field of deep learning, providing insights into state-of-the-art CNN models and their potential applications. The results presented in Table 1 demonstrate the accuracies of the various convolutional neural network (CNN) models in classifying microscopic images of desmids. AlexNet, DenseNet, SqueezeNet, and ViTNet achieved perfect accuracy of 100%; RegNet, SwinNet, and VGGNet performed very well at 99%; EfficientNet, GoogleNet, ResNet, and ShuffleNet achieved 97%; MnasNet achieved 95%; and ResNext, at 91%, was the lowest. Based on these results, we can conclude that AlexNet, DenseNet, SqueezeNet, and ViTNet are the top-performing models, with accuracies above 99%. These models use various techniques, such as attention mechanisms, advanced activation functions, and depthwise convolutions, to improve their performance.
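The precision, recall, and F1-score columns in Table 1 are averages over the five desmid classes. The paper does not spell out the averaging, but macro-averaged metrics computed from a confusion matrix — one common convention — can be sketched in NumPy as follows (the three-class example at the end is purely illustrative):

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Macro-averaged precision, recall, F1 and overall accuracy,
    computed from a confusion matrix C (rows: true, cols: predicted)."""
    C = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    tp = np.diag(C)                       # correct predictions per class
    col = C.sum(axis=0)                   # predicted counts per class
    row = C.sum(axis=1)                   # true counts per class
    precision = np.divide(tp, col, out=np.zeros_like(tp), where=col > 0)
    recall = np.divide(tp, row, out=np.zeros_like(tp), where=row > 0)
    denom = precision + recall
    f1 = np.divide(2 * precision * recall, denom,
                   out=np.zeros_like(tp), where=denom > 0)
    return precision.mean(), recall.mean(), f1.mean(), tp.sum() / C.sum()

# Tiny worked example with three classes:
p, r, f1, acc = macro_metrics([0, 0, 1, 1, 2], [0, 1, 1, 1, 2], n_classes=3)
```

Note that with macro averaging every class carries equal weight, which matters on an imbalanced dataset such as ours, where class sizes range from 13 to 33 images.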
The middle-performing models, including RegNet, ResNet, GoogleNet, and ShuffleNet, achieved accuracies between 95% and 99% and are widely used due to their good performance and relatively simple architectures. ResNext and MnasNet were the lower-performing models, achieving accuracies between 90% and 95%, but they can still be useful in certain scenarios or when combined with other models. To facilitate comprehension of our results, we have provided Fig. 3, which displays the confusion matrix obtained by AlexNet, the best-performing model in all aspects, including accuracy and time complexity. Additionally, we have included Fig. 4 and Fig. 5, which depict the accuracy and loss graphs, respectively, for both training and validation; from these it can be clearly seen that the model avoids overfitting. ViTNet and SwinNet are relatively new models that have shown promising results in their respective fields. ViTNet achieves high accuracy with fewer parameters, while SwinNet was originally designed for RGB-D salient object detection. Overall, these results provide valuable insights into the performance of different CNN models for the classification of microscopic images of desmids, which can be useful for researchers in this field.

Fig. 3. Confusion Matrix given by AlexNet model for classification of desmids

In summary, the CNN models described in this study exhibit varying accuracies and are optimized to perform well in diverse contexts. Some of these models are designed to be highly efficient, performing well on mobile devices, while others prioritize accuracy, necessitating greater computational resources. The selection of the most appropriate model for a given use case is contingent upon specific requirements and available resources.
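With only 88 base images, regularization-oriented augmentation is one practical route to reducing overfitting further. A minimal NumPy sketch of mixup, one such technique, which blends random pairs of training examples and their one-hot labels with a Beta-distributed weight (the array shapes and alpha value below are illustrative assumptions, not settings used in our experiments):

```python
import numpy as np

def mixup(X, Y, alpha=0.2, rng=None):
    """Mixup: form convex combinations of random pairs of examples and
    their one-hot labels, with mixing weight lam ~ Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(X))            # random partner for each example
    X_mix = lam * X + (1.0 - lam) * X[idx]
    Y_mix = lam * Y + (1.0 - lam) * Y[idx]
    return X_mix, Y_mix

# Example: 8 flattened "images" with 5 desmid classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))
Y = np.eye(5)[rng.integers(0, 5, size=8)]    # one-hot labels
X_mix, Y_mix = mixup(X, Y, rng=rng)
```

The blended labels are soft (they sum to 1 but are no longer one-hot), so the model is trained against a cross-entropy target that discourages overconfident memorization of individual images.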
Based on considerations such as simplicity of architecture, required training time, and computational resources, our findings suggest that AlexNet may represent the optimal choice for microscopic image classification.

5. Conclusion
In conclusion, we have analyzed the performance of several popular CNN models based on their accuracy scores for the task of automatic desmid classification. AlexNet, DenseNet, SqueezeNet, and ViTNet are the top-performing models with perfect accuracy, while RegNet, SwinNet, and VGGNet also perform very well with an accuracy of 99%. EfficientNet, GoogleNet, ResNet, and ShuffleNet have an accuracy of 97%, and MnasNet has an accuracy of 95%. ResNext has an accuracy of 91%, which is comparatively lower than the other models on the list. It is important to note that accuracy is not the only metric to consider when selecting a CNN model for a particular task; other factors such as speed, memory usage, and ease of training are also important. Nonetheless, the performance of these models in terms of accuracy provides a useful benchmark for evaluating their effectiveness in a given task and will be useful for the growth of research and development in the field of microscopic image analysis in general and the classification of algae in particular. The performance of these CNN models could be further improved by exploring new training techniques and regularization methods. For example, recent research has shown that techniques such as mixup, cutout, and stochastic depth can improve the performance of CNN models on image classification tasks. Overall, this comparative analysis of CNN models provides a useful starting point for future research, and there are many opportunities to improve the performance of these models and develop new architectures that better meet the needs of microscopic algae classification applications.

Fig. 4. Graph Showing the Training Vs. Validation Accuracy for Desmids Classification

Fig. 5.
Graph Showing the Training Vs. Validation Loss for Desmids Classification.

References
Adejimi, O. E., Sadhasivam, G., Schmilovitch, Z., Shapiro, O. H., & Herrmann, I. (2023). Applying hyperspectral transmittance for inter-genera classification of cyanobacterial and algal cultures. Algal Research, 71, 103067. https://doi.org/10.1016/j.algal.2023.103067
Agarwal, V., Chávez-Casillas, J., & Mouw, C. B. (2023). Sub-monthly prediction of harmful algal blooms based on automated cell imaging. Harmful Algae, 122, 102386. https://doi.org/10.1016/j.hal.2023.102386
Ali, Abdullah, S., Khan, Z., Hussain, A., Athar, A., & Kim, H. C. (2022). Computer Vision Based Deep Learning Approach for the Detection and Classification of Algae Species Using Microscopic Images. Water, 14(14), 2219. https://doi.org/10.3390/w14142219
Barsanti, L., Birindelli, L., & Gualtieri, P. (2021). Water monitoring by means of digital microscopy identification and classification of microalgae. Environmental Science: Processes & Impacts, 23(10), 1443–1457. https://doi.org/10.1039/d1em00258a
Cai, H., Shan, S., & Wang, X. (2022). Rapid detection for optical micrograph of plankton in ballast water based on neural network. Algal Research, 66, 102811. https://doi.org/10.1016/j.algal.2022.102811
Chong, J. W. R., Khoo, K. S., Chew, K. W., Ting, H. Y., & Show, P. L. (2023). Trends in digital image processing of isolated microalgae by incorporating classification algorithm. Biotechnology Advances, 63, 108095. https://doi.org/10.1016/j.biotechadv.2023.108095
Chong, J. W. R., Khoo, K. S., Chew, K. W., Vo, D. V. N., Balakrishnan, D., Banat, F., Munawaroh, H. S. H., Iwamoto, K., & Show, P. L. (2023). Microalgae identification: Future of image processing and digital algorithm. Bioresource Technology, 369, 128418. https://doi.org/10.1016/j.biortech.2022.128418
Coesel, P. F. M. (1996). Biogeography of desmids. Hydrobiologia, 336(1–3), 41–53.
https://doi.org/10.1007/bf00010818
Domozych, D. S., & Domozych, C. R. (2007). Desmids and Biofilms of Freshwater Wetlands: Development and Microarchitecture. Microbial Ecology, 55(1), 81–93. https://doi.org/10.1007/s00248-007-9253-y
Gaur, A., Pant, G., & Jalal, A. S. (2021). Morphology-based Identification and Classification of Harmful Bloom Forming Algae through Inception V3 Convolution Neural Network. 2021 5th International Conference on Information Systems and Computer Networks (ISCON). https://doi.org/10.1109/iscon52037.2021.9702363
Gaur, A., Pant, G., & Jalal, A. S. (2022). Computer-aided cyanobacterial harmful algae blooms (CyanoHABs) studies based on fused artificial intelligence (AI) models. Algal Research, 67, 102842. https://doi.org/10.1016/j.algal.2022.102842
Gaur, A., Pant, G., & Jalal, A. S. (2023). Comparative assessment of artificial intelligence (AI)-based algorithms for detection of harmful bloom-forming algae: an eco-environmental approach toward sustainability. Applied Water Science, 13(5). https://doi.org/10.1007/s13201-023-01919-0
George Karimpanal, T., & Bouffanais, R. (2018). Self-organizing maps for storage and transfer of knowledge in reinforcement learning. Adaptive Behavior, 27(2), 111–126. https://doi.org/10.1177/1059712318818568
Gerdan Koc, D., Koc, C., & Ekinci, K. (2023). Fusion-based machine learning approach for classification of algae varieties exposed to different light sources in the growth stage. Algal Research, 71, 103087. https://doi.org/10.1016/j.algal.2023.103087
Goldberg, S. J., Kirby, J. T., & Licht, S. C. (2016).
Applications of Aerial Multi-Spectral Imagery for Algal Bloom Monitoring in Rhode Island. SURFO Technical Report No. 16-01. DigitalCommons@URI. https://digitalcommons.uri.edu/surfo_tech_reports/13
Gong, X., Ma, C., Sun, B., & Zhang, J. (2023). An Efficient Self-Organized Detection System for Algae. Sensors, 23(3), 1609. https://doi.org/10.3390/s23031609
Guo, J., Ma, Y., & Lee, J. H. (2021). Real-time automated identification of algal bloom species for fisheries management in subtropical coastal waters. Journal of Hydro-Environment Research, 36, 1–32. https://doi.org/10.1016/j.jher.2021.03.002
Guterres, B., Khalid, S., Pias, M., & Botelho, S. (2023). A data integration pipeline towards reliable monitoring of phytoplankton and early detection of harmful algal blooms. Climate Change AI. https://www.climatechange.ai/papers/neurips2021/33
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2016.90
Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely Connected Convolutional Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2017.243
Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., & Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. OpenReview. https://openreview.net/forum?id=S1xh5sYgx
Khan, Abdullah, Z., Mumtaz, W., Mumtaz, A. S., Bhattacharjee, S., & Kim, H. C. (2022). Multiclass-Classification of Algae using Dc-GAN and Transfer Learning. 2022 2nd International Conference on Image Processing and Robotics (ICIPRob). https://doi.org/10.1109/iciprob54042.2022.9798730
Kim, S., Nam, J., & Ko, B. C. (2022). ViT-NeT: Interpretable Vision Transformers with Neural Tree Decoder. PMLR. https://proceedings.mlr.press/v162/kim22g.html
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017).
ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84–90. https://doi.org/10.1145/3065386
Kudela, R. M., Palacios, S. L., Austerberry, D. C., Accorsi, E. K., Guild, L. S., & Torres-Perez, J. (2015). Application of hyperspectral remote sensing to cyanobacterial blooms in inland waters. Remote Sensing of Environment, 167, 196–205. https://doi.org/10.1016/j.rse.2015.01.025
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
Lekki, J., Ruberg, S., Binding, C., Anderson, R., & Vander Woude, A. (2019). Airborne hyperspectral and satellite imaging of harmful algal blooms in the Great Lakes Region: Successes in sensing algal blooms. Journal of Great Lakes Research, 45(3), 405–412. https://doi.org/10.1016/j.jglr.2019.03.016
Li, S. F., Jia, Q., & Liang, H. (2014). Research of Red Tide Algae Images Feature Selection Method Based on ReliefF and SBS. Applied Mechanics and Materials, 507, 806–809. https://doi.org/10.4028/www.scientific.net/amm.507.806
Liao, Y., Yu, N., Zhou, G., Wu, Y., & Wang, C. (2022).
A wireless multi-channel low-cost lab-on-chip algae culture monitor AIoT system for algae farm. Computers and Electronics in Agriculture, 193, 106647. https://doi.org/10.1016/j.compag.2021.106647
Liu, Z., Tan, Y., He, Q., & Xiao, Y. (2022). SwinNet: Swin Transformer Drives Edge-Aware RGB-D and RGB-T Salient Object Detection. IEEE Transactions on Circuits and Systems for Video Technology, 32(7), 4486–4497. https://doi.org/10.1109/tcsvt.2021.3127149
Luo, J., Ni, G., Zhang, Y., Wang, K., Shen, M., Cao, Z., Qi, T., Xiao, Q., Qiu, Y., Cai, Y., & Duan, H. (2023). A new technique for quantifying algal bloom, floating/emergent and submerged vegetation in eutrophic shallow lakes using Landsat imagery. Remote Sensing of Environment, 287, 113480. https://doi.org/10.1016/j.rse.2023.113480
Pant, G., Yadav, D., & Gaur, A. (2020). ResNeXt convolution neural network topology-based deep learning model for identification and classification of Pediastrum. Algal Research, 48, 101932. https://doi.org/10.1016/j.algal.2020.101932
Pardeshi, R., & Deshmukh, P. D. (2020). Classification of Microscopic Algae: An Observational Study with AlexNet. Advances in Intelligent Systems and Computing, 309–316. https://doi.org/10.1007/978-981-15-2475-2_29
Park, J., Lee, H., Park, C. Y., Hasan, S., Heo, T. Y., & Lee, W. H. (2019). Algal Morphological Identification in Watersheds for Drinking Water Supply Using Neural Architecture Search for Convolutional Neural Network. Water, 11(7), 1338. https://doi.org/10.3390/w11071338
Piazza, G., Valsecchi, C., & Sottocornola, G. (2021). Deep Learning Applied to SEM Images for Supporting Marine Coralline Algae Classification. Diversity, 13(12), 640. https://doi.org/10.3390/d13120640
Prescott, G. W. (1948). Desmids. The Botanical Review, 14(10), 644–676. https://doi.org/10.1007/bf02861551
Qian, P., Zhao, Z., Liu, H., Wang, Y., Peng, Y., Hu, S., Zhang, J., Deng, Y., & Zeng, Z. (2020).
Multi-Target Deep Learning for Algal Detection and Classification. 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). https://doi.org/10.1109/embc44109.2020.9176204
Radosavovic, I., Kosaraju, R. P., Girshick, R., He, K., & Dollár, P. (2020). Designing Network Design Spaces. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr42600.2020.01044
Reimann, R., Zeng, B., Jakopec, M., Burdukiewicz, M., Petrick, I., Schierack, P., & Rödiger, S. (2020). Classification of dead and living microalgae Chlorella vulgaris by bioimage informatics and machine learning. Algal Research, 48, 101908. https://doi.org/10.1016/j.algal.2020.101908
Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv. https://arxiv.org/abs/1409.1556v6
Sonmez, M. E., Eczacıoglu, N., Gumuş, N. E., Aslan, M. F., Sabanci, K., & Aşikkutlu, B. (2022). Convolutional neural network–support vector machine based approach for classification of cyanobacteria and chlorophyta microalgae groups. Algal Research, 61, 102568. https://doi.org/10.1016/j.algal.2021.102568
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2015.7298594
Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv.
https://arxiv.org/abs/1905.11946v5
Tan, M., Chen, B., Pang, R., Vasudevan, V., Sandler, M., Howard, A., & Le, Q. V. (2019). MnasNet: Platform-Aware Neural Architecture Search for Mobile. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2019.00293
Tangsuksant, & Sarakon. (2023). Microalgae Detection by Digital Image Processing and Artificial Intelligence. International Conference on Artificial Life and Robotics (ICAROB2023), February 9–12, online, Oita, Japan. Retrieved May 9, 2023, from https://alife-robotics.co.jp/members2023/icarob/data/html/data/GS/GS2/GS2-1.pdf
Wang, Z., Yang, S., Shi, M., & Qin, K. (2022). An Image Augmentation Method Based on Limited Samples for Object Tracking Based on Mobile Platform. Sensors, 22(5), 1967. https://doi.org/10.3390/s22051967
West, J., Ventura, D., & Warnick, S. (2007). Spring Research Presentation: A Theoretical Foundation for Inductive Transfer. Brigham Young University, College of Physical and Mathematical Sciences, 32.
Won, Kim, K., Nam, J., You, J., Baek, & Park, J. S. (2021). Microalgae Detection Using a Deep Learning Object Detection Algorithm, YOLOv3. Journal of Korean Society on Water Environment, 37(4). https://doi.org/10.15681/KSWE.2021.37.4.275
Xie, S., Girshick, R., Dollár, P., Tu, Z., & He, K. (2017). Aggregated Residual Transformations for Deep Neural Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2017.634
Xu, L., Xu, L., Chen, Y., Zhang, Y., & Yang, J. (2022). Accurate Classification of Algae Using Deep Convolutional Neural Network with a Small Database. ACS ES&T Water, 2(11), 1921–1928. https://doi.org/10.1021/acsestwater.1c00466
Yang, M., Wang, W., Gao, Q., Zhang, L., Ji, Y., & Geng, S. (2021). Automatic Recognition of Harmful Algae Images Using Multiple CNNs. 2021 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI). https://doi.org/10.1109/icceai52939.2021.00055
Yang, M., Wang, W., Gao, Q., Zhao, C., Li, C., Yang, X., Li, J., Li, X., Cui, J., Zhang, L., Ji, Y., & Geng, S. (2022). Automatic identification of harmful algae based on multiple convolutional neural networks and transfer learning. Environmental Science and Pollution Research, 30(6), 15311–15324. https://doi.org/10.1007/s11356-022-23280-6
Yuan, A., Wang, B., Li, J., & Lee, J. H. (2023). A low-cost edge AI-chip-based system for real-time algae species classification and HAB prediction. Water Research, 233, 119727. https://doi.org/10.1016/j.watres.2023.119727
Zhang, X., Zhou, X., Lin, M., & Sun, J. (2018). ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices.
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2018.00716