INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL Online ISSN 1841-9844, ISSN-L 1841-9836, Volume: 18, Issue: 4, Month: August, Year: 2023 Article Number: 5108, https://doi.org/10.15837/ijccc.2023.4.5108 CCC Publications
Hybrid ICHO-HSDC Model For Accurate Covid-19 Detection and Classification From CT Scan And X-Ray Images
Badi Alekhya, R. Sasikumar, N. Sathish Kumar, N. Bharathiraja
Badi Alekhya, Department of Artificial Intelligence and Machine Learning, RMD Engineering College, Kavaraipettai, Tamil Nadu, 601206, India. *Corresponding author: badialekhya565@gmail.com
R. Sasikumar, Department of Computer Science and Engineering, RMD Engineering College, Kavaraipettai, Tamil Nadu, 601206, India. rsn.cse@rmd.ac.in
N. Sathish Kumar, Department of Artificial Intelligence and Machine Learning, RMD Engineering College, Kavaraipettai, Tamil Nadu, 601206, India. rsn.cse@rmd.ac.in
N. Bharathiraja, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India. rajamesoft@gmail.com
Abstract
The worldwide demand for medical care has increased due to the continuing expansion of Covid-19 cases; prompt and precise identification of this illness is therefore crucial. Health professionals are using additional screening techniques, including CT imaging as well as chest X-rays, for this. Pre-processing the CT scan pictures to eliminate the diaphragm areas, normalize image contrast, and minimize image noise, however, receives little attention. The seriousness of the Covid-19 infection must be assessed in addition to Covid-19 detection and categorization. An ICHO-HYBRID model for Covid-19 identification and classification from X-ray as well as CT scan images is offered as a solution to these issues. Histogram and morphological image processing methods are used for CT-scan images. The Improved Chicken Swarm Optimization (ICHO) technique is used to find the input image's histogram threshold.
The extracted areas are categorized using a Convolutional Neural Network based on a feature vector. When infections are found, they are categorized as severe, moderate, or extremely severe using the combined CNN and Support Vector Machine classifier. To eliminate the noise from the test pictures for X-ray imaging, the Adapted Anisotropic Diffusion Filtering (A2DF) approach is used. Once pre-processing is completed, features are extracted using an Image Profile (IP) and Histogram of Oriented Gradients (HOG) to create a fused HOG-and-IP (FHI) feature. Using the HYBRID method, the FHI characteristics are divided into 3 classes. Compared to SVM and CNN, the study provides the best accuracy, with scores of 94.6% for CT scan pictures and 95.6% for X-ray images.
Keywords: Corona Virus, X-Ray Images, CT Images, Image Profile, Improved Chicken Swarm Optimization, Adapted Anisotropic Diffusion Filtering and Deep Learning
1 Introduction
The fatal respiratory infections caused by the Covid-19 virus closely mirror SARS-CoV illnesses. People with a Covid-19 infection first have flu-like symptoms such as fever, exhaustion, a persistent cough, and breathing difficulties. In certain instances, however, a severe infection may lead to acute renal failure, which can be fatal. In addition to these typical signs and symptoms, this virus possesses a variety of unusual traits. Therefore, patients need an early diagnosis of this illness along with effective infection control [1]. Healthcare practitioners are using additional screening techniques, which are quicker and more accurate than standard testing, to identify Covid-19. These screening techniques [2] guide the process of finding the Covid-19 infection visibly. However, frequent CT scans are more expensive and riskier for youngsters and expectant mothers due to the high irradiation [3].
Deep Learning (DL) models have lately been effectively used in a variety of medical image analyses and applications, including the identification of lung cancer, pneumonia, and brain tumors [4]. The dangerous illness is less likely to spread due to DL-based detection and classification approaches. The use of DL algorithms in numerous medical applications offers very accurate prediction and diagnosis while also assisting in the prevention of COVID-19 spread [5][6]. Early detection and accurate discovery of viruses will stop them from spreading and making patients worse, which may help lower the mortality rate. Grayscale CT scans and chest X-ray pictures are typical, thus pre-processing is necessary to confirm the size and structure of the images before using them as training data for the model. For reliable identification and follow-up, segmenting the pneumonia lesions and localizing the infection are difficult tasks. However, current research has a difficult time differentiating pulmonary lesions in terms of size and form from infectious regions.
A. Motivation and Issue Description
To rank patients according to severity, it would be helpful to segment the CT scan medical images. Segmentation aids in locating the contaminated area and offers details about its spatial characteristics, such as size and shape. However, only a few studies have applied segmentation tasks to chest Covid-19 CT images. The research's main issue is that there isn't enough focus on pre-processing datasets to get rid of diaphragm areas, normalize picture contrast, and minimize image noise. In addition to COVID-19 identification and classification, DL-based segmentation techniques must be used to locate the infection zone and assess the extent of the illness.
B. Main Contributions of the Work
The ICHO-HYBRID model is proposed for Covid-19 recognition and classification in this study.
The following is a summary of this work's significant elements:
• In contrast to conventional methods, the suggested model detects Covid-19 infections using both X-ray as well as CT scan images.
• Reducing the diaphragm areas lowers the picture noise.
• In addition to detection, it uses DL-based segmentation approaches to determine the rate of Covid-19 infection.
• It improves classification accuracy by combining CNN and SVM.
The proposed model, ICHO-HYBRID, addresses the need for prompt and precise identification. The use of various image processing techniques and feature extraction methods improves Covid-19 classification and detection accuracy, and the results show superior accuracy when compared to existing methods like CNN and SVM. Therefore, the ICHO-HYBRID model offers an effective solution to the problem of detection and classification, which is crucial in the present pandemic situation. The approach combines various image processing techniques such as Improved Chicken Swarm Optimization (ICHO), histogram and morphological image processing, and Adapted Anisotropic Diffusion Filtering (A2DF), with feature extraction methods like Image Profile (IP) and Histogram of Oriented Gradients (HOG), to create a fused HOG-and-IP feature. The paper also uses a CNN method based on a feature vector and an SVM algorithm for classification. The results show higher accuracy compared to SVM and CNN alone. Therefore, the novelty of the proposed approach lies in the combination of various techniques and methods for categorization and identification, leading to improved accuracy in identifying the disease.
2 Related Work
Daniel Arias et al.'s technique [7] classifies chest X-ray pictures as either negative or positive for COVID-19 using the VGG19 and VGG16 models. In the classification process, pre-trained parameters from the ImageNet dataset are used to train the VGG19 and VGG16 models using the TL method.
The severity of the disorders is ignored, as this method labels the photos only as normal or Covid-afflicted. A novel model is suggested for identification in Tulin Ozturk's method [8]. This model includes multi-class classification in addition to a Covid-versus-Normal classification algorithm. For binary classes, this model's classification accuracy was 98.08%, while for multi-class scenarios it was 87.02%. A DL method has been created by Motif and Bandar [9] to classify COVID-19. Based on the categorization trials, three sorts of classes are returned: COVID-19, pneumonia, and normal. Additionally, a TL approach is used to enhance the model's performance. CovidNet is a DL architecture suggested by Muhammad Aminu et al. [10] that utilizes grayscale photos with a small training dataset. Deep feature extraction was then performed using CovidNet. The classification was performed on both X-ray and CT images, in contrast to other methods. Rachna Jain et al. [11] collect a collection of X-ray as well as CT scan pictures that include information on both healthy and afflicted individuals. On the photos, they first ran data cleansing and data augmentation processes. Then, a DL-based CNN model was used for classification, and several models were used to evaluate the results. Moutaz Alazab et al. [12] used the CNN model to identify Covid-19. Prediction techniques based on the Prophet method, LSTM networks, and the ARIMA model are employed in contrast to CNN. Dongsheng Ji et al. [13] have suggested using feature fusion to detect Covid-19. This approach employs five common pretraining models after preprocessing to retrieve the precise characteristics. Here too, the severity of the disorders is ignored, as the photos are labeled only as normal or Covid-afflicted. Five pre-trained CNN-based transfer models were used by Ali Narin et al. [14] to identify infection-related Covid and pneumonia. They employed 5-fold cross-validation and 3 binary classifications with 4 classes.
However, the noise in the photos is not decreased with this strategy either. A fresh CAD method for Covid-19 identification was proposed by Morteza Heidari et al. [15]. Bilateral low-pass filtering and the histogram equalization method are used to eliminate the majority of the diaphragm from the input image. The Covid-19, typical cases, and pneumonia categories are applied to the chest X-ray pictures. To identify Covid-19 infection, CT scan pictures were collected by Yazan Qiblawey et al. [16]. The afflicted area is identified from the pictures by segmenting the CT slices using the U-Net architecture. The noise in the photos was not lessened by this method, however. A hierarchical technique for CT lung picture classification and for identifying Covid-19 infections from computed tomography is presented in [16]. Depending on the fraction of the lungs afflicted, a DED-CNN is used to identify the pictures as having a severe, moderate, or mild infection. Unfortunately, this method did not help to lessen the visual noise. An automatic workflow has been created by Dominik Müller et al. [17] for segmenting the Covid-19-infected areas. This method uses data augmentation techniques and provides randomized, distinctive picture slices for training. They employed a 3D U-Net DL model to address the overfitting concerns. A deep learning hybrid forecasting model, CNN-LSTM [22], developed by Shwet Ketu et al., can predict the COVID-19 pandemic in India with accuracy. Convolutional layers are used by the proposed approach to learn from and extract relevant data from a specific time series dataset. Additionally, the capacity of the LSTM layer to recognize both long- and short-term connections enhances it. Suneeta Satpathy et al. [23] said that AI-mediated approaches are capable of predicting death rates. The investigation of effective prediction models is the study's primary objective. The authors compare analyses of several models to find which one makes the best predictions.
Foroogh Sharifzadeh et al. [24] have described an approach that starts with two neural nets, a CNN and a multilayer perceptron (MLP) network, and then suggests combining these two networks to benefit from each network's strengths. The multilayer perceptron clearly performs worse than the CNN in terms of accuracy metrics; however, when it is combined with the CNN, the accuracy metrics increase. Using deep learning, Noushin Davari et al. [25] have described a technique for analyzing UV-visible video. The kind of incipient defect and its severity level are identified for every scene, based mostly on the system's logs of unexpected power outages as well as scheduled inspections throughout the year. Frames are retrieved from each clip at a pace based on the line's nominal voltage to process the video. To account for camera movement, power devices are identified in each frame utilizing Faster R-CNN and monitored across the entire video frame. Then, color thresholding determines for each device which frames contain corona discharges. To remove disturbances in the UV channel, extensive median filtering is also employed across the movie. The severity level of the impending problem is then calculated based on the proportion of the observed equipment's surface. The suggested study employs a unique Quantum Tunnelling Particle Swarm Optimization (QT-PSO) technique to enhance the performance of the system, which is especially efficient for PV-based applications [26]. Envelope detectors, which are simple circuits that extract the envelope of the input signal, are used in the suggested method. To process the baseband signal using low-power digital signal processing (DSP) circuits, the envelope detector is utilized as a converter to transform the modulated radio frequency (RF) signal [27]. A balance between data accessibility, power consumption, and data availability ratio is what the suggested strategy seeks to accomplish.
A four-phase strategy is used to achieve this, consisting of a multicast strategy for increased data availability, a data replication procedure for increased replication rate, a data accessibility strategy for increased accessibility rate, and a power consumption strategy for reduced transmission and reception power [28].
3 Proposed Work
This study develops a hybrid approach for the identification and categorization of Covid-19. The ICHO technique is used to calculate the histogram threshold for CT-scan images. The candidate regions are then subjected to extraction of a variety of statistical and shape-related characteristics. Following the fusion of all the extracted features into a feature space, the hybrid method is used to classify the chosen regions. The nodules are categorized as severe, moderate, or extremely severe after they are found. A MADF approach is used to remove generated speckle noise from the test images for X-ray imaging. The selected features are utilized as inputs to build the classifier model after the pre-processing stage. The chest X-ray pictures are then categorized as Covid-19, Pneumonia, or healthy using the same SVM-CNN method.
Figure 1: Proposed Model Architecture
The following table presents the summary of the parameters and variables used in this paper.
Table 1 Parameters and variables
C. Covid-19 Detection
Pre-processing and Segmentation - Pre-processing is the preliminary phase in image processing, which consists of histogram and morphological techniques. Since we have to segment the lung region from the image, the first two components should be removed. In this technique, the histogram of the image is constructed and analyzed to automatically choose a threshold, based on which the outer portion of the image is detected and removed.
Thresholding: After pre-processing, the threshold for each slice of the image is computed using the ICHO algorithm.
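The histogram-based background removal described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the stand-in threshold here is simply the histogram mean, a placeholder for the value the ICHO algorithm would actually supply.

```python
import numpy as np

def histogram_threshold(image, bins=256):
    """Build the histogram of a grayscale slice and pick a stand-in
    threshold (the histogram mean); ICHO would optimize this value."""
    hist, edges = np.histogram(image, bins=bins, range=(0, 256))
    centers = (edges[:-1] + edges[1:]) / 2
    eta = np.average(centers, weights=hist)  # placeholder for ICHO's eta
    return eta

def remove_background(image, eta):
    """Zero out pixels below the threshold, keeping the brighter region."""
    return np.where(image >= eta, image, 0)

# Hypothetical slice standing in for a pre-processed CT image
slice_img = np.random.randint(0, 256, size=(64, 64))
eta = histogram_threshold(slice_img)
segmented = remove_background(slice_img, eta)
```

In the paper's pipeline, `eta` would instead be the per-slice threshold returned by ICHO, applied slice by slice to the CT volume.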
The ICHO technique is used to calculate the histogram threshold for CT-scan images. The threshold image is obtained by applying the threshold to the pre-processed CT image.
Nodule identification: The next step is to identify nodules in the CT image. The candidate nodules are then subjected to extraction of a variety of statistical and shape-related characteristics. Following the fusion of all the extracted features into a feature space, the hybrid method is used to classify the chosen regions. The nodules are categorized as severe, moderate, or extremely severe after they are found.
Speckle noise removal: In the case of X-ray images, a MADF (Median Absolute Difference Filter) approach is used to remove generated speckle noise from the test images.
Feature selection: The selected features are utilized as inputs to build the classifier model after the pre-processing stage. The chest X-ray pictures are then categorized as Covid-19, Pneumonia, or healthy using the same SVM-CNN method.
Covid-19 detection: Finally, the proposed hybrid approach is used to detect Covid-19 in CT-scan and X-ray images. The pre-processed CT images are used to identify nodules, and the X-ray images are categorized using the SVM-CNN method.
Let I denote the pre-processed CT image of size M×N. From I, the histogram H is computed using the step value k, from which the threshold η is estimated. The resultant image after removing the background is then given by [18], where the result is the threshold image. The threshold η for each slice is computed using the ICHO algorithm. The fitness function of ICHO is derived as
f = ∂²(2 log)^(1/2) (2)
where ∂² is the noise variance.
ICHO Algorithm: The intelligent bionic algorithm ICHO replicates the foraging activity of a flock of chickens in their native habitats. This algorithm consists of four steps: population, cock position, hen position, and chick position updation.
(i) Update of Cock Position
When the cock is away from the midpoint p of the hens in a cluster, it performs a random search over a bigger region, thus improving the probability of a global search. On the other hand, when the cock is nearer to the midpoint p of the hens, it performs a random search over a smaller region, which improves the local estimation capacity of the algorithm and hence improves the chances of determining the best solution [19]. In the cock position update equation, k ∈ [1, 2, .., N] and i are cock indices, with k dynamically chosen from the cock groups such that i ≠ k; fi and fk represent the fitness function values of the ith and kth cocks, respectively; ∂ is a small constant used to evade the divide-by-zero error; and y is the proportional coefficient.
(ii) Hen Position Update Mode
Among the initial population, Gbest individuals with better fitness values are selected, and one among them is randomly chosen as the supreme individual. The Gbest value is set manually during experiments, and should be less than the number of cocks in the population. During the foraging step, the supreme individual is considered the learning objective for the hen. Hence, the position of the hen (H) is updated based on the supreme individual using the following equation, in which rand is a random value between 0 and 1, C is a cock, H is a hen, r1 is the index of the cock that is the ith hen's group mate, r2 is a chicken index chosen randomly from either group, Pe(t) is the supreme (elite) individual the hen learns from in the tth iteration, and hao is the number of elite individuals reserved in the community. S1 and S2 are calculated as follows, where fi, fr1, and fr2 represent the fitness function values at i, r1, and r2, respectively.
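The cock and hen moves above follow the standard chicken swarm optimization update rules, which ICHO extends. The sketch below implements those standard rules under that assumption; the fitness values, positions, and the `EPS` constant (the paper's small divisor ∂) are illustrative.

```python
import numpy as np

EPS = 1e-12  # small constant to evade divide-by-zero (the paper's small constant)

def cock_update(x_i, f_i, f_k, rng):
    """Rooster move: the search radius grows when its fitness is worse
    than a randomly chosen rival k (wider, more global search) and
    shrinks otherwise (finer local search)."""
    if f_i <= f_k:
        sigma2 = 1.0
    else:
        sigma2 = np.exp((f_k - f_i) / (abs(f_i) + EPS))
    return x_i * (1 + rng.normal(0.0, np.sqrt(sigma2), size=x_i.shape))

def hen_update(x_i, x_r1, x_r2, f_i, f_r1, f_r2, rng):
    """Hen move: follow its group rooster r1 and a random chicken r2,
    weighted by the fitness-based factors S1 and S2."""
    s1 = np.exp((f_i - f_r1) / (abs(f_i) + EPS))
    s2 = np.exp(f_r2 - f_i)
    return (x_i
            + s1 * rng.random() * (x_r1 - x_i)
            + s2 * rng.random() * (x_r2 - x_i))

rng = np.random.default_rng(0)
new_cock = cock_update(np.array([0.5, 0.2]), f_i=1.0, f_k=2.0, rng=rng)
new_hen = hen_update(np.zeros(2), np.ones(2), -np.ones(2), 1.0, 0.5, 0.8, rng)
```

ICHO's additions (the elite-individual learning target and the population-diversity update) would replace `x_r1` with the elite position `Pe(t)` and reinitialize near-duplicate individuals, per the equations in the text.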
(iii) Chick Position Update Mode
The chick's position can be updated based on the position of the cock by the equation in which m denotes the index of the chick's mother, r1 is the index of the cock that is the ith chick's group mate, CH is the individual chick, and FL and w are learning factors.
(iv) Population Update Strategy
In most optimization solutions, if a set of individuals is found near a local extreme, other individuals will continuously move towards this local extreme. Hence the same individuals may be repeated in the population. During higher iterations, the number of similar individuals may linearly increase, thereby affecting the diversity of the population. If diversity is affected, the chances of meeting the global best solution are reduced. Hence, to keep the diversity of the population unaffected, the individuals in the population are updated as follows, where Pi(t) is the ith individual at the tth generation, L and U are the lower and upper bounds of the variable, and rand is a uniform random number in [0,1]. The above equation helps in estimating the histogram threshold for each slice of the input image.
B. Feature Extraction
Feature extraction is a key area in categorizing image characteristics. Further, choosing notable aspects can be successfully employed to increase the diagnostic system's accuracy. In this phase, various shape- and statistics-based features are extracted from the nodule elements. Let x1, x2, x3, ..., xn be the centers of the shapes, which are fed as input to the region-growing algorithm. It returns the associated n candidate portions B1, B2, B3, ..., Bn. The following statistical and shape properties of candidate regions are extracted to construct a feature vector: mean, median, mode, variance, standard deviation, and the Consistency Feature (CF). The CF is estimated from the shape and appearance of the lesion in the adjacent slices of the image.
Hence the midpoints identified in slice Si are checked against the midpoints from the (Si-k) and (Si+k) slices. If a midpoint is found in any of those 2k slices, then CFi = 1; if not, CFi = 0. The selected features are extracted for each nodule element portion Bi in a slice Sj and then fused to get a feature vector Vi.
C. Classification
Depending on the feature vector V, the extracted regions are classified using the HYBRID algorithm. The HYBRID classifier combines the benefits of both the CNN and SVM algorithms [20]. Then labeling is performed by dividing the data into test and train sets.
SVM: The fundamental aim of SVM is to determine the optimal hyperplane in the feature space which maximally divides the target classes. Geometrically, the SVM algorithm determines an optimal hyperplane with the maximal margin to divide the two classes. In the training set of SVM, xj is the characteristic vector of the jth model as input and yj is the output label, +1 or -1. SVM splits the positive and negative instances utilizing a hyperplane, where w.x denotes the dot product of w and x. The training algorithm of SVM is illustrated below:
SVM Training
The architecture of CNN: When compared to a typical deep network, a CNN applies an initial pixel filter to an image to collect detailed patterns. The CNN is composed of the following types of layers:
• Conv layer - This layer applies n filters. Following the convolution, a ReLU activation function is applied to add nonlinearity to the system.
• Max-pool layer - The typical approach of dividing local features into subregions and retaining only the highest value (max pooling).
• Fully connected layer - All neurons of the previous layers are linked to the succeeding levels.
The CNN classifies the labels using the attributes of the convolutional and pooling layers. The details of the architecture are represented in Figure 2.
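The SVM stage described above (a maximal-margin hyperplane separating labels +1 and -1 via the dot product w.x) can be sketched with a minimal hinge-loss trainer. This is an illustrative stand-in, not the paper's trained model; the toy feature vectors, learning rate, and regularization constant are assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM trained by sub-gradient descent on the
    hinge loss; labels y must be +1 or -1, as in the text."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # inside the margin: push the hyperplane away from x_i
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:
                # outside the margin: only apply regularization shrinkage
                w = (1 - lr * lam) * w
    return w, b

def svm_predict(w, b, X):
    """Sign of w.x + b decides which side of the hyperplane x falls on."""
    return np.sign(X @ w + b)

# Toy, linearly separable "feature vectors" standing in for V
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
```

In the HYBRID scheme, the inputs `X` would be the CNN-derived feature vectors rather than raw pixels, which is what lets the SVM hyperplane separate classes the raw data would not.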
Figure 2: CNN Model
Each image attribute is selected from the testing collection of features. The size of the training feature matrix is established. The SVM structure is then estimated. Classes = 0 if (N1), training feature = 1. The training feature is then calculated using the SVM framework, and the CNN is trained. The batch size of the testing feature is calculated. Once found, nodules are classed as serious, medium, or extremely severe using the CNN algorithm.
D. Covid-19 Detection Pre-processing
The MADF approach is used to remove noise from X-ray pictures at this phase. MADF divides input images into many sub-images or gradients. These gradients are processed successively before being mixed again. This filtering method primarily removes noise as well as edge areas from photos. This filter eliminates the edge detection from the pictures after several iterations. An original image O mixed with speckle n is represented as follows:
To obtain accurately filtered imaging, the filtering procedure is continued until the value of k reaches 0.001, because if k were to approach 0, all image characteristics would be eliminated. The least possible association should exist between the noise and image classifications. Speckle elimination is kept up until the stochastic region of the picture approaches the stochastic confidence interval. The kurtosis value is represented by k.
Figure 3: CNN Architecture
FHI Feature Extraction: The characteristics of HOG as well as IP are obtained after the pre-processing stage.
HOG Features: The values of each image cell are joined with a gradient L2-norm to produce the direction statistics channel known as HOG. The histogram's channels are made by computing a negative gradient on a rectangular block (i.e., R-HOG). The elements in the resulting feature vector add up numerous times since they overlap by a factor of two in their size.
The gradient for each pixel in the image is computed as
GH = P(x+1, y) - P(x, y) (15)
GV = P(x, y+1) - P(x, y) (16)
where GH and GV are the horizontal and vertical gradients, respectively, and P(x, y) is the pixel intensity at (x, y). Algorithm 1 describes the processes required in HOG feature extraction:
Algorithm for HOG Feature Extraction
1. Get the input image
2. Perform pre-processing using MADF
3. Generate the gradient image
4. Divide the image window into cells and overlapping blocks
5. Compute the HOG for each block
6. Perform normalization of each block
7. Concatenate all normalized histograms
8. Obtain the HOG feature vector
Image Profiles (IP): The presence of artifacts in X-ray pictures may be readily recognized by extracting image profiles. Similarly, the fidelity of pictures may be assessed by examining the IPs of sharp corners.
Obtaining FHI Features: Because fusing multiple image characteristics leads to a bigger number of attributes needed for reliable detection, the FHI attributes are obtained by fusing the HOG and IP characteristics. The FHI features are then used to train the classification method. HOG and IP are both extracted characteristics. The continuity equation [21] depicts the selection of features and the fusing process. The FHI characteristics are denoted by Eq. (19). We acquire 7876 fused FHI features, of which 116 are chosen based on the greatest entropy value. The HYBRID algorithm then categorizes the X-ray chest pictures into 3 groups: Covid-19, Pneumonia, and normal (as discussed in Section 3.2.2.3).
The study presents a hybrid approach for identifying and categorizing COVID-19 in CT scans and X-ray images, using the ICHO technique for threshold calculation for CT-scan images and a modified adaptive directional filter (MADF) approach for speckle noise reduction in X-ray images.
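Equations (15)-(16) and the algorithm steps above can be sketched as follows. This is a minimal one-cell illustration under the assumption of unsigned orientations and a 9-bin histogram (common HOG defaults, not stated by the paper); block grouping and overlap are omitted.

```python
import numpy as np

def pixel_gradients(P):
    """Forward-difference gradients per Eqs. (15)-(16):
    GH = P(x+1, y) - P(x, y), GV = P(x, y+1) - P(x, y),
    treating the second array axis as x."""
    P = P.astype(float)
    GH = np.zeros_like(P)
    GV = np.zeros_like(P)
    GH[:, :-1] = P[:, 1:] - P[:, :-1]   # horizontal gradient
    GV[:-1, :] = P[1:, :] - P[:-1, :]   # vertical gradient
    magnitude = np.hypot(GH, GV)
    orientation = np.degrees(np.arctan2(GV, GH)) % 180  # unsigned bins
    return magnitude, orientation

def cell_histogram(mag, ori, n_bins=9):
    """One HOG cell: orientation histogram weighted by gradient
    magnitude, then L2-normalized as described in the text."""
    bins = np.minimum((ori / (180 / n_bins)).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist) + 1e-12   # gradient L2-norm
    return hist / norm

# Hypothetical patch with a constant horizontal intensity ramp
img = np.tile(np.arange(8.0), (8, 1))
mag, ori = pixel_gradients(img)
hist = cell_histogram(mag, ori)
```

The full descriptor concatenates such per-block histograms over overlapping blocks (step 7 of the algorithm), which is why elements of the final feature vector are counted multiple times.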
The following are the details of dataset testing and training:
Dataset: The study uses two datasets: one of CT scans and one of chest X-ray images.
Training Dataset: For CT scan images, the authors used the dataset publicly available at https://covid-ct.grand-challenge.org/Data/. It includes 1252 CT images belonging to COVID-19 (397), non-COVID pneumonia (559), and normal cases (296). The authors used 60% of the total dataset for training the model. For chest X-ray images, the authors collected the dataset from different sources, which included COVID-19 (150), pneumonia (150), and healthy (150) cases. The authors used 70% of the total dataset for training the model.
Testing Dataset: The remaining 40% of the CT scan dataset was used for testing the model's performance. For chest X-ray images, the authors used the dataset publicly available at https://github.com/ieee8023/covid-chestxray-dataset. It includes 106 COVID-19, 120 non-COVID pneumonia, and 100 normal cases. The authors used this dataset for testing the model's performance.
Model Training: For CT scan images, the authors pre-processed the images using histogram and morphological techniques to segment the lung region. The ICHO algorithm was then used to calculate the histogram threshold, and the extracted features were combined into a feature space. The SVM-CNN hybrid method was used to classify the nodules as severe, moderate, or extremely severe. For chest X-ray images, the MADF approach was used to reduce the generated speckle noise. The extracted features were then used as inputs to build the classifier model after the pre-processing stage. The SVM-CNN hybrid method was used to classify the chest X-ray images as COVID-19, pneumonia, or healthy.
4 Results and Discussion
Python was used to construct the proposed HYBRID-based classification algorithm. The CT scan as well as X-ray image Covid-19 datasets are taken from https://github.com/ieee8023/covid-chestxray-dataset.
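The stated splits (60/40 for CT, 70/30 for the collected X-ray set) can be sketched as below. The file names are hypothetical placeholders for the real image lists; only the counts and fractions come from the text.

```python
import random

def split_dataset(items, train_fraction, seed=42):
    """Shuffle a list of image identifiers and split it into
    train and test partitions by the given fraction."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]

# Hypothetical ID lists standing in for the real image files
ct_images = [f"ct_{i:04d}.png" for i in range(1252)]   # 1252 CT images total
ct_train, ct_test = split_dataset(ct_images, 0.60)     # 60% train / 40% test

xray_images = [f"xr_{i:04d}.png" for i in range(450)]  # 150 per class, 3 classes
xr_train, xr_test = split_dataset(xray_images, 0.70)   # 70% train / 30% test
```

Shuffling with a fixed seed before splitting keeps the partitions reproducible; in practice a stratified split per class would better preserve the class balance reported in the dataset description.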
Cohen JP created this using pictures from several freely accessible sources. For both pneumonia and healthy X-ray pictures, the dataset supplied by Wang et al. (http://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_ChestX-ray) was utilized. Table 2 displays the CNN classification parameters.
Table 2 CNN settings
5 Results - CT Scan Images
A. CT Images
As inputs, 1035 severe instances, 1552 intermediate cases, and 2077 normal cases are extracted from CT scan images. Figure 4 depicts a selection of input photos from the normal, moderate, and severe categories. Table 3 displays the categorized images for each class and allows for a better understanding. For the proposed CNN-based classification model, validation and training curves are generated and depicted in Fig. 5.
Figure 5: Training vs Validation for HYBRID
The proposed HYBRID classification outperforms the SVM as well as CNN classifiers for the accuracy, recall, precision, and F-score metrics.
Figure 6: Analysis of Accuracy for Samples and Features
Figure 7: Analysis of Precision for Samples and Features
Figure 8: Analysis of Recall for Samples and Features
Figure 9: Analysis of F-score (%) for Samples and Features
Table 4 and Figure 5 exhibit the comparative performance measurement findings for all three methods. Table 4 compares the system performance of all techniques. As shown in Fig. 6, HYBRID has the best accuracy of 94.6%, followed by CNN (91.3%) as well as SVM (87.6%). Likewise, for additional measures, the proposed HYBRID beats the other 2 techniques.
B. Results for X-Ray Images
1189 normal images, 2223 pneumonia images, and 1516 Covid images are used as input in the X-ray image data. Figure 7 depicts the pictures used in the normal, Covid-19, and pulmonary datasets.
The categorized X-ray images from data testing are in Table 5. Figure 8 depicts the pre-processing results obtained using the background subtraction filtering approach. Figure 9 depicts the HYBRID validation and training curves for 10 iterations.
Figure 11: Images of Pneumonia
Figure 12: Preprocessed Images
Table 5 Results for X-Ray Images
Figure 13: Curve for Proposed Methods
With consideration of the parameters precision, accuracy, recall, and F1-score, the proposed HYBRID classification outperforms the SVM and CNN classifiers. The comparative findings of the performance indicators for the three methods are shown in Table 6 and Figure 10.
Figure 14: Analysis of Accuracy for Samples and Features
Figure 15: Analysis of Precision for Samples and Features
Figure 16: Analysis of Recall for Samples and Features
Figure 17: Analysis of Recall for Samples and Features
Table 6 Performance Analysis of All Proposed and Existing Methods
Because HYBRID uses both SVM and CNN training techniques, it achieves better precision compared to the other two methods. As shown in Fig. 10, HYBRID has the best accuracy of 95.6%, followed by CNN (91.2%) and SVM (89%). Similarly, for additional measures, the proposed HYBRID beats the other two methods.
Convergence Time and Computational Complexity: The computational complexity of the novel and current methods was examined on GPU configurations. The convergence time as well as the computation cost of the methods are shown in the table below.
Table 7 Convergence time and computational complexity
HYBRID has the shortest convergence time, roughly 0.010 seconds, which is faster than SVM and CNN. However, due to the combination of SVM and CNN, as shown in Table 7, its computational cost is somewhat greater, roughly 11.25 seconds, than that of SVM as well as CNN.
6 Conclusion

The proposed technique utilizes the ICHO-HYBRID model to detect and categorize Covid-19. The ICHO technique is used to determine the histogram threshold for CT-scan images, and the HYBRID method classifies the extracted regions based on the feature vector. When nodules are discovered, they are graded as severe, moderate, or extremely severe using the HYBRID method. The A2DF approach was used to preprocess the test images for X-ray imaging. Following the pre-processing stage, the FHI features are employed to train the classification model, and the HYBRID system then categorizes the chest X-ray images as infected or normal. The proposed HYBRID-based classification method was implemented in Python. Compared to SVM and CNN, the experimental findings demonstrate that HYBRID achieves accuracies of 94.6 for CT images and 95.6 for X-ray images.

Future work could validate the ICHO-HYBRID model on larger datasets to assess its generalizability and robustness in identifying and classifying Covid-19 cases. The proposed model could also be extended to incorporate other modalities, such as ultrasound or MRI, to further enhance the accuracy of Covid-19 detection and classification.

Declaration:
Participation Consent and Ethical Approval: This work was carried out without the involvement of human participants.
Rights of Humans and Animals: No animal or human rights were violated in this work.
Funding: No funding was received for this work.
Competing Interests: The authors declare no conflict of interest.
Contributions to the Authorship: Not applicable.
Acknowledgment: Not applicable.
References

[1] Luca Brunese, Fabio Martinelli, Francesco Mercaldo and Antonella Santone, "Machine learning for coronavirus covid-19 detection from chest x-rays", Elsevier, 24th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, 2020.

[2] Md Mamunur Rahaman, Chen Li, Yudong Yao, Frank Kulwa, Mohammad Asadur Rahman, Qian Wang, Shouliang Qi, Fanjie Kong, Xuemin Zhu, and Xin Zhao, "Identification of COVID-19 samples from chest X-Ray images using deep learning: A comparison of transfer learning approaches", Journal of X-Ray Science and Technology, 28 (2020) 821-839, DOI 10.3233/XST-200715, 2020.

[3] Nazmus Shakib Shadin, Silvia Sanjana and Nusrat Jahan Lisa, "COVID-19 Diagnosis from Chest X-ray Images Using Convolutional Neural Network (CNN) and InceptionV3", International Conference on Information Technology (ICIT), 2021.

[4] Boran Sekeroglu and Ilker Ozsahin, "Detection of COVID-19 from Chest X-Ray Images Using Convolutional Neural Networks", SLAS Technology, Vol. 25(6), 553-565, 2020.

[5] Thiruneelakandan, A., Kaur, Gaganpreet, Vadnala, Geetha, Bharathiraja, N., Pradeepa, K. and Retnadhas, Mervin, "Measurement of oxygen content in water with purity through soft sensor model", Measurement: Sensors, 24, 100589, 2022, DOI 10.1016/j.measen.2022.100589.

[6] B. Kaur and G. Kaur, "Heart disease prediction using modified machine learning algorithm", in International Conference on Innovative Computing and Communications, Springer, 2023, pp. 189-201.

[7] Daniel Arias-Garzón, Jesús Alejandro Alzate-Grisales, Simon Orozco-Arias, Harold Brayan Arteaga-Arteaga, Mario Alejandro Bravo-Ortiz, Alejandro Mora-Rubio, Jose Manuel Saborit-Torres, Joaquim Ángel Montell Serrano, Maria de la Iglesia Vayá, Oscar Cardona-Morales and Reinel Tabares-Soto, "COVID-19 detection in X-ray images using convolutional neural networks", Elsevier, Machine Learning with Applications, 6 (2021) 100138, 2021.
[8] Tulin Ozturk, Muhammed Talo, Eylul Azra Yildirim, Ulas Baran Baloglu, Ozal Yildirim and U. Rajendra Acharya, "Automated detection of COVID-19 cases using deep neural networks with X-ray images", Elsevier, Computers in Biology and Medicine, 121 (2020) 103792, 2020.

[9] Munif Alotaibi and Bandar Alotaibi, "Detection of COVID-19 Using Deep Learning on X-Ray Images", Intelligent Automation & Soft Computing, DOI 10.32604/iasc.2021.018350, 2021.

[10] Pradeepa, K., Bharathiraja, N., Meenakshi, D., Hariharan, S., Kathiravan, M., and Kumar, V., "Artificial Neural Networks in Healthcare for Augmented Reality", in 2022 Fourth International Conference on Cognitive Computing and Information Processing (CCIP), pp. 1-5, IEEE, December 2022, DOI 10.1109/CCIP57447.2022.10058670.

[11] Rachna Jain, Meenu Gupta, Soham Taneja, and D. Jude Hemanth, "Deep learning based detection and analysis of COVID-19 on chest X-ray images", Applied Intelligence, 51:1690-1700, 2021.

[12] Moutaz Alazab, Albara Awajan, Abdelwadood Mesleh, Ajith Abraham, Vansh Jatana, Salah Alhyari, "COVID-19 Prediction and Detection Using Deep Learning", International Journal of Computer Information Systems and Industrial Management Applications, Vol. 12, 2020.

[13] Dongsheng Ji, Zhujun Zhang, Yanzhong Zhao and Qianchuan Zhao, "Research on Classification of COVID-19 Chest X-Ray Image Modal Feature Fusion Based on Deep Learning", Hindawi, Journal of Healthcare Engineering, Volume 2021, Article ID 6799202, 12 pages, 2021.

[14] Ali Narin, Ceren Kaya, and Ziynet Pamuk, "Automatic Detection of Coronavirus Disease (COVID-19) Using X-ray Images and Deep Convolutional Neural Networks", arXiv, DOI 10.1007/s10044-021-00984-y, 2020.
[15] Morteza Heidari, Seyedehnafiseh Mirniaharikandehei, Abolfazl Zargari Khuzani, Gopichandh Danala, Yuchen Qiu and Bin Zheng, "Improving the performance of CNN to predict the likelihood of COVID-19 using chest X-ray images with preprocessing algorithms", Elsevier, International Journal of Medical Informatics, 144 (2020) 104284, 2020.

[16] Yazan Qiblawey, Anas Tahir, Muhammad E. H. Chowdhury, Amith Khandakar, Serkan Kiranyaz, Tawsifur Rahman, Nabil Ibtehaz, Sakib Mahmud, Somaya Al-Madeed and Farayi Musharavati, "Detection and Severity Classification of COVID-19 in CT images using deep learning", MDPI, 2021.

[17] Dominik Müller, Inaki Soto-Rey, Frank Kramer, "Robust chest CT image segmentation of COVID-19 lung infection based on limited data", Elsevier, Informatics in Medicine Unlocked, 25 (2021) 100681, 2021.

[18] Noor Khehrah, Muhammad Shahid Farid, Saira Bilal and Muhammad Hassan Khan, "Lung Nodule Detection in CT Images Using Statistical and Shape-Based Features", Journal of Imaging, MDPI, Feb 2020.

[19] Ravindhar, N., Sasikumar, S., Bharathiraja, N., and Kumar, M. V., "Secure Integration of Wireless Sensor Network with Cloud Using Coded Probable Bluefish Cryptosystem", Journal of Theoretical and Applied Information Technology, 100(24), 2022.

[20] Mudhafar Jalil Jassim Ghrabat, Guangzhi Ma, Ismail Yaqub Maolood, Shayem Saleh Alresheedi, and Zaid Ameen Abduljabbar, "An effective image retrieval based on optimized genetic algorithm utilized a novel SVM-based convolutional neural network classifier", Human-Centric Computing and Information Sciences, Springer, (2019) 9:31.

[21] Nur-A-Alam, Mominul Ahsan, Md. Abdul Based, Julfikar Haider and Marcin Kowalski, "COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning", Sensors, MDPI, 2021.
[22] Shwet Ketu and Pramod Kumar Mishra, "India perspective: CNN-LSTM hybrid deep learning model-based COVID-19 prediction and current status of medical resource availability", Soft Computing, 26:645-664, 2022.

[23] Suneeta Satpathy, Monika Mangla, Nonita Sharma, Hardik Deshmukh and Sachinandan Mohanty, "Predicting mortality rate and associated risks in COVID-19 patients", Spatial Information Research, 29(4):455-464, 2021.

[24] Foroogh Sharifzadeh, Gholamreza Akbarizadeh and Yousef Seifi Kavian, "Ship Classification in SAR Images Using a New Hybrid CNN-MLP Classifier", Journal of the Indian Society of Remote Sensing, 47(4):551-562, April 2019.

[25] Jayanthi, E., Ramesh, T., Kharat, R. S., Veeramanickam, M. R. M., Bharathiraja, N., Venkatesan, R., and Marappan, R., "Cybersecurity enhancement to detect credit card frauds in health care using new machine learning strategies", Soft Computing, 27(11), 7555-7565, 2023.

[26] Rajaram, A., and Sathiyaraj, K., "An improved optimization technique for energy harvesting system with grid-connected power for greenhouse management", Journal of Electrical Engineering & Technology, 17(5), 2937-2949, 2022.

[27] Vinod, D., Bharathiraja, N., Anand, M., and Antonidoss, A., "An improved security assurance model for collaborating small material business processes", Materials Today: Proceedings, 46, 4077-4081, 2021.

[28] S. UmaMaheswaran, G. Kaur, A. Pankajam, A. Firos, P. Vashistha, V. Tripathi, and H. S. Mohammed, "Empirical analysis for improving food quality using artificial intelligence technology for enhancing healthcare sector", Journal of Food Quality, vol. 2022, 2022.

Copyright ©2023 by the authors. Licensee Agora University, Oradea, Romania. This is an open access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International License.
Journal's webpage: http://univagora.ro/jour/index.php/ijccc/

This journal is a member of, and subscribes to the principles of, the Committee on Publication Ethics (COPE). https://publicationethics.org/members/international-journal-computers-communications-and-control

Cite this paper as: Alekhya, B.; Sasikumar, R.; Sathish Kumar, N.; Bharathiraja, N. (2023). Hybrid ICHO-HSDC Model For Accurate Covid-19 Detection and Classification From CT Scan And X-Ray Images, International Journal of Computers Communications & Control, 18(4), 5108, 2023. https://doi.org/10.15837/ijccc.2023.4.5108