<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CSSE</journal-id>
<journal-id journal-id-type="nlm-ta">CSSE</journal-id>
<journal-id journal-id-type="publisher-id">CSSE</journal-id>
<journal-title-group>
<journal-title>Computer Systems Science &#x0026; Engineering</journal-title>
</journal-title-group>
<issn pub-type="ppub">0267-6192</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">25282</article-id>
<article-id pub-id-type="doi">10.32604/csse.2023.025282</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Detection of COVID-19 and Pneumonia Using Deep Convolutional Neural Network</article-title>
<alt-title alt-title-type="left-running-head">Detection of COVID-19 and Pneumonia Using Deep Convolutional Neural Network</alt-title>
<alt-title alt-title-type="right-running-head">Detection of COVID-19 and Pneumonia Using Deep Convolutional Neural Network</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Islam</surname><given-names>Md. Saiful</given-names></name>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Das</surname><given-names>Shuvo Jyoti</given-names></name>
</contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Khan</surname><given-names>Md. Riajul Alam</given-names></name>
</contrib>
<contrib id="author-4" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Momen</surname><given-names>Sifat</given-names></name><email>sifat.momen@northsouth.edu</email>
</contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>Mohammed</surname><given-names>Nabeel</given-names></name>
</contrib>
<aff><institution>Department of Electrical and Computer Engineering, North South University</institution>, <addr-line>Plot 15, Block B, Bashundhara, Dhaka, 1229</addr-line>, <country>Bangladesh</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Sifat Momen. Email: <email>sifat.momen@northsouth.edu</email></corresp>
</author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2022-05-24"><day>24</day>
<month>05</month>
<year>2022</year></pub-date>
<volume>44</volume>
<issue>1</issue>
<fpage>519</fpage>
<lpage>534</lpage>
<history>
<date date-type="received"><day>18</day><month>11</month><year>2021</year></date>
<date date-type="accepted"><day>19</day><month>1</month><year>2022</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Islam et al.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Islam et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CSSE_25282.pdf"></self-uri>
<abstract>
<p>COVID-19 has created panic all around the globe. It is a contagious disease caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which originated in Wuhan in December 2019 and spread quickly all over the world. The healthcare sector worldwide is facing great challenges in tackling COVID cases. One widely witnessed problem is the misdiagnosis of COVID-19 cases as healthy or pneumonia cases. In this article, we propose a deep Convolutional Neural Network (CNN) based approach to detect COVID&#x002B; (i.e., patients with COVID-19), pneumonia and normal cases from chest X-ray images. Considering all aspects, COVID-19 detection from chest X-rays compares favourably with Reverse Transcription Polymerase Chain Reaction (RT-PCR) and Computed Tomography (CT) scans. Several deep CNN models, including VGG16, InceptionV3, DenseNet121, DenseNet201 and InceptionResNetV2, have been adopted in this work and trained individually to make predictions. Empirical results demonstrate that DenseNet201 provides the best overall performance, with accuracy, recall, F1-score and precision of 94.75%, 96%, 95% and 95% respectively. After careful comparison with results available in the literature, we find our models to be more reliable. All studies were carried out using a publicly available chest X-ray (CXR) image data-set.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>COVID-19</kwd>
<kwd>convolutional neural network</kwd>
<kwd>deep learning</kwd>
<kwd>DenseNet201</kwd>
<kwd>model performance</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>COVID-19 has thrown people&#x2019;s lives into disarray all around the world. A virus named SARS-CoV-2 is responsible for this contagious disease [<xref ref-type="bibr" rid="ref-1">1</xref>]. The first known case was identified in Wuhan, Hubei province, China in December 2019 [<xref ref-type="bibr" rid="ref-2">2</xref>]. Later it gradually spread to all parts of the world. As of November 15, 2021, the numbers of confirmed cases and deaths were 254,050,589 and 5,115,804 respectively [<xref ref-type="bibr" rid="ref-3">3</xref>]. Symptoms of COVID-19 vary from one patient to another; however, fever, cough and shortness of breath have been found to be the most common traits amongst infected people [<xref ref-type="bibr" rid="ref-4">4</xref>]. Through respiratory droplets, the virus can spread to others within six feet of an infected person [<xref ref-type="bibr" rid="ref-5">5</xref>]. The world is facing a massive disruption of the global economy. According to the World Bank, it will take 80 years to recover from this global economic meltdown [<xref ref-type="bibr" rid="ref-6">6</xref>].</p>
<p>COVID-19 can be detected using several techniques, including the RT-PCR test and image analysis of chest X-rays and CT scans. RT-PCR is an effective way of detecting COVID-19, but it takes time for sample collection [<xref ref-type="bibr" rid="ref-7">7</xref>] and requires special kits that may not be available everywhere [<xref ref-type="bibr" rid="ref-8">8</xref>]. The test gives reliable results but, depending on laboratory facilities, it may take on average six to eight hours to process each case [<xref ref-type="bibr" rid="ref-9">9</xref>]. Another way to diagnose COVID-19 is through the analysis of chest CT scan images [<xref ref-type="bibr" rid="ref-10">10</xref>]. People overexposed to radiation from CT scans have a high risk of cancer [<xref ref-type="bibr" rid="ref-11">11</xref>]. Due to COVID-19, the number of CT scans has increased significantly, increasing radiographers&#x2019; exposure to radiation and consequently their likelihood of developing cancer. CT scanning is also expensive, needs clinical expertise to operate and may not be available in underdeveloped regions of a country. Chest X-ray tests, in contrast, are inexpensive and carry a low radiation risk compared to CT scans [<xref ref-type="bibr" rid="ref-12">12</xref>]. The World Health Organization (WHO) finds chest X-ray imaging to be a very effective method in the diagnosis of COVID-19 [<xref ref-type="bibr" rid="ref-13">13</xref>].</p>
<sec id="s1_1">
<label>1.1</label>
<title>Goals and Objectives</title>
<p>In recent times, patients with pneumonia have been misdiagnosed as COVID patients. In this article, we aim to address this problem by developing an intelligent system able to classify whether a patient has pneumonia or COVID. Furthermore, the system is able to correctly identify normal cases. A comparison between five different deep CNN models is presented in this work. Comparing the models&#x2019; performances on a publicly available CXR image data-set from Kaggle [<xref ref-type="bibr" rid="ref-14">14</xref>], we selected the most accurate one.</p>
</sec>
<sec id="s1_2">
<label>1.2</label>
<title>Contributions</title>
<p>After the outbreak of coronavirus, doctors have been focusing more on COVID-19. In many cases, they begin COVID-19 management upon finding a few COVID-like symptoms in a patient [<xref ref-type="bibr" rid="ref-15">15</xref>], which would not have been done prior to this pandemic. As a result, it can lead to early misdiagnosis. Some symptoms of COVID-19 match those of other respiratory diseases (e.g., pneumonia), which is the main reason for the rising misdiagnosis issue. To tackle this issue, we employed five renowned deep CNN models to classify COVID&#x002B;, pneumonia and normal cases. Although the models are existing ones, our focus is on solving the three-class prediction problem.</p>
<p>The rest of this article is organized as follows: <xref ref-type="sec" rid="s2">Section 2</xref> presents the literature review. Methods and materials are provided in <xref ref-type="sec" rid="s3">Section 3</xref>, while <xref ref-type="sec" rid="s4">Section 4</xref> provides a detailed discussion of the results. Finally, the article is concluded in <xref ref-type="sec" rid="s5">Section 5</xref>.</p>
</sec>
</sec>
<sec id="s2">
<label>2</label>
<title>Literature Review</title>
<p>Thus far, a great deal of research has been done on detecting COVID-19 using CNN approaches. In this section, we discuss some studies that used chest X-ray images to classify COVID-19.</p>
<p>Azemin et al. [<xref ref-type="bibr" rid="ref-16">16</xref>] proposed a Deep Learning (DL) method based on the ResNet101 CNN architecture to detect COVID-19 from chest X-rays. ResNet101 was chosen for its residual learning framework, which gives the model lower computational complexity. In this study, thousands of chest X-ray images were utilized in the pre-training phase to distinguish meaningful objects, and thousands more were used in the re-training stage to detect abnormalities. This method obtained only 71.9% accuracy.</p>
<p>Nishio et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] built a Computer-Aided Diagnosis (CADx) system to classify COVID-19 pneumonia, non-COVID-19 pneumonia and healthy lungs. They adopted VGG16, trained on a customized data-set. To avoid noise in the data-set, lateral-view and CT images were excluded. A combination of three types of data augmentation (conventional methods such as flipping, shifting and rotating; mixup; and Random Image Cropping and Patching, RICAP) was applied to prevent over-fitting during model training. The CADx system achieves 83.6% accuracy over a test set containing 125 CXR images.</p>
<p>Rahaman et al. [<xref ref-type="bibr" rid="ref-18">18</xref>] trained 15 different CNN models and, after careful comparison in terms of precision, recall and F1-score, selected the most suitable one. All models were trained on a custom data-set consisting of CXR images of COVID-19, pneumonia and healthy patients. Various data augmentation techniques were applied to the training samples to improve model performance.</p>
<p>Khan et al. [<xref ref-type="bibr" rid="ref-19">19</xref>] presented a deep CNN model named CoroNet, which automatically detects COVID-19 infection from chest X-ray images. The proposed model was built on the Xception architecture, pre-trained on ImageNet and trained on a custom data-set. The model obtained an overall accuracy of 89.6%.</p>
<p>Erdem et al. [<xref ref-type="bibr" rid="ref-20">20</xref>] offered a comparison between six different DL models trained on a custom data-set. They applied transfer learning methods, and a data augmentation technique was adopted to reduce model over-fitting. Training was performed with a batch size of 32 and a learning rate of 0.0001. Among the six models, InceptionV3 provided the highest accuracy at 90%.</p>
<p>Hasan et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] reported a DL approach to detect pneumonia in COVID&#x002B; patients (i.e., patients who have been diagnosed with COVID-19). They worked on the same data-set that we used to train different CNN models. The Keras image data generator technique was used to perform data augmentation, after which the data-set was split into train and test sets in a ratio of 80% to 20%. The overall accuracy was 91.69% in predicting pneumonia in COVID&#x002B; patients. VGG16 provided the highest accuracy after epoch 7, when training and validation loss decreased and accuracy increased.</p>
<p>Abbas et al. [<xref ref-type="bibr" rid="ref-22">22</xref>] proposed a CNN architecture based on the DeTraC (Decompose, Transfer and Compose) model, which helps pre-trained CNN models improve their performance in classifying COVID&#x002B; cases from chest X-ray images. DeTraC is accomplished by adding a class decomposition layer to the pre-trained models. The class decomposition layer divides each class in the dataset into numerous sub-classes, assigns new labels to the new set, and treats each subset as an independent class before reassembling the subsets to produce the final predictions. The mechanism reported a highest accuracy of 93.1%.</p>
<p>Wang et al. [<xref ref-type="bibr" rid="ref-23">23</xref>] tailored a CNN model named COVID-Net, which predicts COVID-19 from CXR images using a human-machine collaborative design strategy. A benchmark data-set was used for training and evaluating the model. The model achieved 93.3% accuracy in three-class (COVID&#x002B;, pneumonia and normal) prediction.</p>
<table-wrap id="table-1"><label>Table 1</label>
<caption>
<title>Analysis of the discussed literature</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Literature</th>
<th>Approach</th>
<th>Pros</th>
<th>Classes predicted</th>
<th>Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td>Azemin et al. [<xref ref-type="bibr" rid="ref-16">16</xref>]</td>
<td>ResNet101</td>
<td>The model was evaluated over frontal-view chest X-ray images only</td>
<td>COVID&#x002B;, normal</td>
<td>71.9%</td>
</tr>
<tr>
<td>Nishio et al. [<xref ref-type="bibr" rid="ref-17">17</xref>]</td>
<td>VGG16</td>
<td>The developed CADx system improves model accuracy and robustness</td>
<td>COVID&#x002B;, pneumonia, normal</td>
<td>83.6%</td>
</tr>
<tr>
<td>Rahaman et al. [<xref ref-type="bibr" rid="ref-18">18</xref>]</td>
<td>VGG19</td>
<td>Transfer Learning (TL) was applied to overcome insufficient data and training time</td>
<td>COVID&#x002B;, pneumonia, normal</td>
<td>89.3%</td>
</tr>
<tr>
<td>Khan et al. [<xref ref-type="bibr" rid="ref-19">19</xref>]</td>
<td>CoroNet</td>
<td>Used a TL method to initialize the model with weighted parameters</td>
<td>COVID&#x002B;, pneumonia, normal</td>
<td>89.6%</td>
</tr>
<tr>
<td>Erdem et al. [<xref ref-type="bibr" rid="ref-20">20</xref>]</td>
<td>InceptionV3</td>
<td>Applied TL to overcome insufficient data and long training time</td>
<td>COVID&#x002B;, pneumonia, normal</td>
<td>90%</td>
</tr>
<tr>
<td>Hasan et al. [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td>VGG16</td>
<td>Machine Learning (ML) tools (i.e., LabelBinarizer) were used to transform the image labels into categorical form</td>
<td>Pneumonia in COVID-19</td>
<td>91.69%</td>
</tr>
<tr>
<td>Abbas et al. [<xref ref-type="bibr" rid="ref-22">22</xref>]</td>
<td>ResNet</td>
<td>The DeTraC method helps to improve model performance</td>
<td>COVID&#x002B;, SARS and normal case</td>
<td>93.1%</td>
</tr>
<tr>
<td>Wang et al. [<xref ref-type="bibr" rid="ref-23">23</xref>]</td>
<td>COVID-Net</td>
<td>COVID-Net, a hand-tailored human-machine collaborative design strategy, was employed to make predictions from images</td>
<td>COVID&#x002B;, pneumonia, normal</td>
<td>93.3%</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>All the discussed studies used CNN techniques to distinguish COVID-19 from other conditions (i.e., pneumonia and normal cases). Most of them solved the three-class prediction problem and achieved very good accuracy; our proposed model, however, achieves higher accuracy.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>Methods and Materials</title>
<p>In this article, we investigate, assess and analyze the influence of different CNN architectures using an image dataset containing chest X-ray images of patients who are healthy, COVID-infected or have pneumonia.</p>
<p>Before embarking on the training phase, data augmentation techniques were applied to tackle class imbalance issues. All images were reshaped to 224 &#x00D7; 224 pixels. All images in the data-set are taken from only one angle; this is a problem because it is impractical to always take the image from the same angle, and the applied data augmentation techniques tackle this issue as well. The system architecture is provided in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>System architecture</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-1.png"/>
</fig>
<sec id="s3_1">
<label>3.1</label>
<title>Data Collection &#x0026; Pre-processing</title>
<p>X-ray images of patients from [<xref ref-type="bibr" rid="ref-14">14</xref>] were used in this work. <xref ref-type="fig" rid="fig-2">Fig. 2</xref> shows sample images from this dataset. The dataset has already been divided into a train set and a test set: the train set holds 80% of the data, while the test set holds the remaining 20%.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Sample data</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-2.png"/>
</fig>
<p>After obtaining the dataset [<xref ref-type="bibr" rid="ref-14">14</xref>] from Kaggle, we applied some preprocessing tasks to it. Image data augmentation artificially increases the size of the training dataset by creating modified versions of its images, which improves the performance and generalization capability of the models used in this article. Padding, cropping, rotating, resizing and flipping are the most common augmentation methods, and these were applied to the images to increase the size of the dataset. The augmentation parameters used in this work are listed in <xref ref-type="table" rid="table-3">Tab. 3</xref>. <xref ref-type="fig" rid="fig-3">Fig. 3</xref> shows the different data augmentation techniques that we applied to the dataset.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Sample data augmentation</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-3.png"/>
</fig>
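<p>The augmentation settings described above map directly onto the Keras image data generator. The snippet below is an illustrative sketch of such a configuration applied to a dummy batch, not the authors&#x2019; exact code:</p>

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings mirroring the parameters used in this work
# (illustrative sketch; not the authors' published code)
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,       # rescaling 1/255
    rotation_range=20,       # rotation range 20
    zoom_range=0.2,          # zoom range 20%
    width_shift_range=0.2,   # width shifting 20%
    height_shift_range=0.2,  # height shifting 20%
    shear_range=0.10,        # shear range 10%
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode="nearest",
)

# Demonstrate on a dummy batch of 224 x 224 RGB images
images = np.random.randint(0, 256, size=(4, 224, 224, 3)).astype("float32")
batch = next(augmenter.flow(images, batch_size=4, shuffle=False))
print(batch.shape)  # augmented images keep the 224 x 224 x 3 shape
```

<p>In practice the generator would read the Kaggle data-set from disk, e.g., via flow_from_directory with target_size=(224, 224) and batch_size=16 (parameter names here follow the standard Keras API).</p>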
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Model</title>
<p>After the data pre-processing stage is complete, the data-set is divided into train and test sets, and the train set is fed into the CNN architectures. In this article, five deep CNN architectures have been evaluated: VGG16, InceptionV3, DenseNet121, DenseNet201 and InceptionResNetV2. All these architectures are briefly described below.</p>
<p>VGG16 is an architecture with 16 weight layers [<xref ref-type="bibr" rid="ref-24">24</xref>]. It comprises four types of layers: convolution &#x002B; ReLU, max pooling, fully connected &#x002B; ReLU and softmax. Max pooling is performed over 2 &#x00D7; 2 pixel windows [<xref ref-type="bibr" rid="ref-24">24</xref>]. <xref ref-type="fig" rid="fig-4">Fig. 4</xref> [<xref ref-type="bibr" rid="ref-24">24</xref>] shows the basic network architecture of VGG16.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>VGG16 architecture</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-4.png"/>
</fig>
<p>In the InceptionV3 model, factorized convolutions form the initial part of the network and help keep the computational cost in check [<xref ref-type="bibr" rid="ref-25">25</xref>]. An auxiliary classifier is used during training to improve convergence and act as a regularizer [<xref ref-type="bibr" rid="ref-25">25</xref>]. <xref ref-type="fig" rid="fig-5">Fig. 5</xref> [<xref ref-type="bibr" rid="ref-25">25</xref>] shows the basic network architecture of InceptionV3.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>InceptionV3 architecture</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-5.png"/>
</fig>
<p>We used the DenseNet architecture in our work because it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters [<xref ref-type="bibr" rid="ref-26">26</xref>]. In our work we have used DenseNet121 and DenseNet201, which comprise 121 and 201 layers respectively. <xref ref-type="fig" rid="fig-6">Fig. 6</xref> [<xref ref-type="bibr" rid="ref-26">26</xref>] shows the DenseNet architecture.</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>DenseNet architecture</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-6.png"/>
</fig>
<p>We have also used a renowned CNN model belonging to the Inception family of architectures with residual connections incorporated [<xref ref-type="bibr" rid="ref-27">27</xref>]. InceptionResNetV2 is a deep CNN that expects input images of 299 &#x00D7; 299 pixels. <xref ref-type="fig" rid="fig-7">Fig. 7</xref> shows the basic network architecture of InceptionResNetV2 [<xref ref-type="bibr" rid="ref-27">27</xref>].</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>InceptionResNetV2 architecture</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-7.png"/>
</fig>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Testing Process</title>
<p>The most important aspect of an article is whether or not the processes used work properly. To investigate this, some testing must be performed, as illustrated by the flow chart in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>. The trained model is loaded first, whether it is VGG16, DenseNet, InceptionV3 or InceptionResNetV2, and then a new (unseen) image is provided as input. The model then classifies the image and displays the result, i.e., whether or not the image shows a COVID infection.</p>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Result process</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-8.png"/>
</fig>
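<p>The testing flow described above can be sketched as a small helper that wraps any trained Keras-style model. The class ordering and helper names here are assumptions for illustration (Keras assigns class indices alphabetically when reading from directories), not the authors&#x2019; code:</p>

```python
import numpy as np

# Assumed alphabetical class ordering (hypothetical; it depends on
# how the training data directories were named and loaded)
CLASS_NAMES = ["COVID-19", "Normal", "Pneumonia"]

def preprocess(image):
    """Rescale an HxWx3 uint8 chest X-ray to [0, 1] and add a batch axis."""
    return image.astype("float32")[np.newaxis, ...] / 255.0

def classify(model, image):
    """Run one unseen image through a trained model and return the predicted label."""
    probabilities = model.predict(preprocess(image))[0]
    return CLASS_NAMES[int(np.argmax(probabilities))]
```

<p>With a saved model this would be used as, e.g., classify(load_model("densenet201.h5"), image), where the file name is hypothetical.</p>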
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Results and Discussion</title>
<p>The experiment was run on a 64-bit version of Windows 11 using Python 3.6 as the development language in a Jupyter notebook. To build and train the models, the entire experiment was implemented with Keras using the TensorFlow framework as the back-end. The whole application was run on a computer with an Intel&#x00AE; Core-TM i5-8250U processor (6M cache, 1.60 GHz to 3.40 GHz) and 8 GB RAM. To begin, the data were divided into two categories, 80% training data and 20% testing data with a fixed random state, as shown in <xref ref-type="table" rid="table-2">Tab. 2</xref>; the test dataset also served as the validation dataset for our models. During training and validation, we investigated the learning curves obtained by all backbone models with fine-tuned parameter weights. We used the Keras image data generator technique to perform data augmentation on the images, enhancing the number of samples in the dataset and improving the classification models&#x2019; performance. Our image augmentation parameters were a rotation range of 20, a zoom range of 20%, a width shift range of 0.2, a height shift range of 0.2, a shear range of 0.10, and vertical and horizontal flipping. The number of epochs was set at 50 for consistency of results and due to the size of the dataset. All hyperparameters are shown in <xref ref-type="table" rid="table-3">Tab. 3</xref>. As optimization settings, a learning rate of 0.001 with the Adam optimizer and the categorical cross-entropy loss function were used in this experiment. We employed the fine-tuning technique to optimize the total trainable parameter weights; even when there is enough training data, fine-tuning is preferred because it considerably reduces training time. Each trained model consists of a pretrained model (base-model) and a new model (head-model). Each layer in Keras has a parameter called &#x201C;trainable&#x201D;, which has been set to &#x201C;non-trainable&#x201D; in the base-model, so that only the head-model contributes to the total trainable parameters. <xref ref-type="table" rid="table-4">Tab. 4</xref> shows the total trainable parameters for the particular CNN architectures trained with our data-set.</p>
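<p>The base-model/head-model construction can be sketched as follows. The head layout (global average pooling, a 64-unit ReLU layer and a 3-way softmax) is our reconstruction from the trainable-parameter counts reported in <xref ref-type="table" rid="table-4">Tab. 4</xref> (e.g., 1024 &#x00D7; 64 &#x002B; 64 &#x002B; 64 &#x00D7; 3 &#x002B; 3 = 65,795 for DenseNet121), not code published by the authors:</p>

```python
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# Pretrained backbone without its classifier. The paper uses transfer learning
# (weights="imagenet"); weights=None is used here only so the sketch runs offline.
base_model = DenseNet121(weights=None, include_top=False,
                         input_tensor=Input(shape=(224, 224, 3)))
base_model.trainable = False  # freeze the base-model; only the head-model trains

# Head-model reconstructed from the parameter counts in Tab. 4 (an assumption)
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(64, activation="relu")(x)
outputs = Dense(3, activation="softmax")(x)  # COVID+, pneumonia, normal

model = Model(base_model.input, outputs)
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Count the trainable parameters contributed by the head-model
trainable = int(sum(int(np.prod(w.shape)) for w in model.trainable_weights))
print(trainable)  # 65795, matching the DenseNet121 row of Tab. 4
```

<p>Swapping DenseNet121 for the other backbones reproduces the remaining counts in <xref ref-type="table" rid="table-4">Tab. 4</xref>, since only the feature dimension fed to the 64-unit layer changes.</p>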
<table-wrap id="table-2"><label>Table 2</label>
<caption>
<title>Datasets for training and validation</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th style="background:#FFFFFF;">Dataset</th>
<th style="background:#FFFFFF;">Number of images in the dataset</th>
<th style="background:#FFFFFF;">Class</th>
<th style="background:#FFFFFF;">Number of images</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">Training data</td>
<td rowspan="3">5144</td>
<td>COVID-19</td>
<td>460</td>
</tr>
<tr>
<td>Normal</td>
<td>1266</td>
</tr>
<tr>
<td>Pneumonia</td>
<td>3418</td>
</tr>
<tr>
<td rowspan="3">Testing data</td>
<td rowspan="3">1288</td>
<td>COVID-19</td>
<td>116</td>
</tr>
<tr>
<td>Normal</td>
<td>317</td>
</tr>
<tr>
<td>Pneumonia</td>
<td>855</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-3"><label>Table 3</label>
<caption>
<title>Model training parameters and functions</title></caption>
<table><colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th style="background:#FFFFFF;">Parameters</th>
<th style="background:#FFFFFF;">Values/types</th>
</tr>
</thead>
<tbody>
<tr>
<td style="background:#FFFFFF;">Epoch</td>
<td style="background:#FFFFFF;">50</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Initial learning rate</td>
<td style="background:#FFFFFF;">0.001</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Batch size</td>
<td style="background:#FFFFFF;">16</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Optimizer</td>
<td style="background:#FFFFFF;">Adam</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Execution environment</td>
<td style="background:#FFFFFF;">CPU</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Shuffling</td>
<td style="background:#FFFFFF;">Each epoch</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Loss function</td>
<td style="background:#FFFFFF;">Categorical cross-entropy</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Rotation and Zoom range</td>
<td style="background:#FFFFFF;">20%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Width and height shifting</td>
<td style="background:#FFFFFF;">20%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Shear range</td>
<td style="background:#FFFFFF;">10%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Horizontal and Vertical flip</td>
<td style="background:#FFFFFF;">Yes</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Rescaling</td>
<td style="background:#FFFFFF;">1/255</td>
</tr>
<tr>
<td style="background:#FFFFFF;">Fill mode</td>
<td style="background:#FFFFFF;">Nearest</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-4"><label>Table 4</label>
<caption>
<title>Total trainable parameters</title></caption>
<table><colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th>Total trainable parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>DenseNet121</td>
<td>65795</td>
</tr>
<tr>
<td>InceptionResNetV2</td>
<td>98563</td>
</tr>
<tr>
<td>InceptionV3</td>
<td>131331</td>
</tr>
<tr>
<td>VGG16</td>
<td>33027</td>
</tr>
<tr>
<td>DenseNet201</td>
<td>123139</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>For every fine-tuned model, the accuracy and loss results of the training and validation procedures are given in <xref ref-type="table" rid="table-5">Tab. 5</xref>, which shows the best-epoch results for our trained models. We can further conclude that the DenseNet201 architecture provided the best training and validation accuracies.</p>
<table-wrap id="table-5"><label>Table 5</label>
<caption>
<title>Overall classification accuracy and loss of the models at best epoch</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th style="background:#FFFFFF;">Model</th>
<th style="background:#FFFFFF;">Best epoch</th>
<th style="background:#FFFFFF;">Training accuracy</th>
<th style="background:#FFFFFF;">Validation accuracy</th>
<th style="background:#FFFFFF;">Training loss</th>
<th style="background:#FFFFFF;">Validation loss</th>
</tr>
</thead>
<tbody>
<tr>
<td style="background:#FFFFFF;">InceptionV3</td>
<td style="background:#FFFFFF;">46,50</td>
<td style="background:#FFFFFF;">95.82%</td>
<td style="background:#FFFFFF;">95.01%</td>
<td style="background:#FFFFFF;">12.80%</td>
<td style="background:#FFFFFF;">17.24%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">DenseNet121</td>
<td style="background:#FFFFFF;">42,50</td>
<td style="background:#FFFFFF;">97.21%</td>
<td style="background:#FFFFFF;">95.92%</td>
<td style="background:#FFFFFF;">7.3%</td>
<td style="background:#FFFFFF;">12.23%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">VGG16</td>
<td style="background:#FFFFFF;">21,41</td>
<td style="background:#FFFFFF;">94.06%</td>
<td style="background:#FFFFFF;">94.88%</td>
<td style="background:#FFFFFF;">16.28%</td>
<td style="background:#FFFFFF;">14.71%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">InceptionResNetV2</td>
<td style="background:#FFFFFF;">42,50</td>
<td style="background:#FFFFFF;">96.14%</td>
<td style="background:#FFFFFF;">94.66%</td>
<td style="background:#FFFFFF;">12.14%</td>
<td style="background:#FFFFFF;">16.30%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">DenseNet201</td>
<td style="background:#FFFFFF;">40</td>
<td style="background:#FFFFFF;"><bold>98.06%</bold></td>
<td style="background:#FFFFFF;"><bold>96.80%</bold></td>
<td style="background:#FFFFFF;"><bold>5.6%</bold></td>
<td style="background:#FFFFFF;"><bold>9.02%</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The training and validation curves for all models are shown in <xref ref-type="fig" rid="fig-9">Figs. 9</xref>&#x2013;<xref ref-type="fig" rid="fig-13">13</xref>. These figures show how training and validation accuracy and loss change over successive epochs. The number of epochs is plotted on the x-axis, and the accuracy or loss is plotted on the y-axis.</p>
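<p>The best-epoch figures reported above can be read directly off such a training history. A minimal sketch, assuming a Keras-style history dictionary (the values below are illustrative, not the paper's actual training logs):</p>

```python
def best_epoch(history):
    """Return the 1-indexed epoch with the highest validation accuracy,
    along with the accuracy and loss values recorded at that epoch."""
    val_acc = history["val_accuracy"]
    idx = max(range(len(val_acc)), key=val_acc.__getitem__)
    return {
        "epoch": idx + 1,
        "train_acc": history["accuracy"][idx],
        "val_acc": val_acc[idx],
        "train_loss": history["loss"][idx],
        "val_loss": history["val_loss"][idx],
    }

# Illustrative 3-epoch run (with model.fit this would be model.fit(...).history)
history = {
    "accuracy":     [0.90, 0.95, 0.98],
    "val_accuracy": [0.89, 0.96, 0.94],
    "loss":         [0.30, 0.12, 0.06],
    "val_loss":     [0.35, 0.10, 0.15],
}
print(best_epoch(history))  # best validation accuracy occurs at epoch 2
```

<p>Selecting the epoch by validation accuracy rather than training accuracy is what guards against reporting an overfitted model.</p>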
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Training and validation accuracy and loss of the VGG16 model</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-9.png"/>
</fig>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Training and validation accuracy and loss of the DenseNet201 model</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-10.png"/>
</fig>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Training and validation accuracy and loss of the InceptionResNetV2 model</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-11.png"/>
</fig>
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>Training and validation accuracy and loss of the InceptionV3 model</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-12.png"/>
</fig>
<fig id="fig-13">
<label>Figure 13</label>
<caption>
<title>Training and validation accuracy and loss of the DenseNet121 model</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-13.png"/>
</fig>
<p>The confusion matrix is used to derive the classification metrics (precision, recall, F1-score and accuracy) that assess a model&#x2019;s performance. <xref ref-type="fig" rid="fig-14">Fig. 14</xref> shows the confusion matrices for the five models used in this study.</p>
<fig id="fig-14">
<label>Figure 14</label>
<caption>
<title>3-class classification confusion matrix</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_25282-fig-14.png"/>
</fig>
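<p>The per-class metrics reported below follow from the confusion matrix in the standard way: for each class, true positives sit on the diagonal, false positives in the rest of its column, and false negatives in the rest of its row. A minimal sketch (the matrix below is a toy example, not the paper's data):</p>

```python
def per_class_metrics(cm, labels):
    """Compute precision, recall and F1 per class from a confusion matrix,
    where cm[i][j] counts samples of true class i predicted as class j."""
    n = len(labels)
    out = {}
    for i, label in enumerate(labels):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n)) - tp  # predicted i, true class differs
        fn = sum(cm[i][c] for c in range(n)) - tp  # true class i, predicted otherwise
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        out[label] = {"precision": precision, "recall": recall, "f1": f1}
    return out

# Toy 3-class confusion matrix (rows: true, columns: predicted)
cm = [[95, 3, 2],
      [4, 90, 6],
      [1, 5, 94]]
metrics = per_class_metrics(cm, ["COVID-19", "NORMAL", "PNEUMONIA"])
```

<p>On this toy matrix, the COVID-19 class yields precision, recall and F1 of 0.95 each. In practice the same numbers can be obtained with scikit-learn's <monospace>classification_report</monospace>.</p>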
<p><xref ref-type="table" rid="table-6">Tab. 6</xref> presents the per-class classification results of our trained models for the three classes (COVID-19, NORMAL and PNEUMONIA). All models classify COVID&#x002B; and pneumonia images with very high performance, while their performance on normal images is slightly lower. DenseNet121 and DenseNet201 achieve the best precision, recall (sensitivity) and F1-score on the COVID&#x002B; and pneumonia classes. InceptionV3, DenseNet121 and DenseNet201 also perform well, in terms of precision, recall (sensitivity) and F1-score, on the normal class.</p>
<table-wrap id="table-6"><label>Table 6</label>
<caption>
<title>Performance results from all of the evaluation models for each class</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th style="background:#FFFFFF;">Model</th>
<th style="background:#FFFFFF;">Class</th>
<th style="background:#FFFFFF;">Precision</th>
<th style="background:#FFFFFF;">Recall</th>
<th style="background:#FFFFFF;">F1-Score</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3" style="background:#FFFFFF;">Inception v3</td>
<td style="background:#FFFFFF;">COVID-19</td>
<td style="background:#FFFFFF;">95%</td>
<td style="background:#FFFFFF;">97%</td>
<td style="background:#FFFFFF;">96%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">NORMAL</td>
<td style="background:#FFFFFF;">92%</td>
<td style="background:#FFFFFF;">89%</td>
<td style="background:#FFFFFF;">90%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">PNEUMONIA</td>
<td style="background:#FFFFFF;">96%</td>
<td style="background:#FFFFFF;">97%</td>
<td style="background:#FFFFFF;">96%</td>
</tr>
<tr>
<td rowspan="3" style="background:#FFFFFF;">DenseNet121</td>
<td style="background:#FFFFFF;">COVID-19</td>
<td style="background:#FFFFFF;">99%</td>
<td style="background:#FFFFFF;">99%</td>
<td style="background:#FFFFFF;">99%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">NORMAL</td>
<td style="background:#FFFFFF;">86%</td>
<td style="background:#FFFFFF;">96%</td>
<td style="background:#FFFFFF;">91%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">PNEUMONIA</td>
<td style="background:#FFFFFF;">98%</td>
<td style="background:#FFFFFF;">94%</td>
<td style="background:#FFFFFF;">96%</td>
</tr>
<tr>
<td rowspan="3" style="background:#FFFFFF;">VGG16</td>
<td style="background:#FFFFFF;">COVID-19</td>
<td style="background:#FFFFFF;">100%</td>
<td style="background:#FFFFFF;">97%</td>
<td style="background:#FFFFFF;">98%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">NORMAL</td>
<td style="background:#FFFFFF;">80%</td>
<td style="background:#FFFFFF;">95%</td>
<td style="background:#FFFFFF;">86%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">PNEUMONIA</td>
<td style="background:#FFFFFF;">98%</td>
<td style="background:#FFFFFF;">91%</td>
<td style="background:#FFFFFF;">94%</td>
</tr>
<tr>
<td rowspan="3" style="background:#FFFFFF;">Inception Resnet v2</td>
<td style="background:#FFFFFF;">COVID-19</td>
<td style="background:#FFFFFF;">95%</td>
<td style="background:#FFFFFF;">99%</td>
<td style="background:#FFFFFF;">97%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">NORMAL</td>
<td style="background:#FFFFFF;">84%</td>
<td style="background:#FFFFFF;">95%</td>
<td style="background:#FFFFFF;">89%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">PNEUMONIA</td>
<td style="background:#FFFFFF;">98%</td>
<td style="background:#FFFFFF;">93%</td>
<td style="background:#FFFFFF;">95%</td>
</tr>
<tr>
<td rowspan="3" style="background:#FFFFFF;">DenseNet201</td>
<td style="background:#FFFFFF;">COVID-19</td>
<td style="background:#FFFFFF;">100%</td>
<td style="background:#FFFFFF;">92%</td>
<td style="background:#FFFFFF;">96%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">NORMAL</td>
<td style="background:#FFFFFF;">81%</td>
<td style="background:#FFFFFF;">99%</td>
<td style="background:#FFFFFF;">89%</td>
</tr>
<tr>
<td style="background:#FFFFFF;">PNEUMONIA</td>
<td style="background:#FFFFFF;">99%</td>
<td style="background:#FFFFFF;">92%</td>
<td style="background:#FFFFFF;">95%</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The model accuracy and macro-averaged precision, recall and F1-score of the pre-trained models are compared in <xref ref-type="table" rid="table-7">Tab. 7</xref>. The macro-average indicates how well a system performs overall across the data-set: it computes precision, recall and F1-score for each label and averages them without weighting by each label&#x2019;s proportion in the data-set. DenseNet201 achieved the highest model accuracy of 94.75%, followed by InceptionV3 with 94.53% and InceptionResnetV2 with 93.78%. All models perform well on these measures, but DenseNet121 shows consistently better macro-averaged results than the other models (precision 95%, recall 96% and F1-score 95%).</p>
<table-wrap id="table-7"><label>Table 7</label>
<caption>
<title>Performance score</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th>Model accuracy</th>
<th>Precision</th>
<th>Recall</th>
<th>F1-Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>InceptionV3</td>
<td>94.53%</td>
<td>95%</td>
<td>95%</td>
<td>95%</td>
</tr>
<tr>
<td>DenseNet121</td>
<td>93.50%</td>
<td>95%</td>
<td><bold>96%</bold></td>
<td><bold>95%</bold></td>
</tr>
<tr>
<td>VGG16</td>
<td>92.39%</td>
<td>93%</td>
<td>94%</td>
<td>93%</td>
</tr>
<tr>
<td>InceptionResnetV2</td>
<td>93.78%</td>
<td>94%</td>
<td>95%</td>
<td>94%</td>
</tr>
<tr>
<td>DenseNet201</td>
<td><bold>94.75%</bold></td>
<td><bold>95%</bold></td>
<td>94%</td>
<td>94%</td>
</tr>
</tbody>
</table>
</table-wrap>
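<p>The macro-averages in Tab. 7 are simple unweighted means of the per-class scores. A minimal sketch, seeded with DenseNet121's rounded per-class scores from Tab. 6 (so the result agrees with Tab. 7 only up to rounding):</p>

```python
def macro_average(per_class):
    """Unweighted mean of per-class precision/recall/F1: every class counts
    equally, regardless of its share of the data-set."""
    n = len(per_class)
    return {k: sum(m[k] for m in per_class.values()) / n
            for k in ("precision", "recall", "f1")}

# DenseNet121's per-class scores as rounded in Tab. 6
per_class = {
    "COVID-19":  {"precision": 0.99, "recall": 0.99, "f1": 0.99},
    "NORMAL":    {"precision": 0.86, "recall": 0.96, "f1": 0.91},
    "PNEUMONIA": {"precision": 0.98, "recall": 0.94, "f1": 0.96},
}
print(macro_average(per_class))
```

<p>This unweighted averaging is what distinguishes the macro-average from the weighted (support-proportional) average, which would favor the majority class.</p>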
<p><xref ref-type="table" rid="table-8">Tab. 8</xref> compares our proposed work with notable works in the literature. We compare accuracy only, since most of the studies listed in <xref ref-type="table" rid="table-8">Tab. 8</xref> report only this metric. Note that, alongside accuracy, the other performance metrics of our models are also high.</p>
<table-wrap id="table-8"><label>Table 8</label>
<caption>
<title>A comparison of the proposed model to other existing deep learning-based studies</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th style="background:#FFFFFF;">Reference</th>
<th style="background:#FFFFFF;">Model</th>
<th style="background:#FFFFFF;">Prediction</th>
<th style="background:#FFFFFF;">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td>Azemin et al. [<xref ref-type="bibr" rid="ref-16">16</xref>]</td>
<td>ResNet-101</td>
<td>COVID&#x002B; and normal</td>
<td>71.9%</td>
</tr>
<tr>
<td>Nishio et al. [<xref ref-type="bibr" rid="ref-17">17</xref>]</td>
<td>VGG-16</td>
<td>COVID-19 pneumonia, non-COVID-19 pneumonia and healthy lung</td>
<td>83.6%</td>
</tr>
<tr>
<td>Rahaman et al. [<xref ref-type="bibr" rid="ref-18">18</xref>]</td>
<td>VGG-19</td>
<td>COVID&#x002B;, pneumonia and normal</td>
<td>89.3%</td>
</tr>
<tr>
<td>Khan et al. [<xref ref-type="bibr" rid="ref-19">19</xref>]</td>
<td>CoroNet</td>
<td>COVID&#x002B;, pneumonia (bacterial and viral) and normal</td>
<td>89.6%</td>
</tr>
<tr>
<td>Erdem et al. [<xref ref-type="bibr" rid="ref-20">20</xref>]</td>
<td>InceptionV3</td>
<td>COVID&#x002B;, pneumonia and normal</td>
<td>90%</td>
</tr>
<tr>
<td>Hasan et al. [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td>VGG-16</td>
<td>Pneumonia in COVID-19</td>
<td>91.69%</td>
</tr>
<tr>
<td>Abbas et al. [<xref ref-type="bibr" rid="ref-22">22</xref>]</td>
<td>ResNet (with DeTraC)</td>
<td>COVID&#x002B;, SARS case and normal</td>
<td>93.1%</td>
</tr>
<tr>
<td>Wang et al. [<xref ref-type="bibr" rid="ref-23">23</xref>]</td>
<td>COVID-Net</td>
<td>COVID&#x002B;, pneumonia and normal</td>
<td>93.3%</td>
</tr>
<tr>
<td>Proposed method</td>
<td>DenseNet201</td>
<td>COVID&#x002B;, pneumonia and normal</td>
<td><bold>94.75%</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion</title>
<p>In this article, a multi-class deep CNN model was designed for detecting COVID-infected, pneumonia and normal cases. As there have been a number of misdiagnoses among healthy, COVID-infected and pneumonia patients, our work focuses on models that can classify these three cases reliably. We applied several deep CNN architectures, namely VGG16, InceptionResnetV2, DenseNet121, DenseNet201 and InceptionV3, after careful pre-processing. Using the COVID-19 and pneumonia patient data-set, we demonstrated a mechanism for selecting an appropriate model for estimating and predicting the desired parameters. Empirical results demonstrate that DenseNet201 provides the best overall accuracy of 94.75%, with macro-averaged precision, recall and F1-score of 95%, 94% and 94% respectively. A careful comparison with results available in the literature shows that our model achieves higher accuracy and is more reliable than comparable approaches. In future work, we aim to collect more data to enrich our models.</p>
</sec>
</body>
<back><fn-group>
<fn fn-type="other">
<p><bold>Funding Statement:</bold> The authors received no specific funding for this study.</p>
</fn>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S. F.</given-names> <surname>Pedersen</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Ho</surname></string-name></person-group>, &#x201C;<article-title>Sars-cov-2: A storm is raging</article-title>,&#x201D; <source>Journal of Clinical Investigation</source>, vol. <volume>130</volume>, no. <issue>5</issue>, pp. <fpage>2202</fpage>&#x2013;<lpage>2205</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T.</given-names> <surname>Singhal</surname></string-name></person-group>, &#x201C;<article-title>A review of coronavirus disease-2019 (COVID-19)</article-title>,&#x201D; <source>Indian Journal of Pediatrics</source>, vol. <volume>87</volume>, no. <issue>4</issue>, pp. <fpage>281</fpage>&#x2013;<lpage>286</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab>&#x201C;COVID-19 coronavirus pandemic</collab></person-group>,&#x201D; <comment>[Online]. Available:</comment> <uri>https://www.worldometers.info/coronavirus/</uri>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y. R.</given-names> <surname>Nobel</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Phipps</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zucker</surname></string-name>, <string-name><given-names>T. C.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>M. E.</given-names> <surname>Sobieszczyk</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Gastrointestinal symptoms and coronavirus disease 2019: A case-control study from the United States</article-title>,&#x201D; <source>Gastroenterology</source>, vol. <volume>159</volume>, no. <issue>1</issue>, pp. <fpage>373</fpage>&#x2013;<lpage>375</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. N.</given-names> <surname>Desai</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Patel</surname></string-name></person-group>, &#x201C;<article-title>Stopping the spread of COVID-19</article-title>,&#x201D; <source>JAMA Patient Page</source>, vol. <volume>323</volume>, no. <issue>15</issue>, pp. <fpage>1516</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab>The global economy: On track for strong but uneven growth as COVID-19 still weighs</collab></person-group>, <year>2021</year>. [Online]. Available: <uri>https://www.worldbank.org/en/news/feature/2021/06/08/the-global-economy-on-track-for-strong-but-uneven-growth-as-covid-19-still-weighs</uri>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R. S.</given-names> <surname>Khan</surname></string-name> and <string-name><given-names>I. U.</given-names> <surname>Rehman</surname></string-name></person-group>, &#x201C;<article-title>Spectroscopy as a tool for detection and monitoring of coronavirus (COVID-19)</article-title>,&#x201D; <source>Expert Review of Molecular Diagnostics</source>, vol. <volume>20</volume>, no. <issue>7</issue>, pp. <fpage>647</fpage>&#x2013;<lpage>649</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Akst</surname></string-name></person-group>, &#x201C;<article-title>RNA extraction kits for COVID-19 tests are in short supply in US</article-title>,&#x201D; <year>2020</year>. [Online]. Available: <uri>https://www.the-scientist.com/news-opinion/rna-extraction-kits-for-covid-19-tests-are-in-short-supply-in-us-67250</uri>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Jawerth</surname></string-name></person-group>, &#x201C;<article-title>How is the COVID-19 virus detected using real time RT-PCR</article-title>,&#x201D; <year>2020</year>. [Online] Available: <uri>https://www.iaea.org/sites/default/files/6120811.pdf</uri>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Ahuja</surname></string-name>, <string-name><given-names>B. K.</given-names> <surname>Panigrahi</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Dey</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Rajinikanth</surname></string-name> and <string-name><given-names>T. K.</given-names> <surname>Gandhi</surname></string-name></person-group>, &#x201C;<article-title>Deep transfer learning-based automated detection of COVID-19 from lung CT scan slices</article-title>,&#x201D; <source>Applied Intelligence</source>, vol. <volume>51</volume>, no. <issue>1</issue>, pp. <fpage>571</fpage>&#x2013;<lpage>585</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Bajoghli</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Bajoghli</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Tayari</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Rouzbahani</surname></string-name></person-group>, &#x201C;<article-title>Children, CT scan and radiation</article-title>,&#x201D; <source>International Journal of Preventive Medicine</source>, vol. <volume>1</volume>, no. <issue>4</issue>, pp. <fpage>220</fpage>&#x2013;<lpage>222</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab>&#x201C;Radiation risk from medical imaging</collab></person-group>,&#x201D; <year>2021</year>. [Online]. Available: <uri>https://www.health.harvard.edu/cancer/radiation-risk-from-medical-imaging</uri>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E. A.</given-names> <surname>Akl</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Bla&#x017E;i&#x0107;</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Yaacoub</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Frija</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Chou</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Use of chest imaging in the diagnosis and management of COVID-19: A WHO rapid advice guide</article-title>,&#x201D; <source>Radiology</source>, vol. <volume>298</volume>, no. <issue>2</issue>, pp. <fpage>E63</fpage>&#x2013;<lpage>E69</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="other"><person-group person-group-type="author">P. Patel, <collab>&#x201C;Chest X-ray (COVID-19 &#x0026; pneumonia)</collab></person-group>,&#x201D; <year>2020</year>. [Online]. Available: <uri>https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia</uri>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Yousefzai</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Bhimaraj</surname></string-name></person-group>, &#x201C;<article-title>Misdiagnosis in the COVID-19 era: When zebras are everywhere, don&#x2019;t forget the horses</article-title>,&#x201D; <source>JACC: Case Reports</source>, vol. <volume>2</volume>, no. <issue>10</issue>, pp. <fpage>1614</fpage>&#x2013;<lpage>1619</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. Z. C.</given-names> <surname>Azemin</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Hassan</surname></string-name>, <string-name><given-names>M. I. M.</given-names> <surname>Tamrin</surname></string-name> and <string-name><given-names>M. A. M.</given-names> <surname>Ali</surname></string-name></person-group>, &#x201C;<article-title>COVID-19 deep learning prediction model using publicly available radiologist-adjudicated chest X-ray images as training data: Preliminary findings</article-title>,&#x201D; <source>International Journal of Biomedical Imaging</source>, vol. <volume>2020</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>7</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Nishio</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Noguchi</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Matsuo</surname></string-name> and <string-name><given-names>T.</given-names> <surname>Murakami</surname></string-name></person-group>, &#x201C;<article-title>Automatic classification between COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy on chest X-ray image: Combination of data augmentation methods</article-title>,&#x201D; <source>Scientific Reports</source>, vol. <volume>10</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. M.</given-names> <surname>Rahaman</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Yao</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Kulwa</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Rahman</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Identification of COVID-19 samples from chest X-ray images using deep learning: A comparison of transfer learning approaches</article-title>,&#x201D; <source>Journal of X-ray Science and Technology</source>, vol. <volume>28</volume>, no. <issue>5</issue>, pp. <fpage>821</fpage>&#x2013;<lpage>839</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. I.</given-names> <surname>Khan</surname></string-name>, <string-name><given-names>J. L.</given-names> <surname>Shah</surname></string-name> and <string-name><given-names>M. M.</given-names> <surname>Bhat</surname></string-name></person-group>, &#x201C;<article-title>Coronet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images</article-title>,&#x201D; <source>Computer Methods and Programs in Biomedicine</source>, vol. <volume>196</volume>, no. <issue>18</issue>, pp. <fpage>9</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>Erdem</surname></string-name> and <string-name><given-names>T.</given-names> <surname>Ayd&#x0131;n</surname></string-name></person-group>, &#x201C;<article-title>COVID-19 detection in chest X-ray images using deep learning</article-title>,&#x201D; <source>Research Square (Preprint)</source>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. D. K.</given-names> <surname>Hasan</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Ahmed</surname></string-name>, <string-name><given-names>Z. M. E.</given-names> <surname>Abdullah</surname></string-name>, <string-name><given-names>M. M.</given-names> <surname>Khan</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Anand</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Deep learning approaches for detecting pneumonia in COVID-19 patients by analyzing chest X-ray images</article-title>,&#x201D; <source>Mathematical Problems in Engineering</source>, vol. <volume>2021</volume>, no. <issue>1</issue>, pp. <fpage>8</fpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Abbas</surname></string-name>, <string-name><given-names>M. M.</given-names> <surname>Abdelsamea</surname></string-name> and <string-name><given-names>M. M.</given-names> <surname>Gaber</surname></string-name></person-group>, &#x201C;<article-title>Classification of COVID-19 in chest X-ray images using detrac deep convolutional neural network</article-title>,&#x201D; <source>Applied Intelligence</source>, vol. <volume>51</volume>, no. <issue>2</issue>, pp. <fpage>854</fpage>&#x2013;<lpage>864</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Z. Q.</given-names> <surname>Lin</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Wong</surname></string-name></person-group>, &#x201C;<article-title>COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images</article-title>,&#x201D; <source>Scientific Reports</source>, vol. <volume>10</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>12</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>M. U.</given-names> <surname>Hassan</surname></string-name></person-group>, &#x201C;<article-title>VGG16-convolutional network for classification and detection</article-title>,&#x201D; <year>2018</year>. [Online]. Available: <uri>https://neurohive.io/en/popular-networks/vgg16/</uri>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>V.</given-names> <surname>Kurama</surname></string-name></person-group>, &#x201C;<article-title>A review of popular deep learning architectures: ResNet, InceptionV3, and SqueezeNet</article-title>,&#x201D; <year>2020</year>. [Online]. Available: <uri>https://blog.paperspace.com/popular-deep-learning-architectures-resnet-inceptionv3-squeezenet/</uri>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="other"><person-group person-group-type="author">A. Arora, <collab>&#x201C;DenseNet architecture explained with PyTorch implementation from TorchVision</collab></person-group> <year>2020</year>,&#x201D; [Online]. Available: <uri>https://amaarora.github.io/2020/08/02/densenets.html</uri>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Szegedy</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Ioffe</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Vanhoucke</surname></string-name> and <string-name><given-names>A. A.</given-names> <surname>Alemi</surname></string-name></person-group>, &#x201C;<article-title>Inception-v4, Inception-ResNet and the impact of residual connections on learning</article-title>,&#x201D; in <conf-name>Thirty-first AAAI Conf. on Artificial Intelligence</conf-name>, <conf-loc>San Francisco, CA, USA</conf-loc>, <year>2017</year>. </mixed-citation></ref>
</ref-list>
</back>
</article>