<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">15249</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2021.015249</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>3D Semantic Deep Learning Networks for Leukemia Detection</article-title>
<alt-title alt-title-type="left-running-head">3D Semantic Deep Learning Networks for Leukemia Detection</alt-title>
<alt-title alt-title-type="right-running-head">3D Semantic Deep Learning Networks for Leukemia Detection</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western">
<surname>Amin</surname>
<given-names>Javaria</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western">
<surname>Sharif</surname>
<given-names>Muhammad</given-names>
</name>
<xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western">
<surname>Anjum</surname>
<given-names>Muhammad Almas</given-names>
</name>
<xref ref-type="aff" rid="aff-3">3</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western">
<surname>Siddiqa</surname>
<given-names>Ayesha</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western">
<surname>Kadry</surname>
<given-names>Seifedine</given-names>
</name>
<xref ref-type="aff" rid="aff-4">4</xref></contrib>
<contrib id="author-6" contrib-type="author" corresp="yes">
<name name-style="western">
<surname>Nam</surname>
<given-names>Yunyoung</given-names>
</name>
<xref ref-type="aff" rid="aff-5">5</xref></contrib>
<contrib id="author-7" contrib-type="author">
<name name-style="western">
<surname>Raza</surname>
<given-names>Mudassar</given-names>
</name>
<xref ref-type="aff" rid="aff-2">2</xref></contrib>
<aff id="aff-1"><label>1</label><institution>University of Wah</institution>, <addr-line>Wah Cantt</addr-line>, <country>Pakistan</country></aff>
<aff id="aff-2"><label>2</label><institution>COMSATS University Islamabad, Wah Campus</institution>, <country>Pakistan</country></aff>
<aff id="aff-3"><label>3</label><institution>National University of Technology (NUTECH)</institution>, <addr-line>IJP Road Islamabad</addr-line>, <country>Pakistan</country></aff>
<aff id="aff-4"><label>4</label><institution>Faculty of Applied Computing and Technology, Noroff University College</institution>, <addr-line>Kristiansand</addr-line>, <country>Norway</country></aff>
<aff id="aff-5"><label>5</label><institution>Department of Computer Science and Engineering, Soonchunhyang University</institution>, <addr-line>Asan, 31538, Korea</addr-line></aff>
</contrib-group>
<author-notes><corresp id="cor1">&#x002A;Corresponding Author: Yunyoung Nam. Email: <email>ynam@sch.ac.kr</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-05-31"><day>31</day><month>05</month><year>2021</year></pub-date>
<volume>69</volume>
<issue>1</issue>
<fpage>785</fpage>
<lpage>799</lpage>
<history>
<date date-type="received"><day>12</day><month>11</month><year>2020</year></date>
<date date-type="accepted"><day>13</day><month>02</month><year>2021</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2021 Amin et al.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Amin et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_15249.pdf"></self-uri>
<abstract>
<p>White blood cells (WBCs) are a vital part of the immune system that protects the body from different types of bacteria and viruses. Abnormal cell growth destroys the body&#x2019;s immune system, and computerized methods play a vital role in detecting abnormalities at an early stage. In this research, a deep learning technique is proposed for the detection of leukemia. The proposed methodology consists of three phases. Phase I uses an open neural network exchange (ONNX) and YOLOv2 to localize WBCs. The localized images are passed to Phase II, in which 3D segmentation is performed using deeplabv3 as a base network of the pre-trained Xception model. The segmented images are used in Phase III, in which features are extracted using the darknet-53 model and optimized using the Bhattacharyya separability criterion to classify WBCs. The proposed methodology is validated on three publicly available benchmark datasets, namely ALL-IDB1, ALL-IDB2, and LISC, in terms of different metrics, such as precision, accuracy, sensitivity, and dice scores. The results of the proposed method compare favorably with those of recent methodologies, demonstrating its effectiveness.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>YOLOv2</kwd>
<kwd>darknet53</kwd>
<kwd>Bhattacharyya separability criterion</kwd>
<kwd>ONNX</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Blood is a fluid that transports oxygen to body cells, which use it to produce energy and release carbon dioxide. It also plays a pivotal role in the immune system; blood circulating in living organisms contains 55% plasma, 40% red cells, 4% platelets, and 1% white blood cells (WBCs) [<xref ref-type="bibr" rid="ref-1">1</xref>]. The five primary types of WBCs are eosinophils, lymphocytes, monocytes, basophils, and neutrophils. These blood cells contain nuclei that differ from those of other cells [<xref ref-type="bibr" rid="ref-2">2</xref>]. WBC abnormalities are diagnosed by a blood smear test. Peripheral blood analysis is utilized for the detection of diseases such as malaria, leukemia, and anemia [<xref ref-type="bibr" rid="ref-3">3</xref>,<xref ref-type="bibr" rid="ref-4">4</xref>]. Such disorders are revealed by an increase or decrease in the number of WBCs in the human body. Variations occur in the morphological structure of blood cells in terms of color, shape, and size, and such variations aid in the diagnosis of WBC abnormalities [<xref ref-type="bibr" rid="ref-5">5</xref>]. Thus, segmentation and classification methods are used for the detection of WBCs. The manual evaluation of WBCs is laborious and time-consuming [<xref ref-type="bibr" rid="ref-6">6</xref>], and computerized methods are a useful alternative that also minimizes the workload of hematologists [<xref ref-type="bibr" rid="ref-7">7</xref>]. Segmentation and classification of WBCs are performed using conventional and deep learning methodologies. In conventional approaches, features are extracted manually; in deep learning, image features are learned automatically through a pipeline, improving efficiency [<xref ref-type="bibr" rid="ref-8">8</xref>]. In this study, an automated approach based on deep learning is proposed to segment and classify WBCs more accurately. The foremost contributions of the proposed work are as follows:
<list list-type="bullet">
<list-item><p>The Open Neural Network Exchange (ONNX) is applied with a YOLOv2 model, which detects the different types of WBCs. The features are extracted using activation-5 of the ONNX model. The extracted features are fed to the YOLOv2 model. The proposed framework accurately detects the region of interest (ROI).</p></list-item>
<list-item><p>The features are extracted using darknet-53, and the prominent features are selected based on the Bhattacharyya separability criterion and fed to shallow classifiers for the classification of WBCs.</p></list-item>
</list></p>
</sec>
<sec id="s2">
<label>2</label>
<title>Existing Literature</title>
<p>In the literature, significant work has been done on the detection of WBCs, and some recent works are discussed in this section [<xref ref-type="bibr" rid="ref-9">9</xref>,<xref ref-type="bibr" rid="ref-10">10</xref>]. The detection of WBCs comprises four primary steps: pre-processing, localization/segmentation, discriminant feature extraction, and classification. Pre-processing is a crucial step performed to remove noise and eradicate unwanted distortion, enhancing the lesion region for the subsequent segmentation step [<xref ref-type="bibr" rid="ref-11">11</xref>]. Segmentation is another vital step; it groups homogeneous pixels and segments the required region from the input images. WBCs are difficult to segment because of variations in their appearance [<xref ref-type="bibr" rid="ref-12">12</xref>]. Traditionally, WBCs were detected manually by pathologists, which is time-consuming and can be inaccurate [<xref ref-type="bibr" rid="ref-13">13</xref>]. Recently, automated approaches have been used for the detection of WBCs. Unsupervised clustering methods [<xref ref-type="bibr" rid="ref-14">14</xref>], thresholding approaches [<xref ref-type="bibr" rid="ref-15">15</xref>], shape-based approaches [<xref ref-type="bibr" rid="ref-16">16</xref>], and saliency-based models [<xref ref-type="bibr" rid="ref-17">17</xref>] are commonly used to localize WBCs. Watershed and histogram-orientation approaches are used for the segmentation of WBCs. In the feature extraction process, a large amount of image data is represented as a set of feature vectors [<xref ref-type="bibr" rid="ref-18">18</xref>]. Selecting the optimum diagnostic features is an important task in the detection of WBCs [<xref ref-type="bibr" rid="ref-19">19</xref>]. Several types of features with different classifiers have been used to differentiate the types of WBCs [<xref ref-type="bibr" rid="ref-20">20</xref>]. 
Supervised methods, such as SVM, random forest [<xref ref-type="bibr" rid="ref-21">21</xref>], and Bayesian classifiers [<xref ref-type="bibr" rid="ref-22">22</xref>], are used for the classification of WBCs. However, even the best feature extraction and selection methods struggle to classify WBCs accurately [<xref ref-type="bibr" rid="ref-23">23</xref>]. Deep learning (DL) approaches are widely used to extract high-level information automatically [<xref ref-type="bibr" rid="ref-24">24</xref>] for the detection of ROIs, such as in WBC detection and classification [<xref ref-type="bibr" rid="ref-25">25</xref>]. Contour-aware neural networks are used to segment WBCs, and pixel-by-pixel classification is performed using a fully convolutional neural network (FCN) [<xref ref-type="bibr" rid="ref-26">26</xref>]. Mask R-CNN exhibits better classification performance compared with other DL techniques [<xref ref-type="bibr" rid="ref-27">27</xref>].</p>
</sec>
<sec id="s3">
<label>3</label>
<title>Proposed Methodology</title>
<p>The proposed approach comprises localization, segmentation, high-level feature extraction/selection, and classification steps for the analysis of WBCs. In the proposed approach, WBCs are detected and localized using ONNX as the backbone of YOLOv2. The localized cells are segmented using the proposed 3D semantic segmentation model. Finally, the WBCs are classified using multi-kernel SVM. An overview of the proposed method is presented in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Proposed architecture for WBC localization and segmentation</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-1.png"/>
</fig>
<sec id="s3_1">
<label>3.1</label>
<title>Localization of the WBCs</title>
<p>In this research, WBCs are localized by the proposed WBC-ONNX-YOLOv2 model, as shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>, where features are extracted from the activation-5 LeakyReLU layer of the ONNX model. The extracted features are then fed to the YOLOv2 architecture. The proposed model comprises 26 ONNX layers, namely 1 input, 6 Conv, 6 Bn, 6 activation, 2 elementwise-affine, and 5 max-pooling layers, and 9 YOLOv2 layers, namely 2 ReLU, 2 Bn, 2 Conv, 1 classification, 1 transform, and 1 output layer.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>ONNX-YOLOv2 for multi-class detection</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-2.png"/>
</fig>
<p>The layer-wise proposed model architecture is presented in <xref ref-type="table" rid="table-1">Tab. 1</xref>.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>The layered architecture of the proposed localization model</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Layers of the proposed model</th>
<th>Activation units</th>
<th>Layers of the proposed model</th>
<th>Activation units</th>
</tr>
</thead>
<tbody>
<tr>
<td>Image input</td>
<td><inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td><inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula> mp3</td>
<td><inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mn>8</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>8</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Multiple (mul)-mul (element wise affine)</td>
<td><inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td>Conv4</td>
<td><inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mn>8</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>8</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>256</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Add-add (element wise affine)</td>
<td><inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula></td>
<td>Bn4</td>
<td><inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mn>8</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>8</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>256</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Convolutional (conv)</td>
<td><inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>16</mml:mn></mml:math></inline-formula></td>
<td>Act4</td>
<td><inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mn>8</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>8</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>256</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Batch-normalization (Bn)</td>
<td><inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>16</mml:mn></mml:math></inline-formula></td>
<td><inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula> mp4</td>
<td><inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>256</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Activation (act) (leaky ReLU with 0.1 scale)</td>
<td><inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>16</mml:mn></mml:math></inline-formula></td>
<td>Conv5</td>
<td><inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td><inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula> max-pooling (mp)</td>
<td><inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mn>64</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>64</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>16</mml:mn></mml:math></inline-formula></td>
<td>Bn5</td>
<td><inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Conv1</td>
<td><inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mn>64</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>64</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>32</mml:mn></mml:math></inline-formula></td>
<td>Act5</td>
<td><inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Bn1</td>
<td><inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mn>64</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>64</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>32</mml:mn></mml:math></inline-formula></td>
<td>Conv1-YOLOv2</td>
<td><inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Act1 (leaky ReLU with 0.1 scale)</td>
<td><inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mn>64</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>64</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>32</mml:mn></mml:math></inline-formula></td>
<td>Bn1-YOLOv2</td>
<td><inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td><inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula> max-pooling-1 (mp)</td>
<td><inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mn>32</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>32</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>32</mml:mn></mml:math></inline-formula></td>
<td>ReLU1-YOLOv2</td>
<td><inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Conv2</td>
<td><inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mn>32</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>32</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>64</mml:mn></mml:math></inline-formula></td>
<td>Conv2-YOLOv2</td>
<td><inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Bn2</td>
<td><inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mn>32</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>32</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>64</mml:mn></mml:math></inline-formula></td>
<td>Bn2-YOLOv2</td>
<td><inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Act2</td>
<td><inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mn>32</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>32</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>64</mml:mn></mml:math></inline-formula></td>
<td>ReLU2-YOLOv2</td>
<td><inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td><inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula> mp2</td>
<td><inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>64</mml:mn></mml:math></inline-formula></td>
<td>YOLOv2-classification</td>
<td><inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>40</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Conv3</td>
<td><inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn></mml:math></inline-formula></td>
<td>YOLOv2-transform</td>
<td><inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>40</mml:mn></mml:math></inline-formula></td>
</tr>
<tr>
<td>Bn3</td>
<td><inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn></mml:math></inline-formula></td>
<td>YOLOv2-output</td>
<td>&#x2013;</td>
</tr>
<tr>
<td>Act3</td>
<td><inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>16</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn></mml:math></inline-formula></td>
<td/>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
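<p>The halving pattern of the spatial dimensions in Tab. 1 can be traced with the short sketch below. This is illustrative only and assumes each of the five max-pooling stages uses a 2&#x00D7;2 window with stride 2, as the table's sizes suggest.</p>

```python
# Illustrative trace (assumed 2x2, stride-2 pooling): the five max-pooling
# stages in Tab. 1 halve the 128x128 input down to the 4x4 grid on which
# the YOLOv2 head operates.

def trace_spatial_sizes(input_size=128, num_pools=5):
    """Return the spatial size after each stride-2 max-pooling stage."""
    sizes = [input_size]
    for _ in range(num_pools):
        sizes.append(sizes[-1] // 2)
    return sizes

print(trace_spatial_sizes())  # [128, 64, 32, 16, 8, 4]
```

The resulting sequence matches the column of activation sizes in Tab. 1 (128, 64, 32, 16, 8, 4).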
<p>The proposed model is trained using selected parameters as reported in <xref ref-type="table" rid="table-2">Tab. 2</xref>.</p>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Proposed localization model training parameters</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Input image size</th>
<th>Training epochs</th>
<th>Batchsize</th>
<th>Optimizers</th>
<th>Learning rate</th>
<th>Training average precision rate (mAP)</th>
</tr>
</thead>
<tbody>
<tr>
<td><inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>128</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula> (shuffle = true by default)</td>
<td>100</td>
<td>12</td>
<td>Stochastic gradient descent (Sgdm)</td>
<td>1e&#x2212;3</td>
<td>1.00</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The model is trained for 100 epochs, beyond which its performance is almost stable. The number of iterations with the respective training loss is illustrated graphically in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Number of iterations with respect to the training loss</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-3.png"/>
</fig>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>3D-Segmentation of the Leukocytes</title>
<p>A semantic segmentation model is proposed for the segmentation of WBCs, in which deeplabv3 is used with the pre-trained Xception model as its backbone network. The pre-trained Xception model contains 205 layers, comprising 1 input, 88 2-D Conv, 46 Bn, 46 ReLU, 3 max-pooling, 12 addition, 4 crop 2D, 2 transpose Conv, 2 depth Conv, softmax, and pixel classification layers. The segmentation model is trained on the blood smear images. The training parameters of the presented model are listed in <xref ref-type="table" rid="table-3">Tab. 3</xref>.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Training parameters of the segmentation model</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Optimizer</th>
<th>Sgdm</th>
</tr>
</thead>
<tbody>
<tr>
<td>Batch-size</td>
<td>10</td>
</tr>
<tr>
<td>Training epochs</td>
<td>40</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The proposed model learning with convolutional layers is plotted with activation units, as presented in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Segmentation model with activation units</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-4.png"/>
</fig>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Deep Features Extraction and Classification</title>
<p>The deep features are extracted using a pre-trained darknet53 model, which contains 184 layers, namely 1 input, 53 Conv, 1 global pooling, 52 Bn, 52 LeakyReLU, and 23 addition layers, and softmax with cross-entropy loss. The features are extracted from the conv53 layer with dimensions of <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:mn>1</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>1000</mml:mn><mml:mo>.</mml:mo></mml:math></inline-formula> Selecting informative features from a pool of features is difficult; therefore, the Bhattacharyya rank-based feature selection approach is used, in which the best 500 of the 1000 features (50%) are selected to improve the classification accuracy and to provide cost-effective, fast predictors. The selected features are then supplied to SVM classifiers with different kernels, namely cubic SVM, quadratic SVM, optimizable SVM (O-SVM), and Gaussian SVM, to classify the different types of blood cells, as depicted in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>.</p>
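<p>A Bhattacharyya rank-based selection of this kind can be sketched as follows. This is a hedged illustration, not the paper's implementation: each feature is scored by the Bhattacharyya distance between its two class-conditional distributions, which we assume to be univariate Gaussians, and the top half of the features is kept, mirroring the 500-of-1000 selection.</p>

```python
import math

# Hedged sketch of Bhattacharyya-distance feature ranking. The Gaussian
# class-conditional assumption is ours; the paper only names the criterion.

def bhattacharyya_distance(m1, v1, m2, v2):
    """Distance between two univariate Gaussians N(m1, v1) and N(m2, v2)."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))

def select_features(class_a, class_b, keep_ratio=0.5):
    """Rank feature columns by distance; return the kept column indices."""
    def stats(col):
        m = sum(col) / len(col)
        v = sum((x - m) ** 2 for x in col) / len(col) + 1e-12  # avoid v=0
        return m, v
    n_feat = len(class_a[0])
    scores = []
    for j in range(n_feat):
        ma, va = stats([row[j] for row in class_a])
        mb, vb = stats([row[j] for row in class_b])
        scores.append((bhattacharyya_distance(ma, va, mb, vb), j))
    scores.sort(reverse=True)  # most separable features first
    keep = int(n_feat * keep_ratio)
    return sorted(j for _, j in scores[:keep])

# Toy example: feature 0 separates the two classes; feature 1 does not.
a = [[0.0, 5.0], [0.2, 5.1], [0.1, 4.9]]
b = [[3.0, 5.0], [3.2, 5.1], [3.1, 4.9]]
print(select_features(a, b))  # [0]
```

With 1000-dimensional darknet53 features and keep_ratio 0.5, this procedure would retain 500 columns, as in the text.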
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Feature extraction &#x0026; selection and classification process</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-5.png"/>
</fig>
<p>The SVM classifier with different kernels is trained on the best-selected feature vectors with optimum parameters, as listed in <xref ref-type="table" rid="table-4">Tab. 4</xref>.</p>
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Parameters of SVM selection</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Classifier</th>
<th>Kernel function</th>
<th>Parameter settings</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>SVM</td>
<td>Quadratic cubic</td>
<td>Kernel scale: automatic; box constraint level: 1; multiclass method: one-<italic>vs</italic>.-one</td>
<td/>
</tr>
<tr>
<td/>
<td>Optimizable</td>
<td>Kernel scale and box constraint: 0.001&#x2013;100</td>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
 
<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>Localization results of different kinds of WBCs</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Types of WBCs</th>
<th>IoU</th>
<th>mAP</th>
</tr>
</thead>
<tbody>
<tr>
<td>Eosinophils</td>
<td>0.95</td>
<td>1.00</td>
</tr>
<tr>
<td>Lymphocytes</td>
<td>0.92</td>
<td>1.00</td>
</tr>
<tr>
<td>Monocytes</td>
<td>0.91</td>
<td>1.00</td>
</tr>
<tr>
<td>Basophils</td>
<td>0.93</td>
<td>1.00</td>
</tr>
<tr>
<td>Neutrophils</td>
<td>0.90</td>
<td>0.80</td>
</tr>
<tr>
<td>Blast cells</td>
<td>0.97</td>
<td>0.93</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Localization results on benchmark datasets (a) log average rate (b) average precision of different types of WBCs (c) IoU (d) average precision of blast cells</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-6.png"/>
</fig>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Experimental Setup</title>
<p>In this research, three publicly available benchmark datasets are used to evaluate the method. ALL-IDB1 contains 107 blood smear images, of which 33 are blasts and 74 are non-blast cells, and ALL-IDB2 contains 260 blood smear images, comprising 130 blasts and 130 non-blast cells [<xref ref-type="bibr" rid="ref-28">28</xref>&#x2013;<xref ref-type="bibr" rid="ref-31">31</xref>]. The LISC dataset contains blood smear images of WBCs, including eosinophils, neutrophils, monocytes, lymphocytes, and basophils. The numbers of images for the different types of WBCs are unequal; to balance the classes, data augmentation is performed by rotating the images at different angles, such as 45<inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:msup><mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x2218;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>, 90<inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:msup><mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x2218;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>, 180<inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:msup><mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x2218;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>, and 360<inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:msup><mml:mrow></mml:mrow><mml:mrow><mml:mo>&#x2218;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>. After augmentation, 6250 images of five types of WBCs are obtained, with each type having 1250 blood smear images [<xref ref-type="bibr" rid="ref-32">32</xref>].</p>
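<p>The rotation-based augmentation above can be sketched for the lossless cases. This is a minimal illustration on an image stored as a nested list; rotations by arbitrary angles such as 45&#x2218; require interpolation and are omitted here.</p>

```python
# Sketch of rotation augmentation for the lossless 90- and 180-degree
# cases, on a row-major nested-list image. 45-degree rotation needs
# interpolation and is intentionally not shown.

def rotate90(img):
    """Rotate an HxW image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def rotate180(img):
    """Rotate an HxW image 180 degrees."""
    return [row[::-1] for row in img[::-1]]

img = [[1, 2],
       [3, 4]]
print(rotate90(img))   # [[3, 1], [4, 2]]
print(rotate180(img))  # [[4, 3], [2, 1]]
```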
<sec id="s4_1">
<label>4.1</label>
<title>Results &#x0026; Discussion</title>
<p>The proposed work performance is validated by performing three experiments. The first experiment is performed to validate the presented localization technique by different metrics such as mean precision (mAP) and intersection over the union (IoU). The second experiment is validated to compute the segmentation model performance, while the third experiment is performed to compute the classification model performance. All experiments in this research are performed on the MATLAB 2020 Ra toolbox with 1050 K Nvidia Graphic Card.</p>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Experiment #1: Localization of Leukocytes</title>
<p>Experiment 1 was performed to validate the performance of the localization approach on three benchmark datasets, LISC, ALL-IDB1, and ALL-IDB2, using IoU and mAP as metrics, as shown in <xref ref-type="table" rid="table-5">Tab. 5</xref>. In this experiment, six types of WBCs were localized, and the localization results are graphically depicted in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>.</p>
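<p>The IoU values in Tab. 5 follow the standard intersection-over-union definition for detections. A minimal sketch for axis-aligned boxes, assuming the usual (x1, y1, x2, y2) corner convention, is given below.</p>

```python
# Sketch of box IoU, the overlap metric reported in Tab. 5, for
# axis-aligned boxes given as (x1, y1, x2, y2) corner tuples.

def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 = 0.333...
```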
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Localization results and corresponding confidence score of the proposed method on LISC dataset. Column (a) and (d) represent input images; (b) and (e) localization results; (c) and (f) confidence score</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-7.png"/>
</fig>
<p>The localization outcomes in <xref ref-type="table" rid="table-5">Tab. 5</xref> indicate that the method achieved the highest IoU of 0.97 on blast cells.</p>
<p>The proposed method localizes the WBCs with confidence scores, as shown in <xref ref-type="fig" rid="fig-7">Figs. 7</xref> and <xref ref-type="fig" rid="fig-8">8</xref>.</p>
<p>The localization results in <xref ref-type="fig" rid="fig-7">Figs. 7</xref> and <xref ref-type="fig" rid="fig-8">8</xref> reveal that the proposed method achieved the highest confidence scores of 0.97349, 0.96849, 0.95933, 0.95867, 0.94616, and 0.89726 for eosinophils, basophils, lymphocytes, monocytes, neutrophils, and blast cells, respectively.</p>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Localized region on LISC and ALL-IDB datasets (a) blast images (b) localization (c) confidence scores</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-8.png"/>
</fig>
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>Experiment 2: Segmentation of Leukocytes</title>
<p>In this experiment, the 3D segmented regions are validated using several performance metrics, namely global accuracy, mean accuracy, mean IoU, weighted IoU, and F1-score, as listed in <xref ref-type="table" rid="table-6">Tab. 6</xref>. The segmentation results of the proposed method are compared pixel-by-pixel with the ground annotated images, as illustrated in <xref ref-type="fig" rid="fig-9">Fig. 9</xref>.</p>
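The pixel-wise metrics reported in Tab. 6 can be sketched for a flattened binary mask as follows. This is a simplified two-class illustration; the exact weighting scheme used for the weighted IoU is not reproduced here.

```python
def seg_metrics(pred, gt):
    """Global accuracy, mean IoU, and foreground F1 for flat binary masks."""
    assert len(pred) == len(gt)
    # Pixel-level confusion counts for the foreground class (label 1).
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    tn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 0)
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    global_acc = (tp + tn) / len(gt)
    # Per-class IoU, averaged over foreground and background.
    iou_fg = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    iou_bg = tn / (tn + fp + fn) if (tn + fp + fn) else 1.0
    mean_iou = (iou_fg + iou_bg) / 2
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return global_acc, mean_iou, f1
```

A global accuracy of 0.99 with a mean IoU of 0.97, as in Tab. 6, indicates near-perfect agreement between the segmented masks and the annotations in both classes.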
<table-wrap id="table-6">
<label>Table 6</label>
<caption>
<title>Segmentation results of the WBCs</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Accuracy (global)</th>
<th>Accuracy (mean)</th>
<th>IoU (mean)</th>
<th>IoU (weighted)</th>
<th>F1-score</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.99</td>
<td>0.98</td>
<td>0.97</td>
<td>0.98</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>3D-segmentation outcomes (a) WBCs (b) 3D segmentation (c) ground annotated masks</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-9.png"/>
</fig>
<p>The segmentation results in <xref ref-type="table" rid="table-6">Tab. 6</xref> indicate that the proposed method achieved the highest segmentation accuracy, obtained by the pixel-by-pixel comparison of the segmented images with ground annotated images.</p>
</sec>
<sec id="s4_4">
<label>4.4</label>
<title>Experiment #3: Classification Based on the Extracted Feature</title>
<p>In this experiment, an optimized feature vector is fed to a multi-kernel SVM for WBC classification, and the outcomes are computed in terms of accuracy, precision, recall, and F1 scores from the LISC dataset, as displayed in <xref ref-type="table" rid="table-7">Tabs. 7</xref>&#x2013;<xref ref-type="table" rid="table-9">9</xref>. The discrimination outcomes on the LISC and ALL-IDB1&#x0026;2 datasets with class labels are presented in <xref ref-type="fig" rid="fig-10">Fig. 10</xref>.</p>
<table-wrap id="table-7">
<label>Table 7</label>
<caption>
<title>WBCs classification results</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Types of WBCs</th>
<th>ACC (%)</th>
<th>PPV</th>
<th>Sensitivity</th>
<th>F1-scores</th>
</tr>
</thead>
<tbody>
<tr>
<td>Eosinophils</td>
<td>100</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
</tr>
<tr>
<td>Basophils</td>
<td>95.78</td>
<td>0.90</td>
<td>0.90</td>
<td>0.90</td>
</tr>
<tr>
<td>Lymphocytes</td>
<td>98.95</td>
<td>0.98</td>
<td>0.97</td>
<td>0.97</td>
</tr>
<tr>
<td>Neutrophils</td>
<td>96.26</td>
<td>0.90</td>
<td>0.92</td>
<td>0.91</td>
</tr>
<tr>
<td>Monocytes</td>
<td>100</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</table-wrap>
 
<table-wrap id="table-8">
<label>Table 8</label>
<caption>
<title>WBCs classification results</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Types of WBCs</th>
<th>ACC (%)</th>
<th>PPV</th>
<th>Sensitivity</th>
<th>F1-scores</th>
</tr>
</thead>
<tbody>
<tr>
<td>Eosinophils</td>
<td>100</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
</tr>
<tr>
<td>Basophils</td>
<td>92.06</td>
<td>0.87</td>
<td>0.89</td>
<td>0.88</td>
</tr>
<tr>
<td>Lymphocytes</td>
<td>97.56</td>
<td>0.99</td>
<td>0.94</td>
<td>0.96</td>
</tr>
<tr>
<td>Neutrophils</td>
<td>92.58</td>
<td>0.87</td>
<td>0.90</td>
<td>0.89</td>
</tr>
<tr>
<td>Monocytes</td>
<td>100</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</table-wrap>
 
<table-wrap id="table-9">
<label>Table 9</label>
<caption>
<title>WBCs classification results</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Types of WBCs</th>
<th>ACC (%)</th>
<th>PPV</th>
<th>Sensitivity</th>
<th>F1-scores</th>
</tr>
</thead>
<tbody>
<tr>
<td>Eosinophils</td>
<td>100</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
</tr>
<tr>
<td>Basophils</td>
<td>98.73</td>
<td>0.96</td>
<td>0.98</td>
<td>0.97</td>
</tr>
<tr>
<td>Lymphocytes</td>
<td>99.62</td>
<td>0.99</td>
<td>0.97</td>
<td>0.98</td>
</tr>
<tr>
<td>Neutrophils</td>
<td>98.83</td>
<td>0.97</td>
<td>0.97</td>
<td>0.97</td>
</tr>
<tr>
<td>Monocytes</td>
<td>100</td>
<td>1.0</td>
<td>1.0</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>A quantitative analysis is performed using an SVM with three different types of kernels, namely cubic, quadratic, and optimized. The SVM with the optimized kernel achieved a maximum overall accuracy of 98.4%. The classification results are also compared with the latest published work, as shown in <xref ref-type="table" rid="table-10">Tab. 10</xref>.</p>
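The cubic and quadratic SVM kernels compared above are polynomial kernels; the kernel functions themselves can be sketched as follows. SVM training is omitted, and the hyperparameters of the optimized kernel are not specified by the paper, so the `coef0` default below is an assumption for illustration.

```python
def poly_kernel(x, y, degree, coef0=1.0):
    """Polynomial kernel k(x, y) = (x . y + coef0)^degree.
    degree=2 gives the quadratic kernel, degree=3 the cubic kernel."""
    dot = sum(xi * yi for xi, yi in zip(x, y))
    return (dot + coef0) ** degree

def gram_matrix(xs, degree):
    """Gram (kernel) matrix a kernel SVM optimizes over, for feature vectors xs."""
    return [[poly_kernel(a, b, degree) for b in xs] for a in xs]
```

Kernel optimization in this setting amounts to searching over the kernel family and its hyperparameters (degree, `coef0`, and the SVM regularization constant) for the combination that maximizes validation accuracy.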
<p><xref ref-type="table" rid="table-10">Tab. 10</xref> compares the classification results with the latest published existing work. The existing work achieved accuracies of 0.995, 0.984, 0.984, 0.961, and 0.950 for lymphocytes, monocytes, basophils, eosinophils, and neutrophils, respectively. In contrast, the proposed method exhibited improved classification accuracy, with 0.996, 1.00, 0.987, 1.00, and 0.988 for lymphocytes, monocytes, basophils, eosinophils, and neutrophils, respectively.</p>
<p>The classification results on the ALL-IDB1&#x0026;2 datasets are presented in <xref ref-type="table" rid="table-11">Tabs. 11</xref> and <xref ref-type="table" rid="table-12">12</xref>.</p>
<p>The classification results of blast/non-blast cells are presented in <xref ref-type="table" rid="table-11">Tabs. 11</xref> and <xref ref-type="table" rid="table-12">12</xref>. An accuracy of 99.57% was achieved on the ALL-IDB1 dataset and 98.25% on the ALL-IDB2 dataset, and the results are compared with a recently published work, as provided in <xref ref-type="table" rid="table-13">Tab. 13</xref>.</p>
<p><xref ref-type="table" rid="table-13">Tab. 13</xref> compares the numerical results of the proposed method with those of the latest published works, showing that the proposed method achieves competitive performance.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Confusion matrix (a) LISC dataset (b) ALL-IDB1 dataset</title>
</caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_15249-fig-10.png"/>
</fig>
 
<table-wrap id="table-10">
<label>Table 10</label>
<caption>
<title>Proposed work comparison with latest published work on LISC dataset</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Ref</th>
<th>Year</th>
<th>Results (accuracy)</th>
</tr>
</thead>
<tbody>
<tr>
<td>[<xref ref-type="bibr" rid="ref-25">25</xref>]</td>
<td>2020</td>
<td>Lymphocyte = 0.995</td>
</tr>
<tr>
<td/>
<td/>
<td>Monocyte = 0.984</td>
</tr>
<tr>
<td/>
<td/>
<td>Basophil = 0.984</td>
</tr>
<tr>
<td/>
<td/>
<td>Eosinophil = 0.961</td>
</tr>
<tr>
<td/>
<td/>
<td>Neutrophil = 0.950</td>
</tr>
<tr>
<td colspan="2"><bold>Proposed method</bold></td>
<td>Lymphocyte = 0.996</td>
</tr>
<tr>
<td/>
<td/>
<td>Monocyte = 1.00</td>
</tr>
<tr>
<td/>
<td/>
<td>Basophil = 0.987</td>
</tr>
<tr>
<td/>
<td/>
<td>Eosinophil = 1.00</td>
</tr>
<tr>
<td/>
<td/>
<td>Neutrophil = 0.988</td>
</tr>
</tbody>
</table>
</table-wrap>
 
<table-wrap id="table-11">
<label>Table 11</label>
<caption>
<title>WBCs classification results</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Classes</th>
<th>ACC (%)</th>
<th>PPV</th>
<th>Sensitivity</th>
<th>F1-scores</th>
</tr>
</thead>
<tbody>
<tr>
<td>Blast cell</td>
<td>99.57</td>
<td>1.0</td>
<td>0.99</td>
<td>1.0</td>
</tr>
<tr>
<td>Non-blast cells</td>
<td>99.57</td>
<td>0.99</td>
<td>1.0</td>
<td>1.0</td>
</tr>
</tbody>
</table>
</table-wrap>
 
<table-wrap id="table-12">
<label>Table 12</label>
<caption>
<title>WBCs classification results</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Classes</th>
<th>ACC (%)</th>
<th>PPV</th>
<th>Sensitivity</th>
<th>F1-scores</th>
</tr>
</thead>
<tbody>
<tr>
<td>Blast cell</td>
<td>98.25</td>
<td>0.99</td>
<td>0.97</td>
<td>0.98</td>
</tr>
<tr>
<td>Non-blast cells</td>
<td>98.25</td>
<td>0.97</td>
<td>0.99</td>
<td>0.98</td>
</tr>
</tbody>
</table>
</table-wrap>
 
<table-wrap id="table-13">
<label>Table 13</label>
<caption>
<title>Results comparison</title>
</caption>

<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Ref</th>
<th>Year</th>
<th>Dataset</th>
<th>Results (accuracy) (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>[<xref ref-type="bibr" rid="ref-33">33</xref>]</td>
<td>2018</td>
<td>ALL-IDB</td>
<td>97.22</td>
</tr>
<tr>
<td>[<xref ref-type="bibr" rid="ref-34">34</xref>]</td>
<td>2018</td>
<td/>
<td>96.06</td>
</tr>
<tr>
<td>[<xref ref-type="bibr" rid="ref-35">35</xref>]</td>
<td>2020</td>
<td/>
<td>97.45</td>
</tr>
<tr>
<td>[<xref ref-type="bibr" rid="ref-36">36</xref>]</td>
<td>2020</td>
<td/>
<td>97.00</td>
</tr>
<tr>
<td>[<xref ref-type="bibr" rid="ref-37">37</xref>]</td>
<td>2020</td>
<td/>
<td>94.10</td>
</tr>
<tr>
<td colspan="2">Proposed approach</td>
<td/>
<td>99.57</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion</title>
<p>In this study, deep learning approaches are proposed for the detection of WBCs. Detecting WBCs is challenging because blood smear images exhibit different color distributions in the cytoplasm and nucleus regions, making these regions difficult to segment accurately. A 3D semantic segmentation model is proposed, in which DeepLabv3 is used as the bottleneck and the Xception model as the classification head, to segment the WBCs accurately. Feature extraction and selection pose another challenge for the classification of WBCs. Features are extracted from the pre-trained DarkNet-53 model, and informative features are selected using the Bhattacharyya separability criterion and passed to an SVM with different types of kernels, namely cubic, quadratic, and optimized. Using the optimized SVM kernel, the proposed classification method achieved accuracies of 99.57% on the ALL-IDB1 dataset, 98.25% on the ALL-IDB2 dataset, and 98.4% on the LISC dataset. The overall experimental outcomes demonstrate that the proposed technique achieves competitive results by optimizing the SVM kernel. The proposed CNN-based framework can also be applied to the detection of other types of cancer, such as lung and bone cancer. It detects and classifies leukocytes at an early stage, which can help increase patient survival rates.</p>
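The Bhattacharyya separability criterion used for feature selection can be sketched per feature under a univariate Gaussian assumption. This simplification is our own for illustration; the paper does not give the exact formulation it uses.

```python
import math

def bhattacharyya_distance(m1, s1, m2, s2):
    """Bhattacharyya distance between two 1-D Gaussians N(m1, s1^2) and
    N(m2, s2^2). Larger values indicate better class separability, so
    features can be ranked by this score and the top ones retained."""
    v1, v2 = s1 * s1, s2 * s2
    # First term measures variance mismatch, second term mean separation.
    return (0.25 * math.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))
```

Ranking the DarkNet-53 features by such a score and keeping the highest-scoring ones is one common way a separability criterion is turned into a feature selector.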
</sec>
</body>
<back>
<ack><p>This research was supported by Korea Institute for Advancement of Technology (KIAT).</p></ack>
<fn-group><fn fn-type="other"><p><bold>Funding Statement:</bold> This research was supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.</p></fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> All authors declare that they have no conflicts of interest to report regarding the present study.</p></fn></fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Srivastava</surname></string-name></person-group>, <source>Analysis on Bio-Mathematics</source>. <publisher-loc>Chhattisgarh, India</publisher-loc>: <publisher-name>Shashwat Publication</publisher-name>, <year>2020</year>. [Online]. Available: <uri>https://shashwatpublication.com/books/anasysis-on-bio-mathematics</uri>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Ali</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Tanveer</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Hussain</surname></string-name> and <string-name><given-names>S. U.</given-names> <surname>Rehman</surname></string-name></person-group>, &#x201C;<article-title>Identification of cancer disease using image processing approaches</article-title>,&#x201D; <source>International Journal of Intelligent Information Systems</source>, vol. <volume>9</volume>, no. <issue>2</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>10</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>Abdulhay</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Mohammed</surname></string-name>, <string-name><given-names>D. A.</given-names> <surname>Ibrahim</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Arunkumar</surname></string-name> and <string-name><given-names>V.</given-names> <surname>Venkatraman</surname></string-name></person-group>, &#x201C;<article-title>Computer aided solution for automatic segmenting and measurements of blood leucocytes using static microscope images</article-title>,&#x201D; <source>Journal of Medical Systems</source>, vol. <volume>42</volume>, no. <issue>4</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>12</lpage>, <year>2018</year>. </mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. A.</given-names> <surname>Mohammed</surname></string-name>, <string-name><given-names>K. H.</given-names> <surname>Abdulkareem</surname></string-name>, <string-name><given-names>S. A.</given-names> <surname>Mostafa</surname></string-name>, <string-name><given-names>M. K. A.</given-names> <surname>Ghani</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Maashi</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Voice pathology detection and classification using convolutional neural network model</article-title>,&#x201D; <source>Applied Sciences</source>, vol. <volume>10</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>13</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Subathra</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Mohammed</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Maashi</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Garcia-Zapirain</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Sairamya</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Detection of focal and non-focal electroencephalogram signals using fast walsh-hadamard transform and artificial neural network</article-title>,&#x201D; <source>Sensors</source>, vol. <volume>20</volume>, no. <issue>17</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>20</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. K.</given-names> <surname>Abd Ghani</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Mohammed</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Arunkumar</surname></string-name>, <string-name><given-names>S. A.</given-names> <surname>Mostafa</surname></string-name>, <string-name><given-names>D. A.</given-names> <surname>Ibrahim</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Decision-level fusion scheme for nasopharyngeal carcinoma identification using machine learning techniques</article-title>,&#x201D; <source>Neural Computing and Applications</source>, vol. <volume>32</volume>, no. <issue>3</issue>, pp. <fpage>625</fpage>&#x2013;<lpage>638</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>O. I.</given-names> <surname>Obaid</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Mohammed</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Ghani</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Mostafa</surname></string-name> and <string-name><given-names>F.</given-names> <surname>Taha</surname></string-name></person-group>, &#x201C;<article-title>Evaluating the performance of machine learning techniques in the classification of wisconsin breast cancer</article-title>,&#x201D; <source>International Journal of Engineering &#x0026; Technology</source>, vol. <volume>7</volume>, pp. <fpage>160</fpage>&#x2013;<lpage>166</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Arunkumar</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Mohammed</surname></string-name>, <string-name><given-names>M. K.</given-names> <surname>Abd Ghani</surname></string-name>, <string-name><given-names>D. A.</given-names> <surname>Ibrahim</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Abdulhay</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>K-means clustering and neural network for object detecting and identifying abnormality of brain tumor</article-title>,&#x201D; <source>Soft Computing</source>, vol. <volume>23</volume>, no. <issue>19</issue>, pp. <fpage>9083</fpage>&#x2013;<lpage>9096</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Eilertsen</surname></string-name>, <string-name><given-names>P. C.</given-names> <surname>S&#x00E6;ther</surname></string-name>, <string-name><given-names>C. E.</given-names> <surname>Henriksson</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Petersen</surname></string-name> and <string-name><given-names>T. A.</given-names> <surname>Hagve</surname></string-name></person-group>, &#x201C;<article-title>Evaluation of the detection of blasts by sysmex hematology instruments, cellavision DM96, and manual microscopy using flow cytometry as the confirmatory method</article-title>,&#x201D; <source>International Journal of Laboratory Hematology</source>, vol. <volume>41</volume>, no. <issue>3</issue>, pp. <fpage>338</fpage>&#x2013;<lpage>344</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H. H.</given-names> <surname>Inbarani</surname></string-name> and <string-name><given-names>A. T.</given-names> <surname>Azar</surname></string-name></person-group>, &#x201C;<article-title>Leukemia image segmentation using a hybrid histogram-based soft covering rough k-means clustering algorithm</article-title>,&#x201D; <source>Electronics</source>, vol. <volume>9</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>22</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Bai</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Lu</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>ONNX: Open neural network exchange</article-title>,&#x201D; <source>GitHub Repository</source>, <year>2019</year>. [Online]. Available: <uri>https://github.com/onnx/onnx</uri>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Redmon</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Farhadi</surname></string-name></person-group>, &#x201C;<article-title>YOLO9000: Better, faster, stronger</article-title>,&#x201D; in <source>Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition</source>, Honolulu, Hawaii, pp. <fpage>7263</fpage>&#x2013;<lpage>7271</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Papandreou</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Schroff</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Adam</surname></string-name></person-group>, &#x201C;<article-title>Encoder-decoder with atrous separable convolution for semantic image segmentation</article-title>,&#x201D; in <conf-name>Proc. of the European Conf. on Computer Vision</conf-name>, Glasgow, United Kingdom, pp. <fpage>801</fpage>&#x2013;<lpage>818</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Chollet</surname></string-name></person-group>, &#x201C;<article-title>Xception: Deep learning with depthwise separable convolutions</article-title>,&#x201D; in <conf-name>Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, Montreal, Canada, pp. <fpage>1251</fpage>&#x2013;<lpage>1258</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Redmon</surname></string-name></person-group>, &#x201C;<article-title>Darknet: Open source neural networks in C</article-title>,&#x201D; <year>2016</year>. [Online]. Available: <uri>https://pjreddie.com/darknet</uri>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Theodoridis</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Koutroumbas</surname></string-name></person-group>, &#x201C;<article-title>Pattern recognition</article-title>,&#x201D; <source>IEEE Transactions on Neural Networks</source>, vol. <volume>19</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>957</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Kollem</surname></string-name>, <string-name><given-names>K. R.</given-names> <surname>Reddy</surname></string-name> and <string-name><given-names>D. S.</given-names> <surname>Rao</surname></string-name></person-group>, &#x201C;<article-title>A review of image denoising and segmentation methods based on medical images</article-title>,&#x201D; <source>International Journal of Machine Learning and Computing</source>, vol. <volume>9</volume>, no. <issue>3</issue>, pp. <fpage>288</fpage>&#x2013;<lpage>295</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Al-Dulaimi</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Banks</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Nguyen</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Al-Sabaawi</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Tomeo-Reyes</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Segmentation of white blood cell, nucleus and cytoplasm in digital haematology microscope images: A review-challenges, current and future potential techniques</article-title>,&#x201D; <source>IEEE Reviews in Biomedical Engineering</source>, vol. <volume>14</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>16</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Dutta</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Karmakar</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Banerjee</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Ghatak</surname></string-name></person-group>, &#x201C;<article-title>Detection of leukemia in blood samples applying image processing using a novel edge detection method</article-title>,&#x201D; in <conf-name>Proc. of the Global AI Congress</conf-name>, Singapore, pp. <fpage>1</fpage>&#x2013;<lpage>16</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Nassar</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Doan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Filby</surname></string-name>, <string-name><given-names>O.</given-names> <surname>Wolkenhauer</surname></string-name>, <string-name><given-names>D. K.</given-names> <surname>Fogg</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Label-free identification of white blood cells using machine learning</article-title>,&#x201D; <source>Cytometry Part A</source>, vol. <volume>95</volume>, no. <issue>8</issue>, pp. <fpage>836</fpage>&#x2013;<lpage>842</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Khodashenas</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Ebrahimpour-komleh</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Nickfarjam</surname></string-name></person-group>, &#x201C;<article-title>White blood cell detection and counting based on genetic algorithm</article-title>,&#x201D; in <conf-name>Advances in Science and Engineering Technology International Conf.</conf-name>, Dubai, United Arab Emirates, pp. <fpage>1</fpage>&#x2013;<lpage>4</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>&#350;eng&#x00FC;r</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Akbulut</surname></string-name>, <string-name><given-names>&#x00DC;.</given-names> <surname>Budak</surname></string-name> and <string-name><given-names>Z.</given-names> <surname>C&#x00F6;mert</surname></string-name></person-group>, &#x201C;<article-title>White blood cell classification based on shape and deep features</article-title>,&#x201D; in <conf-name>Int. Artificial Intelligence and Data Processing Symp.</conf-name>, Malatya, Turkey, pp. <fpage>1</fpage>&#x2013;<lpage>4</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Nalepa</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Kawulok</surname></string-name></person-group>, &#x201C;<article-title>Selecting training sets for support vector machines: A review</article-title>,&#x201D; <source>Artificial Intelligence Review</source>, vol. <volume>52</volume>, no. <issue>2</issue>, pp. <fpage>857</fpage>&#x2013;<lpage>900</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>Hussain</surname></string-name>, <string-name><given-names>L. B.</given-names> <surname>Mahanta</surname></string-name>, <string-name><given-names>C. R.</given-names> <surname>Das</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Choudhury</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Chowdhury</surname></string-name></person-group>, &#x201C;<article-title>A shape context fully convolutional neural network for segmentation and classification of cervical nuclei in pap smear images</article-title>,&#x201D; <source>Artificial Intelligence in Medicine</source>, vol. <volume>107</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Kutlu</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Avci</surname></string-name> and <string-name><given-names>F.</given-names> <surname>&#x00D6;zyurt</surname></string-name></person-group>, &#x201C;<article-title>White blood cells detection and classification based on regional convolutional neural networks</article-title>,&#x201D; <source>Medical Hypotheses</source>, vol. <volume>135</volume>, no. <issue>10</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>V.</given-names> <surname>Shankar</surname></string-name>, <string-name><given-names>M. M.</given-names> <surname>Deshpande</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Chaitra</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Aditi</surname></string-name></person-group>, &#x201C;<article-title>Automatic detection of acute lymphoblastic leukemia using image processing</article-title>,&#x201D; in <conf-name>IEEE Int. Conf. on Advances in Computer Applications</conf-name>, Coimbatore, India, pp. <fpage>186</fpage>&#x2013;<lpage>189</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Dhieb</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Ghazzai</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Besbes</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Massoud</surname></string-name></person-group>, &#x201C;<article-title>An automated blood cells counting and classification framework using Mask R-CNN deep learning model</article-title>,&#x201D; in <conf-name>31st Int. Conf. on Microelectronics</conf-name>, Cairo, Egypt, pp. <fpage>300</fpage>&#x2013;<lpage>303</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R. D.</given-names> <surname>Labati</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Piuri</surname></string-name> and <string-name><given-names>F.</given-names> <surname>Scotti</surname></string-name></person-group>, &#x201C;<article-title>All-IDB: The acute lymphoblastic leukemia image database for image processing</article-title>,&#x201D; in <conf-name>18th IEEE Int. Conf. on Image Processing</conf-name>, Brussels, Belgium, pp. <fpage>2045</fpage>&#x2013;<lpage>2048</lpage>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Scotti</surname></string-name></person-group>, &#x201C;<article-title>Robust segmentation and measurements techniques of white cells in blood microscope images</article-title>,&#x201D; in <conf-name>2006 IEEE Instrumentation and Measurement Technology Conf. Proc.</conf-name>, Sorrento, Italy, pp. <fpage>43</fpage>&#x2013;<lpage>48</lpage>, <year>2006</year>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Scotti</surname></string-name></person-group>, &#x201C;<article-title>Automatic morphological analysis for acute leukemia identification in peripheral blood microscope images</article-title>,&#x201D; in <conf-name>CIMSA. IEEE Int. Conf. on Computational Intelligence for Measurement Systems and Applications</conf-name>, Giardini Naxos, Italy, pp. <fpage>96</fpage>&#x2013;<lpage>101</lpage>, <year>2005</year>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>V.</given-names> <surname>Piuri</surname></string-name> and <string-name><given-names>F.</given-names> <surname>Scotti</surname></string-name></person-group>, &#x201C;<article-title>Morphological classification of blood leucocytes by microscope images</article-title>,&#x201D; in <conf-name>CIMSA 2004 IEEE Int. Conf. on Computational Intelligence for Measurement Systems and Applications</conf-name>, Boston, MA, USA, pp. <fpage>103</fpage>&#x2013;<lpage>108</lpage>, <year>2004</year>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>S. H.</given-names> <surname>Rezatofighi</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Khaksari</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Soltanian-Zadeh</surname></string-name></person-group>, &#x201C;<article-title>Automatic recognition of five types of white blood cells in peripheral blood</article-title>,&#x201D; in <conf-name>Int. Conf. Image Analysis and Recognition</conf-name>, Berlin, Heidelberg, pp. <fpage>161</fpage>&#x2013;<lpage>172</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L. C.</given-names> <surname>de Faria</surname></string-name>, <string-name><given-names>L. F.</given-names> <surname>Rodrigues</surname></string-name> and <string-name><given-names>J. F.</given-names> <surname>Mari</surname></string-name></person-group>, &#x201C;<article-title>Cell classification using handcrafted features and bag of visual words</article-title>,&#x201D; in <conf-name>Anais do XIV Workshop de Vis&#x00E3;o Computacional</conf-name>, Brasil, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Shafique</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Tehsin</surname></string-name></person-group>, &#x201C;<article-title>Acute lymphoblastic leukemia detection and classification of its subtypes using pretrained deep convolutional neural networks</article-title>,&#x201D; <source>Technology in Cancer Research &#x0026; Treatment</source>, vol. <volume>17</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>7</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z. F.</given-names> <surname>Mohammed</surname></string-name> and <string-name><given-names>A. A.</given-names> <surname>Abdulla</surname></string-name></person-group>, &#x201C;<article-title>An efficient CAD system for ALL cell identification from microscopic blood images</article-title>,&#x201D; <source>Multimedia Tools and Applications</source>, vol. <volume>80</volume>, pp. <fpage>6355</fpage>&#x2013;<lpage>6368</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L. H.</given-names> <surname>Vogado</surname></string-name>, <string-name><given-names>R. M.</given-names> <surname>Veras</surname></string-name> and <string-name><given-names>K. R.</given-names> <surname>Aires</surname></string-name></person-group>, &#x201C;<article-title>LeukNet: A model of convolutional neural network for the diagnosis of leukemia</article-title>,&#x201D; in <conf-name>Anais Estendidos do XXXIII Conf. on Graphics, Patterns and Images</conf-name>, Porto Alegre, Brasil, pp. <fpage>119</fpage>&#x2013;<lpage>125</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Di Ruberto</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Loddo</surname></string-name> and <string-name><given-names>G.</given-names> <surname>Puglisi</surname></string-name></person-group>, &#x201C;<article-title>Blob detection and deep learning for leukemic blood image analysis</article-title>,&#x201D; <source>Applied Sciences</source>, vol. <volume>10</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>13</lpage>, <year>2020</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>