<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CSSE</journal-id>
<journal-id journal-id-type="nlm-ta">CSSE</journal-id>
<journal-id journal-id-type="publisher-id">CSSE</journal-id>
<journal-title-group>
<journal-title>Computer Systems Science &#x0026; Engineering</journal-title>
</journal-title-group>
<issn pub-type="ppub">0267-6192</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">22385</article-id>
<article-id pub-id-type="doi">10.32604/csse.2023.022385</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Skin Lesion Classification System Using Shearlets</article-title><alt-title alt-title-type="left-running-head">Skin Lesion Classification System Using Shearlets</alt-title><alt-title alt-title-type="right-running-head">Skin Lesion Classification System Using Shearlets</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Mohan Kumar</surname><given-names>S.</given-names></name><email>s.mohankumar.phd@gmail.com</email>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Kumanan</surname><given-names>T.</given-names></name>
</contrib>
<aff id="aff-1"><institution>Department of Computer Science and Engineering, Meenakshi Academy of Higher Education and Research</institution>, <addr-line>Chennai, 600078, Tamil Nadu</addr-line>, <country>India</country></aff>
</contrib-group><author-notes><corresp id="cor1"><label>&#x002A;</label>Corresponding Author: S. Mohan Kumar. Email: <email>s.mohankumar.phd@gmail.com</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2022-05-24"><day>24</day>
<month>05</month>
<year>2022</year></pub-date>
<volume>44</volume>
<issue>1</issue>
<fpage>833</fpage>
<lpage>844</lpage>
<history>
<date date-type="received"><day>05</day><month>8</month><year>2021</year></date>
<date date-type="accepted"><day>09</day><month>10</month><year>2021</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Mohan Kumar and Kumanan</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Mohan Kumar and Kumanan</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CSSE_22385.pdf"></self-uri>
<abstract>
<p>The main cause of skin cancer is the ultraviolet radiation of the sun, and it spreads quickly to other body parts. Thus, early diagnosis is required to decrease the mortality rate due to skin cancer. In this study, an automatic system for Skin Lesion Classification (SLC) using Non-Subsampled Shearlet Transform (NSST) based energy features and a Support Vector Machine (SVM) classifier is proposed. At first, the NSST is used to decompose the input skin lesion images with different numbers of directions (2, 4, 8 and 16). From the NSST&#x2019;s sub-bands, energy features are extracted and stored in the feature database for training. The SVM classifier is used for the classification of skin lesion images. The dermoscopic skin images are obtained from the PH<sup>2</sup> database, which comprises 200 dermoscopic color images with melanocytic lesions. The performance of the SLC system is evaluated using the confusion matrix and Receiver Operating Characteristic (ROC) curves. The SLC system achieves 96&#x0025; classification accuracy using NSST&#x2019;s energy features obtained from the 3<sup>rd</sup> level with 8 directions.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Skin lesion classification</kwd>
<kwd>non-subsampled shearlet transform</kwd>
<kwd>sub-band coefficients</kwd>
<kwd>energy feature</kwd>
<kwd>support vector machine</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Skin cancer is typically indicated by unusual skin growth: it may appear as a rash, an irregular patch or a nodule on the surface of the skin, and the affected part may bleed easily. It spreads readily to other parts of the body and can sometimes be fatal. Multiclass SVM and Gray Level Co-occurrence Matrix (GLCM) based skin cancer classification is discussed in [<xref ref-type="bibr" rid="ref-1">1</xref>]. GLCM features such as energy, contrast, homogeneity and entropy are extracted first; then a multiclass SVM is used for the prediction of skin cancer. Non-Subsampled Contourlet Transform (NSCT) and Bayes classification based melanoma image classification is described in [<xref ref-type="bibr" rid="ref-2">2</xref>]. The melanoma image is decomposed by the NSCT and a Bayes classifier is used for the output prediction.</p>
<p>The detection and classification of skin lesions using dermoscopy images is discussed in [<xref ref-type="bibr" rid="ref-3">3</xref>]. At first, the dermoscopic images are passed through a median filter to remove hair, and then the k-means clustering algorithm is used for segmentation. From the segmented image, color, texture and sub-region features are extracted, and the Wilks&#x2019; Lambda method is used for feature selection. The prediction is made by SVM. SVM based skin cancer classification and detection is described in [<xref ref-type="bibr" rid="ref-4">4</xref>]. The input skin cancer images are converted to grayscale in the preprocessing stage and the region of interest is extracted. Then, the border, asymmetry, diameter and circularity features are extracted. The redundant features are reduced using principal component analysis. Finally, SVM is used to predict the output.</p>
<p>An early melanoma classification system using different color spaces is discussed in [<xref ref-type="bibr" rid="ref-5">5</xref>]. GLCM features such as contrast, sum variance, sum average and autocorrelation are extracted, and the final prediction is made by SVM. Classification and segmentation of melanoma skin cancer images using the k-means clustering approach is discussed in [<xref ref-type="bibr" rid="ref-6">6</xref>]. From the input melanoma skin images, features such as diameter, border, circularity and asymmetry are extracted. Feature selection and dimension reduction are performed by the Relief algorithm. Classifiers such as SVM, Decision Tree (DT) and K-Nearest Neighbor (KNN) are used.</p>
<p>Melanoma skin cancer classification and detection based on unsupervised and supervised learning is described in [<xref ref-type="bibr" rid="ref-7">7</xref>]. The input image is converted into grayscale and a median filter is applied in the preprocessing stage. Thresholding and <italic>k</italic>-means methods are used for segmentation. Then, the segmented images are decomposed by the wavelet transform [<xref ref-type="bibr" rid="ref-8">8</xref>]. Finally, SVM is used for prediction. The skin cancer classification approach in [<xref ref-type="bibr" rid="ref-9">9</xref>] uses the <italic>k</italic>-means technique for skin lesion segmentation. Features are extracted by the local binary pattern and the color coherence vector, and the classification is made using a multiclass SVM.</p>
<p>Analysis of skin lesions for skin cancer diagnosis is discussed in [<xref ref-type="bibr" rid="ref-10">10</xref>]. After resizing, the input image is converted into grayscale. The edges are detected using thresholding and morphological operations. Features such as perimeter, area, circularity and the irregularity index are extracted, and KNN is used to predict the skin cancer. The system in [<xref ref-type="bibr" rid="ref-11">11</xref>] uses segmentation and classification approaches for skin cancer diagnosis. Initially, the images are enhanced and then a Gaussian filter is used to smooth them. A morphological operation is used to segment the lesion. Texture, color and histogram features are used, and the prediction is made by SVM and KNN.</p>
<p>Deep learning and Gabor wavelet based skin cancer classification is discussed in [<xref ref-type="bibr" rid="ref-12">12</xref>]. The skin images are classified using Gabor wavelet coefficients with Convolutional Neural Networks (CNNs); Gabor filter based AlexNet and ResNet models are used for prediction. An efficient system for skin cancer diagnosis is described in [<xref ref-type="bibr" rid="ref-13">13</xref>]. At first, the contrast of the skin images is enhanced. An automatic thresholding technique is applied to the red, green and blue color channels independently to create a binary mask. The largest blob is identified and the edges inside it are detected. The extracted geometric features are used for the classification.</p>
<p>Melanocytic lesion classification using dermoscopic images is discussed in [<xref ref-type="bibr" rid="ref-14">14</xref>]. At first, the dermoscopic images are segmented using a thresholding technique; statistical features are then extracted and a DT classifier is used for classification. A multiwavelet based melanoma classification system is discussed in [<xref ref-type="bibr" rid="ref-15">15</xref>]. The input dermoscopic images are passed through a Wiener filter to remove noise. Then the multiwavelet transform is used for decomposition, producing sub-band coefficients. Statistical features such as mean, standard deviation, skewness and kurtosis are extracted, and the final prediction is made by the SVM classifier.</p>
<p>In this study, an efficient SLC system using NSST based energy features is proposed. The paper is organized as follows: the methods and materials used for the SLC system are discussed in Section 2. Section 3 discusses the results obtained by the SLC system on PH<sup>2</sup> database images. The final section concludes the SLC system.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Methods and Materials</title>
<p>The shearlet transform is a multi-scale and multi-directional framework that can be applied to a variety of problems. It has anisotropic properties and can therefore identify directionality, which is an advantage over the conventional wavelet transform. Shearlets are a useful tool for efficiently handling a wide variety of data at large volume. In contrast to standard multi-scale transforms, the NSST has the additional benefit of being shift invariant, and its computational complexity is lower than that of the NSCT.</p>
<p>In the proposed system, energy features are extracted from the NSST&#x2019;s sub-bands. The overall workflow of the SLC system based on NSST and SVM is shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>. In the SLC system, the input melanoma images are decomposed by NSST with different numbers of directions (2, 4, 8 and 16), producing sub-bands. From these sub-bands, energy features are extracted and stored in a feature database. Finally, classification is performed using the SVM classifier.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Block diagram of SLC system</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-1.png"/>
</fig>
<sec id="s2_1">
<label>2.1</label>
<title>NSST Decomposition</title>
<p>Shearlets form a set of well-localized waveforms at different locations, orientations and scales. Higher scaling makes the support of a shearlet thinner, and compared with the contourlet and curvelet, shearlets are supported on a larger set of oriented trapezoids. They are constructed from generating functions using translation, shearing and parabolic scaling, and at fine scales they are supported on skinny, directional ridges obeying the parabolic scaling law. Shearlets form a frame for <italic>H</italic><sup>2</sup>(<italic>K</italic><sup>2</sup>), so that an arbitrary function <italic>f</italic>&#x2009;&#x2208;&#x2009;<italic>H</italic><sup>2</sup>(<italic>K</italic><sup>2</sup>) can be represented in terms of them.</p>
<p>An important property of shearlets is that they produce sparse approximations of anisotropic features: for a function that is compactly supported on [0, 1]<italic><sup>2</sup></italic> and smooth apart from a singularity curve of class <italic>Cu<sup>2</sup></italic> with bounded curvature, the <italic>M<sup>2</sup></italic> error of the <italic>S</italic>-term shearlet approximation, obtained by keeping the <italic>S</italic> largest coefficients in the shearlet expansion, decays optimally up to a log-factor:<disp-formula id="eqn-1"><label>(1)</label>
<mml:math id="mml-eqn-1" display="block"><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mrow><mml:mi>f</mml:mi><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>S</mml:mi></mml:msub></mml:mrow><mml:msubsup><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mrow><mml:msup><mml:mi>M</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mn>2</mml:mn></mml:msubsup><mml:mo>&#x2264;</mml:mo><mml:mi>C</mml:mi><mml:mi>u</mml:mi><mml:msup><mml:mi>S</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>log</mml:mi><mml:mspace width="thinmathspace" /><mml:mi>S</mml:mi></mml:mrow><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mn>3</mml:mn></mml:msup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mi>S</mml:mi><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi mathvariant="normal">&#x221E;</mml:mi></mml:math>
</disp-formula>where <italic>Cu</italic> is a constant that depends on the singularity curve and on the magnitudes of <inline-formula id="ieqn-1">
<mml:math id="mml-ieqn-1"><mml:mi>f</mml:mi><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thinmathspace" /><mml:msup><mml:mi>f</mml:mi><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msup><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mrow><mml:mi mathvariant="normal">a</mml:mi><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">d</mml:mi></mml:mrow><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mrow><mml:msup><mml:mi>f</mml:mi><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
</inline-formula>. This <italic>S-</italic>term approximation rate is significantly better than that of wavelets, which provide only <italic>O</italic>(<italic>S</italic><sup>&#x2212;1</sup>) for this class of functions [<xref ref-type="bibr" rid="ref-16">16</xref>]. <xref ref-type="fig" rid="fig-2">Fig. 2</xref> shows the decomposition process of NSST.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Decomposition process of NSST</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-2.png"/>
</fig>
<p>In this study, the NSST decomposition is performed level by level with different numbers of directions from 2 to 64, producing low and high frequency sub-bands. NSST is also used in other fields such as multi-focus image fusion and the fusion of computed tomography and magnetic resonance brain images [<xref ref-type="bibr" rid="ref-17">17</xref>].</p>
<p>The three-level decomposition using NSST is shown in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>: the high frequency components are decomposed over three levels into multiple sub-band images. <xref ref-type="fig" rid="fig-4">Fig. 4</xref> shows the 1<sup>st</sup> level NSST decomposition of a dermoscopic image with 4 directions. The low frequency image in <xref ref-type="fig" rid="fig-4">Fig. 4</xref> resembles the input image, as it is an approximate version of it.</p>
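The number of sub-bands produced grows with the decomposition depth and the number of directions. A minimal Python sketch, assuming the counting convention reflected in Tab. 1 (one shared low-frequency approximation band plus one high-frequency band per direction per level):

```python
def nsst_subband_count(levels: int, directions: int) -> int:
    """Total NSST sub-bands, assuming one low-frequency approximation
    band plus `directions` directional high-frequency bands per level."""
    return levels * directions + 1

# e.g. a 3rd-level decomposition with 8 directions yields 25 sub-bands
print(nsst_subband_count(3, 8))
```

These counts agree with the "NSST sub-bands" column of Tab. 1 (3 sub-bands for 1 level with 2 directions, 65 for 4 levels with 16 directions).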
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Three level decomposition using NSST</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-3.png"/>
</fig>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>1<sup>st</sup> level NSST decomposition with 4-directions</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-4.png"/>
</fig>
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>NSST Based Energy Feature Extraction</title>
<p>From the NSST&#x2019;s sub-bands, energy features are extracted and stored in the feature database. Energy is a good indicator in both the spatial and frequency domains: different pattern types exhibit different energy distributions in the frequency domain, which can be effectively utilized for skin cancer diagnosis. The energy feature is defined by,<disp-formula id="eqn-2"><label>(2)</label>
<mml:math id="mml-eqn-2" display="block"><mml:mi>E</mml:mi><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>Q</mml:mi><mml:mi>R</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>Q</mml:mi></mml:munderover><mml:mrow></mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>l</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>R</mml:mi></mml:munderover><mml:mrow><mml:mi>S</mml:mi><mml:mi>U</mml:mi><mml:mi>B</mml:mi><mml:mi>B</mml:mi><mml:mi>A</mml:mi><mml:mi>N</mml:mi><mml:mi>D</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mi>l</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mstyle></mml:math>
</disp-formula>where <italic>SUBBAND</italic> represents an NSST sub-band of the dermoscopic image, of size <italic>Q</italic>&#x2009;&#x00D7;&#x2009;<italic>R</italic>, and <italic>(k, l)</italic> denotes the location of a sub-band coefficient. In this study, the energy features computed by <xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref> are used for the classification. Energy features are also used in many other fields, such as heart sound classification [<xref ref-type="bibr" rid="ref-18">18</xref>]. <xref ref-type="fig" rid="fig-5">Fig. 5</xref> shows the feature extraction stage of the SLC system.</p>
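Eq. (2) can be implemented directly in NumPy. A minimal sketch, assuming each sub-band is available as a 2-D array of coefficients (the list `subbands` stands in for the output of an NSST decomposition, which is not implemented here):

```python
import numpy as np

def subband_energy(subband: np.ndarray) -> float:
    """Energy of one sub-band per Eq. (2): the sum of its
    coefficients normalised by the sub-band size Q x R."""
    q, r = subband.shape
    return float(subband.sum() / (q * r))

def energy_feature_vector(subbands: list) -> np.ndarray:
    # One energy value per sub-band; together they form the feature
    # vector stored in the feature database for training.
    return np.array([subband_energy(sb) for sb in subbands])
```

For a 3rd-level, 8-direction decomposition this yields a 25-dimensional feature vector per image.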
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Feature extraction stage of SLC using NSST</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-5.png"/>
</fig>
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>SLC Using SVM</title>
<p>SVM is a machine learning algorithm commonly used for classification [<xref ref-type="bibr" rid="ref-19">19</xref>]; it can be used for both binary and multiclass classification. It computes a hyperplane in the feature space to separate the classes. The SVM model represents the instances as points in space, mapped such that the examples of the different categories are split by a clear separation that is as wide as feasible. New instances are then mapped into the same space and assigned to a category based on which side of the gap they fall on. The SVM hyperparameters and model parameters must be tuned in order to achieve better classification accuracy. With the kernel trick, SVMs can efficiently perform non-linear classification by implicitly mapping their inputs into high-dimensional feature spaces. The concept of SVM is illustrated in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>.</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Demonstration of support vector machine</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-6.png"/>
</fig>
<p>The SVM classifier is described as follows. Let &#x007B;<italic>v</italic><sub>1</sub>, <italic>v</italic><sub>2</sub>, <italic>v</italic><sub>3</sub>, &#x2026;, <italic>v</italic><sub><italic>n</italic></sub>&#x007D; be the training set of images in class <italic>V</italic>, where <italic>V</italic>&#x2009;&#x2282;&#x2009;<italic>H</italic><sup><italic>n</italic></sup>, and let <italic>&#x03C8;</italic>:<italic>V</italic>&#x2009;&#x2192;&#x2009;<italic>H</italic> be the mapping into the feature space. Then the optimization problem is given by,<disp-formula id="eqn-3"><label>(3)</label>
<mml:math id="mml-eqn-3" display="block"><mml:mo movablelimits="true" form="prefix">min</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow><mml:msup><mml:mrow><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mi>k</mml:mi><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>s</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>l</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>m</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>&#x03B6;</mml:mi><mml:mi>l</mml:mi></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mi>P</mml:mi></mml:mrow></mml:mstyle></mml:mstyle></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math>
</disp-formula>where <italic>k</italic>.<italic>&#x03B6;</italic>&#x2009;&#x2265;&#x2009;<italic>P</italic>&#x2009;&#x2212;&#x2009;<italic>&#x03B6;</italic> for <italic>l&#x2009;&#x003D;&#x2009;1, 2,&#x2026;, n</italic> and <italic>&#x03B6;</italic><sub><italic>l</italic></sub>&#x2009;&#x2265;&#x2009;0. Here <italic>&#x03B6;</italic> is the slack variable and <italic>l</italic> indexes the samples. SVM is effective in high-dimensional spaces. In this study, the SVM classifier is used to make the output decision as normal or abnormal [<xref ref-type="bibr" rid="ref-20">20</xref>]. The performance of the SVM classifier is measured using classification accuracy [<xref ref-type="bibr" rid="ref-21">21</xref>]. The SVM classifier is also used in brain image classification, electrocardiogram signal classification and gender classification using iris images. The classification stage of SLC is shown in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>.</p>
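The classification stage can be sketched with scikit-learn; the RBF-kernel `SVC` and the 10-fold cross-validation mirror the evaluation in Section 3, but the feature matrix below is a synthetic stand-in for the real NSST energy features (80 normal and 120 abnormal images with 25 features each, as in the 3rd-level, 8-direction setting):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic energy-feature matrix: 200 images x 25 sub-band energies.
X = np.vstack([rng.normal(0.0, 1.0, (80, 25)),    # normal class
               rng.normal(1.5, 1.0, (120, 25))])  # abnormal class
y = np.array([0] * 80 + [1] * 120)  # 0 = normal, 1 = abnormal

# Feature scaling plus an RBF-kernel SVM; hyperparameters such as C
# and gamma would normally be tuned for best accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")
```

With real features the same pipeline produces the per-level, per-direction accuracies reported in Section 3.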
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Classification stage of SLC using NSST</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-7.png"/>
</fig>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Results and Discussion</title>
<p>The performance of the SLC system is evaluated using the confusion matrix and ROC curve generated by the MATLAB tool. The raw dermoscopic skin lesion images are obtained from the PH<sup>2</sup> database [<xref ref-type="bibr" rid="ref-22">22</xref>], which consists of 200 melanocytic lesion images with a resolution of 768&#x2009;&#x00D7;&#x2009;560 pixels. 80 normal and 120 abnormal images are used for prediction using SVM. <xref ref-type="fig" rid="fig-8">Fig. 8</xref> shows some of the normal and abnormal images in the PH<sup>2</sup> database.</p>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Sample skin images in PH<sup>2</sup> database used in SLC system</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-8.png"/>
</fig>
<p>The dermoscopic images are decomposed by NSST with different numbers of directions (2, 4, 8, 16 and 32) from 1 to 4 levels. From the NSST&#x2019;s sub-bands, energy features are extracted. Finally, the SVM classifier is used to predict whether a dermoscopic image is normal or abnormal. The SVM classification efficiency is evaluated using 10-fold cross validation. The sensitivity, specificity and accuracy [<xref ref-type="bibr" rid="ref-23">23</xref>&#x2013;<xref ref-type="bibr" rid="ref-26">26</xref>] are calculated from the correct identification of skin image samples and are given by,<disp-formula id="eqn-4"><label>(4)</label>
<mml:math id="mml-eqn-4" display="block"><mml:mrow><mml:mi>S</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle></mml:math>
</disp-formula><disp-formula id="eqn-5"><label>(5)</label>
<mml:math id="mml-eqn-5" display="block"><mml:mi>S</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle></mml:math>
</disp-formula><disp-formula id="eqn-6"><label>(6)</label>
<mml:math id="mml-eqn-6" display="block"><mml:mi>A</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mi>u</mml:mi><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle></mml:math>
</disp-formula>where TP &#x2192; True Positive (correct classification of abnormal cases), FN &#x2192; False Negative (misclassification of abnormal cases), TN &#x2192; True Negative (correct classification of normal cases) and FP &#x2192; False Positive (misclassification of normal cases). The performance of the SLC system based on NSST and the SVM classifier is shown in <xref ref-type="table" rid="table-1">Tab. 1</xref>.</p>
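Eqs. (4)&#x2013;(6) follow directly from the four confusion-matrix counts; a minimal sketch:

```python
def classification_metrics(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity, specificity and accuracy (Eqs. 4-6), in percent."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    accuracy = 100.0 * (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

For example, with TP = 112, FN = 8, TN = 80 and FP = 0 (the 3rd-level, 8-direction counts reported below) this gives 93.33&#x0025; sensitivity, 100&#x0025; specificity and 96&#x0025; accuracy.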
<table-wrap id="table-1"><label>Table 1</label>
<caption>
<title>SLC system performances using NSST based energy features and SVM classifier</title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">NSST decomposition<break/>levels</th>
<th align="left" rowspan="2">NSST<break/>directions</th>
<th align="left" rowspan="2">NSST<break/>sub-bands</th>
<th align="left" colspan="3">Performance metrics (&#x0025;)</th>
</tr>
<tr>
<th align="left">Accuracy (&#x0025;)</th>
<th align="left">Sensitivity (&#x0025;)</th>
<th align="left">Specificity (&#x0025;)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="4">1</td>
<td align="left">2</td>
<td align="left">3</td>
<td align="left">67.33</td>
<td align="left">63.93</td>
<td align="left">72.50</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">5</td>
<td align="left">77.00</td>
<td align="left">70.00</td>
<td align="left">87.50</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">9</td>
<td align="left">90.00</td>
<td align="left">87.50</td>
<td align="left">93.75</td>
</tr>
<tr>
<td align="left">16</td>
<td align="left">17</td>
<td align="left">86.50</td>
<td align="left">83.33</td>
<td align="left">91.25</td>
</tr>
<tr>
<td align="left" rowspan="4">2</td>
<td align="left">2</td>
<td align="left">5</td>
<td align="left">71.00</td>
<td align="left">66.67</td>
<td align="left">77.50</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">9</td>
<td align="left">83.00</td>
<td align="left">80.00</td>
<td align="left">87.50</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">17</td>
<td align="left">93.00</td>
<td align="left">91.67</td>
<td align="left">95.00</td>
</tr>
<tr>
<td align="left">16</td>
<td align="left">33</td>
<td align="left">84.00</td>
<td align="left">81.67</td>
<td align="left">87.50</td>
</tr>
<tr>
<td align="left" rowspan="4">3</td>
<td align="left">2</td>
<td align="left">7</td>
<td align="left">71.00</td>
<td align="left">68.33</td>
<td align="left">75.00</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">13</td>
<td align="left">86.50</td>
<td align="left">81.67</td>
<td align="left">93.75</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">25</td>
<td align="left">96.00</td>
<td align="left">93.33</td>
<td align="left">100</td>
</tr>
<tr>
<td align="left">16</td>
<td align="left">49</td>
<td align="left">89.00</td>
<td align="left">83.33</td>
<td align="left">97.50</td>
</tr>
<tr>
<td align="left" rowspan="4">4</td>
<td align="left">2</td>
<td align="left">9</td>
<td align="left">69.50</td>
<td align="left">67.50</td>
<td align="left">72.50</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">17</td>
<td align="left">83.50</td>
<td align="left">79.17</td>
<td align="left">90.00</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">33</td>
<td align="left">94.50</td>
<td align="left">92.50</td>
<td align="left">97.50</td>
</tr>
<tr>
<td align="left">16</td>
<td align="left">65</td>
<td align="left">86.50</td>
<td align="left">80.00</td>
<td align="left">96.25</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>From <xref ref-type="table" rid="table-1">Tab. 1</xref>, it is observed that, compared with the other levels, the 3<sup>rd</sup> level produces the highest classification accuracy of 96&#x0025; using 8-direction NSST features. The sensitivity and specificity reach 93.33&#x0025; and 100&#x0025; respectively at the 3<sup>rd</sup> level with 8 directions. The accuracy is reduced at the 4<sup>th</sup> level for all directions due to redundant features. Among the tested directions (2, 4, 8 and 16), only 8 directions produce the highest classification accuracy at all four NSST levels, yielding 90&#x0025;, 93&#x0025; and 94.5&#x0025; accuracy for NSST levels 1, 2 and 4 respectively. <xref ref-type="fig" rid="fig-9">Figs. 9</xref> and <xref ref-type="fig" rid="fig-10">10</xref> show the confusion matrices and ROCs for the 1<sup>st</sup> and 2<sup>nd</sup> level NSST based energy features with 8 directions.</p>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Confusion matrix for 8-directions and ROC for 1<sup>st</sup> level NSST feature</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-9.png"/>
</fig>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Confusion matrix for 8-directions and ROC for 2<sup>nd</sup> level NSST feature</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-10.png"/>
</fig>
<p>In <xref ref-type="fig" rid="fig-10">Fig. 10</xref>, the confusion matrix shows that 110 abnormal images are correctly classified and 4 abnormal images are misclassified. Also, 76 normal images are correctly classified. The sensitivity and specificity are 91.7&#x0025; and 95&#x0025; for the 2<sup>nd</sup> level NSST features from 8 directions. From the ROC curve, it is observed that the maximum Area Under the Curve (AUC) is 0.93 at 8 directions and the minimum AUC is 0.71 at 2 directions using NSST based energy features with the SVM classifier. <xref ref-type="fig" rid="fig-11">Fig. 11</xref> shows the confusion matrix and ROC for the 3<sup>rd</sup> level NSST based energy features from 8 directions.</p>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Confusion matrix for 8-directions and ROC for 3<sup>rd</sup> level NSST feature</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-11.png"/>
</fig>
<p>In <xref ref-type="fig" rid="fig-11">Fig. 11</xref>, the confusion matrix shows that 112 abnormal images are correctly classified and 8 abnormal images are misclassified. Also, all 80 normal images are correctly classified. The sensitivity and specificity are 93.33&#x0025; and 100&#x0025;. From the ROC curve, it is observed that the maximum AUC is 0.96 at 8 directions and the minimum AUC is 0.71 at 2 directions using NSST based energy features and the SVM classifier. <xref ref-type="fig" rid="fig-12">Fig. 12</xref> shows the confusion matrix and ROC for the 4<sup>th</sup> level NSST based energy features from 8 directions.</p>
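<p>The quoted metrics can be verified directly from the 3<sup>rd</sup>-level confusion-matrix counts. A minimal sketch (the function name is illustrative, not from the paper):</p>

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = recall on abnormal images; specificity = recall on normal images."""
    return tp / (tp + fn), tn / (tn + fp)

# 3rd-level, 8-direction counts from Fig. 11:
# 112 abnormal correct (TP), 8 abnormal missed (FN), all 80 normal correct (TN, FP = 0).
sens, spec = sensitivity_specificity(tp=112, fn=8, tn=80, fp=0)
accuracy = (112 + 80) / 200
print(round(sens * 100, 2), spec * 100, accuracy * 100)  # 93.33 100.0 96.0
```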
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>Confusion matrix for 8-directions and ROC for 4<sup>th</sup> level NSST feature</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-12.png"/>
</fig>
<p>From <xref ref-type="fig" rid="fig-12">Fig. 12</xref>, the confusion matrix shows that 111 abnormal images are correctly classified and 9 abnormal images are misclassified. Also, 78 normal images are correctly classified and 2 normal images are misclassified. The sensitivity and specificity are 92.5&#x0025; and 97.5&#x0025;. From the ROC curve, it is observed that the maximum AUC is 0.94 at 8 directions and the minimum AUC is 0.69 at 2 directions using NSST based energy features and the SVM classifier. Across all the confusion matrices and ROCs for 8 directions, the 3<sup>rd</sup> level produces the highest classification accuracy of 96&#x0025;. <xref ref-type="fig" rid="fig-13">Fig. 13</xref> shows the graphical representation of NSST direction 8 for all 4 levels.</p>
<fig id="fig-13">
<label>Figure 13</label>
<caption>
<title>Graphical representation of NSST direction 8 for all 4 levels</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_22385-fig-13.png"/>
</fig>
<p>From <xref ref-type="fig" rid="fig-13">Fig. 13</xref>, it is clearly observed that the 3<sup>rd</sup> level NSST features from 8 directions produce the highest classification accuracy of 96&#x0025; compared with the other levels of NSST features when using the SVM classifier.</p>
</sec>
<sec id="s4">
<label>4</label>
<title>Conclusions</title>
<p>An efficient skin lesion classification (SLC) system using NSST and an SVM classifier is presented in this study. Initially, the NSST decomposes the input skin image into sub-bands in different directions. The support of the shearlet becomes thinner at higher scales. The energy features extracted from the NSST sub-bands are used as texture descriptors to predict abnormality in the skin images. Finally, an SVM classifier performs the prediction using the 10-fold cross-validation method. The 3<sup>rd</sup> level NSST with 8 directions produces a better classification accuracy of 96&#x0025; than the other levels. In the future, NSST based temporal and spectral features can be utilized for skin cancer diagnosis. Also, a multi-class SVM classifier can be used directly to classify skin cancer types.</p>
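<p>The feature-extraction step described above can be sketched as follows. This is a minimal sketch under stated assumptions: the NSST decomposition itself is assumed to come from an external implementation, synthetic sub-band arrays stand in for its output here, and the helper names are ours:</p>

```python
import numpy as np

def energy_feature(subband):
    """Energy of one sub-band: mean squared coefficient magnitude,
    used as a texture descriptor."""
    return float(np.mean(np.abs(subband) ** 2))

def feature_vector(subbands):
    """One energy value per sub-band -> input vector for the classifier."""
    return [energy_feature(sb) for sb in subbands]

# A 3rd-level, 8-direction NSST yields 1 lowpass + 3*8 = 25 sub-bands
# (following the 1 + levels x directions pattern of Tab. 1); simulate them here.
rng = np.random.default_rng(0)
subbands = [rng.standard_normal((128, 128)) for _ in range(1 + 3 * 8)]
features = feature_vector(subbands)
print(len(features))  # 25
```

<p>In the full pipeline these 25-dimensional vectors, one per image, would be passed to an SVM trained and evaluated with 10-fold cross-validation.</p>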
</sec>
</body>
<back><fn-group>
<fn fn-type="other">
<p><bold>Funding Statement:</bold> The authors received no specific funding for this study.</p>
</fn>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Maury</surname></string-name>, <string-name><given-names>S. K.</given-names> <surname>Singh</surname></string-name>, <string-name><given-names>A. K.</given-names> <surname>Maurya</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name></person-group>, &#x201C;<article-title>GLCM and multi class support vector machine based automated skin cancer classification</article-title>,&#x201D; in <conf-name>IEEE Int. Conf. on Computing for Sustainable Global Development</conf-name>, <conf-loc>New Delhi, India</conf-loc>, pp. <fpage>444</fpage>&#x2013;<lpage>447</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Sonia</surname></string-name></person-group>, &#x201C;<article-title>Melanoma image classification system by NSCT features and Bayes classification</article-title>,&#x201D; <source>International Journal of Advances in Signal and Image Sciences</source>, vol. <volume>2</volume>, no. <issue>2</issue>, pp. <fpage>27</fpage>&#x2013;<lpage>33</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Suganya</surname></string-name></person-group>, &#x201C;<article-title>An automated computer aided diagnosis of skin lesions detection and classification for dermoscopy images</article-title>,&#x201D; in <conf-name>IEEE Int. Conf. on Recent Trends in Information Technology</conf-name>, <conf-loc>Chennai, India</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>5</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Alquran</surname></string-name>, <string-name><given-names>I. A.</given-names> <surname>Qasmieh</surname></string-name>, <string-name><given-names>A. M.</given-names> <surname>Alqudah</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Alhammouri</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Alawneh</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>The melanoma skin cancer detection and classification using support vector machine</article-title>,&#x201D; in <conf-name>IEEE Jordan Conf. on Applied Electrical Engineering and Computing Technologies</conf-name>, <conf-loc>Aqaba, Jordon</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>5</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R. S.</given-names> <surname>Sundar</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Vadivel</surname></string-name></person-group>, &#x201C;<article-title>Performance analysis of melanoma early detection using skin lesion classification system</article-title>,&#x201D; in <conf-name>IEEE Int. Conf. on Circuit, Power and Computing Technologies</conf-name>, <conf-loc>Nagercoil, India</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>5</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>N. C.</given-names> <surname>Lynn</surname></string-name> and <string-name><given-names>Z. M.</given-names> <surname>Kyu</surname></string-name></person-group>, &#x201C;<article-title>Segmentation and classification of skin cancer melanoma from skin lesion images</article-title>,&#x201D; in <conf-name>18th Int. Conf. on Parallel and Distributed Computing, Applications and Technologies</conf-name>, <conf-loc>Taipei, Taiwan</conf-loc>, pp. <fpage>117</fpage>&#x2013;<lpage>122</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>H. R.</given-names> <surname>Mhaske</surname></string-name> and <string-name><given-names>D. A.</given-names> <surname>Phalke</surname></string-name></person-group>, &#x201C;<article-title>Melanoma skin cancer detection and classification based on supervised and unsupervised learning</article-title>,&#x201D; in <conf-name>IEEE Int. Conf. on Circuits, Controls and Communications</conf-name>, <conf-loc>Bangaluru, India</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>5</lpage>, <year>2013</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Kumarapandian</surname></string-name></person-group>, &#x201C;<article-title>Melanoma classification using multiwavelet transform and support vector machine</article-title>,&#x201D; <source>International Journal of MC Square Scientific Research</source>, vol. <volume>10</volume>, no. <issue>3</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>7</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>S. K.</given-names> <surname>Singh</surname></string-name> and <string-name><given-names>A. S.</given-names> <surname>Jalal</surname></string-name></person-group>, &#x201C;<article-title>A robust approach for automatic skin cancer disease classification</article-title>,&#x201D; in <conf-name>IEEE 1st India Int. Conf. on Information Processing</conf-name>, <conf-loc>Delhi, India</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>4</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>N. B.</given-names> <surname>Linsangan</surname></string-name>, <string-name><given-names>J. J.</given-names> <surname>Adtoon</surname></string-name> and <string-name><given-names>J. L.</given-names> <surname>Torres</surname></string-name></person-group>, &#x201C;<article-title>Geometric analysis of skin lesion for skin cancer using image processing</article-title>,&#x201D; in <conf-name>IEEE 10th International Conf. on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management</conf-name>, <conf-loc>Baguio City, Philippines</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>5</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Sumithra</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Suhil</surname></string-name> and <string-name><given-names>D. S.</given-names> <surname>Guru</surname></string-name></person-group>, &#x201C;<article-title>Segmentation and classification of skin lesions for disease diagnosis</article-title>,&#x201D; <source>Procedia Computer Science</source>, vol. <volume>45</volume>, pp. <fpage>76</fpage>&#x2013;<lpage>85</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Serte</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Demirel</surname></string-name></person-group>, &#x201C;<article-title>Gabor wavelet-based deep learning for skin lesion classification</article-title>,&#x201D; <source>Computers in Biology and Medicine</source>, vol. <volume>113</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>7</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Jain</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Pise</surname></string-name></person-group>, &#x201C;<article-title>Computer aided melanoma skin cancer detection using image processing</article-title>,&#x201D; <source>Procedia Computer Science</source>, vol. <volume>48</volume>, pp. <fpage>735</fpage>&#x2013;<lpage>740</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L. K.</given-names> <surname>Ferris</surname></string-name>, <string-name><given-names>J. A.</given-names> <surname>Harkes</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Gilbert</surname></string-name>, <string-name><given-names>D. G.</given-names> <surname>Winger</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Golubets</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Computer-aided classification of melanocytic lesions using dermoscopic images</article-title>,&#x201D; <source>Journal of the American Academy of Dermatology</source>, vol. <volume>73</volume>, no. <issue>5</issue>, pp. <fpage>769</fpage>&#x2013;<lpage>776</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Shamganth</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Melanoma</surname></string-name></person-group>, &#x201C;<article-title>Classification using multi wavelet transform and support vector machine</article-title>,&#x201D; <source>International Journal of MC Square Scientific Research</source>, vol. <volume>11</volume>, no. <issue>3</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>7</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Guorong</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Luping</surname></string-name> and <string-name><given-names>F.</given-names> <surname>Dongzhu</surname></string-name></person-group>, &#x201C;<article-title>Multi-focus image fusion based on non-subsampled shearlet transform</article-title>,&#x201D; <source>IET Image Processing</source>, vol. <volume>7</volume>, no. <issue>6</issue>, pp. <fpage>633</fpage>&#x2013;<lpage>639</lpage>, <year>2013</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Ouerghi</surname></string-name>, <string-name><given-names>O.</given-names> <surname>Mourali</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Zagrouba</surname></string-name></person-group>, &#x201C;<article-title>Non-subsampled shearlet transform based MRI and PET brain image fusion using simplified pulse coupled neural network and weight local features in YIQ colour space</article-title>,&#x201D; <source>IET Image Processing</source>, vol. <volume>12</volume>, no. <issue>10</issue>, pp. <fpage>1873</fpage>&#x2013;<lpage>1880</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>O.</given-names> <surname>Deperlio&#x011F;lu</surname></string-name></person-group>, &#x201C;<article-title>Classification of heart sounds with re-sampled energy method</article-title>,&#x201D; in <conf-name>IEEE 26th Signal Processing and Communications Applications Conf.</conf-name>, <conf-loc>Izmir, Turkey</conf-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>4</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Mohankumar</surname></string-name></person-group>, &#x201C;<article-title>Analysis of different wavelets for brain image classification using support vector machine</article-title>,&#x201D; <source>International Journal of Advances in Signal and Image Sciences</source>, vol. <volume>2</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>4</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Vijaya Arjunan</surname></string-name></person-group>, &#x201C;<article-title>ECG signal classification based on statistical features with SVM classification</article-title>,&#x201D; <source>International Journal of Advances in Signal and Image Sciences</source>, vol. <volume>2</volume>, no. <issue>1</issue>, pp. <fpage>5</fpage>&#x2013;<lpage>10</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="web"><person-group person-group-type="author"><collab>PH2 Database Link</collab></person-group>: <uri xlink:href="https://www.fc.up.pt/addi/ph2&#x0025;20database.html">https://www.fc.up.pt/addi/ph2&#x0025;20database.html</uri>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Murugan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Bhardwaj</surname></string-name>, and <string-name><given-names>T. R.</given-names> <surname>Ganeshbabu</surname></string-name></person-group>, &#x201C;<article-title>Object recognition based on empirical wavelet transform</article-title>,&#x201D; <source>International Journal of MC Square Scientific Research</source>, vol. <volume>7</volume>, no. <issue>1</issue>, pp. <fpage>74</fpage>&#x2013;<lpage>80</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Jose</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Gautam</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Tiwari</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Tiwari</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Suresh</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>An image quality enhancement scheme employing adolescent identity search algorithm in the NSST domain for multimodal medical image fusion</article-title>,&#x201D; <source>Biomedical Signal Processing and Control</source>, vol. <volume>66</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>10</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Khare</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Khare</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Srivastava</surname></string-name></person-group>, &#x201C;<article-title>Shearlet transform based technique for image fusion using median fusion rule</article-title>,&#x201D; <source>Multimedia Tools and Applications</source>, vol. <volume>80</volume>, no. <issue>8</issue>, pp. <fpage>11491</fpage>&#x2013;<lpage>11522</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Ding</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Zhou</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Nie</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Hou</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name></person-group>, &#x201C;<article-title>Brain medical image fusion based on dual-branch CNNs in NSST domain</article-title>,&#x201D; <source>BioMedical Research International</source>, vol. <volume>2020</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>15</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Jacob</surname></string-name> and <string-name><given-names>J. D.</given-names> <surname>Rosita</surname></string-name></person-group>, &#x201C;<article-title>Fractal model for skin cancer diagnosis using probabilistic classifiers</article-title>,&#x201D; <source>International Journal of Advances in Signal and Image Sciences</source>, vol. <volume>7</volume>, no. <issue>1</issue>, pp. <fpage>21</fpage>&#x2013;<lpage>29</lpage>, <year>2021</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>