<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">14347</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2021.014347</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Automatic Segmentation of Liver from Abdominal Computed Tomography Images Using Energy Feature</article-title>
<alt-title alt-title-type="left-running-head">Automatic Segmentation of Liver from Abdominal Computed Tomography Images Using Energy Feature</alt-title>
<alt-title alt-title-type="right-running-head">Automatic Segmentation of Liver from Abdominal Computed Tomography Images Using Energy Feature</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western">
<surname>Rajamanickam</surname>
<given-names>Prabakaran</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-2" contrib-type="author" corresp="yes">
<name name-style="western">
<surname>Darmanayagam</surname>
<given-names>Shiloah Elizabeth</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
<email>shiloah@annauniv.edu</email>
</contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western">
<surname>Raj</surname>
<given-names>Sunil Retmin Raj Cyril</given-names>
</name>
<xref ref-type="aff" rid="aff-2">2</xref></contrib>
<aff id="aff-1"><label>1</label><institution>University Department of Computer Science Engineering, Anna University</institution>, <addr-line>Chennai, 600025</addr-line>, <country>India</country></aff>
<aff id="aff-2"><label>2</label><institution>University Department of Information Technology, MIT Campus</institution>, <addr-line>Chennai, 600044</addr-line>, <country>India</country></aff>
</contrib-group>
<author-notes><corresp id="cor1">&#x002A;Corresponding Author: Shiloah Elizabeth Darmanayagam. Email: <email>shiloah@annauniv.edu</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2020-11-23">
<day>23</day>
<month>11</month>
<year>2020</year>
</pub-date>
<volume>67</volume>
<issue>1</issue>
<fpage>709</fpage>
<lpage>722</lpage>
<history>
<date date-type="received">
<day>15</day>
<month>09</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>10</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2021 Rajamanickam, Darmanayagam and Raj</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Rajamanickam, Darmanayagam and Raj</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_14347.pdf"></self-uri>
<abstract>
<p>Liver segmentation is one of the challenging tasks in detecting and classifying liver tumors from Computed Tomography (CT) images. Segmentation of the liver is an intricate task because the organ is highly vascularized. This paper proposes an algorithm for automatic seed point selection using the energy feature, for use in a level set algorithm that segments the liver region in CT scans. The effectiveness of the method can be assessed by using it in a model that classifies liver CT images as tumorous or not. This involves segmentation of the region of interest (ROI) from the segmented liver, extraction of shape and texture features from the segmented ROI, and classification of the ROIs as tumorous or not by a classifier trained on the extracted features. In this work, the proposed seed point selection technique has been used in a level set algorithm to segment the liver region in CT scans, and the ROIs have been extracted using Fuzzy C-Means (FCM) clustering. The dataset used in this work has been collected from various repositories and scan centers. The proposed segmentation model reduces the area overlap error, offering the intended accuracy and consistency, and gives better results than existing algorithms. Fast execution is another advantage of this method, which in turn helps the radiologist to ascertain abnormalities promptly.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Liver segmentation</kwd>
<kwd>automatic seed point</kwd>
<kwd>tumor segmentation</kwd>
<kwd>classification</kwd>
<kwd>fuzzy C means clustering</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>The liver is a reddish-brown gland that carries out more than 500 essential functions of human life. Chronic Liver Disease (CLD) evolves through several stages, each with distinct pathological characteristics. Steatosis, otherwise known as fatty liver infiltration, is the preliminary stage of liver anomaly and arises from a surge of fat deposits in hepatocytes. Hepatic cirrhosis is an acute, advanced malady of the liver, characterized by widespread damage with scarring of the tissue and aggregation of cells. In its extremity, cirrhosis can progress to Hepatocellular Carcinoma (HCC), or hepatoma, a primary malignancy of the liver. Hepatitis B and Hepatitis C appear to be the most significant etiologies of hepatocellular carcinoma. High resolution imaging techniques in the therapeutic field are decisively beneficial: they offer the physician better diagnosis and follow-up of the patient&#x2019;s ailments.</p>
<p>Computer Aided Diagnosis (CAD) is a technical approach which assists radiologists to elucidate images more precisely and to discern potential findings, so as to exclude fallacious interpretation. CT images possess a higher signal to noise ratio, so detections are more exact. CT images are immensely preferred for hepatic anomaly diagnosis because they offer cross-sectional images. Therefore, CAD procedures give doctors a second opinion in the diagnosis of liver diseases.</p>
<p>Manual segmentation of the liver has various problems. Because of the intricate anatomy of the human body, the boundary of the liver cannot be ascertained exactly. Often, different interventional radiologists identify it differently. These pitfalls lead to under-segmentation or over-segmentation [<xref ref-type="bibr" rid="ref-1">1</xref>]. The goal of medical image segmentation is to delineate the image areas depicting the different anatomies. This remains an arduous task because of the intersection and overlap of delicate tissues, large intra-organ variation, and similar voxel intensities of adjacent organs. The liver is an organ with a high degree of vascularization, which implies that it encompasses many blood vessels within its tissue [<xref ref-type="bibr" rid="ref-2">2</xref>]. Owing to this fact, conventional perception of hepatic tumours from similar gray intensities is an inordinately cumbersome task. Liver segmentation is usually considered the toughest because of the similar intensities of neighboring organs.</p>
<p>The following are the four steps involved in a CAD system.</p>
<p>Pre-processing: eliminates unwanted noise and improves the consistency of the image.</p>
<p>Segmentation: extricates the liver from the abdominal section of the computed tomography images. Then the tumor section, called the region of interest (ROI), is disengaged from the segmented liver.</p>
<p>Feature Extraction: texture and shape features are extracted from the segmented tumor region for the purpose of classification.</p>
<p>Classification: classifiers are adopted to ascertain hepatic ailments based on the extracted features.</p>
<p>The remainder of the paper is organized as follows: Section 2 discusses the related work, Section 3 explains the architecture of the proposed work, Section 4 presents the experimental result and analysis, and Section 5 presents the conclusion and future work.</p>
<p>The CT image dataset used in this work has been collected from scan centers and repositories. MATLAB 2018 has been used to implement the proposed work.</p>
<p>This work focuses on automatic seed point selection using energy feature for segmentation of liver.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Work</title>
<p>Cheema et al. [<xref ref-type="bibr" rid="ref-3">3</xref>] have suggested a method that effects the segmentation of the liver with greater particularity by debarring noise from CT images. Noiseless images are extracted by a residual convolutional neural network (LER-CN) [<xref ref-type="bibr" rid="ref-3">3</xref>] that detects and brings out an ultimately noiseless liver model, which enables better interpretation. Its noise removal component (NRC) transmits the denoised Low Dose CT (LDCT) images to a structural preservation component (SPC) in order to revamp and collate the acquired image while sustaining all of its structures. LER-CN differentiates the fringes of the liver and then discriminates the organ&#x2019;s texture to the required level. The working of the NRC and SPC is kept synchronized so that the input and output images agree with the mapped estimation. LER-CN achieves fast output by adopting a last in, first out (LIFO) arrangement, in which the last NRC layer is linked to the initial layer of the SPC and the first NRC layers are connected to the last SPC layers; LIFO ensures a smooth flow of data through LER-CN. The LDCT images are reconstructed with successive de-convolutional layers, so that the output images are tuned to the input images. Overlapping patches of specified dimensions are used in order to augment the training samples.</p>
<p>Song et al. [<xref ref-type="bibr" rid="ref-4">4</xref>] have proposed an improved confidence-connected liver segmentation method based on the imaging characteristics in three views [<xref ref-type="bibr" rid="ref-4">4</xref>]: reducing noise, selecting the seed point automatically, and finally combining the liver contours extracted from the coronal, sagittal and cross-sectional images. This method reduces the frequency of wrong classifications caused by the closeness of neighboring organs. Further, it enhances accuracy and achieves automatic segmentation without user intervention. It also efficiently handles the complexity of delineating the edges of the liver.</p>
<p>Elaziz et al. [<xref ref-type="bibr" rid="ref-5">5</xref>] have proposed an advanced algorithm for automatic segmentation of the liver and hepatic tumors from abdominal CT images. Their algorithm [<xref ref-type="bibr" rid="ref-5">5</xref>] abates computation time by excluding the sections of other structures, as most present-day strategies are extremely time-consuming. A region growing method, initiated from an automatically selected seed point, is adopted to segment the liver.</p>
<p>Peng et al. [<xref ref-type="bibr" rid="ref-6">6</xref>] have put forth a level set method for segmentation of the input image based on the local section gradient. In some existing methods, proper results cannot be achieved because of similar intensities and weak edges, so they considered local gradient information to overcome this issue. First, the correspondence between the segmented object and the image gradient to local minima and maxima over every picture element is presented based on two assumptions, from which a new pixel classification method based on a weighted Euclidean distance is introduced [<xref ref-type="bibr" rid="ref-6">6</xref>]. Secondly, to improve the anti-noise capacity of the proposed gradient information based model, a combination of the variational level set and image spatial adjacency information is implemented. Thirdly, a new propagation procedure with the edge indicator function is incorporated into the level set function to classify picture elements in similar regions of the segmentation object, and also to make the proposed method less sensitive to initial contours and to stabilize the numerical implementation.</p>
<p>Li et al. [<xref ref-type="bibr" rid="ref-7">7</xref>] have proposed a novel approach to automatic liver segmentation which coalesces shape based initialization and a deformable graph cut methodology with shape constraints, intercepting the snags that arise from the specific oddities of the liver&#x2019;s anatomical pattern and the caliber of the picture elements [<xref ref-type="bibr" rid="ref-7">7</xref>]. As an exemplification of its performance, 50 CT scan images have been made available in a publicly accessible domain. The experimental outcome of their method was efficacious and precise for progressive perception of the liver surface. They asserted that their proposed method can identify the hepatic surface with lower error and can successfully cope with under-segmentation and over-segmentation.</p>
<p>Singh et al. [<xref ref-type="bibr" rid="ref-8">8</xref>] have proposed segmentation of the liver using hybrid K-means clustering and the level set. Their method [<xref ref-type="bibr" rid="ref-8">8</xref>] uses a hybrid clustering algorithm to detect the liver. The hybrid clustering is accomplished by determining the specified number of clusters in the image using the K-means clustering technique, after which Ant Colony Optimization (ACO) is exerted to treat the fallaciously graded k-clusters. Applying the level set after clustering thus increases the accuracy of segmentation.</p>
<p>Altarawneh et al. [<xref ref-type="bibr" rid="ref-9">9</xref>] have proposed a revamped model, the modified distance regularized level set, that adds an innovative balloon force to the existing distance regularized level set evolution (DRLSE) model to superintend the orientation of the growing contour under the various preferred conditions. The newly included balloon force [<xref ref-type="bibr" rid="ref-9">9</xref>] discourages the evolving contour from exceeding the liver boundary or leaking at a region that is associated with a weak edge, or has no edge at all. This model deals with over-segmentation more effectively than the DRLSE model.</p>
<p>Heckel et al. [<xref ref-type="bibr" rid="ref-10">10</xref>] have proposed a swift approach for partial volume correction of solid lesions in CT scans. Their algorithm [<xref ref-type="bibr" rid="ref-10">10</xref>] is a generalization of the segmentation-based partial volume analysis (SPVA) proposed by Kuhnigk. It augments Kuhnigk&#x2019;s approach so that it can be applied to any given segmentation outcome after the manual delineation of any rigid solid target, comprising non-analogous abscesses and lesions with non-identical physical surroundings. The approach greatly improves the veracity of volumetric quantification, as has been shown on various phantom data. The algorithm has proven to be a more authentic method for volume estimation than voxel counting for most common abscesses or tumors assessed during all kinds of therapeutic treatment, including chemotherapy. It is applicable to other solid lesions, since all vital information is extracted from the given CT image, based on a given segmentation mask.</p>
<p>Li et al. [<xref ref-type="bibr" rid="ref-11">11</xref>] have put forward a supervised variational level set that mobilizes a statistical energy function with a weighted probability approximation, applied to liver segmentation, for better estimation of the absolute potential distribution [<xref ref-type="bibr" rid="ref-11">11</xref>] and discrimination of the statistical intensity differences in all dimensional aspects. The prevailing impreciseness in the potential distribution is resolved by the numerical function at both the back end and the user interface.</p>
<p>Liu et al. [<xref ref-type="bibr" rid="ref-12">12</xref>] have put forth a technique in which pixel level and region level conditions are jointly clustered by the suggested region level Hidden Markov Random Field-Fuzzy C-Means (HMRF-FCM) algorithm, in which region-level contextual information is structured by a region level Markov Random Field (RMRF) and is further amplified by the mean template of memberships. The HMRF-FCM algorithm [<xref ref-type="bibr" rid="ref-12">12</xref>] has been enhanced by abstracting potential outcomes into a level-one region, where the voluminous parameters are modeled by the MRF and are enhanced by the region-level mean template of fuzzy memberships.</p>
<p>Liu et al. [<xref ref-type="bibr" rid="ref-13">13</xref>] have proposed an advanced unsupervised FCM-based image segmentation method, with scrupulous attention to the selection of local information. For better control over the strength and latitude of the responding pixel, they have integrated region level local information into the fuzzy clustering technique. First, a novel distance task is initiated by merging region based and pixel based distance functions together, in order to enhance the relationship between pixels which have similar local characteristics. Secondly, a probability function is propounded by assimilating the variances between neighboring regions into the mean model of the fuzzy membership function, which is accustomed to select the local spatial constraints [<xref ref-type="bibr" rid="ref-13">13</xref>] by a tradeoff weight according to whether the pixel belongs to a homogeneous region or not. The proposed method intensifies coherence between pixels within the same region and prevents the segment fringes from becoming smooth by integrating the region based information into the spatially extended constraints.</p>
<p>From the related work, it is found that fixing the seed point automatically is a challenging task because many neighboring organs are connected. So, in this paper an automatic seed point selection strategy using the energy feature has been proposed. This segmentation approach has resulted in a reduction in area overlap error and execution time as compared to existing work.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>Architecture</title>
<p>The proposed model for automatic seed point selection can be used in the level set segmentation technique, which will automatically detect the liver in CT scans, and a classifier can be used to classify whether a tumor is present or not. The system requires CT images as input. A guided filter is used for pre-processing of the CT images. With the help of the energy feature, which is one of the texture features, the seed point is selected automatically, and the x and y coordinates of the seed point are given as the initial starting point of the level set algorithm. This method helps to segment the liver, and the segmented liver portion is then stored separately. A fuzzy c-means algorithm is applied on the stored liver part to segment the tumors. Additional features are extracted for the tumor. The extracted features are labelled with class labels to form the training set. Finally, the SVM classifier is made ready for classification. <xref ref-type="fig" rid="fig-1">Fig. 1</xref> shows the architecture of the automatic liver tumor segmentation model.</p>
<fig id="fig-1">
<label>Figure 1</label> 
<caption>
<title>Architecture of the Automatic Liver Tumor Segmentation Model</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-1.png"/>
</fig>
<sec id="s3_1">
<label>3.1</label>
<title>Preprocessing</title>
<p>Preprocessing is the procedure carried out before an image is taken into the system. The original CT images from various sessions possess different contrast, so the contrast is adjusted using the histogram of liver intensity. A guided filter is then used as an edge-preserving smoothing operator, like the bilateral filter, but with better behavior around the edges of an object.</p>
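<p>As an illustration, the edge-preserving smoothing step can be sketched with a minimal single-channel guided filter, written here in Python/NumPy rather than the MATLAB used in this work; the <monospace>radius</monospace> and <monospace>eps</monospace> values are illustrative assumptions, not the paper&#x2019;s settings.</p>
<preformat>
```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Edge-preserving smoothing: the output is a locally linear transform of
    the guide image, so strong edges in the guide survive the smoothing."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)       # per-window linear coefficients
    b = mean_s - a * mean_g
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b   # averaged linear model applied to the guide
```
</preformat>
<p>For the self-guided case used when smoothing a CT slice, the slice itself is passed as both <monospace>guide</monospace> and <monospace>src</monospace>.</p>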
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Liver Segmentation</title>
<p>Segmentation of an image is the method of assigning a label to each and every pixel of an intended object. Segmentation of the liver is basically the toughest part, since the intensity levels of the neighboring organs and the liver are quite similar in a CT scan [<xref ref-type="bibr" rid="ref-14">14</xref>]. Here, the seed point is selected automatically with the help of the energy feature, and a modified level set algorithm is used for liver segmentation. This is the most important process, as any under-segmentation or over-segmentation will result in missing a part or the whole of the tumors.</p>
<sec id="s3_2_1">
<label>3.2.1</label>
<title>Energy Based Seed Point Selection</title>
<p>The first step in liver segmentation is to choose the position of the seed point from where the contour grows. This is done using the energy feature.</p>
<fig id="fig-11">
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-11.png"/>
</fig>
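<p>The algorithm above can be read, in outline, as computing the texture energy (the angular second moment of the local gray-level distribution) over image blocks and taking the center of the highest-energy, i.e., most homogeneous, block as the seed. The following is a rough sketch of that idea, with illustrative block size and quantization levels rather than the paper&#x2019;s exact procedure:</p>
<preformat>
```python
import numpy as np

def energy_map(image, block=16, levels=16):
    """Blockwise energy of the gray-level histogram; values near 1 indicate
    homogeneous blocks, low values indicate noisy or textured ones.
    `image` is assumed to be a float array scaled to [0, 1]."""
    h, w = image.shape
    q = np.clip((image * levels).astype(int), 0, levels - 1)  # quantize gray levels
    rows, cols = h // block, w // block
    emap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = q[i * block:(i + 1) * block, j * block:(j + 1) * block]
            hist = np.bincount(patch.ravel(), minlength=levels) / patch.size
            emap[i, j] = np.sum(hist ** 2)   # energy = sum of squared probabilities
    return emap

def select_seed(image, block=16):
    """Return the (row, col) center of the highest-energy block."""
    emap = energy_map(image, block)
    i, j = np.unravel_index(np.argmax(emap), emap.shape)
    return (i * block + block // 2, j * block + block // 2)
```
</preformat>
<p>In practice the search would be restricted to the expected liver region of the abdominal slice, so that homogeneous background blocks do not dominate.</p>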
</sec>
<sec id="s3_2_2">
<label>3.2.2</label>
<title>Level Set Segmentation</title>
<p>Numerous region based level set methods have been suggested for segmenting an image. The user specifies an initial guess for the contour, which is then moved by image driven forces to the boundaries of the desired objects; here, the seed point generated by the energy based seed point selection algorithm is taken as that initial point. In this method, two different kinds of forces are taken into account: an internal force, defined within the curve, keeps the model smooth during the deformation process, while an external force, computed from the underlying image data, advances the model towards the target or other specified features within the image. The proposed modified distance regularized level set algorithm includes a balloon force which controls the growth of the contour along weak or rough edges.</p>
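<p>A highly simplified sketch of such an evolution is given below, with the balloon force expanding the zero level set outward from the seed and an edge indicator slowing it near strong gradients. This is an illustrative toy in Python/NumPy, not the paper&#x2019;s exact distance regularized formulation; Gaussian smoothing of the level set function stands in for proper regularization.</p>
<preformat>
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def level_set_grow(image, seed, iters=20, dt=1.0, balloon=1.0):
    """Grow a contour from `seed` using an edge-stopped balloon force.
    The segmented region is where the returned function is negative."""
    smoothed = gaussian_filter(image, 2.0)
    gy, gx = np.gradient(smoothed)
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)   # edge indicator: small on strong edges
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # initialize phi as the signed distance to a small disk around the seed
    phi = np.sqrt((yy - seed[0]) ** 2 + (xx - seed[1]) ** 2) - 5.0
    for _ in range(iters):
        py, px = np.gradient(phi)
        grad_norm = np.sqrt(px ** 2 + py ** 2) + 1e-8
        # balloon force: the interface moves outward, fast where g is large,
        # slowly near edges where g is small
        phi = phi - dt * balloon * g * grad_norm
        phi = gaussian_filter(phi, 0.5)   # crude regularization of phi
    return phi
```
</preformat>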
</sec>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Tumor Candidate Segmentation</title>
<p>Fuzzy C-Means Clustering (FCM) is administered on the segmented liver image to extract tumor portions from it. FCM is commonly used for segmentation. It assigns a membership to each image pixel for every cluster center, based on the distance between the cluster center and the pixel. Then, the segmented tumor portions are saved.</p>
<p>Working of the FCM:</p>
<p>This algorithm works by grouping or clustering data points around the cluster centers. For every data point, the sum of its memberships over all clusters must be equal to one. After each epoch, the memberships and the cluster centers are updated using the following formulas:</p>
<p><disp-formula id="eqn-1">
<label>(1)</label><alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-1.png"/>
<tex-math id="tex-eqn-1"><![CDATA[$$\begin{equation}Mij=\frac{1}{\sum_{k=1}^{n} \left(\frac{aij}{aik}\right)^{2/(f-1)}}
 \label{eqn-1} \end{equation}$$]]></tex-math>
<mml:math id="mml-eqn-1" display="block"><mml:mrow></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mstyle displaystyle='true'><mml:mstyle displaystyle='true'><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo> </mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo lspace='0pt' rspace='0pt'>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup></mml:mstyle></mml:mstyle><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>a</mml:mi><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>/</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mi>f</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow><mml:mrow></mml:mrow></mml:math>
</alternatives></disp-formula></p>
<p><disp-formula id="eqn-2">
<label>(2)</label><alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-2.png"/>
<tex-math id="tex-eqn-2"><![CDATA[$$\begin{equation}vj=\frac{ \left(\sum_{i=1}^{d} \left(Mij\right)^{f}Xi\right)}{ \left(\sum_{i=1}^{d} \left(Mij\right)^{f}\right)}\quad \forall j = 1, 2, 3, 4\ldots c\label{eqn-2}\end{equation}$$]]></tex-math>
<mml:math id="mml-eqn-2" display="block"><mml:mrow></mml:mrow><mml:mrow><mml:mi>v</mml:mi><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:mstyle displaystyle='true'><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo> </mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo lspace='0pt' rspace='0pt'>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msubsup></mml:mstyle></mml:mstyle><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msup><mml:mi>X</mml:mi><mml:mi>i</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle='true'><mml:mstyle displaystyle='true'><mml:msubsup><mml:mrow><mml:mo>&#x2211;</mml:mo> </mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo lspace='0pt' rspace='0pt'>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msubsup></mml:mstyle></mml:mstyle><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mspace width="1em"/><mml:mo>&#x2200;</mml:mo><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x2026;</mml:mo><mml:mi>c</mml:mi></mml:mrow><mml:mrow></mml:mrow></mml:math>
</alternatives></disp-formula></p>
<p>where,</p>
<p>&#x2018;<italic>d</italic>&#x2019;&#x2013;-The number of data points.</p>
<p>&#x2018;<italic>vj</italic>&#x2019;&#x2013;-<italic>j</italic>th cluster center</p>
<p>&#x2018;<italic>f</italic>&#x2019;&#x2013;-The index of fuzziness m with the limit of [1, <inline-formula id="ieqn-5"><alternatives><inline-graphic xlink:href="ieqn-5.png"/><tex-math id="tex-ieqn-5"><![CDATA[$\infty$]]></tex-math><mml:math id="mml-ieqn-5"><mml:mi>&#x221E;</mml:mi></mml:math></alternatives></inline-formula>].</p>
<p>&#x2018;<italic>n</italic>&#x2019;&#x2013;-The total number of cluster center</p>
<p>&#x2018;<italic>Mij</italic>&#x2019;&#x2013;-The membership of <italic>i</italic>th data to <italic>j</italic>th cluster center.</p>
<p>&#x2018;<italic>aij</italic>&#x2019;&#x2013;-The Euclidean distance between <italic>i</italic>th data and <italic>j</italic>th cluster center.</p>
<p>The ultimate aim of FCM algorithm is to minimize:</p>
<p><disp-formula id="eqn-3">
<label>(3)</label><alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-3.png"/>
<tex-math id="tex-eqn-3"><![CDATA[$$\begin{equation}
J \left(U,V\right)=\sum_{i=1}^{d}\sum_{j=1}^{n} \left(Mij\right)^{f} \left\| xi-vj\right\| ^{2}
 \label{eqn-3}
\end{equation}$$]]></tex-math>
<mml:math id="mml-eqn-3" display="block"><mml:mi>J</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>U</mml:mi><mml:mo>,</mml:mo><mml:mi>V</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:mstyle displaystyle='true'><mml:munderover><mml:mrow><mml:mo>&#x2211;</mml:mo> </mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo lspace='0pt' rspace='0pt'>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:munderover></mml:mstyle></mml:mstyle><mml:mstyle displaystyle='true'><mml:mstyle displaystyle='true'><mml:munderover><mml:mrow><mml:mo>&#x2211;</mml:mo> </mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo lspace='0pt' rspace='0pt'>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover></mml:mstyle></mml:mstyle><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mrow><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mi>v</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mo>&#x2225;</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></alternatives></disp-formula></p>
<p>where &#x2018;<italic>&#x7c;&#x7c;x<sub>i</sub></italic> &#x2212; <italic>v<sub>j</sub>&#x7c;&#x7c;</italic>&#x2019; is the Euclidean distance between the <italic>i</italic>th data point and the <italic>j</italic>th cluster center.</p>
<list list-type="order">
<list-item><p>Select &#x2018;<italic>n</italic>&#x2019; cluster centers randomly.</p></list-item>
<list-item><p>Determine the fuzzy membership &#x2018;<italic>M<sub>ij</sub></italic>&#x2019; using the formula given in <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>.</p></list-item>
<list-item><p>Calculate the updated cluster centers &#x2018;<inline-formula id="ieqn-6"><alternatives><inline-graphic xlink:href="ieqn-6.png"/><tex-math id="tex-ieqn-6"><![CDATA[$\textit{v}_{j}$]]></tex-math><mml:math id="mml-ieqn-6"><mml:msub><mml:mrow><mml:mstyle class="text"><mml:mtext class="textit" mathvariant="italic">v</mml:mtext></mml:mstyle></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></alternatives></inline-formula>&#x2019; using the formula given in <xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref>.</p></list-item>
<list-item><p>Continue the previous two steps until the minimal &#x2018;<italic>J</italic>&#x2019; value is attained.</p></list-item>
</list>
<p>where &#x2018;<italic>J</italic>&#x2019; is the objective function.</p>
<p><bold><italic>Segmentation of ROI</italic></bold></p>
<p><bold>Input:</bold> Segmented liver image (.jpeg)</p>
<p><bold>Output:</bold> Tumor candidate (if any)</p>
<p>1. Choose the number of clusters c between 2 and n, and the exponential weight Mu
<p>(<inline-formula id="ieqn-7"><alternatives><inline-graphic xlink:href="ieqn-7.png"/><tex-math id="tex-ieqn-7"><![CDATA[$1 < \text{M}\text{u}< \infty$]]></tex-math><mml:math id="mml-ieqn-7"><mml:mn>1</mml:mn><mml:mo>&#x003C;</mml:mo><mml:mstyle class="text"><mml:mtext>M</mml:mtext></mml:mstyle><mml:mstyle class="text"><mml:mtext>u</mml:mtext></mml:mstyle><mml:mo>&#x003C;</mml:mo><mml:mi>&#x221E;</mml:mi></mml:math></alternatives></inline-formula>), and termination criteria and partition matrix, U.</p>
<p>2. Determine the fuzzy cluster centers <inline-formula id="ieqn-8"><alternatives><inline-graphic xlink:href="ieqn-8.png"/><tex-math id="tex-ieqn-8"><![CDATA[$v_{i}^{1}| i=1, 2, 3, 4\ldots, c$]]></tex-math><mml:math id="mml-ieqn-8"><mml:msubsup><mml:mrow><mml:mi>v</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>|</mml:mo><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>c</mml:mi></mml:math></alternatives></inline-formula> using U<sup>1</sup>.</p>
<p>3. Estimate the new partition matrix U<sup><italic>l</italic>+1</sup>.</p>
<p>4. Compute the change in the partition matrix, <inline-formula id="ieqn-9"><alternatives><inline-graphic xlink:href="ieqn-9.png"/><tex-math id="tex-ieqn-9"><![CDATA[$\Delta = || \mathrm{U}^{l+1}-\mathrm{U}^{l}|| =\max_{ij}|u_{ij}^{l+1}-u_{ij}^{l}|$]]></tex-math><mml:math id="mml-ieqn-9"><mml:mi>&#x0394;</mml:mi><mml:mo>=</mml:mo><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant="normal"><mml:mi>U</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant="normal"><mml:mi>U</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msup><mml:mo>|</mml:mo><mml:mo>|</mml:mo><mml:mo>=</mml:mo><mml:munder class="msub"><mml:mrow><mml:mo class="qopname"> max</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:munder><mml:mo>|</mml:mo><mml:msubsup><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi><mml:mo lspace='0pt' rspace='0pt'>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>u</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msubsup><mml:mo>|</mml:mo></mml:math></alternatives></inline-formula>; if it exceeds the termination criterion, return to step 2, otherwise stop.</p>
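The FCM procedure above can be sketched in Python with NumPy (the original work used MATLAB); the function name, the random initialization, and the default parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def fcm(X, c, mu=2.0, tol=1e-5, max_iter=100, seed=0):
    """Fuzzy C-Means on data X of shape (n_samples, n_features).

    c   : number of clusters (2 <= c <= n_samples)
    mu  : exponential weight (1 < mu < infinity)
    tol : termination criterion on max |U_new - U_old|
    Returns the membership matrix U (c, n_samples) and centers V (c, n_features).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Step 1: random initial partition matrix; memberships of each point sum to 1.
    U = rng.random((c, n))
    U /= U.sum(axis=0)

    for _ in range(max_iter):
        # Step 2: cluster centers as membership-weighted means (Eq. (2)).
        W = U ** mu
        V = (W @ X) / W.sum(axis=1, keepdims=True)
        # Step 3: new memberships from inverse distances to the centers (Eq. (1)).
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # shape (c, n)
        d = np.fmax(d, 1e-12)  # guard against division by zero
        U_new = d ** (-2.0 / (mu - 1.0))
        U_new /= U_new.sum(axis=0)
        # Step 4: stop when the partition matrix changes by less than tol.
        delta = np.max(np.abs(U_new - U))
        U = U_new
        if delta < tol:
            break
    return U, V
```

For tumor candidate extraction, each pixel intensity of the segmented liver would form one sample, and the cluster whose center differs most from normal liver tissue would give the tumor mask.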
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Experimental Results and Analysis</title>
<p>The proposed work was implemented in MATLAB. An abdominal CT image is fed as input, and a seed point is generated from it using the proposed automatic seed point selection algorithm.</p>
<p>A sample input CT image is shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>. It is processed using a guided filter to smooth the boundaries and facilitate more accurate level set segmentation. The processed CT image, shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>, is then used for level set segmentation, which yields the segmented liver region.</p>
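The guided-filter pre-processing step can be sketched as follows: a minimal gray-scale implementation of the guided filter of He et al., with the image used as its own guide. The radius and eps values are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=0.01):
    """Gray-scale guided filter: smooth input p using guide I (both in [0, 1]).

    The output is a local linear transform of the guide, so strong edges in I
    survive while homogeneous regions are smoothed -- the behavior wanted
    before level set evolution on a CT slice.
    """
    size = 2 * radius + 1  # box window side length
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)  # per-window linear coefficients
    b = mean_p - a * mean_I
    # Average the coefficients over all windows covering each pixel.
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b
```

Calling `guided_filter(img, img)` performs the self-guided, edge-preserving smoothing described above.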
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Abdominal CT input image</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-2.png"/>
</fig>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Seed point selection using the proposed algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-3.png"/>
</fig>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Processing of the input CT image using guided filter</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-4.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-3">Fig. 3</xref> shows the image with the selected seed point marked on it, and <xref ref-type="fig" rid="fig-5">Fig. 5</xref> shows the growing contour that covers the liver region.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Level set method being applied to the processed liver image</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-5.png"/>
</fig>
<p>After the liver region is isolated from the abdominal CT image, as shown in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>, the FCM algorithm is applied to it to obtain a mask of the tumor region. The masked region is then cut from the liver image to give the segmented tumor region shown in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>.</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Segmented liver region</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-6.png"/>
</fig>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Segmented tumor region 
</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-7.png"/>
</fig>
<p><bold><italic>Performance Analysis</italic></bold></p>
<p>Several metrics have been used to analyze the performance of liver segmentation, tumor segmentation, and the classification of healthy and tumorous images.</p>
<p>The calculated results are presented in <xref ref-type="table" rid="table-1">Tabs. 1</xref>&#x2013;<xref ref-type="table" rid="table-3">3</xref>, which compare the existing and proposed techniques on the basis of area overlap error and execution time. The results were obtained for 50 CT scan images, processed using MATLAB.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Comparison of area overlap error of segmented liver image obtained from level set with the proposed automatic seed point selection and those obtained using level set without automatic seed point selection</title>
</caption><table>
<colgroup>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Image number</th>
<th>Level set with proposed auto seed point</th>
<th>Level set without proposed auto seed point</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>6.7639</td>
<td>8.6768</td>
</tr>
<tr>
<td>2</td>
<td>12.7750</td>
<td>13.3588</td>
</tr>
<tr>
<td>3</td>
<td>6.6745</td>
<td>6.7777</td>
</tr>
<tr>
<td>4</td>
<td>20.8371</td>
<td>21.8371</td>
</tr>
<tr>
<td>5</td>
<td>5.9003</td>
<td>6.4657</td>
</tr>
<tr>
<td>6</td>
<td>8.1206</td>
<td>8.9852</td>
</tr>
<tr>
<td>7</td>
<td>3.7608</td>
<td>4.6017</td>
</tr>
<tr>
<td>8</td>
<td>11.3981</td>
<td>13.0199</td>
</tr>
<tr>
<td>9</td>
<td>4.6502</td>
<td>5.7634</td>
</tr>
<tr>
<td>10</td>
<td>7.3267</td>
<td>8.8825</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Comparison of area overlap error of segmented liver image obtained from level set with the proposed automatic seed point selection and existing algorithms</title>
</caption><table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Image</th>
<th>Level set with automatic seed point</th>
<th>Region growing segmentation</th>
<th>Histogram based segmentation</th>
<th>Clustering based segmentation</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>6.7639</td>
<td>8.2314</td>
<td>10.8753</td>
<td>7.2951</td>
</tr>
<tr>
<td>2</td>
<td>12.7750</td>
<td>15.4876</td>
<td>16.9982</td>
<td>15.9372</td>
</tr>
<tr>
<td>3</td>
<td>6.6745</td>
<td>8.0397</td>
<td>9.9132</td>
<td>9.0373</td>
</tr>
<tr>
<td>4</td>
<td>7.3267</td>
<td>9.5746</td>
<td>11.0367</td>
<td>10.0346</td>
</tr>
<tr>
<td>5</td>
<td>5.9003</td>
<td>7.1906</td>
<td>8.8201</td>
<td>8.1364</td>
</tr>
<tr>
<td>6</td>
<td>8.1206</td>
<td>10.9315</td>
<td>13.1642</td>
<td>11.5831</td>
</tr>
<tr>
<td>7</td>
<td>3.7608</td>
<td>5.2122</td>
<td>6.9972</td>
<td>4.7075</td>
</tr>
<tr>
<td>8</td>
<td>11.3981</td>
<td>14.0379</td>
<td>16.0023</td>
<td>14.8852</td>
</tr>
<tr>
<td>9</td>
<td>4.6502</td>
<td>5.9921</td>
<td>8.0135</td>
<td>7.2300</td>
</tr>
<tr>
<td>10</td>
<td>20.8371</td>
<td>25.4683</td>
<td>27.4682</td>
<td>26.6647</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Comparison of the existing and the proposed technique on the basis of execution time (ms)</title>
</caption><table>
<colgroup>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Image number</th>
<th>Segmentation without proposed algorithm</th>
<th>Segmentation with proposed algorithm</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0.5123</td>
<td>0.098</td>
</tr>
<tr>
<td>2</td>
<td>0.4321</td>
<td>0.083</td>
</tr>
<tr>
<td>3</td>
<td>0.5100</td>
<td>0.096</td>
</tr>
<tr>
<td>4</td>
<td>0.4521</td>
<td>0.086</td>
</tr>
<tr>
<td>5</td>
<td>0.5076</td>
<td>0.079</td>
</tr>
<tr>
<td>6</td>
<td>0.5904</td>
<td>0.089</td>
</tr>
<tr>
<td>7</td>
<td>0.4989</td>
<td>0.091</td>
</tr>
<tr>
<td>8</td>
<td>0.5612</td>
<td>0.094</td>
</tr>
<tr>
<td>9</td>
<td>0.4532</td>
<td>0.086</td>
</tr>
<tr>
<td>10</td>
<td>0.5121</td>
<td>0.096</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="table-3">Tab. 3</xref> reports the execution time of the entire process with and without the proposed algorithm. Although an automatic seed point selection step is added, the execution time is reduced, because the liver region is segmented correctly at the outset; under- or over-segmentation would otherwise increase the execution time, and the proposed algorithm avoids this problem.</p>
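As a quick check on Tab. 3, the mean execution times and the resulting speedup can be computed directly from the tabulated values:

```python
# Execution times (ms) for the ten test images, as listed in Tab. 3.
without_alg = [0.5123, 0.4321, 0.5100, 0.4521, 0.5076,
               0.5904, 0.4989, 0.5612, 0.4532, 0.5121]
with_alg = [0.098, 0.083, 0.096, 0.086, 0.079,
            0.089, 0.091, 0.094, 0.086, 0.096]

mean_without = sum(without_alg) / len(without_alg)  # ~0.503 ms
mean_with = sum(with_alg) / len(with_alg)           # ~0.090 ms
speedup = mean_without / mean_with                  # ~5.6x faster
```

So on these ten images the proposed seed point selection cuts the mean execution time by roughly a factor of five and a half.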
<p><bold><italic>Area Overlap Error</italic></bold></p>
<p>Area overlap error is used to compare the segmentation produced by the proposed method against the ground truth. It can be computed using the following formula:</p>
<p><disp-formula id="eqn-4">
<label>(4)</label><alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-4.png"/>
<tex-math id="tex-eqn-4"><![CDATA[$$\begin{equation}
\mathit{Area}\;\mathit{Overlap}\;\mathit{Error}= \left(1-\frac{A\cap B}{A\cup B}\right)\times 100
 \label{eqn-4}
\end{equation}$$]]></tex-math>
<mml:math id="mml-eqn-4" display="block"><mml:mstyle mathvariant="italic"><mml:mi>A</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi></mml:mstyle><mml:mspace width="2.77695pt"/><mml:mstyle mathvariant="italic"><mml:mi>O</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>p</mml:mi></mml:mstyle><mml:mspace width="2.77695pt"/><mml:mstyle mathvariant="italic"><mml:mi>E</mml:mi><mml:mi>r</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi></mml:mstyle><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2229;</mml:mo><mml:mi>B</mml:mi></mml:mrow><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x222A;</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>100</mml:mn></mml:math></alternatives></disp-formula></p>
<p>where A and B are the areas of the liver region obtained from the proposed segmentation method and the ground truth, respectively.</p>
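Eq. (4) can be applied directly to binary segmentation masks; a NumPy sketch (the function and mask names are illustrative):

```python
import numpy as np

def area_overlap_error(A, B):
    """Area overlap error (%) between two binary masks, per Eq. (4)."""
    A = A.astype(bool)
    B = B.astype(bool)
    union = np.logical_or(A, B).sum()
    if union == 0:
        return 0.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(A, B).sum()
    return (1.0 - inter / union) * 100.0
```

Identical masks give 0%, disjoint masks give 100%, and a mask covering half the union of the two regions gives 50%.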
<p>The comparison of area overlap error for the level set algorithm with and without automatic seed point selection is given in <xref ref-type="table" rid="table-1">Tab. 1</xref>. Similarly, <xref ref-type="table" rid="table-2">Tab. 2</xref> compares the area overlap error of the proposed method with other existing techniques, and the execution time of the overall system is given in <xref ref-type="table" rid="table-3">Tab. 3</xref>.</p>
<p><xref ref-type="fig" rid="fig-8">Fig. 8</xref> compares the area overlap error of liver images segmented using level set with and without the proposed automatic seed point selection. Similarly, <xref ref-type="fig" rid="fig-9">Fig. 9</xref> compares the proposed and existing algorithms with respect to area overlap error, and <xref ref-type="fig" rid="fig-10">Fig. 10</xref> compares the overall system execution time.</p>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Comparison graph of area overlap error of segmented liver image obtained from level set with the proposed automatic seed point selection and those obtained using level set without automatic seed point selection 
</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-8.png"/>
</fig>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Comparison graph of area overlap error of segmented liver image obtained from level set with the proposed automatic seed point selection and existing algorithms 
</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-9.png"/>
</fig>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Comparison graph of execution time with and without the proposed algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-10.png"/>
</fig>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion and Future Work</title>
<p>The automatic detection of liver tumors from CT scans has been proposed keeping in mind the challenges that prevail in medical imaging when detecting cancer at an early stage. Early detection is crucial in cancer diagnosis and treatment: the earlier the cancer is detected, the higher the chances of cure. This paper takes a holistic approach, considering the critical issues in the domain. The proposed framework for automatic detection of liver tumors has been designed to be as simple as possible to implement, and its results are consistent across different datasets. Experimental results show that the proposed work compares favorably with existing systems, with lower area overlap error and shorter execution time. In future work, the system may be improved to further reduce error rates and strengthen detection and classification. The proposed system can also be extended to 3D images and to other medical imaging modalities such as MRI and ultrasound. Moreover, shape and texture features can be extracted from the segmented ROI and used to classify healthy and tumorous images with a Support Vector Machine (SVM).</p>
</sec>
</body>
<back>
<fn-group><fn fn-type="other"><p><bold>Funding Statement:</bold> The author(s) received no specific funding for this study.</p></fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p></fn></fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Song</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Cai</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhou</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Lesion detection and characterization with context driven approximation in thoracic FDG PET-CT images of NSCLC studies</article-title>,&#x201D; <source>IEEE Transactions on Medical Imaging</source>, vol. <volume>33</volume>, no. <issue>2</issue>, pp. <fpage>408</fpage>&#x2013;<lpage>421</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Lin</surname></string-name>, <string-name><given-names>G. B.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhou</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Random feature subspace ensemble based extreme learning machine for liver tumor detection and segmentation</article-title>,&#x201D; <source>IEEE Transactions on Medical Imaging</source>, vol. <volume>4</volume>, no. <issue>1</issue>, pp. <fpage>4675</fpage>&#x2013;<lpage>4678</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. N.</given-names> <surname>Cheema</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Nazrir</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Sheng</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Qin</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Liver extraction using residual convolution neural networks from low dose CT images</article-title>,&#x201D; <source>IEEE Transactions on Biomedical Engineering</source>, vol. <volume>66</volume>, no. <issue>9</issue>, pp. <fpage>2641</fpage>&#x2013;<lpage>2650</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>X.</given-names> <surname>Song</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Deng</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhuang</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Zeng</surname></string-name></person-group>, &#x201C;<article-title>An improved confidence connected liver segmentation method based on three views of CT images</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>7</volume>, pp. <fpage>58429</fpage>&#x2013;<lpage>58434</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>O. F.</given-names> <surname>Elaziz</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Sayed</surname></string-name> and <string-name><given-names>M. I.</given-names> <surname>Abdullah</surname></string-name></person-group>, &#x201C;<article-title>Liver tumors segmentation from abdominal CT images using region growing and morphological processing</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>2</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Peng</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Ma</surname></string-name></person-group>, &#x201C;<article-title>A level set method to image segmentation based on local direction gradient</article-title>,&#x201D; <source>KSII Transactions on Internet and Information Systems</source>, vol. <volume>12</volume>, no. <issue>4</issue>, pp. <fpage>1760</fpage>&#x2013;<lpage>1778</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Shi</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Tian</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Automatic liver segmentation based on shape constraints and deformable graph cut in CT images</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>24</volume>, no. <issue>8</issue>, pp. <fpage>5315</fpage>&#x2013;<lpage>5329</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>I.</given-names> <surname>Singh</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Gupta</surname></string-name></person-group>, &#x201C;<article-title>Segmentation of liver using hybrid K-means clustering and level set</article-title>,&#x201D; <source>IEEE Transactions on Medical Imaging</source>, vol. <volume>5</volume>, no. <issue>8</issue>, pp. <fpage>742</fpage>&#x2013;<lpage>746</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N. M.</given-names> <surname>Altarawneh</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Luo</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Regan</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Sun</surname></string-name></person-group>, &#x201C;<article-title>A modified distance regularized level set model for liver segmentation from CT images</article-title>,&#x201D; <source>IEEE Transactions on Biomedical Imaging</source>, vol. <volume>6</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Heckel</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Meine</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Moltz</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Kuhnigk</surname></string-name>, <string-name><given-names>J. T.</given-names> <surname>Heverhagen</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Segmentation based partial volume correction for volume estimation of solid lesions in CT</article-title>,&#x201D; <source>IEEE Transactions on Medical Imaging</source>, vol. <volume>33</volume>, no. <issue>2</issue>, pp. <fpage>462</fpage>&#x2013;<lpage>479</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Eberl</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Fulham</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Yin</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Supervised variational model with statistical inference and its application in medical image segmentation</article-title>,&#x201D; <source>IEEE Transactions on Biomedical Engineering</source>, vol. <volume>62</volume>, no. <issue>1</issue>, pp. <fpage>196</fpage>&#x2013;<lpage>207</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Zhao</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>Image fuzzy clustering based on the region level markov random field model</article-title>,&#x201D; <source>IEEE Geoscience and Remote Sensing Letters</source>, vol. <volume>12</volume>, no. <issue>8</issue>, pp. <fpage>1770</fpage>&#x2013;<lpage>1774</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Incorporating adaptive local information into fuzzy clustering for image segmentation</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>24</volume>, no. <issue>11</issue>, pp. <fpage>3990</fpage>&#x2013;<lpage>4000</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>Rikxoort</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Arzhaeva</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Ginneken</surname></string-name></person-group>, &#x201C;<article-title>Automatic segmentation of the liver in computed tomography scans with voxel classification and atlas matching</article-title>,&#x201D; in <conf-name>Proc. MICCAI Workshop 3-D Segmentation</conf-name>, <source>Clinic: A Grand Challenge</source>, <conf-loc>Brisbane, Australia</conf-loc>, pp. <fpage>101</fpage>&#x2013;<lpage>108</lpage>, <year>2007</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>