<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">IASC</journal-id>
<journal-id journal-id-type="nlm-ta">IASC</journal-id>
<journal-id journal-id-type="publisher-id">IASC</journal-id>
<journal-title-group>
<journal-title>Intelligent Automation &#x0026; Soft Computing</journal-title>
</journal-title-group>
<issn pub-type="epub">2326-005X</issn>
<issn pub-type="ppub">1079-8587</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">21929</article-id>
<article-id pub-id-type="doi">10.32604/iasc.2022.021929</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A Morphological Image Segmentation Algorithm for Circular Overlapping Cells</article-title><alt-title alt-title-type="left-running-head">A Morphological Image Segmentation Algorithm for Circular Overlapping Cells</alt-title><alt-title alt-title-type="right-running-head">A Morphological Image Segmentation Algorithm for Circular Overlapping Cells</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Zhang</surname><given-names>Fuchu</given-names></name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-2" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Wu</surname><given-names>Yanpeng</given-names></name>
<xref ref-type="aff" rid="aff-2">2</xref><email>xjzxwyp@hnfnu.edu.cn</email>
</contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Xu</surname><given-names>Miaoqing</given-names></name>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Liu</surname><given-names>Sanjun</given-names></name>
<xref ref-type="aff" rid="aff-3">3</xref>
</contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>Peng</surname><given-names>Changling</given-names></name>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib id="author-6" contrib-type="author">
<name name-style="western"><surname>Gao</surname><given-names>Zhichen</given-names></name>
<xref ref-type="aff" rid="aff-4">4</xref>
</contrib>
<aff id="aff-1"><label>1</label><institution>Department of Information Engineering, Shaoyang University</institution>, <addr-line>Shaoyang, 422000</addr-line>, <country>China</country></aff>
<aff id="aff-2"><label>2</label><institution>Department of Information Science and Engineering, Hunan First Normal University</institution>, <addr-line>Changsha, 410205</addr-line>, <country>China</country></aff>
<aff id="aff-3"><label>3</label><institution>School of Resource Environment and Safety Engineering, South China University</institution>, <addr-line>Hengyang, 421001</addr-line>, <country>China</country></aff>
<aff id="aff-4"><label>4</label><institution>Department of Applied Mathematics and Statistics, College of Engineering and Applied Sciences, Stony Brook University</institution>, <addr-line>NY, 11794</addr-line>, <country>USA</country></aff>
</contrib-group><author-notes><corresp id="cor1">&#x002A;Corresponding Author: Yanpeng Wu. Email: <email>xjzxwyp@hnfnu.edu.cn</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-10-4"><day>04</day>
<month>10</month>
<year>2021</year></pub-date>
<volume>32</volume>
<issue>1</issue>
<fpage>301</fpage>
<lpage>321</lpage>
<history>
<date date-type="received"><day>20</day><month>7</month><year>2021</year></date>
<date date-type="accepted"><day>21</day><month>8</month><year>2021</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2022 Zhang et al.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Zhang et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_IASC_21929.pdf"></self-uri>
<abstract>
<p>Cell segmentation is an important topic in medicine. A cell image segmentation algorithm based on morphology is proposed. First, morphological operations, including top-hat transformation, bot-hat transformation, erosion, dilation, opening, closing, the majority operation, and skeletonization, are applied to remove noise or enhance the cell image. Then the small blocks in the cell image are deleted as noise, the medium blocks are removed and saved as normal cells, and the large blocks are segmented as overlapping cells. Each point on the edge of an overlapping cell area to be divided is carefully checked. If the surrounding area forms a corner whose angle is smaller than a specified value, the overlapping cells are divided along the midline of the corner. The length of each division is about a quarter of the diameter of a normal cell. Then small blocks are deleted, and medium blocks are removed and saved, after the edges of all blocks are smoothed. This step is repeated until no dividing point is found. The last remaining image, plus the saved blocks, is the final segmentation result of the cell image. The experimental results show that the algorithm achieves high segmentation accuracy for lightly or moderately overlapping cells.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Cell image</kwd>
<kwd>circular overlapping cells</kwd>
<kwd>segmentation</kwd>
<kwd>morphology</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Cells are the basic units of biological tissues. State analysis of cells is an important subject in medicine. Researchers study the shape, life cycle, position, speed, trajectory and other properties of cells by observing their morphology. To study cell behavior, it is necessary to perform quantitative analysis, such as cell size measurement, cell counting, and cell tracking. The accuracy of cell segmentation directly affects the results of cell counting and cell tracking.</p>
<p>With the advancement of microscopic imaging technology, researchers can easily obtain a large number of high-quality cell images, which also provides a basis for cell analysis. However, data analysis of medical cell images usually encounters the following difficulties:<list list-type="simple"><list-item><label>(1)</label>
<p>Cell images have low resolution and are difficult to segment. Therefore, cell images usually require preprocessing.</p></list-item><list-item><label>(2)</label>
<p>Overlapping cells often appear in cell images, which look like a large cell and lead to under-segmentation.</p></list-item><list-item><label>(3)</label>
<p>For cell images with uneven background distribution, high noise, and dense cell populations, it is difficult for traditional image segmentation algorithms to obtain accurate segmentation results.</p></list-item></list></p>
<p>The acquisition cost of cell images is higher than that of ordinary images, so training samples may be insufficient.</p>
<p>Due to the richness, heterogeneity and complexity of cell images, it is difficult to manage, process and analyze them. With the development of image processing technology based on computer vision, automatic analysis of biomedical images has become possible and has gradually become the key to the development of cell biology. In this article, morphological methods will be used for quality enhancement and edge segmentation of overlapping cell images.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Works</title>
<p>The purpose of cell image segmentation is to simplify cell images and extract interesting information from them, and to provide a basis for subsequent image analysis and understanding. Image segmentation is the process of dividing an image into several meaningful areas. The cell image segmentation algorithm can realize cell segmentation and detection, and prepare for cell tracking and cell counting.</p>
<p>Current imaging tools and medical image processing and analysis platforms, such as Cell Profiler [<xref ref-type="bibr" rid="ref-1">1</xref>] and Image J [<xref ref-type="bibr" rid="ref-2">2</xref>], are developing rapidly. These platforms integrate a large number of modular image processing algorithms, can process images in batches, and provide extension interfaces.</p>
<p>At present, thousands of image segmentation algorithms have been proposed, but the development of cell segmentation algorithms is relatively lagging. Most cell segmentation algorithms are designed for specific types of cells. Therefore, there is currently no universal cell segmentation algorithm.</p>
<p>In order to find the most suitable segmentation method for the target cell image and improve the accuracy of cell segmentation and classification, it is necessary to fully understand the characteristics of the candidate algorithms and of the cell images, and to comprehensively consider factors such as the cell structure, the hardware platform, the running speed of the algorithm, its segmentation accuracy, its generalization ability, the training speed on the samples, and the difficulty of preparing the samples.</p>
<p>Common segmentation algorithms include threshold segmentation methods [<xref ref-type="bibr" rid="ref-3">3</xref>&#x2013;<xref ref-type="bibr" rid="ref-6">6</xref>], region-based segmentation methods [<xref ref-type="bibr" rid="ref-7">7</xref>&#x2013;<xref ref-type="bibr" rid="ref-9">9</xref>], active contour-based segmentation methods [<xref ref-type="bibr" rid="ref-10">10</xref>,<xref ref-type="bibr" rid="ref-11">11</xref>], graph theory-based segmentation methods [<xref ref-type="bibr" rid="ref-12">12</xref>&#x2013;<xref ref-type="bibr" rid="ref-14">14</xref>] and images segmentation method based on deep learning [<xref ref-type="bibr" rid="ref-15">15</xref>&#x2013;<xref ref-type="bibr" rid="ref-17">17</xref>] etc.</p>
<p>Based on the Otsu algorithm, Cseke [<xref ref-type="bibr" rid="ref-18">18</xref>] maximized the intra-class variance between the black, gray, and white regions in the cell image through a recursive method, and automatically selected the segmentation threshold to achieve cell segmentation. However, this method cannot divide the cytoplasm.</p>
<p>Rezatofighi et al. [<xref ref-type="bibr" rid="ref-19">19</xref>] used Gram-Schmidt orthogonalization to enhance the color vector of the cell area, and then determined the segmentation threshold according to the gray histogram, and segmented the nucleus, and finally used the Snake algorithm to segment the cytoplasm.</p>
<p>Hou et al. [<xref ref-type="bibr" rid="ref-20">20</xref>] used an improved Snake model to segment and repair overlapping cells.</p>
<p>Arteta et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] proposed a cell segmentation method based on MSER detection, which has achieved good results in many types of cell detection tasks, but cannot obtain accurate cell edges.</p>
<p>Dimopoulos et al. [<xref ref-type="bibr" rid="ref-22">22</xref>] proposed a cell segmentation algorithm based on MPCS, which uses a circle detection algorithm based on Hough transform to detect seed regions. Although the algorithm can obtain more accurate cell edges, the detection results are very unstable.</p>
<p>In 2012, Mohapatra et al. [<xref ref-type="bibr" rid="ref-23">23</xref>] proposed cell segmentation based on linked networks. The algorithm classifies the color of the pixels in the cell image and divides the cells into nucleus and cytoplasm.</p>
<p>Huang et al. [<xref ref-type="bibr" rid="ref-24">24</xref>] used co-occurrence matrix and morphological information to extract 85 texture and morphological features, then used PCA for feature dimension reduction, and K-Means for five-category classification. The classification accuracy of this method for neutrophils is relatively low.</p>
<p>Wang et al. [<xref ref-type="bibr" rid="ref-25">25</xref>] used color information conversion, distance conversion and GVF Snake method to extract cells, and used SVM to classify the cells.</p>
<p>Gu et al. [<xref ref-type="bibr" rid="ref-26">26</xref>] used cell edge phase angle information and linear interpolation to subdivide the cytoplasm, but this method is not suitable for cell images with severe cell overlap.</p>
<p>Ding et al. [<xref ref-type="bibr" rid="ref-27">27</xref>] used a sub-level algorithm for image segmentation of overlapping cells.</p>
<p>Hou et al. [<xref ref-type="bibr" rid="ref-28">28</xref>] used mathematical morphology to segment bone marrow cells.</p>
<p>Wang et al. [<xref ref-type="bibr" rid="ref-29">29</xref>] used the regional growth method to achieve the segmentation of breast cells.</p>
<p>Ruberto et al. [<xref ref-type="bibr" rid="ref-30">30</xref>] proposed an ultimate erosion algorithm, which repeatedly erodes the overlapping cells through morphological erosion until only the seed points remain. After the seed-point image is obtained, the segmentation operation is performed through the watershed segmentation algorithm [<xref ref-type="bibr" rid="ref-31">31</xref>] to obtain the cell segmentation image.</p>
<p>Hou et al. [<xref ref-type="bibr" rid="ref-32">32</xref>] used distance transformation to convert the pixel information of adhering cells into position information, and then segmented the cells through watershed transformation to obtain segmented images. The distance transformation algorithm is likely to cause over-segmentation problems.</p>
<p>Qu et al. [<xref ref-type="bibr" rid="ref-33">33</xref>] used the chain code theory to convert the boundary information of the image into chain code information, and then looked for possible pits as separation points, and finally separated the overlapping cells. The contour tracking algorithm will cause under-segmentation.</p>
<p>In summary, there are some algorithms for overlapping cell segmentation, but their segmentation effect needs to be further improved.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>Basic Morphology Operations of Binary Image</title>
<p>Mathematical morphology was proposed in 1964 by Tan et al. [<xref ref-type="bibr" rid="ref-34">34</xref>]. It is based on set algebra, and its main research content is the quantitative analysis of the morphological and structural characteristics of objects. Mathematical morphology has become an important and valuable research field in digital image processing, and has been successfully applied to fingerprint recognition, medical microscopic image analysis, traffic detection, robot vision and other fields.</p>
<p>Mathematical morphology has a complete mathematical foundation. Its most notable feature is that it operates directly on the geometric structure of the image. Compared with spatial-domain or frequency-domain algorithms, morphological algorithms have clear advantages in image processing and analysis. For example, during image restoration, a morphological filter can effectively reduce the noise in an image while exploiting prior knowledge of features and morphological operators to retain the important information in the image.</p>
<p>Mathematical morphology can be used to process binary images. The gray value in a binary image takes only two values, 0 and 1, representing background pixels and target pixels, respectively. Binary morphological processing translates a structural element across the image and applies set operations such as intersection and union. In binary morphology, the two basic operations of erosion and dilation can be combined to produce many other practical morphological operations.</p>
<p>Before using mathematical morphology to process a binary image, it is necessary to design a structural element to probe the information in the image. Structural elements can usually be represented by small, relatively simple sets, such as spherical, linear, or rectangular shapes. Different processing results can be obtained by selecting structural elements of different scales or shapes.</p>
<sec id="s3_1">
<label>3.1</label>
<title>Erosion Operation</title>
<p>The erosion operation on a binary image is the most basic operation in mathematical morphology; most morphological operations are built on it. The erosion of set <italic>A</italic> by structural element <italic>B</italic> is defined as<disp-formula id="eqn-1"><label>(1)</label>
<mml:math id="mml-eqn-1" display="block"><mml:mi>A</mml:mi><mml:mo>&#x2299;</mml:mo><mml:mi>B</mml:mi><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mi>B</mml:mi><mml:mo>+</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x2282;</mml:mo><mml:mi>A</mml:mi><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math>
</disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>, <italic>A</italic> is the input image, and B is the structural element. <italic>A</italic> &#x2299; <italic>B</italic> is composed of all the points <italic>x</italic> that are still included in the set <italic>A</italic> after the structural element <italic>B</italic> is translated by <italic>x</italic>.</p>
<p>In image processing, the erosion operation eliminates target regions that are smaller than the structural element and disconnects narrow connections between two targets.</p>
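As a concrete illustration, Eq. (1) can be implemented directly with Python sets. This is a minimal sketch: the coordinate-set image representation and the `CROSS` element are illustrative choices for the demo, not part of the paper's method.

```python
# Set-based binary erosion following Eq. (1): A ⊙ B = {x : B + x ⊆ A}.
# A binary image is modelled as the set of its foreground (row, col)
# coordinates; the structural element B is a set of offsets.

def erode(A, B):
    """Keep a point x only if the translate of B by x stays inside A."""
    return {x for x in A
            if all((x[0] + b[0], x[1] + b[1]) in A for b in B)}

# 3x3 "cross" structural element centred on the origin.
CROSS = {(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)}

# Eroding a 4x4 solid square keeps only its 2x2 interior.
square = {(r, c) for r in range(4) for c in range(4)}
print(sorted(erode(square, CROSS)))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```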
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Dilation Operation</title>
<p>Set <italic>A</italic> is dilated by set <italic>B</italic>, which can be expressed as<disp-formula id="eqn-2"><label>(2)</label>
<mml:math id="mml-eqn-2" display="block"><mml:mi>A</mml:mi><mml:mo>&#x2295;</mml:mo><mml:mi>B</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:msup><mml:mi>A</mml:mi><mml:mi>c</mml:mi></mml:msup><mml:mo>&#x2299;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:msup><mml:mo stretchy="false">]</mml:mo><mml:mi>c</mml:mi></mml:msup></mml:math>
</disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref>, <italic>&#x2212;B</italic> is the reflection of set <italic>B</italic> on origin as a structural element of erosion operation, and <italic>A<sup>C</sup></italic> is the supplementary set of binary image <italic>A</italic>.</p>
<p>The erosion operation is essentially a &#x201C;subtraction&#x201D; operation, and all morphological operations given later are built on the erosion operation.</p>
<p>The dilation operation will smooth and filter the outer boundary of the target image.</p>
<p>With an appropriate structural element, the dilation operation can fill holes in the image that are smaller than the selected structural element and connect two nearby objects.</p>
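Dilation can be sketched as a union of translates (a Minkowski sum), which is equivalent to the complement form in Eq. (2); the coordinate-set model is again an illustrative assumption.

```python
# Binary dilation as a Minkowski sum: A ⊕ B = {a + b : a ∈ A, b ∈ B}.
def dilate(A, B):
    return {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

# 3x3 "cross" structural element centred on the origin.
CROSS = {(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)}

# Dilation bridges the one-pixel gap between two points, illustrating how
# it connects nearby objects.
two_dots = {(0, 0), (0, 2)}
print((0, 1) in dilate(two_dots, CROSS))  # True
```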
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Opening Operation</title>
<p>The process of performing the erosion operation and then the dilation operation on the target image is called the opening operation. The opening operation can remove tiny connections between image target areas, such as small burrs and protrusions, to make the image smoother.</p>
<p>Opening operation processing can be expressed as<disp-formula id="eqn-3"><label>(3)</label>
<mml:math id="mml-eqn-3" display="block"><mml:mi>A</mml:mi><mml:mo>&#x2218;</mml:mo><mml:mi>B</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2299;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2295;</mml:mo><mml:mi>B</mml:mi></mml:math>
</disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>, A is the input image, and B is the structural element.</p>
<p>The opening operation can remove isolated small areas in the target region and smooth burrs on the target boundary while hardly changing the size of the target area. Since erosion and dilation are not inverses of each other, cascading them yields a genuinely new operation.</p>
</sec>
<sec id="s3_4">
<label>3.4</label>
<title>Closing Operation</title>
<p>It is also possible to perform the dilation operation first and then the erosion operation, which defines the closing operation. The closing operation can remove small holes in the image target or concave parts of the boundary, smoothing the outer boundary of the target area.</p>
<p>Closing operation processing can be expressed as<disp-formula id="eqn-4"><label>(4)</label>
<mml:math id="mml-eqn-4" display="block"><mml:mi>A</mml:mi><mml:mo>&#x2219;</mml:mo><mml:mi>B</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2295;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2299;</mml:mo><mml:mi>B</mml:mi></mml:math>
</disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref>, <italic>A</italic> is the input image, and <italic>B</italic> is the structural element.</p>
<p>The closing operation can fill small holes in the target area, connect adjacent areas, and smooth the boundary of the area while hardly changing the size of the target area.</p>
<p>Both the binary opening operation and the binary closing operation can effectively remove image details that are smaller than the structural element.</p>
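Both combined operations can be sketched by composing the two primitives, following Eqs. (3) and (4) exactly; the toy coordinate-set images below are illustrative, not the paper's data.

```python
def erode(A, B):    # Eq. (1)
    return {x for x in A
            if all((x[0] + b[0], x[1] + b[1]) in A for b in B)}

def dilate(A, B):   # Minkowski-sum dilation
    return {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

def opening(A, B):  # Eq. (3): erosion, then dilation
    return dilate(erode(A, B), B)

def closing(A, B):  # Eq. (4): dilation, then erosion
    return erode(dilate(A, B), B)

CROSS = {(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)}

# Opening removes an isolated pixel (smaller than the structural element).
blob = {(r, c) for r in range(3) for c in range(3)} | {(10, 10)}
print((10, 10) in opening(blob, CROSS))  # False

# Closing fills a one-pixel hole in a 5x5 solid square.
square = {(r, c) for r in range(5) for c in range(5)}
holed = square - {(2, 2)}
print(closing(holed, CROSS) == square)  # True
```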
</sec>
<sec id="s3_5">
<label>3.5</label>
<title>Top-Hat Transformation</title>
<p>The Top-hat transformation is the result of subtracting the morphological gray-scale opening of the image from the original gray-scale image. Its mathematical expression is:<disp-formula id="eqn-5"><label>(5)</label>
<mml:math id="mml-eqn-5" display="block"><mml:mi>T</mml:mi><mml:mi>o</mml:mi><mml:mi>p</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>h</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>f</mml:mi><mml:mo>&#x2218;</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula></p>
<p>The Top-hat transformation extracts the sharp bright peaks in the original gray-scale image, while the Bot-hat transformation extracts its dark valleys. Generally, the contrast of the original gray-scale image can be enhanced by adding the Top-hat transform result and subtracting the Bot-hat transform result.</p>
</sec>
<sec id="s3_6">
<label>3.6</label>
<title>Bot-Hat Transformation</title>
<p>The Bot-hat transformation is the difference between the morphological gray-scale closing of the image and the original gray-scale image. Its mathematical expression is:<disp-formula id="eqn-6"><label>(6)</label>
<mml:math id="mml-eqn-6" display="block"><mml:mi>B</mml:mi><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>h</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>f</mml:mi><mml:mo>&#x2219;</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>f</mml:mi></mml:math>
</disp-formula></p>
<p>The Top-hat and Bot-hat transformations can also be used to correct unevenly illuminated images. The Top-hat transform is mainly used to extract bright objects from a dark background, while the Bot-hat transform is usually used to extract dark objects from a bright background.</p>
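On gray-scale images, erosion and dilation become local minimum and maximum filters, so Eqs. (5) and (6) can be sketched as below. The nested-list image format and helper names are illustrative assumptions; border pixels simply use the in-bounds part of the element.

```python
def gray_filter(img, se, op):
    """Apply op (min = erosion, max = dilation) over a flat element se."""
    h, w = len(img), len(img[0])
    return [[op(img[r + dr][c + dc]
                for dr, dc in se
                if 0 <= r + dr < h and 0 <= c + dc < w)
             for c in range(w)] for r in range(h)]

def gray_open(img, se):
    return gray_filter(gray_filter(img, se, min), se, max)

def gray_close(img, se):
    return gray_filter(gray_filter(img, se, max), se, min)

def tophat(img, se):   # Eq. (5): f − (f ∘ g), extracts bright peaks
    o = gray_open(img, se)
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(img, o)]

def bothat(img, se):   # Eq. (6): (f • g) − f, extracts dark valleys
    c = gray_close(img, se)
    return [[b - a for a, b in zip(ra, rb)] for ra, rb in zip(img, c)]

CROSS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

# A single bright peak on a dark background is picked up by Top-hat only.
img = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
print(tophat(img, CROSS)[1][1], bothat(img, CROSS)[1][1])  # 5 0
```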
</sec>
<sec id="s3_7">
<label>3.7</label>
<title>Filling Holes</title>
<p>Hole filling fills the background regions enclosed by a target area. The process starts from a point inside the hole; the 8-connected points in its neighborhood that meet the filling condition are set to 1.<disp-formula id="eqn-7"><label>(7)</label>
<mml:math id="mml-eqn-7" display="block"><mml:msub><mml:mi>X</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2295;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2229;</mml:mo><mml:msup><mml:mi>A</mml:mi><mml:mi>c</mml:mi></mml:msup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mo>&#x2026;</mml:mo></mml:math>
</disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-7">Eq. (7)</xref>, <italic>X<sub>0</sub></italic>&#x2009;&#x003D;&#x2009;<italic>p</italic>, using a 3&#x002A;3 &#x201C;cross&#x201D; structure element. The iterative process ends when until <italic>X<sub>k</sub></italic>&#x2009;&#x003D;&#x2009;<italic>X<sub>k</sub></italic>&#x2009;&#x002B;&#x2009;1.</p>
</sec>
<sec id="s3_8">
<label>3.8</label>
<title>Boundary Extraction</title>
<p>Boundary extraction can be expressed as<disp-formula id="eqn-8"><label>(8)</label>
<mml:math id="mml-eqn-8" display="block"><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2299;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-8">Eq. (8)</xref>, <italic>A</italic> is the input image, and <italic>B</italic> is the structural element.</p>
<p>The edge of image <italic>A</italic> can be obtained by subtracting the result of erosion operation on <italic>A</italic> by structural element <italic>B</italic> from <italic>A</italic>.</p>
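Eq. (8) reduces to two set operations; the 3 × 3 square element used in the demo is an illustrative choice.

```python
def boundary(A, B):
    """Eq. (8): β(A) = A − (A ⊙ B)."""
    eroded = {x for x in A
              if all((x[0] + b[0], x[1] + b[1]) in A for b in B)}
    return A - eroded

# 3x3 square structural element: all offsets in {-1, 0, 1} x {-1, 0, 1}.
SQUARE3 = {(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)}

# The boundary of a 4x4 solid square is its 12-pixel outer ring;
# the 2x2 interior is removed.
square = {(r, c) for r in range(4) for c in range(4)}
edge = boundary(square, SQUARE3)
print(len(edge), (0, 0) in edge, (1, 1) in edge)  # 12 True False
```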
</sec>
<sec id="s3_9">
<label>3.9</label>
<title>Majority Operation</title>
<p>Keep a pixel set to 1 if five or more pixels (the majority) in its 3-by-3, 8-connected neighborhood are set to 1; otherwise, set the pixel to 0.</p>
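A sketch of this rule on a 0/1 nested-list image. Assumptions for the demo: the count includes the pixel itself (it belongs to the 3 × 3 neighborhood), and out-of-grid neighbors count as 0.

```python
def majority(img):
    """Set a pixel to 1 iff at least 5 pixels of its 3x3 neighbourhood
    (the pixel itself included) are 1; otherwise 0."""
    h, w = len(img), len(img[0])
    return [[1 if sum(img[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if 0 <= r + dr < h and 0 <= c + dc < w) >= 5 else 0
             for c in range(w)] for r in range(h)]

# The core of a solid block survives, its corners (only 4 neighbours set)
# and isolated pixels do not.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
print(majority(img)[2][2], majority(img)[1][1])  # 1 0
```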
</sec>
<sec id="s3_10">
<label>3.10</label>
<title>Skeletonization</title>
<p>The skeleton is an important topological descriptor of image geometry that preserves the topological characteristics of the original target: it has the same number of connected components and holes as the original target.</p>
<p>The morphological skeleton of the binary image <italic>A</italic> can be obtained by selecting a suitable structural element <italic>B</italic> and performing successive erosion and opening operations on <italic>A</italic>. Let <italic>S</italic>(<italic>A</italic>) denote the skeleton of <italic>A</italic>; then the skeleton of image <italic>A</italic> is<disp-formula id="eqn-9"><label>(9)</label>
<mml:math id="mml-eqn-9" display="block"><mml:mi>S</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:munderover><mml:mo movablelimits="false">&#x22C3;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math>
</disp-formula></p>
<p><disp-formula id="eqn-10"><label>(10)</label>
<mml:math id="mml-eqn-10" display="block"><mml:mi>S</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2299;</mml:mo><mml:mi>n</mml:mi><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2299;</mml:mo><mml:mi>n</mml:mi><mml:mi>B</mml:mi><mml:mo>&#x2218;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">]</mml:mo></mml:math>
</disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-9">Eqs. (9)</xref> and <xref ref-type="disp-formula" rid="eqn-10">(10)</xref>, <italic>S<sub>n</sub></italic>(<italic>A</italic>) is the n<italic>-</italic>th skeleton subset of <italic>A</italic>, and <italic>N</italic> is the maximum number of iterations, that is, <italic>N</italic> &#x003D; max&#x007B;<italic><underline>n</underline></italic>&#x007C;(<italic>A</italic>&#x2299;<italic>nB</italic>)&#x2009;&#x2260;&#x2009;&#x2205;)&#x2218;(<italic>A</italic>&#x2299;<italic>nB</italic>) means to use <italic>B</italic> to corrode <italic>A</italic> continuously <italic>n</italic> times, namely<disp-formula id="eqn-11"><label>(11)</label>
<mml:math id="mml-eqn-11" display="block"><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2299;</mml:mo><mml:mi>n</mml:mi><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2299;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2299;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2299;</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2299;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula></p>
<p>Since the sets (<italic>A</italic> &#x2299; <italic>nB</italic>) and ((<italic>A</italic> &#x2299; <italic>nB</italic>)&#x2218;<italic>B</italic>) differ only at the protruding points of the boundary, the set difference (<italic>A</italic> &#x2299; <italic>nB</italic>)&#x2212;((<italic>A</italic> &#x2299; <italic>nB</italic>)&#x2218;<italic>B</italic>) contains only points of the skeleton.</p>
<p>Knowing the skeleton of the image, the original image can be reconstructed by the method of morphological transformation. This is actually the inverse process of finding the skeleton. The image <italic>A</italic> reconstructed by skeleton <italic>S<sub>n</sub></italic>(<italic>A</italic>) can be expressed as<disp-formula id="eqn-12"><label>(12)</label>
<mml:math id="mml-eqn-12" display="block"><mml:mi>A</mml:mi><mml:mo>=</mml:mo><mml:munderover><mml:mo movablelimits="false">&#x22C3;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:munderover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>S</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x2295;</mml:mo><mml:mi>n</mml:mi><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-12">Eq. (12)</xref>, <italic>B</italic> is the structural element: (<italic>S<sub>n</sub></italic>(<italic>A</italic>) &#x2295; <italic>nB</italic>) means that <italic>S<sub>n</sub></italic>(<italic>A</italic>) is dilated with <italic>B</italic> for <italic>n</italic> consecutive times, that is<disp-formula id="eqn-13"><label>(13)</label>
<mml:math id="mml-eqn-13" display="block"><mml:msub><mml:mi>S</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2295;</mml:mo><mml:mi>n</mml:mi><mml:mi>B</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2295;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2295;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2295;</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2295;</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula></p>
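<p>Eqs. (11)&#x2013;(13) can be verified numerically. The following sketch is an illustration in NumPy, assuming a 3 &#x00D7; 3 square structuring element <italic>B</italic> (the paper does not fix one); all function names are ours, not the paper's. It computes the skeleton subsets <italic>S<sub>n</sub></italic>(<italic>A</italic>) and then reconstructs <italic>A</italic> from them.</p>

```python
import numpy as np

def shift(a, di, dj):
    """Translate binary image a by (di, dj); out-of-range pixels become 0."""
    h, w = a.shape
    out = np.zeros_like(a)
    i0, i1 = max(0, -di), min(h, h - di)
    j0, j1 = max(0, -dj), min(w, w - dj)
    out[i0:i1, j0:j1] = a[i0 + di:i1 + di, j0 + dj:j1 + dj]
    return out

# 3x3 square structuring element B, given as offsets from the origin
B = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]

def erode(a):    # A erosion B: keep a pixel iff B translated there fits inside A
    out = a.copy()
    for di, dj in B:
        out &= shift(a, di, dj)
    return out

def dilate(a):   # A dilation B (B is symmetric, so no reflection is needed)
    out = np.zeros_like(a)
    for di, dj in B:
        out |= shift(a, di, dj)
    return out

def opening(a):  # A opening B = (A erosion B) dilation B
    return dilate(erode(a))

def skeleton_subsets(a):
    """S_n(A) = (A erosion nB) minus its opening by B, for n = 0, 1, ..., N."""
    subsets, eroded = [], a.copy()
    while eroded.any():
        subsets.append(eroded & ~opening(eroded))
        eroded = erode(eroded)
    return subsets

def reconstruct(subsets):
    """Union over n of (S_n(A) dilation nB), the inverse transform of Eq. (12)."""
    recon = np.zeros_like(subsets[0])
    for n, sn in enumerate(subsets):
        d = sn.copy()
        for _ in range(n):
            d = dilate(d)
        recon |= d
    return recon
```

<p>For any binary image, the union of the <italic>n</italic>-fold dilated skeleton subsets recovers the original image exactly, which is precisely the inverse transformation asserted by Eq. (12).</p>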
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>The Segmentation Algorithm for Overlapping Cells Based on Morphology</title>
<sec id="s4_1">
<label>4.1</label>
<title>Features of Cell Image</title>
<p>Observing the cell image shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>, it is easy to find that this cell image has the following characteristics:<list list-type="simple"><list-item><label>(1)</label>
<p>The cell spacing is very small, so cells easily overlap.</p></list-item><list-item><label>(2)</label>
<p>The cell shape is close to a circle.</p></list-item><list-item><label>(3)</label>
<p>The edges of the cells are darker, clearly distinct from the background and therefore easy to identify.</p></list-item><list-item><label>(4)</label>
<p>The nucleus is a bright area whose brightness differs only slightly from that of the background, so it can easily be mistaken for background.</p></list-item></list></p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Original cell image</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-1.png"/>
</fig>
<p>Given these characteristics, threshold-based and region-based image segmentation algorithms can be applied to cell images, but they produce considerable over-segmentation and under-segmentation.</p>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>The Method of Searching the Dividing Points</title>
<p>When two round cells overlap, their junction forms an acute or obtuse angle. <xref ref-type="fig" rid="fig-2">Figs. 2a</xref>&#x2013;<xref ref-type="fig" rid="fig-2">2c</xref> show two overlapping cells in three states: slightly overlapping, moderately overlapping and severely overlapping.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Three kinds of overlapping cell diagram (a) slightly overlapping (b) moderately overlapping (c) severely overlapping</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-2.png"/>
</fig>
<p>Each edge pit in the cell image is a potential cell dividing point. However, edge pits may also be caused by cell deformation, noise and other factors. Severely overlapping cells are difficult to identify because their connecting parts closely resemble the deformed edges of single cells. Obviously, the lower the degree of overlap, the easier it is to find the dividing point. Therefore, this article restricts itself to the recognition and segmentation of slightly and moderately overlapping cell images.</p>
<p>Let <italic>C<sub>i</sub> </italic>denote the set of pixels at distance <italic>i</italic> from the point <italic>q</italic>.<disp-formula id="eqn-14"><label>(14)</label>
<mml:math id="mml-eqn-14" display="block"><mml:msub><mml:mi>C</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo fence="false" stretchy="false">|</mml:mo><mml:mrow><mml:mo fence="false" stretchy="false">|</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mi>q</mml:mi></mml:mrow><mml:mo fence="false" stretchy="false">|</mml:mo><mml:mo>=</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:mrow><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math>
</disp-formula></p>
<p>Let <italic>n<sub>i</sub> </italic>denote the number of non-zero elements in <italic>C<sub>i</sub></italic>.<disp-formula id="eqn-15"><label>(15)</label>
<mml:math id="mml-eqn-15" display="block"><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:munder><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mi>j</mml:mi></mml:munder><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo fence="false" stretchy="false">|</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:msub><mml:mi>p</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math>
</disp-formula></p>
<p>Let <italic>z<sub>i</sub> </italic>denote the number of zero elements in <italic>C<sub>i</sub></italic>.<disp-formula id="eqn-16"><label>(16)</label>
<mml:math id="mml-eqn-16" display="block"><mml:msub><mml:mi>z</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:munder><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mi>j</mml:mi></mml:munder><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo fence="false" stretchy="false">|</mml:mo><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:msub><mml:mi>p</mml:mi><mml:mi>j</mml:mi></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msub><mml:mi>C</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math>
</disp-formula></p>
<p>Let <italic>u<sub>i</sub></italic> denote the proportion of zero-valued elements in <italic>C<sub>i</sub></italic>.<disp-formula id="eqn-17"><label>(17)</label>
<mml:math id="mml-eqn-17" display="block"><mml:msub><mml:mi>u</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle></mml:math>
</disp-formula></p>
<p>Let <italic>a</italic> denote the recessed angle at <italic>q</italic>, and let <italic>b</italic> denote the maximum value of <italic>a</italic>. <xref ref-type="fig" rid="fig-3">Fig. 3</xref> is a schematic diagram of the dividing point <italic>q</italic>. It can be seen from <xref ref-type="fig" rid="fig-3">Fig. 3</xref> that<disp-formula id="eqn-18"><label>(18)</label>
<mml:math id="mml-eqn-18" display="block"><mml:msub><mml:mi>u</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>n</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:mi>a</mml:mi><mml:mo>&#x2264;</mml:mo><mml:mi>b</mml:mi></mml:mstyle></mml:math>
</disp-formula></p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Determination of the dividing point and its dividing direction</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-3.png"/>
</fig>
<p><xref ref-type="disp-formula" rid="eqn-18">Eq. (18)</xref> is the necessary condition for <italic>q</italic> to be the dividing point.</p>
<p>Once the position of <italic>q</italic> is determined, the block can be divided along the bisector of the angle <italic>a</italic>.</p>
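<p>The quantities of Eqs. (14)&#x2013;(17) are simple pixel counts. The sketch below is illustrative only: the radius and the synthetic images are our assumptions, not values from the paper. It computes <italic>u<sub>i</sub></italic> over the digital circle <italic>C<sub>i</sub></italic> around a boundary point <italic>q</italic>; on a straight edge <italic>u<sub>i</sub></italic> is close to 0.5, while at a pit between two overlapping round cells the background wedge is narrow, so <italic>u<sub>i</sub></italic> falls well below 0.5 and marks <italic>q</italic> as a candidate dividing point.</p>

```python
import numpy as np

def u_fraction(img, q, radius):
    """u_i of Eq. (17): fraction of zero-valued pixels on the digital
    circle C_i = {p : |pq| close to i} around q (pixels outside img are skipped)."""
    qi, qj = q
    h, w = img.shape
    zeros = ones = 0
    r = int(radius)
    for di in range(-r - 1, r + 2):
        for dj in range(-r - 1, r + 2):
            if abs(np.hypot(di, dj) - radius) >= 0.5:
                continue                      # not on the digital circle
            i, j = qi + di, qj + dj
            if 0 <= i < h and 0 <= j < w:
                if img[i, j]:
                    ones += 1
                else:
                    zeros += 1
    return zeros / (zeros + ones)             # z_i / (z_i + n_i)
```

<p>A decision rule then compares <italic>u<sub>i</sub></italic> against a threshold below 0.5 (the threshold value itself is a tuning choice the paper leaves open).</p>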
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>The Basic Process of Segmentation Algorithm for Overlapping Cells Based on Morphology</title>
<p>For the convenience of expression, the following symbols are defined as shown in <xref ref-type="table" rid="table-1">Tab. 1</xref>.</p>
<table-wrap id="table-1"><label>Table 1</label>
<caption>
<title>Variable symbols and their definitions</title></caption>
<table><colgroup><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Symbol</th>
<th align="left">Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><italic>s</italic></td>
<td align="left">Average area of cell</td>
</tr>
<tr>
<td align="left"><italic>r</italic></td>
<td align="left">Average diameter of cell</td>
</tr>
<tr>
<td align="left"><italic>e</italic></td>
<td align="left">Minimum area of cell</td>
</tr>
<tr>
<td align="left"><italic>t</italic></td>
<td align="left">Maximum area of cell</td>
</tr>
<tr>
<td align="left"><italic>d</italic></td>
<td align="left">Split length</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Comments: <italic>s</italic> is a statistical value measured from the image. The equations defining <italic>e</italic>, <italic>t</italic> and <italic>d</italic> are as follows: <italic>e</italic>&#x2009;&#x003D;&#x2009;0.5&#x2009;&#x22C5;&#x2009;<italic>s</italic>, <italic>t</italic>&#x2009;&#x003D;&#x2009;1.5&#x2009;&#x22C5;&#x2009;<italic>s</italic>, <inline-formula id="ieqn-1">
 <mml:math id="mml-ieqn-1"><mml:mi>d</mml:mi><mml:mo>=</mml:mo><mml:mn>0.25</mml:mn><mml:mo>&#x22C5;</mml:mo><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:mn>0.5</mml:mn><mml:mo>&#x22C5;</mml:mo><mml:msqrt><mml:mrow><mml:mfrac><mml:mi>s</mml:mi><mml:mi>&#x03C0;</mml:mi></mml:mfrac></mml:mrow></mml:msqrt></mml:math>
</inline-formula>.</p>
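<p>As a concrete check of these definitions (the value <italic>s</italic> = 4000 pixels is our assumption, chosen because it yields the thresholds <italic>e</italic> = 2000 and <italic>t</italic> = 6000 that appear in the experiments of Section 5):</p>

```python
import math

def cell_parameters(s):
    """Derive the thresholds of Tab. 1 from the average cell area s."""
    e = 0.5 * s                       # minimum cell area
    t = 1.5 * s                       # maximum cell area
    r = 2.0 * math.sqrt(s / math.pi)  # average diameter of a circle of area s
    d = 0.25 * r                      # split length, = 0.5 * sqrt(s / pi)
    return e, t, r, d
```
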
<p>The basic steps of segmentation algorithm for overlapping cells based on morphology are as follows:</p>
<p>Step 1: Read the cell image and convert it to gray-scale image format.</p>
<p>Step 2: Enhance the cell image with Top-hat and Bot-hat transformation.</p>
<p>Step 3: Convert the cell image from a gray-scale image into a binarized image.</p>
<p>Step 4: Remove small blocks with area less than 0.5&#x22C5;<italic>e</italic> in the foreground.</p>
<p>Step 5: Remove small blocks with area less than 0.25&#x22C5;<italic>e</italic> in the background.</p>
<p>Step 6: Erode the blocks first and then dilate the blocks to smooth the edge of blocks. An opening operation can also achieve the same effect.</p>
<p>Step 7: A majority operation is applied to every pixel to reduce false dividing points, and thus increase the accuracy of searching the dividing points of overlapping cells in the cell image.</p>
<p>Step 8: Remove small blocks. Small blocks with area less than <italic>e</italic> are deleted as noise parts.</p>
<p>Step 9: Search the dividing points from edge of blocks.</p>
<p>Step 10: Segment blocks. After calculating the dividing direction, set all pixels with the distance from the dividing point less than <italic>d</italic> and the coordinate position in the dividing direction to 0.</p>
<p>Step 11: According to its area, each block is treated in one of three ways. Small blocks with area less than <italic>e</italic> are deleted as noise. Medium blocks with area between <italic>e</italic> and <italic>t</italic> are moved to a final target cell image as independent cells; if the final target cell image does not yet exist, a blank image is created for it. Large blocks with area more than <italic>t</italic> are kept in the cell image as potential overlapping cells.</p>
<p>Repeat Steps 9&#x2013;11 until no more dividing points are found.</p>
<p>Step 12: The last remaining image, together with the saved blocks, constitutes the final segmentation result of the cell image. Output this result.</p>
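<p>Step 7 does not spell out the majority operation; one common reading, sketched below as an assumption, replaces each pixel by the majority value of its 3 &#x00D7; 3 neighborhood, which removes isolated noise pixels and fills pinholes that could otherwise produce false dividing points.</p>

```python
import numpy as np

def majority(img):
    """Set each pixel to the majority value of its 3x3 neighborhood
    (zero-padded at the border): 1 iff at least 5 of the 9 cells are 1."""
    h, w = img.shape
    padded = np.zeros((h + 2, w + 2), dtype=int)
    padded[1:-1, 1:-1] = img
    # sum the 9 shifted copies of the padded image to get neighborhood counts
    counts = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return counts >= 5
```
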
<p>The flow chart of the proposed algorithm is shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Flow chart of segmentation algorithm for overlapping cells based on morphology</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-4.png"/>
</fig>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Experimental Results and Analysis</title>
<p>A program was implemented to perform the overlapping-cell separation on the cell image shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<p>First, the cell image is transformed by Top-hat and Bot-hat transformation, and the result is shown in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Top-hat &#x0026; bot-hat transformation</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-5.png"/>
</fig>
<p>Then the cell image is converted into a binary image, and the result is shown in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>.</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>The binarized cell image</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-6.png"/>
</fig>
<p>Small isolated blocks, including scattered dots on the cells and noise in the background, are eliminated from the cell image. The results are shown in <xref ref-type="fig" rid="fig-7">Figs. 7</xref> and <xref ref-type="fig" rid="fig-8">8</xref>.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Remove scattered dots on the cells</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-7.png"/>
</fig>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Remove noise in the background</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-8.png"/>
</fig>
<p>Tiny noise in the cell image, such as thin lines, is eliminated with an erosion operation, as shown in <xref ref-type="fig" rid="fig-9">Fig. 9</xref>.</p>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Erosion operation</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-9.png"/>
</fig>
<p>A dilation operation is performed on the cell image to smooth the cell edge, as shown in <xref ref-type="fig" rid="fig-10">Fig. 10</xref>.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Dilation operation</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-10.png"/>
</fig>
<p>After a majority operation is executed and small blocks are deleted, the cell image is ready for cell edge segmentation, as shown in <xref ref-type="fig" rid="fig-11">Fig. 11</xref>.</p>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Delete blocks less than 2000</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-11.png"/>
</fig>
<p>The cell image is separated into two images according to the area size. The area of normal cells is moderate, as shown in <xref ref-type="fig" rid="fig-12">Fig. 12</xref>, while the area of overlapping cells is larger, as shown in <xref ref-type="fig" rid="fig-13">Fig. 13</xref>.</p>
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>Blocks between 2000 and 8000</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-12.png"/>
</fig>
<fig id="fig-13">
<label>Figure 13</label>
<caption>
<title>Blocks larger than 8000</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-13.png"/>
</fig>
<p>Next, the potential dividing points of the overlapping cells are found. Since the cells are round, all connecting parts of overlapping cells must be concave, such as point A in <xref ref-type="fig" rid="fig-13">Fig. 13</xref>; <xref ref-type="fig" rid="fig-14">Fig. 14</xref> shows an enlarged view of point A. However, not all recessed areas are connecting parts of overlapping cells, such as point B in <xref ref-type="fig" rid="fig-13">Fig. 13</xref>; <xref ref-type="fig" rid="fig-15">Fig. 15</xref> shows an enlarged view of point B. Therefore, a suitable depression-angle threshold must be selected to determine whether a pixel in the cell image is a connection point of overlapping cells.</p>
<fig id="fig-14">
<label>Figure 14</label>
<caption>
<title>Enlarged point A</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-14.png"/>
</fig>
<fig id="fig-15">
<label>Figure 15</label>
<caption>
<title>Enlarged point B</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-15.png"/>
</fig>
<p>For each dividing point of overlapping cells, cell segmentation is performed along the bisector, as shown in <xref ref-type="fig" rid="fig-16">Fig. 16</xref>. Note that a dividing line may occasionally deviate considerably in direction. The overlapping cells produced by segmentation also need erosion and dilation operations to smooth their boundaries. The result is shown in <xref ref-type="fig" rid="fig-17">Fig. 17</xref>.</p>
<fig id="fig-16">
<label>Figure 16</label>
<caption>
<title>Dividing lines of overlapping cells</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-16.png"/>
</fig>
<fig id="fig-17">
<label>Figure 17</label>
<caption>
<title>Smoothed edges of overlapping cells</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-17.png"/>
</fig>
<p>Overlapping cells may be divided into multiple pieces, and pieces with too small an area may be noise and should be deleted. The segmentation targets are divided into two parts by area: larger blocks need further segmentation, as shown in <xref ref-type="fig" rid="fig-18">Fig. 18</xref>, while smaller blocks belong to normal cells and need no further division, as shown in <xref ref-type="fig" rid="fig-19">Fig. 19</xref>.</p>
<fig id="fig-18">
<label>Figure 18</label>
<caption>
<title>Blocks larger than 6000</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-18.png"/>
</fig>
<fig id="fig-19">
<label>Figure 19</label>
<caption>
<title>Blocks less than 6000</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-19.png"/>
</fig>
<p>The dividing process is repeated until no more cells can be divided, as shown in <xref ref-type="fig" rid="fig-20">Figs. 20</xref>&#x2013;<xref ref-type="fig" rid="fig-23">23</xref>.</p>
<fig id="fig-20">
<label>Figure 20</label>
<caption>
<title>The 2nd dividing process</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-20.png"/>
</fig>
<fig id="fig-21">
<label>Figure 21</label>
<caption>
<title>The 3rd dividing process</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-21.png"/>
</fig>
<fig id="fig-22">
<label>Figure 22</label>
<caption>
<title>The 4th dividing process</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-22.png"/>
</fig>
<fig id="fig-23">
<label>Figure 23</label>
<caption>
<title>The 5th dividing process</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-23.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-23">Fig. 23</xref> shows that no cells were divided in the 5th dividing process.</p>
<p>An erosion operation is performed on the final cell image and blocks smaller than 2000 are removed, as shown in <xref ref-type="fig" rid="fig-24">Figs. 24</xref> and <xref ref-type="fig" rid="fig-25">25</xref>.</p>
<fig id="fig-24">
<label>Figure 24</label>
<caption>
<title>Erosion of remaining cells</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-24.png"/>
</fig>
<fig id="fig-25">
<label>Figure 25</label>
<caption>
<title>Remove blocks less than 2000</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-25.png"/>
</fig>
<p>A dilation operation is executed to smooth the remaining cells, as shown in <xref ref-type="fig" rid="fig-26">Fig. 26</xref>.</p>
<fig id="fig-26">
<label>Figure 26</label>
<caption>
<title>Smoothed remaining blocks</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-26.png"/>
</fig>
<p>The result of the dilation operation is combined with the previously confirmed and saved cell blocks to obtain the final cell image, as shown in <xref ref-type="fig" rid="fig-27">Figs. 27</xref>&#x2013;<xref ref-type="fig" rid="fig-29">29</xref>.</p>
<fig id="fig-27">
<label>Figure 27</label>
<caption>
<title>Saved blocks</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-27.png"/>
</fig>
<fig id="fig-28">
<label>Figure 28</label>
<caption>
<title>Overview of blocks</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-28.png"/>
</fig>
<fig id="fig-29">
<label>Figure 29</label>
<caption>
<title>Final segmentation results</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_21929-fig-29.png"/>
</fig>
<p>In order to verify the effectiveness of the algorithm proposed in this paper, its segmentation results are compared with those of the distance transformation algorithm and the limit corrosion algorithm. Since the proposed algorithm does not recover accurate cell edges, the accuracy of cell counting is used as the evaluation measure for cell segmentation. The proposed algorithm is applied to the segmentation and counting of 300 cell images, and the segmentation accuracy is measured for cells with overlapping degrees of 0&#x0025;, 15&#x0025;, 30&#x0025;, 45&#x0025; and 60&#x0025;, as shown in <xref ref-type="table" rid="table-2">Tab. 2</xref>.</p>
<table-wrap id="table-2"><label>Table 2</label>
<caption>
<title>Accuracy of cell segmentation (&#x0025;)</title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Algorithms</th>
<th align="left">0&#x0025;</th>
<th align="left">15&#x0025;</th>
<th align="left">30&#x0025;</th>
<th align="left">45&#x0025;</th>
<th align="left">60&#x0025;</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Distance transformation algorithm</td>
<td align="left">98.7</td>
<td align="left">98.1</td>
<td align="left">97.3</td>
<td align="left">92.8</td>
<td align="left">88.9</td>
</tr>
<tr>
<td align="left">Limit corrosion algorithm</td>
<td align="left">96.7</td>
<td align="left">94.6</td>
<td align="left">90.5</td>
<td align="left">88.7</td>
<td align="left">83.9</td>
</tr>
<tr>
<td align="left">Proposed algorithm</td>
<td align="left">99.3</td>
<td align="left">98.8</td>
<td align="left">97.5</td>
<td align="left">92.4</td>
<td align="left">86.1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>It can be seen from <xref ref-type="table" rid="table-2">Tab. 2</xref> that the accuracy of all three overlapping-cell segmentation and counting methods decreases as the degree of overlap increases. When the degree of overlap is low, the accuracy of the algorithm proposed in this paper is higher than that of the distance transformation algorithm and the limit corrosion algorithm. When the degree of overlap is high, the accuracy of the proposed algorithm is comparable to that of the other two algorithms.</p>
</sec>
<sec id="s6">
<label>6</label>
<title>Conclusions</title>
<p>This paper proposes a cell image segmentation algorithm based on morphology, which uses morphological operations such as erosion and dilation to segment and count cell images. The experimental results show that the proposed algorithm achieves high accuracy when segmenting slightly or moderately overlapping cells, while for heavily overlapping cells its performance is only moderate. However, it must be pointed out that, since the algorithm eliminates part of the edge information during cell segmentation, further processing is needed to obtain accurate cell edge information.</p>
</sec>
</body>
<back><fn-group>
<fn fn-type="other">
<p><bold>Funding Statement:</bold> This work was supported in part by the National Natural Science Foundation of China under Grant 61621062, Grant 61773407, and Grant 61872408; in part by the Natural Science Foundation of Hunan Province of China under Grant 2019JJ40050; and in part by the Key Scientific Research Project of Education Department of Hunan Province of China under Grant 19A099.</p>
</fn>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J. W.</given-names> <surname>Wills</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Robertson</surname></string-name>, <string-name><given-names>H. D.</given-names> <surname>Summers</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Miniter</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Barnes</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Image-based cell profiling enables quantitative tissue microscopy in gastroenterology</article-title>,&#x201D; <source>Cytometry Part A</source>, vol. <volume>97</volume>, no. <issue>12</issue>, pp. <fpage>1222</fpage>&#x2013;<lpage>1237</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>An</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Shen</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Pathology</surname></string-name></person-group>, &#x201C;<article-title>Image-pro plus and imageJ: Comparison and application in image analysis of biological tissues</article-title>,&#x201D; <source>Chinese Journal of Stereology and Image Analysis</source>, vol. <volume>20</volume>, no. <issue>2</issue>, pp. <fpage>185</fpage>&#x2013;<lpage>196</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Cai</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Cao</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Xia</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Xu</surname></string-name></person-group>, &#x201C;<article-title>A new iterative triclass thresholding technique in image segmentation</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>23</volume>, no. <issue>3</issue>, pp. <fpage>1038</fpage>&#x2013;<lpage>1046</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Ghamisi</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Couceiro</surname></string-name>, <string-name><given-names>F. M. L.</given-names> <surname>Martins</surname></string-name> and <string-name><given-names>J. A.</given-names> <surname>Benediktsson</surname></string-name></person-group>, &#x201C;<article-title>Multilevel image segmentation based on fractional-order Darwinian particle swarm optimization</article-title>,&#x201D; <source>IEEE Transactions on Geoscience &#x0026; Remote Sensing</source>, vol. <volume>52</volume>, no. <issue>5</issue>, pp. <fpage>2382</fpage>&#x2013;<lpage>2394</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Renugambal</surname></string-name> and <string-name><given-names>K. S.</given-names> <surname>Bhuvaneswari</surname></string-name></person-group>, &#x201C;<article-title>Image segmentation of brain MR images using OTSU&#x0027;s based hybrid WCMFO algorithm</article-title>,&#x201D; <source>Computers, Materials &#x0026; Continua</source>, vol. <volume>64</volume>, no. <issue>2</issue>, pp. <fpage>681</fpage>&#x2013;<lpage>700</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Harrabi</surname></string-name> and <string-name><given-names>E. B.</given-names> <surname>Braiek</surname></string-name></person-group>, &#x201C;<article-title>Color image segmentation using multi-level thresholding approach and data fusion techniques: Application in the breast cancer cells images</article-title>,&#x201D; <source>Eurasip Journal on Image and Video Processing</source>, vol. <volume>2012</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Liu</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Zhou</surname></string-name></person-group>, &#x201C;<article-title>Multi-focus image region fusion and registration algorithm with multi-scale wavelet</article-title>,&#x201D; <source>Intelligent Automation &#x0026; Soft Computing</source>, vol. <volume>26</volume>, no. <issue>4</issue>, pp. <fpage>1493</fpage>&#x2013;<lpage>1501</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Yang</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Yu</surname></string-name></person-group>, &#x201C;<article-title>Research and implementation of watershed segmentation algorithm based on CCD infrared images</article-title>,&#x201D; <source>Computers, Materials &#x0026; Continua</source>, vol. <volume>62</volume>, no. <issue>3</issue>, pp. <fpage>509</fpage>&#x2013;<lpage>519</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Larlus</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Verbeek</surname></string-name> and <string-name><given-names>F.</given-names> <surname>Jurie</surname></string-name></person-group>, &#x201C;<article-title>Category level object segmentation by combining bag-of-words models with Dirichlet processes and random fields</article-title>,&#x201D; <source>International Journal of Computer Vision</source>, vol. <volume>88</volume>, pp. <fpage>238</fpage>&#x2013;<lpage>253</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Gui</surname></string-name> and <string-name><given-names>M. D.</given-names> <surname>Fox</surname></string-name></person-group>, &#x201C;<article-title>Distance regularized level set evolution and its application to image segmentation</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>19</volume>, no. <issue>12</issue>, pp. <fpage>3243</fpage>&#x2013;<lpage>3254</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Hu</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Han</surname></string-name></person-group>, &#x201C;<article-title>A fast filling algorithm for image restoration based on contour parity</article-title>,&#x201D; <source>Computers, Materials &#x0026; Continua</source>, vol. <volume>62</volume>, no. <issue>3</issue>, pp. <fpage>509</fpage>&#x2013;<lpage>519</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Salembier</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Pardas</surname></string-name></person-group>, &#x201C;<article-title>Hierarchical morphological segmentation for image sequence coding</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>3</volume>, no. <issue>5</issue>, pp. <fpage>639</fpage>&#x2013;<lpage>651</lpage>, <year>1994</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D. E.</given-names> <surname>Ilea</surname></string-name> and <string-name><given-names>P. F.</given-names> <surname>Whelan</surname></string-name></person-group>, &#x201C;<article-title>Image segmentation based on the integration of colour&#x2013;texture descriptors&#x2014;A review</article-title>,&#x201D; <source>Pattern Recognition</source>, vol. <volume>44</volume>, no. <issue>10&#x2013;11</issue>, pp. <fpage>2479</fpage>&#x2013;<lpage>2501</lpage>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>X.</given-names> <surname>Jiao</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Chen</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Dong</surname></string-name></person-group>, &#x201C;<article-title>An unsupervised image segmentation method combining graph clustering and high-level feature representation</article-title>,&#x201D; <source>Neurocomputing</source>, vol. <volume>409</volume>, pp. <fpage>83</fpage>&#x2013;<lpage>92</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P. P.</given-names> <surname>Banik</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Saha</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Kim</surname></string-name></person-group>, &#x201C;<article-title>An automatic nucleus segmentation and CNN model based classification method of white blood cell</article-title>,&#x201D; <source>Expert Systems with Applications</source>, vol. <volume>149</volume>, pp. <fpage>113211</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Hu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Duan</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Zhai</surname></string-name></person-group>, &#x201C;<article-title>Research on the cancer cell&#x0027;s recognition algorithm based on the combination of competitive FHNN and FBPNN</article-title>,&#x201D; <source>International Journal of Computing</source>, vol. <volume>7</volume>, no. <issue>3</issue>, pp. <fpage>229</fpage>&#x2013;<lpage>238</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Erdem</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Bayram</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Bakirman</surname></string-name>, <string-name><given-names>O. C.</given-names> <surname>Bayrak</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Akpinar</surname></string-name></person-group>, &#x201C;<article-title>An ensemble deep learning based shoreline segmentation approach (WaterNet) from Landsat 8 OLI images</article-title>,&#x201D; <source>Advances in Space Research</source>, vol. <volume>67</volume>, no. <issue>3</issue>, pp. <fpage>964</fpage>&#x2013;<lpage>974</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>I.</given-names> <surname>Cseke</surname></string-name></person-group>, &#x201C;<article-title>A fast segmentation scheme for white blood cell images</article-title>,&#x201D; <source>11th IAPR Int. Conf. on Pattern Recognition</source>, vol. <volume>3</volume>, pp. <fpage>530</fpage>&#x2013;<lpage>533</lpage>, <year>1992</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S. H.</given-names> <surname>Rezatofighi</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Khaksari</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Soltanian-Zadeh</surname></string-name></person-group>, &#x201C;<article-title>Automatic recognition of five types of white blood cells in peripheral blood</article-title>,&#x201D; <source>Int. Conf. Image Analysis and Recognition, ICIAR 2010, Lecture Notes in Computer Science</source>, vol. <volume>6112</volume>, pp. <fpage>161</fpage>&#x2013;<lpage>172</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Hou</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Xiao</surname></string-name></person-group>, &#x201C;<article-title>Improved snake model and its application in image edge detection</article-title>,&#x201D; <source>Journal of Data Acquisition &#x0026; Processing</source>, vol. <volume>23</volume>, no. <issue>2</issue>, pp. <fpage>153</fpage>&#x2013;<lpage>157</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Arteta</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Lempitsky</surname></string-name>, <string-name><given-names>J. A.</given-names> <surname>Noble</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Zisserman</surname></string-name></person-group>, &#x201C;<article-title>Learning to detect cells using non-overlapping extremal regions</article-title>,&#x201D; <source>Medical Image Computing and Computer-Assisted Intervention-MICCAI 2012</source>, vol. <volume>15</volume>, no. <issue>1</issue>, pp. <fpage>348</fpage>&#x2013;<lpage>356</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Dimopoulos</surname></string-name>, <string-name><given-names>C. E.</given-names> <surname>Mayer</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Rudolf</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Stelling</surname></string-name></person-group>, &#x201C;<article-title>Accurate cell segmentation in microscopy images using membrane patterns</article-title>,&#x201D; <source>Bioinformatics</source>, vol. <volume>30</volume>, no. <issue>18</issue>, pp. <fpage>2644</fpage>&#x2013;<lpage>2651</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Mohapatra</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Patra</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Kumar</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Satpathy</surname></string-name></person-group>, &#x201C;<article-title>Lymphocyte image segmentation using functional link neural architecture for acute leukemia detection</article-title>,&#x201D; <source>Biomedical Engineering Letters</source>, vol. <volume>2</volume>, no. <issue>2</issue>, pp. <fpage>100</fpage>&#x2013;<lpage>110</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Hung</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Chan</surname></string-name></person-group>, &#x201C;<article-title>A computer assisted method for leukocyte nucleus segmentation and recognition in blood smear images</article-title>,&#x201D; <source>The Journal of Systems and Software</source>, vol. <volume>85</volume>, no. <issue>9</issue>, pp. <fpage>2104</fpage>&#x2013;<lpage>2118</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Su</surname></string-name></person-group>, &#x201C;<article-title>Blood cell image segmentation on color and GVF snake for leukocyte classification on SVM</article-title>,&#x201D; <source>Optics and Precision Engineering</source>, vol. <volume>20</volume>, no. <issue>12</issue>, pp. <fpage>2781</fpage>&#x2013;<lpage>2790</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Gu</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Cui</surname></string-name></person-group>, &#x201C;<article-title>Flexible combination segmentation algorithm for leukocyte images</article-title>,&#x201D; <source>Chinese Journal of Scientific Instrument</source>, vol. <volume>29</volume>, no. <issue>9</issue>, pp. <fpage>1977</fpage>&#x2013;<lpage>1981</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Ding</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Shi</surname></string-name></person-group>, &#x201C;<article-title>Research and implementation of segmentation algorithm for overlapping cells</article-title>,&#x201D; <source>Computer Applications and Software</source>, vol. <volume>25</volume>, no. <issue>4</issue>, pp. <fpage>202</fpage>&#x2013;<lpage>203</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Hou</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Ma</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Pei</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Xin</surname></string-name></person-group>, &#x201C;<article-title>Research on segmentation and recognition of marrow cells image</article-title>,&#x201D; <source>Intelligent Systems Design and Applications, ISDA &#x2019;06. Sixth Int. Conf.</source>, vol. <volume>2</volume>, pp. <fpage>329</fpage>&#x2013;<lpage>332</lpage>, <year>2006</year>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Hu</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Xie</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Li</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Liu</surname></string-name></person-group>, &#x201C;<article-title>Image segmentation of breast cells based on multi-scale region-growing and splitting model</article-title>,&#x201D; <source>Computer Applications and Software</source>, vol. <volume>36</volume>, no. <issue>7</issue>, pp. <fpage>1653</fpage>&#x2013;<lpage>1659</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Di Ruberto</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Dempster</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Khan</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Jarra</surname></string-name></person-group>, &#x201C;<article-title>Analysis of infected blood cell images using morphological operators</article-title>,&#x201D; <source>Image and Vision Computing</source>, vol. <volume>20</volume>, no. <issue>2</issue>, pp. <fpage>133</fpage>&#x2013;<lpage>146</lpage>, <year>2002</year>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Mohanapriya</surname></string-name> and <string-name><given-names>D. B.</given-names> <surname>Kalaavathi</surname></string-name></person-group>, &#x201C;<article-title>Adaptive image enhancement using hybrid particle swarm optimization and watershed segmentation</article-title>,&#x201D; <source>Intelligent Automation &#x0026; Soft Computing</source>, vol. <volume>25</volume>, no. <issue>4</issue>, pp. <fpage>663</fpage>&#x2013;<lpage>672</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Hou</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Shi</surname></string-name></person-group>, &#x201C;<article-title>Application of the improved watershed algorithm based on distance transform in white blood cell segmentation</article-title>,&#x201D; <source>Computing Technology and Automation</source>, vol. <volume>35</volume>, no. <issue>3</issue>, pp. <fpage>81</fpage>&#x2013;<lpage>84</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Qu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Guo</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Ju</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Liu</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Lin</surname></string-name></person-group>, &#x201C;<article-title>The algorithm of accelerated cracks detection and extracting skeleton by direction chain code in concrete surface image</article-title>,&#x201D; <source>The Imaging Science Journal</source>, vol. <volume>64</volume>, no. <issue>3</issue>, pp. <fpage>119</fpage>&#x2013;<lpage>130</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Tan</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Tan</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Xiang</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Tang</surname></string-name> and <string-name><given-names>W.</given-names> <surname>Pan</surname></string-name></person-group>, &#x201C;<article-title>Automatic detection of aortic dissection based on morphology and deep learning</article-title>,&#x201D; <source>Computers, Materials &#x0026; Continua</source>, vol. <volume>62</volume>, no. <issue>3</issue>, pp. <fpage>1201</fpage>&#x2013;<lpage>1215</lpage>, <year>2020</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>