<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">26625</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2022.026625</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Fuzzy Hybrid Coyote Optimization Algorithm for Image Thresholding</article-title>
<alt-title alt-title-type="left-running-head">Fuzzy Hybrid Coyote Optimization Algorithm for Image Thresholding</alt-title>
<alt-title alt-title-type="right-running-head">Fuzzy Hybrid Coyote Optimization Algorithm for Image Thresholding</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Li</surname><given-names>Linguo</given-names></name><xref ref-type="aff" rid="aff-1">1</xref>
<xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Huang</surname><given-names>Xuwen</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Qian</surname><given-names>Shunqiang</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Li</surname><given-names>Zhangfei</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-5" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Li</surname><given-names>Shujing</given-names></name><xref ref-type="aff" rid="aff-2">2</xref><email>lsjing1981@163.com</email>
</contrib>
<contrib id="author-6" contrib-type="author">
<name name-style="western"><surname>Mansour</surname><given-names>Romany F.</given-names></name><xref ref-type="aff" rid="aff-3">3</xref></contrib>
<aff id="aff-1"><label>1</label><institution>School of Computer, Nanjing University of Posts and Telecommunications</institution>, <addr-line>Nanjing, 210003</addr-line>, <country>China</country></aff>
<aff id="aff-2"><label>2</label><institution>School of Computer and Information Engineering, Fuyang Normal University</institution>, <addr-line>Fuyang, 236041</addr-line>, <country>China</country></aff>
<aff id="aff-3"><label>3</label><institution>Department of Mathematics, Faculty of Science, New Valley University</institution>, <addr-line>El-Kharga, 72511</addr-line>, <country>Egypt</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Shujing Li. Email: <email>lsjing1981@163.com</email></corresp>
</author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2022-03-26"><day>26</day>
<month>03</month>
<year>2022</year></pub-date>
<volume>72</volume>
<issue>2</issue>
<fpage>3073</fpage>
<lpage>3090</lpage>
<history>
<date date-type="received"><day>31</day><month>12</month><year>2021</year></date>
<date date-type="accepted"><day>11</day><month>2</month><year>2022</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2022 Li et al.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Li et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_26625.pdf"></self-uri>
<abstract>
<p>To address the problems of the Coyote Optimization Algorithm in image thresholding, such as its tendency to fall into local optima and its slow convergence, a Fuzzy Hybrid Coyote Optimization Algorithm (hereinafter referred to as FHCOA) based on chaotic initialization and a reverse learning strategy is proposed, and its effect on image thresholding is verified. Through chaotic initialization, the random-number initialization in the standard coyote optimization algorithm (COA) is replaced by a chaotic sequence. Such a sequence is nonlinear and unpredictable in the long term, and these characteristics effectively improve the diversity of the population in the optimization algorithm. A hybrid reverse learning strategy is then formed by combining the lens-imaging reverse learning strategy with the optimal-worst reverse learning strategy. During the traversal of the algorithm, the best coyote and the worst coyote in the pack are each selected for a reverse learning operation, which prevents the algorithm from falling into local optima to a certain extent and also alleviates premature convergence. With these improvements, the coyote optimization algorithm achieves better global convergence and computational robustness. Simulation results on multiple images with different numbers of thresholds show that the algorithm yields better thresholding results than five commonly used optimization algorithms.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Coyote optimization algorithm</kwd>
<kwd>image segmentation</kwd>
<kwd>multilevel thresholding</kwd>
<kwd>logistic chaotic map</kwd>
<kwd>hybrid inverse learning strategy</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1"><label>1</label><title>Introduction</title>
<p>Image thresholding is a main method in computer vision [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>], which is widely used in pattern recognition, medical diagnosis, target detection, damage detection, agricultural pest recognition and other fields [<xref ref-type="bibr" rid="ref-3">3</xref>&#x2013;<xref ref-type="bibr" rid="ref-6">6</xref>]. The goal of this method is to subdivide an image into multiple complementary and non-overlapping pixel groups based on a specific set of thresholds, so as to extract the region of interest or feature information from the original image [<xref ref-type="bibr" rid="ref-7">7</xref>]. The method can be divided into two categories: bi-level and multilevel. In the first category, the image is divided into foreground and background by a single threshold. The second category is an extension of the first, using multiple thresholds to divide the image into more than two regions. Generally, bi-level thresholding is only suitable for images with a clear foreground and background and simple pixel information, while multilevel thresholding is better suited to images with more target units and complex scenes. The continuous improvement of image acquisition equipment has enriched the resolution, color and texture of digital images; as a result, multilevel thresholding is more widely used in image thresholding [<xref ref-type="bibr" rid="ref-2">2</xref>]. However, as the number of thresholds increases, the computational complexity grows rapidly. Therefore, a large number of nature-inspired meta-heuristic algorithms (NIMA) are used to improve the computational efficiency [<xref ref-type="bibr" rid="ref-8">8</xref>].</p>
<p>In practical applications of NIMA, multi-threshold segmentation is modeled as an objective-function optimization problem. The first problem to be solved is the selection of the objective function. In the field of multilevel image thresholding, the objective function is generally built from statistics of the image histogram; the most commonly used objective functions are reviewed by Chouksey et al. [<xref ref-type="bibr" rid="ref-9">9</xref>]. Pare et al. [<xref ref-type="bibr" rid="ref-10">10</xref>] fused the Kapur, Otsu and Tsallis criteria into an energy objective function and, exploiting the search ability of NIMA, improved the efficiency and robustness of image thresholding. Experiments showed that Kapur-entropy-assisted bacterial foraging optimization (BFO) and differential evolution (DE) have the best visual effect. Song et al. [<xref ref-type="bibr" rid="ref-11">11</xref>] took Otsu and Kapur as objective functions and used the electromagnetic field optimization (EFO) algorithm for optimal threshold extraction; compared with artificial bee colony (ABC) and wind driven optimization (WDO), this algorithm is more robust in thresholding color images. Upadhyay et al. [<xref ref-type="bibr" rid="ref-12">12</xref>] proposed a multilevel thresholding method based on Kapur entropy, which has the advantages of fewer parameters and avoidance of premature convergence. Compared with particle swarm optimization (PSO), moth flame optimization (MFO), DE, cuckoo search (CS) and the grey wolf optimizer (GWO) on parameters such as PSNR, SSIM and FSIM, this algorithm is superior in thresholding quality and consistency. Singh et al. [<xref ref-type="bibr" rid="ref-13">13</xref>] combined the dragonfly algorithm (DA) and the firefly algorithm (FA) with Otsu and Kapur entropy. On benchmark images from the Berkeley segmentation dataset (BSD500), the hybrid shows better results in convergence iterations, threshold quality and segmentation effect than NIMA such as EMO, GA, PSO and BFO. Houssein et al. [<xref ref-type="bibr" rid="ref-14">14</xref>] used black widow optimization (BWO) to obtain the optimal threshold based on the Otsu and Kapur objective functions. Compared with the six heuristic algorithms GWO, MFO, whale optimization algorithm (WOA), sine cosine algorithm (SCA), salp swarm algorithm (SSA) and equilibrium optimization (EO), this method is better in PSNR, SSIM and FSIM.</p>
<p>The above summarizes the use of objective functions in multilevel thresholding in recent years. From these analyses, it can be seen that Kapur entropy has been widely used. Having analyzed and determined the objective function, we now review applications of NIMA that use Kapur as the objective function. Bhandari et al. [<xref ref-type="bibr" rid="ref-15">15</xref>] segmented color images through electromagnetism-like mechanism optimization (EMO); compared with bat algorithm (BA), backtracking search algorithm (BSA), FA, PSO and WDO, this method performs better on average error, PSNR and other parameters. Li et al. [<xref ref-type="bibr" rid="ref-16">16</xref>] optimized Kapur entropy through an improved GWO algorithm to obtain the optimal thresholds; compared with the standard GWO, EO and DE algorithms, it showed better performance in optimal objective function value and threshold stability. Raj [<xref ref-type="bibr" rid="ref-17">17</xref>] used the DE method to realize image thresholding, taking PSNR, SSIM and SNR as evaluation indices; compared with BFO, the bees&#x2019; algorithm and their improved variants, it performs better in the standard deviation of the objective function and in computing efficiency. Taking fuzzy Kapur as the objective function, Li et al. [<xref ref-type="bibr" rid="ref-18">18</xref>] improved GWO and used the algorithm to obtain the optimal thresholds. Compared with the standard GWO and DE, this algorithm obtains a better objective function value and a more stable segmentation effect on evaluation parameters such as standard deviation, average value and PSNR. With the same objective function, Li et al. [<xref ref-type="bibr" rid="ref-19">19</xref>] improved the local search ability of the ABC and aggregated neighborhood information in median, mean and iterative-mean modes. Compared with EMO, fuzzy DE and standard ABC, this method is superior in convergence speed, convergence stability and running time. Houssein et al. [<xref ref-type="bibr" rid="ref-20">20</xref>] verified the improvement that opposition-based learning (OBL) brings to the convergence and search performance of the marine predators algorithm (MPA) in multilevel thresholding. Compared with algorithms such as OBL-based DE, HHO and MPA, this method shows an advantage in the convergence speed of the optimization algorithm; the authors also experimentally analyzed quantitative statistical performance in image thresholding. Singh et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] surveyed nearly 30 kinds of NIMA from 2005 to 2021, including DE, FA, GA, PSO and ABC. Compared with artificial neural networks (ANN), region-growing, edge-based and other kinds of methods, thresholding has been widely used because it requires less prior knowledge and fewer steps. Based on [<xref ref-type="bibr" rid="ref-18">18</xref>,<xref ref-type="bibr" rid="ref-19">19</xref>], Li et al. [<xref ref-type="bibr" rid="ref-22">22</xref>] improved COA through differential and fuzzy strategies and conducted image thresholding using Otsu and fuzzy Kapur as objective functions. The experimental results indicate that, compared with GWO and the other methods in [<xref ref-type="bibr" rid="ref-18">18</xref>,<xref ref-type="bibr" rid="ref-19">19</xref>], FCOA with fuzzy Kapur obtains a better objective function value and segmentation quality. On this basis, within a similar framework, this paper introduces chaotic initialization and a reverse learning strategy to further improve it and form FHCOA.</p>
<p>The remainder of this paper is organized as follows: Section 2 introduces the selection of objective function and COA. Section 3 presents the improved COA by chaotic initialization and reverse learning in detail. Section 4 analyzes the performance of the improved optimization algorithm in parameter selection and segmentation quality through detailed experimental comparison, and provides a detailed quantitative and visual result analysis. Section 5 introduces the application of FHCOA in brain medical image thresholding. Finally, Section 6 concludes this study.</p>
</sec>
<sec id="s2"><label>2</label><title>Objective Function Analysis and COA Introduction</title>
<p>Image thresholding requires two core steps: one is the selection of the objective function, the other is the application of a NIMA for threshold optimization. Based on the analysis in Section 1, Kapur entropy and Otsu are still the main objective functions used in many thresholding methods. Some authors of this paper first verified the effect of Kapur entropy in image thresholding in [<xref ref-type="bibr" rid="ref-16">16</xref>], and also analyzed and compared Otsu and fuzzy Kapur in [<xref ref-type="bibr" rid="ref-18">18</xref>,<xref ref-type="bibr" rid="ref-19">19</xref>]. From the experimental analysis, fuzzy Kapur is better than Otsu in most cases. Combined with the effectiveness of fuzzy Kapur in improving COA, which was verified in [<xref ref-type="bibr" rid="ref-22">22</xref>], this study still takes fuzzy Kapur as the objective function. Let <italic>i</italic> represent the gray level of the image; then the count of each gray level can be expressed as <italic>h(i)</italic>, from which the gray-level probability distribution of the image is obtained. On this basis, a group of Kapur entropies can be easily obtained (for the detailed calculation equations, please refer to references [<xref ref-type="bibr" rid="ref-18">18</xref>,<xref ref-type="bibr" rid="ref-19">19</xref>,<xref ref-type="bibr" rid="ref-22">22</xref>]). Therefore, the fuzzy Kapur objective function used in this paper can be expressed as:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:munderover><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:msub><mml:mi>H</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>t</mml:mi><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mspace width="thickmathspace" /><mml:mo>&#x2026;</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>t</mml:mi><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:math></disp-formula>where <italic>th</italic> is the multi threshold vector, <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>t</mml:mi><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> (<italic>i&#x2009;</italic>&#x003D;<italic>&#x2009;</italic>0, 1,&#x2026;, n) is the <italic>i-th</italic> threshold.</p>
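As a concrete illustration, the crisp Kapur objective in Eq. (1) sums the Shannon entropies <italic>H<sub>i</sub></italic> of the histogram regions delimited by the threshold vector <italic>th</italic>. The Python sketch below computes this crisp version; the function name and arguments are ours, and the fuzzy membership weighting of references [18,19,22] is omitted for brevity:

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Sum of region entropies H_i over the regions cut by the thresholds
    (crisp form of Eq. (1)). hist: 256-bin gray-level histogram (counts);
    thresholds: sorted integers in (0, 256)."""
    p = hist / hist.sum()                      # gray-level probability distribution
    edges = [0] + sorted(thresholds) + [256]   # region boundaries th_0 ... th_n
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                     # probability mass of the region
        if w > 0:
            q = p[lo:hi][p[lo:hi] > 0] / w     # normalized in-region distribution
            total += -(q * np.log(q)).sum()    # H_i: Shannon entropy of the region
    return total

# a NIMA searches for the threshold vector maximizing this objective, e.g.:
hist = np.bincount(np.random.default_rng(0).integers(0, 256, 10_000), minlength=256)
score = kapur_entropy(hist, [85, 170])
```

For a uniform histogram, each of the <italic>n</italic> + 1 regions contributes the logarithm of its width, which makes the behavior of the objective easy to check by hand.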
<p>Having determined the objective function, the second core step is the selection and application of a NIMA. This paper focuses on verifying the performance of the improved COA in threshold optimization.</p>
<sec id="s2_1"><label>2.1</label><title>Coyote Groups and Initialization of Their Parameters</title>
<p>Firstly, the population formed by <italic>N</italic> coyotes is randomly assigned to <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>P</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> coyote packs (groups). The number of coyotes in each group is <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>, which can be formulated as <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>p</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2217;</mml:mo><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>, and the maximum number of iterations of the algorithm (<italic>nfevalMax</italic>) is set. The COA is designed according to the social conditions and environmental adaptability of coyotes in nature, thus the social condition <italic>Soc</italic> (namely, the decision variable of the <italic>c-th</italic> coyote of the <italic>p-th</italic> group at <italic>t-th</italic> instant of time) can be defined as:
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>N</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>Wherein <italic>N</italic> is the dimension of the search space, which is also the number of thresholds in this paper. Therefore, at the <italic>t-th</italic> instant, the initialization of the <italic>c-th</italic> coyote of the <italic>p-th</italic> group in the <italic>i-th</italic> dimension is shown in <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>. The fitness value (i.e., objective function value) of each coyote is shown in <xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref>.
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula>
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:mrow><mml:msub><mml:mi>j</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>Wherein <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> are respectively the lower and upper bounds of the problem to be solved, which are 0 and 255 in image thresholding, <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is a real random number in the range [0, 1] generated with uniform probability, and <italic>Obj</italic> is the objective function to be optimized by COA.</p>
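Under these definitions, Eqs. (2)&#x2013;(4) amount to drawing <italic>N</italic> = <italic>N<sub>p</sub></italic> &#x00D7; <italic>N<sub>c</sub></italic> uniform random vectors in [<italic>lb</italic>, <italic>ub</italic>] and evaluating the objective for each coyote. A minimal Python sketch (function and variable names are illustrative, not the authors' implementation):

```python
import numpy as np

def init_population(n_packs, n_coyotes, dim, lb=0.0, ub=255.0, seed=None):
    """Eq. (3): Soc_{c,i} = lb_i + rand_i * (ub_i - lb_i) for N = Np * Nc coyotes,
    randomly grouped into Np packs."""
    rng = np.random.default_rng(seed)
    soc = lb + rng.random((n_packs * n_coyotes, dim)) * (ub - lb)
    return soc.reshape(n_packs, n_coyotes, dim)

def evaluate(packs, obj):
    """Eq. (4): Obj_c = Obj(Soc_c) for every coyote in every pack."""
    return np.apply_along_axis(obj, -1, packs)

packs = init_population(n_packs=5, n_coyotes=4, dim=3, seed=1)  # 20 coyotes, 3 thresholds
fitness = evaluate(packs, obj=lambda soc: soc.sum())             # placeholder objective
```

In the actual algorithm the placeholder objective would be replaced by the fuzzy Kapur function of Eq. (1), and the thresholds would be rounded to integer gray levels before evaluation.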
</sec>
<sec id="s2_2"><label>2.2</label><title>Updating Coyotes in the Group</title>
<p>To improve the optimization effect, the growth of a coyote is influenced by several factors, including the best coyote <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>A</mml:mi><mml:mi>l</mml:mi><mml:mi>p</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:msup><mml:mi>a</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> and the cultural trend <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>C</mml:mi><mml:mi>u</mml:mi><mml:mi>l</mml:mi><mml:mrow><mml:msup><mml:mi>t</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> of the group. <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>A</mml:mi><mml:mi>l</mml:mi><mml:mi>p</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:msup><mml:mi>a</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> is computed by <xref ref-type="disp-formula" rid="eqn-5">Eq. (5)</xref>, and <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mi>C</mml:mi><mml:mi>u</mml:mi><mml:mi>l</mml:mi><mml:mrow><mml:msup><mml:mi>t</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> by <xref ref-type="disp-formula" rid="eqn-6">Eq. (6)</xref> (where <italic>Asc</italic> denotes the social conditions of all coyotes in a group, ranked in ascending order); <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mrow><mml:msub><mml:mi>&#x03B4;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mspace width="thickmathspace" /><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mrow><mml:msub><mml:mi>&#x03B4;</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> are computed from two coyotes selected at random in the group, as in <xref ref-type="disp-formula" rid="eqn-7">Eqs. (7)</xref> and <xref ref-type="disp-formula" rid="eqn-8">(8)</xref>. Thus, the growth pattern of coyotes is given in <xref ref-type="disp-formula" rid="eqn-9">Eq. (9)</xref>, where <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is the new solution obtained for the <italic>c-th</italic> coyote in the <italic>p-th</italic> group at the <italic>t-th</italic> instant. After the coyotes in the group grow, the objective function values are recalculated as shown in <xref ref-type="disp-formula" rid="eqn-10">Eq. (10)</xref>, which provides the basis for the next iteration.
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mi>A</mml:mi><mml:mi>l</mml:mi><mml:mi>p</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:msup><mml:mi>a</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo fence="false" stretchy="false">}</mml:mo></mml:mrow></mml:msub></mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>O</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></disp-formula>
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mi>C</mml:mi><mml:mi>u</mml:mi><mml:mi>l</mml:mi><mml:msubsup><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:mi>A</mml:mi><mml:mi>s</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mstyle><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mtext>&#x00A0;is&#x00A0;odd</mml:mtext></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mstyle displaystyle="true" 
scriptlevel="0"><mml:mfrac><mml:mrow><mml:mi>A</mml:mi><mml:mi>s</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mstyle><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:mi>A</mml:mi><mml:mi>s</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mstyle><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mstyle><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mi>c</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mtext>&#x00A0;is&#x00A0;even</mml:mtext></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mrow><mml:msub><mml:mi>&#x03B4;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mi>A</mml:mi><mml:mi>l</mml:mi><mml:mi>p</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:msup><mml:mi>a</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula>
<disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mrow><mml:msub><mml:mi>&#x03B4;</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mi>C</mml:mi><mml:mi>u</mml:mi><mml:mi>l</mml:mi><mml:mrow><mml:msup><mml:mi>t</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula>
<disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2217;</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B4;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2217;</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03B4;</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:msubsup><mml:mi>j</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>Once the coyotes in the group have grown, their social conditions are updated according to the quality of the objective function (the fitness value); that is, a coyote keeps its new social condition only if it improves the fitness, as shown in <xref ref-type="disp-formula" rid="eqn-11">Eq. (11)</xref>.
<disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:msubsup><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mi>c</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:msubsup><mml:mi>j</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:msubsup><mml:mi>j</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" 
/><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>w</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
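The growth and update steps of Eqs. (7)–(11) can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the function name <monospace>update_coyote</monospace>, the NumPy array representation, and the minimization convention (smaller objective is better, consistent with Eq. (11)) are our assumptions; <monospace>alpha</monospace> denotes the pack's alpha (best) coyote and <monospace>cult</monospace> its cultural tendency.

```python
import numpy as np

def update_coyote(soc_c, alpha, cult, soc_r1, soc_r2, obj):
    """One social-condition update for coyote c (sketch of Eqs. 7-11).

    soc_c          -- current social condition (threshold vector) of coyote c
    alpha          -- the pack's alpha (best) coyote
    cult           -- the pack's cultural tendency
    soc_r1, soc_r2 -- two randomly chosen pack mates
    obj            -- objective function to minimize
    """
    delta1 = alpha - soc_r1                        # alpha influence, Eq. (7)
    delta2 = cult - soc_r2                         # pack influence, Eq. (8)
    r1, r2 = np.random.rand(), np.random.rand()
    new_soc = soc_c + r1 * delta1 + r2 * delta2    # candidate, Eq. (9)
    # greedy selection, Eqs. (10)-(11): keep the candidate only if it improves
    return new_soc if obj(new_soc) < obj(soc_c) else soc_c
```

Because the selection is greedy, the returned coyote is never worse than the input coyote under <monospace>obj</monospace>.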
</sec>
<sec id="s2_3"><label>2.3</label><title>Population Regeneration and Elimination</title>
<p>To keep the algorithm from falling into a local optimum, new coyotes must be added continually. In the standard COA, the birth of newborn coyotes is influenced by the social conditions and environment of the randomly selected parent coyotes, as in <xref ref-type="disp-formula" rid="eqn-12">Eq. (12)</xref>, where <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mspace width="thickmathspace" /><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mspace width="thickmathspace" /><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> are the two coyotes randomly selected in group <italic>P</italic>, <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mspace width="thickmathspace" /><mml:mrow><mml:mtext>and</mml:mtext></mml:mrow><mml:mspace width="thickmathspace" /><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> are the threshold sequence numbers randomly selected in the two groups, and <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mrow><mml:msub><mml:mrow><mml:mtext>R</mml:mtext></mml:mrow><mml:mrow><mml:mtext>i</mml:mtext></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> is a randomly generated set of legitimate thresholds.
<disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:mi>P</mml:mi><mml:mi>u</mml:mi><mml:msubsup><mml:mi>p</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003C;</mml:mo><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow><mml:mspace width="thickmathspace" /><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mspace width="thickmathspace" /><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" 
/><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow><mml:mspace width="thickmathspace" /><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mspace width="thickmathspace" /><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>w</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>In addition, <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> represents the scattering probability and <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> the correlation probability; together, these two parameters determine the cultural diversity of the coyote groups. In the standard COA, they are defined as:
<disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>D</mml:mi></mml:math></disp-formula>
<disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:math></disp-formula></p>
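Pup generation according to Eqs. (12)–(14) can be sketched as below. The function name <monospace>breed_pup</monospace> and the per-dimension random draws are illustrative assumptions; each dimension is inherited from parent <italic>r</italic><sub>1</sub>, parent <italic>r</italic><sub>2</sub>, or drawn uniformly from the legal threshold range.

```python
import numpy as np

def breed_pup(soc_r1, soc_r2, lb, ub):
    """Generate a pup from two random parents (sketch of Eqs. 12-14)."""
    D = len(soc_r1)
    Ps = 1.0 / D                        # scattering probability, Eq. (13)
    Pa = (1.0 - Ps) / 2.0               # correlation probability, Eq. (14)
    i1, i2 = np.random.choice(D, 2, replace=False)  # guaranteed inherited genes
    pup = np.empty(D)
    for i in range(D):
        r = np.random.rand()
        if r < Ps or i == i1:           # inherit from parent r1
            pup[i] = soc_r1[i]
        elif r >= Ps + Pa or i == i2:   # inherit from parent r2
            pup[i] = soc_r2[i]
        else:                           # random legal threshold R_i
            pup[i] = lb[i] + np.random.rand() * (ub[i] - lb[i])
    return pup
```

If both parents lie inside the bounds, every gene of the pup does as well, since each gene is either inherited or sampled from [<italic>lb</italic>, <italic>ub</italic>].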
<p>COA keeps the population size constant by synchronizing the birth and death of coyotes. If exactly one coyote in the group is less socially adapted than the newborn, that coyote dies, the newborn survives, and its age is set to 0. If several coyotes are less adapted than the newborn, the oldest of them dies and the newborn survives with its age set to 0. If no coyote is less adapted than the newborn, the newborn dies.</p>
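This birth/death synchronization rule can be written compactly. The sketch below uses hypothetical names (<monospace>replace_with_pup</monospace>, plain Python lists) and the minimization convention, so "less adapted" means a larger objective value than the pup's.

```python
def replace_with_pup(pack_soc, pack_obj, pack_age, pup, pup_obj):
    """Pup survival rule (sketch): the pup replaces the oldest coyote
    among those it outperforms; if it outperforms nobody, it dies.
    Returns the replaced index, or None if the pup dies."""
    worse = [k for k, f in enumerate(pack_obj) if f > pup_obj]
    if not worse:
        return None                                # no coyote is worse: pup dies
    k = max(worse, key=lambda j: pack_age[j])      # oldest less-adapted coyote dies
    pack_soc[k], pack_obj[k], pack_age[k] = pup, pup_obj, 0
    return k
```

Note that the case of exactly one less-adapted coyote is subsumed: with a single candidate, it is trivially the oldest.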
</sec>
<sec id="s2_4"><label>2.4</label><title>Eviction and Acceptance of Coyotes</title>
<p>During the natural evolution of coyotes, individuals are constantly replaced according to their adaptability in order to maintain the continuity and improve the quality of the population: some low-quality coyotes are evicted, and some high-quality coyotes are accepted into the pack. To reflect this natural phenomenon in the algorithm design, the eviction and acceptance of coyotes occur with a certain probability, as shown in <xref ref-type="disp-formula" rid="eqn-15">Eq. (15)</xref>.
<disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>0.005</mml:mn><mml:mo>&#x2217;</mml:mo><mml:msubsup><mml:mi>N</mml:mi><mml:mi>c</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:math></disp-formula></p>
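Eq. (15) can be evaluated directly. In the small helper below, the cap at 1 is our addition (a probability cannot exceed 1 for packs of more than 14 coyotes), not part of the original equation.

```python
def eviction_probability(Nc):
    """Eq. (15): eviction/acceptance probability for a pack of Nc coyotes.
    The min(..., 1.0) cap is an assumption added for large packs."""
    return min(1.0, 0.005 * Nc ** 2)
```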
</sec>
</sec>
<sec id="s3"><label>3</label><title>Improved Coyote Optimization Algorithm</title>
<p>Population initialization is a key initial condition for all NIMA. Although the population is continuously optimized and updated during the search, a better initialization method can improve the convergence speed of the algorithm and, to a certain extent, prevent it from falling into a local optimum. The population initialization of the standard COA uses a random function to assign the coyotes&#x2019; social conditions. This approach is likely to cause the coyote group to fall into a local optimum [<xref ref-type="bibr" rid="ref-22">22</xref>], which places higher requirements on COA population initialization.</p>
<sec id="s3_1"><label>3.1</label><title>Logistic Chaotic Mapping</title>
<p>A chaotic map generates a random-looking sequence from a simple deterministic system and is characterized by nonlinearity, ergodicity, randomness, and long-term unpredictability [<xref ref-type="bibr" rid="ref-23">23</xref>]. In the optimization process, a chaotic map can replace the pseudo-random generator, initializing the population from a chaotic sequence so as to further improve the convergence of the algorithm and the accuracy of the final solution. In view of this, this paper uses the logistic map to initialize the COA population; its equation is defined as:
<disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:mi>C</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mo>&#x2217;</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2217;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula>where <italic>t</italic> is the number of iterations, <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mrow><mml:mtext>a</mml:mtext></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:math></inline-formula>,<inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mrow><mml:mtext>&#x00A0;&#x00A0;</mml:mtext></mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> represents the <italic>t-th</italic> chaotic number. It can be observed from the equation that with the continuous increase of <italic>a,</italic> the chaos of the sequence becomes greater, and when <italic>a&#x2009;</italic>&#x003D;<italic>&#x2009;</italic>4, the system is in a completely chaotic state. 
In this paper, the chaotic number obtained from <xref ref-type="disp-formula" rid="eqn-16">Eq. (16)</xref> with <italic>a</italic>&#x2009;&#x003D;&#x2009;4 replaces the random number in <xref ref-type="disp-formula" rid="eqn-5">Eq. (5)</xref>, so the improved initialization equation can be formulated as:
<disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>C</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2217;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
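The chaotic initialization of Eqs. (16)–(17) can be sketched as follows. The function name <monospace>logistic_init</monospace> and the seed value <monospace>x0 = 0.7</monospace> are our assumptions; any seed in (0, 1) works, provided the fixed points of the map (e.g., 0, 0.25, 0.5, 0.75) are avoided.

```python
import numpy as np

def logistic_init(n_coyotes, D, lb, ub, a=4.0, x0=0.7):
    """Chaotic population initialization (sketch of Eqs. 16-17).

    A logistic map with a = 4 (fully chaotic regime) replaces the
    uniform random numbers of the standard COA initialization."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    pop = np.empty((n_coyotes, D))
    x = x0
    for c in range(n_coyotes):
        for i in range(D):
            x = a * x * (1.0 - x)                     # Eq. (16)
            pop[c, i] = lb[i] + x * (ub[i] - lb[i])   # Eq. (17)
    return pop
```

For grayscale thresholding, the bounds would typically be the intensity range, e.g. <monospace>lb = 0</monospace> and <monospace>ub = 255</monospace> per threshold.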
</sec>
<sec id="s3_2"><label>3.2</label><title>Best Coyote Lens Reverse Learning Strategy</title>
<p>In improved NIMA, reverse learning takes the current solution (coyote) as the reference object, obtains the corresponding reverse solution through a reverse learning strategy, and updates the optimal solution by comparing objective function values. In this study, a hybrid strategy is obtained by combining the lens imaging reverse learning strategy with the worst coyote reverse learning strategy [<xref ref-type="bibr" rid="ref-20">20</xref>]. Each time the COA finishes updating all coyotes, the best and the worst coyotes of the iteration undergo the reverse learning operation, preventing the algorithm from falling into a local optimum prematurely.</p>
<p>In the exploration for the optimal solution, the current optimal solution is mirrored through a convex lens according to the lens imaging principle. This increases the differentiation of the updated solution and widens the search for a new optimum, thereby overcoming the tendency of the algorithm to fall into a local optimum. The working principle is shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<fig id="fig-1"><label>Figure 1</label><caption><title>Lens imaging reverse learning</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_26625-fig-1.png"/></fig>
<p>In practical applications, the coordinate origin o is the midpoint of the lower and upper bounds of the threshold, and a convex lens with focal length <italic>r</italic> is placed at the origin. According to the principle of convex lens imaging, the object <italic>M</italic> with height <italic>h</italic> forms a mirror image <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mrow><mml:msup><mml:mi>M</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow></mml:math></inline-formula> with height <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mrow><mml:msup><mml:mi>h</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow></mml:math></inline-formula> on the other side of the lens. In this paper, the optimal coyote <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mi>s</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> in the group is taken as a point on the abscissa according to the absolute value, and the projection on the abscissa is the newly generated optimal coyote <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mspace width="thickmathspace" /><mml:mi>s</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> (inverse solution). Therefore, the following equation can be obtained based on the principle of convex lens imaging:
<disp-formula id="eqn-18"><label>(18)</label><mml:math id="mml-eqn-18" display="block"><mml:mfrac><mml:mrow><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mstyle><mml:mo>&#x2212;</mml:mo><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mstyle></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mi>h</mml:mi><mml:mrow><mml:mrow><mml:msup><mml:mi>h</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>Let <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mi>h</mml:mi><mml:mrow><mml:mrow><mml:msup><mml:mi>h</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>=</mml:mo><mml:mi>n</mml:mi></mml:math></inline-formula>; after transformation, <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> is formulated as:
<disp-formula id="eqn-19"><label>(19)</label><mml:math id="mml-eqn-19" display="block"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:mfrac><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mi>n</mml:mi></mml:mrow></mml:mfrac><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mi>n</mml:mi></mml:mfrac></mml:math></disp-formula></p>
<p>As can be observed from <xref ref-type="disp-formula" rid="eqn-19">Eq. (19)</xref>, selecting different values of <italic>n</italic> yields new individuals distinct from the optimal coyote. When <italic>n</italic>&#x2009;&#x003D;&#x2009;1, lens imaging reverse learning reduces to the ordinary reverse learning strategy. To improve the differentiation of the reverse solutions, <italic>n</italic>&#x2009;&#x003D;&#x2009;6000 is set in this work. When lens reverse learning is completed, the selection rule for the optimal solution is shown in <xref ref-type="disp-formula" rid="eqn-20">Eq. (20)</xref>.
<disp-formula id="eqn-20"><label>(20)</label><mml:math id="mml-eqn-20" display="block"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:msubsup><mml:mi>j</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:msubsup><mml:mi>j</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace 
width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>w</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
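The lens imaging reverse step of Eq. (19) and the greedy selection of Eq. (20) can be sketched as below. The function names and the vectorized form over all thresholds are our assumptions; note that with <monospace>n = 1</monospace> the formula collapses to the ordinary reverse solution <italic>lb</italic>&#x2009;+&#x2009;<italic>ub</italic>&#x2009;&#x2212;&#x2009;<italic>x</italic>.

```python
import numpy as np

def lens_reverse_best(soc_best, lb, ub, n=6000.0):
    """Lens-imaging reverse solution of the best coyote (sketch of Eq. 19)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    mid = (lb + ub) / 2.0                       # lens placed at the midpoint
    return mid + mid / n - np.asarray(soc_best, float) / n   # Eq. (19)

def select_best(soc_best, soc_new_best, obj):
    """Greedy selection after reverse learning (Eq. 20): keep the better one."""
    return soc_new_best if obj(soc_new_best) < obj(soc_best) else soc_best
```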
</sec>
<sec id="s3_3"><label>3.3</label><title>Worst Coyote Reverse Learning Strategy</title>
<p>Lens reverse learning handles the update of the optimal coyote. To further increase the diversity of the coyote population, this paper applies the worst coyote reverse learning strategy to update the worst coyote, thereby ensuring the overall quality of the population. The equation is:
<disp-formula id="eqn-21"><label>(21)</label><mml:math id="mml-eqn-21" display="block"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mo>&#x2217;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>u</mml:mi><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-21">Eq. (21)</xref>, <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> represents the worst coyote in the current coyote group, <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> the new solution obtained by the worst coyote reverse learning strategy, and <italic>rand</italic> is a random real number in [0, 1]. Finally, the worst coyote in the population is updated according to <xref ref-type="disp-formula" rid="eqn-22">Eq. (22)</xref>.
<disp-formula id="eqn-22"><label>(22)</label><mml:math id="mml-eqn-22" display="block"><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi mathvariant="normal">&#x005F;</mml:mi><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi><mml:mi 
mathvariant="normal">&#x005F;</mml:mi><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:msubsup><mml:mi>j</mml:mi><mml:mrow><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>O</mml:mi><mml:mi>b</mml:mi><mml:msubsup><mml:mi>j</mml:mi><mml:mrow><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mi>S</mml:mi><mml:mi>o</mml:mi><mml:msubsup><mml:mi>c</mml:mi><mml:mrow><mml:mi>w</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>w</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo fence="true" stretchy="true" 
symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>In each iteration of COA, when the population update is completed, the best coyote and the worst coyote are updated through <xref ref-type="disp-formula" rid="eqn-19">Eqs. (19)</xref> and <xref ref-type="disp-formula" rid="eqn-21">(21)</xref>, respectively, and the better individuals are selected through <xref ref-type="disp-formula" rid="eqn-20">Eqs. (20)</xref> and <xref ref-type="disp-formula" rid="eqn-22">(22)</xref>.</p>
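<p>The worst-coyote update of Eqs. (21) and (22) can be sketched as follows. This is a minimal illustration in Python, not the authors' MATLAB implementation: the exact reversed-candidate formula and all names (<italic>reverse_learning_update</italic>, <italic>objective</italic>, <italic>lb</italic>, <italic>ub</italic>) are our assumptions; only the greedy acceptance rule mirrors Eq. (22), replacing the worst coyote solely when the candidate lowers the objective.</p>

```python
import numpy as np

def reverse_learning_update(soc_worst, objective, lb, ub, rng):
    # Generate a reversed candidate by reflecting the worst coyote inside
    # the search bounds [lb, ub], scaled by a random number rand in [0, 1];
    # this is one common form of opposition-based (reverse) learning.
    rand = rng.random()
    soc_new = np.clip(lb + ub - rand * soc_worst, lb, ub)
    # Greedy selection of Eq. (22): keep the candidate only if it improves
    # the objective; otherwise retain the current worst coyote.
    if objective(soc_new) < objective(soc_worst):
        return soc_new
    return soc_worst
```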
</sec>
</sec>
<sec id="s4"><label>4</label><title>Comparison and Analysis of Experimental Results</title>
<p>The standard image segmentation test set BSD500 [<xref ref-type="bibr" rid="ref-18">18</xref>] published by Berkeley is selected as the object of the experimental comparative analysis, and five NIMA-based methods are selected for quantitative and visual analysis and discussion. Some of the authors of this study have conducted extensive comparative experiments in [<xref ref-type="bibr" rid="ref-16">16</xref>,<xref ref-type="bibr" rid="ref-18">18</xref>&#x2013;<xref ref-type="bibr" rid="ref-20">20</xref>]. For example, in [<xref ref-type="bibr" rid="ref-16">16</xref>] they compared the effects of GWO, EMO and DE in multilevel image thresholding; detailed data comparison showed that GWO has a better thresholding effect and more stable convergence. References [<xref ref-type="bibr" rid="ref-18">18</xref>,<xref ref-type="bibr" rid="ref-19">19</xref>] analyze the thresholding performance of FMGWOA (fuzzy modified GWO with aggregation) and FMABCA (fuzzy modified ABC with aggregation), respectively. Compared with standard GWO, standard ABC, FDE, EMO and the methods in [<xref ref-type="bibr" rid="ref-16">16</xref>], these two improved algorithms are significantly better in both segmentation speed and segmentation quality. In particular, in [<xref ref-type="bibr" rid="ref-22">22</xref>] we improved COA through a fuzzy objective function, a differential strategy and neighborhood fuzzy information aggregation, and verified its effect in thresholding segmentation. The experimental results show that our methods FCOA (fuzzy coyote optimization algorithm) and FICOA (fuzzy and improved coyote optimization algorithm) are superior to the standard COA, GWO and other algorithms. Therefore, we further compare the method proposed in this paper with FICOA (including FCOA) [<xref ref-type="bibr" rid="ref-22">22</xref>], FMGWOA [<xref ref-type="bibr" rid="ref-18">18</xref>], FMABCA [<xref ref-type="bibr" rid="ref-19">19</xref>] and GWO [<xref ref-type="bibr" rid="ref-24">24</xref>], which is also a wolf-pack heuristic algorithm, in order to show that our method achieves better image thresholding performance.</p>
<sec id="s4_1"><label>4.1</label><title>Parameter Setting and Related Discussion of Experimental Result Analysis</title>
<p>In order to fully analyze the performance of FHCOA both visually and quantitatively, six images are selected from BSD500 according to the complexity of the scenes, the amount of color information and the aspect ratio of the images, as shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>. The experiments are run on 64-bit Windows 10 with an Intel Core i5 processor and 16 GB of RAM, and are implemented in MATLAB 2016a. All experimental data are generated on the same platform.</p>
<fig id="fig-2"><label>Figure 2</label><caption><title>Original images for multi-level thresholding</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_26625-fig-2.png"/></fig>
<p>According to the analysis of image thresholding evaluation in Section 1, this paper evaluates the differences between FHCOA and the other algorithms through PSNR and FSIM [<xref ref-type="bibr" rid="ref-25">25</xref>]. These two metrics are also the evaluation mode selected in the comparative references [<xref ref-type="bibr" rid="ref-18">18</xref>&#x2013;<xref ref-type="bibr" rid="ref-19">19</xref>,<xref ref-type="bibr" rid="ref-22">22</xref>,<xref ref-type="bibr" rid="ref-24">24</xref>]. PSNR evaluates the degree of image distortion based on the pixel-wise error between images: the larger the PSNR value, the smaller the distortion and the better the segmentation effect. FSIM evaluates the segmentation effect by measuring the feature similarity between the images before and after segmentation; similarly, the larger the value, the better the segmentation effect.</p>
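<p>For reference, PSNR is computed from the pixel-wise mean squared error between the original and segmented images. The sketch below uses the usual 8-bit peak value of 255; the function name is ours, not from the paper's code.</p>

```python
import numpy as np

def psnr(original, segmented, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger values mean less
    distortion and hence a better thresholding result."""
    diff = np.asarray(original, dtype=np.float64) - np.asarray(segmented, dtype=np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```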
<p>Regarding the setting of the number of iterations, <xref ref-type="table" rid="table-1">Tab. 1</xref> shows the PSNR and FSIM values of the Starfish image (<italic>th&#x2009;</italic>&#x003D;<italic>&#x2009;</italic>5) in <xref ref-type="fig" rid="fig-2">Fig. 2</xref> with different numbers of iterations. As can be seen from <xref ref-type="table" rid="table-1">Tab. 1</xref>, when the number of iterations increases from 1000 to 10000, the values of PSNR and FSIM show an upward trend. However, when the number of iterations reaches 15000, the values of PSNR and FSIM do not increase further but show a slight downward trend. Considering the impact of the number of iterations on the time efficiency of the algorithm, we finally set the number of iterations to 10,000.</p>
<table-wrap id="table-1"><label>Table 1</label><caption><title>Image thresholding quality with different no. of iterations</title></caption>
<table>
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">nfevalMax</th>
<th align="left">1000</th>
<th align="left">2500</th>
<th align="left">6000</th>
<th align="left">10000</th>
<th align="left">15000</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">PSNR</td>
<td align="left">22.8592</td>
<td align="left">22.8599</td>
<td align="left">22.8659</td>
<td align="left">22.8708</td>
<td align="left">22.8525</td>
</tr>
<tr>
<td align="left">FSIM</td>
<td align="left">0.7127</td>
<td align="left">0.7128</td>
<td align="left">0.7143</td>
<td align="left">0.7183</td>
<td align="left">0.7168</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Based on the conventional parameter settings in the comparative references and the literature analysis in Section 1 [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>], the number of test thresholds is set to 2, 3, 4 and 5, and the total number of coyotes and the number of coyotes in each group are set to 20 and 5, respectively, as shown in <xref ref-type="table" rid="table-2">Tab. 2</xref>. The parameters of the comparative methods follow those of the original literature, and the hardware, operating system and software configuration of the running platform are consistent with those in this study.</p>
<table-wrap id="table-2"><label>Table 2</label><caption><title>Thresholds and population related parameter setting</title></caption>
<table>
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Total no. of coyotes</th>
<th align="left">No. of coyotes in a group</th>
<th align="left">No. of thresholds</th>
<th align="left">No. of iterations</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">20</td>
<td align="left">5</td>
<td align="left">2, 3, 4, 5</td>
<td align="left">10000</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In order to avoid overly small segmentation regions caused by isolated pixels in the image thresholding process, similar to the analysis in [<xref ref-type="bibr" rid="ref-18">18</xref>&#x2013;<xref ref-type="bibr" rid="ref-19">19</xref>,<xref ref-type="bibr" rid="ref-22">22</xref>], this paper aggregates the information of neighborhood regions with the help of fuzzy aggregation theory. <xref ref-type="table" rid="table-3">Tab. 3</xref> lists the PSNR values obtained by three aggregation methods with different threshold numbers. Comparing the PSNR values in <xref ref-type="table" rid="table-3">Tab. 3</xref>, we can conclude that the median method achieves the best thresholding effect at every threshold number, so the median method is used in the subsequent experiments.</p>
<table-wrap id="table-3"><label>Table 3</label><caption><title>Comparison of different aggregation methods by PSNR with different threshold numbers</title></caption>
<table>
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Method</th>
<th align="left">2</th>
<th align="left">3</th>
<th align="left">4</th>
<th align="left">5</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Median</td>
<td align="left"><bold>16.7771</bold></td>
<td align="left"><bold>18.5834</bold></td>
<td align="left"><bold>19.3009</bold></td>
<td align="left"><bold>20.7280</bold></td>
</tr>
<tr>
<td align="left">Average</td>
<td align="left">16.4595</td>
<td align="left">17.9494</td>
<td align="left">18.5956</td>
<td align="left">19.3137</td>
</tr>
<tr>
<td align="left">Iterative average</td>
<td align="left">16.5891</td>
<td align="left">18.0732</td>
<td align="left">18.9554</td>
<td align="left">19.9122</td>
</tr>
</tbody>
</table>
</table-wrap>
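<p>One simple way to realize the median aggregation compared in Tab. 3 is a sliding-window median over each pixel's neighborhood, which suppresses isolated pixels before thresholding. The sketch below is our simplified stand-in for the paper's fuzzy neighborhood aggregation, with illustrative names and a default 3&#x00D7;3 window.</p>

```python
import numpy as np

def median_aggregate(gray, radius=1):
    """Replace each pixel with the median gray level of its
    (2*radius+1)^2 neighborhood, so isolated pixels do not spawn
    tiny segmentation regions. Edges are padded by replication."""
    padded = np.pad(gray, radius, mode="edge")
    h, w = gray.shape
    out = np.empty_like(gray)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.median(window)
    return out
```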
</sec>
<sec id="s4_2"><label>4.2</label><title>Comparison of Image Thresholding Results of FHCOA, FICOA and FCOA</title>
<p>Based on the comprehensive analysis of the advantages and disadvantages of Kapur entropy and fuzzy Kapur entropy in [<xref ref-type="bibr" rid="ref-18">18</xref>,<xref ref-type="bibr" rid="ref-19">19</xref>], the experiments there show that FMABCA and FMGWOA have clear advantages. The experimental analysis in [<xref ref-type="bibr" rid="ref-22">22</xref>] shows that its thresholding effect is better than that of [<xref ref-type="bibr" rid="ref-18">18</xref>,<xref ref-type="bibr" rid="ref-19">19</xref>]. Therefore, this paper first makes a comparative analysis with [<xref ref-type="bibr" rid="ref-22">22</xref>] using the same NIMA, and comparisons with the other methods are carried out in subsequent sections. <xref ref-type="fig" rid="fig-3">Fig. 3</xref> shows the visual thresholding results of FHCOA.</p>
<fig id="fig-3"><label>Figure 3</label><caption><title>Image thresholding based on FHCOA</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_26625-fig-3.png"/></fig>
<p>From the visual results in <xref ref-type="fig" rid="fig-3">Fig. 3</xref> alone, it is difficult to judge which method is superior. For a more accurate comparative analysis, the threshold distributions and quantitative segmentation evaluation values optimized by FICOA and FHCOA are given in <xref ref-type="table" rid="table-4">Tabs. 4</xref> and <xref ref-type="table" rid="table-5">5</xref>, respectively. <xref ref-type="table" rid="table-4">Tabs. 4</xref> and <xref ref-type="table" rid="table-5">5</xref> indicate that, in the comparison of the threshold vector <italic>th</italic>, the threshold values of FHCOA span a wider range (are more widely distributed), which suggests to a certain extent that FHCOA obtains better thresholds. However, the segmentation effect cannot be fully judged from the threshold distribution alone. Therefore, the segmentation quality evaluation parameters PSNR and FSIM are also listed in <xref ref-type="table" rid="table-4">Tabs. 4</xref> and <xref ref-type="table" rid="table-5">5</xref>.</p>
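<p>Given an optimized threshold vector <italic>th</italic> such as those tabulated here, segmentation maps each pixel into one of <italic>th</italic>&#x2009;&#x002B;&#x2009;1 gray-level classes. The sketch below represents each class by the midpoint of its gray-level interval so that PSNR/FSIM can be computed against the original; the midpoint choice is our assumption (class means are another common option).</p>

```python
import numpy as np

def apply_thresholds(gray, th, peak=255.0):
    """Quantize a gray-scale image with an ascending threshold vector th,
    mapping each pixel to the midpoint of its gray-level interval."""
    th = np.sort(np.asarray(th, dtype=np.float64))
    labels = np.digitize(gray, th)           # class index 0 .. len(th)
    edges = np.concatenate(([0.0], th, [peak]))
    mids = (edges[:-1] + edges[1:]) / 2.0    # one representative per class
    return mids[labels]
```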
<table-wrap id="table-4"><label>Table 4</label><caption><title>Results of FICOA with fuzzy Kapur entropy as the objective function</title></caption>
<table>
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Image</th>
<th align="left">th</th>
<th align="left">Thresholds</th>
<th align="left">PSNR</th>
<th align="left">FSIM</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Reunion</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">82 164<break/>77.5 141.5 199.5<break/>68 138 172.5 218<break/>71 122.5 157.5 190 228</td>
<td align="left">16.3730<break/>20.5767<break/>22.8883<break/>24.2068</td>
<td align="left">0.5568<break/>0.6573<break/>0.7199<break/>0.7647</td>
</tr>
<tr>
<td align="left">Children</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">70 197.5<break/>63.5 138.5 202.5<break/>55 127 170 225.5<break/>42.5 87 118.5 163 216</td>
<td align="left">15.4916<break/>18.0813<break/>19.3391<break/>20.8122</td>
<td align="left">0.5887<break/>0.6324<break/>0.6669<break/>0.7089</td>
</tr>
<tr>
<td align="left">Skiing</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">67 194.5<break/>49.5 107.5 188.5<break/>50.5 109 158.5 226.5<break/>51 106.5 136 185 230.5</td>
<td align="left">16.7771<break/>18.4639<break/>19.2573<break/>20.5597</td>
<td align="left">0.5700<break/>0.6338<break/>0.6704<break/>0.7402</td>
</tr>
<tr>
<td align="left">Town</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">58 170.5<break/>55 114 186<break/>51 107.5 156 205<break/>41 100 131 170 226</td>
<td align="left">18.2341<break/>20.0420<break/>20.7285<break/>21.2949</td>
<td align="left">0.5565<break/>0.6311<break/>0.6591<break/>0.7128</td>
</tr>
<tr>
<td align="left">Swan</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">63 159<break/>72.5 132.5 199.5<break/>62.5 110.5 163 218.5<break/>32 65 118 180 223</td>
<td align="left">18.3296<break/>22.6667<break/>24.7063<break/>25.9682</td>
<td align="left">0.5927<break/>0.6178<break/>0.6393<break/>0.7660</td>
</tr>
<tr>
<td align="left">Starfish</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">62.5 184.5<break/>50.5 106 182.5<break/>43 92.5 136 194<break/>39 87 115.5 161.5 212</td>
<td align="left">17.4745<break/>19.9855<break/>21.5972<break/>22.8617</td>
<td align="left">0.5096<break/>0.5838<break/>0.6617<break/>0.7177</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-5"><label>Table 5</label><caption><title>Results of FHCOA with fuzzy Kapur entropy as the objective function</title></caption>
<table>
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Image</th>
<th align="left">th</th>
<th align="left">Thresholds</th>
<th align="left">PSNR</th>
<th align="left">FSIM</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Reunion</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">82 164<break/>70.5 150 207<break/>60.5 137.5 170.5 219.5<break/>69.5 121 160.5 199 233</td>
<td align="left">16.3730<break/>20.9218<break/>22.9210<break/>24.4491</td>
<td align="left">0.5568<break/>0.6587<break/>0.7161<break/>0.7675</td>
</tr>
<tr>
<td align="left">Children</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">70 197.5<break/>63 141 203<break/>47.5 123 162 214.5<break/>41.5 89 136.5 178.5 219.5</td>
<td align="left">15.4916<break/>18.1016<break/>19.5186<break/>20.8855</td>
<td align="left">0.5887<break/>0.6318<break/>0.6669<break/>0.7052</td>
</tr>
<tr>
<td align="left">Skiing</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">67 194.5<break/>53.5 111 190.5<break/>48.5 108.5 157.5 224<break/>46.5 107.5 147 194 234</td>
<td align="left">16.7771<break/>18.5834<break/>19.3009<break/>20.7280</td>
<td align="left">0.5700<break/>0.6382<break/>0.6697<break/>0.7515</td>
</tr>
<tr>
<td align="left">Town</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">58 170.5<break/>49.5 109 185.5<break/>49 111.5 160.5 204<break/>43.5 97.5 139.5 180 215</td>
<td align="left">18.2341<break/>20.0811<break/>20.7477<break/>21.3598</td>
<td align="left">0.5565<break/>0.6340<break/>0.6617<break/>0.7135</td>
</tr>
<tr>
<td align="left">Swan</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">63 159<break/>63.5 124.5 192.5<break/>60 104.5 158.5 210.5<break/>28.5 60 101 163 212.5</td>
<td align="left">18.3296<break/>22.8013<break/>24.7148<break/>26.1441</td>
<td align="left">0.5927<break/>0.6206<break/>0.6395<break/>0.7666</td>
</tr>
<tr>
<td align="left">Starfish</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">62.5 184.5<break/>51 106 182<break/>47 110.5 143.5 208<break/>39 81 111.5 158 210</td>
<td align="left">17.4745<break/>19.9855<break/>21.6425<break/>22.8689</td>
<td align="left">0.5096<break/>0.5838<break/>0.6581<break/>0.7198</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>As can be observed in <xref ref-type="table" rid="table-4">Tabs. 4</xref> and <xref ref-type="table" rid="table-5">5</xref>, in terms of the PSNR evaluation parameter, the values obtained by FHCOA equal those of FICOA when the threshold number is 2, but as the threshold number increases, the values obtained by FHCOA become higher than those of FICOA, and the margin of improvement generally grows. Overall, compared with FICOA, the PSNR value obtained by FHCOA increased by 0.072 on average, an average increase ratio of 0.335&#x0025;. The maximum improvement was obtained on the Reunion image with a threshold number of 3, where the evaluation value increased by 0.3451, an increase ratio of 1.677&#x0025;. For the FSIM evaluation value, the average increase ratio of FHCOA over FICOA is 1.13&#x0025;; the maximum improvement occurred on the Skiing image with a threshold number of 5, with an improvement ratio of 1.527&#x0025;. In addition, FHCOA obtains the higher value in most cases. This detailed comparison of the two evaluation indices shows that FHCOA has better comprehensive image thresholding quality than FICOA.</p>
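<p>The "increase" and "increase ratio" quoted throughout these comparisons follow the usual definitions: the absolute difference, and the difference relative to the baseline method's value. A quick check on the Reunion image with <italic>th</italic>&#x2009;&#x003D;&#x2009;3 (the helper name is ours):</p>

```python
def increase_ratio(new, old):
    """Absolute increase and percentage increase relative to the old value."""
    return new - old, (new - old) / old * 100.0

# FHCOA vs. FICOA PSNR on Reunion, th = 3 (Tabs. 4 and 5)
inc, ratio = increase_ratio(20.9218, 20.5767)
```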
</sec>
<sec id="s4_3"><label>4.3</label><title>Comparison of PSNR Values of Image Thresholding with Different Algorithms</title>
<p>The previous sections focused on the improved method in this work in terms of running-platform configuration, program parameter setting, data source selection and the performance improvement over [<xref ref-type="bibr" rid="ref-22">22</xref>]. In order to further illustrate the comparative advantages of FHCOA, which is improved by the lens imaging strategy and the worst-coyote reverse learning strategy, this section compares FHCOA with other popular NIMA-based thresholding methods. The comparative references [<xref ref-type="bibr" rid="ref-18">18</xref>&#x2013;<xref ref-type="bibr" rid="ref-19">19</xref>,<xref ref-type="bibr" rid="ref-22">22</xref>,<xref ref-type="bibr" rid="ref-24">24</xref>] take PSNR as the main standard to evaluate the image segmentation effect, so this section uses this evaluation index to compare FHCOA, FICOA, FCOA, GWO, FMABCA and FMGWOA. The relevant data are shown in <xref ref-type="table" rid="table-6">Tab. 6</xref>.</p>
<table-wrap id="table-6"><label>Table 6</label><caption><title>Comparison of image thresholding results of PSNR with different algorithms</title></caption>
<table>
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Image</th>
<th align="left" rowspan="2">th</th>
<th align="center" colspan="6">PSNR</th>
</tr>
<tr>
<th align="left">GWO</th>
<th align="left">FMABCA</th>
<th align="left">FMGWOA</th>
<th align="left">FCOA</th>
<th align="left">FICOA</th>
<th align="left">FHCOA</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Reunion</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">13.6951<break/>16.9100<break/>19.7884<break/>20.4981</td>
<td align="left">16.2002<break/>20.4477<break/>22.6588<break/>24.1301</td>
<td align="left">16.2816<break/>20.5450<break/>22.6998<break/>23.9350</td>
<td align="left">16.2816<break/>20.4172<break/>22.8660<break/>24.0739</td>
<td align="left">16.3730<break/>20.5767<break/>22.8883<break/>24.2068</td>
<td align="left">16.3730<break/><bold>20.9218</bold><break/><bold>22.9210</bold><break/><bold>24.4491</bold></td>
</tr>
<tr>
<td align="left">Children</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">14.5817<break/>16.0739<break/>16.6189<break/>17.2544</td>
<td align="left">15.4793<break/>17.9531<break/>19.2542<break/>20.7253</td>
<td align="left">15.4793<break/><bold>18.1652</bold><break/>19.2989<break/>20.7235</td>
<td align="left">15.4916<break/>18.1054<break/>19.3217<break/>20.7372</td>
<td align="left">15.4916<break/>18.0813<break/>19.3391<break/>20.8122</td>
<td align="left">15.4916<break/>18.1016<break/><bold>19.5186</bold><break/><bold>20.8855</bold></td>
</tr>
<tr>
<td align="left">Skiing</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">14.4466<break/>15.7072<break/>16.3582<break/>18.3807</td>
<td align="left">16.7771<break/>18.4625<break/>19.1891<break/>20.4991</td>
<td align="left">16.7771<break/>18.4625<break/>19.1439<break/>20.6091</td>
<td align="left">16.7771<break/>18.4635<break/>19.2149<break/>20.5144</td>
<td align="left">16.7771<break/>18.4639<break/>19.2573<break/>20.5597</td>
<td align="left">16.7771<break/><bold>18.5834</bold><break/><bold>19.3009</bold><break/><bold>20.7280</bold></td>
</tr>
<tr>
<td align="left">Town</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">16.0747<break/>18.2495<break/>18.4678<break/>20.2214</td>
<td align="left">18.2300<break/>19.7287<break/>20.7181<break/>21.2815</td>
<td align="left">18.2341<break/>19.8886<break/>20.6289<break/>21.3067</td>
<td align="left">18.2341<break/>19.8398<break/>20.6621<break/>21.2194</td>
<td align="left">18.2341<break/>20.0420<break/>20.7285<break/>21.2949</td>
<td align="left">18.2341<break/><bold>20.0811</bold><break/><bold>20.7477</bold><break/><bold>21.3598</bold></td>
</tr>
<tr>
<td align="left">Swan</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">15.0182<break/>16.6803<break/>18.3412<break/>19.0864</td>
<td align="left">17.5107<break/>20.3213<break/>23.5738<break/>24.6911</td>
<td align="left">18.2777<break/>22.6895<break/>24.5940<break/>25.9070</td>
<td align="left">18.2777<break/>22.5160<break/>24.5372<break/>25.7511</td>
<td align="left">18.3296<break/>22.6667<break/>24.7063<break/>25.9682</td>
<td align="left">18.3296<break/><bold>22.8013</bold><break/><bold>24.7148</bold><break/><bold>26.1441</bold></td>
</tr>
<tr>
<td align="left">Starfish</td>
<td align="left">2<break/>3<break/>4<break/>5</td>
<td align="left">14.9845<break/>17.3068<break/>19.2231<break/>20.8924</td>
<td align="left">17.4140<break/>19.7241<break/>19.7407<break/>22.6369</td>
<td align="left">17.4140<break/>19.8708<break/>21.5804<break/>22.6734</td>
<td align="left">17.4745<break/>19.9262<break/>21.5150<break/>22.6405</td>
<td align="left">17.4745<break/>19.9855<break/>21.5972<break/>22.8617</td>
<td align="left">17.4745<break/>19.9855<break/><bold>21.6425</bold><break/><bold>22.8689</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In Section 4.2, FHCOA was compared with FCOA and FICOA in terms of PSNR and FSIM in detail, and it was concluded that the comprehensive image thresholding quality of FHCOA is better than that of the other two algorithms. Building on those comparisons, this section focuses on the advantages and disadvantages of FHCOA relative to the other methods (GWO, FMABCA and FMGWOA). Compared with the GWO segmentation method, the PSNR value of FHCOA is higher than that of GWO at every threshold number on every image. The PSNR value optimized by FHCOA increased by 3.0657 on average, an average increase ratio of 17.79&#x0025;. The highest increase occurred on the Swan image with a threshold number of 5, with an increase value of 7.0577 and an increase ratio of 36.98&#x0025;. The lowest increase occurred on the Children image with a threshold number of 2, with an increase value of 0.9099 and an increase ratio of 6.24&#x0025;. This comparison reveals that FHCOA significantly improves the segmentation quality on all six images, showing from the perspective of quantitative analysis that FHCOA has significant advantages over GWO (a NIMA of the same type) in thresholding image segmentation.</p>
<p>Compared with FMABCA, except that the PSNR values of the Skiing image with a threshold number of 2 are equal, the PSNR values obtained by FHCOA are higher than those of FMABCA in all other cases, with an average increase of 0.462 and an average increase ratio of 2.23&#x0025;. The highest increase occurs on the Swan image with a threshold number of 3, with an increase of 2.48 and an increase ratio of 12.2&#x0025;. This comparison shows that FHCOA again performs well in image segmentation quality evaluation. Compared with FMGWOA, the PSNR value obtained by FHCOA increased by 0.1354 on average, with an average increase ratio of 0.636&#x0025;. The highest increase occurred on the Reunion image with a threshold number of 5, with an increase value of 0.5141 and an increase ratio of 2.15&#x0025;. In addition, when the threshold number of the Skiing and Town images is 2, the segmentation quality evaluation values obtained by the two algorithms are equal. Only on the Children image with a threshold number of 3 is the PSNR value of FHCOA 0.0636 lower than that of FMGWOA, a deficit of only 0.35&#x0025;; all other experimental values are higher than those of FMGWOA. Although the quality evaluation on one image with one threshold number is slightly worse, FHCOA is better than FMGWOA in thresholding image segmentation on the whole, especially in terms of average and maximum improvement.</p>
<p>According to the above comparisons, the FHCOA in this study is significantly better than GWO and FMABCA in image segmentation effect, and slightly better than FMGWOA. Combined with the data analysis and comparison in <xref ref-type="table" rid="table-4">Tabs. 4</xref> and <xref ref-type="table" rid="table-6">6</xref>, and with the visual comparison in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>, the improved algorithm, which optimizes the best coyote through lens imaging and the worst coyote through reverse learning, has clear advantages in multilevel image thresholding.</p>
</sec>
</sec>
<sec id="s5"><label>5</label><title>Experimental Results of FHCOA in Brain Image Thresholding</title>
<p>Image thresholding has important application value in pattern recognition and target detection, and it is also widely used in medical image processing [<xref ref-type="bibr" rid="ref-22">22</xref>]. In traditional medical diagnosis, medical images are mainly observed and analyzed with the doctor&#x0027;s naked eye, so their diagnostic value depends entirely on the doctor&#x0027;s experience and professional knowledge, particularly for mild cases in which the lesions are difficult to locate. Therefore, automatic medical image thresholding through computer-aided means has attracted more and more attention. In order to reflect the value of FHCOA in medical image thresholding and to illustrate the practical application value of the method proposed in this paper, this section analyzes the effect of FHCOA on five brain medical images.</p>
<p><xref ref-type="fig" rid="fig-4">Fig. 4</xref> illustrates five different original brain images and the visual results after thresholding by FHCOA under different threshold conditions. It can be observed from <xref ref-type="fig" rid="fig-4">Fig. 4</xref> that as the threshold number increases, the segmentation of the brain image becomes finer, and the various brain tissue regions become increasingly distinct. Based on these segmented regions, doctors can more effectively focus on a particular region of the brain, which is more conducive to the diagnosis of brain tissue lesions.</p>
<p>At the same time, if the lesion area is not sufficiently conspicuous, feature analysis and other means can be added to locate it promptly. In order to quantitatively analyze the brain image segmentation effect, <xref ref-type="table" rid="table-7">Tab. 7</xref> shows the threshold vectors, PSNR and FSIM values of FHCOA when the images in <xref ref-type="fig" rid="fig-4">Fig. 4</xref> are segmented with different threshold numbers. To better assess FICOA in medical image segmentation, the quantitative evaluation data of FICOA [<xref ref-type="bibr" rid="ref-22">22</xref>] under the same image conditions are also listed in <xref ref-type="table" rid="table-7">Tab. 7</xref>. From the comparative analysis of FHCOA and FICOA, the PSNR value obtained by FHCOA increased by 0.1371 on average, with an average improvement ratio of 0.628&#x0025;. The highest increase occurred on the Brain79 image with a threshold number of 3, with an increase value of 0.7746 and a maximum improvement ratio of 3.916&#x0025;. In terms of the FSIM evaluation index, FHCOA is slightly better than FICOA, with an average improvement ratio of 0.38&#x0025;. Based on the above analysis, FHCOA can not only effectively complete the threshold segmentation of natural images but also realize the segmentation of brain medical images, which has significant research value.</p>
<fig id="fig-4"><label>Figure 4</label><caption><title>Medical image thresholding based on FHCOA</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_26625-fig-4.png"/></fig>
<table-wrap id="table-7"><label>Table 7</label><caption><title>Experimental results of FHCOA with different threshold numbers</title></caption>
<table>
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Image</th>
<th align="left" rowspan="2">th (number of thresholds)</th>
<th align="left" rowspan="2">Thresholds of FHCOA</th>
<th align="center" colspan="2">PSNR</th>
<th align="center" colspan="2">FSIM</th>
</tr>
<tr>
<th align="left">FHCOA</th>
<th align="left">FICOA</th>
<th align="left">FHCOA</th>
<th align="left">FICOA</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Brain</td>
<td align="left">2<break/>3<break/>5</td>
<td align="left">53 154<break/>40 104.5 193.5<break/>24 65.5 133.5 174.5 216.5</td>
<td align="left">18.2564<break/>22.6180<break/>26.3522</td>
<td align="left">18.2564<break/>22.5511<break/>26.1981</td>
<td align="left">0.6299<break/>0.7846<break/>0.9013</td>
<td align="left">0.6299<break/>0.7829<break/>0.8996</td>
</tr>
<tr>
<td align="left">Brain64</td>
<td align="left">2<break/>3<break/>5</td>
<td align="left">169.5 220<break/>29 97.5 196<break/>30 72.5 122.5 174 222</td>
<td align="left">11.5897<break/>20.3961<break/>25.7336</td>
<td align="left">11.5897<break/>20.3877<break/>25.6769</td>
<td align="left">0.4998<break/>0.6633<break/>0.8070</td>
<td align="left">0.4998<break/>0.6599<break/>0.8059</td>
</tr>
<tr>
<td align="left">Brain79</td>
<td align="left">2<break/>3<break/>5</td>
<td align="left">151 203<break/>28 88 190<break/>16 62.5 119.5 157.5 211.5</td>
<td align="left">12.4052<break/>20.5575<break/>26.1436</td>
<td align="left">12.3065<break/>19.7829<break/>25.9792</td>
<td align="left">0.4755<break/>0.6867<break/>0.8176</td>
<td align="left">0.4802<break/>0.6568<break/>0.8149</td>
</tr>
<tr>
<td align="left">Brain83</td>
<td align="left">2<break/>3<break/>5</td>
<td align="left">145 200<break/>62.5 146.5 211.5<break/>26 87 136 172.5 226.5</td>
<td align="left">14.7370<break/>19.2568<break/>25.8646</td>
<td align="left">14.8668<break/>18.9239<break/>25.5382</td>
<td align="left">0.5200<break/>0.6566<break/>0.8177</td>
<td align="left">0.5237<break/>0.6552<break/>0.8122</td>
</tr>
<tr>
<td align="left">Brain92</td>
<td align="left">2<break/>3<break/>5</td>
<td align="left">137 196<break/>23 88.5 193.5<break/>22 80 131 169.5 224</td>
<td align="left">14.4990<break/>20.6317<break/>27.6248</td>
<td align="left">14.4990<break/>20.6130<break/>27.4402</td>
<td align="left">0.5260<break/>0.6989<break/>0.8448</td>
<td align="left">0.5260<break/>0.6974<break/>0.8414</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s6"><label>6</label><title>Conclusion</title>
<p>In order to improve the efficiency and segmentation quality of COA in multilevel thresholding, this paper introduces chaotic initialization and a hybrid reverse-learning strategy and proposes an improved coyote optimization algorithm based on this hybrid strategy, which, combined with the fuzzy aggregation method, effectively realizes the thresholding of both ordinary images and brain medical images. Of the two strategies, chaotic initialization makes the initial coyote population distribution more random and uniform, while the hybrid reverse-learning strategy overcomes the tendency of the original algorithm to fall into local optima. On this basis, this paper takes fuzzy Kapur entropy as the objective function and applies fuzzy aggregation theory to avoid over-segmentation (isolated points or detached island regions). The experimental results show that, on the same test images with the same numbers of thresholds, FHCOA outperforms five other popular thresholding algorithms in overall segmentation quality. In practical applications, segmentation results that meet the needs of different conditions can be obtained simply by adjusting the number of thresholds according to the application purpose.</p>
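The reverse-learning step summarized above follows the general opposition-based learning idea: for a candidate at position x within bounds [lb, ub], the opposite point is lb&#x2009;+&#x2009;ub&#x2009;&#x2212;&#x2009;x, and the better-scoring of the two is kept. A minimal sketch of a hybrid variant (the per-dimension random mixing factor is an assumption made here for illustration, not the paper's exact formula):

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_opposite(population, lb, ub):
    """Hybrid opposition-based candidates.

    For each coyote x the classic opposite is lb + ub - x; a random
    per-dimension factor then blends x with its opposite so each
    candidate keeps part of its original position (the 'hybrid' part,
    an assumed formulation).
    """
    opposite = lb + ub - population        # classic opposition point
    r = rng.random(population.shape)       # mixing factor in [0, 1)
    return r * population + (1.0 - r) * opposite

# 5 coyotes, each encoding 3 gray-level thresholds in [0, 255]
pop = rng.uniform(0.0, 255.0, size=(5, 3))
opp = hybrid_opposite(pop, 0.0, 255.0)
```

In the optimization loop, each coyote would be replaced by its hybrid opposite only when the opposite achieves a higher fuzzy Kapur entropy, which is what helps the search escape local optima.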
</sec>
</body>
<back>
<ack>
<p>Thanks for the support and help of the team in writing this paper. Thanks also to the reviewers and experts of the journal for their valuable opinions on the revision of the article, which provided the authors with great inspiration during writing.</p>
</ack>
<fn-group>
<fn fn-type="other"><p><bold>Funding Statement:</bold> This paper is supported by the National Youth Natural Science Foundation of China (61802208), the National Natural Science Foundation of China (61572261 and 61876089), the Natural Science Foundation of Anhui (1908085MF207, KJ2020A1215, KJ2021A1251 and KJ2021A1253), the Excellent Youth Talent Support Foundation of Anhui (gxyqZD2019097 and gxyqZD2021142), the Postdoctoral Foundation of Jiangsu (2018K009B), the Foundation of Fuyang Normal University (TDJC2021008).</p></fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p></fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S. S.</given-names> <surname>Chouhan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Koul</surname></string-name> and <string-name><given-names>U. P.</given-names> <surname>Singh</surname></string-name></person-group>, &#x201C;<article-title>Soft computing approaches for image segmentation: A survey</article-title>,&#x201D; <source>Multimedia Tools and Applications</source>, vol. <volume>77</volume>, no. <issue>1</issue>, pp. <fpage>28483</fpage>&#x2013;<lpage>28537</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Pare</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>G. K.</given-names> <surname>Singh</surname></string-name> and <string-name><given-names>V.</given-names> <surname>Bajaj</surname></string-name></person-group>, &#x201C;<article-title>Image segmentation using multilevel thresholding: A research review</article-title>,&#x201D; <source>Iranian Journal of Science and Technology, Transactions of Electrical Engineering</source>, vol. <volume>44</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>29</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>Splicing image and its localization: A survey</article-title>,&#x201D; <source>Journal of Information Hiding and Privacy Protection</source>, vol. <volume>1</volume>, no. <issue>2</issue>, pp. <fpage>77</fpage>&#x2013;<lpage>86</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Abdel-Basset</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Mohamed</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Abouhawwash</surname></string-name>, <string-name><given-names>R. K.</given-names> <surname>Chakrabortty</surname></string-name>, <string-name><given-names>M. J.</given-names> <surname>Ryan</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>An improved jellyfish algorithm for multilevel thresholding of magnetic resonance brain image segmentations</article-title>,&#x201D; <source>Computers, Materials &#x0026; Continua</source>, vol. <volume>68</volume>, no. <issue>3</issue>, pp. <fpage>2961</fpage>&#x2013;<lpage>2977</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Sowjanya</surname></string-name> and <string-name><given-names>S. K.</given-names> <surname>Injeti</surname></string-name></person-group>, &#x201C;<article-title>Investigation of butterfly optimization and gases brownian motion optimization algorithms for optimal multilevel image thresholding</article-title>,&#x201D; <source>Expert Systems with Applications</source>, vol. <volume>182</volume>, no. <issue>5</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Wu</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Peng</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>A morphological image segmentation algorithm for circular overlapping cells</article-title>,&#x201D; <source>Intelligent Automation &#x0026; Soft Computing</source>, vol. <volume>32</volume>, no. <issue>1</issue>, pp. <fpage>301</fpage>&#x2013;<lpage>321</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Peng</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Xia</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Wu</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>A Multi-task network for cardiac magnetic resonance image segmentation and classification</article-title>,&#x201D; <source>Intelligent Automation &#x0026; Soft Computing</source>, vol. <volume>30</volume>, no. <issue>1</issue>, pp. <fpage>259</fpage>&#x2013;<lpage>272</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Xue</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Zhu</surname></string-name> and <string-name><given-names>J. Y.</given-names> <surname>Liang</surname></string-name></person-group>, &#x201C;<article-title>Adaptive crossover operator based multi-objective binary genetic algorithm for feature selection in classification</article-title>,&#x201D; <source>Knowledge-Based Systems</source>, vol. <volume>227</volume>, no. <issue>5</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>9</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Chouksey</surname></string-name>, <string-name><given-names>R. K.</given-names> <surname>Jha</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Sharma</surname></string-name></person-group>, &#x201C;<article-title>A fast technique for image segmentation based on two meta-heuristic algorithms</article-title>,&#x201D; <source>Multimedia Tools and Applications</source>, vol. <volume>79</volume>, no. <issue>27</issue>, pp. <fpage>19075</fpage>&#x2013;<lpage>19127</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Pare</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Bajaj</surname></string-name> and <string-name><given-names>G. K.</given-names> <surname>Singh</surname></string-name></person-group>, &#x201C;<article-title>Context sensitive multilevel thresholding using swarm based algorithms</article-title>,&#x201D; <source>IEEE/CAA Journal of Automatica Sinica</source>, vol. <volume>6</volume>, no. <issue>6</issue>, pp. <fpage>1471</fpage>&#x2013;<lpage>1486</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Song</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Jia</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Ma</surname></string-name></person-group>, &#x201C;<article-title>A chaotic electromagnetic field optimization algorithm based on fuzzy entropy for multilevel thresholding color image segmentation</article-title>,&#x201D; <source>Entropy</source>, vol. <volume>21</volume>, no. <issue>4</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>36</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Upadhyay</surname></string-name> and <string-name><given-names>J. K.</given-names> <surname>Chhabra</surname></string-name></person-group>, &#x201C;<article-title>Kapur&#x0027;s entropy based optimal multilevel image segmentation using crow search algorithm</article-title>,&#x201D; <source>Applied Soft Computing</source>, vol. <volume>97</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>15</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Singh</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Mittal</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Singh</surname></string-name></person-group>, &#x201C;<article-title>A multilevel thresholding algorithm using HDAFA for image segmentation</article-title>,&#x201D; <source>Soft Computing</source>, vol. <volume>25</volume>, pp. <fpage>10677</fpage>&#x2013;<lpage>10708</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E. H.</given-names> <surname>Houssein</surname></string-name>, <string-name><given-names>E. D.</given-names> <surname>Helmy</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Oliva</surname></string-name>, <string-name><given-names>A. A.</given-names> <surname>Elngar</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Shaban</surname></string-name></person-group>, &#x201C;<article-title>A novel black widow optimization algorithm for multilevel thresholding image segmentation</article-title>,&#x201D; <source>Expert Systems with Applications</source>, vol. <volume>167</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>25</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. K.</given-names> <surname>Bhandari</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Singh</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Shubham</surname></string-name></person-group>, &#x201C;<article-title>An efficient optimal multilevel image thresholding with electromagnetism-like mechanism</article-title>,&#x201D; <source>Multimedia Tools and Applications</source>, vol. <volume>78</volume>, pp. <fpage>35733</fpage>&#x2013;<lpage>35788</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Jian</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Qi</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Xu</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Modified discrete grey wolf optimizer algorithm for multilevel image thresholding</article-title>,&#x201D; <source>Computational Intelligence and Neuroscience</source>, vol. <volume>2017</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>16</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Raj</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Gautam</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Abdullah</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Zaini</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Mukhopadhyay</surname></string-name></person-group>, &#x201C;<article-title>Multi-level thresholding based on differential evolution and tsallis fuzzy entropy</article-title>,&#x201D; <source>Image and Vision Computing</source>, vol. <volume>91</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>14</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Kang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Guo</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Han</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Fuzzy multilevel image thresholding based on modified discrete grey wolf optimizer and local information aggregation</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>4</volume>, pp. <fpage>6438</fpage>&#x2013;<lpage>6450</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Jian</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Han</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>Fuzzy multilevel image thresholding based on modified quick artificial bee colony algorithm and local information aggregation</article-title>,&#x201D; <source>Mathematical Problems in Engineering</source>, vol. <volume>2016</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E. H.</given-names> <surname>Houssein</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Hussain</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Abualigah</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Elaziz</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Cuevas</surname></string-name></person-group>, &#x201C;<article-title>An improved opposition-based marine predators algorithm for global optimization and multilevel thresholding image segmentation</article-title>,&#x201D; <source>Knowledge-Based Systems</source>, vol. <volume>229</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>33</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Singh</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Mittal</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Thakur</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Singh</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Oliva</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Nature and biologically inspired image segmentation techniques</article-title>,&#x201D; <source>Archives of Computational Methods in Engineering</source>, vol. <volume>2021</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>28</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Xue</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Huang</surname></string-name> <etal>et al.,</etal></person-group> &#x201C;<article-title>Fuzzy multilevel image thresholding based on improved coyote optimization algorithm</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>9</volume>, pp. <fpage>33595</fpage>&#x2013;<lpage>33607</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>O.</given-names> <surname>Thinnukool</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Panityakul</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Bano</surname></string-name></person-group>, &#x201C;<article-title>Double encryption using trigonometric chaotic map and xor of an image</article-title>,&#x201D; <source>Computers, Materials &#x0026; Continua</source>, vol. <volume>69</volume>, no. <issue>3</issue>, pp. <fpage>3033</fpage>&#x2013;<lpage>3046</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Khairuzzaman</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Chaudhury</surname></string-name></person-group>, &#x201C;<article-title>Multilevel thresholding using grey wolf optimizer for image segmentation</article-title>,&#x201D; <source>Expert Systems with Applications</source>, vol. <volume>86</volume>, pp. <fpage>64</fpage>&#x2013;<lpage>76</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Shen</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Pan</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Fan</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Lei</surname></string-name></person-group>, &#x201C;<article-title>No-reference stereoscopic image quality assessment based on global and local content characteristics</article-title>,&#x201D; <source>Neurocomputing</source>, vol. <volume>424</volume>, pp. <fpage>132</fpage>&#x2013;<lpage>142</lpage>, <year>2021</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>