<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">IASC</journal-id>
<journal-id journal-id-type="nlm-ta">IASC</journal-id>
<journal-id journal-id-type="publisher-id">IASC</journal-id>
<journal-title-group>
<journal-title>Intelligent Automation &#x0026; Soft Computing</journal-title>
</journal-title-group>
<issn pub-type="epub">2326-005X</issn><issn pub-type="ppub">1079-8587</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1008</article-id>
<article-id pub-id-type="doi">10.32604/iasc.2021.01008</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A Fuzzy-Based Bio-Inspired Neural Network Approach for Target Search by Multiple Autonomous Underwater Vehicles in Underwater Environments</article-title><alt-title alt-title-type="left-running-head">A Fuzzy-Based Bio-Inspired Neural Network Approach for Target Search by Multiple Autonomous Underwater Vehicles in Underwater Environments</alt-title><alt-title alt-title-type="right-running-head">A Fuzzy-Based Bio-Inspired Neural Network Approach for Target Search by Multiple Autonomous Underwater Vehicles in Underwater Environments</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western">
<surname>Sun</surname>
<given-names>Aolin</given-names>
</name>
</contrib>
<contrib id="author-2" contrib-type="author" corresp="yes">
<name name-style="western">
<surname>Cao</surname>
<given-names>Xiang</given-names>
</name>
<email>cxeffort@126.com</email>
</contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western">
<surname>Xiao</surname>
<given-names>Xu</given-names>
</name>
</contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western">
<surname>Xu</surname>
<given-names>Liwen</given-names>
</name>
</contrib><aff><institution>School of Physics and Electronic Electrical Engineering, Huaiyin Normal University</institution>, <addr-line>Huaian, 223300</addr-line>, <country>China</country></aff>
</contrib-group><author-notes><corresp id="cor1">&#x002A;Corresponding Author: Xiang Cao. Email: <email>cxeffort@126.com</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-01-15">
<day>15</day>
<month>01</month>
<year>2021</year>
</pub-date>
<volume>27</volume>
<issue>2</issue>
<fpage>551</fpage>
<lpage>564</lpage>
<history>
<date date-type="received">
<day>14</day>
<month>10</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>06</day>
<month>11</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2021 Sun et al.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Sun et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_IASC_1008.pdf"></self-uri>
<abstract>
<p>An essential issue in a target search is safe navigation while quickly finding targets. To improve the efficiency of a target search and the smoothness of an AUV&#x2019;s (Autonomous Underwater Vehicle) trajectory, a fuzzy-based bio-inspired neural network approach is proposed in this paper. A bio-inspired neural network is applied to the multi-AUV target search, which can effectively plan search paths. At the same time, a fuzzy algorithm is introduced into the bio-inspired neural network to make the obstacle-avoidance trajectory of an AUV smoother. Unlike other algorithms that require repeated training to select parameters, the proposed approach obtains all required parameters without learning or training, and the model is not sensitive to parameter selection. Simulation and experiment results show that the proposed algorithm can quickly and safely search for targets in complex obstacle environments. Compared with the PSO (Particle Swarm Optimization) algorithm, the simulation results show that the proposed algorithm can control a multi-AUV system to complete multi-target search tasks with higher search efficiency and adaptability. At the same time, fuzzy obstacle avoidance improves the smoothness of the search trajectory.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Target search</kwd>
<kwd>multi-AUV</kwd>
<kwd>fuzzy approach</kwd>
<kwd>bio-inspired neural network</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Multi-AUV (multiple autonomous underwater vehicles) systems have many advantages over a single AUV: 1) By appropriately decomposing tasks, multi-AUVs can accomplish different sub-tasks in parallel, thereby increasing work efficiency [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-2">2</xref>]; 2) The members of the system can be designed as &#x201C;experts&#x201D; that complete a specific task, rather than &#x201C;generalists&#x201D; [<xref ref-type="bibr" rid="ref-3">3</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]; 3) Cooperation among members increases the redundancy and robustness of the scheme [<xref ref-type="bibr" rid="ref-5">5</xref>&#x2013;<xref ref-type="bibr" rid="ref-6">6</xref>], provides more solution options, and reduces system cost and complexity. Because of these advantages, many scholars have used multi-AUV systems for target search tasks, and a large number of multi-AUV target search strategies have been proposed.</p>
<p>The target search based on a behavioral strategy is a standard method in early studies. The behavioral approach is a heuristic method that endows a robot with a set of simple behaviors, such as searching along boundaries and avoiding obstacles. Target search tasks in complex environments can be accomplished through a hierarchical combination of these simple behaviors [<xref ref-type="bibr" rid="ref-7">7</xref>&#x2013;<xref ref-type="bibr" rid="ref-8">8</xref>]. Balch et al. [<xref ref-type="bibr" rid="ref-9">9</xref>] adopted a behavior-based heuristic approach to multi-robot target search tasks to solve map coverage problems. Magid et al. [<xref ref-type="bibr" rid="ref-10">10</xref>] analyzed the advantages of the behavioral strategy from the perspective of cost and benefit: robots based on this strategy need neither expensive positioning sensors nor scarce computing resources to calculate their precise positions, so robot cost is reduced. This behavior-based search strategy does not require the search path to be selected in advance; instead, it focuses on the overall behavior of multiple robots by randomly selecting directions. However, the algorithm neither guarantees complete coverage of the map nor avoids duplicated searching, resulting in an inefficient search. Moreover, there is no collaboration between the robots, which defeats the original purpose of using multiple robots for a target search [<xref ref-type="bibr" rid="ref-11">11</xref>&#x2013;<xref ref-type="bibr" rid="ref-12">12</xref>].</p>
<p>To increase cooperation among multiple robots, Yamauchi [<xref ref-type="bibr" rid="ref-13">13</xref>] proposed a boundary-based distributed multi-robot target search strategy. The algorithm defines the boundary as the frontier between the known open area and the unsearched area. During the search process, each robot continuously selects the nearest boundary point for environmental search until all reachable areas have been searched and the task is completed. Because the coordination of information between robots is limited in this strategy, several robots may move to the same boundary, causing collisions and repeated searches, which is inefficient. To further enhance the collaboration of multi-AUVs, Yoon et al. [<xref ref-type="bibr" rid="ref-14">14</xref>] proposed a synchronous search algorithm that can achieve a large-scale target search. AUVs exchange data through regular rendezvous to perform a collaborative search of targets. The algorithm is fault-tolerant: the search task can still be completed after some AUVs fail. However, this algorithm only considers an ideal two-dimensional environment without currents or obstacles, which reduces its practicability [<xref ref-type="bibr" rid="ref-15">15</xref>].</p>
<p>To improve the efficiency of the target search, Zolt et al. [<xref ref-type="bibr" rid="ref-16">16</xref>] proposed a multi-robot target search algorithm based on a market economy mechanism. A robot shares target information and calculates the cost of reaching the target based on its local map. The algorithm is distributed, robust, and efficient, but the cooperation of robots relies on explicit communication, which increases resource consumption, and the performance of the target search is significantly diminished when transmission is interrupted. Ferranti et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] proposed a method based on multi-robot self-search in an unknown environment. This algorithm constructs indirect communication between multiple robots, which avoids drawbacks such as unreliable communication links. The method does not require a robot to have advance knowledge of the environment and can coordinate the movement of multiple robots in terrain with different topological features. Cai et al. [<xref ref-type="bibr" rid="ref-18">18</xref>] and Hashemi et al. [<xref ref-type="bibr" rid="ref-19">19</xref>] proposed distributed self-organizing multi-robot target search algorithms based on particle swarm optimization (PSO). These methods transform the search for the optimal solution in an abstract solution space into the search of unsearched areas in the map, realizing a target search in unknown environments [<xref ref-type="bibr" rid="ref-20">20</xref>]. However, these three methods are not suitable for a large-scale target search.</p>
<p>To improve the efficiency of a multi-AUV target search in complex environments, Cao et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] combined a bio-inspired neurodynamic model with a velocity vector synthesis algorithm for a multi-AUV target search in environments with currents. This method not only completes the search task but also automatically avoids obstacles and overcomes the influence of currents on AUV navigation [<xref ref-type="bibr" rid="ref-22">22</xref>]. However, the algorithm does not consider safety when avoiding obstacles and does not apply to environments with multiple obstacles.</p>
<p>This paper studies the safe-navigation problem of a multi-AUV target search. To improve the safety of an AUV during obstacle avoidance, a fuzzy-based bio-inspired neural network approach (FBNN) is proposed. The bio-inspired neural network (BNN) algorithm plans an effective search path for an AUV; when the AUV meets obstacles, the fuzzy algorithm improves its navigation path, making the trajectory more reasonable, safer, and smoother. Simulation results show that, compared with PSO, the proposed approach can control multi-AUVs to search multiple targets with higher efficiency and adaptability, while fuzzy obstacle avoidance improves the safety of the AUV trajectory. Experimental results show that the proposed algorithm can be applied in real underwater environments.</p>
<p>The advantages of the algorithm can be summarized as follows: 1) The method&#x2019;s parameters require no learning or training, and the model is not sensitive to their selection; 2) Fuzzy obstacle avoidance improves track smoothness; and 3) A real-time, safety-aware navigation paradigm guides the AUV locally to plan more reasonable and safer trajectories.</p>
<p>The rest of this paper is organized as follows. The principles of the proposed algorithm are given in Section 2. Simulations of various situations are described in Section 3. A pool experiment is outlined in Section 4. Section 5 provides our conclusions.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Proposed Approach</title>
<p>This paper proposes a fuzzy-based bio-inspired neural network approach to realize a real-time multi-AUV target search task in an underwater environment with obstacles. A topologically organized bio-inspired neural network is constructed to represent the dynamic environment. Through the model&#x2019;s dynamic neural activity landscape, the target globally attracts the AUV, while the obstacles locally push the AUV away to avoid a collision. The AUV generates its search path to the targets autonomously by a steepest gradient descent rule. When encountering obstacles, AUVs move using the fuzzy obstacle-avoidance method. The flowchart of the approach is shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Flowchart of multi-AUV target search</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-1.png"/>
</fig>
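<p>As a minimal sketch of the path-generation step described above (the function name and the 8-neighbour grid layout are illustrative assumptions, not the paper&#x2019;s implementation), the steepest-gradient rule amounts to moving the AUV to the adjacent grid cell with the highest neural activity:</p>

```python
import numpy as np

def next_cell(u, pos):
    """Steepest-gradient move: return the 8-neighbour of `pos` whose
    neural activity in the landscape `u` is largest (illustrative sketch)."""
    r, c = pos
    best, best_val = pos, -np.inf
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip the current cell itself
            rr, cc = r + dr, c + dc
            if 0 <= rr < u.shape[0] and 0 <= cc < u.shape[1]:
                if u[rr, cc] > best_val:
                    best_val, best = u[rr, cc], (rr, cc)
    return best
```

<p>Because targets carry peak activity and obstacles carry strongly negative activity, repeatedly applying this rule draws the AUV toward targets while steering it around obstacles.</p>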
<sec id="s2_1">
<label>2.1</label>
<title>Bio-Inspired Neural Network</title>
<p>A bio-inspired neural network in underwater environments is established (as shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>), in which each neuron represents a grid cell. The information of the grid cell is described by neural activities. Each neuron is connected to its adjacent neurons, forming a network over which neural activity propagates (for simplicity, only the connections of the central neuron are shown; those of other neurons are omitted).</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Diagram of neural network</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-2.png"/>
</fig>
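<p>The lateral connections sketched in Fig. 2 can be illustrated with a distance-decaying weight. The gain <monospace>MU</monospace> and the reciprocal-distance form are assumptions for illustration only; the network fixes just the neighbourhood radius, with connections vanishing beyond a lateral distance of &#x221A;3:</p>

```python
import math

MU = 1.0  # connection gain (assumed value for illustration)

def weight(k, l):
    """Connection weight between grid cells k and l: assumed to decay
    with Euclidean distance |kl| and to vanish outside the
    sqrt(3)-neighbourhood used by the network."""
    d = math.dist(k, l)
    return MU / d if 0.0 < d <= math.sqrt(3) else 0.0
```

<p>Under this form, nearer neighbours excite each other more strongly, and a cell has no self-connection.</p>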
<p>The change rule of neuronal activity in the neural network is expressed as [<xref ref-type="bibr" rid="ref-23">23</xref>]:</p>
<p><disp-formula id="eqn-1">
<label>(1)</label>
<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-1.png"/><tex-math id="tex-eqn-1"><![CDATA[$$\displaystyle{{d{u_k}} \over {dt}} &#x003D; - A{u_k} &#x002B; (B - {u_k})({[{I_k}]^ &#x002B; } &#x002B; \sum\limits_{0 < |kl| \le \sqrt 3 }^{} {{w_{kl}}} {[{u_l}]^ + }) - (D + {u_k}){[{I_k}]^ - }$$]]></tex-math><mml:math id="mml-eqn-1" display="block"><mml:mstyle scriptlevel="0" displaystyle="true"><mml:mrow><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>A</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>&#x002B;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>B</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:msup><mml:mo stretchy="false">]</mml:mo><mml:mo>&#x002B;</mml:mo></mml:msup></mml:mrow><mml:mo>&#x002B;</mml:mo><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x003C;</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:msqrt><mml:mn>3</mml:mn></mml:msqrt></mml:mrow><mml:mrow></mml:mrow></mml:munderover><mml:mrow><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow><mml:msup><mml:mo stretchy="false">]</mml:mo><mml:mo>&#x002B;</mml:mo></mml:msup></mml:mrow><mml:mo 
stretchy="false">)</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>D</mml:mi><mml:mo>&#x002B;</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:msup><mml:mo stretchy="false">]</mml:mo><mml:mo>&#x2212;</mml:mo></mml:msup></mml:mrow></mml:mstyle></mml:math>
</alternatives></disp-formula></p>
<p>where <italic>u</italic><sub><italic>k</italic></sub> represents the activity value of the <italic>k</italic>-th neuron, <italic>u</italic><sub><italic>l</italic></sub> represents the activity value of other neurons connected to the <italic>k</italic>-th neuron, and <italic>I</italic><sub><italic>k</italic></sub> represents the external input of the grid cell, defined as [<xref ref-type="bibr" rid="ref-24">24</xref>]:</p>
<p><disp-formula id="eqn-2">
<label>(2)</label>
<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-2.png"/><tex-math id="tex-eqn-2"><![CDATA[$${I_k} = \begin{cases} 1, & \text{if it is a target} \\ -1, & \text{if it is an obstacle} \\ 0, & \text{otherwise} \end{cases}$$]]></tex-math><mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnspacing="1em" rowspacing="4pt" columnalign="left left"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mtext>if it is a target</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mtext>if it is an obstacle</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mtext>otherwise</mml:mtext></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math>
</alternatives></disp-formula></p>
<p><italic>A</italic>, <italic>B</italic>, and <italic>D</italic> are positive constants; <italic>A</italic> reflects the passive decay rate of neuron <italic>k</italic>&#x2019;s activity; <italic>B</italic> and <italic>D</italic> determine the upper and lower bounds of <inline-formula id="ieqn-1">
<alternatives><inline-graphic xlink:href="ieqn-1.png"/><tex-math id="tex-ieqn-1"><![CDATA[${u_k}$]]></tex-math><mml:math id="mml-ieqn-1"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula>, i.e., <inline-formula id="ieqn-2">
<alternatives><inline-graphic xlink:href="ieqn-2.png"/><tex-math id="tex-ieqn-2"><![CDATA[${u_k} \in [ - D,B]$]]></tex-math><mml:math id="mml-ieqn-2"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mi>D</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">]</mml:mo></mml:math>
</alternatives></inline-formula>. <inline-formula id="ieqn-3">
<alternatives><inline-graphic xlink:href="ieqn-3.png"/><tex-math id="tex-ieqn-3"><![CDATA[$|kl|$]]></tex-math><mml:math id="mml-ieqn-3"><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow></mml:math>
</alternatives></inline-formula> is the Euclidean distance between neuron <italic>k</italic> and its neighbor <italic>l</italic> in 3D space,</p>
<p><disp-formula id="eqn-3">
<label>(3)</label>
<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-3.png"/><tex-math id="tex-eqn-3"><![CDATA[$$|kl| = \sqrt {{{({x_k} - {x_l})}^2} + {{({y_k} - {y_l})}^2} + {{({z_k} - {z_l})}^2}}$$]]></tex-math><mml:math id="mml-eqn-3" display="block"><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:msqrt><mml:mrow><mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>&#x002B;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>&#x002B;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:msqrt></mml:math>
</alternatives></disp-formula></p>
<p>where (<italic>x</italic><sub><italic>k</italic></sub>, <italic>y</italic><sub><italic>k</italic></sub>, <italic>z</italic><sub><italic>k</italic></sub>) and (<italic>x</italic><sub><italic>l</italic></sub>, <italic>y</italic><sub><italic>l</italic></sub>, <italic>z</italic><sub><italic>l</italic></sub>) are the coordinates of the <italic>k</italic>-th and <italic>l</italic>-th neurons, respectively, in the 3D coordinate system.</p>
<p>In <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>, <italic>w</italic><sub><italic>kl</italic></sub> is the connection weight between neuron <italic>k</italic> and its neighbor <italic>l</italic>, which can be defined as [<xref ref-type="bibr" rid="ref-25">25</xref>]:</p>
<p><disp-formula id="eqn-4">
<label>(4)</label>
<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-4.png"/><tex-math id="tex-eqn-4"><![CDATA[$${w_{kl}} = f(\left| {kl} \right|) = \left\{ {\matrix{
   {\mu /\left| {kl} \right|} & {0 < \left| {kl} \right| < \sqrt 3 }  \cr 
   0 & {\left| {kl} \right| \ge \sqrt 3 }  \cr 
 } } \right.$$]]></tex-math><mml:math id="mml-eqn-4" display="block"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub><mml:mo>&#x003D;</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mi>k</mml:mi><mml:mi>l</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:mi>&#x03BC;</mml:mi><mml:mo>/</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mi>k</mml:mi><mml:mi>l</mml:mi><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:mtd><mml:mtd><mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x003C;</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mi>k</mml:mi><mml:mi>l</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x003C;</mml:mo><mml:msqrt><mml:mn>3</mml:mn></mml:msqrt></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mi>k</mml:mi><mml:mi>l</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:msqrt><mml:mn>3</mml:mn></mml:msqrt></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math>
</alternatives></disp-formula></p>
<p>where <inline-formula id="ieqn-4">
<alternatives><inline-graphic xlink:href="ieqn-4.png"/><tex-math id="tex-ieqn-4"><![CDATA[$\mu$]]></tex-math><mml:math id="mml-ieqn-4"><mml:mi>&#x03BC;</mml:mi></mml:math>
</alternatives></inline-formula> is a positive constant, and generally, <inline-formula id="ieqn-5">
<alternatives><inline-graphic xlink:href="ieqn-5.png"/><tex-math id="tex-ieqn-5"><![CDATA[$0 \le {w_{kl}} \le 1$]]></tex-math><mml:math id="mml-ieqn-5"><mml:mn>0</mml:mn><mml:mo>&#x2264;</mml:mo><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:mn>1</mml:mn></mml:math>
</alternatives></inline-formula>. As the connections between neurons are not directional, the connection weight coefficients are symmetric, i.e., <inline-formula id="ieqn-6">
<alternatives><inline-graphic xlink:href="ieqn-6.png"/><tex-math id="tex-ieqn-6"><![CDATA[${w_{kl}} = {w_{lk}}$]]></tex-math><mml:math id="mml-ieqn-6"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>l</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula>.</p>
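<p>As a concrete illustration of Eqs. (3) and (4), the following Python sketch computes the lateral connection weights on a unit 3D grid (the value of <italic>&#x03BC;</italic> and the coordinates are illustrative choices, not values from the paper):</p>

```python
import math

MU = 1.0          # positive constant mu in Eq. (4); illustrative value
R = math.sqrt(3)  # lateral connections reach only neighbors with |kl| < sqrt(3)

def distance(k, l):
    """Euclidean distance |kl| between neurons k and l (Eq. (3))."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(k, l)))

def weight(k, l):
    """Connection weight w_kl = mu/|kl| if 0 < |kl| < sqrt(3), else 0 (Eq. (4))."""
    d = distance(k, l)
    return MU / d if 0 < d < R else 0.0

# The weights are symmetric (w_kl = w_lk), and neurons at distance >= sqrt(3)
# are not connected:
assert weight((0, 0, 0), (1, 1, 0)) == weight((1, 1, 0), (0, 0, 0))
assert weight((0, 0, 0), (2, 0, 0)) == 0.0
```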
<p>It can be seen from <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref> that the activity values of the target neurons are positive and those of the obstacle neurons are negative. That is, some excitation signals come from external inputs, and others from the internal gains of interconnected neurons. If <inline-formula id="ieqn-7">
<alternatives><inline-graphic xlink:href="ieqn-7.png"/><tex-math id="tex-ieqn-7"><![CDATA[${I_k} \le 0$]]></tex-math><mml:math id="mml-ieqn-7"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:mn>0</mml:mn></mml:math>
</alternatives></inline-formula>, then the only excitatory input is <inline-formula id="ieqn-8">
<alternatives><inline-graphic xlink:href="ieqn-8.png"/><tex-math id="tex-ieqn-8"><![CDATA[$\sum\limits_{0 < \left| {kl} \right| \le \sqrt 3 }^{} {{w_{kl}}{{[{u_l}]}^ + }}$]]></tex-math><mml:math id="mml-ieqn-8"><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x003C;</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:msqrt><mml:mn>3</mml:mn></mml:msqrt></mml:mrow><mml:mrow></mml:mrow></mml:munderover><mml:mrow><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:mrow><mml:mo>&#x002B;</mml:mo></mml:msup></mml:mrow></mml:mrow></mml:math>
</alternatives></inline-formula>, which means there is no external excitatory input for neuron <italic>k</italic>, and all its excitation signals are transmitted through the neural network. In contrast, <inline-formula id="ieqn-9">
<alternatives><inline-graphic xlink:href="ieqn-9.png"/><tex-math id="tex-ieqn-9"><![CDATA[${[{I_k}]^ - }$]]></tex-math><mml:math id="mml-ieqn-9"><mml:mrow><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:msup><mml:mo stretchy="false">]</mml:mo><mml:mo>&#x2212;</mml:mo></mml:msup></mml:mrow></mml:math>
</alternatives></inline-formula> means that all the inhibitory inputs to neuron <italic>k</italic> are external; inhibitory signals are not transmitted between neurons. Thus, the positive activity of neurons has a global effect, propagating through the network as excitation signals, while the negative activity of neurons has only a local effect.</p>
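<p>To make the dynamics concrete, the following Python sketch performs forward-Euler steps of a shunting equation of the standard form described above, du<sub><italic>k</italic></sub>/dt = &#x2212;<italic>A</italic>u<sub><italic>k</italic></sub> + (<italic>B</italic> &#x2212; u<sub><italic>k</italic></sub>)([I<sub><italic>k</italic></sub>]<sup>+</sup> + &#x2211;w<sub><italic>kl</italic></sub>[u<sub><italic>l</italic></sub>]<sup>+</sup>) &#x2212; (<italic>D</italic> + u<sub><italic>k</italic></sub>)[I<sub><italic>k</italic></sub>]<sup>&#x2212;</sup>; this is an assumed reading of Eq. (1), and all constants and inputs are illustrative:</p>

```python
def shunting_step(u_k, I_k, neighbor_sum, A=10.0, B=1.0, D=1.0, dt=0.005):
    """One Euler step of the shunting model (assumed form of Eq. (1)).

    neighbor_sum is sum over l of w_kl * max(u_l, 0) for neighbors with |kl| < sqrt(3).
    """
    excitation = max(I_k, 0.0) + neighbor_sum   # [I_k]^+ plus lateral excitation
    inhibition = max(-I_k, 0.0)                 # [I_k]^-, external only
    du = -A * u_k + (B - u_k) * excitation - (D + u_k) * inhibition
    return u_k + dt * du

# A strong target input (I_k > 0) drives the activity toward the upper bound B,
# while the activity always stays inside [-D, B]:
u = 0.0
for _ in range(200):
    u = shunting_step(u, I_k=100.0, neighbor_sum=0.0)
print(round(u, 3))  # -> 0.909 (just below B = 1)
```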
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>3D Search Path Planning Model</title>
<p>We transform the search path planning problem of an AUV into one of finding its next navigation position. Only by accurately finding this position can an AUV quickly find unknown targets and avoid collisions with obstacles. The next navigation position must therefore be determined in conjunction with the specific dynamic environment and the AUV&#x2019;s previous and current positions. Hence, the study of AUV search path planning is transformed into the study of the neuron activity output values of the neural network, and the AUV&#x2019;s navigation position at the next moment is determined by the distribution of these values. In the target search mission, all of an AUV&#x2019;s movements are guided by the dynamic activity landscape of the neural network. The activity of each neuron is obtained from the shunting <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>. Under the influence of the targets and obstacles, the state of the workspace varies according to the dynamics of the neural network, and the motion of the AUV is determined by the landscape of the dynamic activity of the topologically organized neural network. An AUV&#x0027;s search path selection strategy can be written as [<xref ref-type="bibr" rid="ref-26">26</xref>]:</p>
<p><disp-formula id="eqn-5">
<label>(5)</label>
<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-5.png"/><tex-math id="tex-eqn-5"><![CDATA[$${P_n} \Leftarrow {u_{{P_n}}} = \max \{ {u_l},l = 1,2,...,M\}$$]]></tex-math><mml:math id="mml-eqn-5" display="block"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">&#x21D0;</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mo stretchy="false" fence="false">{</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mi>l</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mi>M</mml:mi><mml:mo stretchy="false" fence="false">}</mml:mo></mml:math>
</alternatives></disp-formula></p>
<p>where <italic>M</italic> represents the number of neurons adjacent to the <italic>k</italic>-th neuron, <italic>P</italic><sub><italic>n</italic></sub> represents the AUV&#x2019;s location at the next moment in the map, and <inline-formula id="ieqn-10">
<alternatives><inline-graphic xlink:href="ieqn-10.png"/><tex-math id="tex-ieqn-10"><![CDATA[${u_{{P_n}}}$]]></tex-math><mml:math id="mml-ieqn-10"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> is the highest activity value among the <italic>k</italic>-th neuron&#x2019;s neighbors. When an AUV selects a path, it compares the activity value of the neuron at its current location with those of its neighbors and chooses the one with the largest value as its next step. By repeating this process, the AUV keeps moving toward the targets.</p>
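<p>The greedy selection of Eq. (5) can be sketched in a few lines of Python (the grid and the activity landscape below are illustrative):</p>

```python
import itertools

def next_position(current, activity):
    """Pick the neighboring cell whose neuron has the highest activity (Eq. (5))."""
    x, y, z = current
    neighbors = [(x + dx, y + dy, z + dz)
                 for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3)
                 if (dx, dy, dz) != (0, 0, 0)
                 and (x + dx, y + dy, z + dz) in activity]
    return max(neighbors, key=lambda p: activity[p])

# Toy landscape on a 4x4x4 grid whose most active cell is the "target" (2, 0, 0):
activity = {(x, y, z): -((x - 2) ** 2 + y ** 2 + z ** 2)
            for x in range(4) for y in range(4) for z in range(4)}
pos = (0, 0, 0)
while activity[pos] < 0:          # repeat path selection until the peak is reached
    pos = next_position(pos, activity)
print(pos)  # -> (2, 0, 0)
```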
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Fuzzy Obstacle-Avoidance</title>
<p>Fuzzy obstacle-avoidance rules provide different linear and angular velocities for a single AUV when it faces obstacles, which improves the smoothness of the AUV&#x2019;s trajectory.</p>
<p>The structure of the fuzzy control module embedded in the control system is shown in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>. It has four main parts [<xref ref-type="bibr" rid="ref-27">27</xref>]:</p>
<p>(1)&#x2002;The fuzzification interface converts crisp inputs to membership values and compares them to the rules in the rule library.</p>
<p>(2)&#x2002;The rule base contains rules based on common knowledge and experience.</p>
<p>(3)&#x2002;Decision logic evaluates the current situation, selects the appropriate fuzzy rules, and converts the fuzzy input to fuzzy output.</p>
<p>(4)&#x2002;The defuzzification interface converts the fuzzy output to a non-fuzzy instruction that the AUV recognizes.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Fuzzy controller architecture</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-3.png"/>
</fig>
<p>The fuzzy controller has the characteristics of intelligence and real-time performance. The output <inline-formula id="ieqn-11">
<alternatives><inline-graphic xlink:href="ieqn-11.png"/><tex-math id="tex-ieqn-11"><![CDATA[$Y(t)$]]></tex-math><mml:math id="mml-ieqn-11"><mml:mi>Y</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
</alternatives></inline-formula> of the system is fed back to the fuzzy controller and compared to the reference input <inline-formula id="ieqn-12">
<alternatives><inline-graphic xlink:href="ieqn-12.png"/><tex-math id="tex-ieqn-12"><![CDATA[$R(t)$]]></tex-math><mml:math id="mml-ieqn-12"><mml:mi>R</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
</alternatives></inline-formula>. The input <inline-formula id="ieqn-13">
<alternatives><inline-graphic xlink:href="ieqn-13.png"/><tex-math id="tex-ieqn-13"><![CDATA[$U(t)$]]></tex-math><mml:math id="mml-ieqn-13"><mml:mi>U</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
</alternatives></inline-formula> is then generated by the fuzzy controller to meet the requirements of the underwater robot. The inputs and outputs of the fuzzy controller are shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Fuzzy controller</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-4.png"/>
</fig>
<p>There are two fuzzy inputs, <inline-formula id="ieqn-14">
<alternatives><inline-graphic xlink:href="ieqn-14.png"/><tex-math id="tex-ieqn-14"><![CDATA[$v$]]></tex-math><mml:math id="mml-ieqn-14"><mml:mi>v</mml:mi></mml:math>
</alternatives></inline-formula> and <inline-formula id="ieqn-15">
<alternatives><inline-graphic xlink:href="ieqn-15.png"/><tex-math id="tex-ieqn-15"><![CDATA[${\alpha _o}$]]></tex-math><mml:math id="mml-ieqn-15"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>o</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> in the fuzzy controller, where <inline-formula id="ieqn-16">
<alternatives><inline-graphic xlink:href="ieqn-16.png"/><tex-math id="tex-ieqn-16"><![CDATA[$v$]]></tex-math><mml:math id="mml-ieqn-16"><mml:mi>v</mml:mi></mml:math>
</alternatives></inline-formula> is the linear velocity of the AUV, and <inline-formula id="ieqn-17">
<alternatives><inline-graphic xlink:href="ieqn-17.png"/><tex-math id="tex-ieqn-17"><![CDATA[${\alpha _o}$]]></tex-math><mml:math id="mml-ieqn-17"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>o</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> is the angle of attack between the AUV and the obstacle. This design represents the linear-velocity fuzzy terms as VS (velocity small), VM (velocity middle), and VL (velocity large). The angular fuzzy terms are expressed as ANS (angle negative small), ANL (angle negative large), APS (angle positive small), and APL (angle positive large). The membership functions of these two fuzzy inputs are shown in <xref ref-type="fig" rid="fig-5">Fig. 5</xref> [<xref ref-type="bibr" rid="ref-28">28</xref>].</p>
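<p>A triangular membership function of the shape plotted in <xref ref-type="fig" rid="fig-5">Fig. 5</xref> can be sketched as follows (the breakpoints are illustrative, not read from the figure):</p>

```python
def tri(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c], zero elsewhere."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# E.g., a "VM" (velocity middle) term peaking at the center of a normalized
# speed universe [0, 1]:
print(tri(0.50, 0.0, 0.5, 1.0))  # -> 1.0
print(tri(0.25, 0.0, 0.5, 1.0))  # -> 0.5
```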
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Membership functions of inputs (a) Membership function of <inline-formula id="ieqn-18">
<alternatives><inline-graphic xlink:href="ieqn-18.png"/><tex-math id="tex-ieqn-18"><![CDATA[${\mu _v}$]]></tex-math><mml:math id="mml-ieqn-18"><mml:mrow><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mi>v</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> (b) Membership function of <inline-formula id="ieqn-19">
<alternatives><inline-graphic xlink:href="ieqn-19.png"/><tex-math id="tex-ieqn-19"><![CDATA[${\mu _{{\alpha _o}}}$]]></tex-math><mml:math id="mml-ieqn-19"><mml:mrow><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mi>o</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula></title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-5.png"/>
</fig>
<p>The AUV&#x2019;s motion is adjusted by the output command of the fuzzy controller, which has two fuzzy outputs: <inline-formula id="ieqn-20">
<alternatives><inline-graphic xlink:href="ieqn-20.png"/><tex-math id="tex-ieqn-20"><![CDATA[${\omega _e}$]]></tex-math><mml:math id="mml-ieqn-20"><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> is the desired angular velocity of the AUV, and <inline-formula id="ieqn-21">
<alternatives><inline-graphic xlink:href="ieqn-21.png"/><tex-math id="tex-ieqn-21"><![CDATA[${v_e}$]]></tex-math><mml:math id="mml-ieqn-21"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> is its desired linear velocity. To effectively avoid obstacles, <inline-formula id="ieqn-22">
<alternatives><inline-graphic xlink:href="ieqn-22.png"/><tex-math id="tex-ieqn-22"><![CDATA[${\omega _e}$]]></tex-math><mml:math id="mml-ieqn-22"><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> has five fuzzy items, represented as NL (negative large), NS (negative small), Z (zero), PS (positive small), and PL (positive large). The fuzzy terms of <inline-formula id="ieqn-23">
<alternatives><inline-graphic xlink:href="ieqn-23.png"/><tex-math id="tex-ieqn-23"><![CDATA[${v_e}$]]></tex-math><mml:math id="mml-ieqn-23"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> are similarly represented as VS (velocity small), VM (velocity middle), and VL (velocity large), as shown in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>. For example, if the AUV is required to turn right, then <inline-formula id="ieqn-24">
<alternatives><inline-graphic xlink:href="ieqn-24.png"/><tex-math id="tex-ieqn-24"><![CDATA[$\omega$]]></tex-math><mml:math id="mml-ieqn-24"><mml:mi>&#x03C9;</mml:mi></mml:math>
</alternatives></inline-formula> would have the fuzzy term PS or PL, depending on the input [<xref ref-type="bibr" rid="ref-29">29</xref>].</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Membership functions of outputs (a) Membership function of <inline-formula id="ieqn-25">
<alternatives><inline-graphic xlink:href="ieqn-25.png"/><tex-math id="tex-ieqn-25"><![CDATA[${\mu _{{\omega _e}}}$]]></tex-math><mml:math id="mml-ieqn-25"><mml:mrow><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> (b) Membership function of <inline-formula id="ieqn-26">
<alternatives><inline-graphic xlink:href="ieqn-26.png"/><tex-math id="tex-ieqn-26"><![CDATA[${\mu _{{v_e}}}$]]></tex-math><mml:math id="mml-ieqn-26"><mml:mrow><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula></title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-6.png"/>
</fig>
<p>The center-of-gravity algorithm is used for defuzzification, converting the fuzzy output to a motion command that controls the motion of the AUV. The fuzzy controller can effectively guide the AUV around obstacles and smooth its motion trajectory [<xref ref-type="bibr" rid="ref-30">30</xref>].</p>
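<p>Center-of-gravity defuzzification reduces to a weighted average over a discretized output universe; a minimal sketch (the universe and membership values are illustrative):</p>

```python
def centroid(xs, mu):
    """Center-of-gravity defuzzification: sum(mu*x) / sum(mu)."""
    den = sum(mu)
    return sum(m * x for x, m in zip(xs, mu)) / den if den else 0.0

# Aggregated fuzzy output for an angular command (illustrative):
omega_universe = [-1.0, -0.5, 0.0, 0.5, 1.0]
membership     = [0.0, 0.2, 0.6, 0.2, 0.0]
print(centroid(omega_universe, membership))  # -> 0.0 (symmetric output)
```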
<p>The main task of the rule base is to produce a reasonable system output. The rules in <xref ref-type="table" rid="table-1">Tab. 1</xref> cover all possible AUV obstacle-avoidance scenarios. Each &#x201C;IF&#x201D; condition contains sub-conditions on <inline-formula id="ieqn-27">
<alternatives><inline-graphic xlink:href="ieqn-27.png"/><tex-math id="tex-ieqn-27"><![CDATA[${\alpha _{ o}}$]]></tex-math><mml:math id="mml-ieqn-27"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mrow><mml:mi mathvariant="normal">o</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula> and <inline-formula id="ieqn-28">
<alternatives><inline-graphic xlink:href="ieqn-28.png"/><tex-math id="tex-ieqn-28"><![CDATA[$v$]]></tex-math><mml:math id="mml-ieqn-28"><mml:mi>v</mml:mi></mml:math>
</alternatives></inline-formula>, and all fuzzy sets are derived from experience and common knowledge.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Rulebase for obstacle-avoidance</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr><th rowspan="2">IF</th>
<th><inline-formula id="ieqn-29">
<alternatives><inline-graphic xlink:href="ieqn-29.png"/><tex-math id="tex-ieqn-29"><![CDATA[${\alpha _{\rm o}}$]]></tex-math><mml:math id="mml-ieqn-29"><mml:mrow><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mrow><mml:mi mathvariant="normal">o</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula></th>
<th>ANL</th>
<th>ANL</th>
<th>ANS</th>
<th>ANS</th>
<th>APS</th>
<th>APS</th>
<th>APL</th>
<th>APL</th>
<th>ANS</th>
<th>ANL</th>
<th>APS</th>
<th>APL</th>
</tr>
<tr>
<th><inline-formula id="ieqn-30">
<alternatives><inline-graphic xlink:href="ieqn-30.png"/><tex-math id="tex-ieqn-30"><![CDATA[$v$]]></tex-math><mml:math id="mml-ieqn-30"><mml:mi>v</mml:mi></mml:math>
</alternatives></inline-formula></th>
<th>VS</th>
<th>VM</th>
<th>VS</th>
<th>VM</th>
<th>VS</th>
<th>VM</th>
<th>VS</th>
<th>VM</th>
<th>VL</th>
<th>VL</th>
<th>VL</th>
<th>VL</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">THEN</td>
<td><inline-formula id="ieqn-31">
<alternatives><inline-graphic xlink:href="ieqn-31.png"/><tex-math id="tex-ieqn-31"><![CDATA[${\omega _e}$]]></tex-math><mml:math id="mml-ieqn-31"><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula></td>
<td>Z</td>
<td>PS</td>
<td>PL</td>
<td>PL</td>
<td>NL</td>
<td>NL</td>
<td>Z</td>
<td>NS</td>
<td>PL</td>
<td>PS</td>
<td>NL</td>
<td>NS</td>
</tr>
<tr>
<td><inline-formula id="ieqn-32">
<alternatives><inline-graphic xlink:href="ieqn-32.png"/><tex-math id="tex-ieqn-32"><![CDATA[${v_e}$]]></tex-math><mml:math id="mml-ieqn-32"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:msub></mml:mrow></mml:math>
</alternatives></inline-formula></td>
<td>VS</td>
<td>VM</td>
<td>VL</td>
<td>VM</td>
<td>VL</td>
<td>VM</td>
<td>VS</td>
<td>VM</td>
<td>VS</td>
<td>VL</td>
<td>VS</td>
<td>VL</td>
</tr>
</tbody>
</table>
</table-wrap>
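<p>The twelve rules of Tab. 1 map the fuzzy terms of the inputs directly to the fuzzy terms of the outputs, so the rule base can be transcribed as a simple lookup table; a Python sketch:</p>

```python
# IF (angle-of-attack term, speed term) THEN (angular command, speed command),
# transcribed from Tab. 1.
RULES = {
    ("ANL", "VS"): ("Z",  "VS"), ("ANL", "VM"): ("PS", "VM"),
    ("ANS", "VS"): ("PL", "VL"), ("ANS", "VM"): ("PL", "VM"),
    ("APS", "VS"): ("NL", "VL"), ("APS", "VM"): ("NL", "VM"),
    ("APL", "VS"): ("Z",  "VS"), ("APL", "VM"): ("NS", "VM"),
    ("ANS", "VL"): ("PL", "VS"), ("ANL", "VL"): ("PS", "VL"),
    ("APS", "VL"): ("NL", "VS"), ("APL", "VL"): ("NS", "VL"),
}

def fire(alpha_term, v_term):
    """Return the (omega_e, v_e) fuzzy terms for one rule of Tab. 1."""
    return RULES[(alpha_term, v_term)]

print(fire("APS", "VS"))  # -> ('NL', 'VL')
```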
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Simulation Studies</title>
<p>The practicability of the algorithm was verified by simulating 3D multi-AUV target search under static and dynamic conditions in MATLAB R2011a. The underwater search space in the simulation experiments was set to 100 &#x00D7; 100 &#x00D7; 100. AUVs, targets, and obstacles were randomly distributed in the search space before the search task began. Each AUV knew only the number of targets and the environmental boundaries. AUVs and targets could move within the search space: the AUVs moved according to the proposed algorithm, and each target moved randomly until it was discovered by an AUV. When all the targets had been found, the multi-AUV target search ended.</p>
<sec id="s3_1">
<label>3.1</label>
<title>Static Targets Search</title>
<p>To test the performance of the proposed algorithm, the search for static targets by multiple AUVs was simulated. The simulation consisted of three targets, three AUVs, and several obstacles in an underwater environment. The targets had initial positions (41, 88, 76), (74, 72, 18), and (94, 58, 82), and the three AUVs had initial positions (98, 34, 25), (32, 11, 97), and (38, 94, 27), as shown in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>. At the beginning of the search task, since the targets affected the entire search area through neural transmission, the activity of each neuron could be derived from the shunting <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>. When an AUV selected its path, it compared the activity value of the neuron at its current location with those of its neighbors and selected the neuron with the largest value as its next step. In the proposed algorithm, the targets and obstacles act, respectively, as excitatory and inhibitory inputs to the neural network. By repeating this path selection, the AUVs moved toward the targets and could bypass obstacles to avoid collisions. At the same time, through fuzzy obstacle-avoidance, the AUV search paths became smoother, and the AUVs made no sharp turns when avoiding obstacles. As shown in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>, the static targets <italic>T1</italic>, <italic>T2</italic>, and <italic>T3</italic> were found by AUVs <italic>R3</italic>, <italic>R1</italic>, and <italic>R2</italic>, respectively, which showed that the proposed algorithm could realize the joint search of multiple static targets.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Initial state of static targets</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-7.png"/>
</fig>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Search process with static targets</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-8.png"/>
</fig>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Dynamic Targets Search</title>
<p>In the second simulation, multiple AUVs searched for multiple dynamic targets. Dynamic targets undoubtedly make the search task more difficult, but the proposed algorithm handles them as well. Because the activity values of the neurons at target positions are, by definition, positive, the targets globally attract AUVs through the dynamic neural activity landscape of the model. It can be seen from <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref> that when a target&#x2019;s position changes, the activity values of the neurons adjacent to the AUV change accordingly. Through these changing activity values, each AUV obtains real-time information on the changing position of a moving target, and the algorithm continuously adjusts the search path according to the evolution of the adjacent neural activity values.</p>
<p><xref ref-type="fig" rid="fig-9">Figs. 9</xref> and <xref ref-type="fig" rid="fig-10">10</xref> show the process of three AUVs searching for three dynamic targets. The targets moved randomly, and the AUVs moved according to the proposed algorithm. <xref ref-type="fig" rid="fig-9">Fig. 9</xref> shows the initial positions of the AUVs, targets, and obstacles. <xref ref-type="fig" rid="fig-10">Fig. 10</xref> shows the search trajectories, where the dotted and solid lines indicate the tracks of the targets and the trajectories of the AUVs, respectively. The simulation results showed that the algorithm realized the joint search for multiple dynamic targets.</p>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Initial state of dynamic targets</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-9.png"/>
</fig>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Search process with dynamic targets</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-10.png"/>
</fig>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>AUV Break Down</title>
<p>An AUV may experience mechanical failure when searching in real 3D underwater environments. Whether the target search can still be completed when some AUVs fail is an important measure of the algorithm&#x2019;s fault tolerance. We simulated a target search in the event of an AUV failure, in which four AUVs were to search for the dynamic targets. Initially, all four AUVs searched for targets. After a while, one AUV failed while the others kept working, as shown in <xref ref-type="fig" rid="fig-11">Fig. 11a</xref>. Although <italic>R1</italic> failed, the team continued to search. After target <italic>T1</italic> was detected, its activity value was set to 0. Since target <italic>T4</italic> had not been found, its activity value remained 1, and <italic>R2</italic>, <italic>R3</italic>, and <italic>R4</italic> continued to search in its direction. As shown in <xref ref-type="fig" rid="fig-11">Fig. 11b</xref>, <italic>R3</italic> detected target <italic>T4</italic>. The AUVs found all the targets in the given space, and the search task ended. The simulation showed that the algorithm could complete a search task despite an AUV mechanical failure, demonstrating its good fault tolerance.</p>
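The fault-tolerance mechanism described above (a detected target's activity set to 0, an undetected target's kept at 1) amounts to simple bookkeeping that is independent of any single AUV. The sketch below is hypothetical: the roster of targets and AUVs and the dictionary layout are illustrative, not the paper's data structures.

```python
# Hypothetical bookkeeping for the team search: a detected target's
# activity value is set to 0 so it stops attracting AUVs, while an
# undetected target keeps activity 1.
target_activity = {"T1": 1.0, "T2": 1.0, "T3": 1.0, "T4": 1.0}
working_auvs = {"R1", "R2", "R3", "R4"}

def mark_detected(name):
    """A found target no longer excites the neural network."""
    target_activity[name] = 0.0

def remaining_targets():
    return {t for t, a in target_activity.items() if a > 0}

# Mid-search: R1 breaks down, and T1 has already been detected.
working_auvs.discard("R1")
mark_detected("T1")

# The search continues as long as working AUVs remain and some
# target still has positive activity to attract them.
search_continues = bool(working_auvs) and bool(remaining_targets())
```

Because the attraction comes from the target activities rather than from any particular vehicle, removing a failed AUV leaves the activity landscape intact and the surviving team keeps searching.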
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Search process when one AUV breaks down (a) One AUV breaks down (b) Final trajectories of entire search process</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-11.png"/>
</fig>
</sec>
<sec id="s3_4">
<label>3.4</label>
<title>Performance Comparison of Different Algorithms</title>
<p>Compared to commonly used algorithms such as particle swarm optimization (PSO), the proposed algorithm is expected to improve the efficiency of multi-AUV collaborative search. The PSO algorithm iteratively improves a population of candidate solutions: it randomly generates particles, evaluates their fitness, and tracks each particle&#x2019;s personal best position along with the global best position of the swarm.</p>
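The PSO baseline can be sketched as follows. This is a generic, illustrative PSO, not the exact comparison implementation: the swarm size, inertia weight w, and acceleration coefficients c1 and c2 are assumed values, and the fitness is simply the distance to the nearest target, which carries no obstacle information.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_search(fitness, dim=3, n_particles=20, iters=100,
               w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 100.0)):
    """Minimal PSO: each particle tracks its personal best, the swarm
    tracks a global best, and velocities blend both attractions."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < fitness(gbest):
            gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Target coordinates as in the comparative simulation.
targets = np.array([[41.0, 88.0, 76.0], [78.0, 78.0, 6.0], [80.0, 37.0, 58.0]])

def dist_to_nearest(p):
    return np.linalg.norm(targets - p, axis=1).min()

best = pso_search(dist_to_nearest)
```

The swarm converges on one of the targets, but since the fitness never penalizes proximity to obstacles, nothing in the update steers particles around them.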
<p>The comparative simulation was of three AUVs searching for three targets in an underwater environment with obstacles. The targets had initial positions (41, 88, 76), (78, 78, 6), and (80, 37, 58), and the AUVs were randomly distributed at positions (38, 94, 27), (58, 21, 12), and (93, 55, 83). <xref ref-type="fig" rid="fig-12">Figs. 12a</xref> and <xref ref-type="fig" rid="fig-12">12b</xref> show the search processes of the two algorithms. <xref ref-type="fig" rid="fig-12">Fig. 12</xref> shows that the search path of the proposed algorithm was shorter than that of the PSO algorithm for each AUV. This is because the fitness function of PSO depends only on the distance to the targets, so in complex situations the PSO algorithm does not adequately guide an AUV around obstacles. The simulation results showed the proposed algorithm&#x2019;s greater efficiency and adaptability compared to the PSO algorithm.</p>
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>Searching process with different algorithms (a) Proposed algorithm (b) PSO algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-12.png"/>
</fig>
<p>To compare trajectory smoothness, <xref ref-type="fig" rid="fig-13">Fig. 13</xref> shows the trajectories of the two algorithms when avoiding obstacles. <xref ref-type="fig" rid="fig-13">Figs. 13a</xref> and <xref ref-type="fig" rid="fig-13">13b</xref> show the obstacle-avoidance results of the proposed algorithm and the PSO algorithm, respectively. The fuzzy obstacle avoidance of the proposed algorithm provides better trajectory smoothness than the PSO control method: an AUV avoids sharp turns when it detects obstacles.</p>
<fig id="fig-13">
<label>Figure 13</label>
<caption>
<title>Trajectory comparison for obstacle avoidance (a) Proposed algorithm (b) PSO algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-13.png"/>
</fig>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Experiment</title>
<p>We used the Neptune-1 AUV model to test the performance of the proposed algorithm in a real pool environment (see <xref ref-type="fig" rid="fig-14">Fig. 14</xref>). The model was based on a remote-controlled submarine produced by Thunder-Tiger; we converted it into an AUV by adding a positioning module and rebuilding the remote-control system. The AUV thruster drove it at a speed of 2 knots (1.08 kN). Since underwater acoustic communication is not yet mature, the experiment was carried out on the water&#x2019;s surface, with the AUVs communicating over WiFi.</p>
<fig id="fig-14">
<label>Figure 14</label>
<caption>
<title>AUV model</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-14.png"/>
</fig>
<p>The AUV position information was obtained through a water-surface control platform, and the search track of each AUV was plotted. <xref ref-type="fig" rid="fig-15">Fig. 15</xref> shows the target search experiment in the pool with an obstacle; we placed an ROV (remotely operated vehicle) in the pool as the obstacle. At the beginning of the experiment, the two AUVs searched in different directions to avoid overlapping search areas. Each AUV then searched for the target according to the proposed algorithm. When encountering the obstacle, the AUVs moved using the fuzzy obstacle-avoidance method, which improved the smoothness and safety of the trajectory. Finally, the two AUVs completed the search for the target, and the search path was similar to that obtained in our simulations. The experiment showed that the proposed algorithm was effective for target search in a pool with an obstacle.</p>
<fig id="fig-15">
<label>Figure 15</label>
<caption>
<title>Pool experiment with an obstacle</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-15.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-16">Fig. 16</xref> shows the target search experiment with two AUVs in the pool with multiple obstacles. Although more obstacles were added to the environment, the AUVs could safely avoid them under the guidance of the proposed algorithm, and <italic>R1</italic> finally found the target. The proposed algorithm could thus still complete the target search task in a complex environment, and the search path remained smooth.</p>
<fig id="fig-16">
<label>Figure 16</label>
<caption>
<title>Pool experiment with multiple obstacles</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-16.png"/>
</fig>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion</title>
<p>A bio-inspired neural network and a fuzzy obstacle-avoidance method were combined to realize the collaborative search for targets in underwater environments with obstacles. The bio-inspired neural network algorithm, with appropriate cooperation rules, could handle the target search tasks; it required neither an explicit environment model nor trial and error to select parameters. Because of the obstacles in underwater environments, a fuzzy obstacle-avoidance algorithm was introduced to optimize the search path. Simulations and experiments showed that the integrated algorithm enabled multi-AUV teams to complete search tasks and performed better than other algorithms. Although the algorithm has certain advantages, some factors have not been fully considered, such as ocean currents and the shape of an AUV. Further research is needed to obtain algorithms better matched to real underwater environments.</p>
</sec>
</body>
<back><fn-group>
<fn fn-type="other">
<p><bold>Funding Statement:</bold> This project is supported by the National Natural Science Foundation of China (61773177), the Natural Science Foundation of Jiangsu Province (BK20171270), Jiangsu Undergraduate Training Program for Innovation and Entrepreneurship (201810323001Z, 201910323054Y).</p>
</fn>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1">
<label>1</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Q.</given-names> 
<surname>Li</surname></string-name>, <string-name>
<given-names>Y. Y.</given-names> 
<surname>Ben</surname></string-name>, <string-name>
<given-names>S. M.</given-names> 
<surname>Naqvi</surname></string-name>, <string-name>
<given-names>J. A.</given-names> 
<surname>Neasham</surname></string-name> and <string-name>
<given-names>J. A.</given-names> 
<surname>Chambers</surname></string-name></person-group>, &#x201C;
<article-title>Robust student&#x2019;s <italic>t</italic>-based cooperative navigation for autonomous underwater vehicles</article-title>,&#x201D; <source>IEEE Transactions on Instrumentation and Measurement</source>, vol. <volume>67</volume>, no. <issue>8</issue>, pp. <fpage>1762</fpage>&#x2013;<lpage>1777</lpage>, <year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-2">
<label>2</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>X.</given-names> 
<surname>Cao</surname></string-name> and <string-name>
<given-names>D. Q.</given-names> 
<surname>Zhu</surname></string-name>
</person-group>, &#x201C;
<article-title>Multi-AUV task assignment and path planning with ocean current based on biological inspired self-organizing map and velocity synthesis algorithm</article-title>,&#x201D; 
<source>Intelligent Automation &#x0026; Soft Computing</source>, vol. 
<volume>23</volume>, no. 
<issue>1</issue>, pp. 
<fpage>31</fpage>&#x2013;
<lpage>39</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-3">
<label>3</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>X.</given-names> 
<surname>Li</surname></string-name> and <string-name>
<given-names>D. Q.</given-names> 
<surname>Zhu</surname></string-name>
</person-group>, &#x201C;
<article-title>An adaptive SOM neural network method for distributed formation control of a group of AUVs</article-title>,&#x201D; 
<source>IEEE Transactions on Industrial Electronics</source>, vol. 
<volume>65</volume>, no. 
<issue>10</issue>, pp. 
<fpage>8260</fpage>&#x2013;
<lpage>8270</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-4">
<label>4</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>X.</given-names> 
<surname>Cao</surname></string-name>, <string-name>
<given-names>H. C.</given-names> 
<surname>Yu</surname></string-name> and <string-name>
<given-names>H. B.</given-names> 
<surname>Sun</surname></string-name>
</person-group>, &#x201C;
<article-title>Dynamic task assignment for multi-AUV cooperative hunting</article-title>,&#x201D; 
<source>Intelligent Automation &#x0026; Soft Computing</source>, vol. 
<volume>25</volume>, no. 
<issue>1</issue>, pp. 
<fpage>25</fpage>&#x2013;
<lpage>34</lpage>, 
<year>2019</year>.</mixed-citation>
</ref>
<ref id="ref-5">
<label>5</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>X.</given-names> 
<surname>Xiang</surname></string-name>, <string-name>
<given-names>C.</given-names> 
<surname>Yu</surname></string-name>, <string-name>
<given-names>L.</given-names> 
<surname>Lapierre</surname></string-name>, <string-name>
<given-names>J.</given-names> 
<surname>Zhang</surname></string-name> and <string-name>
<given-names>Q.</given-names> 
<surname>Zhang</surname></string-name>
</person-group>, &#x201C;
<article-title>Survey on fuzzy-logic-based guidance and control of marine surface vehicles and underwater vehicles</article-title>,&#x201D; 
<source>International Journal of Fuzzy Systems</source>, vol. 
<volume>20</volume>, no. 
<issue>2</issue>, pp. 
<fpage>572</fpage>&#x2013;
<lpage>586</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-6">
<label>6</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>S.</given-names> 
<surname>MahmoudZadeh</surname></string-name>, <string-name>
<given-names>A. M.</given-names> 
<surname>Yazdani</surname></string-name>, <string-name>
<given-names>K.</given-names> 
<surname>Sammut</surname></string-name> and <string-name>
<given-names>D. M. W.</given-names> 
<surname>Powers</surname></string-name>
</person-group>, &#x201C;
<article-title>Online path planning for AUV rendezvous in dynamic cluttered undersea environment using evolutionary algorithms</article-title>,&#x201D; 
<source>Applied Soft Computing</source>, vol. 
<volume>70</volume>, pp. 
<fpage>929</fpage>&#x2013;
<lpage>945</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-7">
<label>7</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>W. Y.</given-names> 
<surname>Gan</surname></string-name> and <string-name>
<given-names>D. Q.</given-names> 
<surname>Zhu</surname></string-name>
</person-group>, &#x201C;
<article-title>Complete coverage belief function path planning algorithm of autonomous underwater vehicle based on behavior strategy</article-title>,&#x201D; 
<source>Journal of System Simulation</source>, vol. 
<volume>30</volume>, no. 
<issue>5</issue>, pp. 
<fpage>1857</fpage>&#x2013;
<lpage>1868</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-8">
<label>8</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>C. Y.</given-names> 
<surname>Wei</surname></string-name>, <string-name>
<given-names>K. V.</given-names> 
<surname>Hindriks</surname></string-name> and <string-name>
<given-names>C. M.</given-names> 
<surname>Jonker</surname></string-name>
</person-group>, &#x201C;
<article-title>Dynamic task allocation for multi-robot search and retrieval tasks</article-title>,&#x201D; 
<source>Applied Intelligence</source>, vol. 
<volume>45</volume>, no. 
<issue>2</issue>, pp. 
<fpage>383</fpage>&#x2013;
<lpage>401</lpage>, 
<year>2016</year>.</mixed-citation>
</ref>
<ref id="ref-9">
<label>9</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>T.</given-names> 
<surname>Balch</surname></string-name> and <string-name>
<given-names>R. C.</given-names> 
<surname>Arkin</surname></string-name>
</person-group>, &#x201C;
<article-title>Communication in reactive multiagent robotic systems</article-title>,&#x201D; 
<source>Autonomous Robots</source>, vol. 
<volume>1</volume>, no. 
<issue>1</issue>, pp. 
<fpage>27</fpage>&#x2013;
<lpage>52</lpage>, 
<year>1994</year>.</mixed-citation>
</ref>
<ref id="ref-10">
<label>10</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>E.</given-names> 
<surname>Magid</surname></string-name>, <string-name>
<given-names>T.</given-names> 
<surname>Tsubouchi</surname></string-name> and <string-name>
<given-names>E.</given-names> 
<surname>Koyanagi</surname></string-name>
</person-group>, &#x201C;
<article-title>Building a search tree for a pilot system of a rescue search robot in a discretized random step environment</article-title>,&#x201D; 
<source>Journal of Robotics and Mechatronics</source>, vol. 
<volume>23</volume>, no. 
<issue>4</issue>, pp. 
<fpage>567</fpage>&#x2013;
<lpage>581</lpage>, 
<year>2011</year>.</mixed-citation>
</ref>
<ref id="ref-11">
<label>11</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>J.</given-names> 
<surname>Melo</surname></string-name> and <string-name>
<given-names>A.</given-names> 
<surname>Matos</surname></string-name>
</person-group>, &#x201C;
<article-title>Survey on advances on terrain based navigation for autonomous underwater vehicles</article-title>,&#x201D; 
<source>Ocean Engineering</source>, vol. 
<volume>139</volume>, pp. 
<fpage>250</fpage>&#x2013;
<lpage>264</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-12">
<label>12</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>S. Y.</given-names> 
<surname>Chien</surname></string-name>, <string-name>
<given-names>Y. L.</given-names> 
<surname>Lin</surname></string-name>, <string-name>
<given-names>P. J.</given-names> 
<surname>Lee</surname></string-name>, <string-name>
<given-names>S.</given-names> 
<surname>Han</surname></string-name>, <string-name>
<given-names>M.</given-names> 
<surname>Lewis</surname></string-name> <etal>et al.</etal>
</person-group>, &#x201C;
<article-title>Attention allocation for human multi-robot control: cognitive analysis based on behavior data and hidden states</article-title>,&#x201D; 
<source>International Journal of Human-Computer Studies</source>, vol. 
<volume>117</volume>, pp. 
<fpage>30</fpage>&#x2013;
<lpage>44</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-13">
<label>13</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>H.</given-names> 
<surname>Yamaguchi</surname></string-name>
</person-group>, &#x201C;
<article-title>A distributed motion coordination strategy for multiple nonholonomic mobile robots in cooperative hunting operations</article-title>,&#x201D; 
<source>Robotics and Autonomous Systems</source>, vol. 
<volume>43</volume>, no. 
<issue>4</issue>, pp. 
<fpage>257</fpage>&#x2013;
<lpage>282</lpage>, 
<year>2003</year>.</mixed-citation>
</ref>
<ref id="ref-14">
<label>14</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>S.</given-names> 
<surname>Yoon</surname></string-name> and <string-name>
<given-names>C.</given-names> 
<surname>Qiao</surname></string-name>
</person-group>, &#x201C;
<article-title>Cooperative search and survey using autonomous underwater vehicles (AUVs)</article-title>,&#x201D; 
<source>IEEE Transactions on Parallel and Distributed Systems</source>, vol. 
<volume>22</volume>, no. 
<issue>3</issue>, pp. 
<fpage>364</fpage>&#x2013;
<lpage>379</lpage>, 
<year>2011</year>.</mixed-citation>
</ref>
<ref id="ref-15">
<label>15</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>S.</given-names> 
<surname>Uttendorf</surname></string-name>, <string-name>
<given-names>B.</given-names> 
<surname>Eilert</surname></string-name> and <string-name>
<given-names>L.</given-names> 
<surname>Overmeyer</surname></string-name>
</person-group>, &#x201C;
<article-title>Combining a fuzzy inference system with an A&#x002A; algorithm for the automated generation of roadmaps for automated guided vehicles</article-title>,&#x201D; <source>At-Automatisierungstechnik</source>, vol. 
<volume>65</volume>, no. 
<issue>3</issue>, pp. 
<fpage>189</fpage>&#x2013;
<lpage>197</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-16">
<label>16</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>R.</given-names> 
<surname>Zlot</surname></string-name> and <string-name>
<given-names>A.</given-names> 
<surname>Stentz</surname></string-name>
</person-group>, &#x201C;
<article-title>Market-based multirobot coordination for complex tasks</article-title>,&#x201D; 
<source>International Journal of Robotics Research</source>, vol. 
<volume>25</volume>, no. 
<issue>1</issue>, pp. 
<fpage>73</fpage>&#x2013;
<lpage>101</lpage>, 
<year>2016</year>.</mixed-citation>
</ref>
<ref id="ref-17">
<label>17</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>E.</given-names> 
<surname>Ferranti</surname></string-name> and <string-name>
<given-names>N.</given-names> 
<surname>Trigoni</surname></string-name>
</person-group>, &#x201C;
<article-title>Practical issues in deploying mobile agents to explore a sensor-instrumented environment</article-title>,&#x201D; 
<source>Computer Journal</source>, vol. 
<volume>54</volume>, no. 
<issue>3</issue>, pp. 
<fpage>309</fpage>&#x2013;
<lpage>320</lpage>, 
<year>2011</year>.</mixed-citation>
</ref>
<ref id="ref-18">
<label>18</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y. F.</given-names> 
<surname>Cai</surname></string-name> and <string-name>
<given-names>S. X.</given-names> 
<surname>Yang</surname></string-name>
</person-group>, &#x201C;
<article-title>An improved PSO-based approach with dynamic parameter tuning for cooperative multi-robot target searching in complex unknown environments</article-title>,&#x201D; 
<source>International Journal of Control</source>, vol. 
<volume>86</volume>, no. 
<issue>10</issue>, pp. 
<fpage>1720</fpage>&#x2013;
<lpage>1732</lpage>, 
<year>2013</year>.</mixed-citation>
</ref>
<ref id="ref-19">
<label>19</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>A. B.</given-names> 
<surname>Hashemi</surname></string-name> and <string-name>
<given-names>M. R.</given-names> 
<surname>Meybodi</surname></string-name>
</person-group>, &#x201C;
<article-title>A note on the learning automata based algorithms for adaptive parameter selection in PSO</article-title>,&#x201D; 
<source>Applied Soft Computing Journal</source>, vol. 
<volume>11</volume>, no. 
<issue>1</issue>, pp. 
<fpage>689</fpage>&#x2013;
<lpage>705</lpage>, 
<year>2011</year>.</mixed-citation>
</ref>
<ref id="ref-20">
<label>20</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>J.</given-names> 
<surname>Kim</surname></string-name> and <string-name>
<given-names>M.</given-names> 
<surname>Jin</surname></string-name>
</person-group>, &#x201C;
<article-title>Synchronization of chaotic systems using particle swarm optimization and time-delay estimation</article-title>,&#x201D; 
<source>Nonlinear Dynamics</source>, vol. 
<volume>86</volume>, no. 
<issue>3</issue>, pp. 
<fpage>2003</fpage>&#x2013;
<lpage>2015</lpage>, 
<year>2016</year>.</mixed-citation>
</ref>
<ref id="ref-21">
<label>21</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>X.</given-names> 
<surname>Cao</surname></string-name> and <string-name>
<given-names>D. Q.</given-names> 
<surname>Zhu</surname></string-name>
</person-group>, &#x201C;
<article-title>Multi-AUV underwater cooperative search algorithm based on biological inspired neurodynamics model and velocity synthesis</article-title>,&#x201D; 
<source>Journal of Navigation</source>, vol. 
<volume>68</volume>, no. 
<issue>6</issue>, pp. 
<fpage>1075</fpage>&#x2013;
<lpage>1087</lpage>, 
<year>2015</year>.</mixed-citation>
</ref>
<ref id="ref-22">
<label>22</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>X.</given-names> 
<surname>Cao</surname></string-name>, <string-name>
<given-names>H.</given-names> 
<surname>Sun</surname></string-name> and <string-name>
<given-names>G. E.</given-names> 
<surname>Jan</surname></string-name>
</person-group>, &#x201C;
<article-title>Multi-AUV cooperative target search and tracking in unknown underwater environment</article-title>,&#x201D; 
<source>Ocean Engineering</source>, vol. 
<volume>150</volume>, pp. 
<fpage>1</fpage>&#x2013;
<lpage>11</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-23">
<label>23</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>H.</given-names> 
<surname>&#x00D6;gmen</surname></string-name> and <string-name>
<given-names>S.</given-names> 
<surname>Gagn&#x00E9;</surname></string-name>
</person-group>, &#x201C;
<article-title>Neural network architectures for motion perception and elementary motion detection in the fly visual system</article-title>,&#x201D; 
<source>Neural Networks</source>, vol. 
<volume>3</volume>, no. 
<issue>5</issue>, pp. 
<fpage>487</fpage>&#x2013;
<lpage>505</lpage>, 
<year>1990</year>.</mixed-citation>
</ref>
<ref id="ref-24">
<label>24</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>C. M.</given-names> 
<surname>Luo</surname></string-name> and <string-name>
<given-names>S. X.</given-names> 
<surname>Yang</surname></string-name>
</person-group>, &#x201C;
<article-title>A bioinspired neural network for real-time concurrent map building and complete coverage robot navigation in unknown environments</article-title>,&#x201D; 
<source>IEEE Transactions on Neural Networks</source>, vol. 
<volume>19</volume>, no. 
<issue>7</issue>, pp. 
<fpage>1279</fpage>&#x2013;
<lpage>1298</lpage>, 
<year>2008</year>.</mixed-citation>
</ref>
<ref id="ref-25">
<label>25</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>M. A.</given-names> 
<surname>Haque</surname></string-name>, <string-name>
<given-names>A. R.</given-names> 
<surname>Rahmani</surname></string-name> and <string-name>
<given-names>M. B.</given-names> 
<surname>Egerstedt</surname></string-name>
</person-group>, &#x201C;
<article-title>Biologically inspired confinement of multi-robot systems</article-title>,&#x201D; 
<source>International Journal of Bio-Inspired Computation</source>, vol. 
<volume>3</volume>, no. 
<issue>4</issue>, pp. 
<fpage>213</fpage>&#x2013;
<lpage>224</lpage>, 
<year>2011</year>.</mixed-citation>
</ref>
<ref id="ref-26">
<label>26</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>X.</given-names> 
<surname>Cao</surname></string-name>, <string-name>
<given-names>D. Q.</given-names> 
<surname>Zhu</surname></string-name> and <string-name>
<given-names>S. X.</given-names> 
<surname>Yang</surname></string-name>
</person-group>, &#x201C;
<article-title>Multi-AUV cooperative target search based on biological inspired neurodynamics model in three-dimensional underwater environments</article-title>,&#x201D; 
<source>IEEE Transactions on Neural Networks and Learning Systems</source>, vol. 
<volume>27</volume>, no. 
<issue>11</issue>, pp. 
<fpage>2364</fpage>&#x2013;
<lpage>2374</lpage>, 
<year>2016</year>.</mixed-citation>
</ref>
<ref id="ref-27">
<label>27</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>M.</given-names> 
<surname>Xia</surname></string-name>, <string-name>
<given-names>C.</given-names> 
<surname>Zhang</surname></string-name>, <string-name>
<given-names>L.</given-names> 
<surname>Weng</surname></string-name>, <string-name>
<given-names>J.</given-names> 
<surname>Liu</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Wang</surname></string-name>, <string-name>
<given-names>S.</given-names> 
<surname>Tiwari</surname></string-name>, <string-name>
<given-names>M.</given-names> 
<surname>Trivedi</surname></string-name> and <string-name>
<given-names>M. L.</given-names> 
<surname>Kohle</surname></string-name>
</person-group>, &#x201C;
<article-title>Robot path planning based on multi-objective optimization with local search</article-title>,&#x201D; 
<source>Journal of Intelligent &#x0026; Fuzzy Systems</source>, vol. 
<volume>35</volume>, no. 
<issue>2</issue>, pp. 
<fpage>1755</fpage>&#x2013;
<lpage>1764</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-28">
<label>28</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>M.</given-names> 
<surname>Elhoseny</surname></string-name>, <string-name>
<given-names>A.</given-names> 
<surname>Shehab</surname></string-name> and <string-name>
<given-names>X. H.</given-names> 
<surname>Yuan</surname></string-name>
</person-group>, &#x201C;
<article-title>Optimizing robot path in dynamic environments using Genetic Algorithm and Bezier Curve</article-title>,&#x201D; 
<source>Journal of Intelligent &#x0026; Fuzzy Systems</source>, vol. 
<volume>33</volume>, no. 
<issue>4</issue>, pp. 
<fpage>2305</fpage>&#x2013;
<lpage>2316</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-29">
<label>29</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>B.</given-names> 
<surname>Sun</surname></string-name>, <string-name>
<given-names>D. Q.</given-names> 
<surname>Zhu</surname></string-name> and <string-name>
<given-names>S. X.</given-names> 
<surname>Yang</surname></string-name>
</person-group>, &#x201C;
<article-title>An optimized fuzzy control algorithm for three-dimensional AUV path planning</article-title>,&#x201D; 
<source>International Journal of Fuzzy Systems</source>, vol. 
<volume>20</volume>, no. 
<issue>2</issue>, pp. 
<fpage>597</fpage>&#x2013;
<lpage>610</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-30">
<label>30</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>T. Y.</given-names> 
<surname>Abdalla</surname></string-name>, <string-name>
<given-names>A. A.</given-names> 
<surname>Abed</surname></string-name> and <string-name>
<given-names>A. A.</given-names> 
<surname>Ahmed</surname></string-name>
</person-group>, &#x201C;
<article-title>Mobile robot navigation using PSO-optimized fuzzy artificial potential field with fuzzy control</article-title>,&#x201D; 
<source>Journal of Intelligent &#x0026; Fuzzy Systems</source>, vol. 
<volume>32</volume>, no. 
<issue>6</issue>, pp. 
<fpage>3893</fpage>&#x2013;
<lpage>3908</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>