<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">16152</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2022.016152</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Tour Planning Design for Mobile Robots Using Pruned Adaptive Resonance Theory Networks</article-title>
<alt-title alt-title-type="left-running-head">Tour Planning Design for Mobile Robots Using Pruned Adaptive Resonance Theory Networks</alt-title>
<alt-title alt-title-type="right-running-head">Tour Planning Design for Mobile Robots Using Pruned Adaptive Resonance Theory Networks</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes"><name name-style="western"><surname>Palani Murugan</surname><given-names>S.</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><email>suyambhu11@gmail.com</email>
</contrib>
<contrib id="author-2" contrib-type="author"><name name-style="western"><surname>Chinnadurai</surname><given-names>M.</given-names></name><xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-3" contrib-type="author"><name name-style="western"><surname>Manikandan</surname><given-names>S.</given-names></name><xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<aff id="aff-1"><label>1</label><institution>Department of CSE, E. G. S. Pillay Engineering College</institution>, <addr-line>Nagapattinam, 611002, Tamil Nadu</addr-line>, <country>India</country></aff>
<aff id="aff-2"><label>2</label><institution>Department of IT, E. G. S. Pillay Engineering College</institution>, <addr-line>Nagapattinam, 611002, Tamil Nadu</addr-line>, <country>India</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: S. Palani Murugan. Email: <email>suyambhu11@gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-08-30"><day>30</day><month>08</month><year>2021</year>
</pub-date>
<volume>70</volume>
<issue>1</issue>
<fpage>181</fpage>
<lpage>194</lpage>
<history>
<date date-type="received"><day>08</day><month>4</month><year>2021</year></date>
<date date-type="accepted"><day>10</day><month>5</month><year>2021</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2022 Palani Murugan et al.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Palani Murugan et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_16152.pdf"></self-uri>
<abstract>
<p>The development of intelligent algorithms for controlling autonomous mobile robots in real-time activities has increased dramatically in recent years. However, conventional intelligent algorithms currently fail to accurately predict unexpected obstacles involved in tour paths and thereby suffer from inefficient tour trajectories. The present study addresses these issues by proposing a potential field integrated pruned adaptive resonance theory (PPART) neural network for effectively managing the touring process of autonomous mobile robots in real time. The proposed system is implemented using the AlphaBot platform, and the performance of the system is evaluated according to the obstacle prediction accuracy, path detection accuracy, elapsed time, tour length, and the overall accuracy of the system. The proposed system provides a very high obstacle prediction accuracy of 99.61&#x0025;. Accordingly, the proposed tour planning design effectively predicts unexpected obstacles in the environment and thereby increases the overall efficiency of tour navigation.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Autonomous mobile robots</kwd>
<kwd>path exploration</kwd>
<kwd>navigation</kwd>
<kwd>tour planning</kwd>
<kwd>tour process</kwd>
<kwd>potential field integrated pruned ART networks</kwd>
<kwd>AlphaBot platform</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label><title>Introduction</title>
<p>Fixed robots have been widely applied for many years in numerous settings where environmental conditions are known with a very high degree of certainty. However, mobile robots have the capacity to perform a much wider range of activities, such as exploring terrestrial, underwater, aerial, and outer space environments, transporting cargo, completing complex tasks, performing surgery, assisting in warehouse distribution centers, supporting security, acting as personal assistants, aiding in space and ocean exploration, and providing guidance for navigation [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-4">4</xref>]. Mobile robots that implement well-defined tasks in highly controlled environments rely upon preprogrammed or externally communicated instructions and guidance rules for moving about the environment, and generally implement only simplistic obstacle avoidance algorithms. In contrast, the goal of autonomous mobile robots is to implement tasks within uncontrolled environments without any external direction. Accordingly, autonomous mobile robots must maneuver around obstacles, in addition to addressing all other issues that guided mobile robots encounter [<xref ref-type="bibr" rid="ref-5">5</xref>]. These features are achieved by mobile robots using several technologies, such as various sensors, wireless communication, integrated safety, fleet simulation software, supervisory software, and fleet management software [<xref ref-type="bibr" rid="ref-6">6</xref>]. The first electronic autonomous mobile robots were Elmer and Elsie, which were created by Dr. William Grey Walter in 1948 in Bristol, England [<xref ref-type="bibr" rid="ref-7">7</xref>]. This and subsequent developments in autonomous mobile robot design relied upon conventional obstacle avoidance algorithms. However, efficient autonomous operation requires predictive capabilities based upon feedback from the environment, which cannot be obtained via conventional algorithms. The first autonomous mobile robot to be controlled with the help of artificial intelligence was introduced in 1970 [<xref ref-type="bibr" rid="ref-8">8</xref>].</p>
<p>More recent efforts to improve the predictive capabilities of autonomous mobile robots have been based upon the development of increasingly sophisticated artificial neural networks (ANNs) [<xref ref-type="bibr" rid="ref-9">9</xref>]. For example, Tai et al. [<xref ref-type="bibr" rid="ref-10">10</xref>] implemented an ANN by integrating a convolutional neural network (CNN) with the respective decision-making process to facilitate effective robot exploration based on visual cues. The autonomous mobile robot system was trained using annotated visual information related to the exploration task, and its efficiency was evaluated in a real-time application. Similarly, Thomas et al. [<xref ref-type="bibr" rid="ref-11">11</xref>] applied a CNN in the FumeBot home monitoring robot system. Obstacles in the home environment were effectively identified from image data during the training process. Bing et al. [<xref ref-type="bibr" rid="ref-12">12</xref>] developed effective autonomous mobile robot control for exploration applications using a spiking neural network (SNN) in conjunction with various robot characteristics, such as energy, speed, and computational capabilities, during network training. The efficiency of the system was evaluated via simulations. Patnaik et al. [<xref ref-type="bibr" rid="ref-13">13</xref>] developed autonomous mobile robot control using an evolving sensory data-based network approach. Four different learning models were applied during network training to predict both obstacles and targets in the surrounding environment based on their sizes and shapes.</p>
<p>The present work addresses these issues by proposing a potential field integrated pruned adaptive resonance theory (PPART) neural network for effectively managing the touring process of autonomous mobile robots in real time by predicting unexpected obstacles in the environment with very high accuracy. The excellent obstacle prediction accuracy then facilitates the development of highly efficient robot trajectories in real time. Specifically, the potential field method is employed to conduct path exploration according to a given destination and the presence of obstacles in the path exploration space based on the Laplace equation and an energy field representation of the path exploration space, including the destination and obstacles within that space. The adaptive resonance theory (ART) neural network is then employed in conjunction with the determined obstacles to obtain the optimal navigation path that avoids all obstacles and is the shortest possible path to achieve operational objectives. Here, the optimal navigation pathways are identified by fuzzy ART neural networks based on building maps that consist of several geometric primitives. The remainder of this manuscript is organized as follows. Section 2 presents the PPART neural network in detail. The obstacle prediction and path detection performance, and the tour efficiency obtained by the network are evaluated in Section 3. Finally, Section 4 concludes the manuscript.</p>
</sec>
<sec id="s2">
<label>2</label><title>Potential Field Integrated Pruned Adaptive Resonance Theory Neural Network</title>
<sec id="s2_1">
<title>Assumptions and Notations</title>
<p>The following assumptions and notations are applied to identify the obstacles, ideal travel paths, and navigation process of an autonomous mobile robot [<xref ref-type="bibr" rid="ref-14">14</xref>].</p>
<list list-type="bullet">
<list-item><p>The tour planning working environment within which the mobile robot is placed is defined as <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>W</mml:mi><mml:mi>E</mml:mi></mml:math></inline-formula>, and the boundary of the environment is denoted as the tour environment boundary (TEB).</p></list-item>
<list-item><p>A specific area to be analyzed and accessed by the robot in <italic>WE</italic> is denoted as <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mi>W</mml:mi><mml:mrow><mml:msub><mml:mi>E</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>, which is approximately a circle with a radius defined as <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>.</p></list-item>
<list-item><p>The <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>W</mml:mi><mml:mi>E</mml:mi></mml:math></inline-formula> and TEB contain both unknown and known stationary obstacles with unknown positions and shapes.</p></list-item>
<list-item><p>The shape of the mobile robot is approximately a circle with a radius defined as <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>o</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>.</p></list-item>
<list-item><p>The mobile robot path configuration space in <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mi>W</mml:mi><mml:mi>E</mml:mi></mml:math></inline-formula> is defined as <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi>S</mml:mi><mml:mi>P</mml:mi></mml:math></inline-formula>, and the free space is defined as <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>S</mml:mi><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>f</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>.</p></list-item>
<list-item><p>The robot sensing data captured within <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>W</mml:mi><mml:mrow><mml:msub><mml:mi>E</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is defined as <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>R</mml:mi><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>, and the specific area radius is defined as <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>.</p></list-item>
</list>
</sec>
<sec id="s2_2">
<title>Mobile Robot Path Identification Process Based on the Potential Field</title>
<p>The path exploration problem and any obstacles within the defined environment <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>W</mml:mi><mml:mi>E</mml:mi></mml:math></inline-formula> must be identified [<xref ref-type="bibr" rid="ref-15">15</xref>]. Initially, the obstacles are identified through <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mi>R</mml:mi><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>, and the points are connected to facilitate path exploration in <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mi>W</mml:mi><mml:mi>E</mml:mi></mml:math></inline-formula>. The primary objective of the path identification process is to minimize the path length by analyzing all of the obstacles present in <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mi>W</mml:mi><mml:mi>E</mml:mi></mml:math></inline-formula>. To this end, we first define the respective observation points of the mobile robot as <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mtext>&#xA0;</mml:mtext><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, where <italic>N</italic> is the total number of observation points. The defined observation points must satisfy the following equation:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi mathvariant="normal">&#x2200;</mml:mi><mml:mspace width="thinmathspace" /><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mi>W</mml:mi><mml:mrow><mml:msub><mml:mi>E</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mspace width="1em" /><mml:mi mathvariant="normal">&#x2203;</mml:mi><mml:mspace width="thickmathspace" /><mml:mi>i</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi></mml:mrow><mml:mo fence="false" stretchy="false">}</mml:mo><mml:mo>,</mml:mo><mml:mspace width="1em" /><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mo>&#x2264;</mml:mo><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></disp-formula></p>
<p>Once the observation points related to the path exploration process are detected, the detected robot path must adhere to the mapping relation <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mi>p</mml:mi><mml:mi>i</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x003A;</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">]</mml:mo><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mi>W</mml:mi><mml:mrow><mml:msub><mml:mi>E</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>. Therefore, the following condition must also be satisfied:
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mi>p</mml:mi><mml:mi>i</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>0</mml:mn><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:mi>i</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03C4;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>,</mml:mo></mml:math></disp-formula>
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mi>p</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>N</mml:mi></mml:mfrac></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03C4;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mspace width="1em" /><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo></mml:math></disp-formula>
where <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mrow><mml:msub><mml:mi>&#x03C4;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is defined as the path navigation of all observation points <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>&#x2026; During exploration, the path initiates from <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mi>p</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>0</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and traverses to the initial point <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mi>p</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. The tour planning process and the kinematic robot model are illustrated in <?A3B2 "fig1",5,"anchor"?><xref ref-type="fig" rid="fig-1">Fig. 1</xref>. The motion of the mobile robot in the defined system is constrained in WE using the following dynamic non-holonomic constraints in the <italic>x</italic>, <italic>y</italic>, and <italic>z</italic> translational directions and in the rotational direction [<xref ref-type="bibr" rid="ref-16">16</xref>]:
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mrow><mml:mover><mml:mi>x</mml:mi><mml:mo>&#x02D9;</mml:mo></mml:mover></mml:mrow><mml:mo>=</mml:mo><mml:mi>z</mml:mi><mml:mo>.</mml:mo><mml:mrow><mml:mtext>cos</mml:mtext></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03C6;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula>
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mi>z</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>Z</mml:mi><mml:mi>R</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>Z</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:math></disp-formula>
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo>&#x02D9;</mml:mo></mml:mover></mml:mrow><mml:mo>=</mml:mo><mml:mi>m</mml:mi><mml:mo>.</mml:mo><mml:mi>sin</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03C6;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo></mml:math></disp-formula>
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mi>m</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>Z</mml:mi><mml:mi>R</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>Z</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>L</mml:mi></mml:mfrac></mml:math></disp-formula>
where <italic>z</italic> is the translational velocity of the robot, the angular robot orientation is represented as <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mi>&#x03C6;</mml:mi></mml:math></inline-formula>, <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mrow><mml:msub><mml:mi>Z</mml:mi><mml:mi>R</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> represents the translational velocity of the right wheel, <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mrow><mml:msub><mml:mi>Z</mml:mi><mml:mi>L</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> denotes the translational velocity of the left wheel, <italic>m</italic> denotes the robot&#x0027;s rotational velocity, and <italic>L</italic> is the distance between the right and left wheels [<xref ref-type="bibr" rid="ref-17">17</xref>].</p>
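<p>For illustration, the following minimal Python sketch (ours, not the authors' implementation) advances the robot pose by one Euler integration step using <xref ref-type="disp-formula" rid="eqn-4">Eqs. (4)</xref>&#x2013;<xref ref-type="disp-formula" rid="eqn-7">(7)</xref>; it assumes the standard differential-drive convention in which both translational components scale with <italic>z</italic> while <italic>m</italic> drives the heading rate:</p>
<preformat>
import math

def step(x, y, phi, Z_R, Z_L, L, dt):
    """Advance the pose (x, y, phi) by one Euler step of duration dt.

    Z_R, Z_L: right/left wheel translational velocities; L: wheel base.
    """
    z = (Z_R + Z_L) / 2.0            # Eq. (5): translational velocity
    m = (Z_R - Z_L) / L              # Eq. (7): rotational velocity
    x = x + z * math.cos(phi) * dt   # Eq. (4)
    y = y + z * math.sin(phi) * dt   # Eq. (6), standard form with z
    phi = phi + m * dt               # heading integrates the rotational rate
    return x, y, phi
</preformat>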
<fig id="fig-1"><label>Figure 1</label><caption><title>(a) Sample mobile robot and (b) Kinematic robot model</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-1.png"/></fig>
<p>During the analysis process, <italic>WE</italic> is split into grids within which the obstacles and destination are represented, and the obstacles and destination are respectively assigned repulsive and attractive potentials according to an artificial potential field [<xref ref-type="bibr" rid="ref-18">18</xref>]. This converts robot path exploration into an energy minimization problem [<xref ref-type="bibr" rid="ref-19">19</xref>]. A representative potential field within a divided working space is illustrated in <?A3B2 "fig2",5,"anchor"?><xref ref-type="fig" rid="fig-2">Fig. 2</xref>, where the green color represents the attractive potential of the destination and the brown color represents the repulsive potential of the obstacles. The potentials presented in <xref ref-type="fig" rid="fig-2">Fig. 2</xref> are defined in the following discussion.</p>
<fig id="fig-2"><label>Figure 2</label><caption><title>Working environment representation using potential field</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-2.png"/></fig>
<p>The attractive potential of the destination is defined as follows:
<disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>g</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mi>c</mml:mi><mml:msqrt><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>g</mml:mi><mml:mi>o</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>g</mml:mi><mml:mi>o</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:msqrt></mml:math></disp-formula>
<disp-formula id="ueqn-1">
<mml:math id="mml-ueqn-1" display="block"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>H</mml:mi><mml:mi>A</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>&#x03B4;</mml:mi><mml:mo>+</mml:mo><mml:msubsup><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>s</mml:mi></mml:msubsup><mml:mo>&#x2061;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mfrac></mml:math>
</disp-formula>
where <italic>x</italic> and <italic>y</italic> are the robot coordinates in two-dimensional (2D) space, <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>g</mml:mi><mml:mi>o</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>g</mml:mi><mml:mi>o</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> are the 2D coordinates of the destination, and <italic>c</italic> is a constant. Then, the potentials of obstacles in <italic>WE</italic> are defined as follows:
<disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:mi>g</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>
where <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula> is the maximum potential of<italic>WE</italic>, and <italic>g</italic> is derived as follows.
<disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd><mml:mi>g</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:mi>y</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mtd><mml:mtd><mml:mi></mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x2212;</mml:mo><mml:mi>x</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x2212;</mml:mo><mml:mi>x</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x2212;</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd /><mml:mtd><mml:mi></mml:mi><mml:mspace width="1em" /><mml:mo>+</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>&#x2212;</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>l</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>Here, <italic>x<sub>o</sub></italic> and <italic>y<sub>o</sub></italic> are the 2D coordinates of the obstacle, and <italic>l</italic> is the obstacle length. In addition, robot path exploration is maintained within <italic>WE</italic> by applying a repulsive potential to the TEB as follows:
<disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>H</mml:mi><mml:mi>A</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>&#x03B4;</mml:mi><mml:mo>+</mml:mo><mml:msubsup><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>s</mml:mi></mml:msubsup><mml:mo>&#x2061;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mi>&#x03B4;</mml:mi></mml:math></inline-formula> is a constant, <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mrow><mml:msub><mml:mi>g</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is a linear boundary convex region function, and the boundary face segments are represented as <italic>s</italic>. Finally, the total potential acting on the robot at (<italic>x</italic>, <italic>y</italic>) is computed as the sum of the attractive and repulsive potentials:
<disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:mrow><mml:msub><mml:mi>U</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>g</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mi>o</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>H</mml:mi><mml:mi>A</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></disp-formula></p>
<p>The successful identification of <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mi>W</mml:mi><mml:mi>E</mml:mi><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:mi>R</mml:mi><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>,</mml:mo></mml:math></inline-formula> and <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mi mathvariant="normal">&#x2200;</mml:mi><mml:mspace width="thinmathspace" /><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mi>W</mml:mi><mml:mrow><mml:msub><mml:mi>E</mml:mi><mml:mi>a</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mi mathvariant="normal">&#x2203;</mml:mi><mml:mspace width="thickmathspace" /><mml:mi>i</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi></mml:mrow><mml:mo fence="false" stretchy="false">}</mml:mo><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mi>s</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo fence="false" stretchy="false">&#x2016;</mml:mo><mml:mo>&#x2264;</mml:mo><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is used to define the obstacle-related information in <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mi>W</mml:mi><mml:mi>E</mml:mi></mml:math></inline-formula>. These observation points are analyzed subject to the kinematic constraints of the mobile robot, such as <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mrow><mml:mover><mml:mi>x</mml:mi><mml:mo>&#x00A8;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mrow><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo>&#x00A8;</mml:mo></mml:mover></mml:mrow></mml:mrow></mml:math></inline-formula>. The positions of all obstacles in <italic>WE</italic> are then predicted based on an effective examination of the values of <italic>P<sub>o</sub></italic>.</p>
<p>The output of this process is a vector <italic>I</italic> &#x003D; [<italic>I</italic><sub>1</sub><italic>I</italic><sub>2</sub>&#x2026;<italic>I<sub>M</sub></italic>]<sup><roman>T</roman></sup> of length <italic>M</italic>, where each element lies in the range (0, 1). This vector, along with the geometric primitives and their corresponding velocity, position, and acceleration parameters, is passed as input to the ART neural network.</p>
</sec>
<sec id="s2_3">
<title>Mobile Robot Navigation Using an ART Neural Network</title>
<p>The proposed ART neural network architecture and corresponding processing are illustrated in <?A3B2 "fig3",5,"anchor"?><xref ref-type="fig" rid="fig-3">Fig. 3</xref>. The network consists of an input layer denoted as <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mn>0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> that receives vector inputs <italic>I</italic> &#x003D; [<italic>I</italic><sub>1</sub><italic>I</italic><sub>2</sub>&#x2026;<italic>I<sub>M</sub></italic>]<sup><roman>T</roman></sup>. The incoming inputs are received as <italic>y</italic><sub>1</sub> &#x003D; [<italic>y</italic><sub>11</sub><italic>y</italic><sub>12</sub>&#x2026;<italic>y</italic><sub>1M</sub>]<sup><roman>T</roman></sup> by the following layer <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> in the bottom-up process and are then transmitted to the following layer <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> as <italic>y</italic><sub>2</sub> &#x003D; [<italic>y</italic><sub>21</sub><italic>y</italic><sub>22</sub>&#x2026;<italic>y</italic><sub>2N</sub>]<sup><roman>T</roman></sup> of length <italic>N</italic> via the top-down inputs, where the inputs <italic>I</italic> are processed. After initializing the input vectors and respective processing layers, the particular weight values of the network nodes are denoted as <italic>w<sub>j</sub></italic> &#x003D; {<italic>w<sub>j</sub></italic><sub>1</sub>, <italic>w<sub>j</sub></italic><sub>2</sub>,&#x2026;, <italic>w<sub>jM</sub></italic>}.</p>
<fig id="fig-3"><label>Figure 3</label><caption><title>ART network structure</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-3.png"/></fig>
<p>The specific navigation choice function is defined as follows:
<disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>I</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mi>I</mml:mi><mml:mo>&#x2227;</mml:mo><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mfrac><mml:mo>.</mml:mo></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mo>&#x2227;</mml:mo></mml:math></inline-formula> represents the fuzzy operator defined as
<disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mo>&#x2227;</mml:mo><mml:mi>q</mml:mi></mml:mrow><mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mtext>min</mml:mtext></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:mrow><mml:msub><mml:mi>q</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula>
where <italic>p<sub>i</sub></italic> and <italic>q<sub>i</sub></italic> denote the <italic>i</italic>-th components of the <italic>M</italic>-dimensional vectors <italic>p</italic> and <italic>q</italic>. In addition, <italic>&#x03B1;</italic> is a scalar value, and the Manhattan norm is applied, which is computed as follows:
<disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munderover><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:munderover><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>Furthermore, a matching process is performed for every incoming input; when the match succeeds, the exact navigation path is identified. Otherwise, network training continues by updating the weight values as follows:
<disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:msubsup><mml:mi>w</mml:mi><mml:mi>J</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>I</mml:mi><mml:mo>&#x2227;</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mi>J</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>l</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:msubsup><mml:mi>w</mml:mi><mml:mi>J</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>l</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula></p>
<p>Here, the ART network uses a learning parameter <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> set to 1 and a vigilance parameter <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mi>&#x03C1;</mml:mi></mml:math></inline-formula> to implement a very fast network training process. In general, the value of <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> lies in the range (0, 1], and a constant value of <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:mi>&#x03C1;</mml:mi></mml:math></inline-formula> is employed.</p>
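<p>The choice, match, and update steps of <xref ref-type="disp-formula" rid="eqn-13">Eqs. (13)</xref>&#x2013;<xref ref-type="disp-formula" rid="eqn-16">(16)</xref> can be summarized in a short NumPy sketch (ours; the explicit vigilance test written with <italic>&#x03C1;</italic> follows standard fuzzy ART practice and is an assumption, as the paper does not state it in closed form):</p>
<preformat>
import numpy as np

def choice(I, w, alpha=0.001):
    # Eq. (13): T_j(I) = |I ^ w_j| / (alpha + |w_j|), with the fuzzy AND
    # of Eq. (14) as an element-wise minimum and the Manhattan norm of
    # Eq. (15) as a sum of absolute values
    return np.abs(np.minimum(I, w)).sum() / (alpha + np.abs(w).sum())

def resonates(I, w, rho):
    # Standard fuzzy ART vigilance test: |I ^ w| / |I| must reach rho
    return np.abs(np.minimum(I, w)).sum() >= rho * np.abs(I).sum()

def update(I, w, beta=1.0):
    # Eq. (16): w_new = beta * (I ^ w_old) + (1 - beta) * w_old;
    # beta = 1 gives the fast-learning mode used here
    return beta * np.minimum(I, w) + (1.0 - beta) * w
</preformat>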
<p>The complete training process is illustrated in <?A3B2 "fig4",5,"anchor"?><xref ref-type="fig" rid="fig-4">Fig. 4</xref>. Here, category pruning, direct category updating, and direct category creation are applied to further refine the ART network output.</p>
<p>In the category pruning process, a fuzzy ART rectangular map is identified for every obstacle present in <italic>WE</italic>. The pruning process removes obstacles related to the rectangular map from the touring environment, and the related categories are also eliminated from the list. The weight values of the removed obstacles are written in the form of <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:msubsup><mml:mi>v</mml:mi><mml:mi>j</mml:mi><mml:mi>c</mml:mi></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, where <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:msubsup><mml:mi>v</mml:mi><mml:mi>j</mml:mi><mml:mi>c</mml:mi></mml:msubsup></mml:math></inline-formula> are the vertices of the corresponding rectangular map. Finally, the respective weight values are changed in both layers F<sub>1</sub> and F<sub>2</sub>. For layer F<sub>1</sub>, these are computed as follows.</p>
<p><disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left" rowspacing="1em 1em 0.4em" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mn>1</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>:=</mml:mo><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mn>1</mml:mn><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>:=</mml:mo><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>:=</mml:mo><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>N</mml:mi><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo>,</mml:mo><mml:mspace width="1em" /><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>M</mml:mi><mml:mo>.</mml:mo></mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>For layer F<sub>2</sub>, the weight values are computed as follows.
<disp-formula id="eqn-18"><label>(18)</label><mml:math id="mml-eqn-18" display="block"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left" rowspacing="1em 0.4em" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:mi>N</mml:mi><mml:mo>:=</mml:mo><mml:mi>N</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>N</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>N</mml:mi><mml:mi>M</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>This process is repeated for all removed obstacles in the touring environment.</p>
<fig id="fig-4"><label>Figure 4</label><caption><title>Pruned ART network learning process</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-4.png"/></fig>
<p>Then, direct category updating is applied to the ART network to resize the rectangular map categories. In addition, the corresponding weight values are also updated as <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:msubsup><mml:mi>v</mml:mi><mml:mi>j</mml:mi><mml:mi>c</mml:mi></mml:msubsup></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> according to the new vertices of the rectangular maps. This process is repeated whenever the size of a rectangle map decreases or one map divides into two or more maps.</p>
<p>Finally, direct category creation is applied whenever the incoming input does not match the trained features. Moreover, categories are created only when obstacles are present in the environment. A new rectangular map is created with the respective weight values defined above, and a new category is created (<xref ref-type="disp-formula" rid="eqn-19">Eq. (19)</xref>). Afterward, the category count is increased continuously to cover the corresponding tour path.</p>
<p><disp-formula id="eqn-19"><label>(19)</label><mml:math id="mml-eqn-19" display="block"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable columnalign="left" rowspacing="1em 0.4em" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>N</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>N</mml:mi><mml:mi>M</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>According to the above discussion, each incoming input feature is processed by a pruned ART network that completely recognizes the obstacles present in the environment. Then, the effective navigation path is detected from source to destination by eliminating unwanted categories from the list. This process is repeated, and the mobile robot efficiently moves in the tour environment until reaching the destination.</p>
</sec>
</sec>
<sec id="s3">
<label>3</label><title>Experimental Analysis</title>
<p>The proposed PPART approach was developed using the AlphaBot robotic development platform. The development platform is compatible with Arduino and Raspberry Pi, and includes several components, such as a mobile chassis and a main control board for providing motion within a test environment boundary. These components and their compatibility support obstacle prediction, line tracking, infrared remote control, Bluetooth and ZigBee communication, and video monitoring. The mobile robot path exploration and navigation performance provided by the PPART neural network is evaluated according to its obstacle prediction accuracy, path detection accuracy, error rate, and overall system accuracy based on different evaluation metrics. The accuracy of obstacle prediction was compared with those obtained using three existing machine learning techniques, namely ANN-, CNN-, and SNN-based methods.</p>
<p><?A3B2 "tbl1",5,"anchor"?><xref ref-type="table" rid="table-1">Tab. 1</xref> lists the average obstacle prediction accuracy obtained by the four methods considered based on 250 touring attempts. The results in the table demonstrate that the obstacle prediction accuracy of the PPART approach effectively predicts the artificial potential regions in <italic>WE</italic>. This prediction process is facilitated by the continuous collection of observation points in <italic>WE</italic>. The results in <xref ref-type="table" rid="table-1">Tab. 1</xref> are graphically presented in <?A3B2 "fig5",5,"anchor"?><xref ref-type="fig" rid="fig-5">Fig. 5</xref> for a more intuitive appraisal of the obstacle prediction accuracy of the proposed approach.</p>
<table-wrap id="table-1"><label>Table 1</label><caption><title>Obstacles prediction accuracy on navigation attempts</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Methods</th>
<th align="left" colspan="9">Number of touring navigation attempts</th>
</tr>
<tr>
<th align="left">50</th>
<th align="left">75</th>
<th align="left">100</th>
<th align="left">125</th>
<th align="left">150</th>
<th align="left">175</th>
<th align="left">200</th>
<th align="left">225</th>
<th align="left">250</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">ANN</td>
<td align="left">95.11</td>
<td align="left">95.53</td>
<td align="left">95.04</td>
<td align="left">97.13</td>
<td align="left">95.59</td>
<td align="left">96.02</td>
<td align="left">96.07</td>
<td align="left">96.22</td>
<td align="left">96.27</td>
</tr>
<tr>
<td align="left">CNN</td>
<td align="left">97.32</td>
<td align="left">95.59</td>
<td align="left">97.09</td>
<td align="left">97.32</td>
<td align="left">96.87</td>
<td align="left">98.02</td>
<td align="left">96.99</td>
<td align="left">97.96</td>
<td align="left">97.09</td>
</tr>
<tr>
<td align="left">SNN</td>
<td align="left">97.9</td>
<td align="left">98.36</td>
<td align="left">97.383</td>
<td align="left">97.993</td>
<td align="left">98.443</td>
<td align="left">97.823</td>
<td align="left">98.013</td>
<td align="left">98.253</td>
<td align="left">98.023</td>
</tr>
<tr>
<td align="left">PPART</td>
<td align="left">98.21</td>
<td align="left">98.48</td>
<td align="left">98.92</td>
<td align="left">98.23</td>
<td align="left">98.7</td>
<td align="left">98.92</td>
<td align="left">98.78</td>
<td align="left">98.82</td>
<td align="left">98.97</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-5"><label>Figure 5</label><caption><title>Obstacles detection accuracy on the number of navigation attempts</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-5.png"/></fig>
<p>In addition, the efficiency of the obstacle detection process was analyzed, the results of which are listed in <?A3B2 "tbl2",5,"anchor"?><xref ref-type="table" rid="table-2">Tab. 2</xref>. The results in the table clearly demonstrate that the proposed PPART approach efficiently predicts the obstacles present in <italic>WE</italic>. The results in <xref ref-type="table" rid="table-2">Tab. 2</xref> are graphically presented in <?A3B2 "fig6",5,"anchor"?><xref ref-type="fig" rid="fig-6">Fig. 6</xref>.</p>
<table-wrap id="table-2"><label>Table 2</label><caption><title>Obstacles prediction accuracy on various time interval</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Methods</th>
<th align="left" colspan="9">Time (s)</th>
</tr>
<tr>
<th align="left">0.5</th>
<th align="left">1.0</th>
<th align="left">1.5</th>
<th align="left">2.0</th>
<th align="left">2.5</th>
<th align="left">3.0</th>
<th align="left">3.5</th>
<th align="left">4.0</th>
<th align="left">4.5</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">ANN</td>
<td align="left">96.34</td>
<td align="left">96.76</td>
<td align="left">96.27</td>
<td align="left">98.36</td>
<td align="left">96.82</td>
<td align="left">97.25</td>
<td align="left">97.3</td>
<td align="left">97.45</td>
<td align="left">97.5</td>
</tr>
<tr>
<td align="left">CNN</td>
<td align="left">98.55</td>
<td align="left">96.82</td>
<td align="left">98.32</td>
<td align="left">98.55</td>
<td align="left">98.1</td>
<td align="left">98.56</td>
<td align="left">98.22</td>
<td align="left">98.34</td>
<td align="left">98.32</td>
</tr>
<tr>
<td align="left">SNN</td>
<td align="left">98.78</td>
<td align="left">98.92</td>
<td align="left">98.613</td>
<td align="left">98.63</td>
<td align="left">98.73</td>
<td align="left">98.83</td>
<td align="left">98.43</td>
<td align="left">98.83</td>
<td align="left">98.53</td>
</tr>
<tr>
<td align="left">PPART</td>
<td align="left">99.44</td>
<td align="left">99.31</td>
<td align="left">99.15</td>
<td align="left">99.36</td>
<td align="left">99.63</td>
<td align="left">99.15</td>
<td align="left">99.01</td>
<td align="left">99.35</td>
<td align="left">99.32</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-6"><label>Figure 6</label><caption><title>Obstacles detection accuracy based on the time interval</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-6.png"/></fig>
<p><xref ref-type="fig" rid="fig-5">Figs. 5</xref> and <xref ref-type="fig" rid="fig-6">6</xref> illustrate the accuracy and efficiency of obstacle identification. The computation of <italic>P<sub>g</sub></italic> based on <xref ref-type="disp-formula" rid="eqn-8">Eq. (8)</xref> and <italic>P<sub>HA</sub> </italic>based on <xref ref-type="disp-formula" rid="eqn-11">Eq. (11)</xref> reduce the occurrence of mobile robot navigation outside of the TEB. Meanwhile, the computation of <italic>P<sub>o</sub></italic> based on <xref ref-type="disp-formula" rid="eqn-9">Eq. (9)</xref> maximizes the obstacle detection process. Following the computation of the repulsive potential values, the obstacles are effectively predicted in the different time intervals.</p>
<p>The path navigation accuracy values are listed in <xref ref-type="table" rid="table-3">Tab. 3</xref>. The table shows that the proposed PPART approach offers better prediction accuracy across different numbers of navigation attempts. The results in <xref ref-type="table" rid="table-3">Tab. 3</xref> are graphically presented in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>.</p>
<table-wrap id="table-3"><label>Table 3</label><caption><title>Path prediction accuracy on navigation attempts</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Methods</th>
<th align="left" colspan="9">Number of touring navigation attempts</th>
</tr>
<tr>
<th align="left">50</th>
<th align="left">75</th>
<th align="left">100</th>
<th align="left">125</th>
<th align="left">150</th>
<th align="left">175</th>
<th align="left">200</th>
<th align="left">225</th>
<th align="left">250</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">ANN</td>
<td align="left">97.43</td>
<td align="left">97.85</td>
<td align="left">97.36</td>
<td align="left">99.45</td>
<td align="left">97.91</td>
<td align="left">98.34</td>
<td align="left">98.39</td>
<td align="left">98.54</td>
<td align="left">98.59</td>
</tr>
<tr>
<td align="left">CNN</td>
<td align="left">98.343</td>
<td align="left">96.613</td>
<td align="left">98.113</td>
<td align="left">98.343</td>
<td align="left">97.893</td>
<td align="left">99.043</td>
<td align="left">98.013</td>
<td align="left">98.983</td>
<td align="left">98.113</td>
</tr>
<tr>
<td align="left">SNN</td>
<td align="left">98.923</td>
<td align="left">99.383</td>
<td align="left">98.406</td>
<td align="left">99.016</td>
<td align="left">99.466</td>
<td align="left">98.846</td>
<td align="left">99.036</td>
<td align="left">99.276</td>
<td align="left">99.046</td>
</tr>
<tr>
<td align="left">PPART</td>
<td align="left">99.233</td>
<td align="left">99.503</td>
<td align="left">99.43</td>
<td align="left">99.253</td>
<td align="left">99.723</td>
<td align="left">99.43</td>
<td align="left">99.53</td>
<td align="left">99.843</td>
<td align="left">99.23</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-7"><label>Figure 7</label><caption><title>Navigation path accuracy</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-7.png"/></fig>
<p>The efficiency of the navigation path identification process was analyzed. The results, listed in <xref ref-type="table" rid="table-4">Tab. 4</xref>, show that the proposed PPART approach offers better prediction accuracy at various time intervals. The results in <xref ref-type="table" rid="table-4">Tab. 4</xref> are graphically presented in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>.</p>
<table-wrap id="table-4"><label>Table 4</label><caption><title>Navigation path prediction accuracy on the various time interval</title></caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Methods</th>
<th align="left" colspan="9">Time (s)</th>
</tr>
<tr>
<th align="left">0.5</th>
<th align="left">1.0</th>
<th align="left">1.5</th>
<th align="left">2.0</th>
<th align="left">2.5</th>
<th align="left">3.0</th>
<th align="left">3.5</th>
<th align="left">4.0</th>
<th align="left">4.5</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">ANN</td>
<td align="left">96.68</td>
<td align="left">97.1</td>
<td align="left">96.61</td>
<td align="left">98.7</td>
<td align="left">97.16</td>
<td align="left">97.59</td>
<td align="left">97.64</td>
<td align="left">97.79</td>
<td align="left">97.84</td>
</tr>
<tr>
<td align="left">CNN</td>
<td align="left">98.89</td>
<td align="left">97.16</td>
<td align="left">98.66</td>
<td align="left">98.89</td>
<td align="left">98.44</td>
<td align="left">98.9</td>
<td align="left">98.56</td>
<td align="left">98.68</td>
<td align="left">98.66</td>
</tr>
<tr>
<td align="left">SNN</td>
<td align="left">99.12</td>
<td align="left">99.26</td>
<td align="left">98.953</td>
<td align="left">98.97</td>
<td align="left">99.07</td>
<td align="left">99.17</td>
<td align="left">98.77</td>
<td align="left">99.17</td>
<td align="left">98.87</td>
</tr>
<tr>
<td align="left">PPART</td>
<td align="left">99.78</td>
<td align="left">99.65</td>
<td align="left">99.49</td>
<td align="left">99.7</td>
<td align="left">99.97</td>
<td align="left">99.49</td>
<td align="left">99.35</td>
<td align="left">99.69</td>
<td align="left">99.66</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-8"><label>Figure 8</label><caption><title>Navigation path prediction accuracy based on time interval</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-8.png"/></fig>
<p>The error rates of the three baseline classifiers are compared with that of the PPART approach in <xref ref-type="fig" rid="fig-9">Fig. 9</xref>. The overall accuracy obtained by the four classifiers is illustrated in <xref ref-type="fig" rid="fig-10">Fig. 10</xref>, which demonstrates that the PPART approach provides an overall accuracy of up to 99.61&#x0025;.</p>
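<p>As a check on how Figs. 9 and 10 relate to the tables, the short Python sketch below averages each classifier's per-interval accuracies from Tab. 4 and reports the complementary error rate. It assumes the error rate is simply 100 minus the mean accuracy, which is one reading of the figures rather than a definition given in the paper.</p>
<preformat>
# Per-interval navigation path prediction accuracies, copied from Tab. 4.
accuracies = {
    "ANN":   [96.68, 97.1, 96.61, 98.7, 97.16, 97.59, 97.64, 97.79, 97.84],
    "CNN":   [98.89, 97.16, 98.66, 98.89, 98.44, 98.9, 98.56, 98.68, 98.66],
    "SNN":   [99.12, 99.26, 98.953, 98.97, 99.07, 99.17, 98.77, 99.17, 98.87],
    "PPART": [99.78, 99.65, 99.49, 99.7, 99.97, 99.49, 99.35, 99.69, 99.66],
}

for name, acc in accuracies.items():
    mean_acc = sum(acc) / len(acc)
    # Assumed relation: error rate = 100 - mean accuracy.
    print(f"{name}: mean accuracy {mean_acc:.2f}%, error rate {100 - mean_acc:.2f}%")
</preformat>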
<fig id="fig-9"><label>Figure 9</label><caption><title>Error rate</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-9.png"/></fig>
<fig id="fig-10"><label>Figure 10</label><caption><title>Accuracy</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_16152-fig-10.png"/></fig>
</sec>
<sec id="s4">
<label>4</label><title>Conclusion</title>
<p>The present work proposed the PPART neural network to address the inefficient tour trajectories that conventional intelligent algorithms produce owing to their poor prediction of unexpected obstacles in the environment. The potential field method was employed to conduct path exploration according to a given destination and the obstacles present in the exploration space, based on an energy field representation of that space. An ART neural network was then employed in conjunction with the detected obstacles to obtain the optimal navigation path, namely the shortest path that avoids all obstacles while achieving the operational objectives. The proposed system was implemented on the AlphaBot platform, and its performance was evaluated according to the obstacle prediction accuracy, the path detection accuracy, and the overall accuracy of the system. The results demonstrated that the proposed system provides a very high obstacle prediction accuracy of up to 99.61&#x0025;. Accordingly, the proposed tour planning design effectively predicts unexpected obstacles in the environment and thereby increases the overall efficiency of tour navigation. In future work, we will seek to improve the efficiency of the mobile robot navigation process by applying optimization techniques.</p>
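<p>For readers unfamiliar with the resonance mechanism that the ART component relies on, the following minimal Python sketch implements a generic ART-1 style category choice with a vigilance test. It is a textbook ART-1 step under stated assumptions (binary inputs, fast learning); it does not reproduce the pruning step or the exact PPART update rules.</p>
<preformat>
import numpy as np

def art1_step(x, prototypes, vigilance=0.9, beta=1.0):
    # One generic ART-1 presentation (not the paper's exact PPART rules):
    # choose the best-matching category, test it against the vigilance
    # threshold, and either update it or create a new category.
    # x: binary feature vector (assumed nonzero); prototypes: list of arrays.
    scores = [np.sum(np.minimum(x, w)) / (beta + np.sum(w)) for w in prototypes]
    for j in np.argsort(scores)[::-1]:            # try categories best-first
        w = prototypes[j]
        match = np.sum(np.minimum(x, w)) / np.sum(x)
        if match &gt;= vigilance:                   # resonance: vigilance passed
            prototypes[j] = np.minimum(x, w)      # fast-learning update
            return j
    prototypes.append(x.copy())                   # no resonance: new category
    return len(prototypes) - 1

# Example: three binary patterns are clustered online.
prototypes = []
for pattern in [np.array([1, 0, 1, 1]), np.array([1, 0, 1, 0]), np.array([0, 1, 0, 1])]:
    print("assigned to category", art1_step(pattern, prototypes))
</preformat>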
</sec>
</body>
<back>
<ack>
<p>We thank LetPub (<uri xlink:href="https://www.letpub.com">www.letpub.com</uri>) for its linguistic assistance during the preparation of this manuscript.</p>
</ack>
<fn-group>
<fn fn-type="other"><p><bold>Funding Statement:</bold> The authors received no specific funding for this study.</p></fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p></fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J. R. G.</given-names> <surname>Sanchez</surname></string-name>, <string-name><given-names>R. S.</given-names> <surname>Ortigoza</surname></string-name>, <string-name><given-names>S. T.</given-names> <surname>Mosqueda</surname></string-name>, <string-name><given-names>G. M.</given-names> <surname>Sanchez</surname></string-name>, <string-name><given-names>V. M. H.</given-names> <surname>Guzman</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Tracking control for mobile robots considering the dynamics of all their subsystems: Experimental implementation</article-title>,&#x201D; <source>Complexity</source>, vol. <volume>2017</volume>, no. <issue>5318504</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Han</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Xiao</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Lu</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Motion planning of autonomous mobile robot using recurrent fuzzy neural network trained by extended kalman filter</article-title>,&#x201D; <source>Computational Intelligence and Neuroscience</source>, vol. <volume>2019</volume>, no. <issue>1934575</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>16</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Mitsch</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Ghorbal</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Vogelbacher</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Platzer</surname></string-name></person-group>, &#x201C;<article-title>Formal verification of obstacle avoidance and navigation of ground robots</article-title>,&#x201D; <source>International Journal of Robotics Research</source>, vol. <volume>36</volume>, no. <issue>12</issue>, pp. <fpage>1312</fpage>&#x2013;<lpage>1340</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D. W.</given-names> <surname>Kim</surname></string-name>, <string-name><given-names>T. A.</given-names> <surname>Lasky</surname></string-name> and <string-name><given-names>S. A.</given-names> <surname>Velinsky</surname></string-name></person-group>, &#x201C;<article-title>Autonomous multi-mobile robot system: Simulation and implementation using fuzzy logic</article-title>,&#x201D; <source>International Journal of Control,<italic></italic> Automation and Systems</source>, vol. <volume>11</volume>, no. <issue>3</issue>, pp. <fpage>545</fpage>&#x2013;<lpage>554</lpage>, <year>2013</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Budiharto</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Jazidie</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Purwanto</surname></string-name></person-group>, &#x201C;<article-title>Indoor navigation using adaptive neuro fuzzy controller for servant robot</article-title>,&#x201D; in <conf-name>Proc. 2010 Second Int. Conf. on Computer Engineering and Applications</conf-name>, <conf-loc>Bali Island</conf-loc>, pp. <fpage>582</fpage>&#x2013;<lpage>586</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>N. A.</given-names> <surname>Vien</surname></string-name>, <string-name><given-names>N. H.</given-names> <surname>Viet</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Lee</surname></string-name> and <string-name><given-names>T.</given-names> <surname>Chung</surname></string-name></person-group>, &#x201C;<article-title>Obstacle avoidance path planning for mobile robot based on ant-q reinforcement learning algorithm</article-title>,&#x201D; in <conf-name>Proc. Int. Sym. on Neural Networks: Advances in Neural Networks</conf-name>, <conf-loc>Berlin, Heidelberg</conf-loc>, pp. <fpage>704</fpage>&#x2013;<lpage>713</lpage>, <year>2007</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Aouf</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Boussaid</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Sakly</surname></string-name></person-group>, &#x201C;<article-title>Same fuzzy logic controller for two-wheeled mobile robot navigation in strange environments</article-title>,&#x201D; <source>Journal of Robotics</source>, vol. <volume>2019</volume>, no. <issue>2465219</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>A. M.</given-names> <surname>Zou</surname></string-name>, <string-name><given-names>Z. G.</given-names> <surname>Hou</surname></string-name>, <string-name><given-names>S. Y.</given-names> <surname>Fu</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Tan</surname></string-name></person-group>, &#x201C;<article-title>Neural networks for mobile robot navigation: A survey</article-title>,&#x201D; in <conf-name>Proc. Int. Sym. on Neural Networks: Advances in Neural Networks</conf-name>, <conf-loc>Berlin, Heidelberg</conf-loc>, pp. <fpage>1218</fpage>&#x2013;<lpage>1226</lpage>, <year>2006</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Tai</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Shaohua</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Liu</surname></string-name></person-group>, &#x201C;<article-title>Autonomous exploration of mobile robots through deep neural networks</article-title>,&#x201D; <source>International Journal of Advanced Robotic Systems</source>, vol. <volume>14</volume>, no. <issue>4</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>9</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Thomas</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Hedley</surname></string-name></person-group>, &#x201C;<article-title>Fumebot: A deep convolutional neural network controlled robot</article-title>,&#x201D; <source>Robotics</source>, vol. <volume>8</volume>, no. <issue>3</issue>, pp. <fpage>50</fpage>&#x2013;<lpage>62</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Bing</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Meschede</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Rohrbein</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Huang</surname></string-name> and <string-name><given-names>A. C.</given-names> <surname>Knoll</surname></string-name></person-group>, &#x201C;<article-title>A survey of robotics control based on learning-inspired spiking neural networks</article-title>,&#x201D; <source>Frontiers in Neurorobotics</source>, vol. <volume>12</volume>, no. <issue>35</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>22</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Patnaik</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Khetarpal</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Behera</surname></string-name></person-group>, &#x201C;<article-title>Mobile robot navigation using evolving neural controller in unstructured environments</article-title>,&#x201D; <source>IFAC Proc. Volumes</source>, vol. <volume>47</volume>, no. <issue>1</issue>, pp. <fpage>758</fpage>&#x2013;<lpage>765</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Quinonez</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Ramirez</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Lizarraga</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Tostado</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Bekios</surname></string-name></person-group>, &#x201C;<article-title>Autonomous robot navigation based on pattern recognition techniques and artificial neural networks</article-title>,&#x201D; in <conf-name>Proc. Int. Work-Conf. on the Interplay Between Natural and Artificial Computation (IWINAC 2015): Bioinspired Computation in Artificial Systems</conf-name>, <publisher-name>Springer</publisher-name>, <conf-loc>Cham</conf-loc>, pp. <fpage>320</fpage>&#x2013;<lpage>329</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K. B.</given-names> <surname>Reed</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Majewicz</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Kallem</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Alterovitz</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Goldberg</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Robot-assisted needle steering</article-title>,&#x201D; <source>IEEE Robotics and Automation Magazine</source>, vol. <volume>18</volume>, no. <issue>4</issue>, pp. <fpage>35</fpage>&#x2013;<lpage>46</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Solea</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Filipescu</surname></string-name> and <string-name><given-names>U. J.</given-names> <surname>Nunes</surname></string-name></person-group>, &#x201C;<article-title>Sliding-mode control for trajectory-tracking of a wheeled mobile robot in presence of uncertainties</article-title>,&#x201D; in <conf-name>Proc. of 7th Asian Control Conf.</conf-name>, <conf-loc>Hong Kong</conf-loc>, pp. <fpage>1701</fpage>&#x2013;<lpage>1706</lpage>, <year>2009</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>I.</given-names> <surname>Hassani</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Maalej</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Rekik</surname></string-name></person-group>. &#x201C;<article-title>Robot path planning with avoiding obstacles in known environment using free segments and turning points algorithm</article-title>,&#x201D; <source>Mathematical Problems in Engineering</source>, vol. <volume>2018</volume>, no. <issue>2163278</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>13</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Seki</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Shibayama</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Kamiyaand</surname></string-name> <string-name><given-names>M.</given-names> <surname>Hikizu</surname></string-name></person-group>, &#x201C;<article-title>Practical obstacle avoidance using potential field for anonholonomic mobile robot with rectangular body</article-title>,&#x201D; in <conf-name>Proc. of the 13th IEEE Int. Conf. on Emerging Technologies and Factory Automation</conf-name>, <conf-loc>Hamburg, Germany</conf-loc>, pp. <fpage>326</fpage>&#x2013;<lpage>332</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>T.</given-names> <surname>Barszcz</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Bielecki</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Wojcik</surname></string-name></person-group>, &#x201C;<article-title>ART-Type artificial neural networks applications for classification of operational states in wind turbines</article-title>,&#x201D; in <conf-name>Proc. Int. Conf. on Artificial Intelligence and Soft Computing</conf-name>, <publisher-name>Springer</publisher-name>, <conf-loc>Berlin, Heidelberg</conf-loc>, pp. <fpage>11</fpage>&#x2013;<lpage>18</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>D. C.</given-names> <surname>Wunsch</surname></string-name>, <string-name><given-names>E. O.</given-names> <surname>Hair</surname></string-name> and <string-name><given-names>M. G.</given-names> <surname>Giesselmann</surname></string-name></person-group>, &#x201C;<article-title>Comparative analysis of regression and artificial neural network models for wind turbine power curve estimation</article-title>,&#x201D; <source>Journal of Solar Energy Engineering</source>, vol. <volume>123</volume>, no. <issue>4</issue>, pp. <fpage>327</fpage>&#x2013;<lpage>332</lpage>, <year>2001</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>