<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">IASC</journal-id>
<journal-id journal-id-type="nlm-ta">IASC</journal-id>
<journal-id journal-id-type="publisher-id">IASC</journal-id>
<journal-title-group>
<journal-title>Intelligent Automation &#x0026; Soft Computing</journal-title>
</journal-title-group>
<issn pub-type="epub">2326-005X</issn>
<issn pub-type="ppub">1079-8587</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">24310</article-id>
<article-id pub-id-type="doi">10.32604/iasc.2022.024310</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>MLP-PSO Framework with Dynamic Network Tuning for Traffic Flow Forecasting</article-title><alt-title alt-title-type="left-running-head">MLP-PSO Framework with Dynamic Network Tuning for Traffic Flow Forecasting</alt-title><alt-title alt-title-type="right-running-head">MLP-PSO Framework with Dynamic Network Tuning for Traffic Flow Forecasting</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Rajalakshmi</surname><given-names>V.</given-names></name>
<xref ref-type="aff" rid="aff-1">1</xref><email>vraji@svce.ac.in</email>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Ganesh Vaidyanathan</surname><given-names>S.</given-names></name>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<aff id="aff-1"><label>1</label><institution>Department of Computer Science and Engineering, Sri Venkateswara College of Engineering</institution>, <addr-line>Sriperumbudur</addr-line>, <country>India</country></aff>
<aff id="aff-2"><label>2</label><institution>Department of Electronics and Communication Engineering, Sri Venkateswara College of Engineering</institution>, <addr-line>Sriperumbudur</addr-line>, <country>India</country></aff>
</contrib-group><author-notes><corresp id="cor1"><label>&#x002A;</label>Corresponding Author: V. Rajalakshmi. Email: <email>vraji@svce.ac.in</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2022-03-21"><day>21</day>
<month>03</month>
<year>2022</year></pub-date>
<volume>33</volume>
<issue>3</issue>
<fpage>1335</fpage>
<lpage>1348</lpage>
<history>
<date date-type="received"><day>13</day><month>10</month><year>2021</year></date>
<date date-type="accepted"><day>13</day><month>12</month><year>2021</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2022 Rajalakshmi and Ganesh Vaidyanathan</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Rajalakshmi and Ganesh Vaidyanathan</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_IASC_24310.pdf"></self-uri>
<abstract>
<p>Traffic flow forecasting is a pressing requirement in Intelligent Transportation Systems (ITS). Various artificial intelligence frameworks and machine learning models are incorporated in today&#x2019;s ITS to enhance forecasting. Tuning the model parameters plays a vital role in designing an efficient model and improving the reliability of forecasting. Hence, the primary objective of this research is to propose a novel hybrid framework that tunes the parameters of a Multilayer Perceptron (MLP) using the swarm intelligence technique called Particle Swarm Optimization (PSO). The proposed MLP-PSO framework adjusts the weights and bias parameters of the MLP dynamically using PSO in order to optimize the network performance of the MLP. PSO continuously monitors the gradient loss of the MLP network while forecasting the traffic flow. The loss is reduced gradually using the Inertia Weight (denoted as &#x03C9;), a critical parameter of PSO that balances the local and global search possibilities. The Inertia Weight is varied in order to adjust the network parameters of the MLP dynamically. A comparison has been carried out between the MLP and MLP-PSO models with several variants of Inertia Weight initialization. The results show that the proposed MLP-PSO framework reduces the forecasting error and improves reliability and accuracy compared with the MLP model.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Multilayer perceptron</kwd>
<kwd>evolutionary computing</kwd>
<kwd>particle swarm optimization</kwd>
<kwd>swarm intelligence</kwd>
<kwd>forecasting</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>There are rapid advances in Intelligent Transportation Systems (ITS) due to their high potential to improve safety and mobility in transportation. The main motivation behind the implementation of ITS is the effective and efficient forecasting of traffic flow on roads, thus facilitating a safer and smarter use of traffic networks. Such systems aim to forecast with negligible error rates. The traffic flow forecast produced by a multilayer perceptron depends on the inputs and parameters of the network. The performance of the network can be optimized by bio-inspired optimization algorithms.</p>
<p>Bio-inspired optimization algorithms are a novel way of developing robust, competitive approaches based on the ideas and inspiration of biological evolution. In recent years, bio-inspired optimization algorithms have gained popularity in machine learning for solving challenging issues in science and engineering. These problems are typically nonlinear and subject to many nonlinear constraints, which makes determining the best solution difficult. Recent advances tend to utilize bio-inspired optimization algorithms to overcome the limitations of traditional optimization methods, providing a promising technique for solving complex optimization problems.</p>
<p>Various population-based, nature-inspired metaheuristic optimization algorithms are used with machine learning models in order to train the parameters of the model. Some of the algorithms used are Ant Colony Optimization (ACO) [<xref ref-type="bibr" rid="ref-1">1</xref>], Particle Swarm Optimization (PSO), the Firefly Algorithm (FA) [<xref ref-type="bibr" rid="ref-2">2</xref>], Moth Flame Optimization (MFO), Chicken Swarm Optimization (CSO) [<xref ref-type="bibr" rid="ref-3">3</xref>], Elephant Herding Optimization (EHO) [<xref ref-type="bibr" rid="ref-4">4</xref>], and Cuckoo Search (CS) [<xref ref-type="bibr" rid="ref-5">5</xref>].</p>
<p>Particle Swarm Optimization (PSO) is utilized in this study to improve the performance of the Multilayer Perceptron and to find the optimal solution for forecasting traffic flow. With this forecasting model the error rate is reduced and hence the prediction accuracy is improved.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Work</title>
<p>Garg et al. used PSO to train an Artificial Neural Network (ANN) for drilling flank wear detection. The network parameters were tuned using PSO. The results show that the PSO-trained ANN performs much better than a Back Propagation Neural Network (BPNN).</p>
<p>The main feature of PSO is that it provides a minimum velocity constraint. This feature avoids premature convergence in MLP network parameter tuning. It also reduces the impact of increasing the dimensions of the MLP. These improvements in network tuning [<xref ref-type="bibr" rid="ref-6">6</xref>] provide a reduced prediction error rate, as stated by Pu et al.</p>
<p>A combined scheme using PSO and Newton&#x2019;s law of motion, named centripetal accelerated particle swarm optimization (CAPSO), was proposed by Beheshti et al. CAPSO is applied to MLP to optimize the network parameters. CAPSO applied to MLP was able to classify diseases in various medical datasets more efficiently than nature-inspired algorithms such as PSO [<xref ref-type="bibr" rid="ref-7">7</xref>], the imperialist competitive algorithm, and GSA applied to MLP.</p>
<p>Dang et al. applied the PSO and Firefly algorithms to ANN to estimate scour depths. The results prove that Firefly [<xref ref-type="bibr" rid="ref-8">8</xref>] on ANN was effective compared to PSO on ANN. Guofeng Zhou et al. applied Artificial Bee Colony (ABC) optimization and Particle Swarm Optimization (PSO) to MLP for estimating residential heating and cooling energy loads. The R<sup>2</sup>, MAE, and RMSE metrics were used to measure the performance of the models. The results show that PSO on MLP works more efficiently than ABC on MLP [<xref ref-type="bibr" rid="ref-9">9</xref>].</p>
<p>Multimean particle swarm optimization is a novel optimization approach introduced by Mehmet et al. This algorithm was used to train a multilayer feed-forward ANN. On multilayer feed-forward ANN training, the technique was found to generate superior results to multiple swarm optimization (MSO) on benchmark datasets [<xref ref-type="bibr" rid="ref-10">10</xref>].</p>
<p>In cognitive research, the data acquired for training contains uncertain information. This uncertainty is termed fuzziness. Training the MLP with this fuzzy data yields what Dash et al. call the Fuzzy MLP (FMLP) model [<xref ref-type="bibr" rid="ref-11">11</xref>]. MLP normally suffers from the local minima problem. To overcome this, three metaheuristic algorithms, particle swarm optimization, the genetic algorithm, and gravitational search, are used to optimize the parameters of the FMLP.</p>
<p>To optimize the operation of thermal generating units, Thang et al. suggested an updated firefly method. Three fundamental modifications to the firefly algorithm were introduced at various levels. The improved algorithm was applied to five different benchmarks and attained better performance than the original firefly algorithm [<xref ref-type="bibr" rid="ref-12">12</xref>].</p>
<p>Khatibi et al. proposed hybrid algorithms: MLP with the Levenberg&#x2013;Marquardt (LM) backpropagation algorithm [<xref ref-type="bibr" rid="ref-13">13</xref>] and MLP with the Firefly Algorithm. These algorithms were used to predict stream flow directions, and the results show that MLP with the Firefly Algorithm performs better.</p>
<p>Emad et al. used the swarm intelligence Firefly method as a classifier. The suggested technique has three stages [<xref ref-type="bibr" rid="ref-14">14</xref>]: feature selection, model construction using FA to classify the class labels, and prediction of the classes. The classifier&#x2019;s performance was proved effective by applying it to seven different datasets.</p>
<p>Aboul et al. [<xref ref-type="bibr" rid="ref-15">15</xref>] employed Moth Flame Optimization to optimize the parameters of a support vector machine for detecting tomato diseases. The fitness function in MFO captures the dependency among the features more efficiently and thereby maximized the classification accuracy. Xiaodong et al. proposed a modified Ameliorated Moth Flame Optimization (AMFO) algorithm [<xref ref-type="bibr" rid="ref-16">16</xref>] in which the flames are produced with Gaussian mutation and the moth positions are updated accordingly. This enables the MFO to attain the global minimum faster. On 23 separate benchmarks, this algorithm was compared to 9 state-of-the-art models and achieved good results with a mean square error of 0.0542.</p>
<p>A hybrid algorithm combining a Deep Neural Network and Chicken Swarm Optimization was proposed by Sengar et al. [<xref ref-type="bibr" rid="ref-17">17</xref>]. The DNN was initially trained with 24&#x2005;h wind energy data. The error rate during training of the DNN model is reduced using CSO. The wind energy forecast was then carried out using the DNN-CSO algorithm. This algorithm showed better performance when compared with the DNN and ANN models.</p>
<p>Saghatforoush et al. predicted flyrock and back break in order to minimize the environmental impacts of back break, flyrock, and ground vibration [<xref ref-type="bibr" rid="ref-18">18</xref>]. An Artificial Neural Network (ANN) was used for flyrock and back break prediction. To increase prediction accuracy, the parameters of the ANN were fine-tuned using Ant Colony Optimization (ACO).</p>
<p>Hossein et al. presented a hybrid approach that combines elephant herding optimization (EHO) and MLP [<xref ref-type="bibr" rid="ref-19">19</xref>] to optimize the cooling loads in heating, ventilation, and air conditioning. ACO with MLP and Harris hawks optimization (HHO) with MLP were used to compare the outcomes. The proposed EHO with MLP outperformed the other two metaheuristic algorithms in terms of accuracy.</p>
<p>The performance of feed-forward neural networks was improved by optimizing their parameters. Ashraf et al. [<xref ref-type="bibr" rid="ref-20">20</xref>] compared the effectiveness of various single-objective and multi-objective optimization algorithms. AbdElRahman et al. proposed a strategy to refine the cell structure of Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN) using Ant Colony Optimization (ACO) [<xref ref-type="bibr" rid="ref-21">21</xref>]. The ACO-optimized LSTM RNNs performed better than the traditional algorithms.</p>
<p>A simple matching-grasshopper new cat swarm optimization algorithm (SM-GNCSOA) was proposed by Bansal et al. [<xref ref-type="bibr" rid="ref-22">22</xref>] to select the relevant features efficiently. It was used to optimize the performance of the Multilayer Perceptron (MLP). It gives better results when applied to various datasets and is very efficient at avoiding local minima problems. Monalisa et al. used the Elephant Herding Optimization algorithm [<xref ref-type="bibr" rid="ref-23">23</xref>] for tuning the network parameters of an ANN. This resulted in the diagnosis of cancer with a good classification accuracy of about 0.9837.</p>
<p>One of the most important topics in Particle Swarm Optimization (PSO) [<xref ref-type="bibr" rid="ref-24">24</xref>] is determining the inertia weight &#x03C9;. The inertia weight was introduced into PSO to balance its global and local search capabilities. Initially, a method was proposed for adjusting the inertia weight adaptively based on particle velocity data. Second, Zheng et al. propose that both position and velocity data be used to adaptively modify the inertia weight.</p>
<p>Martins et al. developed the linear decreasing inertia weight (LDIW) technique [<xref ref-type="bibr" rid="ref-25">25</xref>] to increase the performance of the initial particle swarm optimization. However, when dealing with big optimization problems, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to have the defect of premature convergence due to a lack of momentum for particles to execute exploitation as the programme approaches completion.</p>
<p>Particle swarm optimization (PSO) is a well-known swarm intelligence-based optimization technique [<xref ref-type="bibr" rid="ref-26">26</xref>]. PSO&#x2019;s inertia weight is a crucial parameter. During the last 20 years, many methodologies for determining the inertia weight have been proposed. Kushwah et al. propose a Dynamic Inertia Weight method in which probability-based inertia weights are applied to PSO. Marini et al. demonstrate the potential of particle swarm optimization for tackling many types of optimization problems in chemometrics by providing a full description of the method [<xref ref-type="bibr" rid="ref-27">27</xref>] as well as several worked examples in diverse applications.</p>
<p>Shang et al. present a novel hybrid prediction model based on a combination kernel function least squares support vector machine and multivariate phase space reconstruction [<xref ref-type="bibr" rid="ref-28">28</xref>] to increase traffic prediction accuracy. PSO is used to optimize the parameters of the model. Cong et al. proved that the least squares support vector machine (LSSVM) [<xref ref-type="bibr" rid="ref-29">29</xref>] has a lot of potential in forecasting problems, particularly when the values of its two parameters are selected using appropriate heuristic approaches. However, these meta-heuristics are difficult to understand, and obtaining the global optimal solution with them is hard. The fruit fly optimization algorithm (FOA) is a novel heuristic method that is easy to learn and quickly converges to the global optimal solution.</p>
<p>An upgraded PSO-BP (particle swarm optimization-back propagation) prediction model is built to estimate total vessel traffic flow in a particular port location. The SAPSO-BP neural network is a prediction model presented by Zhang et al. [<xref ref-type="bibr" rid="ref-30">30</xref>] that updates the parameters of a BP neural network using the SAPSO (self-adaptive particle swarm optimization) algorithm.</p>
<p>In this work, these researchers&#x2019; ideas motivated optimizing the performance of the MLP using Particle Swarm Optimization.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>Multilayer Perceptron (MLP) Framework</title>
<p>Multilayer Perceptron is a fully connected feedforward backpropagation network. It can have three or more layers, consisting of an input layer, an output layer, and one or more hidden layers, forming a deep neural network. In a fully connected network, each node in one layer connects to every node in the next layer with a certain weight w. The model is trained by changing the connection weights after each iteration based on the amount of error in the output. <xref ref-type="fig" rid="fig-1">Figs. 1</xref> and <xref ref-type="fig" rid="fig-2">2</xref> show the MLP framework and the weight and bias initialization.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Multilayer perceptron</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-1.png"/>
</fig>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Bias and weights representation in MLP</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-2.png"/>
</fig>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>MLP-PSO framework for traffic flow forecasting</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-3.png"/>
</fig>
<p>Algorithm 1 is an implementation of a multilayer perceptron with backpropagation for time series forecasting. X and Y are the input and target time series data points, which are fed as input data to the multilayer perceptron algorithm. First, initialize the two main parameters of the perceptron algorithm, namely the learning rate (&#x03B7;) and the number of backpropagation passes (epoch). Then, initialize the following:<list list-type="bullet"><list-item>
<p>W<sub>h</sub>-weight between the input layer and hidden layer</p></list-item><list-item>
<p>W<sub>out</sub>-weight between the hidden layer and output layer</p></list-item><list-item>
<p>B<sub>h</sub>-bias to hidden layer neurons</p></list-item><list-item>
<p>B<sub>out</sub>-bias to output layer neurons</p></list-item></list>
<fig id="fig-9">
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-9.png"/>
</fig></p>
<p>The following are computed for forward propagation:<list list-type="order"><list-item>
<p>Hidden layer input (H<sub>i</sub>) which is the dot product of Y and W<sub>h</sub> added to B<sub>h</sub></p></list-item><list-item>
<p>Hidden layer activation (H<sub>a</sub>) obtained by applying sigmoid activation function to H<sub>i</sub></p></list-item><list-item>
<p>Output layer input (O<sub>i</sub>) which is the dot product of H<sub>a</sub> with W<sub>out</sub> added to B<sub>out</sub></p></list-item><list-item>
<p>Predicted output (Y<sub>out</sub>) obtained by applying sigmoid activation function to O<sub>i</sub></p></list-item></list></p>
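The forward-propagation steps above can be sketched in Python with NumPy. This is a minimal illustration, not the paper's implementation; the layer sizes, random seed, and variable names are assumptions chosen to mirror the notation of Section 3:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 1              # illustrative layer sizes

# Initialization of W_h, B_h, W_out, B_out as in Algorithm 1
W_h = rng.standard_normal((n_in, n_hidden))
B_h = rng.standard_normal(n_hidden)
W_out = rng.standard_normal((n_hidden, n_out))
B_out = rng.standard_normal(n_out)

def forward(X):
    H_i = X @ W_h + B_h                      # 1. hidden layer input
    H_a = sigmoid(H_i)                       # 2. hidden layer activation
    O_i = H_a @ W_out + B_out                # 3. output layer input
    Y_out = sigmoid(O_i)                     # 4. predicted output
    return H_a, Y_out

X = rng.standard_normal((5, n_in))           # five sample input windows
H_a, Y_out = forward(X)
```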
<p>After forward propagation, an output is predicted which may contain error. In order to minimize the error rate, the network is back propagated by updating the weights and bias at all intermediate layers between output and input layer. The following computations are done in backward propagation:<list list-type="order"><list-item>
<p>Error (E) which is the difference between original output (<inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mover accent="true"><mml:mi>Y</mml:mi><mml:mo>&#x00AF;</mml:mo></mml:mover></mml:math></inline-formula>) and predicted output (Y<sub>out</sub>)</p></list-item><list-item>
<p>Slope or gradient of output layer (g<sub>out</sub>) and hidden layer (g<sub>h</sub>) which are obtained by applying the derivative of the sigmoid function to Y<sub>out</sub> and H<sub>a</sub> respectively.</p></list-item><list-item>
<p>Delta of the output layer (d<sub>out</sub>) is the product of E and g<sub>out</sub></p></list-item><list-item>
<p>Error in hidden layer (E<sub>h</sub>) is the dot product of d<sub>out</sub> and W<sub>out</sub></p></list-item><list-item>
<p>Delta of the hidden layer (d<sub>h</sub>) is the product of E<sub>h</sub> and g<sub>h</sub></p></list-item></list></p>
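A minimal sketch of one backward-propagation step, combining the error, gradient, and delta computations above with the usual weight and bias updates. The learning rate value, array shapes, and function name are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(a):
    # derivative of the sigmoid written in terms of its output a
    return a * (1.0 - a)

def train_step(X, Y_true, W_h, B_h, W_out, B_out, eta=0.1):
    # forward pass (Section 3)
    H_a = sigmoid(X @ W_h + B_h)
    Y_out = sigmoid(H_a @ W_out + B_out)

    E = Y_true - Y_out                  # 1. error at the output
    g_out = sigmoid_deriv(Y_out)        # 2. gradient of the output layer
    g_h = sigmoid_deriv(H_a)            #    gradient of the hidden layer
    d_out = E * g_out                   # 3. delta of the output layer
    E_h = d_out @ W_out.T               # 4. error in the hidden layer
    d_h = E_h * g_h                     # 5. delta of the hidden layer

    # update weights and biases, scaled by the learning rate eta
    W_out += eta * H_a.T @ d_out
    B_out += eta * d_out.sum(axis=0)
    W_h += eta * X.T @ d_h
    B_h += eta * d_h.sum(axis=0)
    return float((E ** 2).mean())       # mean squared error for this step
```

Repeating `train_step` for the chosen number of epochs drives the training error down, which is the loss that PSO monitors in the hybrid framework of Section 4.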
</sec>
<sec id="s4">
<label>4</label>
<title>MLP-PSO Framework</title>
<p>Particle Swarm Optimization (PSO) is a meta-heuristic optimization technique based on natural swarm behaviour, such as fish schools and bird flocks. PSO is a simple simulation of a social system. The PSO algorithm was created with the goal of graphically simulating the elegant but unpredictable choreography of a flock of birds.</p>
<p>Each solution in PSO is a &#x201C;bird&#x201D; within the problem space, known as a &#x201C;particle&#x201D;. All particles have fitness values, evaluated by the fitness function to be optimized, as well as velocities that direct the particles&#x2019; flight. The particles follow the current optimum particles through the search space.</p>
<p>This technique is a basic yet effective population-based, adaptive, and stochastic technique for tackling simple and challenging optimization problems. Because it does not require the gradient of the problem, the technique can be applied to a wide range of optimization problems. In PSO, a swarm of particles (a collection of solutions) is scattered at random across the search space. The food at each particle&#x2019;s location is specified by the target function (the value of the objective function). Every particle is aware of its initial value, its best value (locally best solution), the swarm-wide best value (globally best solution), and its velocity as determined by the objective function.</p>
<p>A single static population is maintained by PSO, whose particles are slightly adjusted in response to variations in the search space. This method is referred to as directed mutation. The particles never expire; instead, they are moved to different regions of the space by directed mutation.</p>
<p>A small number of parameters, such as the number of particles in the swarm, the dimension of the search space, the particle&#x2019;s velocity and position, the cognitive rate, the social rate, the inertia weight, and other random factors, control the PSO algorithm&#x2019;s behaviour and usefulness in optimizing a given problem. The versatility of PSO lies in tuning these parameters for an optimal solution. To be more specific, handling the inertia weight has attracted many researchers&#x2019; interest, because the parameter &#x03C9; controls the divergence or convergence of the particles. The position and velocity of the particles in a d-dimensional space are represented by x<sub>i</sub> &#x003D; x<sub>i1</sub>, . . . , x<sub>id</sub> and v<sub>i</sub> &#x003D; v<sub>i1</sub>, . . . , v<sub>id</sub> respectively. Particle movement is required to find the best solution in the search space.</p>
<p>PSO starts with a set of random particles (solutions) and then updates generations to look for optima. Each particle is updated in each iteration by comparing two &#x201C;best&#x201D; values. The first is the best solution (fitness) obtained by the particle thus far, called PBest. The best value obtained so far by any particle in the population is the second &#x201C;best&#x201D; value recorded by the particle swarm optimizer; it is referred to as GBest, which stands for &#x201C;global best&#x201D;.</p>
<p>Consider the following notations:<list list-type="bullet"><list-item>
<p><italic>f</italic>-MLP function to optimize</p></list-item><list-item>
<p><italic>p<sub>s</sub></italic>-number of particles in the swarm</p></list-item><list-item>
<p>d-dimension of the search space</p></list-item><list-item>
<p>c<sub>1</sub>-cognitive coefficient</p></list-item><list-item>
<p>c<sub>2</sub>-social coefficient</p></list-item><list-item>
<p><italic>&#x03C9;</italic>-inertia</p></list-item><list-item>
<p><italic>v<sub>i</sub></italic>-velocity of the i<sup>th</sup> particle</p></list-item><list-item>
<p><italic>x<sub>i</sub></italic>-position of the i<sup>th</sup> particle</p></list-item><list-item>
<p><italic>PBest<sub>i</sub></italic>-local best solution of the i<sup>th</sup> particle</p></list-item><list-item>
<p><italic>GBest</italic>-global best among other particles in the swarm</p></list-item><list-item>
<p>r<sub>1</sub> and r<sub>2</sub>-random values obtained from rand() function</p></list-item></list></p>
<p>For the next iteration, the velocity and position of the particles are updated as<disp-formula id="eqn-1"><label>(1)</label>
<mml:math id="mml-eqn-1" display="block"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo>&#x2217;</mml:mo><mml:msub><mml:mi>v</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2217;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>&#x2217;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>P</mml:mi><mml:mi>B</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>t</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2217;</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x2217;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>G</mml:mi><mml:mi>B</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:msub><mml:mi>t</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula><disp-formula id="eqn-2"><label>(2)</label>
<mml:math id="mml-eqn-2" display="block"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
</disp-formula></p>
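Eqs. (1) and (2) can be sketched as a complete PSO loop minimizing an arbitrary objective. The swarm size, iteration count, fixed inertia value, bounds, and the sphere test function are illustrative assumptions; in the proposed framework, f would be the MSE of the MLP:

```python
import numpy as np

def pso_minimize(f, d, n_particles=20, n_iter=100,
                 omega=0.65, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(7)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, d))        # particle positions
    v = np.zeros((n_particles, d))                   # particle velocities
    pbest = x.copy()                                 # local best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()         # global best position

    for _ in range(n_iter):
        r1 = rng.random((n_particles, d))
        r2 = rng.random((n_particles, d))
        # Eq. (1): velocity update
        v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Eq. (2): position update
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pbest_val                   # refresh PBest per particle
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[pbest_val.argmin()].copy()     # refresh GBest
    return gbest, float(pbest_val.min())

# sphere function as a stand-in objective
best_x, best_f = pso_minimize(lambda z: float((z ** 2).sum()), d=3)
```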
<p><xref ref-type="fig" rid="fig-3">Fig. 3</xref> shows the proposed MLP-PSO framework for traffic flow forecasting. The objective of MLP-PSO framework is to minimize the error rate compared to MLP. Hence, the objective function of PSO is to minimize the Mean Square Error (MSE).</p>
<sec id="s4_1">
<label>4.1</label>
<title>Parameter Selection in PSO</title>
<p>The inertia, cognitive coefficient and social coefficient are the parameters that control the behaviour of the swarms. At each iteration, the random terms are used to accelerate the cognitive and social behavior. The weights r<sub>1</sub> and r<sub>2</sub> are used to stochastically adjust the cognitive and social acceleration respectively. r<sub>1</sub> and r<sub>2</sub> are unique for each iteration and each particle. They are random values set in the range of [0, 1].</p>
<p>The c<sub>1</sub> and c<sub>2</sub> parameters define the group&#x2019;s ability to exploit the best personal solutions and the best global solution found over the iterations, respectively. When c<sub>1</sub> is high, there is no convergence since each particle focuses on its own best solutions. When c<sub>2</sub> is high, the optimal solution cannot be reached. Most studies state that c<sub>1</sub> &#x002B; c<sub>2</sub> should not exceed 4.</p>
<p>For N iterations and every current iteration i, c<sub>2</sub> linearly increases from 0.5 to 3.5 whereas c<sub>1</sub> linearly decreases from 3.5 to 0.5. This ensures c<sub>1</sub> &#x002B; c<sub>2</sub> &#x003D; 4 [<xref ref-type="bibr" rid="ref-25">25</xref>].</p>
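The linear schedule for the acceleration coefficients can be written as a small helper. The function name and the convention i = 0 .. N&#x2212;1 are assumptions for illustration:

```python
def acceleration_coefficients(i, N, c_min=0.5, c_max=3.5):
    """Linearly decrease c1 from 3.5 to 0.5 and increase c2 from 0.5 to 3.5
    over iterations i = 0 .. N-1, so that c1 + c2 = 4 at every iteration."""
    frac = i / (N - 1)
    c1 = c_max - (c_max - c_min) * frac
    c2 = c_min + (c_max - c_min) * frac
    return c1, c2
```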
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Variants of Inertia Weight (&#x03C9;)</title>
<p>Particles&#x2019; ability to refine the best solutions found so far is known as exploitation. Particles&#x2019; ability to evaluate the entire search space is referred to as exploration. A good balance of exploitation and exploration has a strong impact on convergence to the optimum solution. The inertia weight is a critical parameter in the PSO process, since it influences convergence and the trade-off between exploration and exploitation. Several variants of the inertia weight are used here to show their impact on the forecasted traffic flows. Various studies state that &#x03C9; can initially be 0.9 (referred to as &#x03C9;<sub>start</sub>) and converge down to 0.4 (referred to as &#x03C9;<sub>end</sub>) in subsequent iterations.</p>
<sec id="s4_2_1">
<label>4.2.1</label>
<title>Constant Inertia Weight (CIW)</title>
<p>A constant value is set for the inertia in all iterations. In this work, the constant value is set as<disp-formula id="eqn-3"><label>(3)</label>
<mml:math id="mml-eqn-3" display="block"><mml:mi>C</mml:mi><mml:mi>I</mml:mi><mml:mi>W</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:mstyle></mml:math>
</disp-formula></p>
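With the literature values &#x03C9;<sub>start</sub> = 0.9 and &#x03C9;<sub>end</sub> = 0.4 mentioned above, Eq. (3) fixes the inertia at their midpoint, as this sketch shows:

```python
omega_start, omega_end = 0.9, 0.4    # values from the preceding discussion

# Eq. (3): constant inertia weight used in every iteration
CIW = (omega_start + omega_end) / 2
```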
</sec>
<sec id="s4_2_2">
<label>4.2.2</label>
<title>Random Inertia Weight (RIW)</title>
<p>The Random Inertia Weight is fully determined by a random value. The generated random value is in the range [0, 1], so the computed RIW lies in the range [0.4, 0.9]<disp-formula id="eqn-4"><label>(4)</label>
<mml:math id="mml-eqn-4" display="block"><mml:mi>R</mml:mi><mml:mi>I</mml:mi><mml:mi>W</mml:mi><mml:mo>=</mml:mo><mml:mn>0.4</mml:mn><mml:mo>+</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:mstyle></mml:math>
</disp-formula></p>
</sec>
<sec id="s4_2_3">
<label>4.2.3</label>
<title>Chaotic Inertia Weight (ChIW)</title>
<p>A dynamic nonlinear system that is highly sensitive to its initial value is said to be chaotic. Chaos possesses the properties of ergodicity and stochasticity. The goal is to use the merits of chaotic optimization to prevent PSO from becoming trapped in a local optimum during the search process.<disp-formula id="eqn-5"><label>(5)</label>
<mml:math id="mml-eqn-5" display="block"><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03BC;</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula>where <italic>z<sub>k</sub></italic> is the <italic>k<sup>th</sup></italic> chaotic number in the range [0, 1] and &#x03BC; &#x003D; 4, such that <italic>z</italic><sub>0</sub> &#x2208; (0, 1) and <italic>z</italic><sub>0</sub> &#x2209; {0.25, 0.5, 0.75}<disp-formula id="eqn-6"><label>(6)</label>
<mml:math id="mml-eqn-6" display="block"><mml:mi>C</mml:mi><mml:mi>h</mml:mi><mml:mi>I</mml:mi><mml:mi>W</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math>
</disp-formula></p>
</sec>
<sec id="s4_2_4">
<label>4.2.4</label>
<title>Linear Decreasing Inertia Weight (LDIW)</title>
<p>The inertia weight decreases linearly with every subsequent iteration. In general, a large inertia weight is advised in the initial phase of the search to increase global exploration (discovering new areas), whereas the inertia weight is reduced in later stages for local exploration, fine-tuning the current search region. With <italic>i<sub>max</sub></italic> and <italic>i</italic> denoting the maximum and current iterations respectively, the Linear Decreasing Inertia Weight (LDIW) is computed as follows:<disp-formula id="eqn-7"><label>(7)</label>
<mml:math id="mml-eqn-7" display="block"><mml:mi>L</mml:mi><mml:mi>D</mml:mi><mml:mi>I</mml:mi><mml:mi>W</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03C9;</mml:mi><mml:mrow><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:math>
</disp-formula></p>
</sec>
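<p>The four inertia weight schedules of Eqs. (3)&#x2013;(7) can be sketched together as follows. This is a minimal illustration assuming &#x03C9;<sub>start</sub> = 0.9 and &#x03C9;<sub>end</sub> = 0.4 as stated above; the function names are our own:</p>

```python
import random

W_START, W_END = 0.9, 0.4  # omega_start, omega_end from Section 4.2

def ciw():
    # Constant Inertia Weight, Eq. (3): midpoint of the range.
    return (W_START + W_END) / 2

def riw():
    # Random Inertia Weight, Eq. (4): uniform in [0.4, 0.9].
    return 0.4 + random.random() / 2

def ldiw(i, i_max):
    # Linear Decreasing Inertia Weight, Eq. (7): decays from
    # W_START at i = 0 to W_END at i = i_max.
    return (W_START - W_END) * (i_max - i) / i_max + W_END

def chiw(i, i_max, z):
    # Chaotic Inertia Weight, Eqs. (5)-(6): the logistic map with
    # mu = 4 perturbs the omega_end term of the linear schedule.
    # Returns the weight and the next chaotic number z_{k+1}.
    z_next = 4.0 * z * (1.0 - z)
    w = (W_START - W_END) * (i_max - i) / i_max + W_END * z_next
    return w, z_next
```

For example, LDIW starts at 0.9 and reaches exactly 0.4 at the final iteration, while ChIW follows the same envelope but jitters chaotically around it.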
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Experimental Setup and Discussion</title>
<sec id="s5_1">
<label>5.1</label>
<title>Metrics</title>
<p>For a comprehensive evaluation, three metrics are used to validate the models. Mean Square Error (MSE) captures the degree of variation in the results. Mean Absolute Error (MAE) captures the average magnitude of the prediction errors. <italic>R</italic><sup>2</sup> measures the degree of correlation between the original and predicted values. The notation is as follows:<list list-type="bullet"><list-item>
<p><italic>A<sub>t</sub></italic>-the original traffic flow at time t</p></list-item><list-item>
<p><italic>&#x0100;</italic>-average of the original traffic flows</p></list-item><list-item>
<p><italic>F<sub>t</sub></italic>-the predicted traffic flow at time t</p></list-item><list-item>
<p>num-number of traffic flow data</p></list-item></list>
<fig id="fig-10">
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-10.png"/>
</fig></p>
<p>Metrics are given by,<disp-formula id="eqn-8"><label>(8)</label>
<mml:math id="mml-eqn-8" display="block"><mml:mi>M</mml:mi><mml:mi>S</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>n</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:munderover><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:munderover><mml:mo>&#x2061;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mn>2</mml:mn></mml:msup></mml:mstyle></mml:math>
</disp-formula><disp-formula id="eqn-9"><label>(9)</label>
<mml:math id="mml-eqn-9" display="block"><mml:mi>M</mml:mi><mml:mi>A</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>n</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:munderover><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:munderover><mml:mrow><mml:mo>&#x2223;</mml:mo></mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mrow><mml:mo>&#x2223;</mml:mo></mml:mrow></mml:mstyle></mml:math>
</disp-formula><disp-formula id="eqn-10"><label>(10)</label>
<mml:math id="mml-eqn-10" display="block"><mml:msup><mml:mi>R</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:munderover><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:munderover><mml:mo>&#x2061;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mrow><mml:munderover><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:munderover><mml:mo>&#x2061;</mml:mo><mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>A</mml:mi><mml:mi>t</mml:mi></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mover><mml:mi>A</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle></mml:math>
</disp-formula></p>
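<p>The three metrics of Eqs. (8)&#x2013;(10) can be sketched in a few lines. This is a minimal illustration, not the authors' code; <italic>R</italic><sup>2</sup> is computed with the conventional denominator summing (<italic>A<sub>t</sub></italic> &#x2212; <italic>&#x0100;</italic>)<sup>2</sup> over the original flows:</p>

```python
def metrics(actual, forecast):
    """MSE, MAE and R^2 for equal-length sequences of traffic flows."""
    num = len(actual)
    # Eq. (8): mean of squared forecast errors.
    mse = sum((f - a) ** 2 for a, f in zip(actual, forecast)) / num
    # Eq. (9): mean of absolute forecast errors.
    mae = sum(abs(f - a) for a, f in zip(actual, forecast)) / num
    # Eq. (10): 1 minus residual sum of squares over total sum of
    # squares about the mean of the original flows (A-bar).
    a_bar = sum(actual) / num
    ss_res = sum((f - a) ** 2 for a, f in zip(actual, forecast))
    ss_tot = sum((a - a_bar) ** 2 for a in actual)
    return mse, mae, 1 - ss_res / ss_tot
```

A perfect forecast gives MSE = MAE = 0 and <italic>R</italic><sup>2</sup> = 1; larger errors drive MSE and MAE up and <italic>R</italic><sup>2</sup> toward (or below) zero.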
</sec>
<sec id="s5_2">
<label>5.2</label>
<title>Data Acquisition</title>
<p>The time series traffic flow dataset used in this paper is recorded from MIDAS Site-5825 on the M48 westbound between the M4 and J1 (102022401), UK [<xref ref-type="bibr" rid="ref-31">31</xref>]. The dataset comprises traffic flow counts for every 15-min interval of each day in every month. The model is trained and tested with the Monday 8 AM data for the year 2020. The data is preprocessed before training to obtain a stationary time series. The dataset contains 44 traffic flow values for Mondays; 39 of these are used to train the models, and the flows for the last 5 Mondays of 2020 are forecast and tested with the various models.</p>
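<p>The split described above can be sketched as follows. The flow values here are stand-ins, and the first-order differencing step is an assumption, since the paper does not specify how stationarity is obtained:</p>

```python
def difference(series):
    # First-order differencing, one common way to make a flow series
    # stationary (assumed here; the paper does not name its method).
    return [b - a for a, b in zip(series, series[1:])]

flows = list(range(44))               # stand-in for the 44 Monday 8 AM values
train, test = flows[:39], flows[39:]  # 39 for training, last 5 Mondays held out
```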
</sec>
<sec id="s5_3">
<label>5.3</label>
<title>Parameter Initialization</title>
<p>A 1&#x2009;&#x00D7;&#x2009;10&#x2009;&#x00D7;&#x2009;6&#x2009;&#x00D7;&#x2009;1 MLP model is built: one node each in the input and output layers, with 10 and 6 nodes in the first and second hidden layers respectively. The learning rate &#x03B7; is set to 0.1. After several runs of the algorithm with varying numbers of epochs, the gradient loss was found to be minimized at 50 epochs. The weights and biases are initialized at random during the initial run of PSO; thereafter, PSO generates the weights and biases based on the loss from the MLP, as stated in Algorithm 2. The positions and velocities of the particles are updated in the range 0&#x2013;1. The size of the search space is set to 100. The other PSO parameters are set as discussed in Section 4.1.</p>
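<p>The 1&#x2009;&#x00D7;&#x2009;10&#x2009;&#x00D7;&#x2009;6&#x2009;&#x00D7;&#x2009;1 network above can be sketched as a forward pass over a flattened parameter vector, as a PSO particle would encode the weights and biases. This is a minimal illustration; the sigmoid activation is an assumption, since the paper does not name one:</p>

```python
import math

SIZES = [1, 10, 6, 1]  # the 1 x 10 x 6 x 1 architecture of Section 5.3
# One weight per (input, neuron) pair plus one bias per neuron.
N_PARAMS = sum(SIZES[i] * SIZES[i + 1] + SIZES[i + 1] for i in range(3))

def forward(params, x):
    """Evaluate the MLP on scalar input x, reading weights and biases
    sequentially from the flattened particle vector `params`."""
    idx, out = 0, [x]
    for layer in range(3):
        n_in, n_out = SIZES[layer], SIZES[layer + 1]
        nxt = []
        for _ in range(n_out):
            w = params[idx:idx + n_in]; idx += n_in
            b = params[idx]; idx += 1
            z = sum(wi * oi for wi, oi in zip(w, out)) + b
            nxt.append(1 / (1 + math.exp(-z)))  # sigmoid (assumed)
        out = nxt
    return out[0]
```

This layout gives 93 parameters per particle (10 + 10 weights/biases into the first hidden layer, 60 + 6 into the second, 6 + 1 into the output), which is the dimensionality PSO must search.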
</sec>
<sec id="s5_4">
<label>5.4</label>
<title>Results</title>
<p><xref ref-type="table" rid="table-1">Tab. 1</xref> shows the training results of the various models. During training, the MLP-PSO with Linear Decreasing Inertia Weight (LDIW) yields an MSE of 0.182, MAE of 0.250 and <italic>R</italic><sup>2</sup> of 0.981, whereas the MLP yields an MSE of 0.720, MAE of 0.640 and <italic>R</italic><sup>2</sup> of 0.882. Testing results of all the models are displayed in <xref ref-type="table" rid="table-2">Tab. 2</xref>. Similarly, during testing the MLP-PSO (LDIW) yields an MSE of 0.148, MAE of 0.240 and <italic>R</italic><sup>2</sup> of 0.979, while the MLP yields an MSE of 0.666, MAE of 0.620 and <italic>R</italic><sup>2</sup> of 0.899. The results show that MLP-PSO with each variant of inertia weight performs better than the MLP model. <xref ref-type="fig" rid="fig-4">Figs. 4</xref>&#x2013;<xref ref-type="fig" rid="fig-8">8</xref> show the comparison of actual and predicted forecasts from all the models.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>MLP forecast</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-4.png"/>
</fig>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>MLP-PSO (CIW) forecast</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-5.png"/>
</fig>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>MLP-PSO (RIW) forecast</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-6.png"/>
</fig>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>MLP-PSO (ChIW) forecast</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-7.png"/>
</fig>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>MLP-PSO (LDIW) forecast</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24310-fig-8.png"/>
</fig>
<table-wrap id="table-1"><label>Table 1</label>
<caption>
<title>Training performance</title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Model</th>
<th align="left">Inertia variants</th>
<th align="left">MSE</th>
<th align="left">MAE</th>
<th align="left">R<sup>2</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">MLP</td>
<td align="left">&#x2014;</td>
<td align="left">0.720</td>
<td align="left">0.640</td>
<td align="left">0.882</td>
</tr>
<tr>
<td align="left">MLP-PSO</td>
<td align="left">CIW</td>
<td align="left">0.690</td>
<td align="left">0.650</td>
<td align="left">0.922</td>
</tr>
<tr>
<td align="left">MLP-PSO</td>
<td align="left">RIW</td>
<td align="left">0.651</td>
<td align="left">0.820</td>
<td align="left">0.929</td>
</tr>
<tr>
<td align="left">MLP-PSO</td>
<td align="left">ChIW</td>
<td align="left">0.392</td>
<td align="left">0.510</td>
<td align="left">0.962</td>
</tr>
<tr>
<td align="left">MLP-PSO</td>
<td align="left">LDIW</td>
<td align="left">0.182</td>
<td align="left">0.250</td>
<td align="left">0.981</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-2"><label>Table 2</label>
<caption>
<title>Testing performance</title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Model</th>
<th align="left">Inertia variants</th>
<th align="left">MSE</th>
<th align="left">MAE</th>
<th align="left">R<sup>2</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">MLP</td>
<td align="left">&#x2014;</td>
<td align="left">0.666</td>
<td align="left">0.620</td>
<td align="left">0.899</td>
</tr>
<tr>
<td align="left">MLP-PSO</td>
<td align="left">CIW</td>
<td align="left">0.554</td>
<td align="left">0.660</td>
<td align="left">0.918</td>
</tr>
<tr>
<td align="left">MLP-PSO</td>
<td align="left">RIW</td>
<td align="left">0.646</td>
<td align="left">0.780</td>
<td align="left">0.925</td>
</tr>
<tr>
<td align="left">MLP-PSO</td>
<td align="left">ChIW</td>
<td align="left">0.373</td>
<td align="left">0.450</td>
<td align="left">0.955</td>
</tr>
<tr>
<td align="left">MLP-PSO</td>
<td align="left">LDIW</td>
<td align="left">0.148</td>
<td align="left">0.240</td>
<td align="left">0.979</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="s6">
<label>6</label>
<title>Conclusion</title>
<p>Traffic flow forecasting is the task of estimating the number of vehicles that will flow through a particular lane in a specified future period. Accurate estimation of future traffic flows is quite challenging. Numerous machine learning models are prevalent in forecasting time series data. In this paper, a novel approach to optimizing the MLP framework with PSO is proposed. The time series traffic flow data of the MIDAS highway is considered for forecasting in this study. The MLP and MLP-PSO models with variants of inertia weight initialization are trained and tested to suggest a model with high accuracy. In testing, the MLP-PSO with Linear Decreasing Inertia Weight (LDIW) provides better accuracy, with a Mean Square Error (MSE) of 0.148, Mean Absolute Error (MAE) of 0.240 and <italic>R</italic><sup>2</sup> of 0.979, compared to the MLP with an MSE of 0.666, MAE of 0.620 and <italic>R</italic><sup>2</sup> of 0.899. Finally, the results show that MLP-PSO with every variant of inertia weight achieves higher accuracy than the MLP.</p>
</sec>
</body>
<back>
<ack>
<p>We would like to thank our colleagues from our institution for providing deeper insights and expertise that helped our research to a great extent.</p>
</ack><fn-group>
<fn fn-type="other">
<p><bold>Funding Statement:</bold> The authors received no specific funding for this study.</p>
</fn>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Garg</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Patra</surname></string-name> and <string-name><given-names>S. K.</given-names> <surname>Pal</surname></string-name></person-group>, &#x201C;<article-title>Particle swarm optimization of a neural network model in a machining process</article-title>,&#x201D; <source>Sadhana</source>, vol. <volume>39</volume>, pp. <fpage>533</fpage>&#x2013;<lpage>548</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Nayak</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Naik</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Dinesh</surname></string-name></person-group>, &#x201C;<article-title>Firefly algorithm in biomedical and health care</article-title>,&#x201D; <source>Advances, Issues and Challenges. SN Computer Science</source>, vol. <volume>1</volume>, pp. <fpage>311</fpage>&#x2013;<lpage>320</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Deb</surname></string-name>, <string-name><given-names>X. Z.</given-names> <surname>Gao</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Tammi</surname></string-name></person-group>, &#x201C;<article-title>Recent studies on chicken swarm optimization algorithm: A review</article-title>,&#x201D; <source>Artificial Intelligence Review</source>, vol. <volume>53</volume>, pp. <fpage>1737</fpage>&#x2013;<lpage>1765</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Juan</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Hong</surname></string-name>, <string-name><given-names>A. H.</given-names> <surname>Alavi</surname></string-name> and <string-name><given-names>G.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Elephant herding optimization: Variants, hybrids, and applications</article-title>,&#x201D; <source>Mathematics</source>, vol. <volume>8</volume>, pp. <fpage>1415</fpage>&#x2013;<lpage>1425</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. S.</given-names> <surname>Joshi</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Omkar</surname></string-name>, <string-name><given-names>G. M.</given-names> <surname>Kakandika</surname></string-name> and <string-name><given-names>V. M.</given-names> <surname>Nandedkar</surname></string-name></person-group>, &#x201C;<article-title>Cuckoo search optimization-A review</article-title>,&#x201D; <source>Materials Today: Proceedings</source>, vol. <volume>4</volume>, no. <issue>8</issue>, pp. <fpage>7262</fpage>&#x2013;<lpage>7269</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>X.</given-names> <surname>Pu</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Fang</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name></person-group>, &#x201C;<chapter-title>Multilayer perceptron networks training using particle swarm optimization with minimum velocity constraints</chapter-title>,&#x201D; in <source>Advances in Neural Networks&#x2013;ISNN 2007. Lecture Notes in Computer Science</source>, <publisher-name>Springer</publisher-name>, <publisher-loc>Berlin, Heidelberg</publisher-loc>, vol. <volume>4493</volume>, <year>2007</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Beheshti</surname></string-name>, <string-name><given-names>S. M. H.</given-names> <surname>Shamsuddin</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Beheshti</surname></string-name></person-group>, &#x201C;<article-title>Enhancement of artificial neural network learning using centripetal accelerated particle swarm optimization for medical diseases diagnosis</article-title>,&#x201D; <source>Soft Computing</source>, vol. <volume>18</volume>, pp. <fpage>2253</fpage>&#x2013;<lpage>2270</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N. M.</given-names> <surname>Dang</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Tran Anh</surname></string-name> and <string-name><given-names>T. D.</given-names> <surname>Dang</surname></string-name></person-group>, &#x201C;<article-title>ANN optimized by PSO and firefly algorithms for predicting scour depths around bridge piers</article-title>,&#x201D; <source>Engineering with Computers</source>, vol. <volume>37</volume>, pp. <fpage>293</fpage>&#x2013;<lpage>303</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Guofeng</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Hossein</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Mehdi</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Zongjie</surname></string-name></person-group>, &#x201C;<article-title>Employing artificial bee colony and particle swarm techniques for optimizing a neural network in prediction of heating and cooling loads of residential buildings</article-title>,&#x201D; <source>Journal of Cleaner Production</source>, vol. <volume>254</volume>, pp. 1&#x2013;14, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Mehmet</surname></string-name> and <string-name><given-names>I. H.</given-names> <surname>Mohammed</surname></string-name></person-group>, &#x201C;<article-title>A novel multimean particle swarm optimization algorithm for nonlinear continuous optimization: Application to feed-forward neural network training</article-title>,&#x201D; <source>Scientific Programming</source>, vol. <volume>18</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>9</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T.</given-names> <surname>Dash</surname></string-name> and <string-name><given-names>H. S.</given-names> <surname>Behera</surname></string-name></person-group>, &#x201C;<article-title>A comprehensive study on evolutionary algorithm-based multilayer perceptron for real-world data classification under uncertainty</article-title>,&#x201D; <source>Expert Systems</source>, vol. <volume>36</volume>, pp. <fpage>145</fpage>&#x2013;<lpage>161</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T. N.</given-names> <surname>Thang</surname></string-name>, <string-name><given-names>V. Q.</given-names> <surname>Nguyen</surname></string-name> and <string-name><given-names>V. D.</given-names> <surname>Le</surname></string-name></person-group>, &#x201C;<article-title>Improved firefly algorithm: A novel method for optimal operation of thermal generating units</article-title>,&#x201D; <source>Complexity</source>, vol. <volume>20</volume>, pp. 23, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Khatibi</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Ghorbani</surname></string-name> and <string-name><given-names>F. P.</given-names> <surname>Akhoni</surname></string-name></person-group>, &#x201C;<article-title>Stream flow predictions using nature-inspired firefly algorithms and a multiple model strategy&#x2013;Directions of innovation towards next generation practices</article-title>,&#x201D; <source>Advanced Engineering Informatics</source>, vol. <volume>34</volume>, pp. <fpage>80</fpage>&#x2013;<lpage>89</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. M.</given-names> <surname>Emad</surname></string-name>, <string-name><given-names>M. F.</given-names> <surname>Enas</surname></string-name>, <string-name><given-names>H.</given-names> <surname>El</surname></string-name>, <string-name><given-names>T. W.</given-names> <surname>Khaled</surname></string-name> and <string-name><given-names>I. S.</given-names> <surname>Akram</surname></string-name></person-group>, &#x201C;<article-title>A novel classifier based on firefly algorithm</article-title>,&#x201D; <source>Journal of King Saud University-Computer and Information Sciences</source>, vol. <volume>32</volume>, no. <issue>10</issue>, pp. <fpage>1173</fpage>&#x2013;<lpage>1181</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E. H.</given-names> <surname>Aboul</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Tarek</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Usama</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Hesham</surname></string-name></person-group>, &#x201C;<article-title>An improved moth flame optimization algorithm based on rough sets for tomato diseases detection</article-title>,&#x201D; <source>Computers and Electronics in Agriculture</source>, vol. <volume>136</volume>, pp. <fpage>86</fpage>&#x2013;<lpage>96</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Xiaodong</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Yiming</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Le</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Miao</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>Ameliorated moth-flame algorithm and its application for modeling of silicon content in liquid iron of blast furnace based fast learning network</article-title>,&#x201D; <source>Applied Soft Computing</source>, vol. <volume>94</volume>, pp. <fpage>106418</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Sengar</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Liu</surname></string-name></person-group>, &#x201C;<article-title>Ensemble approach for short term load forecasting in wind energy system using hybrid algorithm</article-title>,&#x201D; <source>Journal of Ambient Intelligence and Humanized Computing</source>, vol. <volume>11</volume>, pp. <fpage>5297</fpage>&#x2013;<lpage>5314</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Saghatforoush</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Monjezi</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Shirani Faradonbeh</surname></string-name></person-group>, &#x201C;<article-title>Combination of neural network and ant colony optimization algorithms for prediction and optimization of flyrock and back-break induced by blasting</article-title>,&#x201D; <source>Engineering with Computers</source>, vol. <volume>32</volume>, pp. <fpage>255</fpage>&#x2013;<lpage>266</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Hossein</surname></string-name>, <string-name><given-names>A. M.</given-names> <surname>Mohammed</surname></string-name> and <string-name><given-names>K. F.</given-names> <surname>Loke</surname></string-name></person-group>, &#x201C;<article-title>Novel swarm-based approach for predicting the cooling load of residential buildings based on social behavior of elephant herds</article-title>,&#x201D; <source>Energy and Buildings</source>, vol. <volume>20</volume>, pp. <fpage>109579</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. H.</given-names> <surname>Ashraf</surname></string-name>, <string-name><given-names>A. H.</given-names> <surname>Somaia</surname></string-name>, <string-name><given-names>A. A.</given-names> <surname>Mohamed</surname></string-name>, <string-name><given-names>A. M.</given-names> <surname>Salem</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Mahmoud</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Nature-inspired algorithms for feed-forward neural network classifiers: A survey of one decade of research</article-title>,&#x201D; <source>Ain Shams Engineering Journal</source>, vol. <volume>11</volume>, no. <issue>3</issue>, pp. <fpage>659</fpage>&#x2013;<lpage>675</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>AbdElRahman</surname></string-name>, <string-name><given-names>E. J.</given-names> <surname>Fatima</surname></string-name>, <string-name><given-names>H.</given-names> <surname>James</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Brandon</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Travis</surname></string-name></person-group>, &#x201C;<article-title>Using ant colony optimization to optimize long short-term memory recurrent neural networks</article-title>,&#x201D; in <conf-name>Proc. Genetic and Evolutionary Computation Conf.</conf-name>, <conf-loc>Kyoto, Japan</conf-loc>, pp. <fpage>13</fpage>&#x2013;<lpage>20</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Bansal</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Kumar</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Pasrija</surname></string-name></person-group>, &#x201C;<article-title>A hybrid grasshopper and new cat swarm optimization algorithm for feature selection and optimization of multi-layer perceptron</article-title>,&#x201D; <source>Soft Computing</source>, vol. <volume>24</volume>, pp. <fpage>15463</fpage>&#x2013;<lpage>15489</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Monalisa</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Soumya</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Urmila</surname></string-name> and <string-name><given-names>R. S.</given-names> <surname>Manas</surname></string-name></person-group>, &#x201C;<article-title>Elephant herding optimization technique based neural network for cancer prediction</article-title>,&#x201D; <source>Informatics in Medicine Unlocked</source>, vol. <volume>21</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>10</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Zheng</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Yuhui</surname></string-name></person-group>, &#x201C;<article-title>Inertia weight adaptation in particle swarm optimization algorithm</article-title>,&#x201D; in <conf-name>Proc. Int. Conf. on Advances in Swarm Intelligence</conf-name>, <conf-loc>Chongqing, China</conf-loc>, pp. <fpage>71</fpage>&#x2013;<lpage>79</lpage>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. A.</given-names> <surname>Martins</surname></string-name> and <string-name><given-names>A. O.</given-names> <surname>Adewumi</surname></string-name></person-group>, &#x201C;<article-title>On the performance of linear decreasing inertia weight particle swarm optimization for global optimization</article-title>,&#x201D; <source>The Scientific World Journal</source>, vol. <volume>13</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>12</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y. S.</given-names> <surname>Kushwah</surname></string-name> and <string-name><given-names>R. K.</given-names> <surname>Shrivastava</surname></string-name></person-group>, &#x201C;<article-title>Particle swarm optimization with dynamic inertia weights</article-title>,&#x201D; <source>International Journal of Research and Scientific Innovation (IJRSI)</source>, vol. <volume>4</volume>, no. <issue>8</issue>, pp. <fpage>129</fpage>&#x2013;<lpage>135</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Marini</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Walczak</surname></string-name></person-group>, &#x201C;<article-title>Particle swarm optimization (PSO). A tutorial</article-title>,&#x201D; <source>Chemometrics and Intelligent Laboratory Systems</source>, vol. <volume>149</volume>, pp. <fpage>153</fpage>&#x2013;<lpage>165</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Shang</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Lin</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Bing</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Zhou</surname></string-name></person-group>, &#x201C;<article-title>Short-term traffic flow prediction model using particle swarm optimization&#x2013;Based combined kernel function-least squares support vector machine combined with chaos theory</article-title>,&#x201D; <source>Advances in Mechanical Engineering</source>, vol. <volume>8</volume>, no. <issue>8</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>12</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Cong</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>Traffic flow forecasting by a least squares support vector machine with a fruit fly optimization algorithm</article-title>,&#x201D; <source>Procedia Engineering</source>, vol. <volume>37</volume>, pp. <fpage>59</fpage>&#x2013;<lpage>68</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Yin</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>Z.</given-names> <surname>Hui</surname></string-name></person-group>, &#x201C;<article-title>Vessel traffic flow analysis and prediction by an improved PSO-BP mechanism based on AIS data</article-title>,&#x201D; <source>Evolving Systems</source>, vol. <volume>10</volume>, pp. <fpage>397</fpage>&#x2013;<lpage>407</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="web">Dataset: <uri xlink:href="http://tris.highwaysengland.co.uk/detail/trafficflowdata&#x0023;site-collapse">http://tris.highwaysengland.co.uk/detail/trafficflowdata&#x0023;site-collapse</uri>, <year>2011</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>