<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">EE</journal-id>
<journal-id journal-id-type="nlm-ta">EE</journal-id>
<journal-id journal-id-type="publisher-id">EE</journal-id>
<journal-title-group>
<journal-title>Energy Engineering</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-0118</issn>
<issn pub-type="ppub">0199-8595</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">26185</article-id>
<article-id pub-id-type="doi">10.32604/ee.2023.026185</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Improving Performance of Recurrent Neural Networks Using Simulated Annealing for Vertical Wind Speed Estimation</article-title>
<alt-title alt-title-type="left-running-head">Improving Performance of Recurrent Neural Networks Using Simulated Annealing for Vertical Wind Speed Estimation</alt-title>
<alt-title alt-title-type="right-running-head">Improving Performance of Recurrent Neural Networks Using Simulated Annealing for Vertical Wind Speed Estimation</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Rehman</surname><given-names>Shafiqur</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><email>srehman@kfupm.edu.sa</email></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Nuha</surname><given-names>Hilal H.</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Shaikhi</surname><given-names>Ali Al</given-names></name><xref ref-type="aff" rid="aff-3">3</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Akbar</surname><given-names>Satria</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>Mohandes</surname><given-names>Mohamed</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><xref ref-type="aff" rid="aff-3">3</xref></contrib>
<aff id="aff-1"><label>1</label><institution>Interdisciplinary Research Center for Renewable Energy and Power Systems (IRC-REPS), KFUPM</institution>, <addr-line>Dhahran, 31261</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-2"><label>2</label><institution>HUMIC Engineering, School of Computing, Telkom University</institution>, <addr-line>Bandung, 40257</addr-line>, <country>Indonesia</country></aff>
<aff id="aff-3"><label>3</label><institution>Electrical Engineering Department, King Fahd University of Petroleum &#x0026; Minerals (KFUPM)</institution>, <addr-line>Dhahran, 31261</addr-line>, <country>Saudi Arabia</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Shafiqur Rehman. Email: <email>srehman@kfupm.edu.sa</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2023</year></pub-date>
<pub-date date-type="pub" publication-format="electronic"><day>9</day>
<month>2</month>
<year>2023</year></pub-date>
<volume>120</volume>
<issue>4</issue>
<fpage>775</fpage>
<lpage>789</lpage>
<history>
<date date-type="received">
<day>22</day>
<month>8</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>12</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Rehman et al.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Rehman et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_EE_26185.pdf"></self-uri>
<abstract>
<p>Accurate vertical wind speed (WS) estimation is required to determine the potential for wind farm installation. In general, the vertical extrapolation of WS to different heights must consider site-dependent parameters such as the wind shear coefficient, roughness length, and atmospheric conditions. The novelty presented in this article is the introduction of a two-step optimization of the Recurrent Neural Network (RNN) model to estimate WS at higher heights using measurements from lower heights. First, the RNN is optimized to minimize a differentiable cost function, namely, the mean squared error (MSE), using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Second, the RNN is optimized with simulated annealing (RNN-SA) to reduce a non-differentiable cost function, namely, the mean absolute error (MAE). WS at 50 m height is estimated by training the RNN-SA with the actual WS data at 10&#x2013;40 m heights. The estimated WS at 50 m and the measured WS at 10&#x2013;40 m heights are then used to train the RNN-SA to obtain WS at 60 m height. This procedure is repeated until the WS is estimated at a height of 180 m. The RNN-SA performance in vertical WS extrapolation is compared with that of the standard RNN, Multilayer Perceptron (MLP), Support Vector Machine (SVM), and state-of-the-art methods such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks. The estimated values are also compared with a real WS dataset acquired using LiDAR and evaluated using four error metrics, namely, mean squared error (MSE), mean absolute percentage error (MAPE), mean bias error (MBE), and coefficient of determination (<inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>). 
The numerical experimental results show that the MSE values between the estimated and actual WS at 180 m height for the RNN-SA, RNN, MLP, and SVM methods are 2.09, 2.12, 2.37, and 2.63, respectively.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Vertical wind speed estimation</kwd>
<kwd>recurrent neural networks</kwd>
<kwd>simulated annealing</kwd>
<kwd>multilayer perceptron</kwd>
<kwd>support vector machine</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Wind power plants have been used globally as a renewable energy source for household and industrial energy demand. Cumulative installed wind power capacity worldwide reached 743 GW in 2020 [<xref ref-type="bibr" rid="ref-1">1</xref>], and more wind power plants are planned to meet growing demand and to combat greenhouse gas (GHG) emissions globally. However, wind power plant installation requires accurate and precise data on the availability and variability of wind speed (WS) at any site, since WS may vary rapidly and increases with height, as shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>, where the WS at 180 m is higher than that at heights from 20 to 140 m. Therefore, more energy can be harvested if the hub height of the turbine is high. Today&#x2019;s modern wind turbines have a rotor diameter of 120 m or more and a hub height of 100 m or more, with investment and installation costs proportional to these dimensions. This indicates that an accurate estimate of the wind profile at the turbine hub height is needed to accurately estimate the energy that the installed turbine can generate and to prevent failure to achieve the return on investment due to an inaccurate wind profile investigation.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Measured wind speeds at 20, 60, 100, 140, and 180 m</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-1.tif"/>
</fig>
<p>The increase in WS with height is due to reduced human activity and lower surface roughness at higher heights. In addition, the atmosphere at higher heights has less friction and is more stable, which may account for the higher WS there [<xref ref-type="bibr" rid="ref-2">2</xref>]. However, in most countries, WS is measured at lower heights, from 10 to 40 m. Therefore, for accurate estimation of the wind power potential of an area for turbine hub heights above 40 m, WS must be estimated at those heights using physical, statistical, or machine learning techniques.</p>
<p>Several approaches have been used to estimate WS at higher positions based on measurements at lower heights, as indicated by the increasing WS average and its percentage increment in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>. Under stable atmospheric conditions, logarithmic approaches and power-law techniques for WS estimation have shown good accuracy [<xref ref-type="bibr" rid="ref-3">3</xref>]. Parametric approaches require field surveys to obtain meteorological coefficients, as reported by Banuelos-Ruedas&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-4">4</xref>], who used Hellman&#x2019;s law, the 1/7th power law, and a logarithmic approach to estimate WS at a given altitude. In addition to parametric methods, artificial intelligence (AI) approaches have been widely used for the temporal prediction of WS [<xref ref-type="bibr" rid="ref-5">5</xref>,<xref ref-type="bibr" rid="ref-6">6</xref>] due to their practicality. For example, Islam&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-7">7</xref>] proposed a hybrid artificial neural network for WS estimation up to 100 m using measurements at heights of 10&#x2013;40 m. T&#x00FC;rkan&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-8">8</xref>] compared seven AI techniques for vertical WS estimation at heights up to 30 m using WS measured at a height of 10 m. Emeksiz [<xref ref-type="bibr" rid="ref-9">9</xref>] developed multigene genetic programming for the temporal estimation of WS at different altitudes. Mohandes&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-10">10</xref>] employed the Restricted Boltzmann Machine (RBM) technique to pre-train a neural network to estimate WS up to 120 m height using WS measurements at lower altitudes. Ti&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-11">11</xref>] proposed an artificial neural network (ANN) based wake model to predict the power produced by a wind farm. 
Wake modelling was also combined with bilateral convolutional neural networks for dynamic wind farm power prediction [<xref ref-type="bibr" rid="ref-12">12</xref>]. Yang&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-13">13</xref>] developed a double-layer machine learning framework using ANN and Bayesian machine learning for wind farm power prediction. Modern and advanced AI approaches have thus been widely investigated for temporal WS estimation, but only a limited number of works address vertical WS estimation using WS measured at lower heights. The literature above therefore shows the need to investigate a hybrid AI method for WS estimation above 120 m.</p>
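The parametric baselines mentioned above (the 1/7th power law and the logarithmic profile) can be sketched as follows; the reference speed, heights, and roughness length in the example are hypothetical illustrations, not values from this study.

```python
import math

def power_law(v_ref, h_ref, h, alpha=1/7):
    """Extrapolate wind speed from h_ref to h using the 1/7th power law."""
    return v_ref * (h / h_ref) ** alpha

def log_law(v_ref, h_ref, h, z0=0.03):
    """Extrapolate using the logarithmic profile with roughness length z0 (m)."""
    return v_ref * math.log(h / z0) / math.log(h_ref / z0)

v10 = 5.0  # hypothetical 10 m measurement, m/s
print(round(power_law(v10, 10, 100), 2))  # -> 6.95
print(round(log_law(v10, 10, 100), 2))    # -> 6.98
```

Both formulas reduce to the measured value at the reference height; their divergence at higher heights is why site-specific coefficients (shear exponent, roughness length) matter.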
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Wind speed average and its corresponding percentage increment</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-2.tif"/>
</fig>
<p>The contributions and novelties of this study are as follows:
<list list-type="order">
<list-item>
<p>The paper introduces a two-step performance enhancement of recurrent neural networks, using gradient-based and simulated annealing methods, to estimate WS at higher turbine positions from values measured at lower heights.</p></list-item>
<list-item>
<p>This paper presents a comparison of the proposed method with the standard RNN, Multilayer Perceptron (MLP), Support Vector Machine (SVM), and state-of-the-art methods such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks.</p></list-item>
<list-item>
<p>A real dataset from Dhahran, Kingdom of Saudi Arabia is used to evaluate the accuracy of the proposed method.</p></list-item>
<list-item>
<p>All methods are evaluated using error metrics such as mean squared error (MSE), mean absolute percentage error (MAPE), mean bias error (MBE), and coefficient of determination&#x00A0;(<inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>).</p></list-item>
</list></p>
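The four error metrics listed above can be computed as in the following sketch; the arrays are hypothetical examples, not the study's LiDAR measurements.

```python
import numpy as np

def metrics(y_true, y_pred):
    """Return MSE, MAPE (%), MBE, and R^2 for paired actual/estimated values."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    mape = np.mean(np.abs(err / y_true)) * 100.0
    mbe = np.mean(y_pred - y_true)  # positive => systematic over-estimation
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mse, mape, mbe, r2

y = np.array([5.0, 6.0, 7.0, 8.0])      # hypothetical actual WS, m/s
yhat = np.array([5.2, 5.9, 7.3, 7.8])   # hypothetical estimates
print(metrics(y, yhat))
```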
<p>The remainder of this paper is organized into methodology, experimental results, and conclusion sections.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Methodology</title>
<sec id="s2_1">
<label>2.1</label>
<title>Proposed Method and Its Mathematical Models</title>
<p>In this paper, we propose a hybrid method, namely, recurrent neural networks with simulated annealing (RNN-SA), for vertical WS extrapolation. The RNN uses the Elman model, where the hidden unit output is connected back to itself using an adjustable recurrent weight <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mrow><mml:mo>(</mml:mo><mml:mi>w</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> in addition to feeding the output layer with weights (<inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>V</mml:mi></mml:math></inline-formula>), as shown in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>. Although the architecture is simple, this model is capable of performing a variety of prediction tasks [<xref ref-type="bibr" rid="ref-14">14</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>]. The data is processed in two stages. First, the data from the input <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mrow><mml:mo>(</mml:mo><mml:mi>U</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> and recurrent connections are provided to the hidden units as inputs. Next, the output from the hidden units is sent both to the output layer and, through the recurrent connections, back to the hidden units. Mathematically, the output of the <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mi>k</mml:mi></mml:math></inline-formula>th hidden unit <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> given the <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>n</mml:mi></mml:math></inline-formula>th input vector is given by:</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>RNN with Elman model for WS extrapolation</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-3.tif"/>
</fig>
<p><disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>U</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>b</mml:mi></mml:math></inline-formula> is the bias vector. The input <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>x</mml:mi></mml:math></inline-formula> represents the actual and estimated WS from the lower heights. The output layer provides the WS value at one step higher. The output <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mi>y</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> of the <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>n</mml:mi></mml:math></inline-formula>th sample is given by:</p>
<p><disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mi>y</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mi>h</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>This study uses the hyperbolic tangent activation function, defined by:</p>
<p><disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>z</mml:mi></mml:mrow></mml:msup><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>z</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>z</mml:mi></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>z</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
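Combining Eqs. (1)&#x2013;(3), the forward pass of the Elman RNN can be sketched as follows; the layer sizes and random weights are illustrative assumptions, not the trained model of this study.

```python
import numpy as np

rng = np.random.default_rng(0)
D, L = 4, 20                              # inputs (lower-height WS), hidden units
U = rng.normal(scale=0.1, size=(L, D))    # input weights
W = rng.normal(scale=0.1, size=(L,))      # recurrent weights, one per hidden unit
V = rng.normal(scale=0.1, size=(1, L))    # output weights
b, b_y = np.zeros(L), 0.0                 # biases

def step(x, h_prev):
    """One step: Eq. (1) for the hidden state, Eq. (2) for the output."""
    h = np.tanh(U @ x + W * h_prev + b)   # sigma is tanh, Eq. (3)
    y = np.tanh(V @ h + b_y)              # scaled WS one height step up
    return h, y

h = np.zeros(L)
for x in rng.normal(size=(5, D)):         # five hypothetical input vectors
    h, y = step(x, h)
print(y.shape)                            # -> (1,)
```

Note the recurrent term `W * h_prev`: each hidden unit feeds back only to itself, matching the per-unit recurrent weight in Eq. (1).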
<p>In this study, the numerical experiment is carried out by dividing the data into three parts, i.e., 70%, 10%, and 20% for training, validation, and testing, respectively. The RNN-SA is trained using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [<xref ref-type="bibr" rid="ref-16">16</xref>]. Since the model uses a moderate number of units and layers, the BFGS algorithm is well suited for this unconstrained optimization [<xref ref-type="bibr" rid="ref-17">17</xref>]. The BFGS algorithm is utilized to obtain the RNN model with minimum mean squared error (MSE):</p>
<p><disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mi>M</mml:mi><mml:mi>S</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mi>N</mml:mi></mml:mfrac></mml:math></disp-formula></p>
<p>It can be noticed that the <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mi>M</mml:mi><mml:mi>S</mml:mi><mml:mi>E</mml:mi></mml:math></inline-formula> is a continuously differentiable function that represents the difference between the predicted outputs (<inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula>) and the actual values (y). The BFGS algorithm uses an iterative update to obtain the weights that minimize the cost function <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mi>e</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>w</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>E</mml:mi></mml:math></inline-formula> from <xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref>. The weight update is specified by:</p>
<p><disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula>where <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denote the step size and direction at iteration <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mi>k</mml:mi></mml:math></inline-formula>. The direction update is given by:</p>
<p><disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left left" rowspacing=".2em" columnspacing="1em" displaystyle="false"><mml:mtr><mml:mtd><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mi>B</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mrow><mml:mtext>e</mml:mtext></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mi>s</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msubsup><mml:mi>S</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>+</mml:mo><mml:mstyle displaystyle="true" 
scriptlevel="0"><mml:mfrac><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mi>y</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msubsup><mml:mi>y</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>,</mml:mo><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>I</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mrow><mml:mtext>e</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi mathvariant="normal">&#x2207;</mml:mi><mml:mrow><mml:mtext>e</mml:mtext></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:msub><mml:mi>B</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is an approximation of the Hessian matrix that represents the second derivative 
of the cost function. The validation results showed that the RNN with <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mi>L</mml:mi><mml:mo>=</mml:mo><mml:mn>20</mml:mn></mml:math></inline-formula> hidden units and number of iterations <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mi>K</mml:mi><mml:mo>=</mml:mo><mml:mn>20</mml:mn><mml:mo>,</mml:mo></mml:math></inline-formula> exhibited the best performance. The model is further optimized using simulated annealing (SA), an optimization method that accepts not only better solution candidates but also worse ones with a certain probability, which enables the search to escape local minima and explore new regions for candidates [<xref ref-type="bibr" rid="ref-18">18</xref>]. In the context of the RNN, the SA is utilized to further optimize the trained RNN with respect to a non-differentiable cost function, namely, the mean absolute error (MAE):
<p><disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mi>M</mml:mi><mml:mi>A</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>N</mml:mi></mml:mfrac><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>100</mml:mn><mml:mrow><mml:mtext>%</mml:mtext></mml:mrow></mml:math></disp-formula></p>
<p>Given the cost function above, the solution candidate acceptance is given by:</p>
<p><disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mi>p</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mn>1</mml:mn><mml:mo>+</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mfrac><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x0394;</mml:mi></mml:mrow><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mi>T</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:math></disp-formula>where <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mrow><mml:mi mathvariant="normal">&#x0394;</mml:mi></mml:mrow><mml:mi>E</mml:mi></mml:math></inline-formula> is the difference between new and old cost functions and T is the temperature governed by the following relation:</p>
<p><disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mi>T</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:msup><mml:mn>0.95</mml:mn><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msup></mml:math></disp-formula>where the initial temperature <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is set to 100 and <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mi>k</mml:mi></mml:math></inline-formula> is the iteration number. The proposed RNN-SA thus utilizes a two-stage optimization. First, the BFGS method trains the RNN to minimize the MSE; this cost function is a smooth quadratic, which allows gradient-based optimization to obtain a good solution. Second, the SA optimizes the trained RNN weights to achieve the minimum MAE; since the MAE uses a non-smooth absolute function, the SA is suited to optimizing the complex recurrent model. The proposed two-step optimization approach therefore improves performance with respect to two different metrics, namely, the continuously differentiable MSE and the non-smooth MAE. A traditional RNN is typically trained using a gradient-based method with a differentiable cost function, whereas the proposed RNN, in addition to the gradient-based method, is further optimized using SA to enhance performance. Both RNNs use the same architecture with recurrent units, as shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>. In the implementation, the maximum number of iterations of the SA method is <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mi>K</mml:mi><mml:mo>=</mml:mo><mml:mn>10</mml:mn></mml:math></inline-formula>.</p>
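The SA stage described above can be sketched as follows, assuming a toy one-dimensional cost in place of the RNN's MAE. The acceptance probability follows the sigmoid form of Eq. (8), here evaluated with the current temperature of Eq. (9) (a simplification of the max(T) term), and the geometric cooling uses the stated factor 0.95.

```python
import math
import random

random.seed(1)

def cost(w):
    # Toy non-smooth stand-in for MAE(w); the true cost would evaluate the RNN.
    return abs(w - 3.0)

w, T0, K = 0.0, 100.0, 10
best_w, best_e = w, cost(w)
for k in range(K):
    T = T0 * 0.95 ** k                    # Eq. (9): geometric cooling
    cand = w + random.gauss(0.0, 1.0)     # perturb the current solution
    dE = cost(cand) - cost(w)             # new cost minus old cost
    p = 1.0 / (1.0 + math.exp(dE / T))    # Eq. (8): acceptance probability
    if dE < 0 or random.random() < p:
        w = cand                          # accept better, or worse with prob. p
    if cost(w) < best_e:
        best_w, best_e = w, cost(w)
print(best_e <= cost(0.0))                # best cost never worse than the start
```

Accepting some worse candidates early (when T is large) lets the search leave local minima; as T decays, the acceptance probability for uphill moves shrinks and the search settles.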
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Multi-layer perceptron model</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-4.tif"/>
</fig>
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Multilayer Perceptron</title>
<p>In this study, the multilayer perceptron (MLP) is used as a benchmark for comparison with the performance of the RNN-SA. The MLP has a much simpler structure than the RNN [<xref ref-type="bibr" rid="ref-19">19</xref>], as shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>. In contrast to the RNN, the MLP uses a feed-forward structure that processes data sequentially from the input layer to the output layer. Mathematically, the output of the kth hidden unit <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:msub><mml:mrow><mml:mtext>h</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>k</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> given the <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow></mml:math></inline-formula>th input vector is obtained by:</p>
<p><disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03C4;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mrow><mml:mtext>W</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>h</mml:mtext></mml:mrow></mml:mrow></mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:msup><mml:mrow><mml:mtext>W</mml:mtext></mml:mrow><mml:mrow><mml:mtext>h</mml:mtext></mml:mrow></mml:msup></mml:math></inline-formula> is the input weight matrix. Given the hidden unit values, the output of the MLP is defined by:<disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mi>y</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mtext>W</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>o</mml:mtext></mml:mrow></mml:mrow></mml:msup><mml:mi>h</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>n</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>b</mml:mi></mml:math></disp-formula></p>
<p>This study utilizes gradient descent with momentum and adaptive learning rate <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03BC;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> backpropagation to train the MLP. This method can train any model whose activation and cost functions are differentiable, which holds here since the activation and cost functions are the hyperbolic tangent and the MSE, respectively. Each weight is adjusted according to gradient descent defined by:<disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mrow><mml:mtext>k</mml:mtext></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mrow><mml:mtext>w</mml:mtext></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula>where the weight update (<inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mrow><mml:mtext>w</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>k</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is calculated by:</p>
<p><disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mrow><mml:mtext>w</mml:mtext></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>m</mml:mi><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mrow><mml:mtext>k</mml:mtext></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>m</mml:mi><mml:mi>&#x03BC;</mml:mi><mml:mfrac><mml:mrow><mml:mo>(</mml:mo><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mi>e</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2202;</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></disp-formula>where <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mi>m</mml:mi><mml:mo>=</mml:mo><mml:mn>0.9</mml:mn></mml:math></inline-formula> and <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mi>&#x03BC;</mml:mi><mml:mo>=</mml:mo><mml:mn>0.01</mml:mn></mml:math></inline-formula> denote the momentum coefficient and learning rate, respectively [<xref ref-type="bibr" rid="ref-20">20</xref>]. The cost function is evaluated at each epoch. If the cost function decreases toward the cost target, then the learning rate is increased by the factor <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mrow><mml:mi mathvariant="normal">&#x0394;</mml:mi></mml:mrow><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mo>+</mml:mo><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1.05</mml:mn></mml:math></inline-formula>. 
If the cost function increases by more than the factor <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mrow><mml:mi mathvariant="normal">&#x0394;</mml:mi></mml:mrow><mml:mi>M</mml:mi><mml:mi>S</mml:mi><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1.04</mml:mn></mml:math></inline-formula>, the weight update that increased the cost function is canceled, and the learning rate is reduced by the factor <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mrow><mml:mi mathvariant="normal">&#x0394;</mml:mi></mml:mrow><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mo>&#x2212;</mml:mo><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>0.7</mml:mn></mml:math></inline-formula>. The experiments use <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mi>H</mml:mi><mml:mo>=</mml:mo><mml:mn>20</mml:mn></mml:math></inline-formula> hidden units. Training halts when the maximum number of iterations <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mo stretchy="false">(</mml:mo><mml:mi>K</mml:mi><mml:mo>=</mml:mo><mml:mn>1200</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is exceeded.</p>
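The adaptive-learning-rate schedule described above (momentum m = 0.9, initial &#x03BC; = 0.01, growth factor 1.05, decay factor 0.7, and the 1.04 cancellation threshold) can be sketched as a single per-epoch decision. The function name and calling convention below are hypothetical; the momentum term follows Eq. (13) as printed:

```python
def adaptive_lr_step(mse_prev, mse_new, lr, dw_prev, grad,
                     m=0.9, inc=1.05, dec=0.7, max_inc=1.04):
    """One gradient-descent step with momentum and adaptive learning rate.

    Returns (accept, new_lr, dw): whether to keep the weight update,
    the adjusted learning rate, and the momentum-smoothed update of Eq. (13).
    """
    dw = m * dw_prev + m * lr * grad       # Eq. (13), as printed
    if mse_new > mse_prev * max_inc:       # cost rose by more than 1.04x:
        return False, lr * dec, dw         # cancel the update, shrink lr
    if mse_new < mse_prev:                 # cost improved:
        return True, lr * inc, dw          # keep the update, grow lr
    return True, lr, dw                    # otherwise keep lr unchanged

# Improving epoch: update kept, learning rate grows to 0.0105
accept, lr, dw = adaptive_lr_step(0.50, 0.45, 0.01, 0.0, -2.0)
# Worsening epoch (0.55 > 0.50 * 1.04): update canceled, lr shrinks to 0.007
accept2, lr2, _ = adaptive_lr_step(0.50, 0.55, 0.01, 0.0, -2.0)
```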
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Support Vector Machine</title>
<p>The last benchmarking method used in this paper is the support vector machine (SVM) for regression [<xref ref-type="bibr" rid="ref-21">21</xref>]. The SVM is trained on a dataset <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:msubsup><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, with input vectors <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:msub><mml:mi mathvariant="bold-italic">x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> and output targets <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:math></inline-formula>. It attempts to model the relationship between <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:msub><mml:mi mathvariant="bold-italic">x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and the expected outputs <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:mi>f</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> as follows:</p>
<p><disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">x</mml:mi><mml:mrow><mml:mi mathvariant="bold-italic">i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mi mathvariant="bold-italic">&#x03C9;</mml:mi><mml:mo>&#x22C5;</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">x</mml:mi><mml:mrow><mml:mi mathvariant="bold-italic">i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>b</mml:mi></mml:math></disp-formula>where <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mi mathvariant="bold-italic">&#x03C9;</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> and <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mi>b</mml:mi></mml:math></inline-formula> are the weight vectors and threshold, respectively. From this model, the target of SVM is to obtain the weights that minimize the following error function:</p>
<p><disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi mathvariant="bold-italic">&#x03C9;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mtext>N</mml:mtext></mml:mrow></mml:mfrac><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">x</mml:mi><mml:mrow><mml:mi mathvariant="bold-italic">i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03F5;</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>&#x03BB;</mml:mi><mml:msup><mml:mrow><mml:mo>|</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></disp-formula>where <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mi>&#x03BB;</mml:mi></mml:math></inline-formula> is a constant and the robust error function <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:msub><mml:mrow><mml:mo>|</mml:mo><mml:mi>z</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03F5;</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is defined as follows:</p>
<p><disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:msub><mml:mrow><mml:mo>|</mml:mo><mml:mi>z</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03F5;</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left left" rowspacing=".2em" columnspacing="1em" displaystyle="false"><mml:mtr><mml:mtd><mml:mn>0</mml:mn><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mrow><mml:mo>|</mml:mo><mml:mi>z</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x003C;</mml:mo><mml:mi>&#x03F5;</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mo>|</mml:mo><mml:mi>z</mml:mi><mml:mo>|</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd><mml:mtd><mml:mrow><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">t</mml:mi><mml:mi mathvariant="italic">h</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">w</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
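A minimal sketch of the &#x03F5;-insensitive error of Eq. (16) and the regularized objective of Eq. (15). The &#x03F5; and &#x03BB; values here are illustrative assumptions, since the paper does not report its SVM hyperparameters:

```python
import numpy as np

def eps_insensitive(z, eps=0.1):
    """Robust error |z|_eps of Eq. (16): zero inside the eps-tube,
    |z| outside it. eps = 0.1 is illustrative, not from the paper."""
    z = np.abs(np.asarray(z, dtype=float))
    return np.where(z < eps, 0.0, z)

def svr_cost(y, f_x, w, eps=0.1, lam=1.0):
    """Regularized SVR objective of Eq. (15): mean eps-insensitive
    error plus lam * ||w||^2 (lam is illustrative)."""
    return eps_insensitive(y - f_x, eps).mean() + lam * np.dot(w, w)

# Residuals inside the tube contribute zero; the rest contribute |z|.
loss = eps_insensitive([0.05, -0.05, 0.3, -0.4], eps=0.1)
cost = svr_cost(np.array([1.0, 2.0]), np.array([1.05, 2.5]),
                np.array([0.1, 0.2]), eps=0.1, lam=1.0)
```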
<p>The SVM model is trained using sequential minimal optimization (SMO), where the number of inputs determines the dimension of <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mi>&#x03C9;</mml:mi></mml:math></inline-formula>. Since the first model has four input elements (WS at 10, 20, 30, and 40 m), the model for estimating WS at 50 m uses <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>4</mml:mn></mml:math></inline-formula>.</p>
</sec>
<sec id="s2_4">
<label>2.4</label>
<title>State of the Art Methods</title>
<p>Recently, state-of-the-art methods such as deep learning have been applied to many tasks, including wind speed prediction. For instance, convolutional neural networks (CNN) have been used for WS prediction in a wind farm in Hebei Province, China with high accuracy [<xref ref-type="bibr" rid="ref-22">22</xref>]. In addition, a long short-term memory (LSTM) network has been used for WS prediction on a wind farm in Inner Mongolia [<xref ref-type="bibr" rid="ref-23">23</xref>]. These two methods are also used as benchmarks in this paper.</p>
</sec>
<sec id="s2_5">
<label>2.5</label>
<title>Error Measures</title>
<p>This study employs four error measures based on the difference between the measured values (<inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:mi>y</mml:mi></mml:math></inline-formula>), the estimated outputs (<inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula>), and the average of the measured values (<inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mover><mml:mi>y</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover></mml:math></inline-formula>), including MSE, mean bias error (MBE), mean absolute percent error (MAPE), and the coefficient of determination (<inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. These error measures are calculated using the following equations:</p>
<p><disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:mi>M</mml:mi><mml:mi>B</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>N</mml:mi></mml:mfrac></mml:math></disp-formula></p>
<p><disp-formula id="eqn-18"><label>(18)</label><mml:math id="mml-eqn-18" display="block"><mml:mi>M</mml:mi><mml:mi>A</mml:mi><mml:mi>P</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>N</mml:mi></mml:mfrac><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:mrow><mml:mo>|</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mfrac><mml:mo>|</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>100</mml:mn><mml:mrow><mml:mtext>&#x0025;</mml:mtext></mml:mrow></mml:math></disp-formula></p>
<p><disp-formula id="eqn-19"><label>(19)</label><mml:math id="mml-eqn-19" display="block"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mrow><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mover><mml:mi>y</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>100</mml:mn><mml:mrow><mml:mtext>&#x0025;</mml:mtext></mml:mrow></mml:math></disp-formula></p>
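The error measures can be computed directly from the measured and estimated series. This sketch follows Eqs. (17) and (18) plus the MSE, with MBE defined as estimate minus measurement and with the coefficient of determination in its standard form, 1 &#x2212; &#x03A3;(y &#x2212; &#x0177;)&#x00B2;/&#x03A3;(y &#x2212; &#x0233;)&#x00B2;, expressed as a percentage:

```python
import numpy as np

def error_measures(y, y_hat):
    """MSE, MBE, MAPE (%), and R^2 (%) from measured (y) and
    estimated (y_hat) series."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    mse = np.mean((y - y_hat) ** 2)
    mbe = np.mean(y_hat - y)                        # Eq. (17): > 0 means over-prediction
    mape = np.mean(np.abs((y - y_hat) / y)) * 100   # Eq. (18), in percent
    r2 = (1 - np.sum((y - y_hat) ** 2)
            / np.sum((y - np.mean(y)) ** 2)) * 100  # standard R^2, in percent
    return mse, mbe, mape, r2

# Tiny example: a uniform +0.1 m/s over-prediction
mse, mbe, mape, r2 = error_measures([1, 2, 3, 4], [1.1, 2.1, 3.1, 4.1])
```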
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Numerical Experimental Results</title>
<p>This section describes the experimental setup and the results. The estimation starts by training each model using WS values at heights of 10, 20, 30, and 40 m as inputs and the actual WS at a height of 50 m as the target. Next, the actual WS at 10&#x2013;40 m and the estimated WS at a height of 50 m are used to train a new model with five inputs to estimate the WS at a height of 60 m. This process, which uses the actual and estimated WS values at lower heights to predict the WS value one level higher, is repeated until the WS at a height of 180 m is estimated from the actual WS at 10&#x2013;40 m and the estimated WS at 50&#x2013;170 m.</p>
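The level-by-level procedure described above can be sketched as follows. Here `train_model` is a hypothetical stand-in for any of the compared regressors (RNN-SA, RNN, MLP, SVM, CNN, LSTM), and the least-squares demo trainer and synthetic data are illustrative only:

```python
import numpy as np

def extrapolate(ws_low, targets, train_model):
    """Iterative vertical extrapolation (sketch).

    ws_low : (N, 4) array of measured WS at 10, 20, 30, 40 m.
    targets: dict {height: (N,) measured WS} for 50 ... 180 m (training data).
    train_model(X, y) -> callable predictor; stands in for the paper's models.
    Each new level is estimated from all lower actual and estimated levels.
    """
    X = ws_low.copy()
    estimates = {}
    for h in sorted(targets):                   # 50, 60, ..., 180 m
        model = train_model(X, targets[h])      # fit on actuals + prior estimates
        estimates[h] = model(X)                 # estimate this level
        X = np.column_stack([X, estimates[h]])  # append estimate as a new input
    return estimates

# Demo: a linear least-squares trainer standing in for the neural models
def ls_trainer(X, y):
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Z: np.column_stack([Z, np.ones(len(Z))]) @ coef

rng = np.random.default_rng(1)
ws_low = rng.uniform(4, 8, size=(100, 4))
# Synthetic targets: mean of the lower levels scaled up with height
targets = {h: ws_low.mean(axis=1) * (1 + h / 1000) for h in range(50, 190, 10)}
est = extrapolate(ws_low, targets, ls_trainer)
```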
<sec id="s3_1">
<label>3.1</label>
<title>Wind Speed Dataset</title>
<p>Due to a lack of resources in some countries, WS is typically measured only up to a height of 40 m, and its extrapolation to higher levels requires sophisticated methods. Therefore, the WS at higher heights is obtained using models, given the WS measured at lower heights. As shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>, the WS increases with height, but in a random fashion [<xref ref-type="bibr" rid="ref-24">24</xref>,<xref ref-type="bibr" rid="ref-25">25</xref>]. This paper uses WS data acquired in Dhahran, Saudi Arabia from 20 June 2015 to 29 February 2016 using a LiDAR wind measurement device at heights of 10, 20, 30, &#x2026;, 180 m. The WS is scanned every three seconds, and 10-min averages are saved and used. The data is divided into training, validation, and testing sets at 70%, 10%, and 20%, respectively. The LiDAR-measured WS is used as the reference for the actual WS.</p>
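A chronological 70/10/20 split of this kind can be sketched as follows. The assumption that earlier samples form the training set and later ones the validation and testing sets is ours; the paper does not state how the split is ordered:

```python
import numpy as np

def split_series(X, y, train=0.7, val=0.1):
    """Chronological 70/10/20 split (assumed ordering: earliest samples
    for training, then validation, then testing)."""
    n = len(X)
    i = int(n * train)
    j = i + int(n * val)
    return (X[:i], y[:i]), (X[i:j], y[i:j]), (X[j:], y[j:])

# Demo on a synthetic 1000-sample series
X = np.arange(1000, dtype=float).reshape(-1, 1)
y = np.arange(1000, dtype=float)
tr, va, te = split_series(X, y)
```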
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Results</title>
<p><xref ref-type="table" rid="table-1">Table 1</xref> shows the performance of the RNN-SA on validation data for WS estimation at 50&#x2013;180 m heights based on measured WS at 10&#x2013;40 m heights. The model with the best validation performance is then used for testing. <xref ref-type="table" rid="table-2">Table 2</xref> summarizes the testing results of the RNN-SA, standard RNN, MLP, SVM, CNN, and LSTM techniques in terms of MSE. The MSE values for RNN-SA increase progressively from 0.027 to 2.089, and those for MLP from 0.049 to 2.367, as the height increases from 50 to 180 m. Similarly, for the SVM method, the MSE values vary from 0.032 at 50 m to 2.632 at 180 m. In all the numerical experiments, the RNN-SA method yielded the lowest MSE values at all heights, outperforming the standard RNN as well as the other methods. <xref ref-type="table" rid="table-3">Table 3</xref> reports the MBE of all methods at all heights. MBE does not quantify the accuracy of the estimation methods; rather, it indicates whether a method over- or under-predicts [<xref ref-type="bibr" rid="ref-26">26</xref>]. A positive MBE represents over-prediction, where the estimated values are larger than the actual ones, whereas a negative MBE indicates under-prediction, where the estimated values are smaller than the actual ones. The results show that the RNN-SA, RNN, MLP, and SVM methods have negative MBE values at all heights, indicating that these models underestimate the WS.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>The performance of the RNN-SA under validation data for WS estimation at 50&#x2013;180 m heights based on measured WS at 10&#x2013;40 m heights</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Heights (m)</th>
<th>50</th>
<th>60</th>
<th>70</th>
<th>80</th>
<th>90</th>
<th>100</th>
<th>110</th>
<th>120</th>
<th>130</th>
<th>140</th>
<th>150</th>
<th>160</th>
<th>170</th>
<th>180</th>
</tr>
</thead>
<tbody>
<tr>
<td>MSE</td>
<td>0.024</td>
<td>0.098</td>
<td>0.162</td>
<td>0.263</td>
<td>0.371</td>
<td>0.516</td>
<td>0.628</td>
<td>0.780</td>
<td>0.931</td>
<td>1.051</td>
<td>1.193</td>
<td>1.359</td>
<td>1.457</td>
<td>1.649</td>
</tr>
<tr>
<td>MBE</td>
<td>0.012</td>
<td>0.037</td>
<td>0.048</td>
<td>0.068</td>
<td>0.081</td>
<td>0.093</td>
<td>0.061</td>
<td>0.024</td>
<td>0.000</td>
<td>0.056</td>
<td>0.054</td>
<td>0.053</td>
<td>0.049</td>
<td>0.144</td>
</tr>
<tr>
<td>MAPE (%)</td>
<td>2.06</td>
<td>3.79</td>
<td>4.57</td>
<td>5.57</td>
<td>6.31</td>
<td>7.22</td>
<td>7.63</td>
<td>8.10</td>
<td>8.49</td>
<td>8.85</td>
<td>9.15</td>
<td>9.37</td>
<td>9.54</td>
<td>10.16</td>
</tr>
<tr>
<td><inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi mathvariant="normal">&#x0025;</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula></td>
<td>99.03</td>
<td>96.56</td>
<td>94.72</td>
<td>92.24</td>
<td>89.78</td>
<td>86.96</td>
<td>84.75</td>
<td>82.32</td>
<td>80.35</td>
<td>78.74</td>
<td>77.07</td>
<td>75.10</td>
<td>73.94</td>
<td>72.53</td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>MSE of the WS estimation at 50&#x2013;180 m heights based on measured WS at 10&#x2013;40 m heights</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Heights (m)</th>
<th>50</th>
<th>60</th>
<th>70</th>
<th>80</th>
<th>90</th>
<th>100</th>
<th>110</th>
<th>120</th>
<th>130</th>
<th>140</th>
<th>150</th>
<th>160</th>
<th>170</th>
<th>180</th>
</tr>
</thead>
<tbody>
<tr>
<td>RNN-SA</td>
<td>0.027</td>
<td>0.105</td>
<td>0.182</td>
<td>0.313</td>
<td>0.463</td>
<td>0.674</td>
<td>0.834</td>
<td>1.054</td>
<td>1.259</td>
<td>1.416</td>
<td>1.601</td>
<td>1.817</td>
<td>1.929</td>
<td>2.089</td>
</tr>
<tr>
<td>RNN</td>
<td>0.028</td>
<td>0.113</td>
<td>0.185</td>
<td>0.314</td>
<td>0.475</td>
<td>0.681</td>
<td>0.855</td>
<td>1.064</td>
<td>1.262</td>
<td>1.426</td>
<td>1.614</td>
<td>1.831</td>
<td>1.950</td>
<td>2.123</td>
</tr>
<tr>
<td>MLP</td>
<td>0.049</td>
<td>0.120</td>
<td>0.209</td>
<td>0.351</td>
<td>0.510</td>
<td>0.739</td>
<td>0.931</td>
<td>1.165</td>
<td>1.321</td>
<td>1.552</td>
<td>1.735</td>
<td>2.005</td>
<td>2.168</td>
<td>2.367</td>
</tr>
<tr>
<td>SVM</td>
<td>0.032</td>
<td>0.145</td>
<td>0.239</td>
<td>0.401</td>
<td>0.581</td>
<td>0.834</td>
<td>1.042</td>
<td>1.364</td>
<td>1.516</td>
<td>1.852</td>
<td>2.017</td>
<td>2.323</td>
<td>2.365</td>
<td>2.632</td>
</tr>
<tr>
<td>CNN</td>
<td>0.125</td>
<td>0.265</td>
<td>0.539</td>
<td>0.392</td>
<td>1.059</td>
<td>1.402</td>
<td>1.092</td>
<td>1.411</td>
<td>2.241</td>
<td>1.833</td>
<td>2.437</td>
<td>1.940</td>
<td>3.265</td>
<td>2.378</td>
</tr>
<tr>
<td>LSTM</td>
<td>0.054</td>
<td>0.140</td>
<td>0.221</td>
<td>0.458</td>
<td>0.570</td>
<td>0.726</td>
<td>1.469</td>
<td>1.160</td>
<td>1.440</td>
<td>2.396</td>
<td>1.750</td>
<td>1.982</td>
<td>2.676</td>
<td>3.497</td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>MBE (m/s) of the WS estimation at 50&#x2013;180 m heights based on measured WS at 10&#x2013;40 m heights</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Heights</th>
<th>50</th>
<th>60</th>
<th>70</th>
<th>80</th>
<th>90</th>
<th>100</th>
<th>110</th>
<th>120</th>
<th>130</th>
<th>140</th>
<th>150</th>
<th>160</th>
<th>170</th>
<th>180</th>
</tr>
</thead>
<tbody>
<tr>
<td>RNN-SA</td>
<td>&#x2212;0.02</td>
<td>&#x2212;0.03</td>
<td>&#x2212;0.05</td>
<td>&#x2212;0.07</td>
<td>&#x2212;0.09</td>
<td>&#x2212;0.11</td>
<td>&#x2212;0.15</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.22</td>
<td>&#x2212;0.18</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.13</td>
</tr>
<tr>
<td>RNN</td>
<td>&#x2212;0.02</td>
<td>&#x2212;0.03</td>
<td>&#x2212;0.04</td>
<td>&#x2212;0.06</td>
<td>&#x2212;0.08</td>
<td>&#x2212;0.14</td>
<td>&#x2212;0.15</td>
<td>&#x2212;0.13</td>
<td>&#x2212;0.18</td>
<td>&#x2212;0.19</td>
<td>&#x2212;0.18</td>
<td>&#x2212;0.18</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.14</td>
</tr>
<tr>
<td>MLP</td>
<td>&#x2212;0.01</td>
<td>&#x2212;0.04</td>
<td>&#x2212;0.06</td>
<td>&#x2212;0.07</td>
<td>&#x2212;0.11</td>
<td>&#x2212;0.15</td>
<td>&#x2212;0.17</td>
<td>&#x2212;0.18</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.21</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.21</td>
<td>&#x2212;0.20</td>
</tr>
<tr>
<td>SVM</td>
<td>&#x2212;0.02</td>
<td>&#x2212;0.07</td>
<td>&#x2212;0.09</td>
<td>&#x2212;0.12</td>
<td>&#x2212;0.15</td>
<td>&#x2212;0.18</td>
<td>&#x2212;0.20</td>
<td>&#x2212;0.24</td>
<td>&#x2212;0.24</td>
<td>&#x2212;0.26</td>
<td>&#x2212;0.25</td>
<td>&#x2212;0.28</td>
<td>&#x2212;0.26</td>
<td>&#x2212;0.26</td>
</tr>
<tr>
<td>CNN</td>
<td>&#x2212;0.16</td>
<td>&#x2212;0.31</td>
<td>&#x2212;0.39</td>
<td>0.03</td>
<td>&#x2212;0.56</td>
<td>&#x2212;0.84</td>
<td>&#x2212;0.45</td>
<td>&#x2212;0.58</td>
<td>&#x2212;0.98</td>
<td>&#x2212;0.58</td>
<td>&#x2212;0.82</td>
<td>&#x2212;0.31</td>
<td>&#x2212;1.14</td>
<td>&#x2212;0.18</td>
</tr>
<tr>
<td>LSTM</td>
<td>0.983</td>
<td>0.959</td>
<td>0.940</td>
<td>0.908</td>
<td>0.870</td>
<td>0.838</td>
<td>0.810</td>
<td>0.776</td>
<td>0.758</td>
<td>0.724</td>
<td>0.710</td>
<td>0.685</td>
<td>0.675</td>
<td>0.656</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-5">Fig. 5</xref> compares the MAPE values of all the methods and shows almost the same pattern as in the case of MSE (<xref ref-type="table" rid="table-2">Table 2</xref>). The MAPE values of RNN-SA, standard RNN, MLP, and SVM increase from 2.2% to 11.2%, 2.3% to 11.3%, 2.9% to 11.7%, and 2.4% to 12.2%, respectively, between heights of 50 and 180 m. The proposed method achieves the lowest MAPE of all methods at all heights. The standard RNN proved to be the next best estimator of WS at all heights, with MLP the third best option. <xref ref-type="fig" rid="fig-6">Fig. 6</xref> visualizes the <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> values for all heights and methods. The figure shows that the estimated values deviate more and more from the actual values as the height increases. This may be attributed to the fact that only the actual WS values at 10&#x2013;40 m are used for the estimation at higher heights: the estimated WS near the 40 m height has higher accuracy, and as the height increases, the correlation between the estimated and the actual WS decreases. The RNN-SA outperforms the other methods and achieves the highest <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> values at all heights.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>MAPE of different methods and all heights</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-5.tif"/>
</fig><fig id="fig-6">
<label>Figure 6</label>
<caption>
<title><inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> of different methods and all heights</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-6.tif"/>
</fig>
<p>The scatter plots of the RNN-SA estimates against the actual WS values at 60, 100, 140, and 180 m are shown in <xref ref-type="fig" rid="fig-7">Figs. 7a</xref>&#x2013;<xref ref-type="fig" rid="fig-7">7d</xref>, respectively. The scatter plots diverge as the height increases. This is confirmed by the temporal variation of the actual and extrapolated WS values at 60, 100, 140, and 180 m heights, shown in <xref ref-type="fig" rid="fig-8">Figs. 8a</xref>&#x2013;<xref ref-type="fig" rid="fig-8">8d</xref>, respectively. The estimated WS values follow the temporal trend of the actual values at all heights, as depicted in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>. The figure also shows the coefficient of determination values for all methods. However, the trends of the estimated WS at 180 m (<xref ref-type="fig" rid="fig-8">Fig. 8d</xref>) deviate farther from the actual values than the trends at 60 m (<xref ref-type="fig" rid="fig-8">Fig. 8a</xref>), 100 m (<xref ref-type="fig" rid="fig-8">Fig. 8b</xref>), and 140 m (<xref ref-type="fig" rid="fig-8">Fig. 8c</xref>). The larger number of estimated WS values used for training contributes to these deviations.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Scatter plots actual and estimated WS using the proposed method at 60, 100, 140, and 180&#x00A0;m</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-7a.tif"/><graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-7b.tif"/>
</fig><fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Wind profile of measured and estimated WS at (a) 60, (b) 100, (c) 140, and (d) 180 m</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-8.tif"/>
</fig>
<p>The estimated WS profiles, along with the actual ones up to 180 m, are compared in <xref ref-type="fig" rid="fig-9">Fig. 9</xref>. Each line in <xref ref-type="fig" rid="fig-9">Figs. 9a</xref>&#x2013;<xref ref-type="fig" rid="fig-9">9e</xref> represents an individual WS profile at a particular time sample. It is evident from the figure that the estimated WS values at lower heights are very close to the actual values, whereas the estimated WS at higher heights tends to deviate from the actual values. <xref ref-type="fig" rid="fig-9">Fig. 9f</xref> shows the average WS over all time samples. The average estimated WS of all methods confirms the observation in <xref ref-type="table" rid="table-3">Table 3</xref> that the average estimated WS is less than the actual one.</p>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Wind profile of actual and estimated for all heights</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-9.tif"/>
</fig>
<p>Finally, the training duration statistics are shown in <xref ref-type="fig" rid="fig-10">Fig. 10</xref>. The proposed training method takes somewhat longer to train. The maximum training durations of the proposed and traditional RNN approaches are 47.77 and 36.94 s, respectively. Despite the relatively large difference in maximum duration (10.8 s), the average training durations of the proposed and traditional RNN methods are 32.39 and 31.62 s, respectively. In other words, the additional step adds relatively little time, since the average training time difference is only 0.77 s. The extra training duration is compensated by the performance improvement, as shown by the data in <xref ref-type="table" rid="table-2">Tables 2</xref> and <xref ref-type="table" rid="table-3">3</xref>.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Training duration</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="EE_26185-fig-10.tif"/>
</fig>
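<p>The extra training time comes from the simulated-annealing refinement step added on top of gradient-based RNN training. The following Python sketch illustrates a generic simulated-annealing weight refinement of this kind; the loss function, geometric cooling schedule, step size, and iteration count are illustrative assumptions, not the paper's exact settings (the paper's implementation is in MATLAB).</p>

```python
import math
import random

def sa_refine(loss, weights, n_iter=500, t0=1.0, cooling=0.95, step=0.1, seed=0):
    """Refine a weight vector with simulated annealing (illustrative sketch).

    loss    -- callable mapping a weight list to a scalar training error
    weights -- initial weights (e.g., from gradient-based RNN training)
    """
    rng = random.Random(seed)
    best = list(weights)
    best_loss = current_loss = loss(best)
    current = list(best)
    t = t0
    for _ in range(n_iter):
        # Perturb one randomly chosen weight with Gaussian noise.
        cand = list(current)
        i = rng.randrange(len(cand))
        cand[i] += rng.gauss(0.0, step)
        cand_loss = loss(cand)
        delta = cand_loss - current_loss
        # Always accept improvements; accept worse moves with prob. exp(-delta/t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, current_loss = cand, cand_loss
            if current_loss < best_loss:
                best, best_loss = list(current), current_loss
        t *= cooling  # geometric cooling
    return best, best_loss

# Toy example: refine two weights toward the minimum of a quadratic "loss".
w, final_loss = sa_refine(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

<p>The repeated loss evaluations in this loop are what lengthen training, which is consistent with the modest average overhead (0.77 s) reported above.</p>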
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Conclusions</title>
<p>This paper introduced a novel optimization method for the RNN using simulated annealing for accurate vertical WS extrapolation. Each model is trained using the actual WS at lower heights (10&#x2013;40 m) to estimate the WS up to 180 m. The proposed method outperformed the standard RNN as well as other state-of-the-art models on all error measures on a real WS dataset. It achieved the highest coefficient of determination (<inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>) values of 96.88%, 82.25%, 74.79%, and 68.83% between the estimated and actual WS values at 60, 100, 140, and 180 m, respectively. Similarly, the other error metrics, MAPE and MSE, showed the lowest errors between the measured and estimated wind speeds at all heights. The proposed method is therefore recommended for extrapolating wind speed to greater heights from measurements available at lower heights.</p>
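<p>The error measures cited here (coefficient of determination, MAPE, and MSE) follow their standard definitions. The Python sketch below shows how they are computed; the sample wind speeds are illustrative only, not the measured dataset.</p>

```python
def r2(actual, est):
    """Coefficient of determination (R^2): 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - e) ** 2 for a, e in zip(actual, est))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def mape(actual, est):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - e) / a) for a, e in zip(actual, est)) / len(actual)

def mse(actual, est):
    """Mean square error."""
    return sum((a - e) ** 2 for a, e in zip(actual, est)) / len(actual)

# Illustrative wind speeds (m/s); not the paper's dataset.
actual = [5.1, 6.3, 7.0, 6.8, 5.9]
est = [5.0, 6.5, 6.8, 6.9, 6.0]
```

<p>A perfect estimator gives R<sup>2</sup> = 1 and MAPE = MSE = 0; deviation from the actual values lowers R<sup>2</sup> and raises MAPE and MSE.</p>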
</sec>
</body>
<back>
<glossary content-type="abbreviations" id="glossary-1">
<title>Nomenclature</title>
<def-list>
<def-item>
<term>AI</term>
<def>
<p>Artificial intelligence</p>
</def>
</def-item>
<def-item>
<term>ANN</term>
<def>
<p>Artificial neural networks</p>
</def>
</def-item>
<def-item>
<term>BFGS</term>
<def>
<p>Broyden-Fletcher-Goldfarb-Shanno</p>
</def>
</def-item>
<def-item>
<term>CNN</term>
<def>
<p>Convolutional neural networks</p>
</def>
</def-item>
<def-item>
<term>GHG</term>
<def>
<p>Greenhouse gases</p>
</def>
</def-item>
<def-item>
<term>GW</term>
<def>
<p>Gigawatt</p>
</def>
</def-item>
<def-item>
<term>GWh</term>
<def>
<p>Gigawatt hour</p>
</def>
</def-item>
<def-item>
<term>kW</term>
<def>
<p>Kilowatt</p>
</def>
</def-item>
<def-item>
<term>kWh</term>
<def>
<p>Kilowatt hour</p>
</def>
</def-item>
<def-item>
<term>LSTM</term>
<def>
<p>Long short-term memory</p>
</def>
</def-item>
<def-item>
<term>m</term>
<def>
<p>Meter</p>
</def>
</def-item>
<def-item>
<term>MAE</term>
<def>
<p>Mean absolute error</p>
</def>
</def-item>
<def-item>
<term>MAPE</term>
<def>
<p>Mean absolute percentage error</p>
</def>
</def-item>
<def-item>
<term>MBE</term>
<def>
<p>Mean bias error</p>
</def>
</def-item>
<def-item>
<term>MLP</term>
<def>
<p>Multilayer perceptron</p>
</def>
</def-item>
<def-item>
<term>MSE</term>
<def>
<p>Mean square error</p>
</def>
</def-item>
<def-item>
<term>MW</term>
<def>
<p>Megawatt</p>
</def>
</def-item>
<def-item>
<term>MWh</term>
<def>
<p>Megawatt hour</p>
</def>
</def-item>
<def-item>
<term>RNN</term>
<def>
<p>Recurrent neural networks</p>
</def>
</def-item>
<def-item>
<term>RNN-SA</term>
<def>
<p>Recurrent neural networks with simulated annealing</p>
</def>
</def-item>
<def-item>
<term>SA</term>
<def>
<p>Simulated annealing</p>
</def>
</def-item>
<def-item>
<term>SMO</term>
<def>
<p>Sequential minimal optimization</p>
</def>
</def-item>
<def-item>
<term>SVM</term>
<def>
<p>Support vector machine</p>
</def>
</def-item>
<def-item>
<term>R<sup>2</sup></term>
<def>
<p>Coefficient of determination</p>
</def>
</def-item>
<def-item>
<term>WS</term>
<def>
<p>Wind speed (m/s)</p>
</def>
</def-item>
</def-list>
</glossary>
<sec><title>Funding Statement</title>
<p>This research was funded by <funding-source>KFUPM</funding-source> under Grant No. <award-id>DF191024</award-id>.</p>
</sec>
<sec sec-type="data-availability"><title>Availability of Data and Materials</title>
<p>The program is written in MATLAB and is available at (<ext-link ext-link-type="uri" xlink:href="https://github.com/hilalnuha/RNNverticalWS">https://github.com/hilalnuha/RNNverticalWS</ext-link>).</p>
</sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>1.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab>GWEC</collab></person-group> (<year>2020</year>). <article-title>Global Wind Report 2021&#x2014;Annual Market Update (GWEC-2021)</article-title>. <ext-link ext-link-type="uri" xlink:href="https://gwec.net/global-wind-report-2021/">https://gwec.net/global-wind-report-2021/</ext-link>.</mixed-citation></ref>
<ref id="ref-2"><label>2.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>H&#x00F6;gstr&#x00F6;m</surname>, <given-names>U.</given-names></string-name>, <string-name><surname>Smedman</surname>, <given-names>A. S.</given-names></string-name>, <string-name><surname>Bergstr&#x00F6;m</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2006</year>). <article-title>Calculation of wind speed variation with height over the sea</article-title>. <source>Wind Engineering</source><italic>,</italic> <volume>30</volume><italic>(</italic><issue>4</issue><italic>),</italic> <fpage>269</fpage>&#x2013;<lpage>286</lpage>. DOI <pub-id pub-id-type="doi">10.1260/030952406779295480</pub-id>.</mixed-citation></ref>
<ref id="ref-3"><label>3.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Newman</surname>, <given-names>J. F.</given-names></string-name>, <string-name><surname>Klein</surname>, <given-names>P. M.</given-names></string-name></person-group> (<year>2014</year>). <article-title>The impacts of atmospheric stability on the accuracy of wind speed extrapolation methods</article-title>. <source>Resources</source><italic>,</italic> <volume>3</volume><italic>(</italic><issue>1</issue><italic>),</italic> <fpage>81</fpage>&#x2013;<lpage>105</lpage>. DOI <pub-id pub-id-type="doi">10.3390/resources3010081</pub-id>.</mixed-citation></ref>
<ref id="ref-4"><label>4.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Banuelos-Ruedas</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Angeles-Camacho</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Rios-Marcuello</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2011</year>). <chapter-title>Methodologies used in the extrapolation of wind speed data at different heights and its impact on the wind energy resource assessment in a region</chapter-title>. In: <source>Wind farm&#x2014;technical regulations, potential estimation and siting assessment</source>, pp. <fpage>97</fpage>&#x2013;<lpage>114</lpage>. <ext-link ext-link-type="uri" xlink:href="http://www.intechopen.com/">http://www.intechopen.com/</ext-link>. </mixed-citation></ref>
<ref id="ref-5"><label>5.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gupta</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Kumar</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Ayus</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Vasudevan</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Natarajan</surname>, <given-names>N.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Short-term prediction of wind power density using convolutional RNN-SA network</article-title>. <source>FME Transactions</source><italic>,</italic> <volume>49</volume><italic>(</italic><issue>3</issue><italic>),</italic> <fpage>653</fpage>&#x2013;<lpage>663</lpage>. DOI <pub-id pub-id-type="doi">10.5937/fme2103653G</pub-id>.</mixed-citation></ref>
<ref id="ref-6"><label>6.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Mohandes</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Rehman</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Nuha</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Islam</surname>, <given-names>M. S.</given-names></string-name>, <string-name><surname>Schulze</surname>, <given-names>F. H.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Accuracy of wind speed predictability with heights using long short-term memory</article-title>. <source>FME Transactions</source><italic>,</italic> <volume>49</volume><italic>(</italic><issue>4</issue><italic>),</italic> <fpage>908</fpage>&#x2013;<lpage>918</lpage>. DOI <pub-id pub-id-type="doi">10.5937/fme2104908M</pub-id>.</mixed-citation></ref>
<ref id="ref-7"><label>7.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Islam</surname>, <given-names>M. S.</given-names></string-name>, <string-name><surname>Mohandes</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Rehman</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2017</year>). <article-title>Vertical extrapolation of wind speed using artificial neural network hybrid system</article-title>. <source>Neural Computing Applications</source><italic>,</italic> <volume>28</volume><italic>(</italic><issue>8</issue><italic>),</italic> <fpage>2351</fpage>&#x2013;<lpage>2361</lpage>. DOI <pub-id pub-id-type="doi">10.1007/s00521-016-2373-x</pub-id>.</mixed-citation></ref>
<ref id="ref-8"><label>8.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>T&#x00FC;rkan</surname>, <given-names>Y. S.</given-names></string-name>, <string-name><surname>Aydo&#x011F;mu&#x015F;</surname>, <given-names>H. Y.</given-names></string-name>, <string-name><surname>Erdal</surname>, <given-names>H.</given-names></string-name></person-group> (<year>2016</year>). <article-title>The prediction of the wind speed at different heights by machine learning methods</article-title>. <source>An International Journal of Optimization and Control: Theories &#x0026; Applications</source><italic>,</italic> <volume>6</volume><italic>(</italic><issue>2</issue><italic>),</italic> <fpage>179</fpage>&#x2013;<lpage>187</lpage>. DOI <pub-id pub-id-type="doi">10.11121/Ijocta.01.2016.00315</pub-id>.</mixed-citation></ref>
<ref id="ref-9"><label>9.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Emeksiz</surname>, <given-names>C.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Multi-gen genetic programming based improved innovative model for extrapolation of wind data at high altitudes, case study: Turkey</article-title>. <source>Computers and Electrical Engineering</source><italic>,</italic> <volume>100</volume><italic>(</italic><issue>1</issue><italic>),</italic> <fpage>107966</fpage>. DOI <pub-id pub-id-type="doi">10.1016/j.compeleceng.2022.107966</pub-id>.</mixed-citation></ref>
<ref id="ref-10"><label>10.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Mohandes</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Rehman</surname>, <given-names>S.</given-names></string-name></person-group> (<year>2018</year>). <article-title>Wind speed extrapolation using machine learning methods and LiDAR measurements</article-title>. <source>IEEE Access</source><italic>,</italic> <volume>6</volume><italic>,</italic> <fpage>77634</fpage>&#x2013;<lpage>77642</lpage>. DOI <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2883677</pub-id>.</mixed-citation></ref>
<ref id="ref-11"><label>11.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zilong</surname>, <given-names>Ti</given-names></string-name>, <string-name><surname>Deng</surname>, <given-names>X. W.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>M.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Artificial neural networks based wake model for power prediction of wind farm</article-title>. <source>Renewable Energy</source><italic>,</italic> <volume>172</volume><italic>,</italic> <fpage>618</fpage>&#x2013;<lpage>631</lpage>. DOI <pub-id pub-id-type="doi">10.1016/j.renene.2021.03.030</pub-id>.</mixed-citation></ref>
<ref id="ref-12"><label>12.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Zhao</surname>, <given-names>X.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Dynamic wind farm wake modeling based on a bilateral convolutional neural network and high-fidelity les data</article-title>. <source>Energy</source><italic>,</italic> <volume>258</volume><italic>(</italic><issue>8</issue><italic>),</italic> <fpage>124845</fpage>. DOI <pub-id pub-id-type="doi">10.1016/j.energy.2022.124845</pub-id>.</mixed-citation></ref>
<ref id="ref-13"><label>13.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yang</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Deng</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Ti</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Yan</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Yang</surname>, <given-names>Q.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Cooperative yaw control of wind farm using a double-layer machine learning framework</article-title>. <source>Renewable Energy</source><italic>,</italic> <volume>193</volume><italic>(</italic><issue>5</issue><italic>),</italic> <fpage>519</fpage>&#x2013;<lpage>537</lpage>. DOI <pub-id pub-id-type="doi">10.1016/j.renene.2022.04.104</pub-id>.</mixed-citation></ref>
<ref id="ref-14"><label>14.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Mohandes</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Rehman</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Nuha</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Islam</surname>, <given-names>M. S.</given-names></string-name>, <string-name><surname>Schulze</surname>, <given-names>F. H.</given-names></string-name></person-group> (<year>2021</year>). <article-title>Accuracy of wind speed predictability with heights using recurrent neural networks</article-title>. <source>FME Transactions</source><italic>,</italic> <volume>49</volume><italic>(</italic><issue>4</issue><italic>),</italic> <fpage>908</fpage>&#x2013;<lpage>918</lpage>. DOI <pub-id pub-id-type="doi">10.5937/fme2104908M</pub-id>.</mixed-citation></ref>
<ref id="ref-15"><label>15.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cao</surname>, <given-names>Q.</given-names></string-name>, <string-name><surname>Ewing</surname>, <given-names>B. T.</given-names></string-name>, <string-name><surname>Thompson</surname>, <given-names>M. A.</given-names></string-name></person-group> (<year>2012</year>). <article-title>Forecasting wind speed with recurrent neural networks</article-title>. <source>European Journal of Operational Research</source><italic>,</italic> <volume>221</volume><italic>(</italic><issue>1</issue><italic>),</italic> <fpage>148</fpage>&#x2013;<lpage>154</lpage>. DOI <pub-id pub-id-type="doi">10.1016/j.ejor.2012.02.042</pub-id>.</mixed-citation></ref>
<ref id="ref-16"><label>16.</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Fletcher</surname>, <given-names>R.</given-names></string-name></person-group> (<year>2000</year>). <source>Practical methods of optimization</source>. <edition>Second Edition</edition>. <publisher-loc>The Atrium, Southern Gate, Chichester, West Sussex PO 19 8SQ, England</publisher-loc>: <publisher-name>John Wiley &#x0026; Sons, Ltd</publisher-name>. DOI <pub-id pub-id-type="doi">10.1002/9781118723203</pub-id>.</mixed-citation></ref>
<ref id="ref-17"><label>17.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Lv</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Deng</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Wan</surname>, <given-names>Z.</given-names></string-name></person-group> (<year>2020</year>). <article-title>An efficient single-parameter scaling memoryless Broyden-Fletcher-Goldfarb-Shanno algorithm for solving large scale unconstrained optimization problems</article-title>. <source>IEEE Access</source><italic>,</italic> <volume>8</volume><italic>,</italic> <fpage>85664</fpage>&#x2013;<lpage>85674</lpage>. DOI <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2992340</pub-id>.</mixed-citation></ref>
<ref id="ref-18"><label>18.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Da</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Xiurun</surname>, <given-names>G.</given-names></string-name></person-group> (<year>2005</year>). <article-title>An improved PSO-based ANN with simulated annealing technique</article-title>. <source>Neurocomputing</source><italic>,</italic> <volume>63</volume><italic>,</italic> <fpage>527</fpage>&#x2013;<lpage>533</lpage>. DOI <pub-id pub-id-type="doi">10.1016/j.neucom.2004.07.002</pub-id>.</mixed-citation></ref>
<ref id="ref-19"><label>19.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab>MATHWORKS</collab></person-group> (2022). <article-title>Gradient descent with momentum and adaptive learning rate backpropagation</article-title>. <ext-link ext-link-type="uri" xlink:href="https://www.mathworks.com/help/deeplearning/ref/traingdx.html">https://www.mathworks.com/help/deeplearning/ref/traingdx.html</ext-link>.</mixed-citation></ref>
<ref id="ref-20"><label>20.</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Yu</surname>, <given-names>C. C.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>B. D.</given-names></string-name></person-group> (<year>2002</year>). <article-title>A backpropagation algorithm with adaptive learning rate and momentum coefficient</article-title>. <conf-name>Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN&#x2019;02 (Cat. No. 02CH37290)</conf-name>, vol. <comment>2</comment>, pp. <fpage>1218</fpage>&#x2013;<lpage>1223</lpage>. <publisher-name>IEEE</publisher-name>.</mixed-citation></ref>
<ref id="ref-21"><label>21.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Wen</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name></person-group> (<year>2014</year>). <article-title>Real estate price forecasting based on SVM optimized by PSO</article-title>. <source>Optik</source><italic>,</italic> <volume>125</volume><italic>(</italic><issue>3</issue><italic>),</italic> <fpage>1439</fpage>&#x2013;<lpage>1443</lpage>. DOI <pub-id pub-id-type="doi">10.1016/j.ijleo.2013.09.017</pub-id>.</mixed-citation></ref>
<ref id="ref-22"><label>22.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Liu</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Gao</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name> <etal>et al.</etal></person-group> (<year>2021</year>). <article-title>Wind speed behaviors feather analysis and its utilization on wind speed prediction using 3D-CNN</article-title>. <source>Energy</source><italic>,</italic> <volume>236</volume><italic>(</italic><issue>5</issue><italic>),</italic> <fpage>121523</fpage>. DOI <pub-id pub-id-type="doi">10.1016/j.energy.2021.121523</pub-id>.</mixed-citation></ref>
<ref id="ref-23"><label>23.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hu</surname>, <given-names>Y. L.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>L.</given-names></string-name></person-group> (<year>2018</year>). <article-title>A nonlinear hybrid wind speed forecasting model using LSTM network, hysteretic ELM and differential evolution algorithm</article-title>. <source>Energy Conversion and Management</source><italic>,</italic> <volume>173</volume><italic>(</italic><issue>2</issue><italic>),</italic> <fpage>123</fpage>&#x2013;<lpage>142</lpage>. DOI <pub-id pub-id-type="doi">10.1016/j.enconman.2018.07.070</pub-id>.</mixed-citation></ref>
<ref id="ref-24"><label>24.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Rehman</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Al-Hadhrami</surname>, <given-names>L. M.</given-names></string-name>, <string-name><surname>Alam</surname>, <given-names>M. M.</given-names></string-name>, <string-name><surname>Meyer</surname>, <given-names>J. P.</given-names></string-name></person-group> (<year>2013</year>). <article-title>Empirical correlation between hub height and local logarithmic law for different sizes of wind turbines</article-title>. <source>Sustainable Energy Technologies and Assessments</source><italic>,</italic> <volume>4</volume><italic>(</italic><issue>3</issue><italic>),</italic> <fpage>45</fpage>&#x2013;<lpage>51</lpage>. DOI <pub-id pub-id-type="doi">10.1016/j.seta.2013.09.003</pub-id>.</mixed-citation></ref>
<ref id="ref-25"><label>25.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Haby</surname>, <given-names>J.</given-names></string-name></person-group> (<year>2022</year>). <article-title>Wind speed increasing with heights</article-title>. <ext-link ext-link-type="uri" xlink:href="https://www.theweatherprediction.com/habyhints3/749/">https://www.theweatherprediction.com/habyhints3/749/</ext-link>.</mixed-citation></ref>
<ref id="ref-26"><label>26.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Shrestha</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Di</surname>, <given-names>L. P.</given-names></string-name>, <string-name><surname>Eugene</surname>, <given-names>G. Y.</given-names></string-name>, <string-name><surname>Kang</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Shao</surname>, <given-names>Y. Z.</given-names></string-name> <etal>et al.</etal></person-group> (<year>2017</year>). <article-title>Regression model to estimate flood impact on corn yield using MODIS NDVI and USDA cropland data layer</article-title>. <source>Journal of Integrative Agriculture</source><italic>,</italic> <volume>16</volume><italic>(</italic><issue>2</issue><italic>),</italic> <fpage>398</fpage>&#x2013;<lpage>407</lpage>. DOI <pub-id pub-id-type="doi">10.1016/S2095-3119(16)61502-2</pub-id>.</mixed-citation></ref>
</ref-list>
</back>
</article>