<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">33422</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2023.033422</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Lightning Search Algorithm with Deep Transfer Learning-Based Vehicle Classification</article-title>
<alt-title alt-title-type="left-running-head">Lightning Search Algorithm with Deep Transfer Learning-Based Vehicle Classification</alt-title>
<alt-title alt-title-type="right-running-head">Lightning Search Algorithm with Deep Transfer Learning-Based Vehicle Classification</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Alnfiai</surname><given-names>Mrim M.</given-names>
</name><email>m.alnofiee@tu.edu.sa</email></contrib>
<aff><institution>Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099</institution>, <addr-line>Taif, 21944</addr-line>, <country>Saudi Arabia</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Mrim M. Alnfiai. Email: <email>m.alnofiee@tu.edu.sa</email></corresp>
</author-notes>
<pub-date publication-format="print" date-type="pub" iso-8601-date="2022-12-15"><day>15</day>
<month>12</month>
<year>2022</year></pub-date>
<volume>74</volume>
<issue>3</issue>
<fpage>6505</fpage>
<lpage>6521</lpage>
<history>
<date date-type="received">
<day>16</day>
<month>6</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>10</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Alnfiai</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Alnfiai</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_33422.pdf"></self-uri>
<abstract>
<p>There has been a drastic increase in the production of vehicles in recent years across the globe. In this scenario, vehicle classification systems play a vital part in the design of Intelligent Transportation Systems (ITS) for automatic highway toll collection, autonomous driving, and traffic management. Recently, computer vision and pattern recognition models have proven useful in designing effective vehicle classification systems. However, these models are trained on a small number of hand-engineered features derived from small datasets, so they cannot be applied under real-time road traffic conditions. Recent developments in Deep Learning (DL)-enabled vehicle classification models help resolve the issues that exist in traditional models. Against this background, the current study develops a Lightning Search Algorithm with Deep Transfer Learning-based Vehicle Classification Model for ITS, named the LSADTL-VCITS model. The key objective of the presented LSADTL-VCITS model is to automatically detect and classify vehicle types. To accomplish this, the presented LSADTL-VCITS model initially employs the You Only Look Once (YOLO)-v5 object detector with Capsule Network (CapsNet) as the baseline model. In addition, the proposed LSADTL-VCITS model applies the LSA with a Multilayer Perceptron (MLP) for the detection and classification of vehicles. The performance of the proposed LSADTL-VCITS model was experimentally validated using a benchmark dataset and the outcomes were examined under several measures. The experimental outcomes established the superiority of the proposed LSADTL-VCITS model over existing approaches.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Intelligent transportation system</kwd>
<kwd>object detection</kwd>
<kwd>vehicle classification</kwd>
<kwd>deep learning</kwd>
<kwd>machine learning</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>The advent of Machine Learning (ML) techniques and their application to problem solving in multiple domains, replacing statistical methods, have raised performance thresholds to higher levels [<xref ref-type="bibr" rid="ref-1">1</xref>]. Transportation systems have been positively influenced by the development of ML techniques, specifically Intelligent Transportation Systems (ITS). With the expansion of big data and computational hardware such as Graphics Processing Units (GPUs), a particular class of ML named Deep Learning (DL) has gained familiarity among researchers. The ability of DL methods to handle huge volumes of data and extract knowledge from complicated systems is leveraged to achieve powerful and feasible solutions in the field of ITS [<xref ref-type="bibr" rid="ref-2">2</xref>]. Researchers make use of different DL networks to frame the issues faced in a network, for instance by resolving an issue with one of the Neural Network (NN) methods [<xref ref-type="bibr" rid="ref-3">3</xref>,<xref ref-type="bibr" rid="ref-4">4</xref>]. Several innovative solutions have been proposed and validated for traffic signal control to achieve optimum traffic management, increased transportation security through the investigation of sensor nodes, traffic rerouting models, health monitoring of transportation structures, and other such issues. These issues have been overcome with the help of robust techniques in the field of transportation engineering [<xref ref-type="bibr" rid="ref-5">5</xref>].</p>
<p>With the exponential production of vehicles across the globe, vehicle classification models serve a crucial role in different areas such as the advancement of ITS in traffic flow control systems, automatic highway toll collection, and perception in self-driving vehicles [<xref ref-type="bibr" rid="ref-6">6</xref>,<xref ref-type="bibr" rid="ref-7">7</xref>]. In early research works, loop- and laser-induction sensor-related methodologies were suggested for the classification of vehicle types [<xref ref-type="bibr" rid="ref-8">8</xref>]. Such sensors are fixed under the road pavement for the collection and analysis of data with regard to vehicles. However, the stability and accuracy of such methods are affected by unfavourable weather conditions and damage to the road pavement [<xref ref-type="bibr" rid="ref-9">9</xref>,<xref ref-type="bibr" rid="ref-10">10</xref>]. Essentially, a computer vision-related classification system is a two-step procedure; firstly, handcrafted extraction methodologies are used to obtain visual characteristics from the input visual frames [<xref ref-type="bibr" rid="ref-11">11</xref>].</p>
<p>Secondly, ML classifiers are trained on the derived characteristics to classify related data into groups [<xref ref-type="bibr" rid="ref-12">12</xref>]. These algorithms accomplish the objective properly in specific controlled environments and are convenient with regard to maintenance and installation compared to the prevailing laser- and induction-related schemes. However, these techniques are trained on confined handcrafted characteristics derived from minor datasets, whereas wide and in-depth knowledge is needed to maintain accuracy under real-time road conditions [<xref ref-type="bibr" rid="ref-13">13</xref>]. In recent years, DL-related Feature Selection (FS) and classification techniques have been introduced, which show better adaptability and applicability than the conventional classification methods. Convolutional Neural Network (CNN)-related classification algorithms have reached high accuracy on large-scale image datasets because of their sophisticated architecture.</p>
<p>The current study develops a Lightning Search Algorithm with Deep Transfer Learning-based Vehicle Classification Model for ITS, abbreviated as the LSADTL-VCITS model. The key objective of the presented LSADTL-VCITS model is to automatically detect and classify vehicle types. To accomplish this, the presented LSADTL-VCITS model initially employs the You Only Look Once (YOLO)-v5 object detector with Capsule Network (CapsNet) as a baseline model. In addition, the LSADTL-VCITS model applies the LSA with a Multilayer Perceptron (MLP) for the detection and classification of vehicles. The performance of the proposed LSADTL-VCITS model was experimentally validated using a benchmark dataset and the outcomes were examined under several measures.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Works</title>
<p>Butt&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-14">14</xref>] suggested a CNN-related vehicle classification algorithm to improve the robustness of vehicle classification in real-time applications. The pre-trained Inception-v3, AlexNet, Visual Geometry Group (VGG), ResNet, and GoogleNet models were fine-tuned on self-constructed vehicle datasets to evaluate their execution efficiency in terms of convergence and accuracy. Based on its superior performance, the ResNet architecture was selected and extended with a new classification block. In the literature [<xref ref-type="bibr" rid="ref-15">15</xref>], a DL-related traffic safety solution was suggested for a combination of manual and autonomous vehicle classification in a 5G-enabled ITS. In this method, the final intention probability was acquired by applying the mean rule in the decision layer. The authors of an earlier study [<xref ref-type="bibr" rid="ref-16">16</xref>] designed a target detection system based on DL methods, especially CNN and NN modelling. Building on an analysis of the conventional Haar-like vehicle recognition system, that study suggested a vehicle recognition system based on a CNN with Fused Edge features (FE-CNN).</p>
<p>Ashraf&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-17">17</xref>] provided a DL-related Intrusion Detection System (IDS) for ITS to identify suspicious activities in Vehicle-to-Infrastructure (V2I) networks, In-Vehicle Networks (IVN), and Vehicle-to-Vehicle (V2V) transmissions. A DL architecture-related Long Short-Term Memory (LSTM) autoencoder system was developed earlier to identify intrusions in the central network gateways of Autonomous Vehicles (AVs). Tsai&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-18">18</xref>] suggested an optimized vehicle classification and detection technique on the basis of DL technology for intelligent transportation applications; in that study, the authors enhanced the prevailing CNN architecture by fine-tuning it for intelligent transportation applications. Wang&#x00A0;et&#x00A0;al.&#x00A0;[<xref ref-type="bibr" rid="ref-19">19</xref>] recommended a new Rear-end Collision Prediction Mechanism by means of a DL technique (RCPM), in which a CNN was utilized. In RCPM, the dataset was expanded and smoothed on the basis of genetic theory to alleviate the class imbalance issue. The pre-processed dataset was split into training and testing datasets as inputs to train the CNN method.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>The Proposed Model</title>
<p>In this study, a novel LSADTL-VCITS approach is introduced for the identification and classification of vehicles in an ITS environment. The presented LSADTL-VCITS model has two stages: vehicle detection and vehicle classification. First, the YOLO-v5 model is applied for the recognition of vehicles. Second, the LSA is used with the MLP method for the classification of vehicles under distinct classes.</p>
<sec id="s3_1">
<label>3.1</label>
<title>Vehicle Detection Module</title>
<p>In this study, the YOLO-v5 model is applied for the recognition of vehicles. YOLOv5, a state-of-the-art version of the YOLO family, is an established technique known for its fast recognition speed and high accuracy, achieving recognition times as low as 2&#x00A0;ms per image on a single NVIDIA Tesla V100 [<xref ref-type="bibr" rid="ref-20">20</xref>]. The presented technique takes an input image, identifies objects in chunks, and integrates the results into a single image; thus, the YOLOv5 technique is selected as the object recognition technique for its superior recognition speed and real-time efficiency. The YOLOv5 network has three important components: the backbone, the feature pyramid network, and the recognition head.</p>
<p>The loss function is formulated as follows.</p>
<p><disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi>l</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>o</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:math></disp-formula>where <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>o</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:math></inline-formula> <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> signify the bounding box regression loss function, classifier loss function, and confidence loss function, respectively.</p>
<p>Bounding box regression loss function can be determined as given below.</p>
<p><disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>b</mml:mi><mml:mi>o</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">d</mml:mi></mml:mrow></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mi>S</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:munderover><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:munderover><mml:msubsup><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi>x</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msubsup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msubsup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi>w</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msubsup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi>h</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msubsup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:math></disp-formula></p>
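For concreteness, the box-regression term can be sketched in NumPy as below; the tensor shapes, the function name, and the value of the coordinate weight are illustrative assumptions rather than the exact YOLOv5 training configuration:

```python
import numpy as np

def bbox_regression_loss(pred, target, obj_mask, lambda_coord=0.05):
    """Sketch of the box loss: squared coordinate error summed over the
    S*S grid cells and B anchors that are responsible for a target.

    pred, target : (S*S, B, 4) arrays holding (x, y, w, h) per anchor
    obj_mask     : (S*S, B) array, 1 where anchor (i, j) contains a target
    lambda_coord : illustrative position-loss coefficient (assumption)
    """
    scale = 2.0 - target[..., 2] * target[..., 3]   # the (2 - w_i x h_i) factor
    sq_err = ((pred - target) ** 2).sum(axis=-1)    # (x-x^)^2 + (y-y^)^2 + (w-w^)^2 + (h-h^)^2
    return float(lambda_coord * (obj_mask * scale * sq_err).sum())
```

The `(2 - w * h)` factor gives small boxes a proportionally larger weight, so localization errors on small vehicles are not drowned out by large ones.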
<p>Further, classifier loss function is expressed as follows</p>
<p><disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>c</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">s</mml:mi></mml:mrow></mml:mrow></mml:msub><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mi>S</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:msubsup><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:msubsup><mml:msubsup><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>c</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">s</mml:mi></mml:mrow></mml:mrow></mml:msub><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>p</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>c</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:math></disp-formula></p>
<p>Confidence loss function can be calculated as given below</p>
<p><disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">b</mml:mi><mml:mi mathvariant="italic">j</mml:mi></mml:mrow></mml:mrow></mml:msub><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mi>S</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:msubsup><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:msubsup><mml:msubsup><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">b</mml:mi><mml:mi mathvariant="italic">j</mml:mi></mml:mrow></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>c</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:msup><mml:mi>S</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:msubsup><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>B</mml:mi></mml:mrow></mml:msubsup><mml:msubsup><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>c</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:math></disp-formula></p>
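The confidence term splits the squared confidence error into a down-weighted no-object part and an object part; a minimal NumPy sketch follows, where the function name and the two lambda values are illustrative assumptions:

```python
import numpy as np

def confidence_loss(conf_pred, conf_true, obj_mask,
                    lambda_noobj=0.5, lambda_obj=1.0):
    """Sketch of the confidence loss: squared error over all anchors,
    with the no-object anchors down-weighted so that the many empty
    cells do not dominate the gradient."""
    sq_err = (conf_pred - conf_true) ** 2
    noobj = ((1.0 - obj_mask) * sq_err).sum()   # anchors holding no target
    obj = (obj_mask * sq_err).sum()             # anchors responsible for a target
    return float(lambda_noobj * noobj + lambda_obj * obj)
```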
<p>Here, <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">d</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> denotes the position loss coefficient, <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">s</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> denotes the category loss coefficient, <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mrow><mml:mover><mml:mi>x</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula>, <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mrow><mml:mover><mml:mi>y</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> refer to the true central coordinates of the target, and <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mrow><mml:mover><mml:mi>w</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mo>,</mml:mo></mml:math></inline-formula> <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mrow><mml:mover><mml:mi>h</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> represent the true width and height of the target.</p>
<p>When the anchor box at <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mo stretchy="false">(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> contains a target, the value <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:msubsup><mml:mi>I</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is 1; else, the value is <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mn>0.</mml:mn></mml:math></inline-formula> <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>c</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> denotes the class probability of the target, and <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:msub><mml:mrow><mml:mover><mml:mi>p</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>c</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> refers to the true value of the categories. The length of this probability vector is equal to the total number of categories, <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mi>C</mml:mi><mml:mo>.</mml:mo></mml:math></inline-formula></p>
<p>Besides, the CapsNet model is also utilized as the baseline model. CapsNet is built from capsules [<xref ref-type="bibr" rid="ref-21">21</xref>], i.e., groups of neurons whose activity vector signifies the discrete parameters of a particular entity. In addition, the capsule exhibits better resistance to white-box adversarial attacks compared to a standard CNN, since the capsule has the capacity to retain all spatial information. The underlying CapsNet contains a convolution layer, an alternative convolution layer named the PrimaryCaps layer, and a final layer named the DigitCaps layer. Every set of 8 scalars in a feature-map tensor constitutes the <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mi>i</mml:mi></mml:math></inline-formula>-th PrimaryCap. <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> signifies the output of the <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mi>i</mml:mi></mml:math></inline-formula>-th PrimaryCap, <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mrow><mml:mover><mml:mi>u</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> specifies the prediction vector, viz. the input of the <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mi>j</mml:mi></mml:math></inline-formula>-th final DigitCap, and <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> refers to the weight matrix.</p>
<p><disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:msub><mml:mrow><mml:mover><mml:mi>u</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
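The prediction step is a per-pair matrix-vector product: each weight matrix maps the output of one PrimaryCap into the space of one DigitCap. A minimal sketch, with capsule counts and dimensions chosen purely for illustration:

```python
import numpy as np

def prediction_vectors(u, W):
    """Compute every prediction vector u_hat[i, j] = W[i, j] @ u[i].

    u : (num_primary, in_dim) outputs of the PrimaryCaps
    W : (num_primary, num_digit, out_dim, in_dim) learned weight matrices
    Returns an array of shape (num_primary, num_digit, out_dim).
    """
    # einsum contracts the in_dim axis, leaving one out_dim vector per (i, j) pair
    return np.einsum('ijkl,il->ijk', W, u)
```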
<p>Routing by agreement transfers the output of the PrimaryCaps to the final DigitCaps by enhancing or reducing the closeness between PrimaryCaps and DigitCaps, in place of a pooling process, thereby retaining the spatial relationships among object parts. The coupling coefficient between two Caps is increased when their outputs agree. <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> indicates a logit of the <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mrow><mml:mi mathvariant="italic">S</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">f</mml:mi><mml:mi mathvariant="italic">t</mml:mi><mml:mi mathvariant="italic">m</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">x</mml:mi></mml:mrow></mml:math></inline-formula> operation that defines the coupling coefficient <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> between Caps <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mi>i</mml:mi></mml:math></inline-formula> in the layer below and Caps <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mi>j</mml:mi></mml:math></inline-formula> in the layer above, as follows.</p>
<p><disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mi mathvariant="normal">&#x03A3;</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mover><mml:mi>u</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p>The vector <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> symbolizes the input of the DigitCaps layer, and its length is constrained to one by the squash function to attain the output <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. The inner product of <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mrow><mml:mover><mml:mi>u</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> updates the <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mi>log</mml:mi></mml:math></inline-formula> likelihood <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>: the more similar the two vectors are, the longer the vector <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> will be. <xref ref-type="fig" rid="fig-1">Fig. 1</xref> depicts the framework of CapsNet.</p>
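The routing loop described above can be sketched in NumPy as follows; the number of routing iterations and the array shapes are illustrative assumptions, not the exact configuration used in this work:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Squash non-linearity: keeps the direction of s, maps its length into [0, 1)."""
    sq_norm = (s ** 2).sum(axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, iterations=3):
    """Routing by agreement: couplings c_ij = softmax over j of logits b_ij,
    s_j = sum_i c_ij * u_hat[j|i], and the agreement <v_j, u_hat> raises b_ij.

    u_hat : (num_primary, num_digit, dim) prediction vectors
    Returns the DigitCaps outputs v of shape (num_digit, dim).
    """
    b = np.zeros(u_hat.shape[:2])                             # routing logits b_ij, start uniform
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over DigitCaps j
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum over PrimaryCaps i
        v = squash(s)                                         # candidate outputs v_j
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update to the logits
    return v
```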
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Structure of CapsNet</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-1.png"/>
</fig>
<p>Several CapsNet variants exist. One variant utilizes capsule layers at three levels to learn dissimilar features and focuses on multi-dimensional vectors. DeepCaps exploits 3-D convolution layers and surpasses the state-of-the-art results in the CapsNet domain. CapsNet has been extensively applied in several fields. The attention mechanism has achieved tremendous growth in the field of Computer Vision (CV): it helps the model emphasize the relationships among image regions and capture long-range dependencies across them. SENet uses channel-wise importance so that channels of real significance receive more weight in the network. GCNet fuses the benefits of the Squeeze-and-Excitation (SE) block and the Non-local block to attain a better global context block.</p>
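The SE idea mentioned above — squeeze each channel to a scalar, excite through a small bottleneck, then rescale the channels by their learned importance — can be sketched as follows. The weight shapes and reduction ratio are hypothetical, assumed only for illustration.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a feature map x of shape (C, H, W).

    w1 has shape (C // r, C) and w2 has shape (C, C // r), forming the
    two fully connected layers of the bottleneck (r = reduction ratio)."""
    z = x.mean(axis=(1, 2))                      # squeeze: one scalar per channel
    h = np.maximum(w1 @ z, 0.0)                  # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))          # FC + sigmoid -> weights in (0, 1)
    return x * s[:, None, None]                  # recalibrate channels by importance
```

Each output channel is the input channel multiplied by a single learned scalar, which is exactly the channel-wise reweighting the text attributes to SENet.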
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Vehicle Classification Module</title>
<p>After the vehicle detection process, vehicle classification is carried out using LSA with the MLP model. The Perceptron is a basic Artificial Neural Network (ANN) architecture based on a slightly different artificial neuron called the Threshold Logic Unit (TLU) or the Linear Threshold Unit (LTU) [<xref ref-type="bibr" rid="ref-22">22</xref>]. The TLU evaluates the weighted sum of its inputs as follows:</p>
<p><disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mi>z</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mi>w</mml:mi></mml:math></disp-formula></p>
<p>Then, a step function is applied for the sum and the outcomes are regarded as output.</p>
<p><disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>w</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mi>z</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mi>w</mml:mi></mml:math></inline-formula>. A Perceptron is composed of a single layer of TLUs, each linked to all the inputs. When every neuron in a layer is connected to every neuron in the previous layer, the layer is called a Fully-Connected (FC) or dense layer. The resulting ANN is also known as MLP. In order to train the MLP, the Back Propagation training method is applied for automatic computation of the gradients.</p>
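Eqs. (7) and (8) translate directly into code; a step threshold of zero is a common convention and is assumed here.

```python
import numpy as np

def tlu(x, w):
    """Threshold Logic Unit: Eq. (7) computes the weighted sum z = x^T w,
    and Eq. (8) applies a step function to z."""
    z = float(np.dot(x, w))          # weighted sum of the inputs
    return 1 if z >= 0.0 else 0      # Heaviside step (threshold at 0, an assumption)

def dense_layer(x, W):
    """A fully connected layer of TLUs; column j of W holds unit j's weights."""
    return [tlu(x, W[:, j]) for j in range(W.shape[1])]
```

Stacking such dense layers and replacing the non-differentiable step with a smooth activation is what makes the MLP trainable with Back Propagation.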
<p>In order to adjust the parameters of the MLP approach, LSA is employed. LSA is a metaheuristic approach that simulates the natural phenomenon of lightning [<xref ref-type="bibr" rid="ref-23">23</xref>]. LSA is based on the Step Leader (SL) mechanism, in which fast particles called projectiles act as the individuals. The primary stage of LSA is to determine the projectiles that constitute the population; each candidate solution corresponds to the tip of the current SL. The different phases of LSA are detailed in the subsequent sections.</p>
<p>When a projectile travels under normal conditions through the atmosphere, its kinetic energy is reduced by collisions with the atoms and molecules present in the air. The velocity of the projectile is expressed as follows:</p>
<p><disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msqrt><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>c</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:msqrt><mml:mo>&#x2212;</mml:mo><mml:mi>s</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>m</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:msup><mml:mi>c</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> denote the current and the initial velocity of the projectile, respectively. <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mi>c</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> signify the speed of light and the rate of ionization, respectively; the rate of ionization is assumed to be constant. <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mi>m</mml:mi></mml:math></inline-formula> stands for the mass of the projectile, and <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mi>s</mml:mi></mml:math></inline-formula> denotes the length of the path travelled. From <xref ref-type="disp-formula" rid="eqn-9">Eq. (9)</xref>, it can be observed that the velocity depends on the mass of the projectile and the position of the leader tip. Therefore, both exploration and exploitation in LSA are controlled through the relative energies of the SLs.</p>
<p>The SL has another property named &#x2018;forking&#x2019;, which happens when two symmetrical, simultaneous branches occur. Forking is modelled in two ways: (1) using the opposite number to represent the symmetrical channel created by nuclei collision, as per the following formula:</p>
<p><disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:msub><mml:mover><mml:mi>P</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>U</mml:mi><mml:mi>B</mml:mi><mml:mo>+</mml:mo><mml:mi>L</mml:mi><mml:mi>B</mml:mi><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula>where <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:msub><mml:mover><mml:mi>p</mml:mi><mml:mo accent="false">&#x00AF;</mml:mo></mml:mover><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denote the projectile and its opposite, respectively, while <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mi>U</mml:mi><mml:mi>B</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mi>L</mml:mi><mml:mi>B</mml:mi></mml:math></inline-formula> signify the upper and lower bounds of the search space.</p>
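The opposition step in Eq. (10) is a one-liner; keeping the fitter of the original and the opposite channel (for a minimization problem) is an assumption about how forking is typically exploited, not a step stated in the text.

```python
def fork(p, lb, ub, fitness):
    """Eq. (10): generate the opposite projectile within [lb, ub] and keep
    whichever of the pair has the lower (better) fitness."""
    p_bar = ub + lb - p                                   # opposite number
    return p_bar if fitness(p_bar) < fitness(p) else p    # keep the better channel
```
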
<p>LSA employs three kinds of projectiles. Transition Projectiles (TP) are used to construct the First-Step Leader (FSL) population of size <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mi>N</mml:mi></mml:math></inline-formula>. Space Projectiles (SP) attempt to determine the position of the best leader. Finally, Lead Projectiles (LP) signify the overall best position among the <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:mi>N</mml:mi></mml:math></inline-formula> SLs. For clarification, the particulars of all these types are provided subsequently.</p>
<p>The leader tip is created at the primary stage by the transition projectiles, since the ejected projectile is generated at random within the thundercell. It is therefore modelled as a random number drawn from the uniform distribution defined in <xref ref-type="disp-formula" rid="eqn-11">Eq. (11)</xref>.</p>
<p><disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left left" rowspacing=".2em" columnspacing="1em" displaystyle="false"><mml:mtr><mml:mtd><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>b</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>a</mml:mi></mml:mrow></mml:mfrac></mml:mstyle></mml:mtd><mml:mtd><mml:mtext>for&#x00A0;</mml:mtext><mml:mi>a</mml:mi><mml:mo>&#x2264;</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mo>&#x2264;</mml:mo><mml:mi>b</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mo>&#x003C;</mml:mo><mml:mi>a</mml:mi><mml:mtext>&#x00A0;or&#x00A0;</mml:mtext><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mo>&#x003E;</mml:mo><mml:mi>b</mml:mi></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mi>a</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:mi>b</mml:mi></mml:math></inline-formula> denote the bounds of the search space. <inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> defines the solution, i.e., the tip energy <inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>l</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> of the SL <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mo stretchy="false">(</mml:mo><mml:mi>s</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. For a population of <inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mi>N</mml:mi></mml:math></inline-formula> SLs (that is, <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:mi>S</mml:mi><mml:mi>L</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mi>s</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>s</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mi>s</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">]</mml:mo></mml:math></inline-formula>), a set of <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:mi>N</mml:mi></mml:math></inline-formula> transition projectiles <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:mi>P</mml:mi><mml:mi>T</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy="false">]</mml:mo></mml:math></inline-formula> is required. <xref ref-type="fig" rid="fig-2">Fig. 2</xref> demonstrates the flowchart of LSA.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Flowchart of LSA</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-2.png"/>
</fig>
<p>In LSA, an SP is modelled as a random number drawn from the exponential distribution determined herewith.</p>
<p><disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left left" rowspacing=".2em" columnspacing="1em" displaystyle="false"><mml:mtr><mml:mtd><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mn>1</mml:mn><mml:mi>&#x03BC;</mml:mi></mml:mfrac></mml:mstyle><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msup><mml:mi>&#x03BC;</mml:mi></mml:mfrac></mml:mrow></mml:msup></mml:mtd><mml:mtd><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msup><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mrow><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">t</mml:mi><mml:mi mathvariant="italic">h</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">w</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>Here, <inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula> refers to the shape parameter of the distribution, which can be utilized to control the SP <inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:msup><mml:mi>P</mml:mi><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy="false">]</mml:mo></mml:math></inline-formula> at <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>p</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula>. The <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula> for the <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:msup><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mi>h</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> SP is set to the distance between it (i.e., <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>) and its lead projectile, <inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:msup><mml:mi>p</mml:mi><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula>. In line with this definition, the value of <inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is updated using the subsequent equation:</p>
<disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x00B1;</mml:mo><mml:mrow><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">x</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">d</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:mrow><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">x</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">d</mml:mi></mml:mrow></mml:math></inline-formula> denotes an exponential random number with shape <inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. In case of <inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>, <inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:mrow><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">x</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">d</mml:mi></mml:mrow></mml:math></inline-formula> is subtracted, since <xref ref-type="disp-formula" rid="eqn-13">Eq. (13)</xref> offers only values greater than 0. However, <inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> guarantees a stepped-leader propagation only if the projectile energy <inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:msubsup><mml:mi>E</mml:mi><mml:mrow><mml:msub><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x003E;</mml:mo><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mrow><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula>. In this scenario, it is still possible to achieve an optimum solution. Both <inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mi>s</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> are extended to <inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:mi>s</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">w</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msub></mml:math></inline-formula>, respectively, if the corresponding <inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> at step <inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> attains the optimum solution; otherwise, both (i.e., <inline-formula id="ieqn-76"><mml:math id="mml-ieqn-76"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and <inline-formula id="ieqn-77"><mml:math id="mml-ieqn-77"><mml:mi>s</mml:mi><mml:mi>l</mml:mi></mml:math></inline-formula>) remain unchanged until the step completes.</p>
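The space-projectile update of Eq. (13) can be sketched as follows. Moving toward the lead projectile when adding the exponential step is an assumption about the +/- sign choice, which the equation leaves open.

```python
import numpy as np

def update_space_projectile(p_s, p_lead, rng):
    """Eq. (13): perturb a space projectile by an exponential random step whose
    shape mu_i is the distance to the lead projectile. The sign is chosen so the
    step moves toward the lead projectile (an assumption)."""
    mu_i = abs(p_lead - p_s)                               # distance to the lead projectile
    step = rng.exponential(mu_i) if mu_i > 0 else 0.0      # exprand(mu_i)
    return p_s + step if p_lead >= p_s else p_s - step
```
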
<p>Unlike the <inline-formula id="ieqn-78"><mml:math id="mml-ieqn-78"><mml:mi>S</mml:mi><mml:mi>P</mml:mi></mml:math></inline-formula>, the LP is updated using a random number generated from the normal distribution given below.</p>
<p><disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>&#x03C3;</mml:mi><mml:msqrt><mml:mn>2</mml:mn><mml:mi>&#x03C0;</mml:mi></mml:msqrt></mml:mrow></mml:mfrac></mml:mstyle><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mn>2</mml:mn><mml:mrow><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup></mml:math></disp-formula>where <inline-formula id="ieqn-79"><mml:math id="mml-ieqn-79"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-80"><mml:math id="mml-ieqn-80"><mml:mi>&#x03BC;</mml:mi></mml:math></inline-formula> signify the scale and shape parameters of the distribution, respectively. The position of the LP is updated using a normal random number <inline-formula id="ieqn-81"><mml:math id="mml-ieqn-81"><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">m</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">d</mml:mi></mml:mrow></mml:math></inline-formula>, as determined in <xref ref-type="disp-formula" rid="eqn-15">Eq. (15)</xref>.</p>
<p><disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">w</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mi>p</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x00B1;</mml:mo><mml:mrow><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">m</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">d</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
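The lead-projectile update of Eq. (15) is a one-line perturbation; choosing the +/- sign at random is an assumption, since the equation leaves the sign open.

```python
import numpy as np

def update_lead_projectile(p_l, mu_l, sigma_l, rng):
    """Eq. (15): p_new^L = p^L +/- normrand(mu_L, sigma_L).
    The sign is picked uniformly at random (an assumption)."""
    step = rng.normal(mu_l, sigma_l)                  # normrand(mu_L, sigma_L)
    return p_l + step if rng.random() < 0.5 else p_l - step
```
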
<p>The LSA approach minimizes a Fitness Function (FF) to achieve high classification efficiency. The FF takes a positive value that signifies the quality of a candidate solution. In this case, the classifier error rate, which is to be minimized, is taken as the FF, as given in <xref ref-type="disp-formula" rid="eqn-16">Eq. (16)</xref>.</p>
<p><disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:mrow><mml:mi mathvariant="italic">fitness</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi mathvariant="italic">ClassifierErrorRate</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mtext>number of misclassified vehicles</mml:mtext><mml:mtext>Total number of vehicles</mml:mtext></mml:mfrac><mml:mo>&#x2217;</mml:mo><mml:mn>100</mml:mn></mml:math></disp-formula></p>
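The fitness in Eq. (16) is simply the misclassification percentage; a minimal sketch with hypothetical label lists:

```python
def classifier_error_rate(y_true, y_pred):
    """Eq. (16): percentage of misclassified vehicles.
    LSA searches for MLP parameters that minimise this value."""
    misclassified = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return 100.0 * misclassified / len(y_true)
```
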
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Performance Validation</title>
<p>The current section validates the vehicle classification performance of the proposed LSADTL-VCITS model using the VeRi dataset [<xref ref-type="bibr" rid="ref-24">24</xref>]. The dataset holds images under six classes, namely bus, Multi-Purpose Vehicle (MPV), pickup, sedan, truck, and van. The details of the dataset are shown in <xref ref-type="table" rid="table-1">Table 1</xref>.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Dataset details</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Class names</th>
<th>No. of images</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bus</td>
<td>1000</td>
</tr>
<tr>
<td>MPV</td>
<td>1000</td>
</tr>
<tr>
<td>Pickup</td>
<td>1000</td>
</tr>
<tr>
<td>Sedan</td>
<td>1000</td>
</tr>
<tr>
<td>Truck</td>
<td>1000</td>
</tr>
<tr>
<td>Van</td>
<td>1000</td>
</tr>
<tr>
<td>Total</td>
<td>6000</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-3">Fig. 3</xref> illustrates the confusion matrices generated by the proposed LSADTL-VCITS model on the applied test dataset. The figure indicates that the model attained effectual outcomes under all classes. On 70% of the training (TR) data, the LSADTL-VCITS model categorized 654 images under bus, 700 under MPV, 684 under pickup, 673 under sedan, 683 under truck, and 674 under the van class. Likewise, on 30% of the testing (TS) data, it categorized 332 images as bus, 284 as MPV, 287 as pickup, 295 as sedan, 276 as truck, and 284 as van. Furthermore, on 90% of the TR data, it classified 907 images under bus, 888 under MPV, 894 under pickup, 892 under sedan, 896 under truck, and 900 under van. Finally, on 10% of the TS data, it segregated 90 images into the bus class, 110 into MPV, 104 into pickup, 100 into sedan, 100 into truck, and 94 into van.</p>
<fig id="fig-3"><label>Figure 3</label><caption><title>Confusion matrices of LSADTL-VCITS technique (a) 70% of TR data, (b) 30% of TS data, (c) 80% of TR data, (d) 20% of TS data, (e) 90% of TR data, and (f) 10% of TS data</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-3.png"/></fig>
<p><xref ref-type="table" rid="table-2">Table 2</xref> offers a detailed view of the vehicle classification results accomplished by the proposed LSADTL-VCITS model under distinct aspects. <xref ref-type="fig" rid="fig-4">Fig. 4</xref> shows the comprehensive vehicle classification performance accomplished by the proposed LSADTL-VCITS model on 70% of the TR data. The figure indicates that the model achieved the maximum performance under all classes. For instance, the LSADTL-VCITS model recognized the bus images with an <inline-formula id="ieqn-82"><mml:math id="mml-ieqn-82"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.10%, <inline-formula id="ieqn-83"><mml:math id="mml-ieqn-83"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 96.18%, <inline-formula id="ieqn-84"><mml:math id="mml-ieqn-84"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.20%, and an <inline-formula id="ieqn-85"><mml:math id="mml-ieqn-85"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97.18%.</p>
<table-wrap id="table-2"><label>Table 2</label>
<caption>
<title>Results of the analysis of LSADTL-VCITS approach under distinct measures with 70% of TR and 30% of TS data</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Class labels</th>
<th>Accuracy</th>
<th>Precision</th>
<th>Recall</th>
<th>F-score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan="5">Training phase (70%)</td>
</tr>
<tr>
<td>Bus</td>
<td>99.10</td>
<td>96.18</td>
<td>98.20</td>
<td>97.18</td>
</tr>
<tr>
<td>MPV</td>
<td>99.48</td>
<td>98.31</td>
<td>98.59</td>
<td>98.45</td>
</tr>
<tr>
<td>Pickup</td>
<td>98.48</td>
<td>94.21</td>
<td>96.88</td>
<td>95.53</td>
</tr>
<tr>
<td>Sedan</td>
<td>98.90</td>
<td>96.97</td>
<td>96.42</td>
<td>96.70</td>
</tr>
<tr>
<td>Truck</td>
<td>99.00</td>
<td>98.41</td>
<td>95.66</td>
<td>97.02</td>
</tr>
<tr>
<td>Van</td>
<td>98.76</td>
<td>97.12</td>
<td>95.47</td>
<td>96.29</td>
</tr>
<tr>
<td>Average</td>
<td>98.95</td>
<td>96.87</td>
<td>96.87</td>
<td>96.86</td>
</tr>
<tr>
<td align="center" colspan="5">Testing phase (30%)</td>
</tr>
<tr>
<td>Bus</td>
<td>99.17</td>
<td>96.13</td>
<td>99.00</td>
<td>97.54</td>
</tr>
<tr>
<td>MPV</td>
<td>99.31</td>
<td>98.48</td>
<td>97.36</td>
<td>97.92</td>
</tr>
<tr>
<td>Pickup</td>
<td>98.90</td>
<td>99.73</td>
<td>93.56</td>
<td>96.55</td>
</tr>
<tr>
<td>Sedan</td>
<td>99.48</td>
<td>97.32</td>
<td>99.63</td>
<td>98.46</td>
</tr>
<tr>
<td>Truck</td>
<td>98.65</td>
<td>96.88</td>
<td>95.10</td>
<td>95.98</td>
</tr>
<tr>
<td>Van</td>
<td>98.96</td>
<td>95.13</td>
<td>98.74</td>
<td>96.90</td>
</tr>
<tr>
<td>Average</td>
<td>99.08</td>
<td>97.28</td>
<td>97.23</td>
<td>97.22</td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig-4"><label>Figure 4</label><caption><title>Results of the analysis of LSADTL-VCITS approach under 70% of TR data</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-4.png"/></fig>
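The Accuracy, Precision, Recall, and F-score columns in Tables 2 through 4 follow the usual one-vs-rest per-class definitions (precision = TP/(TP+FP), recall = TP/(TP+FN), F-score as their harmonic mean), with the "Average" row as the macro average over classes. As a self-contained sketch, they can be computed from a confusion matrix as follows; the 2-class matrix below is invented for illustration and is not taken from the paper:

```python
def per_class_metrics(cm):
    """cm[i][j] counts true class i predicted as class j. Returns, per
    class: one-vs-rest accuracy, precision, recall, and F-score,
    matching the column definitions used in the tables."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    out = []
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp
        fp = sum(cm[i][k] for i in range(n)) - tp
        tn = total - tp - fn - fp
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out.append({"accuracy": (tp + tn) / total, "precision": prec,
                    "recall": rec, "f_score": f1})
    return out

# Toy 2-class confusion matrix (invented numbers):
cm = [[8, 2],
      [1, 9]]
m = per_class_metrics(cm)
# Macro average, as in the "Average" rows of Tables 2-4:
macro_f = sum(d["f_score"] for d in m) / len(m)
```

Applying the same computation to the six-class confusion matrices of Fig. 3 reproduces the table entries.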
<p>The proposed LSADTL-VCITS system recognized the sedan images with an <inline-formula id="ieqn-86"><mml:math id="mml-ieqn-86"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.90%, <inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 96.97%, <inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 96.42%, and an <inline-formula id="ieqn-89"><mml:math id="mml-ieqn-89"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 96.70% correspondingly. 
Moreover, LSADTL-VCITS algorithm recognized the van images with an <inline-formula id="ieqn-90"><mml:math id="mml-ieqn-90"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.76%, <inline-formula id="ieqn-91"><mml:math id="mml-ieqn-91"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.12%, <inline-formula id="ieqn-92"><mml:math id="mml-ieqn-92"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 95.47%, and an <inline-formula id="ieqn-93"><mml:math id="mml-ieqn-93"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 96.29% correspondingly.</p>
<p><xref ref-type="fig" rid="fig-5">Fig. 5</xref> shows the comprehensive vehicle classification performance achieved by the proposed LSADTL-VCITS approach on 30% of TS data. The figure reveals that the proposed LSADTL-VCITS algorithm achieved the maximal performance under all the classes. For instance, the LSADTL-VCITS model recognized the bus images with an <inline-formula id="ieqn-94"><mml:math id="mml-ieqn-94"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.17%, <inline-formula id="ieqn-95"><mml:math id="mml-ieqn-95"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 96.13%, <inline-formula id="ieqn-96"><mml:math id="mml-ieqn-96"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.00%, and an <inline-formula id="ieqn-97"><mml:math id="mml-ieqn-97"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97.54%. 
Similarly, LSADTL-VCITS model recognized the sedan images with an <inline-formula id="ieqn-98"><mml:math id="mml-ieqn-98"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.48%, <inline-formula id="ieqn-99"><mml:math id="mml-ieqn-99"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.32%, <inline-formula id="ieqn-100"><mml:math id="mml-ieqn-100"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.63%, and an <inline-formula id="ieqn-101"><mml:math id="mml-ieqn-101"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.46% respectively. 
Moreover, the proposed LSADTL-VCITS method recognized the van images with an <inline-formula id="ieqn-102"><mml:math id="mml-ieqn-102"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.96%, <inline-formula id="ieqn-103"><mml:math id="mml-ieqn-103"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 95.13%, <inline-formula id="ieqn-104"><mml:math id="mml-ieqn-104"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.74%, and an <inline-formula id="ieqn-105"><mml:math id="mml-ieqn-105"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 96.90% correspondingly.</p>
<fig id="fig-5"><label>Figure 5</label><caption><title>Results of the analysis of LSADTL-VCITS approach with 30% of TS data</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-5.png"/></fig>
<p><xref ref-type="table" rid="table-3">Table 3</xref> provides a detailed view on the vehicle classification outcome attained by the proposed LSADTL-VCITS system under distinct aspects. <xref ref-type="fig" rid="fig-6">Fig. 6</xref> shows the comprehensive vehicle classification performance achieved by the proposed LSADTL-VCITS model on 80% of TR data. The figure infers that the proposed LSADTL-VCITS model achieved the maximum performance under all the classes. For instance, LSADTL-VCITS model recognized the bus images with an <inline-formula id="ieqn-106"><mml:math id="mml-ieqn-106"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.20%, <inline-formula id="ieqn-107"><mml:math id="mml-ieqn-107"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.48%, <inline-formula id="ieqn-108"><mml:math id="mml-ieqn-108"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.80%, and an <inline-formula id="ieqn-109"><mml:math id="mml-ieqn-109"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97.64% correspondingly. 
In addition, the LSADTL-VCITS model recognized the sedan images with an <inline-formula id="ieqn-110"><mml:math id="mml-ieqn-110"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.31%, <inline-formula id="ieqn-111"><mml:math id="mml-ieqn-111"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.65%, <inline-formula id="ieqn-112"><mml:math id="mml-ieqn-112"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.22%, and an <inline-formula id="ieqn-113"><mml:math id="mml-ieqn-113"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97.93%. 
Furthermore, LSADTL-VCITS methodology recognized the van images with an <inline-formula id="ieqn-114"><mml:math id="mml-ieqn-114"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.26%, <inline-formula id="ieqn-115"><mml:math id="mml-ieqn-115"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.22%, <inline-formula id="ieqn-116"><mml:math id="mml-ieqn-116"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.35%, and an <inline-formula id="ieqn-117"><mml:math id="mml-ieqn-117"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97.78% respectively.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Results of the analysis of LSADTL-VCITS approach under distinct measures with 80% of TR and 20% of TS data</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Class labels</th>
<th>Accuracy</th>
<th>Precision</th>
<th>Recall</th>
<th>F-score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan="5">Training phase (80%)</td>
</tr>
<tr>
<td>Bus</td>
<td>99.20</td>
<td>97.48</td>
<td>97.80</td>
<td>97.64</td>
</tr>
<tr>
<td>MPV</td>
<td>98.94</td>
<td>98.04</td>
<td>95.51</td>
<td>96.76</td>
</tr>
<tr>
<td>Pickup</td>
<td>98.57</td>
<td>92.96</td>
<td>98.88</td>
<td>95.83</td>
</tr>
<tr>
<td>Sedan</td>
<td>99.31</td>
<td>98.65</td>
<td>97.22</td>
<td>97.93</td>
</tr>
<tr>
<td>Truck</td>
<td>99.33</td>
<td>98.87</td>
<td>97.11</td>
<td>97.98</td>
</tr>
<tr>
<td>Van</td>
<td>99.26</td>
<td>98.22</td>
<td>97.35</td>
<td>97.78</td>
</tr>
<tr>
<td>Average</td>
<td>99.10</td>
<td>97.37</td>
<td>97.31</td>
<td>97.32</td>
</tr>
<tr>
<td align="center" colspan="5">Testing phase (20%)</td>
</tr>
<tr>
<td>Bus</td>
<td>99.33</td>
<td>97.08</td>
<td>99.40</td>
<td>98.22</td>
</tr>
<tr>
<td>MPV</td>
<td>99.28</td>
<td>97.59</td>
<td>97.93</td>
<td>97.76</td>
</tr>
<tr>
<td>Pickup</td>
<td>99.22</td>
<td>97.62</td>
<td>97.62</td>
<td>97.62</td>
</tr>
<tr>
<td>Sedan</td>
<td>99.17</td>
<td>97.36</td>
<td>97.68</td>
<td>97.52</td>
</tr>
<tr>
<td>Truck</td>
<td>99.22</td>
<td>98.57</td>
<td>96.50</td>
<td>97.53</td>
</tr>
<tr>
<td>Van</td>
<td>99.11</td>
<td>97.93</td>
<td>96.60</td>
<td>97.26</td>
</tr>
<tr>
<td>Average</td>
<td>99.22</td>
<td>97.69</td>
<td>97.62</td>
<td>97.65</td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Results of the analysis of LSADTL-VCITS approach under 80% of TR data</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-6.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-7">Fig. 7</xref> illustrates the comprehensive vehicle classification performance achieved by the proposed LSADTL-VCITS model on 20% of TS data. The figure reveals that the proposed LSADTL-VCITS approach attained the maximum performance under all the classes. For instance, the proposed LSADTL-VCITS technique recognized the bus images with an <inline-formula id="ieqn-118"><mml:math id="mml-ieqn-118"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.33%, <inline-formula id="ieqn-119"><mml:math id="mml-ieqn-119"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.08%, <inline-formula id="ieqn-120"><mml:math id="mml-ieqn-120"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.40%, and an <inline-formula id="ieqn-121"><mml:math id="mml-ieqn-121"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.22%. 
Likewise, LSADTL-VCITS system recognized the sedan images with an <inline-formula id="ieqn-122"><mml:math id="mml-ieqn-122"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.17%, <inline-formula id="ieqn-123"><mml:math id="mml-ieqn-123"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.36%, <inline-formula id="ieqn-124"><mml:math id="mml-ieqn-124"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.68%, and an <inline-formula id="ieqn-125"><mml:math id="mml-ieqn-125"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97.52% correspondingly. 
Additionally, the proposed LSADTL-VCITS methodology recognized the van images with an <inline-formula id="ieqn-126"><mml:math id="mml-ieqn-126"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.11%, <inline-formula id="ieqn-127"><mml:math id="mml-ieqn-127"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97.93%, <inline-formula id="ieqn-128"><mml:math id="mml-ieqn-128"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 96.60%, and an <inline-formula id="ieqn-129"><mml:math id="mml-ieqn-129"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97.26% correspondingly.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Results of the analysis of LSADTL-VCITS approach under 20% of TS data</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-7.png"/>
</fig>
<p><xref ref-type="table" rid="table-4">Table 4</xref> shows the detailed vehicle classification result yielded by the proposed LSADTL-VCITS methodology under different aspects. <xref ref-type="fig" rid="fig-8">Fig. 8</xref> depicts the comprehensive vehicle classification performance produced by the proposed LSADTL-VCITS model on 90% of TR data. The figure indicates that the proposed LSADTL-VCITS methodology reached the maximal performance under all the classes. For instance, the proposed LSADTL-VCITS system recognized the bus images with an <inline-formula id="ieqn-130"><mml:math id="mml-ieqn-130"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.87%, <inline-formula id="ieqn-131"><mml:math id="mml-ieqn-131"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.56%, <inline-formula id="ieqn-132"><mml:math id="mml-ieqn-132"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.67%, and an <inline-formula id="ieqn-133"><mml:math id="mml-ieqn-133"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 99.62% respectively. 
Along with that, the proposed LSADTL-VCITS model recognized the sedan images with an <inline-formula id="ieqn-134"><mml:math id="mml-ieqn-134"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.76%, <inline-formula id="ieqn-135"><mml:math id="mml-ieqn-135"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.44%, <inline-formula id="ieqn-136"><mml:math id="mml-ieqn-136"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.11%, and an <inline-formula id="ieqn-137"><mml:math id="mml-ieqn-137"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 99.28% respectively. 
Finally, the LSADTL-VCITS approach recognized the van images with an <inline-formula id="ieqn-138"><mml:math id="mml-ieqn-138"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.85%, <inline-formula id="ieqn-139"><mml:math id="mml-ieqn-139"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.78%, <inline-formula id="ieqn-140"><mml:math id="mml-ieqn-140"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.34%, and an <inline-formula id="ieqn-141"><mml:math id="mml-ieqn-141"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 99.56%.</p>
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Results of the analysis of LSADTL-VCITS approach under distinct measures with 90% of TR and 10% of TS data</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Class labels</th>
<th>Accuracy</th>
<th>Precision</th>
<th>Recall</th>
<th>F-score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan="5">Training phase (90%)</td>
</tr>
<tr>
<td>Bus</td>
<td>99.87</td>
<td>99.56</td>
<td>99.67</td>
<td>99.62</td>
</tr>
<tr>
<td>MPV</td>
<td>99.93</td>
<td>99.78</td>
<td>99.78</td>
<td>99.78</td>
</tr>
<tr>
<td>Pickup</td>
<td>99.81</td>
<td>99.00</td>
<td>99.89</td>
<td>99.44</td>
</tr>
<tr>
<td>Sedan</td>
<td>99.76</td>
<td>99.44</td>
<td>99.11</td>
<td>99.28</td>
</tr>
<tr>
<td>Truck</td>
<td>99.93</td>
<td>99.89</td>
<td>99.67</td>
<td>99.78</td>
</tr>
<tr>
<td>Van</td>
<td>99.85</td>
<td>99.78</td>
<td>99.34</td>
<td>99.56</td>
</tr>
<tr>
<td>Average</td>
<td>99.86</td>
<td>99.57</td>
<td>99.57</td>
<td>99.57</td>
</tr>
<tr>
<td align="center" colspan="5">Testing phase (10%)</td>
</tr>
<tr>
<td>Bus</td>
<td>99.83</td>
<td>98.90</td>
<td>100.00</td>
<td>99.45</td>
</tr>
<tr>
<td>MPV</td>
<td>99.83</td>
<td>99.10</td>
<td>100.00</td>
<td>99.55</td>
</tr>
<tr>
<td>Pickup</td>
<td>99.83</td>
<td>100.00</td>
<td>99.05</td>
<td>99.52</td>
</tr>
<tr>
<td>Sedan</td>
<td>100.00</td>
<td>100.00</td>
<td>100.00</td>
<td>100.00</td>
</tr>
<tr>
<td>Truck</td>
<td>99.83</td>
<td>100.00</td>
<td>99.01</td>
<td>99.50</td>
</tr>
<tr>
<td>Van</td>
<td>100.00</td>
<td>100.00</td>
<td>100.00</td>
<td>100.00</td>
</tr>
<tr>
<td>Average</td>
<td>99.89</td>
<td>99.67</td>
<td>99.68</td>
<td>99.67</td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig-8"><label>Figure 8</label><caption><title>Results of the analysis of LSADTL-VCITS approach under 90% of TR data</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-8.png"/></fig>
<p><xref ref-type="fig" rid="fig-9">Fig. 9</xref> demonstrates the comprehensive vehicle classification performance attained by the proposed LSADTL-VCITS approach on 10% of TS data. The figure reveals that the proposed LSADTL-VCITS algorithm reached high performance under all the classes. For instance, LSADTL-VCITS model recognized the bus images with an <inline-formula id="ieqn-142"><mml:math id="mml-ieqn-142"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.83%, <inline-formula id="ieqn-143"><mml:math id="mml-ieqn-143"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.90%, <inline-formula id="ieqn-144"><mml:math id="mml-ieqn-144"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 100%, and an <inline-formula id="ieqn-145"><mml:math id="mml-ieqn-145"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 99.45% correspondingly. 
Besides, the proposed LSADTL-VCITS approach recognized the sedan images with an <inline-formula id="ieqn-146"><mml:math id="mml-ieqn-146"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 100%, <inline-formula id="ieqn-147"><mml:math id="mml-ieqn-147"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 100%, <inline-formula id="ieqn-148"><mml:math id="mml-ieqn-148"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 100%, and an <inline-formula id="ieqn-149"><mml:math id="mml-ieqn-149"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 100% correspondingly. 
Furthermore, the proposed LSADTL-VCITS method recognized the van images with an <inline-formula id="ieqn-150"><mml:math id="mml-ieqn-150"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 100%, <inline-formula id="ieqn-151"><mml:math id="mml-ieqn-151"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 100%, <inline-formula id="ieqn-152"><mml:math id="mml-ieqn-152"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 100%, and an <inline-formula id="ieqn-153"><mml:math id="mml-ieqn-153"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 100% correspondingly.</p>
<fig id="fig-9"><label>Figure 9</label><caption><title>Results of the analysis of LSADTL-VCITS approach under 10% of TS data</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-9.png"/></fig>
<p><xref ref-type="fig" rid="fig-10">Fig. 10</xref> portrays the brief comparative accuracy analysis results accomplished by the proposed LSADTL-VCITS approach and other recent methodologies [<xref ref-type="bibr" rid="ref-11">11</xref>]. The results imply that the proposed LSADTL-VCITS model attained superior performance under all the classes. For instance, in the bus class, the LSADTL-VCITS model offered a high accuracy of 99.83%, whereas the CCLSTSV-CNN, CNN-Fusion, ensemble, Deep CNN, and CNN-AICITS models reported lower accuracy values of 95.83%, 92.45%, 87.50%, 90.76%, and 99.55% respectively. In addition, in the MPV class, the proposed LSADTL-VCITS technique obtained a superior accuracy of 99.83%, whereas the CCLSTSV-CNN, CNN-Fusion, ensemble, Deep CNN, and CNN-AICITS approaches reported lower accuracy values of 94.07%, 95.72%, 91.02%, 91.23%, and 99.53% respectively. Also, in the truck class, the proposed LSADTL-VCITS algorithm achieved an increased accuracy of 99.83%, whereas the CCLSTSV-CNN, CNN-Fusion, ensemble, Deep CNN, and CNN-AICITS techniques reported lower accuracy values of 95.38%, 93.28%, 89.19%, 94.67%, and 99.57% respectively. At the same time, in the van class, the proposed LSADTL-VCITS approach achieved the highest accuracy of 100%, whereas the CCLSTSV-CNN, CNN-Fusion, ensemble, Deep CNN, and CNN-AICITS methodologies reported lower accuracy values of 92.79%, 91.94%, 88.85%, 89.90%, and 99.58% respectively.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Comparative analysis results of LSADTL-VCITS algorithm and other existing approaches</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-10.png"/>
</fig>
<p>Finally, an average accuracy analysis was conducted between the LSADTL-VCITS model and other recent models, and the results are shown in <xref ref-type="fig" rid="fig-11">Fig. 11</xref>. The figure implies that the ensemble model produced the least average accuracy of 88.63%. Simultaneously, the deep CNN approach reached a moderate average accuracy of 92.90%, whereas the CCLSTSV-CNN and CNN-Fusion models reached reasonably close average accuracy values of 94.81% and 94.91% respectively. Though the CNN-AICITS model reached a reasonable average accuracy of 99.58%, the proposed LSADTL-VCITS model achieved the maximum average accuracy of 99.89%. Based on these results and discussion, it is evident that the proposed LSADTL-VCITS model achieved effective vehicle type classification performance in the ITS environment.</p>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Average accuracy analysis results of LSADTL-VCITS algorithm and other existing approaches</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_33422-fig-11.png"/>
</fig>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion</title>
<p>In this study, a novel LSADTL-VCITS approach has been designed and developed for vehicle classification in the ITS environment. The presented LSADTL-VCITS approach encompasses two major stages, namely vehicle detection and vehicle classification. In the first stage, the YOLO-v5 model is applied for the vehicle detection process. In the second stage, the LSA is utilized with the MLP model for the classification of vehicles into distinct classes. The performance of the proposed LSADTL-VCITS approach was validated using a benchmark dataset, and the outcomes were inspected under several measures. The experimental outcomes highlighted the better performance of the LSADTL-VCITS model over recent approaches. Thus, the presented LSADTL-VCITS model can be utilized as a proficient tool for vehicle classification in the ITS environment. In the future, hybrid DL methods can be utilized to enhance the classification performance in the ITS environment.</p>
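The paper's trained YOLO-v5 detector and LSA-tuned MLP are not reproduced here; as a minimal sketch of the second stage only, the forward pass of a one-hidden-layer MLP over the six vehicle classes might look as follows. The feature dimension, hidden width, and random weights are placeholders for illustration and are not values from the paper; in the proposed method the weights would be trained and the hyperparameters selected by the lightning search algorithm.

```python
import numpy as np

# Second-stage classifier head sketch: maps a detected vehicle's
# feature vector to a probability over the six classes.
CLASSES = ["bus", "MPV", "pickup", "sedan", "truck", "van"]
rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with ReLU, followed by a softmax output."""
    h = np.maximum(0.0, x @ w1 + b1)   # hidden activations
    z = h @ w2 + b2                    # class logits
    e = np.exp(z - z.max())            # numerically stable softmax
    return e / e.sum()

d_feat, d_hidden = 16, 32              # placeholder dimensions
w1 = rng.standard_normal((d_feat, d_hidden)) * 0.1
b1 = np.zeros(d_hidden)
w2 = rng.standard_normal((d_hidden, len(CLASSES))) * 0.1
b2 = np.zeros(len(CLASSES))

probs = mlp_forward(rng.standard_normal(d_feat), w1, b1, w2, b2)
pred = CLASSES[int(np.argmax(probs))]  # predicted vehicle class
```

In the full pipeline, each bounding box returned by the detector would yield one such feature vector, so the classifier runs once per detected vehicle.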
</sec>
</body>
<back>
<fn-group>
<fn fn-type="other"><p><bold>Funding Statement:</bold> The authors received no specific funding for this study.</p>
</fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>J. T.</given-names> <surname>Lee</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Chung</surname></string-name></person-group>, &#x201C;<article-title>Deep learning-based vehicle classification using an ensemble of local expert and global networks</article-title>,&#x201D; in <conf-name>2017 IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW)</conf-name>, <publisher-loc>Honolulu, HI, USA</publisher-loc>, pp. <fpage>920</fpage>&#x2013;<lpage>925</lpage>, <year>2017</year>. </mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Wan</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Qiao</surname></string-name> and <string-name><given-names>Q.</given-names> <surname>Pei</surname></string-name></person-group>, &#x201C;<article-title>An edge traffic flow detection scheme based on deep learning in an intelligent transportation system</article-title>,&#x201D; <source>IEEE Transactions on Intelligent Transportation Systems</source>, vol. <volume>22</volume>, no. <issue>3</issue>, pp. <fpage>1840</fpage>&#x2013;<lpage>1852</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Nguyen</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Kieu</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Wen</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Cai</surname></string-name></person-group>, &#x201C;<article-title>Deep learning methods in transportation domain: A review</article-title>,&#x201D; <source>IET Intelligent Transport Systems</source>, vol. <volume>12</volume>, no. <issue>9</issue>, pp. <fpage>998</fpage>&#x2013;<lpage>1004</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>I.</given-names> <surname>Abunadi</surname></string-name>, <string-name><given-names>M. M.</given-names> <surname>Althobaiti</surname></string-name>, <string-name><given-names>F. N.</given-names> <surname>Al-Wesabi</surname></string-name>, <string-name><given-names>A. M.</given-names> <surname>Hilal</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Medani</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Federated learning with blockchain assisted image classification for clustered UAV networks</article-title>,&#x201D; <source>Computers, Materials &#x0026; Continua</source>, vol. <volume>72</volume>, no. <issue>1</issue>, pp. <fpage>1195</fpage>&#x2013;<lpage>1212</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Gholamhosseinian</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Seitz</surname></string-name></person-group>, &#x201C;<article-title>Vehicle classification in intelligent transport systems: An overview, methods and software perspective</article-title>,&#x201D; <source>IEEE Open Journal of Intelligent Transportation Systems</source>, vol. <volume>2</volume>, pp. <fpage>173</fpage>&#x2013;<lpage>194</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Al Duhayyim</surname></string-name>, <string-name><given-names>H. M.</given-names> <surname>Alshahrani</surname></string-name>, <string-name><given-names>F. N.</given-names> <surname>Al-Wesabi</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Abdullah Al-Hagery</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Mustafa Hilal</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Intelligent machine learning based EEG signal classification model</article-title>,&#x201D; <source>Computers, Materials &#x0026; Continua</source>, vol. <volume>71</volume>, no. <issue>1</issue>, pp. <fpage>1821</fpage>&#x2013;<lpage>1835</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. M.</given-names> <surname>Hilal</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Alsolai</surname></string-name>, <string-name><given-names>F. N.</given-names> <surname>Al-Wesabi</surname></string-name>, <string-name><given-names>M. K.</given-names> <surname>Nour</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Motwakel</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Fuzzy cognitive maps with bird swarm intelligence optimization-based remote sensing image classification</article-title>,&#x201D; <source>Computational Intelligence and Neuroscience</source>, vol. <volume>2022</volume>, no. <issue>4</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>12</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Yu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Wu</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Song</surname></string-name> and <string-name><given-names>W.</given-names> <surname>Zeng</surname></string-name></person-group>, &#x201C;<article-title>A model for fine-grained vehicle classification based on deep learning</article-title>,&#x201D; <source>Neurocomputing</source>, vol. <volume>257</volume>, no. <issue>2</issue>, pp. <fpage>97</fpage>&#x2013;<lpage>103</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Maungmai</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Nuthong</surname></string-name></person-group>, &#x201C;<article-title>Vehicle classification with deep learning</article-title>,&#x201D; in <conf-name>2019 IEEE 4th Int. Conf. on Computer and Communication Systems (ICCCS)</conf-name>, <publisher-loc>Singapore</publisher-loc>, pp. <fpage>294</fpage>&#x2013;<lpage>298</lpage>, <year>2019</year>. </mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Hao</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Jin</surname></string-name></person-group>, &#x201C;<article-title>Fine-grained traffic flow prediction of various vehicle types via fusion of multisource data and deep learning approaches</article-title>,&#x201D; <source>IEEE Transactions on Intelligent Transportation Systems</source>, vol. <volume>22</volume>, no. <issue>11</issue>, pp. <fpage>6921</fpage>&#x2013;<lpage>6930</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Srivastava</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Narayan</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Mittal</surname></string-name></person-group>, &#x201C;<article-title>A survey of deep learning techniques for vehicle detection from UAV images</article-title>,&#x201D; <source>Journal of Systems Architecture</source>, vol. <volume>117</volume>, no. <issue>11</issue>, pp. <fpage>102152</fpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Kavitha</surname></string-name> and <string-name><given-names>D. N.</given-names> <surname>Chandrappa</surname></string-name></person-group>, &#x201C;<article-title>Optimized YOLOv2 based vehicle classification and tracking for intelligent transportation system</article-title>,&#x201D; <source>Results in Control and Optimization</source>, vol. <volume>2</volume>, no. <issue>3</issue>, pp. <fpage>100008</fpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Lv</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>W.</given-names> <surname>Xiu</surname></string-name></person-group>, &#x201C;<article-title>Solving the security problem of intelligent transportation system with deep learning</article-title>,&#x201D; <source>IEEE Transactions on Intelligent Transportation Systems</source>, vol. <volume>22</volume>, no. <issue>7</issue>, pp. <fpage>4281</fpage>&#x2013;<lpage>4290</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. A.</given-names> <surname>Butt</surname></string-name>, <string-name><given-names>A. M.</given-names> <surname>Khattak</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Shafique</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Hayat</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Abid</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Convolutional neural network based vehicle classification in adverse illuminous conditions for intelligent transportation systems</article-title>,&#x201D; <source>Complexity</source>, vol. <volume>2021</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Yu</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Lin</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Alazab</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Tan</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Gu</surname></string-name></person-group>, &#x201C;<article-title>Deep learning-based traffic safety solution for a mixture of autonomous and manual vehicles in a 5G-enabled intelligent transportation system</article-title>,&#x201D; <source>IEEE Transactions on Intelligent Transportation Systems</source>, vol. <volume>22</volume>, no. <issue>7</issue>, pp. <fpage>4337</fpage>&#x2013;<lpage>4347</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Qiu</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Tian</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Al-Nabhan</surname></string-name></person-group>, &#x201C;<article-title>Deep learning-based algorithm for vehicle detection in intelligent transportation systems</article-title>,&#x201D; <source>The Journal of Supercomputing</source>, vol. <volume>77</volume>, no. <issue>10</issue>, pp. <fpage>11083</fpage>&#x2013;<lpage>11098</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Ashraf</surname></string-name>, <string-name><given-names>A. D.</given-names> <surname>Bakhshi</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Moustafa</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Khurshid</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Javed</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Novel deep learning-enabled LSTM autoencoder architecture for discovering anomalous events from intelligent transportation systems</article-title>,&#x201D; <source>IEEE Transactions on Intelligent Transportation Systems</source>, vol. <volume>22</volume>, no. <issue>7</issue>, pp. <fpage>4507</fpage>&#x2013;<lpage>4518</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>C. C.</given-names> <surname>Tsai</surname></string-name>, <string-name><given-names>C. K.</given-names> <surname>Tseng</surname></string-name>, <string-name><given-names>H. C.</given-names> <surname>Tang</surname></string-name> and <string-name><given-names>J. I.</given-names> <surname>Guo</surname></string-name></person-group>, &#x201C;<article-title>Vehicle detection and classification based on deep neural network for intelligent transportation applications</article-title>,&#x201D; in <conf-name>2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conf. (APSIPA ASC)</conf-name>, <publisher-loc>Honolulu, HI, USA</publisher-loc>, pp. <fpage>1605</fpage>&#x2013;<lpage>1608</lpage>, <year>2018</year>. </mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>X.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Qiu</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Mu</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Chen</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>A real-time collision prediction mechanism with deep learning for intelligent transportation system</article-title>,&#x201D; <source>IEEE Transactions on Vehicular Technology</source>, vol. <volume>69</volume>, no. <issue>9</issue>, pp. <fpage>9497</fpage>&#x2013;<lpage>9508</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Ge</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Zang</surname></string-name></person-group>, &#x201C;<article-title>Effective face detector based on YOLOv5 and superresolution reconstruction</article-title>,&#x201D; <source>Computational and Mathematical Methods in Medicine</source>, vol. <volume>2021</volume>, no. <issue>4</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>9</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Xiang</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Tang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Zou</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Xu</surname></string-name></person-group>, &#x201C;<article-title>MS-CapsNet: A novel multi-scale capsule network</article-title>,&#x201D; <source>IEEE Signal Processing Letters</source>, vol. <volume>25</volume>, no. <issue>12</issue>, pp. <fpage>1850</fpage>&#x2013;<lpage>1854</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Ragab</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Albukhari</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Alyami</surname></string-name> and <string-name><given-names>R. F.</given-names> <surname>Mansour</surname></string-name></person-group>, &#x201C;<article-title>Ensemble deep-learning-enabled clinical decision support system for breast cancer diagnosis and classification on ultrasound images</article-title>,&#x201D; <source>Biology</source>, vol. <volume>11</volume>, no. <issue>3</issue>, pp. <fpage>439</fpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. S.</given-names> <surname>Hassan</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Sun</surname></string-name> and <string-name><given-names>Z.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Optimization techniques applied for optimal planning and integration of renewable energy sources based on distributed generation: Recent trends</article-title>,&#x201D; <source>Cogent Engineering</source>, vol. <volume>7</volume>, no. <issue>1</issue>, pp. <fpage>1766394</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="other"><uri>https://www.v7labs.com/open-datasets/veri-dataset</uri>.</mixed-citation></ref>
</ref-list>
</back>
</article>