<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">51562</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2024.051562</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A New Speed Limit Recognition Methodology Based on Ensemble Learning: Hardware Validation</article-title>
<alt-title alt-title-type="left-running-head">A New Speed Limit Recognition Methodology Based on Ensemble Learning: Hardware Validation</alt-title>
<alt-title alt-title-type="right-running-head">A New Speed Limit Recognition Methodology Based on Ensemble Learning: Hardware Validation</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Karray</surname><given-names>Mohamed</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><email>mohamed.karray@esme.fr</email></contrib>
<contrib id="author-2" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Triki</surname><given-names>Nesrine</given-names></name><xref ref-type="aff" rid="aff-2">2</xref><email>nesrine.triki@isims.usf.tn</email></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Ksantini</surname><given-names>Mohamed</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<aff id="aff-1"><label>1</label><institution>ESME, ESME Research Lab</institution>, <addr-line>Ivry Sur Seine, 94200</addr-line>, <country>France</country></aff>
<aff id="aff-2"><label>2</label><institution>CEM, Lab ENIS, University of Sfax</institution>, <addr-line>Sfax, 3038</addr-line>, <country>Tunisia</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Authors: Mohamed Karray. Email: <email>mohamed.karray@esme.fr</email>; Nesrine Triki. Email: <email>nesrine.triki@isims.usf.tn</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2024</year></pub-date>
<pub-date date-type="pub" publication-format="electronic"><day>18</day><month>7</month><year>2024</year></pub-date>
<volume>80</volume>
<issue>1</issue>
<fpage>119</fpage>
<lpage>138</lpage>
<history>
<date date-type="received">
<day>08</day>
<month>3</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>01</day>
<month>6</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2024 Karray, Triki and Ksantini</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Karray, Triki and Ksantini</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_51562.pdf"></self-uri>
<abstract>
<p>Advanced Driver Assistance Systems (ADAS) technologies can assist drivers or be part of automatic driving systems to support the driving process and improve the level of safety and comfort on the road. The Traffic Sign Recognition System (TSRS) is one of the most important components of ADAS. Among the challenges for a TSRS is recognizing road signs with the highest accuracy and the shortest processing time. Accordingly, this paper introduces a new real-time methodology for recognizing speed limit signs based on a trio of developed modules. Firstly, the Speed Limit Detection (SLD) module uses the Haar Cascade technique to generate a new SL detector in order to localize SL signs within captured frames. Secondly, the Speed Limit Classification (SLC) module, featuring machine learning classifiers alongside a newly developed model called DeepSL, harnesses the power of a CNN architecture to extract intricate features from speed limit sign images, ensuring efficient and precise recognition. In addition, a new Speed Limit Classifiers Fusion (SLCF) module has been developed by combining trained ML classifiers and the DeepSL model using the Dempster-Shafer (DS) theory of belief functions and ensemble learning&#x2019;s voting technique. Through rigorous software and hardware validation processes, the proposed methodology has achieved highly significant F1 scores of 99.98% and 99.96% for DS theory and the voting method, respectively. Furthermore, a prototype encompassing all components demonstrates outstanding reliability and efficacy, with processing times of 150 ms for the Raspberry Pi board and 81.5 ms for the Jetson Nano board, marking a significant advancement in TSRS technology.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Driving automation</kwd>
<kwd>advanced driver assistance systems (ADAS)</kwd>
<kwd>traffic sign recognition (TSR)</kwd>
<kwd>artificial intelligence</kwd>
<kwd>ensemble learning</kwd>
<kwd>belief functions</kwd>
<kwd>voting method</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Road transportation is the most widely used mode of transport in the world. Unfortunately, this situation has led to an increased number of traffic accidents, with numerous negative consequences for public health, the economy, and society. According to the World Health Organization (WHO), about 1.3 million people die and about 50 million are injured in road crashes every year. Moreover, in 2016, a study carried out by the National Highway Traffic Safety Administration (NHTSA) revealed that 94% of vehicle accidents are caused either by driver negligence, misinterpretation of road signs, or non-compliance with them. Therefore, it is imperative to deploy automated and/or intelligent systems either to aid the driver in decision-making or to potentially replace the driver and autonomously make appropriate decisions.</p>
<p>In recent years, research and development in intelligent driving have been the topic of several initiatives and studies, driven by considerable breakthroughs in the performance of onboard vehicle equipment. These advancements have allowed automotive manufacturers to include Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS), both belonging to Driving Automation (DA) [<xref ref-type="bibr" rid="ref-1">1</xref>] and offering various levels of autonomy and safety for drivers. In fact, the study in [<xref ref-type="bibr" rid="ref-2">2</xref>] underlines the importance of ADAS in enhancing the mobility of older drivers by assisting them on unfamiliar roads and mitigating age-related impairments that may affect driving abilities. In addition, the Auto Outlook 2040 report by Deepwater Asset Management predicts that by 2040, over 90% of all vehicles sold will fall into the categories of Level 4 and Level 5 automation [<xref ref-type="bibr" rid="ref-3">3</xref>]. According to American auto insurers&#x2019; statistics, vehicles equipped with collision warning systems that alert the driver to an imminent danger, or with automatic braking systems, cause fewer accidents than those without such features. For example, at Mercedes, the use of a forward collision warning system has resulted in a 3% reduction in accident frequency, according to a report published by the Insurance Institute for Highway Safety.</p>
<p>In the context of intelligent and/or autonomous vehicles and the enhancement of road safety, the suggested methodology involves acquiring and analyzing a real-time video stream from a camera mounted on the dashboard of a vehicle to recognize encountered signs. Such a methodology serves a dual purpose: in the case of ADAS, it displays and informs the driver about the nature of the sign, and in the case of ADS, it enables appropriate decision-making for driving and vehicle control. One of the greatest challenges for a Traffic Sign Recognition (TSR) system is the rapidity and effectiveness of traffic sign recognition in real-world scenes, especially when the vehicle is moving at high speed in critical weather conditions. Indeed, several factors influence the detection and classification of signs, such as fog, rain, sandstorms, snow, differences in brightness between day and night, similar objects, degradation of sign quality, partial obstruction, etc. [<xref ref-type="bibr" rid="ref-4">4</xref>].</p>
<p>This paper focuses on the recognition of speed limit signs, a special category of traffic signs from the German Traffic Sign Recognition Benchmark (GTSRB) dataset that, thanks to their pivotal role in guiding drivers, enhance traffic safety and reduce the risk of accidents on the roads. For these reasons, the suggested real-time Speed Limit Recognition (SLR) methodology includes three modules: a Speed Limit Detection (SLD) module aiming to detect the traffic sign in the captured frame using the Haar Cascade technique, and a Speed Limit Classification (SLC) module in which Machine Learning (ML) classifiers (KNN, SVM and Random Forest) and a newly developed CNN-based deep learning model called DeepSL are trained on speed limit images from the GTSRB dataset. Due to the varying conditions of image capture, road sign recognition is subject to uncertainty. As a result, a Speed Limit Classifiers Fusion (SLCF) module has been introduced in the suggested approach. This module combines the trained ML classifiers and the trained DeepSL model using data fusion techniques, namely Dempster-Shafer&#x2019;s (DS) theory of belief functions and the voting classifier from Ensemble Learning. These methods are employed to identify the most accurate combination of classifiers and thus improve the recognition process. In order to validate the effectiveness and reliability of the proposed SLR methodology, a prototype including all software and hardware components is developed and tested, first by simulation using video sequences and then on embedded targets such as the Raspberry Pi 4 Model B and the NVIDIA Jetson Nano.</p>
<p>The remaining sections are organized as follows: <xref ref-type="sec" rid="s2">Section 2</xref> describes the different road signs found in the driving environment and summarizes the methods used to detect and classify them. <xref ref-type="sec" rid="s3">Section 3</xref> details the suggested SLR methodology and discusses the obtained results. <xref ref-type="sec" rid="s4">Section 4</xref> illustrates the developed prototype of the elaborated SLR system. Finally, the last section is dedicated to the conclusion and suggestions for potential future work.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Works</title>
<p>In the driving environment, various kinds of traffic signs are installed to ensure safety on the roads. Every road sign has a certain purpose, such as indicating which way to go, informing drivers about the rules of the road, or notifying them of a danger, a crossing, etc. In addition, some road signs have the same meaning around the world but look different. <xref ref-type="table" rid="table-1">Table 1</xref> shows examples of the different categories of signs used on European and American roads [<xref ref-type="bibr" rid="ref-5">5</xref>].</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Different traffic sign categories in European Union and United States</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Traffic signs categories</th>
<th>European Union</th>
<th>United States</th>
</tr>
</thead>
<tbody>
<tr>
<td>Warning</td>
<td><inline-graphic xlink:href="CMC_51562-inline-1.tif"/></td>
<td><inline-graphic xlink:href="CMC_51562-inline-6.tif"/></td>
</tr>
<tr>
<td>Regulatory</td>
<td><inline-graphic xlink:href="CMC_51562-inline-2.tif"/></td>
<td><inline-graphic xlink:href="CMC_51562-inline-7.tif"/></td>
</tr>
<tr>
<td>Obligatory</td>
<td><inline-graphic xlink:href="CMC_51562-inline-3.tif"/></td>
<td><inline-graphic xlink:href="CMC_51562-inline-8.tif"/></td>
</tr>
<tr>
<td>Priority</td>
<td><inline-graphic xlink:href="CMC_51562-inline-4.tif"/></td>
<td><inline-graphic xlink:href="CMC_51562-inline-9.tif"/></td>
</tr>
<tr>
<td>Informative</td>
<td><inline-graphic xlink:href="CMC_51562-inline-5.tif"/></td>
<td><inline-graphic xlink:href="CMC_51562-inline-10.tif"/></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In recent research, TSR systems have been widely studied. They are typically developed by applying road sign detection and classification methods to various traffic sign datasets such as GTSRB [<xref ref-type="bibr" rid="ref-6">6</xref>], DITS (Dataset of Italian Traffic Signs) [<xref ref-type="bibr" rid="ref-7">7</xref>], BTSD (Belgium Traffic Sign Dataset) [<xref ref-type="bibr" rid="ref-8">8</xref>], and RTSD (Russian Traffic Sign Dataset) [<xref ref-type="bibr" rid="ref-9">9</xref>]. In fact, a TSR system aims to recognize a road sign from an image and instantly transmit the result to the driver, either through a display on the dashboard or an audible signal in the case of ADAS, or as a command signal that interacts with the vehicle&#x2019;s driving equipment (braking, steering, acceleration, etc.) in the case of ADS. However, the TSR&#x2019;s ability to accurately identify a sign depends on the vehicle&#x2019;s speed and the distance between the vehicle and the sign. One notable work [<xref ref-type="bibr" rid="ref-10">10</xref>] introduced a deep learning-based approach, specifically YOLOv5, which achieved an accuracy of up to 97.70% and a faster recognition speed of 30 fps compared to SSD. Additionally, a study in [<xref ref-type="bibr" rid="ref-11">11</xref>] emphasized the use of finely crafted features and dimension reduction techniques to raise traffic sign recognition accuracy to 93.98%. These works highlight the advantages of improved accuracy and faster recognition speeds in TSR systems. However, challenges such as environmental restrictions, limited driving conditions, and the need for continuous dataset expansion remain notable disadvantages in the field [<xref ref-type="bibr" rid="ref-10">10</xref>,<xref ref-type="bibr" rid="ref-11">11</xref>].</p>
<p>The process of recognition relies on five stages: image capture, preprocessing, detection, classification, and decision-making, as shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>General architecture of TSR system</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_51562-fig-1.tif"/>
</fig>
<p>Initially, the image acquired from the vehicle&#x2019;s onboard camera(s) is cleaned and prepared through processes such as correcting image distortion, resizing, and eliminating noise. These operations aim to make the image more suitable for in-depth analysis in subsequent stages. The system then uses object detection algorithms to locate road signs in the pre-processed images, which are subsequently classified according to their meanings. Finally, the system uses the obtained information to alert the driver for decision-making or to transmit commands to other vehicle components. In most TSR systems, the detection step is dissociated and independent from the classification step. This dissociation can lead to certain malfunctions, such as false detections, multiple detections of the same sign, or missed detections due to temporary occlusions. To overcome these problems, a temporal tracking step can be added for the processing of video sequences [<xref ref-type="bibr" rid="ref-12">12</xref>], as illustrated in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>. Indeed, this process makes it possible to consider the redundancy of a traffic sign in several consecutive frames before its disappearance from the camera&#x2019;s field of vision in order to confirm its presence [<xref ref-type="bibr" rid="ref-13">13</xref>].</p>

<sec id="s2_1">
<label>2.1</label>
<title>Image Acquisition</title>
<p>The onboard camera of the TSR system ensures the real-time acquisition of images related to the driving environment. The quality of the captured images can impact the reliability and accuracy of the detected sign information. Hence, it is crucial to equip the system with a high-quality camera to guarantee optimal performance.</p>
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Pre-Processing</title>
<p>A road sign is typically exposed to various challenges in its environment, such as varying brightness, diverse weather conditions, viewing angle issues, damage, or partial obstruction, which in turn can lead to variable visibility, varied traffic sign appearances, a chaotic background, and poor image quality caused by high speeds. To mitigate the impact of these challenges, several preprocessing techniques are employed at the initial stage to eliminate noise, reduce complexity, and enhance overall sensitivity and recognition accuracy. The most used preprocessing techniques include normalization, noise removal, and image binarization. These techniques can be applied individually or in combination to improve image quality and facilitate sign detection. The pre-processed image then undergoes the next step, which is the detection process.</p>
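<p>For illustration only, the following minimal Python sketch applies the preprocessing techniques named above with OpenCV; the filter and threshold parameters are typical assumptions, not settings taken from this work.</p>
<preformat>
# Minimal OpenCV sketch of the preprocessing stage: grayscale conversion,
# normalization, noise removal and binarization (illustrative parameters).
import cv2

def preprocess(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # single-channel image
    gray = cv2.equalizeHist(gray)                    # normalize brightness/contrast
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress high-frequency noise
    # Adaptive thresholding as one possible binarization step
    binary = cv2.adaptiveThreshold(denoised, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    return denoised, binary
</preformat>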
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Traffic Sign Detection (TSD) Methods</title>
<p>The aim of traffic sign detection is to identify the sizes and locations of traffic signs in real visual scenes. The speed and efficiency of the detection process are crucial elements that significantly impact the overall system. Its role is to minimize the search area and pinpoint only the road signs present in the captured image. Many approaches for detecting traffic signs depend on attributes such as the sign&#x2019;s color and shape, or leverage machine and deep learning methods.</p>
<p>Color-based traffic sign detection methods seek to identify the region of interest (ROI) in a captured image based on the colors of the traffic sign. According to [<xref ref-type="bibr" rid="ref-14">14</xref>], they are markedly affected by illumination, the time of day, weather conditions, and the color and surface reflection of the sign. Shape-based traffic sign detection methods entail identifying and estimating contours, then making a decision based on the count of these contours [<xref ref-type="bibr" rid="ref-15">15</xref>]. Experiments show that these approaches are more reliable than colorimetric techniques because they are not affected by variations in daylight or colors. Yet, they are sensitive to small and ambiguous signs and require a large amount of memory and time-consuming computations [<xref ref-type="bibr" rid="ref-16">16</xref>]. Color- and shape-based detection methods combine color and shape characteristics [<xref ref-type="bibr" rid="ref-17">17</xref>]. Typically, they depend on appropriate parameters and effective color enhancement results [<xref ref-type="bibr" rid="ref-14">14</xref>,<xref ref-type="bibr" rid="ref-15">15</xref>]. However, robustness to various factors, including changes in lighting, occlusions, translations, rotations, and scale changes, is still lacking.</p>
<p>Methods based on ML and deep learning can detect traffic signs accurately and address the shortcomings of the previous methods, such as lighting changes, occlusions, translations, rotations, and scale changes. Most ML-based detection methods, such as the AdaBoost detection technique and the SVM algorithm, employ handcrafted features for extracting the traffic sign, whereas deep learning-based detection methods learn features through CNNs (Convolutional Neural Networks). In fact, the Haar-like Cascade technique was developed in [<xref ref-type="bibr" rid="ref-18">18</xref>]. It involves a cascade of Haar-like features to identify objects within images. The Support Vector Machine (SVM) employs HOG-like features to characterize and detect objects as an SVM classification problem; in this situation, each potential area of interest is categorized as either containing objects or being part of the background [<xref ref-type="bibr" rid="ref-19">19</xref>]. According to [<xref ref-type="bibr" rid="ref-20">20</xref>], Haar-like Cascade features are faster and more useful in detecting faded and blurry traffic signs under different lighting conditions than HOG-like features.</p>
<p>In general, CNN-based detection networks, which are deep learning techniques, tend to be slow. Nonetheless, some networks, such as You Only Look Once (YOLO), demonstrate swift and efficient performance [<xref ref-type="bibr" rid="ref-21">21</xref>]. On the German Traffic Sign Detection Benchmark (GTSDB) dataset, Faster RCNN achieves 84.5% in 261 ms, compared to 94.2% in 155 ms for the Haar-like cascade technique [<xref ref-type="bibr" rid="ref-22">22</xref>]. Furthermore, ROI extraction is not necessary for AdaBoost-based approaches, unlike SVM-based approaches, where it has a substantial effect on the efficiency of the resulting TSD detectors.</p>
</sec>
<sec id="s2_4">
<label>2.4</label>
<title>Traffic Sign Classification (TSC) Methods</title>
<p>The TSC step is the third stage in the TSR process. It originally relied on standard computer vision and ML techniques, which were later replaced by deep learning models. In fact, ML-based classification methods involve two major phases: first, image features are extracted to emphasize the distinction between classes; subsequently, they are classified by ML algorithms. Hand-crafted features include the Histogram of Oriented Gradients (HOG), which focuses on the structure or shape of an object [<xref ref-type="bibr" rid="ref-23">23</xref>], Local Binary Patterns (LBP), which describe the texture of an image [<xref ref-type="bibr" rid="ref-24">24</xref>], and Gabor filters, which are used for edge detection, texture recognition, and image segmentation [<xref ref-type="bibr" rid="ref-19">19</xref>]. These manually designed features are frequently employed in conjunction with ML classifiers like KNN (K-Nearest Neighbors), Random Forest, and SVM (Support Vector Machine) [<xref ref-type="bibr" rid="ref-25">25</xref>].</p>
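<p>As a minimal illustration of this two-phase pipeline, the sketch below extracts HOG features with scikit-image and trains an SVM with scikit-learn; the HOG parameters are common defaults, and X_train/y_train are placeholders, not artifacts of the cited works.</p>
<preformat>
# Hand-crafted HOG features fed to an ML classifier (illustrative sketch).
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(gray_images):
    # gray_images: iterable of equally sized grayscale arrays (e.g., 32x32)
    return np.array([hog(img, orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in gray_images])

# X_train/y_train are placeholders for training images and their labels
# clf = SVC(kernel="rbf").fit(hog_features(X_train), y_train)
# y_pred = clf.predict(hog_features(X_test))
</preformat>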
<p><xref ref-type="table" rid="table-2">Table 2</xref> displays examples derived from the application of ML based classification methods on the GTSRB dataset. As detailed in [<xref ref-type="bibr" rid="ref-26">26</xref>], HOG features are utilized with KNN and Random Forest classifiers, yielding accuracy percentages of 92.9% and 97.2%, respectively. Additionally, another study referenced by [<xref ref-type="bibr" rid="ref-19">19</xref>] employs a combination of Gabor, LBP and HOG features in conjunction with a SVM classifier, achieving an accuracy rate of 92.9%.</p>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Examples of ML and DL based classification methods</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Works</th>
<th colspan="2">ML methods</th>
<th>Accuracy (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">[<xref ref-type="bibr" rid="ref-26">26</xref>]</td>
<td rowspan="2">HOG</td>
<td>KNN</td>
<td>92.9</td>
</tr>
<tr>
<td>Random forest</td>
<td>97.2</td>
</tr>
<tr>
<td>[<xref ref-type="bibr" rid="ref-19">19</xref>]</td>
<td>Gabor &#x002B; LBP &#x002B; HOG</td>
<td>SVM</td>
<td>92.9</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>DL classification methods are based on ConvNets (CNNs). In fact, a CNN extracts features from images by training a multitude of hidden layers on a set of images. Hence, the deeper the network, the more exclusive and informative the features become, with reduced redundancy. In [<xref ref-type="bibr" rid="ref-27">27</xref>], the authors suggest 3CNNs, a new variant of their previous CNN model, and achieve 99.70% on GTSRB compared to 99.51% for the original model. Additionally, another study [<xref ref-type="bibr" rid="ref-28">28</xref>] proposes a TSC method using CNNs which achieved an accuracy of 99.6% on the GTSRB validation set. Based on these results, DL methods based on CNNs demonstrate superior accuracy and robustness in classifying traffic signs. These findings highlight the potential of DL approaches to enhance road safety and traffic management systems.</p>
</sec>
<sec id="s2_5">
<label>2.5</label>
<title>Decision</title>
<p>The decision-making step in the TSR system involves identifying the type of sign present in the image and determining the appropriate action based on its type, such as displaying a warning signal on the dashboard, sounding an audible alert indicating the nature of the sign, or even transmitting a command signal to other vehicle equipment. It is crucial for the TSR system to be capable of making quick and accurate decisions, as an error in sign detection can lead to dangerous consequences for road users. For this very reason, these systems require rigorous testing and continuous improvement to ensure their effectiveness and adaptability to changes in the environment.</p>
</sec>
<sec id="s2_6">
<label>2.6</label>
<title>Temporal Tracking</title>
<p>Temporal tracking identifies previously recognized signs to avoid reclassifying them, which reduces processing time. The data grouping step, which links known signs (previously recognized) with perceived signs (detected at the current moment), ensures this identification. This coupling helps prevent signaling the same signs to the driver again, avoiding unnecessary disturbances.</p>
<p>A study in [<xref ref-type="bibr" rid="ref-29">29</xref>] highlights several problems that reduce the robustness of existing TSR systems when tested in various illumination scenarios, leading to a significant drop in performance in low or strong lighting conditions. Additionally, these systems struggle with scalability, primarily due to a lack of diverse and high-quality training data. A lack of transparency in decision-making is also a common issue with TSR systems, making debugging and gaining trust challenging. Furthermore, existing TSR systems have difficulty accurately recognizing rare or new road signs that were not present during training, posing a potential road hazard in real-world situations, especially during cross-country travel where traffic signs can differ significantly. To address these constraints, the following section describes a new SLR methodology aiming to fulfill real-time, robustness, and accuracy requirements.</p>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Proposed SLR Methodology</title>
<p>Accuracy and fast processing time are extremely important for a robust and efficient SLR system, and the proposed SLR methodology satisfies these two constraints. It is based on three modules: Speed Limit Detection (SLD), Speed Limit Classification (SLC) and Speed Limit Classifiers Fusion (SLCF). The different steps of the SLR system are summarized in the algorithm below (a run-time code sketch follows the list):
<list list-type="simple">
<list-item><label>Step 1:</label><p>Prepare the speed limit Haar cascade detector.</p></list-item>
<list-item><label>Step 2:</label><p>a) Develop a deep neural network for speed limit signs (DeepSL).</p>
<list list-type="simple">
<list-item><label>b)</label><p>Train and test KNN, SVM, RF and DeepSL on speed limit images of the GTSRB dataset.</p></list-item>
<list-item><label>c)</label><p>Save trained KNN, SVM, RF and DeepSL.</p></list-item></list></list-item>
<list-item><label>Step 3:</label><p>Fuse classifiers using a data fusion technique.</p></list-item>
<list-item><label>Step 4:</label><p>Capture frame from the video camera.</p></list-item>
<list-item><label>Step 5:</label><p>Extract the ROI from the frame by using the Haar cascade detector.</p></list-item>
<list-item><label>Step 6:</label><p>Predict detected ROI by using the most accurate combination.</p></list-item>
</list></p>
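<p>As an illustration of Steps 4 to 6, the following minimal Python sketch chains frame capture, Haar cascade detection and fused prediction; sl_detector.xml and fused_predict are placeholders for the detector of Step 1 and the fused combination of Step 3, not artifacts released with this work, and the detection parameters are typical assumptions.</p>
<preformat>
import cv2

detector = cv2.CascadeClassifier("sl_detector.xml")  # placeholder trained SL detector (Step 1)
cap = cv2.VideoCapture(0)                            # onboard camera (Step 4)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Step 5: extract candidate ROIs with the Haar cascade detector
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
        label = fused_predict(roi)  # Step 6: placeholder for the SLCF fused prediction
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, str(label), (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("SLR", frame)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()
</preformat>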
<p><xref ref-type="fig" rid="fig-2">Fig. 2</xref> illustrates how the SL sign recognition process starts with a real-time camera frame capture. Then, a list of pre-processing treatments drawn up for each captured frame like resizing the image and applying filters on the image. A pre-processed image is after that treated by the SLD module which uses a speed limit Haar cascade detector to extract the ROI (SL sign). In the SLC module, SVM, KNN, RF and the newly developed ConvNet model (DeepSL) are trained on SL images from the GTSRB dataset and then saved onto the local disk to be used later in the Speed Limit Classifiers Fusion (SLCF). As a matter of fact, different combinations of classifiers are established after using fusion techniques like Dempster Shafer theory and the voting technique from EL. The output of the SLCF module is the most accurate combination of classifiers used to predict the detected SL sign.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Proposed speed limit recognition methodology</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_51562-fig-2.tif"/>
</fig>
<sec id="s3_1">
<label>3.1</label>
<title>Speed Limit Detection (SLD) Module</title>
<p>The detection rate and processing time are important factors in minimizing the search area and indicating only potential regions. Hence, the Haar Cascade method is used to extract Regions of Interest (ROIs) from the captured frame. The Haar-like Cascade method is an object detection algorithm originally used to identify faces in real-time images or video [<xref ref-type="bibr" rid="ref-18">18</xref>] and later applied to detecting other objects. Indeed, this detector first extracts Haar-like features from an input image and builds cascaded classifiers which are combined into one strong detector (AdaBoost: Adaptive Boosting) able to discard negative images quickly and identify candidate traffic sign regions.</p>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Speed Limit Classification (SLC) Module</title>
<p>The performance of an automatic SLR system must first be validated using a publicly accessible dataset. Therefore, speed limit images from the GTSRB dataset are used to train and evaluate the ML classifiers (SVM, KNN, and Random Forest) and the developed CNN-based deep learning model called DeepSL.</p>
<sec id="s3_2_1">
<label>3.2.1</label>
<title>Speed Limit Dataset</title>
<p>The GTSRB is a large, organized, open-source dataset used for developing machine learning classification models for traffic sign recognition. It contains more than 50,000 images, with over 39,000 images in the training set and 12,630 images in the test set, classified into more than 40 classes [<xref ref-type="bibr" rid="ref-30">30</xref>]. The dataset is widely used for traffic sign recognition tasks and has served in numerous studies to evaluate the performance of various machine learning algorithms. In this work, the nine speed limit classes (20, 30, 50, 60, 70, 80, end of 80, 100 and 120 km/h) from the GTSRB dataset are used, comprising approximately 13,200 images taken under different conditions (blurring, lighting, etc.) and presenting noise such as pixelization, low resolution, and low contrast. In addition, the distribution of samples across classes is unbalanced: the largest (major) classes contain 10 times as many traffic sign images as the smallest (minor) classes because, in reality, some signs, such as 50 km/h, appear more frequently than others.</p>
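<p>As a minimal illustration, the speed limit subset can be selected as follows; in the commonly used GTSRB labeling, class IDs 0 to 8 correspond to the nine classes above, while the CSV path and column name are assumptions about a particular distribution of the dataset.</p>
<preformat>
import pandas as pd

# In the commonly used GTSRB labeling, class IDs 0-8 are the speed limit signs.
SPEED_LIMIT_CLASSES = {0: "20", 1: "30", 2: "50", 3: "60", 4: "70",
                       5: "80", 6: "end of 80", 7: "100", 8: "120"}

train = pd.read_csv("GTSRB/Train.csv")  # hypothetical path to the annotation CSV
sl_train = train[train["ClassId"].isin(SPEED_LIMIT_CLASSES.keys())]
print(sl_train["ClassId"].value_counts())  # exposes the class imbalance noted above
</preformat>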
</sec>
<sec id="s3_2_2">
<label>3.2.2</label>
<title>Classification Metrics</title>
<p>In order to evaluate the classification process, various metrics are used:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mrow><mml:mtext>Accuracy</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mtext>True positives</mml:mtext></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mtext>True negatifs</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>True positives</mml:mtext></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mtext>False positives</mml:mtext></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mtext>True negatifs</mml:mtext></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mtext>False negatifs&#xA0;</mml:mtext></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula>
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:mtext>Precision</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mtext>True positives</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>True positives</mml:mtext></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mtext>False positives</mml:mtext></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula>
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mrow><mml:mtext>Recall</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mtext>True positives</mml:mtext></mml:mrow><mml:mrow><mml:mrow><mml:mtext>True positives</mml:mtext></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mtext>False negatives</mml:mtext></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula>
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mrow><mml:mtext>F</mml:mtext></mml:mrow><mml:mn>1</mml:mn><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x2217;</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mtext>Precision</mml:mtext></mml:mrow><mml:mo>&#x2217;</mml:mo><mml:mrow><mml:mtext>Recall</mml:mtext></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mtext>Precision</mml:mtext></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mtext>Recall</mml:mtext></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>In this work, the speed limit classes have unbalanced distributions. For this reason, weighted-average precision, recall, and F1 scores are used so that minority classes are properly weighted. Google Colaboratory is used as the framework for training and testing SVM, KNN, Random Forest and DeepSL on the speed limit classes of the GTSRB dataset.</p>
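<p>For illustration, these weighted-average metrics, corresponding to Eqs. (1) to (4), can be computed with scikit-learn as follows; y_val and y_pred are placeholders for the validation labels and a classifier&#x2019;s predictions.</p>
<preformat>
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# average="weighted" weights each class by its support, compensating for
# the unbalanced speed limit class distribution.
acc = accuracy_score(y_val, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_val, y_pred, average="weighted")
print(f"accuracy={acc:.4f}  precision={precision:.4f}  "
      f"recall={recall:.4f}  F1={f1:.4f}")
</preformat>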
</sec>
<sec id="s3_2_3">
<label>3.2.3</label>
<title>Speed Limit Classification Results Using ML Classifiers</title>
<p>Machine learning classifiers such as KNN, SVM, and Random Forest are used for traffic sign classification due to their specific advantages. In fact, KNN is simple, adaptable, and suitable for multi-class problems, while SVM excels in high-dimensional spaces and can handle non-linearly separable data [<xref ref-type="bibr" rid="ref-31">31</xref>]. The Random Forest classifier, for its part, requires less feature engineering and provides feature importance insights. These classifiers are used in various traffic sign recognition systems, such as the one described in [<xref ref-type="bibr" rid="ref-32">32</xref>], which uses KNN and SVM classifiers for their ability to provide good accuracy rates in road safety applications, and the system in [<xref ref-type="bibr" rid="ref-33">33</xref>], which employs Random Forest for its ability to detect traffic signs under various conditions. As a result, these classifiers are trained on speed limit images using three feature descriptors: the RGB color descriptor, the 3D color histogram, and the HOG descriptor. <xref ref-type="table" rid="table-3">Table 3</xref> summarizes the obtained F1 scores.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>F1-scores obtained by SVM, KNN and random forest classifiers</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Features descriptors</th>
<th>ML classifiers</th>
<th>F1 score (%)</th>
<th>Dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">RGB color descriptor</td>
<td>SVM</td>
<td>34.85</td>
<td rowspan="9">Speed limit (13,200 images: 75% for training and 25% for validation) from the GTSRB</td>
</tr>
<tr>
<td>KNN</td>
<td>79.39</td>
</tr>
<tr>
<td>Random forest</td>
<td>74.36</td>
</tr>
<tr>
<td rowspan="3">3D color histogram from the HSV color space</td>
<td>SVM</td>
<td>93.33</td>
</tr>
<tr>
<td>KNN</td>
<td>92.67</td>
</tr>
<tr>
<td>Random forest</td>
<td>96.42</td>
</tr>
<tr>
<td rowspan="3">HOG descriptor</td>
<td>SVM</td>
<td>95.88</td>
</tr>
<tr>
<td>KNN</td>
<td>95.85</td>
</tr>
<tr>
<td>Random forest</td>
<td>92.58</td>
</tr>
</tbody>
</table>
</table-wrap>
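<p>For illustration, two of the descriptors in Table 3 can be computed with OpenCV as sketched below; the image size and bin counts are assumptions, not the exact settings of this work.</p>
<preformat>
import cv2
import numpy as np

def rgb_descriptor(image, size=(32, 32)):
    # DF1: resized raw pixel intensities flattened into one vector
    return cv2.resize(image, size).flatten()

def hsv_histogram_3d(image, bins=(8, 8, 8)):
    # DF2: joint 3D histogram over the H, S and V channels
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()
</preformat>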
<p><xref ref-type="table" rid="table-4">Table 4</xref> explains reached F1 scores for each type of speed limit by applying ML classifiers (SVM, KNN and Random Forest) using the RGB colour feature descriptor (DF1), the 3D colour histogram descriptor (DF2) and the HOG descriptor (DF3). Results confirm that the SVM, KNN and RF classifiers used with the (DF2) ensure higher classification rates compared to the other results obtained with (DF1) and with (DF3).</p>
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Classification rates of each speed limit sign obtained from SVM, KNN and random forest classifiers</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>SL (km/h)</th>
<th align="center" colspan="9">F1 scores (%)</th>
</tr>
<tr>
<th/>
<th/>
<th>SVM</th>
<th/>
<th/>
<th>KNN</th>
<th/>
<th/>
<th>RF</th>
<th/>
</tr>
<tr>
<th/>
<th>DF1</th>
<th>DF2</th>
<th>DF3</th>
<th>DF1</th>
<th>DF2</th>
<th>DF3</th>
<th>DF1</th>
<th>DF2</th>
<th>DF3</th>
</tr>
</thead>
<tbody>
<tr>
<td>20</td>
<td>0</td>
<td>86</td>
<td>98</td>
<td>89</td>
<td>96</td>
<td>96</td>
<td>75</td>
<td>100</td>
<td>100</td>
</tr>
<tr>
<td>30</td>
<td>50</td>
<td>94</td>
<td>95</td>
<td>83</td>
<td>94</td>
<td>99</td>
<td>72</td>
<td>95</td>
<td>95</td>
</tr>
<tr>
<td>50</td>
<td>38</td>
<td>94</td>
<td>92</td>
<td>81</td>
<td>93</td>
<td>97</td>
<td>72</td>
<td>96</td>
<td>90</td>
</tr>
<tr>
<td>60</td>
<td>41</td>
<td>92</td>
<td>98</td>
<td>76</td>
<td>91</td>
<td>97</td>
<td>75</td>
<td>97</td>
<td>94</td>
</tr>
<tr>
<td>70</td>
<td>24</td>
<td>97</td>
<td>99</td>
<td>76</td>
<td>93</td>
<td>97</td>
<td>76</td>
<td>98</td>
<td>96</td>
</tr>
<tr>
<td>80</td>
<td>22</td>
<td>90</td>
<td>96</td>
<td>76</td>
<td>87</td>
<td>93</td>
<td>74</td>
<td>96</td>
<td>89</td>
</tr>
<tr>
<td>End of 80</td>
<td>78</td>
<td>96</td>
<td>100</td>
<td>88</td>
<td>98</td>
<td>100</td>
<td>92</td>
<td>100</td>
<td>100</td>
</tr>
<tr>
<td>100</td>
<td>0</td>
<td>97</td>
<td>96</td>
<td>79</td>
<td>90</td>
<td>96</td>
<td>75</td>
<td>97</td>
<td>94</td>
</tr>
<tr>
<td>120</td>
<td>26</td>
<td>98</td>
<td>95</td>
<td>79</td>
<td>93</td>
<td>89</td>
<td>74</td>
<td>97</td>
<td>89</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s3_2_4">
<label>3.2.4</label>
<title>Speed Limit Classification Results Using DeepSL Model</title>
<p>Recently, with the development of deep learning, diverse deep neural network architectures have appeared and drawn a lot of academic and industrial interest [<xref ref-type="bibr" rid="ref-34">34</xref>]. In this context, DeepSL (Deep Speed Limit), a new ConvNet for the classification of speed limit signs, is developed with the following architecture: the images (ROIs), in gray level, are fed to a first convolutional layer (Conv2D) with 32 filters of size (3 &#x00D7; 3), followed by a ReLU activation function to introduce nonlinearity and improve model performance. This layer extracts features from the grayscale images, such as edges and textures. A Batch Normalization layer follows the first convolutional layer. Next, a second Conv2D layer with the same settings as the first convolutional layer is added, followed by another Batch Normalization layer. This sequence of convolutional and normalization layers is repeated with filters of size 64. After each pair of convolutional layers, a 2 &#x00D7; 2 2D Maxpooling layer is added to retain the most important features, and a Dropout layer with a rate of 0.25 follows each Maxpooling layer to avoid overfitting. Once all the features have been extracted, a Flatten layer is applied. Then, a Dense layer of 512 neurons with ReLU activation is added, followed by a Batch Normalization layer and another Dropout layer with a deactivation rate of 0.5. Lastly, a final Dense layer of 9 neurons with a Softmax activation function performs the classification of the input images into the 9 specified speed limit classes.</p>
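<p>The following Keras sketch mirrors the DeepSL architecture as described above; the input resolution (32 &#x00D7; 32 grayscale), the optimizer, and the loss function are assumptions not specified in the text.</p>
<preformat>
from tensorflow.keras import layers, models

def build_deepsl(input_shape=(32, 32, 1), num_classes=9):
    """Sketch of the DeepSL ConvNet following the textual description."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # First pair of Conv2D(32, 3x3) + BatchNorm layers
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        # Same sequence repeated with 64 filters
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        # Classification head
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",                       # assumed optimizer
                  loss="sparse_categorical_crossentropy",  # assumed loss
                  metrics=["accuracy"])
    return model
</preformat>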
<p><xref ref-type="table" rid="table-5">Table 5</xref> shows experimental results found after training the model with 10 epochs on speed limit signs of the GTSRB dataset. We notice good F1 scores of classification results (&#x003E;91%) in recognizing limit road signs.</p>
<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>Experimental results obtained from training the DeepSL model</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>SL (km/h)</th>
<th>20</th>
<th>30</th>
<th>50</th>
<th>60</th>
<th>70</th>
<th>80</th>
<th>End of 80</th>
<th>100</th>
<th>120</th>
</tr>
</thead>
<tbody>
<tr>
<td>F1 scores (%)</td>
<td>98</td>
<td>98</td>
<td>99</td>
<td>100</td>
<td>100</td>
<td>91</td>
<td>100</td>
<td>99</td>
<td>99</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="table-6">Table 6</xref> shows that DeepSL model give a better classification rate of speed limit signs (98.8%) compared to those obtained by SVM, KNN and RF.</p>
<table-wrap id="table-6">
<label>Table 6</label>
<caption>
<title>F1 scores of TSC methods</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>TSC methods</th>
<th>F1 scores (%)</th>
<th>Weakness rates (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>DeepSL (the proposed model)</td>
<td>98.80</td>
<td>1.2</td>
</tr>
<tr>
<td>KNN &#x002B; HOG features</td>
<td>95.85</td>
<td>4.15</td>
</tr>
<tr>
<td>Random Forest &#x002B; 3D color histogram</td>
<td>96.42</td>
<td>3.58</td>
</tr>
<tr>
<td>SVM &#x002B; HOG features</td>
<td>95.88</td>
<td>4.12</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="table" rid="table-7">Table 7</xref> summarizes best classification rates for each speed limit road sign obtained from applying Random Forest and DeepSL classifiers.</p>
<table-wrap id="table-7">
<label>Table 7</label>
<caption>
<title>F1 scores obtained from RF classifier and DeepSL model</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th colspan="2">SL (km/h)</th>
<th>20</th>
<th>30</th>
<th>50</th>
<th>60</th>
<th>70</th>
<th>80</th>
<th>End of 80</th>
<th>100</th>
<th>120</th>
</tr>
</thead>
<tbody>
<tr>
<td>F1 scores (%)</td>
<td>DF2 &#x002B; RF</td>
<td>100</td>
<td>95</td>
<td>96</td>
<td>97</td>
<td>98</td>
<td>96</td>
<td>100</td>
<td>97</td>
<td>97</td>
</tr>
<tr>
<td/>
<td>DeepSL</td>
<td>98</td>
<td>98</td>
<td>99</td>
<td>100</td>
<td>100</td>
<td>91</td>
<td>100</td>
<td>99</td>
<td>99</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Speed Limit Classification Fusion (SLCF) Module</title>
<p>In the previous section, ML algorithms and the DeepSL model are used to classify the different speed limit road signs with varying accuracy rates. In fact, <xref ref-type="table" rid="table-6">Table 6</xref> shows the weakness rate of each classifier: 1.2%, 4.15%, 3.58% and 4.12% for DeepSL, KNN, RF and SVM, respectively. Despite these good results, an SLR system must make as few errors as possible to meet road safety requirements and avoid situations of uncertainty. To achieve a lower classification error probability, classification rates can be improved by combining the outputs of classifiers, thereby benefiting from the strengths of one method to overcome the weaknesses of another through data fusion methods. The available data fusion techniques can be grouped into three non-exclusive categories: decision fusion, state estimation, and data association [<xref ref-type="bibr" rid="ref-35">35</xref>]. The decision fusion approach and the variety of classifier types have a major impact on a classification system&#x2019;s performance. Accordingly, a classifiers fusion module is introduced in the proposed approach to fuse the various classifier outputs, boost the final classification results, and reduce decision conflicts.</p>

<p>First, relevant features are extracted from the input data using techniques such as traditional feature extraction or convolutional neural networks (CNNs). Then, each classifier is trained on a separate training dataset, learning from the extracted features and building its own classification model. Once trained, each classifier individually predicts the validation and test data. The predictions are then merged into a final decision. Finally, the fusion performance is evaluated using metrics to refine the fusion of the trained classifiers. The effectiveness of decision fusion has been discussed in [<xref ref-type="bibr" rid="ref-36">36</xref>,<xref ref-type="bibr" rid="ref-37">37</xref>]. The most used strategies are the voting method, Bayesian theory, and the Dempster&#x2013;Shafer evidence theory [<xref ref-type="bibr" rid="ref-38">38</xref>&#x2013;<xref ref-type="bibr" rid="ref-41">41</xref>].</p>
<sec id="s3_3_1">
<label>3.3.1</label>
<title>Dempster Shafer (DS) Theory</title>
<p>DS theory is a mathematical theory for reasoning under uncertainty, particularly in situations where there may be incomplete or conflicting evidence. It was first presented within the framework of statistical inference and further expanded into a theory of evidence [<xref ref-type="bibr" rid="ref-42">42</xref>]. It has been used for both supervised and unsupervised classification. Unlike the Bayesian technique, the DS method explicitly accounts for unknown alternative sources of observed data. The DS technique uses probability and uncertainty intervals to assess the probability of hypotheses based on many pieces of evidence, and it computes a probability for any valid hypothesis. When all the hypotheses investigated are mutually exclusive and the list of hypotheses is exhaustive, the two methods (DS and Bayesian theories) yield the same answers. In [<xref ref-type="bibr" rid="ref-43">43</xref>&#x2013;<xref ref-type="bibr" rid="ref-46">46</xref>], classifier outputs are represented as belief functions, which are then combined with Dempster&#x2019;s rule in the case of classifier fusion. In [<xref ref-type="bibr" rid="ref-37">37</xref>,<xref ref-type="bibr" rid="ref-47">47</xref>], the employed method converts the decisions made by the Support Vector Machine (SVM) classifiers into belief functions. In this work, DS theory is chosen over the other approaches since it permits the representation of both imprecise and uncertain data. The basics of DS theory are:</p>
<p>Mass function m defined by <xref ref-type="disp-formula" rid="eqn-5">(5)</xref>.
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mi>m</mml:mi><mml:mo>&#x003A;</mml:mo><mml:msup><mml:mn>2</mml:mn><mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x03A9;</mml:mi></mml:mrow></mml:mrow></mml:msup><mml:mo stretchy="false">&#x2192;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:mtext>with</mml:mtext></mml:mrow><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2286;</mml:mo><mml:mi mathvariant="normal">&#x03A9;</mml:mi></mml:mrow></mml:munder><mml:mi>m</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:math></disp-formula></p>
<p>Correction of the information: The new mass function after the discounting (weakening) operation is defined by <xref ref-type="disp-formula" rid="eqn-6">(6)</xref>.
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mi>m</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03BC;</mml:mi><mml:mo>&#x2217;</mml:mo><mml:mi>m</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>;</mml:mo><mml:mi mathvariant="normal">&#x2200;</mml:mi><mml:mi>A</mml:mi><mml:mo>&#x2260;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03A9;</mml:mi></mml:mrow></mml:math></disp-formula></p>
<p>Information fusion: The new mass function after applying Dempster&#x2019;s rule of combination is defined by <xref ref-type="disp-formula" rid="eqn-7">(7)</xref>.
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mo stretchy="false">(</mml:mo><mml:mi>m</mml:mi><mml:mn>1</mml:mn><mml:mo>&#x2295;</mml:mo><mml:mi>m</mml:mi><mml:mn>2</mml:mn><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mi>C</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>,</mml:mo><mml:mi>B</mml:mi><mml:mo>&#x003A;</mml:mo><mml:mi>C</mml:mi><mml:mo>=</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2229;</mml:mo><mml:mi>B</mml:mi></mml:mrow></mml:munder><mml:mi>m</mml:mi><mml:mn>1</mml:mn><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>&#x2217;</mml:mo></mml:mrow><mml:mspace width="thinmathspace" /><mml:mi>m</mml:mi><mml:mn>2</mml:mn><mml:mrow><mml:mo>(</mml:mo><mml:mi>B</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>Pignistic transformation (Decision making) defined by <xref ref-type="disp-formula" rid="eqn-8">(8)</xref>.
<disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mi>B</mml:mi><mml:mi>e</mml:mi><mml:mi>t</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mo>&#x2286;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03A9;</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>A</mml:mi><mml:mo fence="false" stretchy="false">}</mml:mo></mml:mrow></mml:msub></mml:mrow></mml:mrow></mml:munder><mml:mfrac><mml:mrow><mml:mi>m</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>m</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>&#x2205;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:mi>A</mml:mi><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>The decision will be made by choosing the element x with the greatest probability from pignistic transformation by applying <xref ref-type="disp-formula" rid="eqn-9">(9)</xref>.
<disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mi>R</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mtext>argmax</mml:mtext></mml:mrow><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03A9;</mml:mi></mml:mrow></mml:mrow></mml:munder><mml:mi>B</mml:mi><mml:mi>e</mml:mi><mml:mi>t</mml:mi><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03C9;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>In this paper, DS theory is applied by fusing two, three and four classifiers. Given a mass function for each classifier (m1, m2, m3, and m4, corresponding to the SVM, RF, KNN, and DeepSL classifiers), 11 different combinations of classifiers are possible under the DS mass combination rule. The pignistic transformation of the obtained masses aids the decision on the outcomes following fusion. Consequently, two, three and four classifiers are combined. The obtained results are 99.38% by fusing the RF and DeepSL classifiers, 99.93% by fusing KNN, RF and DeepSL, and 99.98% by fusing SVM, KNN, RF and DeepSL. The results achieved by the different classifier combinations are shown in <xref ref-type="table" rid="table-8">Table 8</xref>.</p>
<table-wrap id="table-8">
<label>Table 8</label>
<caption>
<title>Different fusion classification results using DS theory</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th colspan="2">Combined classifiers</th>
<th>Weakness fusion classifier value (%)</th>
<th>Fusion classification rate (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6">2 combined classifiers</td>
<td>KNN and SVM</td>
<td>0.0069</td>
<td>99.31</td>
</tr>
<tr>
<td>RF and SVM</td>
<td>0.0065</td>
<td>99.35</td>
</tr>
<tr>
<td>RF and KNN</td>
<td>0.0513</td>
<td>94.48</td>
</tr>
<tr>
<td>RF and DEEPSL</td>
<td>0.0062</td>
<td>99.38</td>
</tr>
<tr>
<td>KNN and DEEPSL</td>
<td>0.0072</td>
<td>99.29</td>
</tr>
<tr>
<td>SVM and DEEPSL</td>
<td>0.0067</td>
<td>99.33</td>
</tr>
<tr>
<td rowspan="4">3 combined classifiers</td>
<td>KNN and RF and DEEPSL</td>
<td>0.0007</td>
<td>99.93</td>
</tr>
<tr>
<td>SVM and KNN and RF</td>
<td>0.0008</td>
<td>99.91</td>
</tr>
<tr>
<td>SVM and KNN and DEEPSL</td>
<td>0.0009</td>
<td>99.91</td>
</tr>
<tr>
<td>SVM and RF and DEEPSL</td>
<td>0.0008</td>
<td>99.92</td>
</tr>
<tr>
<td>4 combined classifiers</td>
<td>SVM and RF and KNN and DEEPSL</td>
<td>0.000115</td>
<td>99.98</td>
</tr>
</tbody>
</table>
</table-wrap>
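<p>As a minimal illustration of Eqs. (7) and (8), the following Python sketch combines two mass functions with the unnormalized conjunctive rule and then applies the pignistic transformation; the mass values in the example are invented for illustration, not taken from the experiments above.</p>
<preformat>
def conjunctive_combine(m1, m2):
    """Eq. (7): unnormalized conjunctive combination of two mass
    functions; conflicting mass accumulates on the empty set."""
    combined = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            c = a.intersection(b)  # empty frozenset represents the empty set
            combined[c] = combined.get(c, 0.0) + ma * mb
    return combined

def pignistic(m, frame):
    """Eq. (8): BetP(w) = sum of m(A) / ((1 - m(empty)) * |A|) over A containing w."""
    empty = m.get(frozenset(), 0.0)
    return {w: sum(v / ((1.0 - empty) * len(a))
                   for a, v in m.items() if w in a)
            for w in frame}

# Invented example: masses from two classifiers over classes "30" and "50"
m_svm = {frozenset({"30"}): 0.7, frozenset({"50"}): 0.2, frozenset({"30", "50"}): 0.1}
m_rf = {frozenset({"30"}): 0.6, frozenset({"50"}): 0.3, frozenset({"30", "50"}): 0.1}
fused = conjunctive_combine(m_svm, m_rf)
print(pignistic(fused, {"30", "50"}))  # decision: class with highest BetP, Eq. (9)
</preformat>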
</sec>
<sec id="s3_3_2">
<label>3.3.2</label>
<title>Ensemble Learning Methods</title>
<p>Ensemble Learning (EL) is an ML technique that combines multiple classifiers using various methods to produce a more accurate and reliable final decision, aiming to reduce classification errors and minimize the effects of information uncertainty. The different classifiers can be trained on distinct subsets of data or use different learning methods. The most common techniques in EL include bagging, boosting, stacking, and voting [<xref ref-type="bibr" rid="ref-48">48</xref>]. Bagging (Bootstrap Aggregation) creates multiple copies of the same model by training each copy in parallel on a random subset of the dataset using sampling techniques. Boosting involves sequentially training multiple relatively weak models, starting from an underfitting situation, with each model correcting the errors of its predecessor to form a complete and highly reliable final model; two types of boosting exist, AdaBoost and Gradient Boosting. When two or more base models (level 0 models) are used in a stacking (blending) process, a meta-model (the level 1 model) is created by combining the predictions of the base models. The voting technique supports both hard and soft voting: in hard voting, the predicted output class is the one that receives the majority of votes from the individual classifiers, while in soft voting it is the class with the highest average predicted probability across the classifiers.</p>
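<p>A minimal scikit-learn sketch of the voting technique is given below; the estimator settings are illustrative assumptions, and X_train/y_train are placeholders for training feature vectors (e.g., those extracted by DeepSL, as described next) and their labels.</p>
<preformat>
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Hard voting returns the majority class; switching to voting="soft"
# averages predicted class probabilities (SVC then needs probability=True).
voter = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("svm", SVC(probability=True)),
    ],
    voting="hard",
)
# X_train/y_train: placeholder features (e.g., DeepSL-extracted) and labels
# voter.fit(X_train, y_train)
# y_pred = voter.predict(X_val)
</preformat>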
<p>In order to combine the proposed DeepSL model with the three ML classifiers (KNN, Random Forest, and SVM), a fusion approach exploiting the characteristics of the input images is applied. Training and test image features are extracted from the pre-trained DeepSL model to capture complex patterns, textures, and relevant structures, resulting in high-quality features for each input image. These extracted features are then used to train each classifier independently, and the voting method is chosen to merge the classifiers (a minimal sketch of this pipeline is given after <xref ref-type="table" rid="table-9">Table 9</xref>). <xref ref-type="table" rid="table-9">Table 9</xref> summarizes the weighted F1 scores as well as the weakness rates of KNN, RF, and SVM using DeepSL as the feature extractor. By combining the features extracted by DeepSL with the KNN, RF, and SVM classifiers, one can exploit the advantages of both approaches (ML and DL) and improve image classification performance.</p>
<table-wrap id="table-9">
<label>Table 9</label>
<caption>
<title>Performance of the KNN, RF, and SVM classifiers</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Classification method</th>
<th>Weighted F1 score (%)</th>
<th>Weakness rate (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>KNN &#x002B; DeepSL</td>
<td>99.88</td>
<td>0.12</td>
</tr>
<tr>
<td>Random Forest &#x002B; DeepSL</td>
<td>99.90</td>
<td>0.1</td>
</tr>
<tr>
<td>SVM &#x002B; DeepSL</td>
<td>99.87</td>
<td>0.13</td>
</tr>
</tbody>
</table>
</table-wrap>
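<p>The feature-extraction fusion described above can be sketched as follows. The names are illustrative assumptions: <monospace>deepsl</monospace> denotes the trained DeepSL network, its penultimate layer is assumed to be named <monospace>features</monospace>, and the image arrays and labels are assumed to be already prepared.</p>
<code language="python">
from tensorflow.keras.models import Model
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import f1_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def fuse_with_deepsl_features(deepsl, X_train_img, y_train, X_test_img, y_test):
    # Truncate the trained CNN at its penultimate layer ("features" is an
    # assumed layer name) to obtain a high-level feature extractor
    extractor = Model(inputs=deepsl.input,
                      outputs=deepsl.get_layer("features").output)
    X_train = extractor.predict(X_train_img)
    X_test = extractor.predict(X_test_img)

    # Train each ML classifier independently on the extracted features,
    # then merge their predictions with a hard (majority) vote
    fusion = VotingClassifier(
        estimators=[("knn", KNeighborsClassifier()),
                    ("rf", RandomForestClassifier()),
                    ("svm", SVC())],
        voting="hard")
    fusion.fit(X_train, y_train)
    # Report the weighted F1 score, as in Tables 9 and 10
    return f1_score(y_test, fusion.predict(X_test), average="weighted")
</code>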
<p>According to <xref ref-type="table" rid="table-10">Table 10</xref>, merging the ML classifiers with DeepSL as the feature extractor significantly improves the validation F1 score compared to using each classifier separately. The combination of RF and KNN achieves the best F1 score of 99.96%.</p>
<table-wrap id="table-10">
<label>Table 10</label>
<caption>
<title>F1 scores of ML classifier fusion using DeepSL as feature extractor</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th colspan="2">Combined classifiers</th>
<th>Hard vote (F1 score %)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">2 combined classifiers</td>
<td>KNN and SVM</td>
<td>99.90</td>
</tr>
<tr>
<td>RF and SVM</td>
<td>99.87</td>
</tr>
<tr>
<td>RF and KNN</td>
<td>99.96</td>
</tr>
<tr>
<td>3 combined classifiers</td>
<td>SVM and KNN and RF</td>
<td>99.90</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>To confirm the effectiveness and performance of the proposed SLR solution, two validation stages are carried out: software validation and practical (hardware) validation.</p>
</sec>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Prototype of SLR System</title>
<sec id="s4_1">
<label>4.1</label>
<title>Software Validation</title>
<p>Software validation is an essential step in ensuring that a system works effectively and reliably. There are two types of software validation: simulator validation, which uses interactive virtual environments similar to real life [<xref ref-type="bibr" rid="ref-49">49</xref>], and simulation validation, which tests road scenes rich in road signs on a PC. Simulation-based validation using driving sequences on urban roads or freeways is a widely used method for testing and validating recognition systems, especially those related to traffic signs, since it reproduces different environmental driving conditions and evaluates the performance of the recognition system in a variety of scenarios. Hence, to validate the SLR system by simulation, two video sequences depicting road scenes rich in speed limit signs are used. The simulation is carried out on a PC (configuration: Intel&#x00AE; Core&#x2122; i5-7200 CPU, 64-bit, 8 GB RAM) and on Google Colab with 12.4 GB RAM. To evaluate the performance of the SLR system, the processing time and the classification rate are calculated; the recognition time is measured from the detection of the speed limit sign to its classification. The SLR system achieves an average of 0.06 s to identify each detected road sign in the computer simulation, and an average of 0.025 s using Google Colab. <xref ref-type="fig" rid="fig-3">Fig. 3</xref> shows examples of speed limit sign images correctly recognized by the SLR system, together with their prediction rates obtained on Google Colab.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Examples of speed limit signs correctly recognized by the SLR system</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_51562-fig-3.tif"/>
</fig>
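<p>The recognition-time measurement described above, from the detection of a speed limit sign to its classification, can be sketched as follows; the cascade file name, the classifier interface (Keras-like), and the 32&#x00D7;32 input size are illustrative assumptions.</p>
<code language="python">
import time
import cv2
import numpy as np

def recognize_frame(frame, cascade, classifier):
    """Return predicted classes and the detection-to-classification time (s)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    t0 = time.perf_counter()
    signs = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    preds = []
    for (x, y, w, h) in signs:
        # Crop the detected sign and rescale it to the classifier input size
        roi = cv2.resize(frame[y:y + h, x:x + w], (32, 32)) / 255.0
        scores = classifier.predict(roi[np.newaxis, ...])
        preds.append(int(np.argmax(scores)))
    return preds, time.perf_counter() - t0

# Usage (assumed file name for the trained Haar detector):
# cascade = cv2.CascadeClassifier("speed_limit_cascade.xml")
</code>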
<p>All speed limit signs appearing in the video sequences are correctly localized by the Haar Cascade detector and subsequently correctly recognized by the new model obtained from the vote fusion. However, two detected road signs are not correctly recognized by the SLR system, as shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Examples of signs incorrectly recognized by the SLR system</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_51562-fig-4.tif"/>
</fig>
<p>Specifically, the 40 km/h sign is detected as a speed limit sign but misidentified because this class is not included in the GTSRB training set, and the No-passing sign is wrongly recognized as a speed limit sign once out of the three times it appears.</p>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Practical Validation</title>
<p>A system&#x2019;s hardware architecture can vary according to its specific processing and performance requirements. Different hardware architectures exist, based on a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), or heterogeneous designs combining different types of units to take advantage of their specific benefits. For example, SoCs (Systems on Chip) combine several units in the same circuit to meet the criteria of an embedded system, i.e., dimensions, power consumption (autonomy), heat dissipation, and speed of exchange between the units.</p>
<p>In order to validate an architecture on a hardware target, several factors need to be considered, such as the performance of the core used for image processing (execution time and accuracy), the available memory and its type for efficient use of resources, and the availability of libraries and development tools to facilitate implementation, testing, and subsequent improvements to the architecture. Based on the characteristics of the different board types already presented, the validation and evaluation of the SLR system are carried out on the Raspberry Pi 4 and Jetson Nano boards. This choice is based on the specific technical characteristics of these two boards, summarized in <xref ref-type="table" rid="table-11">Table 11</xref>, and their suitability for artificial intelligence applications.</p>
<table-wrap id="table-11">
<label>Table 11</label>
<caption>
<title>Technical specifications of the Raspberry Pi 4 and Jetson Nano boards</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Technical information</th>
<th>Raspberry Pi 4 Model B</th>
<th>NVIDIA Jetson Nano</th>
</tr>
</thead>
<tbody>
<tr>
<td>Processor</td>
<td>Broadcom BCM2711, CPU quad-core ARM Cortex-A72; 1.5 GHz</td>
<td>128-core NVIDIA Maxwell GPU<break/>and quad-core ARM Cortex-A57; 1.43 GHz</td>
</tr>
<tr>
<td>Memory</td>
<td>4 GB LPDDR4</td>
<td>4 GB LPDDR4</td>
</tr>
<tr>
<td>Used camera</td>
<td colspan="2">Resolution: 720 p, Speed: 30 fps</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In order to ensure hardware validation, the Raspberry Pi 4 and Jetson Nano boards are first configured with the needed software: several useful libraries are installed, such as OpenCV, TensorFlow, and Keras, as well as others required for the system to function properly. Then, the same driving sequences used previously for software validation are reused to achieve a hardware validation as close as possible to the real driving environment. In this step, the on-board camera is positioned in front of the PC screen to capture the video sequences. Tests are carried out to evaluate the SLR system&#x2019;s performance in terms of processing speed and sign recognition rate. The processing speed represents the time required to detect and classify road signs, while the sign recognition rate represents the system&#x2019;s accuracy in correctly recognizing road signs, giving the numbers of correctly recognized, incorrectly recognized, and unrecognized signs relative to the total number of signs. The obtained results are summarized in <xref ref-type="table" rid="table-12">Table 12</xref>, which shows the performance of the SLR system on the two hardware platforms (an illustrative evaluation sketch is given after the table).</p>
<table-wrap id="table-12">
<label>Table 12</label>
<caption>
<title>SLR evaluation results on the Raspberry Pi 4 and Jetson Nano boards</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th/>
<th align="center" colspan="2">Target materials</th>
</tr>
<tr>
<th>Evaluation results</th>
<th>Raspberry Pi 4 Model B</th>
<th>Jetson Nano</th>
</tr>
</thead>
<tbody>
<tr>
<td>Number of recognized signs</td>
<td>18/20</td>
<td>19/20</td>
</tr>
<tr>
<td>Number of unrecognized signs</td>
<td>0/20</td>
<td>0/20</td>
</tr>
<tr>
<td>Number of wrongly recognized signs</td>
<td>2/20</td>
<td>1/20</td>
</tr>
<tr>
<td>Average speed of sign recognition</td>
<td>0.15 s (6 fps)</td>
<td>0.0815 s (12 fps)</td>
</tr>
</tbody>
</table>
</table-wrap>
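<p>The metrics reported in <xref ref-type="table" rid="table-12">Table 12</xref> can be computed by aggregating per-sign records of the form (predicted class, true class, elapsed seconds) collected while replaying the validation sequences; the record format in the following sketch is an assumption.</p>
<code language="python">
def summarize(records):
    """Aggregate (predicted, true, elapsed_s) records into the reported metrics."""
    total = len(records)
    correct = sum(1 for pred, true, _ in records if pred == true)
    avg_time = sum(t for _, _, t in records) / total
    return {
        "recognition rate (%)": 100.0 * correct / total,
        "wrongly recognized": f"{total - correct}/{total}",
        "average speed (s/sign)": round(avg_time, 4),
        "throughput (fps)": round(1.0 / avg_time, 1),
    }

# Example: 18/20 signs correct at 0.15 s per sign, as in the
# Raspberry Pi 4 column of Table 12
demo = [(1, 1, 0.15)] * 18 + [(2, 1, 0.15)] * 2
print(summarize(demo))
</code>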
<p>Several Raspberry Pi-based experiments with sign recognition systems have been reported. The authors of [<xref ref-type="bibr" rid="ref-50">50</xref>] developed a Raspberry Pi-based recognition system for speed limit signs, with consideration given to the stability of color detection with respect to daylight; according to their results, the system can process data in as little as two seconds with an accuracy of 80%. A second study uses a Raspberry Pi 3 board and ML techniques to create a real-time sign recognition system that recognizes five classes of signs, identifies their type, and notifies the driver. According to the results, the system&#x2019;s maximum average time to identify the type of sign when the vehicle is moving at 50 km/h is 3.44 s, and the average recognition accuracy over the five classes is above 90% [<xref ref-type="bibr" rid="ref-51">51</xref>].</p>
<p>Considering <xref ref-type="table" rid="table-12">Table 12</xref>, the proposed SLR system outperforms the cited approaches in terms of accuracy relative to the number of classes, with a score of 90% for 9 classes on the Raspberry Pi 4. In terms of processing time, the proposed SLR system is also the fastest, with an average of 0.15 s. Meanwhile, the NVIDIA Jetson Nano outperforms the Raspberry Pi 4 thanks to its ability to exploit a powerful NVIDIA GPU: it is significantly faster in image processing, with an average recognition time of 0.0815 s (ranging from 0.071 to 0.092 s), which translates into shorter inference times for speed limit sign recognition. Moreover, in terms of accuracy, the recognition rate achieved by the Jetson Nano is higher than that of the Raspberry Pi, at 95%.</p>

</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion and Perspectives</title>
<p>The automotive industry is constantly developing automated safety technologies with the goal of creating automated driving systems that can perform all driving-related tasks and help prevent accidents, saving lives by averting potentially dangerous situations brought on by distracted driving. Driving automation technologies help drivers perceive their surroundings and relieve them of several driving responsibilities. The TSR system is considered one of the most crucial parts of driving automation: it consists of automatically identifying road signs with the fastest possible processing time. This paper focuses on the recognition of speed limit signs, which are of significant importance in regulating traffic speed, maintaining road safety, and minimizing the risk of accidents. For this purpose, a new SLR methodology based on three modules is proposed. First, in the SLD module, the Haar Cascade technique is used to generate a SL detector able to localize the SL sign in the captured image. Then, a newly developed CNN model (DeepSL), in addition to several ML classifiers, is trained on the SL signs from the GTSRB dataset in the SLC module. Compared to KNN, Random Forest, and SVM, DeepSL gives better performance (98.8%).</p>
<p>To improve the performance of the developed SLR system, a new module, SLCF, aiming to combine classifiers (DeepSL and the ML classifiers), is proposed, using Dempster-Shafer theory and the voting technique from EL. The achieved results are clearly better than those obtained using each classifier separately: the best combination results are 99.98%, found by combining DeepSL, SVM, RF, and KNN using the DS theory of belief functions, and 99.96%, found by merging the RF and KNN classifiers using the voting method. To assess the reliability and effectiveness of the developed system, software and hardware validations are conducted. The SLR system achieves an average of 60 ms to identify each detected road sign in the computer simulation, and an average of 25 ms using Google Colab. The effectiveness of the proposed methodology is further assessed on two hardware targets, achieving processing times of 150 ms on the Raspberry Pi board and 81.5 ms on the Jetson Nano board.</p>
<p>While the proposed SLR methodology has shown promising results in recognizing SL signs compared to other studies, some improvements can be made in future work. It is essential to extend the methodology to recognize all traffic sign categories; this extension would make the system more comprehensive and applicable to various driving scenarios. In terms of hardware validation, two popular and accessible hardware targets, the Raspberry Pi and Jetson Nano boards, are used in this paper. Another ongoing aspect of this work is to test and validate the system on other hardware platforms to achieve better performance. For instance, implementing the system on an FPGA (Field-Programmable Gate Array) or a Jetson Xavier could offer better processing time and accuracy. These proposed improvements would make the system complete, more robust, and adaptable to various driving scenarios and lighting conditions.</p>
</sec>
</body>
<back>
<ack><p>The authors extend their acknowledgment to all the researchers and reviewers who helped improve the quality of the idea, the concept, and the paper overall.</p>
</ack>
<sec><title>Funding Statement</title>
<p>The authors received no specific funding for this study.</p>
</sec>
<sec><title>Author Contributions</title>
<p>The authors confirm contribution to the paper as follows: study conception and design: Mohamed Karray, Nesrine Triki and Mohamed Ksantini; data collection: Nesrine Triki, Mohamed Karray; analysis and interpretation of results: Mohamed Karray, Nesrine Triki and Mohamed Ksantini; draft manuscript preparation: Mohamed Karray, Nesrine Triki. All authors reviewed the results and approved the final version of the manuscript.</p>
</sec>
<sec sec-type="data-availability"><title>Availability of Data and Materials</title>
<p>The GTSRB dataset is a public dataset used during the study, available at the following link: <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign">https://www.kaggle.com/datasets/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign</ext-link> (accessed on 08/08/2023).</p>
</sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Wintersberger</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Riener</surname></string-name></person-group>, &#x201C;<article-title>Trust in technology as a safety aspect in highly automated driving</article-title>,&#x201D; <source>i-com</source>, vol. <volume>15</volume>, no. <issue>3</issue>, pp. <fpage>297</fpage>&#x2013;<lpage>310</lpage>, <month>Dec</month>. <year>2016</year>. doi: <pub-id pub-id-type="doi">10.1515/icom-2016-0034</pub-id>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J. M.</given-names> <surname>Wood</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Exploring perceptions of advanced driver assistance systems (ADAS) in older drivers with age-related declines</article-title>,&#x201D; <source>Transp. Res. Part F: Traffic Psychol. Behav.</source>, vol. <volume>100</volume>, no. <issue>1</issue>, pp. <fpage>419</fpage>&#x2013;<lpage>430</lpage>, <month>Jan</month>. <year>2024</year>. doi: <pub-id pub-id-type="doi">10.1016/j.trf.2023.12.006</pub-id>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Munster</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Bohlig</surname></string-name></person-group>, &#x201C;<article-title>Auto outlook 2040: The rise of fully autonomous vehicles</article-title>,&#x201D; <comment> Sep. 6, 2017. Accessed: Aug. 6, 2023</comment>. [Online]. Available: <ext-link ext-link-type="uri" xlink:href="https://deepwatermgmt.com/auto-outlook-2040-the-rise-of-fully-autonomous-vehicles/">https://deepwatermgmt.com/auto-outlook-2040-the-rise-of-fully-autonomous-vehicles/</ext-link></mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Triki</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Karray</surname></string-name>, and <string-name><given-names>M.</given-names> <surname>Ksantini</surname></string-name></person-group>, &#x201C;<article-title>A real-time traffic sign recognition method using a new attention-based deep convolutional neural network for smart vehicles</article-title>,&#x201D; <source>Appl. Sci.</source>, vol. <volume>13</volume>, no. <issue>8</issue>, pp. <fpage>4793</fpage>, <month>Apr</month>. <year>2023</year>. doi: <pub-id pub-id-type="doi">10.3390/app13084793</pub-id>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>G&#x00E1;mez Serna</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Ruichek</surname></string-name></person-group>, &#x201C;<article-title>Classification of traffic signs: The European dataset</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>6</volume>, pp. <fpage>78136</fpage>&#x2013;<lpage>78148</lpage>, <year>2018</year>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2884826</pub-id>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Stallkamp</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Schlipsing</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Salmen</surname></string-name>, and <string-name><given-names>C.</given-names> <surname>Igel</surname></string-name></person-group>, &#x201C;<article-title>The german traffic sign recognition benchmark: A multi-class classification competition</article-title>,&#x201D; in <conf-name>2011 Int. Jt. Conf. Neural Netw.</conf-name>, <month>Jul</month>. <year>2011</year>, pp. <fpage>1453</fpage>&#x2013;<lpage>1460</lpage>. doi: <pub-id pub-id-type="doi">10.1109/IJCNN.2011.6033395</pub-id>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Youssef</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Albani</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Nardi</surname></string-name>, and <string-name><given-names>D. D.</given-names> <surname>Bloisi</surname></string-name></person-group>, &#x201C;<article-title>Fast traffic sign recognition using color segmentation and deep convolutional networks</article-title>,&#x201D; in <conf-name>Adv. Concepts Intell. Vis. Syst.</conf-name>, <year>2016</year>, vol. <volume>10016</volume>, pp. <fpage>205</fpage>&#x2013;<lpage>216</lpage>. doi: <pub-id pub-id-type="doi">10.1007/978-3-319-48680-2_19</pub-id>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Timofte</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Zimmermann</surname></string-name>, and <string-name><given-names>L.</given-names> <surname>Van Gool</surname></string-name></person-group>, &#x201C;<article-title>Multi-view traffic sign detection, recognition, and 3D localisation</article-title>,&#x201D; <source>Mach. Vis. Appl.</source>, vol. <volume>25</volume>, no. <issue>3</issue>, pp. <fpage>633</fpage>&#x2013;<lpage>647</lpage>, <month>Apr</month>. <year>2014</year>. doi: <pub-id pub-id-type="doi">10.1007/s00138-011-0391-3</pub-id>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>V.</given-names> <surname>Shakhuro</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Konushin</surname></string-name>, and <collab>Lomonosov Moscow</collab></person-group>, &#x201C;<article-title>Russian traffic sign images dataset</article-title>,&#x201D; <source>Comput. Opt.</source>, vol. <volume>40</volume>, no. <issue>2</issue>, pp. <fpage>294</fpage>&#x2013;<lpage>300</lpage>, <year>2016</year>. doi: <pub-id pub-id-type="doi">10.18287/2412-6179-2016-40-2-294-300</pub-id>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Zhu</surname></string-name> and <string-name><given-names>W. Q.</given-names> <surname>Yan</surname></string-name></person-group>, &#x201C;<article-title>Traffic sign recognition based on deep learning</article-title>,&#x201D; <source>Multimed. Tools Appl.</source>, vol. <volume>81</volume>, no. <issue>13</issue>, pp. <fpage>17779</fpage>&#x2013;<lpage>17791</lpage>, <month>May</month> <year>2022</year>. doi: <pub-id pub-id-type="doi">10.1007/s11042-022-12163-0</pub-id>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>X. R.</given-names> <surname>Lim</surname></string-name>, <string-name><given-names>C. P.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>K. M.</given-names> <surname>Lim</surname></string-name>, <string-name><given-names>T. S.</given-names> <surname>Ong</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Alqahtani</surname></string-name>, and <string-name><given-names>M.</given-names> <surname>Ali</surname></string-name></person-group>, &#x201C;<article-title>Recent advances in traffic sign recognition: Approaches and datasets</article-title>,&#x201D; <source>Sensors</source>, vol. <volume>23</volume>, no. <issue>10</issue>, pp. <fpage>4674</fpage>, <month>Jan</month>. <year>2023</year>. doi: <pub-id pub-id-type="doi">10.3390/s23104674</pub-id>; <pub-id pub-id-type="pmid">37430587</pub-id></mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Mogelmose</surname></string-name>, <string-name><given-names>M. M.</given-names> <surname>Trivedi</surname></string-name>, and <string-name><given-names>T. B.</given-names> <surname>Moeslund</surname></string-name></person-group>, &#x201C;<article-title>Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey</article-title>,&#x201D; <source>IEEE Trans. Intell. Transp. Syst.</source>, vol. <volume>13</volume>, no. <issue>4</issue>, pp. <fpage>1484</fpage>&#x2013;<lpage>1497</lpage>, <month>D&#x00E9;c</month>. <year>2012</year>. doi: <pub-id pub-id-type="doi">10.1109/TITS.2012.2209421</pub-id>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Boumediene</surname></string-name>, <string-name><given-names>J. P.</given-names> <surname>Lauffenburger</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Daniel</surname></string-name>, and <string-name><given-names>C.</given-names> <surname>Cudel</surname></string-name></person-group>, &#x201C;<article-title>Detection, association and tracking for traffic sign recognition,&#x201D; <italic>Rencontres Francophones sur la Logique Floues et ses Applications</italic></article-title>, <month>Oct</month>. <year>2014</year>. <comment>Accessed: Apr. 6, 2023</comment>. [Online]. Available: <ext-link ext-link-type="uri" xlink:href="https://hal.science/hal-01123472">https://hal.science/hal-01123472</ext-link></mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Zeng</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Lan</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Ran</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Wang</surname></string-name>, and <string-name><given-names>J.</given-names> <surname>Gao</surname></string-name></person-group>, &#x201C;<article-title>Restoration of motion-blurred image based on border deformation detection: A traffic sign restoration model</article-title>,&#x201D; <source>PLoS One</source>, vol. <volume>10</volume>, no. <issue>4</issue>, pp. <fpage>e0120885</fpage>, <month>Apr</month>. <year>2015</year>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0120885</pub-id>; <pub-id pub-id-type="pmid">25849350</pub-id></mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Barnes</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Zelinsky</surname></string-name>, and <string-name><given-names>L. S.</given-names> <surname>Fletcher</surname></string-name></person-group>, &#x201C;<article-title>Real-time speed sign detection using the radial symmetry detector</article-title>,&#x201D; <source>IEEE Trans. Intell. Transp. Syst.</source>, vol. <volume>9</volume>, no. <issue>2</issue>, pp. <fpage>322</fpage>, <month>Jun</month>. <year>2008</year>. doi: <pub-id pub-id-type="doi">10.1109/TITS.2008.922935</pub-id>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Chang</surname></string-name>, and <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Machine vision based traffic sign detection methods: Review, analyses and perspectives</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>7</volume>, pp. <fpage>86578</fpage>&#x2013;<lpage>86596</lpage>, <year>2019</year>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2924947</pub-id>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Belaroussi</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Foucher</surname></string-name>, <string-name><given-names>J. P.</given-names> <surname>Tarel</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Soheilian</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Charbonnier</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Paparoditis</surname></string-name></person-group>, &#x201C;<article-title>Road sign detection in images: A case study</article-title>,&#x201D; in <conf-name>2010 20th Int. Conf. Pattern Recognit.</conf-name>, <publisher-loc>Istanbul, Turkey</publisher-loc>, <month>Aug</month>. <year>2010</year>, pp. <fpage>484</fpage>&#x2013;<lpage>488</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ICPR.2010.1125</pub-id>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Viola</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Jones</surname></string-name></person-group>, &#x201C;<article-title>Rapid object detection using a boosted cascade of simple features</article-title>,&#x201D; in <conf-name>Proc. 2001 IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. CVPR 2001</conf-name>, <publisher-loc>Kauai, HI, USA</publisher-loc>, <publisher-name>IEEE Comput. Soc.</publisher-name>, <year>2001</year>, pp. <fpage>I&#x2011;511</fpage>&#x2013;<lpage>I&#x2011;518</lpage>. doi: <pub-id pub-id-type="doi">10.1109/CVPR.2001.990517</pub-id>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Kaplan Berkaya</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Gunduz</surname></string-name>, <string-name><given-names>O.</given-names> <surname>Ozsen</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Akinlar</surname></string-name>, and <string-name><given-names>S.</given-names> <surname>Gunal</surname></string-name></person-group>, &#x201C;<article-title>On circular traffic sign detection and recognition</article-title>,&#x201D; <source>Expert. Syst. Appl.</source>, vol. <volume>48</volume>, pp. <fpage>67</fpage>&#x2013;<lpage>75</lpage>, <month>Apr</month>. <year>2016</year>. doi: <pub-id pub-id-type="doi">10.1016/j.eswa.2015.11.018</pub-id>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S. B.</given-names> <surname>Wali</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Vision-based traffic sign detection and recognition systems: Current trends and challenges</article-title>,&#x201D; <source>Sens.</source>, vol. <volume>19</volume>, no. <issue>9</issue>, pp. <fpage>2093</fpage>, <month>Jan</month>. <year>2019</year>. doi: <pub-id pub-id-type="doi">10.3390/s19092093</pub-id>; <pub-id pub-id-type="pmid">31064098</pub-id></mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Jin</surname></string-name>, and <string-name><given-names>X.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>A real-time chinese traffic sign detection algorithm based on modified YOLOv2</article-title>,&#x201D; <source>Algorithms</source>, vol. <volume>10</volume>, no. <issue>4</issue>, pp. <fpage>127</fpage>, <month>D&#x00E9;c</month>. <year>2017</year>. doi: <pub-id pub-id-type="doi">10.3390/a10040127</pub-id>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>W. J.</given-names> <surname>Jeon</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Real-time detection of speed-limit traffic signs on the real road using Haar-like features and boosted cascade</article-title>,&#x201D; in <conf-name>Proc. 8th Int. Conf. Ubiquitous Inf. Manag. Commun.-ICUIMC &#x2018;14, Siem Reap</conf-name>, <publisher-loc>Cambodia</publisher-loc>, <publisher-name>ACM Press</publisher-name>, <year>2014</year>, pp. <fpage>1</fpage>&#x2013;<lpage>5</lpage>. doi: <pub-id pub-id-type="doi">10.1145/2557977.2558091</pub-id>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Co&#x0163;ovanu</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Zet</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Fo&#x015F;al&#x0103;u</surname></string-name>, and <string-name><given-names>M.</given-names> <surname>Skoczylas</surname></string-name></person-group>, &#x201C;<article-title>Detection of traffic signs based on support vector machine classification using HOG Features</article-title>,&#x201D; in <conf-name>2018 Int. Conf. Expo. Electr. Power Eng. (EPE)</conf-name>, <month>Oct</month>. <year>2018</year>, pp. <fpage>518</fpage>&#x2013;<lpage>522</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ICEPE.2018.8559784</pub-id>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>X.</given-names> <surname>He</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Dai</surname></string-name></person-group>, &#x201C;<article-title>A new traffic signs classification approach based on local and global features extraction</article-title>,&#x201D; in <conf-name>2016 6th Int. Conf. Inf. Commun. Manag. (ICICM)</conf-name>, <month>Oct</month>. <year>2016</year>, pp. <fpage>121</fpage>&#x2013;<lpage>125</lpage>. doi: <pub-id pub-id-type="doi">10.1109/INFOCOMAN.2016.7784227</pub-id>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Sugiharto</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Comparison of SVM, random forest and KNN classification by using HOG on traffic sign detection</article-title>,&#x201D; in <conf-name>2022 6th Int. Conf. Inform. Comput. Sci. (ICICoS)</conf-name>, <month>Sep</month>. <year>2022</year>, pp. <fpage>60</fpage>&#x2013;<lpage>65</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ICICoS56336.2022.9930588</pub-id>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Zaklouta</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Stanciulescu</surname></string-name>, and <string-name><given-names>O.</given-names> <surname>Hamdoun</surname></string-name></person-group>, &#x201C;<article-title>Traffic sign classification using K-d trees and random forests</article-title>,&#x201D; in <conf-name>2011 Int. Joint Conf. Neural Netw.</conf-name>, <month>Jul</month>. <year>2011</year>, pp. <fpage>2151</fpage>&#x2013;<lpage>2155</lpage>. doi: <pub-id pub-id-type="doi">10.1109/IJCNN.2011.6033494</pub-id>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H. H.</given-names> <surname>Aghdam</surname></string-name>, <string-name><given-names>E. J.</given-names> <surname>Heravi</surname></string-name>, and <string-name><given-names>D.</given-names> <surname>Puig</surname></string-name></person-group>, &#x201C;<article-title>A practical approach for detection and classification of traffic signs using convolutional neural networks</article-title>,&#x201D; <source>Robotics Auton. Syst.</source>, vol. <volume>84</volume>, no. <issue>13&#x2013;14</issue>, pp. <fpage>97</fpage>&#x2013;<lpage>112</lpage>, <year>2016</year>. doi: <pub-id pub-id-type="doi">10.1016/j.robot.2016.07.003</pub-id>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Kale</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Panchpor</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Dingore</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Gaikwad</surname></string-name>, and <string-name><given-names>L.</given-names> <surname>Bewoor</surname></string-name></person-group>, &#x201C;<article-title>Traffic sign classification using convolutional neural network</article-title>,&#x201D; <source>IJSRCSEIT</source>, pp. <fpage>1</fpage>&#x2013;<lpage>10</lpage>, <month>Nov</month>. <year>2021</year>. doi: <pub-id pub-id-type="doi">10.32628/CSEIT217545</pub-id>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Babi&#x0107;</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Babi&#x0107;</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Fioli&#x0107;</surname></string-name>, and <string-name><given-names>&#x017D;.</given-names> <surname>&#x0160;ari&#x0107;</surname></string-name></person-group>, &#x201C;<article-title>Analysis of market-ready traffic sign recognition systems in cars: A test field study</article-title>,&#x201D; <source>Energies</source>, vol. <volume>14</volume>, no. <issue>12</issue>, pp. <fpage>3697</fpage>, <month>Jan</month>. <year>2021</year>. doi: <pub-id pub-id-type="doi">10.3390/en14123697</pub-id>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Stallkamp</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Schlipsing</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Salmen</surname></string-name>, and <string-name><given-names>C.</given-names> <surname>Igel</surname></string-name></person-group>, &#x201C;<article-title>The german traffic sign recognition benchmark</article-title>,&#x201D; in <source>IEEE International Joint Conference on Neural Networks</source>, <comment>2013. Accessed Apr. 25, 2023</comment>. [Online]. Available: <ext-link ext-link-type="uri" xlink:href="https://benchmark.ini.rub.de/gtsdb_dataset.html">https://benchmark.ini.rub.de/gtsdb_dataset.html</ext-link></mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Bzdok</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Krzywinski</surname></string-name>, and <string-name><given-names>N.</given-names> <surname>Altman</surname></string-name></person-group>, &#x201C;<article-title>Machine learning: Supervised methods, SVM and kNN</article-title>,&#x201D; <source>Nat. Methods</source>, vol. <volume>15</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <month>Jan</month>. <year>2018</year>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Triki</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Ksantini</surname></string-name>, and <string-name><given-names>M.</given-names> <surname>Karray</surname></string-name></person-group>, &#x201C;<article-title>Traffic sign recognition system based on belief functions theory</article-title>,&#x201D; in <conf-name>Proc. 13th Int. Conf. Agents Artif. Intell., </conf-name> <publisher-name>Science and Technology Publications</publisher-name>, <year>2021</year>, pp. <fpage>775</fpage>&#x2013;<lpage>780</lpage>. doi: <pub-id pub-id-type="doi">10.5220/0010239807750780</pub-id>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Narayana</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Bhavani</surname></string-name></person-group>, &#x201C;<article-title>Detection of traffic signs under various conditions using random forest algorithm comparison with KNN and SVM</article-title>,&#x201D; vol. <volume>48</volume>, pp. <fpage>6</fpage>, <year>2022</year>. doi: <pub-id pub-id-type="doi">10.1109/ICBATS54253.2022.9759067</pub-id>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Qian</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Yue</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Wang</surname></string-name>, and <string-name><given-names>F.</given-names> <surname>Coenen</surname></string-name></person-group>, &#x201C;<article-title>Robust Chinese traffic sign detection and recognition with deep convolutional neural network</article-title>,&#x201D; in <conf-name>2015 11th Int. Conf. Nat. Comput. (ICNC)</conf-name>, <publisher-loc>Zhangjiajie, China</publisher-loc>, <publisher-name>IEEE</publisher-name>, <month>Aug</month>. <year>2015</year>, pp. <fpage>791</fpage>&#x2013;<lpage>796</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ICNC.2015.7378092</pub-id>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Castanedo</surname></string-name></person-group>, &#x201C;<article-title>A review of data fusion techniques</article-title>,&#x201D; <source>Sci. World J.</source>, vol. <volume>2013</volume>, no. <issue>6</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>19</lpage>, <year>2013</year>. doi: <pub-id pub-id-type="doi">10.1155/2013/704504</pub-id>; <pub-id pub-id-type="pmid">24288502</pub-id></mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>V.</given-names> <surname>Pashazadeh</surname></string-name>, <string-name><given-names>F. R.</given-names> <surname>Salmasi</surname></string-name>, and <string-name><given-names>B. N.</given-names> <surname>Araabi</surname></string-name></person-group>, &#x201C;<article-title>Data driven sensor and actuator fault detection and isolation in wind turbine using classifier fusion</article-title>,&#x201D; <source>Renew. Energy</source>, vol. <volume>116</volume>, no. <issue>2</issue>, pp. <fpage>99</fpage>&#x2013;<lpage>106</lpage>, <month>Feb</month>. <year>2018</year>. doi: <pub-id pub-id-type="doi">10.1016/j.renene.2017.03.051</pub-id>.</mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Moradi</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Chaibakhsh</surname></string-name>, and <string-name><given-names>A.</given-names> <surname>Ramezani</surname></string-name></person-group>, &#x201C;<article-title>An intelligent hybrid technique for fault detection and condition monitoring of a thermal power plant</article-title>,&#x201D; <source>Appl. Math. Model.</source>, vol. <volume>60</volume>, pp. <fpage>34</fpage>&#x2013;<lpage>47</lpage>, <month>Aug</month>. <year>2018</year>. doi: <pub-id pub-id-type="doi">10.1016/j.apm.2018.03.002</pub-id>.</mixed-citation></ref>
<ref id="ref-38"><label>[38]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>Kannatey-Asibu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Yum</surname></string-name>, and <string-name><given-names>T. H.</given-names> <surname>Kim</surname></string-name></person-group>, &#x201C;<article-title>Monitoring tool wear using classifier fusion</article-title>,&#x201D; <source>Mech. Syst. Signal. Process.</source>, vol. <volume>85</volume>, no. <issue>2</issue>, pp. <fpage>651</fpage>&#x2013;<lpage>661</lpage>, <month>Feb</month>. <year>2017</year>. doi: <pub-id pub-id-type="doi">10.1016/j.ymssp.2016.08.035</pub-id>.</mixed-citation></ref>
<ref id="ref-39"><label>[39]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Seng Ng</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Srinivasan</surname></string-name></person-group>, &#x201C;<article-title>Multi-agent based collaborative fault detection and identification in chemical processes</article-title>,&#x201D; <source>Eng. Appl. Artif. Intell.</source>, vol. <volume>23</volume>, no. <issue>6</issue>, pp. <fpage>934</fpage>&#x2013;<lpage>949</lpage>, <month>Sep</month>. <year>2010</year>. doi: <pub-id pub-id-type="doi">10.1016/j.engappai.2010.01.026</pub-id>.</mixed-citation></ref>
<ref id="ref-40"><label>[40]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>A. B.</given-names> <surname>Cremers</surname></string-name>, and <string-name><given-names>Z.</given-names> <surname>Cao</surname></string-name></person-group>, &#x201C;<article-title>Interactive color image segmentation via iterative evidential labeling</article-title>,&#x201D; <source>Inf. Fusion</source>, vol. <volume>20</volume>, pp. <fpage>292</fpage>&#x2013;<lpage>304</lpage>, <month>Nov</month>. <year>2014</year>. doi: <pub-id pub-id-type="doi">10.1016/j.inffus.2014.03.007</pub-id>.</mixed-citation></ref>
<ref id="ref-41"><label>[41]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Lin</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Ma</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Dou</surname></string-name>, and <string-name><given-names>X.</given-names> <surname>Ma</surname></string-name></person-group>, &#x201C;<article-title>A new combination method for multisensor conflict information</article-title>,&#x201D; <source>J. Supercomput.</source>, vol. <volume>72</volume>, no. <issue>7</issue>, pp. <fpage>2874</fpage>&#x2013;<lpage>2890</lpage>, <month>Jul</month>. <year>2016</year>. doi: <pub-id pub-id-type="doi">10.1007/s11227-016-1681-3</pub-id>.</mixed-citation></ref>
<ref id="ref-42"><label>[42]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Dempster</surname></string-name></person-group>, &#x201C;<article-title>Upper and lower probabilities induced by a multivalued mapping</article-title>,&#x201D; <source>Ann. Math. Stat.</source>, vol. <volume>38</volume>, pp. <fpage>57</fpage>&#x2013;<lpage>72</lpage>, <year>2008</year>. doi: <pub-id pub-id-type="doi">10.1007/978-3-540-44792-4</pub-id>.</mixed-citation></ref>
<ref id="ref-43"><label>[43]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Krzyzak</surname></string-name>, and <string-name><given-names>C.</given-names> <surname>Suen</surname></string-name></person-group>, &#x201C;<article-title>Methods of combining multiple classifiers and their applications to handwriting recognition</article-title>,&#x201D; <source>IEEE Trans. Syst. Man Cybern. B</source>, vol. <volume>22</volume>, no. <issue>3</issue>, pp. <fpage>418</fpage>&#x2013;<lpage>435</lpage>, <month>Jun</month>. <year>1992</year>. doi: <pub-id pub-id-type="doi">10.1109/21.155943</pub-id>.</mixed-citation></ref>
<ref id="ref-44"><label>[44]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Bi</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Guan</surname></string-name>, and <string-name><given-names>D.</given-names> <surname>Bell</surname></string-name></person-group>, &#x201C;<article-title>The combination of multiple classifiers using an evidential reasoning approach</article-title>,&#x201D; <source>Artif. Intell.</source>, vol. <volume>172</volume>, no. <issue>15</issue>, pp. <fpage>1731</fpage>&#x2013;<lpage>1751</lpage>, <month>Oct</month>. <year>2008</year>. doi: <pub-id pub-id-type="doi">10.1016/j.artint.2008.06.002</pub-id>.</mixed-citation></ref>
<ref id="ref-45"><label>[45]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Bi</surname></string-name></person-group>, &#x201C;<article-title>The impact of diversity on the accuracy of evidential classifier ensembles</article-title>,&#x201D; <source>Int. J. Approx. Reason.</source>, vol. <volume>53</volume>, no. <issue>4</issue>, pp. <fpage>584</fpage>&#x2013;<lpage>607</lpage>, <month>Jun</month>. <year>2012</year>. doi: <pub-id pub-id-type="doi">10.1016/j.ijar.2011.12.011</pub-id>.</mixed-citation></ref>
<ref id="ref-46"><label>[46]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Pan</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Dezert</surname></string-name>, <string-name><given-names>J. W.</given-names> <surname>Han</surname></string-name>, and <string-name><given-names>Y.</given-names> <surname>He</surname></string-name></person-group>, &#x201C;<article-title>Classifier fusion with contextual reliability evaluation</article-title>,&#x201D; <source>IEEE Trans. Cybern.</source>, vol. <volume>48</volume>, no. <issue>5</issue>, pp. <fpage>1605</fpage>&#x2013;<lpage>1618</lpage>, <month>Mai</month> <year>2018</year>. doi: <pub-id pub-id-type="doi">10.1109/TCYB.2017.2710205</pub-id>; <pub-id pub-id-type="pmid">28613193</pub-id></mixed-citation></ref>
<ref id="ref-47"><label>[47]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Davoine</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Zha</surname></string-name>, and <string-name><given-names>T.</given-names> <surname>Den&#x0153;ux</surname></string-name></person-group>, &#x201C;<article-title>Evidential calibration of binary SVM classifiers</article-title>,&#x201D; <source>Int. J. Approx. Reason.</source>, vol. <volume>72</volume>, no. <issue>4</issue>, pp. <fpage>55</fpage>&#x2013;<lpage>70</lpage>, <month>May</month> <year>2016</year>. doi: <pub-id pub-id-type="doi">10.1016/j.ijar.2015.05.002</pub-id>.</mixed-citation></ref>
<ref id="ref-48"><label>[48]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Kalirane</surname></string-name></person-group>, &#x201C;<article-title>Ensemble learning methods: Bagging, boosting and stacking</article-title>,&#x201D; <year>2022</year>. <comment>Accessed: Aug. 8, 2023</comment>. [Online]. Available: <ext-link ext-link-type="uri" xlink:href="https://www.analyticsvidhya.com/blog/2023/01/ensemble-learning-methods-bagging-boosting-and-stacking/">https://www.analyticsvidhya.com/blog/2023/01/ensemble-learning-methods-bagging-boosting-and-stacking/</ext-link></mixed-citation></ref>
<ref id="ref-49"><label>[49]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. N.</given-names> <surname>Tabata</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Zimmer</surname></string-name>, <string-name><given-names>L.</given-names> <surname>dos Santos Coelho</surname></string-name>, and <string-name><given-names>V. C.</given-names> <surname>Mariani</surname></string-name></person-group>, &#x201C;<article-title>Analyzing CARLA &#x2018;s performance for 2D object detection and monocular depth estimation based on deep learning approaches</article-title>,&#x201D; <source>Expert. Syst. Appl.</source>, vol. <volume>227</volume>, no. <issue>22</issue>, pp. <fpage>120200</fpage>, <month>Oct</month>. <year>2023</year>. doi: <pub-id pub-id-type="doi">10.1016/j.eswa.2023.120200</pub-id>.</mixed-citation></ref>
<ref id="ref-50"><label>[50]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Akshay</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Dinesh</surname></string-name>, and <string-name><given-names>U.</given-names> <surname>Scholars</surname></string-name></person-group>, &#x201C;<article-title>Road sign recognition system using raspberry pi</article-title>,&#x201D; <source>Int. J. Pure Appl. Math.</source>, vol. <volume>119</volume>, no. <issue>15</issue>, pp. <fpage>1845</fpage>&#x2013;<lpage>1850</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-51"><label>[51]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>I. S. B. M.</given-names> <surname>Isa</surname></string-name>, <string-name><given-names>C. J.</given-names> <surname>Yeong</surname></string-name>, and <string-name><given-names>N. L. A. bin M. S.</given-names> <surname>Azyze</surname></string-name></person-group>, &#x201C;<article-title>Real-time traffic sign detection and recognition using Raspberry Pi</article-title>,&#x201D; <source>Int. J. Electr. Comput. Eng.</source>, vol. <volume>12</volume>, no. <issue>1</issue>, pp. <fpage>331</fpage>&#x2013;<lpage>338</lpage>, <month>Feb</month>. <year>2022</year>. doi: <pub-id pub-id-type="doi">10.11591/ijece.v12i1.pp331-338</pub-id>.</mixed-citation></ref>
</ref-list>
</back></article>