<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">35266</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2023.035266</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Hybrid Metaheuristics with Deep Learning Enabled Automated Deception Detection and Classification of Facial Expressions</article-title>
<alt-title alt-title-type="left-running-head">Hybrid Metaheuristics with Deep Learning Enabled Automated Deception Detection and Classification of Facial Expressions</alt-title>
<alt-title alt-title-type="right-running-head">Hybrid Metaheuristics with Deep Learning Enabled Automated Deception Detection and Classification of Facial Expressions</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Alaskar</surname><given-names>Haya</given-names></name><email>h.alaskar@psau.edu.sa</email></contrib>
<aff><institution>Department of Computer Sciences, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University</institution>, <addr-line>Al Kharj</addr-line>, <country>Saudi Arabia</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Haya Alaskar. Email: <email>h.alaskar@psau.edu.sa</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic"><year>2023</year></pub-date>
<pub-date date-type="pub" publication-format="electronic"><day>1</day><month>5</month><year>2023</year>
</pub-date>
<volume>75</volume>
<issue>3</issue>
<fpage>5433</fpage>
<lpage>5449</lpage>
<history>
<date date-type="received"><day>14</day><month>8</month><year>2022</year>
</date>
<date date-type="accepted"><day>08</day><month>2</month><year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Alaskar</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Alaskar</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_35266.pdf"></self-uri>
<abstract>
<p>Automatic deception recognition has received considerable attention from the machine learning community due to recent research on its vast application to social media, interviews, law enforcement, and the military. Video analysis-based techniques for automated deception detection have received increasing interest. This study develops a new self-adaptive population-based firefly algorithm with a deep learning-enabled automated deception detection (SAPFF-DLADD) model for analyzing facial cues. Initially, the input video is separated into a set of video frames. Then, the SAPFF-DLADD model applies the MobileNet-based feature extractor to produce a useful set of features. The long short-term memory (LSTM) model is exploited for deception detection and classification. In the final stage, the SAPFF technique is applied to optimally alter the hyperparameter values of the LSTM model, showing the novelty of the work. The experimental validation of the SAPFF-DLADD model is tested using the Miami University Deception Detection Database (MU3D), a database comprised of two classes, namely, truth and deception. An extensive comparative analysis reported a better performance of the SAPFF-DLADD model compared to recent approaches, with a higher accuracy of 99%.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Deception detection</kwd>
<kwd>facial cues</kwd>
<kwd>deep learning</kwd>
<kwd>computer vision</kwd>
<kwd>hyperparameter tuning</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>The detection of human emotions has piqued researchers&#x2019; interest for generations. However, how well humans or machines can ultimately detect deceptive speech remains an open issue for criminal investigations. Deception is generally embedded in day-to-day interactions, yet it is challenging for both untrained people and trained professionals to detect deception accurately without using intrusive measures. Facial expressions, one of the main channels for understanding and interpreting emotions in social interactions, have been studied extensively in recent decades. Deception is the sharing or conveying of facts, ideas, or concepts that have been altered for advancement or personal gain. It might range from fabricating information in a minor disagreement to manipulating the masses [<xref ref-type="bibr" rid="ref-1">1</xref>]. Determining whether a statement is deceptive or genuine is very difficult; therefore, it is important to utilize a deception detection technique to validate critical data. For this reason, many deception detection techniques and systems have been introduced [<xref ref-type="bibr" rid="ref-2">2</xref>].</p>
<p>Facial expressions help to reveal emotions that words sometimes do not sufficiently convey. From this principle, deception recognition based on facial expressions is derived [<xref ref-type="bibr" rid="ref-3">3</xref>]. It is easy to identify actions and emotions such as anger, laughter, and sadness, yet slight modifications can go completely unobserved by the inexperienced eye. Macro-expressions associated with fear, anger, sadness, happiness, and so on are apparent and understandable; they last between 0.5 and 5 s. Micro-expressions display a concealed emotion, occur unconsciously [<xref ref-type="bibr" rid="ref-4">4</xref>], and last less than 0.5 s. Emotions such as anxiety, amusement, embarrassment, shame, relief, guilt, and pleasure are typically revealed through micro-expressions. While it is easy to categorize macro-expressions since they last longer and occur more frequently, micro-expressions go unobserved by the inexperienced eye for the opposite reasons [<xref ref-type="bibr" rid="ref-5">5</xref>]. Micro-expressions are often displayed by a person trying to deceive someone else or to hide a particular emotion.</p>
<p>A critical aspect of suitably conducting a lie-detection study is the public availability of a satisfactory dataset. This open innovation is one key component of accelerating the present study, as opposed to closed innovation, which relies on a private or closed dataset [<xref ref-type="bibr" rid="ref-6">6</xref>,<xref ref-type="bibr" rid="ref-7">7</xref>]. Despite existing progress, obtaining training and assessment material for lie recognition is a challenge, especially concerning the verification of ground truth to ascertain whether an individual is lying or not [<xref ref-type="bibr" rid="ref-8">8</xref>]. A major problem emerges because ground-truth collection is of little value when the scenario is simulated naively (for example, merely instructing an individual to tell a lie is not satisfactory) [<xref ref-type="bibr" rid="ref-9">9</xref>,<xref ref-type="bibr" rid="ref-10">10</xref>].</p>
<p>In [<xref ref-type="bibr" rid="ref-11">11</xref>], the authors presented a deep learning (DL) technique dependent upon an attentional convolutional network capable of concentrating on essential parts of faces and attaining improvement compared to preceding methods on several datasets, including FER-2013, CK&#x002B;, FERG, and JAFFE. It also utilizes a visualization approach to recognize significant facial regions and identify emotions according to the classifier output. Li et al. [<xref ref-type="bibr" rid="ref-12">12</xref>] introduced a facial expression dataset, the Realistic Affective Face Database (RAF-DB), which comprises approximately 30,000 facial images with varied illumination and unconstrained poses from thousands of people of varied races and ages. An expectation-maximization system designed to evaluate the dependability of emotion labels revealed that real-time faces frequently express compound or mixed emotions. To address the detection of multiple modal expressions, a deep locality-preserving convolutional neural network (DLP-CNN) technique was presented to enhance the discrimination of in-depth features by maintaining the locality of the classes while maximizing interclass scatter.</p>
<p>Xie et al. [<xref ref-type="bibr" rid="ref-13">13</xref>] presented a new technique called deep comprehensive multiple patches aggregation CNN to resolve the facial expression recognition (FER) issue. The suggested technique is a deep architecture that mainly comprises two CNN branches: one branch extracts local features from image patches, whereas the other extracts holistic features from the complete expression image. Wang et al. [<xref ref-type="bibr" rid="ref-14">14</xref>] developed a facial expression detection methodology based on the CNN method. To simulate a hierarchic mechanism, an activation function is essential in the CNN method since the nonlinear capability of the activation function helps to design reliable AI. Among the activation functions, the rectified linear unit (ReLU) is a better technique; however, it needs improvement. Tsai et al. [<xref ref-type="bibr" rid="ref-15">15</xref>] developed a FER approach that involves a face detection technique integrating the Haar-like feature approach with the self-quotient image (SQI) filter. Consequently, the FER approach demonstrates the best detection rate since the face detection technique more precisely discovers the face region of the image.</p>
<p>Though several FER models are available in the literature, there is still a need to improve the detection rate. Since manual and trial-and-error hyperparameter tuning is a tedious process, metaheuristic algorithms can be employed instead. Therefore, this study develops a new self-adaptive population-based firefly algorithm with a deep learning-enabled automated deception detection (SAPFF-DLADD) model for analyzing facial cues. The SAPFF-DLADD model examines facial cues to identify which are associated with truth or deception. Initially, the input video is separated into a set of video frames. Then, the SAPFF-DLADD model applies a MobileNet-based feature extractor to produce a useful set of features. SAPFF with a long short-term memory (LSTM) model is exploited for deception detection and classification. The experimental validation of the SAPFF-DLADD model is tested using the MU3D, a database comprising two classes, namely, truth and deception.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>The Proposed SAPFF-DLADD Model</title>
<p>This study establishes a new SAPFF-DLADD approach to identify deception from facial cues. In the first stage, the input video is separated into a set of video frames. Then, the SAPFF-DLADD model applies a MobileNet-based feature extractor to produce a useful set of features. For deception detection and classification, the LSTM model is exploited. In the final stage, the SAPFF technique is applied to alter the LSTM model&#x2019;s hyperparameter values optimally.</p>
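<p>The first step above, separating the input video into a set of frames, can be illustrated with a minimal sketch. The paper does not specify its frame-selection scheme, so the uniform-spacing strategy and the helper name <monospace>frame_indices</monospace> below are assumptions for illustration only.</p>

```python
def frame_indices(total_frames, n_samples):
    """Return n_samples evenly spaced frame indices for a clip of total_frames.

    Hypothetical helper: the SAPFF-DLADD paper does not state how frames are
    selected, so uniform sampling is assumed here purely for illustration.
    """
    if n_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / n_samples            # fractional stride between picks
    return [int(i * step) for i in range(n_samples)]

# e.g., pick 10 frames from a 100-frame clip before feature extraction
indices = frame_indices(100, 10)
```

<p>The selected frames would then be passed to the MobileNet feature extractor described in Section 2.1.</p>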
<sec id="s2_1">
<label>2.1</label>
<title>Feature Extractor</title>
<p>In this study, the SAPFF-DLADD model applies a MobileNet-based feature extractor to produce useful features. MobileNet is a network model that uses depthwise separable convolution as its elementary component [<xref ref-type="bibr" rid="ref-16">16</xref>], consisting of a depthwise convolution followed by a pointwise convolution. Dense-MobileNet models treat the depthwise convolution layer and the pointwise convolution layer as two individual convolution layers. That is, the input feature map of every depthwise convolutional layer in the dense block is the superposition of the output feature maps of the preceding convolutional layers. To tune the hyperparameters of the MobileNet approach, root mean square propagation (RMSProp) optimization is utilized. RMSProp is an adaptive learning-rate method that improves on AdaGrad by using an exponential moving average of squared gradients in place of AdaGrad&#x2019;s cumulative sum.</p>
<p><disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>&#x03B5;</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:msup></mml:mrow></mml:mfrac><mml:mo>&#x2217;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x03B4;</mml:mi><mml:mi>L</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x03B4;</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>]</mml:mo></mml:mrow></mml:math></disp-formula>where</p>
<p><disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2217;</mml:mo><mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x03B4;</mml:mi><mml:mi>L</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x03B4;</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p><inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the weight at time <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mi>t</mml:mi></mml:math></inline-formula>.</p>
<p><inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> represents the weight at time <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula>.</p>
<p><inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> stands for the learning rate at time <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mi>t</mml:mi></mml:math></inline-formula>.</p>
<p><inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi>&#x03B4;</mml:mi><mml:mi>L</mml:mi></mml:math></inline-formula> signifies the derivative of the loss function.</p>
<p><inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>&#x03B4;</mml:mi><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> denotes the derivative with respect to the weight at time <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>t</mml:mi></mml:math></inline-formula>.</p>
<p><inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> indicates the exponential moving average of the squares of past gradients.</p>
<p><inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> depicts the moving-average parameter (constant, typically 0.9).</p>
<p><inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>&#x03B5;</mml:mi></mml:math></inline-formula> refers to a small positive constant <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mo stretchy="false">(</mml:mo><mml:msup><mml:mn>10</mml:mn><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>8</mml:mn></mml:mrow></mml:msup><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>.</p>
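<p>The RMSProp update of Eqs. (1) and (2) can be sketched for a single scalar weight as follows; the learning rate, toy loss, and iteration count are illustrative choices, not values from the paper.</p>

```python
import math

def rmsprop_step(w, grad, v, alpha=0.05, beta=0.9, eps=1e-8):
    """One RMSProp update for a single weight, following Eqs. (1)-(2)."""
    v = beta * v + (1 - beta) * grad ** 2        # Eq. (2): moving average of squared gradients
    w = w - alpha * grad / math.sqrt(v + eps)    # Eq. (1): gradient step scaled by sqrt(v)
    return w, v

# Toy loss L(w) = (w - 3)^2 with gradient dL/dw = 2(w - 3)
w, v = 0.0, 0.0
for _ in range(2000):
    w, v = rmsprop_step(w, 2 * (w - 3), v)
# w now oscillates tightly around the minimizer w = 3
```

<p>Because the step size is normalized by the running gradient magnitude, the iterate settles into a small band around the minimizer rather than converging exactly, which is the characteristic RMSProp behaviour.</p>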
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Deception Classification Using the LSTM Model</title>
<p>LSTM is utilized for deception detection and classification. The LSTM model is a variant of the recurrent neural network (RNN) model that substitutes the hidden-state computation with distinct gate functions [<xref ref-type="bibr" rid="ref-17">17</xref>]. This design allows the LSTM network to capture long-term dependencies in temporal data. The operation of the LSTM mechanism is demonstrated in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>. In contrast to traditional RNNs, the LSTM network introduces an additional information flow, the cell state <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:msub><mml:mi>m</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula>. LSTMs can remove or add data to the cell state, which preserves memory in the LSTM network. The three gates that control the data stream in LSTMs are <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:msub><mml:mi>o</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> (the output gate), <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> (the input gate), and <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> (the forget gate). The input gate controls how much information from the current input <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and the preceding hidden state <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> enters the current cell state. The forget gate controls how much information from the previous cell state <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:msub><mml:mi>m</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is preserved. The output gate controls how much information is passed to the current hidden state <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. The operation of these gates is given in the following:</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Structure of an LSTM</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-1.tif"/>
</fig>
<p><disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p><disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p><disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:msub><mml:mi>o</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>From these expressions, the variables <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msup><mml:mo>;</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msup><mml:mo>;</mml:mo><mml:mtext>&#x00A0;and&#x00A0;</mml:mtext><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>; <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula> represents the sigmoid function. Next, the cell and hidden states are obtained as follows.</p>
<p><disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:msub><mml:mi>g</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>tanh</mml:mi><mml:mspace width="thinmathspace" /><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p><disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:msub><mml:mi>m</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2218;</mml:mo><mml:msub><mml:mi>m</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2218;</mml:mo><mml:msub><mml:mi>g</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p><disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>o</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2218;</mml:mo><mml:mi>tanh</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>m</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>In these equations, <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>p</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:mrow><mml:mtext>and</mml:mtext></mml:mrow><mml:mtext>&#x00A0;</mml:mtext><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>; <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mi>tanh</mml:mi></mml:math></inline-formula> represents the hyperbolic tangent function; and <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mo>&#x2218;</mml:mo></mml:math></inline-formula> denotes component-wise multiplication. LSTM training is performed by backpropagation through time (BPTT), which minimizes the objective function over the training sequences. The gradients of the weights and biases are evaluated at each time step.</p>
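<p>The gate and state computations of Eqs. (3)&#x2013;(8) can be traced with a minimal sketch for a scalar input and one-dimensional state; the parameter values in the example are arbitrary placeholders, not trained weights.</p>

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, s_prev, m_prev, p):
    """One LSTM time step for scalar input and one-dimensional state,
    following Eqs. (3)-(8); p maps parameter names to scalar values."""
    i_t = sigmoid(p["Vi"] * x_t + p["Wi"] * s_prev + p["bi"])    # Eq. (3), input gate
    f_t = sigmoid(p["Vf"] * x_t + p["Wf"] * s_prev + p["bf"])    # Eq. (4), forget gate
    o_t = sigmoid(p["Vo"] * x_t + p["Wo"] * s_prev + p["bo"])    # Eq. (5), output gate
    g_t = math.tanh(p["Vm"] * x_t + p["Wm"] * s_prev + p["bm"])  # Eq. (6), candidate
    m_t = f_t * m_prev + i_t * g_t                               # Eq. (7), cell state
    s_t = o_t * math.tanh(m_t)                                   # Eq. (8), hidden state
    return s_t, m_t

# Placeholder parameters (not trained values): all weights 0.5, all biases 0
p = {k: 0.5 for k in ("Vi", "Wi", "Vf", "Wf", "Vo", "Wo", "Vm", "Wm")}
p.update({k: 0.0 for k in ("bi", "bf", "bo", "bm")})
s, m = 0.0, 0.0
for x in (1.0, 0.5, -1.0):   # a toy feature sequence
    s, m = lstm_step(x, s, m, p)
```

<p>In the SAPFF-DLADD model, the scalar input here would correspond to one component of the MobileNet feature vector for a frame, and the gates would operate on full vectors rather than scalars.</p>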
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Hyperparameter Tuning Using the SAPFF-DLADD Model</title>
<p>In the final stage, the SAPFF technique is applied to alter the LSTM model&#x2019;s hyperparameter values optimally. The FF approach is a nature-inspired metaheuristic algorithm based on the social (flashing) behaviour of lightning bugs, or fireflies, found in tropical and temperate regions. Insects, fish, and birds can exhibit swarming behaviour [<xref ref-type="bibr" rid="ref-18">18</xref>]. The FF approach shares several features with other swarm-based approaches, but the FF concept is easier to understand and implement. Recent studies have shown that the approach is very efficient and can outperform conventional methodologies, such as genetic algorithms (GA), on different optimization problems. Its key advantage is that it mainly employs random real numbers and depends on global communication among the swarming particles (fireflies). As a result, it is very effective for multiobjective optimization tasks such as Web service (WS) composition plan generation. <xref ref-type="fig" rid="fig-2">Fig. 2</xref> illustrates the flowchart of the FF algorithm.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Flowchart of the FF algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-2.tif"/>
</fig>
<p>The FF method comprises three guidelines based on the idealized flashing behaviour of real fireflies. (1) Each firefly&#x2019;s light intensity or brightness is related to the objective function of the presented problem. (2) Each firefly is unisex; therefore, attraction is based only on brightness. (3) The attractiveness of a firefly corresponds to its brightness and decreases with increasing distance from other fireflies since air absorbs light. If no firefly is brighter than a given firefly, that firefly moves randomly.</p>
<p>Furthermore, the brightness decreases with distance due to the inverse square law, as shown below.</p>
<p><disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mi>I</mml:mi><mml:mo>&#x221D;</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:msup><mml:mi>r</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mfrac></mml:math></disp-formula></p>
<p>Once the reduction in light intensity from traversing a medium with light absorption coefficient <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mi>&#x03B3;</mml:mi></mml:math></inline-formula> is taken into account, the light intensity at distance <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mi>r</mml:mi></mml:math></inline-formula> from the source is given as</p>
<p><disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mrow><mml:msup><mml:mi>r</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p><inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> denotes the light intensity at the source. Similarly, the brightness, <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula>, is given by</p>
<p><disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mi>&#x03B2;</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mrow><mml:msup><mml:mi>r</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p>The generalized reduction function for any constant <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mi>&#x03C9;</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> is shown below.</p>
<p><disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:mi>&#x03B2;</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mrow><mml:msup><mml:mi>r</mml:mi><mml:mrow><mml:mi>&#x03C9;</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup></mml:math></disp-formula></p>
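As a concrete illustration, the attractiveness law of Eqs. (11) and (12) can be sketched in Python. This is a hypothetical helper, not code from the original work; the parameter names `beta0`, `gamma`, and `omega` follow the symbols defined above.

```python
import math

def attractiveness(beta0: float, gamma: float, r: float, omega: float = 2.0) -> float:
    """Attractiveness beta seen at distance r, Eq. (12).

    beta0: attractiveness at r = 0; gamma: light absorption coefficient;
    omega = 2 recovers the Gaussian form of Eq. (11).
    """
    return beta0 * math.exp(-gamma * r ** omega)
```

At `r = 0` the function returns `beta0`, and it decays monotonically as `r` grows, matching the absorption behaviour described above.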
<p>In the FF technique, each arbitrarily generated feasible solution is assigned a brightness according to its quality, so each firefly&#x2019;s brightness is directly proportional to the quality of the solution at its position. Once brightness is assigned, every firefly follows the firefly of optimal brightness in its neighbourhood; brightness thus serves as the local search signal. Hence, for two fireflies, <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mi>F</mml:mi><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mi>F</mml:mi><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, with <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mi>F</mml:mi><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> brighter than <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mi>F</mml:mi><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mi>F</mml:mi><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> moves toward <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mi>F</mml:mi><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. The position of <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mi>F</mml:mi><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is updated with the following formula:</p>
<p><disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x003A;</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:munder><mml:mrow><mml:munder><mml:mrow><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x23DF;</mml:mo></mml:munder></mml:mrow><mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03B2;</mml:mi></mml:mrow></mml:munder><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>0.5</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-13">Eq. (13)</xref>, <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> indicates the attractiveness of <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> at <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula> and is recommended to be set to <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:msub><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> in implementations; <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mi>&#x03B3;</mml:mi></mml:math></inline-formula> is the parameter that determines how strongly the method depends on the squared distance between two fireflies; <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> denotes the step-length parameter of the random move; and <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> denotes a random vector drawn uniformly from the range zero to one. The brightest firefly has no brighter firefly to follow and instead performs a random walk:</p>
<p><disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mo>&#x003A;</mml:mo><mml:mo>=</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mtext>a</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B5;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>0.5</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
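The two update rules of Eqs. (13) and (14) can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; the function names and the default step length `alpha = 0.2` are assumptions.

```python
import math
import random

def move_towards(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i toward a brighter firefly j, Eq. (13)."""
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))      # squared distance r_ij^2
    beta = beta0 * math.exp(-gamma * r2)                  # attractiveness term of Eq. (11)
    # attraction step plus a small uniform random perturbation, epsilon() - 0.5
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(x_i, x_j)]

def random_walk(x_b, alpha=0.2):
    """Random walk of the brightest firefly, Eq. (14)."""
    return [a + alpha * (random.random() - 0.5) for a in x_b]
```

With `gamma = 0` the attraction term reduces to a full step onto the brighter firefly; larger `gamma` shrinks the step exponentially with squared distance.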
<fig id="fig-13">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-13.tif"/>
</fig>
<p>The location of each firefly is updated iteratively until one of the ending criteria is satisfied: reaching the maximum iteration count, attaining a tolerance on the best value when that value can be anticipated, or observing no improvement over successive iterations. To improve the performance of the FF algorithm, the SAPFF is derived. Like other metaheuristic optimization techniques, the FF model is population-based, and its optimization procedure begins with the creation of an initial population; it therefore requires a control parameter that determines the population size. However, selecting the population size is a complex and challenging task. The self-adaptive population method alters the population size at every iteration; because this feature adjusts the size automatically, the user is not required to set it. Initially, the primary population size is determined by:</p>
<p><disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:mrow><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>p</mml:mi><mml:mi>S</mml:mi><mml:mi>i</mml:mi><mml:mi>z</mml:mi><mml:mi>e</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mn>10</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mi>d</mml:mi></mml:math></disp-formula>where <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mi>d</mml:mi></mml:math></inline-formula> signifies the dimensionality of the problem. In the SAPFF approach, the novel population size is determined by:</p>
<p><disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:mrow><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>p</mml:mi><mml:mi>S</mml:mi><mml:mi>i</mml:mi><mml:mi>z</mml:mi></mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mi>d</mml:mi><mml:mo>,</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>p</mml:mi><mml:mi>S</mml:mi><mml:mi>i</mml:mi><mml:mi>z</mml:mi><mml:mi>e</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mi>r</mml:mi><mml:mo>&#x2217;</mml:mo><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>p</mml:mi><mml:mi>S</mml:mi><mml:mi>i</mml:mi><mml:mi>z</mml:mi><mml:mi>e</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:mi>r</mml:mi></mml:math></inline-formula> signifies a random number between &#x2212;0.5 and 0.5. If the population size for the following iteration is greater than the population size of the preceding iteration <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi mathvariant="italic">P</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">S</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub><mml:mo>&#x003E;</mml:mo><mml:mrow><mml:mi mathvariant="italic">P</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">S</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">z</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, each member of the current population is retained, and the additional members are generated by the elitism method; consequently, the optimal solution obtained in the preceding iteration is reused. If the new population size is less than the population size of the preceding iteration <inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi mathvariant="italic">P</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">S</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub><mml:mo>&#x003C;</mml:mo><mml:mrow><mml:mi mathvariant="italic">P</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">S</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">z</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, the best members of the existing population are kept, and the worst members are removed. If the population size does not change <inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>p</mml:mi><mml:mi>S</mml:mi><mml:mi>i</mml:mi><mml:mi>z</mml:mi></mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> &#x003D; PopSize), the population remains as it is. Finally, if the new population size falls below the dimensionality of the problem <inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi mathvariant="italic">P</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">S</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">z</mml:mi></mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>w</mml:mi></mml:mrow></mml:msub><mml:mo>&#x003C;</mml:mo><mml:mi>d</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, the population size is set equal to the dimensionality of the problem.</p>
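The self-adaptive sizing rule of Eqs. (15) and (16) can be sketched as below. This is an illustrative Python fragment under stated assumptions: the function names are hypothetical, and Python's built-in `round` stands in for the rounding operator of Eq. (16).

```python
import random

def initial_pop_size(d: int) -> int:
    """Primary population size, Eq. (15): ten fireflies per dimension."""
    return 10 * d

def next_pop_size(pop_size: int, d: int) -> int:
    """Self-adaptive population size for the next iteration, Eq. (16).

    r is drawn uniformly from [-0.5, 0.5]; the max() guard keeps the
    size from falling below the problem dimensionality d.
    """
    r = random.uniform(-0.5, 0.5)
    return max(d, round(pop_size + r * pop_size))
```

Depending on the sign of `r`, up to half of the population is added (filled in by elitism) or removed (worst members discarded), as described above.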
<p>The SAPFF system derives a fitness function to achieve maximal classifier performance. Here, the classification error rate, which is to be minimized, serves as the fitness function given in <xref ref-type="disp-formula" rid="eqn-17">Eq. (17)</xref>.</p>
<p><disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:mrow><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mi>C</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>E</mml:mi><mml:mi>r</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>R</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi></mml:mrow><mml:mtext>&#x00A0;</mml:mtext><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mtext>&#x00A0;</mml:mtext><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi></mml:mrow><mml:mtext>&#x00A0;</mml:mtext><mml:mrow><mml:mi>s</mml:mi><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>o</mml:mi><mml:mi>t</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:mrow><mml:mtext>&#x00A0;</mml:mtext><mml:mrow><mml:mi>n</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi></mml:mrow><mml:mtext>&#x00A0;</mml:mtext><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mtext>&#x00A0;</mml:mtext><mml:mrow><mml:mi>s</mml:mi><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>p</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:mrow></mml:mfrac><mml:mo>&#x2217;</mml:mo><mml:mn>100</mml:mn></mml:math></disp-formula></p>
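The fitness of Eq. (17) is simply the percentage of misclassified samples; a minimal sketch follows (the helper name `fitness` is hypothetical, and label lists stand in for the candidate solution's predictions).

```python
def fitness(y_true, y_pred) -> float:
    """Classifier error rate in percent, Eq. (17); lower is better."""
    misclassified = sum(t != p for t, p in zip(y_true, y_pred))
    return misclassified / len(y_true) * 100
```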
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Results and Discussion</title>
<p>The experimental validation of the SAPFF-DLADD model is tested using the MU3D [<xref ref-type="bibr" rid="ref-19">19</xref>], a database that comprises data samples of two classes. <xref ref-type="table" rid="table-1">Table 1</xref> shows a detailed description of the dataset.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Dataset details</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Class</th>
<th>No. of images</th>
</tr>
</thead>
<tbody>
<tr>
<td>Truth</td>
<td>2000</td>
</tr>
<tr>
<td>Deception</td>
<td>2000</td>
</tr>
<tr>
<td>Total Number of Images</td>
<td>4000</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-3">Fig. 3</xref> depicts the confusion matrices formed by the SAPFF-DLADD model, with 90% of the data as training (TR) and 10% as testing (TS) data. With the 90% TR data, the SAPFF-DLADD model recognized 1737 samples in the truth class and 1755 in the deception class. Likewise, with the 10% TS data, the SAPFF-DLADD approach recognized 193 samples in the truth class and 199 in the deception class.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Confusion matrices of the SAPFF-DLADD approach: (a) 90% training data and (b) 10% test data</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-3.tif"/>
</fig>
<p><xref ref-type="table" rid="table-2">Table 2</xref> and <xref ref-type="fig" rid="fig-4">Fig. 4</xref> report the comparative classifier results of the SAPFF-DLADD model on the 90% TR data and 10% TS data. The results indicate that the SAPFF-DLADD model reached enhanced results in both cases. For instance, with the 90% TR data, the SAPFF-DLADD model attained an average <inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97%, <inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97%, <inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 97%, <inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97%, and <inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:mi>A</mml:mi><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 97%. 
Additionally, with the 10% TS data, the SAPFF-DLADD algorithm reached an average <inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98%, <inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98%, <inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98%, <inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98%, and <inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:mi>A</mml:mi><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98%.</p>
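For reference, the per-class measures reported in Tables 2&#x2013;4 can be computed from binary confusion-matrix counts as follows. This is an illustrative sketch; the function name and the counts used in the assertions are hypothetical, not values taken from Fig. 3.

```python
def binary_metrics(tp: int, fn: int, fp: int, tn: int):
    """Accuracy, recall, specificity, and F-score (all in percent)
    for one class of a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fn + fp + tn) * 100
    recall = tp / (tp + fn) * 100            # a.k.a. sensitivity
    specificity = tn / (tn + fp) * 100
    precision = tp / (tp + fp) * 100
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, recall, specificity, f_score
```

Swapping the roles of the two classes exchanges recall and specificity, which is why the truth and deception rows in the tables mirror each other.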
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Result analysis of the SAPFF-DLADD approach with various measures on 90:10 TR/TS data</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="center" colspan="6">Training/Testing (90:10)</th>
</tr>
<tr>
<th>Labels</th>
<th>Accuracy</th>
<th>Recall</th>
<th>Specificity</th>
<th>F-score</th>
<th>AUC score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan="6">Training phase</td>
</tr>
<tr>
<td>Truth</td>
<td>97.00</td>
<td>96.34</td>
<td>97.66</td>
<td>96.98</td>
<td>97.00</td>
</tr>
<tr>
<td>Deception</td>
<td>97.00</td>
<td>97.66</td>
<td>96.34</td>
<td>97.01</td>
<td>97.00</td>
</tr>
<tr>
<td>Average</td>
<td>97.00</td>
<td>97.00</td>
<td>97.00</td>
<td>97.00</td>
<td>97.00</td>
</tr>
<tr>
<td align="center" colspan="6">Testing phase</td>
</tr>
<tr>
<td>Truth</td>
<td>98.00</td>
<td>97.97</td>
<td>98.03</td>
<td>97.97</td>
<td>98.00</td>
</tr>
<tr>
<td>Deception</td>
<td>98.00</td>
<td>98.03</td>
<td>97.97</td>
<td>98.03</td>
<td>98.00</td>
</tr>
<tr>
<td><bold>Average</bold></td>
<td><bold>98.00</bold></td>
<td><bold>98.00</bold></td>
<td><bold>98.00</bold></td>
<td><bold>98.00</bold></td>
<td><bold>98.00</bold></td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Result analysis of the SAPFF-DLADD approach for 90:10 TR/TS data</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-4.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig-5">Fig. 5</xref> illustrates the confusion matrices formed by the SAPFF-DLADD approach, with 80% of the data as TR data and 20% as TS data. With the 80% TR data, the SAPFF-DLADD system recognized 1583 samples in the truth class and 1597 samples in the deception class. Similarly, with the 20% TS data, the SAPFF-DLADD algorithm recognized 396 samples in the truth class and 396 samples in the deception class.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Confusion matrices of the SAPFF-DLADD approach: (a) 80% TR data and (b) 20% TS data</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-5.tif"/>
</fig>
<p><xref ref-type="table" rid="table-3">Table 3</xref> and <xref ref-type="fig" rid="fig-6">Fig. 6</xref> report the comparative classifier outcome of the SAPFF-DLADD approach on the 80% TR data and 20% TS data. The outcome indicates that the SAPFF-DLADD technique achieved better results on both sets. For example, with the 80% TR data, the SAPFF-DLADD system attained an average <inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.38%, <inline-formula id="ieqn-76"><mml:math id="mml-ieqn-76"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.37%, <inline-formula id="ieqn-77"><mml:math id="mml-ieqn-77"><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99.37%, <inline-formula id="ieqn-78"><mml:math id="mml-ieqn-78"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 99.37%, and <inline-formula id="ieqn-79"><mml:math id="mml-ieqn-79"><mml:mi>A</mml:mi><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 99.37%. 
In addition, with the 20% TS data, the SAPFF-DLADD methodology reached an average <inline-formula id="ieqn-80"><mml:math id="mml-ieqn-80"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99%, <inline-formula id="ieqn-81"><mml:math id="mml-ieqn-81"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99%, <inline-formula id="ieqn-82"><mml:math id="mml-ieqn-82"><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99%, <inline-formula id="ieqn-83"><mml:math id="mml-ieqn-83"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 99%, and <inline-formula id="ieqn-84"><mml:math id="mml-ieqn-84"><mml:mi>A</mml:mi><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 99%.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Result analysis of the SAPFF-DLADD approach with various measures on 80:20 TR/TS data</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="center" colspan="6">Training/Testing (80:20)</th>
</tr>
<tr>
<th>Labels</th>
<th>Accuracy</th>
<th>Recall</th>
<th>Specificity</th>
<th>F-score</th>
<th>AUC score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan="6">Training phase</td>
</tr>
<tr>
<td>Truth</td>
<td>99.38</td>
<td>99.06</td>
<td>99.69</td>
<td>99.37</td>
<td>99.37</td>
</tr>
<tr>
<td>Deception</td>
<td>99.38</td>
<td>99.69</td>
<td>99.06</td>
<td>99.38</td>
<td>99.37</td>
</tr>
<tr>
<td><bold>Average</bold></td>
<td><bold>99.38</bold></td>
<td><bold>99.38</bold></td>
<td><bold>99.38</bold></td>
<td><bold>99.38</bold></td>
<td><bold>99.38</bold></td>
</tr>
<tr>
<td align="center" colspan="6">Testing phase</td>
</tr>
<tr>
<td>Truth</td>
<td>99.00</td>
<td>98.51</td>
<td>99.50</td>
<td>99.00</td>
<td>99.00</td>
</tr>
<tr>
<td>Deception</td>
<td>99.00</td>
<td>99.50</td>
<td>98.51</td>
<td>99.00</td>
<td>99.00</td>
</tr>
<tr>
<td>Average</td>
<td>99.00</td>
<td>99.00</td>
<td>99.00</td>
<td>99.00</td>
<td>99.00</td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Result analysis of the SAPFF-DLADD approach for 80:20 TR/TS data</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-6.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig-7">Fig. 7</xref> illustrates the confusion matrices formed by the SAPFF-DLADD technique using 70% of the data as TR data and 30% of the data as TS data. With the 70% TR data, the SAPFF-DLADD approach recognized 1387 samples in the truth class and 1382 in the deception class. At the same time, with the 30% TS data, the SAPFF-DLADD system recognized 595 samples in the truth class and 584 samples in the deception class.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Confusion matrices of the SAPFF-DLADD approach: (a) 70% TR data and (b) 30% TS data</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-7.tif"/>
</fig>
<p><xref ref-type="table" rid="table-4">Table 4</xref> and <xref ref-type="fig" rid="fig-8">Fig. 8</xref> depict the comparative classifier outcome of the SAPFF-DLADD algorithm on the 70% TR data and 30% TS data. The outcomes indicate that the SAPFF-DLADD system attained superior results on both sets. With the 70% TR data, the SAPFF-DLADD algorithm attained an average <inline-formula id="ieqn-85"><mml:math id="mml-ieqn-85"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.89%, <inline-formula id="ieqn-86"><mml:math id="mml-ieqn-86"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.89%, <inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.89%, <inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.89%, and <inline-formula id="ieqn-89"><mml:math id="mml-ieqn-89"><mml:mi>A</mml:mi><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.89%. 
With the 30% TS data, the SAPFF-DLADD methodology attained an average <inline-formula id="ieqn-90"><mml:math id="mml-ieqn-90"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.25%, <inline-formula id="ieqn-91"><mml:math id="mml-ieqn-91"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.25%, <inline-formula id="ieqn-92"><mml:math id="mml-ieqn-92"><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.25%, <inline-formula id="ieqn-93"><mml:math id="mml-ieqn-93"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.25%, and <inline-formula id="ieqn-94"><mml:math id="mml-ieqn-94"><mml:mi>A</mml:mi><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.25%.</p>
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Result analysis of the SAPFF-DLADD approach with various measures on 70:30 TR/TS data</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="center" colspan="6">Training/Testing (70:30)</th>
</tr>
<tr>
<th>Labels</th>
<th>Accuracy</th>
<th>Recall</th>
<th>Specificity</th>
<th>F-score</th>
<th>AUC score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan="6">Training phase</td>
</tr>
<tr>
<td>Truth</td>
<td>98.89</td>
<td>99.07</td>
<td>98.71</td>
<td>98.89</td>
<td>98.89</td>
</tr>
<tr>
<td>Deception</td>
<td>98.89</td>
<td>98.71</td>
<td>99.07</td>
<td>98.89</td>
<td>98.89</td>
</tr>
<tr>
<td><bold>Average</bold></td>
<td><bold>98.89</bold></td>
<td><bold>98.89</bold></td>
<td><bold>98.89</bold></td>
<td><bold>98.89</bold></td>
<td><bold>98.89</bold></td>
</tr>
<tr>
<td align="center" colspan="6">Testing phase</td>
</tr>
<tr>
<td>Truth</td>
<td>98.25</td>
<td>99.17</td>
<td>97.33</td>
<td>98.27</td>
<td>98.25</td>
</tr>
<tr>
<td>Deception</td>
<td>98.25</td>
<td>97.33</td>
<td>99.17</td>
<td>98.23</td>
<td>98.25</td>
</tr>
<tr>
<td><bold>Average</bold></td>
<td><bold>98.25</bold></td>
<td><bold>98.25</bold></td>
<td><bold>98.25</bold></td>
<td><bold>98.25</bold></td>
<td><bold>98.25</bold></td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Result analysis of the SAPFF-DLADD approach for 70:30 TR/TS data</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-8.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig-9">Fig. 9</xref> shows the confusion matrices formed by the SAPFF-DLADD algorithm using 60% of the data as TR and 40% as TS data. With the 60% TR data, the SAPFF-DLADD technique recognized 1197 samples in the truth class and 1159 samples in the deception class. Moreover, with the 40% TS data, the SAPFF-DLADD methodology recognized 778 samples in the truth class and 798 samples in the deception class.</p>
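For clarity, the per-class measures reported in Tables 4 and 5 can be reproduced from a two-class confusion matrix such as those in Fig. 9. The sketch below is illustrative only: the cell counts passed to it are hypothetical stand-ins, not the exact off-diagonal values of the MU3D experiments.

```python
def binary_metrics(tp, fn, fp, tn):
    """Per-class measures for a two-class (truth vs. deception) confusion matrix.

    The 'positive' class is the one whose recall is computed; specificity is
    the recall of the opposite class, so swapping classes swaps the two values,
    as seen in the Truth/Deception rows of Tables 4 and 5.
    """
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    recall = tp / (tp + fn)               # sensitivity for the positive class
    specificity = tn / (tn + fp)          # recall of the negative class
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    auc = (recall + specificity) / 2      # balanced accuracy of a hard classifier
    return accuracy, recall, specificity, f_score, auc

# Hypothetical counts for illustration only (not the MU3D figures):
acc, rec, spec, f1, auc = binary_metrics(tp=778, fn=10, fp=12, tn=798)
```

Macro-averaging over the two classes makes the averaged recall and specificity coincide, which is why the "Average" rows of the tables repeat a single value across columns.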
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Confusion matrices of the SAPFF-DLADD approach: (a) 60% TR data and (b) 40% TS data</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-9.tif"/>
</fig>
<p><xref ref-type="table" rid="table-5">Table 5</xref> and <xref ref-type="fig" rid="fig-10">Fig. 10</xref> report the comparative classification results of the SAPFF-DLADD technique on the 60% TR data and 40% TS data. The results show that the SAPFF-DLADD approach achieved maximal outcomes on both sets. With the 60% TR data, the SAPFF-DLADD system attained an average <inline-formula id="ieqn-95"><mml:math id="mml-ieqn-95"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.17%, <inline-formula id="ieqn-96"><mml:math id="mml-ieqn-96"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.16%, <inline-formula id="ieqn-97"><mml:math id="mml-ieqn-97"><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.16%, <inline-formula id="ieqn-98"><mml:math id="mml-ieqn-98"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.17%, and <inline-formula id="ieqn-99"><mml:math id="mml-ieqn-99"><mml:mi>A</mml:mi><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.16%. 
In addition, with the 40% TS data, the SAPFF-DLADD methodology obtained an average <inline-formula id="ieqn-100"><mml:math id="mml-ieqn-100"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.50%, <inline-formula id="ieqn-101"><mml:math id="mml-ieqn-101"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.52%, <inline-formula id="ieqn-102"><mml:math id="mml-ieqn-102"><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:msub><mml:mi>c</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 98.52%, <inline-formula id="ieqn-103"><mml:math id="mml-ieqn-103"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.50%, and <inline-formula id="ieqn-104"><mml:math id="mml-ieqn-104"><mml:mi>A</mml:mi><mml:mi>U</mml:mi><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> of 98.52%.</p>
<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>Result analysis of the SAPFF-DLADD approach with various measures on 60:40 TR/TS data</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th align="center" colspan="6">Training/Testing (60:40)</th>
</tr>
<tr>
<th>Labels</th>
<th>Accuracy</th>
<th>Recall</th>
<th>Specificity</th>
<th>F-score</th>
<th>AUC score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan="6">Training phase</td>
</tr>
<tr>
<td>Truth</td>
<td>98.17</td>
<td>98.36</td>
<td>97.97</td>
<td>98.20</td>
<td>98.16</td>
</tr>
<tr>
<td>Deception</td>
<td>98.17</td>
<td>97.97</td>
<td>98.36</td>
<td>98.14</td>
<td>98.16</td>
</tr>
<tr>
<td><bold>Average</bold></td>
<td><bold>98.17</bold></td>
<td><bold>98.16</bold></td>
<td><bold>98.16</bold></td>
<td><bold>98.17</bold></td>
<td><bold>98.16</bold></td>
</tr>
<tr>
<td align="center" colspan="6">Testing phase</td>
</tr>
<tr>
<td>Truth</td>
<td>98.50</td>
<td>99.36</td>
<td>97.67</td>
<td>98.48</td>
<td>98.52</td>
</tr>
<tr>
<td>Deception</td>
<td>98.50</td>
<td>97.67</td>
<td>99.36</td>
<td>98.52</td>
<td>98.52</td>
</tr>
<tr>
<td><bold>Average</bold></td>
<td><bold>98.50</bold></td>
<td><bold>98.50</bold></td>
<td><bold>98.50</bold></td>
<td><bold>98.50</bold></td>
<td><bold>98.50</bold></td>
</tr>
</tbody>
</table>
</table-wrap><fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Result analysis of the SAPFF-DLADD approach for 60:40 TR/TS data</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-10.tif"/>
</fig>
<p>The training accuracy (TA) and validation accuracy (VA) achieved by the SAPFF-DLADD methodology on the test dataset are illustrated in <xref ref-type="fig" rid="fig-11">Fig. 11</xref>. The experimental outcome reveals that the SAPFF-DLADD technique attained higher values of TA and VA. In particular, VA outperformed TA.</p>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>TA and VA analysis of the SAPFF-DLADD algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-11.tif"/>
</fig>
<p>The training loss (TL) and validation loss (VL) gained by the SAPFF-DLADD system on the test dataset are depicted in <xref ref-type="fig" rid="fig-12">Fig. 12</xref>. The experimental outcome reveals that the TL and VL values of the SAPFF-DLADD algorithm decreased steadily. Specifically, VL is less than TL.</p>
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>TL and VL analysis of the SAPFF-DLADD algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_35266-fig-12.tif"/>
</fig>
<p>To demonstrate the accuracy of the SAPFF-DLADD model, a brief comparative <inline-formula id="ieqn-105"><mml:math id="mml-ieqn-105"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> examination is made in <xref ref-type="table" rid="table-6">Table 6</xref> [<xref ref-type="bibr" rid="ref-20">20</xref>&#x2013;<xref ref-type="bibr" rid="ref-23">23</xref>]. The results imply that the deception in the eyes of the deceiver (DED) and identity unbiased deception detection (IUDD) models have poor performance with lower <inline-formula id="ieqn-106"><mml:math id="mml-ieqn-106"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> values of 77.76% and 67.98%, respectively. At the same time, the DLDMF model and LieNet demonstrate slightly improved <inline-formula id="ieqn-107"><mml:math id="mml-ieqn-107"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> values of 96.59% and 98.12%, respectively.</p>
<table-wrap id="table-6">
<label>Table 6</label>
<caption>
<title>Comparative analysis of the SAPFF-DLADD approach with existing methodologies</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Methods</th>
<th>Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>SAPFF-DLADD</bold></td>
<td><bold>99.00</bold></td>
</tr>
<tr>
<td>DLDMF [<xref ref-type="bibr" rid="ref-20">20</xref>]</td>
<td>96.59</td>
</tr>
<tr>
<td>IUDD Model [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td>77.76</td>
</tr>
<tr>
<td>DED Model [<xref ref-type="bibr" rid="ref-22">22</xref>]</td>
<td>67.98</td>
</tr>
<tr>
<td>LieNet [<xref ref-type="bibr" rid="ref-23">23</xref>]</td>
<td>98.12</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>However, the SAPFF-DLADD model results in a higher <inline-formula id="ieqn-108"><mml:math id="mml-ieqn-108"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:msub><mml:mi>u</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of 99%. Therefore, the experimental outcomes show that the SAPFF-DLADD technique attains maximal performance compared to other models.</p>
</sec>
<sec id="s4">
<label>4</label>
<title>Conclusion</title>
<p>This study presented a novel SAPFF-DLADD approach to identify deception from facial cues. First, the input video was separated into a set of video frames. Then, the SAPFF-DLADD model applied a MobileNet-based feature extractor to produce a useful set of features. For deception detection and classification, the LSTM model was exploited. In the final stage, the SAPFF technique was executed to tune the LSTM model&#x2019;s hyperparameter values optimally. The SAPFF-DLADD system was experimentally validated on the MU3D database, which comprises two classes, namely, truth and deception. The extensive comparative analysis reported better performance of the SAPFF-DLADD model compared to recent approaches, with a higher accuracy of 99%. In the future, an ensemble of DL-based fusion techniques will be designed to improve detection performance.</p>
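To illustrate the hyperparameter-tuning stage described above, the sketch below implements a plain firefly-style metaheuristic search, the family of methods underlying the hybrid SAPFF scheme (cf. the firefly algorithm of Ref. [18]). It is a minimal illustration, not the paper's exact procedure: the search space, the decaying random-step schedule, and the stand-in objective (a quadratic surrogate for validation loss over two hypothetical LSTM hyperparameters) are all assumptions for demonstration.

```python
import math
import random

def firefly_search(objective, bounds, n_fireflies=15, n_iter=60,
                   beta0=1.0, gamma=0.1, alpha=0.3, seed=7):
    """Minimise `objective` over a box-constrained space with a firefly algorithm.

    Brighter (lower-loss) fireflies attract dimmer ones; attractiveness decays
    with squared distance, and a shrinking random walk preserves exploration.
    The decaying step schedule stands in for the self-adaptive behaviour.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    light = [objective(x) for x in pop]
    for t in range(n_iter):
        step = alpha * (0.97 ** t)  # shrink the random walk over iterations
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:  # j is brighter: move i towards j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d, (lo, hi) in enumerate(bounds):
                        move = (beta * (pop[j][d] - pop[i][d])
                                + step * (rng.random() - 0.5) * (hi - lo))
                        pop[i][d] = min(hi, max(lo, pop[i][d] + move))
                    light[i] = objective(pop[i])
    best = min(range(n_fireflies), key=lambda k: light[k])
    return pop[best], light[best]

# Stand-in objective: a pretend validation loss over two hypothetical
# hyperparameters (log10 learning rate, dropout), minimised at (-3.0, 0.2).
loss = lambda x: (x[0] + 3.0) ** 2 + (x[1] - 0.2) ** 2
best_x, best_loss = firefly_search(loss, bounds=[(-5.0, -1.0), (0.0, 0.5)])
```

In a real tuning run, `objective` would train the MobileNet-LSTM model with the candidate hyperparameters and return its validation loss, making each firefly evaluation expensive; the population size and iteration budget would be chosen accordingly.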
</sec>
</body>
<back>
<ack>
<p>I would like to thank Prince Sattam Bin Abdulaziz University for supporting me during this research.</p>
</ack>
<sec><title>Funding Statement</title>
<p>The author received no specific funding for this study.</p>
</sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The author declares that they have no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Kottursamy</surname></string-name></person-group>, &#x201C;<article-title>A review on finding efficient approach to detect customer emotion analysis using deep learning analysis</article-title>,&#x201D; <source>Journal of Trends in Computer Science and Smart Technology</source>, vol. <volume>3</volume>, no. <issue>2</issue>, pp. <fpage>95</fpage>&#x2013;<lpage>113</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W. B.</given-names> <surname>Shahid</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Aslam</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Abbas</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Afzal</surname></string-name> and <string-name><given-names>S. B.</given-names> <surname>Khalid</surname></string-name></person-group>, &#x201C;<article-title>A deep learning assisted personalized deception system for countering web application attacks</article-title>,&#x201D; <source>Journal of Information Security and Applications</source>, vol. <volume>67</volume>, no. <issue>1</issue>, pp. <fpage>103169</fpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Meng</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Xiang</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Ma</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Sun</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Adversarial deception against SAR target recognition network</article-title>,&#x201D; <source>Journal of Selected Topics in Applied Earth Observations and Remote Sensing</source>, vol. <volume>15</volume>, pp. <fpage>4507</fpage>&#x2013;<lpage>4520</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Bingol</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Alatas</surname></string-name></person-group>, &#x201C;<article-title>Chaos enhanced intelligent optimization-based novel deception detection system</article-title>,&#x201D; <source>Chaos, Solitons &#x0026; Fractals</source>, vol. <volume>166</volume>, no. <issue>8</issue>, pp. <fpage>112896</fpage>, <year>2023</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Lv</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Facial expression recognition: A survey</article-title>,&#x201D; <source>Symmetry</source>, vol. <volume>11</volume>, no. <issue>10</issue>, pp. <fpage>1189</fpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Sonawane</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Sharma</surname></string-name></person-group>, &#x201C;<article-title>Review of automated emotion-based quantification of facial expression in Parkinson&#x2019;s patients</article-title>,&#x201D; <source>The Visual Computer</source>, vol. <volume>37</volume>, no. <issue>5</issue>, pp. <fpage>1151</fpage>&#x2013;<lpage>1167</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kaur</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Kumar</surname></string-name></person-group>, &#x201C;<article-title>Face detection techniques: A review</article-title>,&#x201D; <source>Artificial Intelligence Review</source>, vol. <volume>52</volume>, no. <issue>2</issue>, pp. <fpage>927</fpage>&#x2013;<lpage>948</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Leng</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>M. K.</given-names> <surname>Khan</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Alghathbar</surname></string-name></person-group>, &#x201C;<article-title>Two-directional two-dimensional random projection and its variations for face and palmprint recognition</article-title>,&#x201D; in <conf-name>Int. Conf. on Computational Science and Its Applications</conf-name>, <publisher-loc>Berlin, Heidelberg</publisher-loc>, <publisher-name>Springer</publisher-name>, pp. <fpage>458</fpage>&#x2013;<lpage>470</lpage>, <year>2021</year>. </mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Leng</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Jing</surname></string-name>, <string-name><given-names>M. K.</given-names> <surname>Khan</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Alghathbar</surname></string-name></person-group>, &#x201C;<article-title>Dynamic weighted discrimination power analysis in DCT domain for face and palmprint recognition</article-title>,&#x201D; in <conf-name>2010 Int. Conf. on Information and Communication Technology Convergence (ICTC)</conf-name>, <publisher-loc>Jeju</publisher-loc>, pp. <fpage>467</fpage>&#x2013;<lpage>471</lpage>, <year>2010</year>. </mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y. S.</given-names> <surname>Su</surname></string-name>, <string-name><given-names>H. Y.</given-names> <surname>Suen</surname></string-name> and <string-name><given-names>K. E.</given-names> <surname>Hung</surname></string-name></person-group>, &#x201C;<article-title>Predicting behavioral competencies automatically from facial expressions in real-time video-recorded interviews</article-title>,&#x201D; <source>Journal of Real-Time Image Processing</source>, vol. <volume>18</volume>, no. <issue>4</issue>, pp. <fpage>1011</fpage>&#x2013;<lpage>1021</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Minaee</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Minaei</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Abdolrashidi</surname></string-name></person-group>, &#x201C;<article-title>Deep-emotion: Facial expression recognition using attentional convolutional network</article-title>,&#x201D; <source>Sensors</source>, vol. <volume>21</volume>, no. <issue>9</issue>, pp. <fpage>3046</fpage>, <year>2021</year>; <pub-id pub-id-type="pmid">33925371</pub-id></mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Li</surname></string-name> and <string-name><given-names>W.</given-names> <surname>Deng</surname></string-name></person-group>, &#x201C;<article-title>Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>28</volume>, no. <issue>1</issue>, pp. <fpage>356</fpage>&#x2013;<lpage>370</lpage>, <year>2019</year>; <pub-id pub-id-type="pmid">30183631</pub-id></mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Xie</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Hu</surname></string-name></person-group>, &#x201C;<article-title>Facial expression recognition using hierarchical features with deep comprehensive multipatches aggregation convolutional neural networks</article-title>,&#x201D; <source>IEEE Transactions on Multimedia</source>, vol. <volume>21</volume>, no. <issue>1</issue>, pp. <fpage>211</fpage>&#x2013;<lpage>220</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Song</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Rong</surname></string-name></person-group>, &#x201C;<article-title>The influence of the activation function in a convolution neural network model of facial expression recognition</article-title>,&#x201D; <source>Applied Sciences</source>, vol. <volume>10</volume>, no. <issue>5</issue>, pp. <fpage>1897</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H. H.</given-names> <surname>Tsai</surname></string-name> and <string-name><given-names>Y. C.</given-names> <surname>Chang</surname></string-name></person-group>, &#x201C;<article-title>Facial expression recognition using a combination of multiple facial features and support vector machine</article-title>,&#x201D; <source>Soft Computing</source>, vol. <volume>22</volume>, no. <issue>13</issue>, pp. <fpage>4389</fpage>&#x2013;<lpage>4405</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Udayakumar</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Rajagopalan</surname></string-name></person-group>, &#x201C;<article-title>Blockchain enabled secure image transmission and diagnosis scheme in medical cyber-physical systems</article-title>,&#x201D; <source>Journal of Electronic Imaging</source>, vol. <volume>31</volume>, no. <issue>6</issue>, pp. <fpage>062002</fpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>X.</given-names> <surname>Yuan</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Sampling-interval-aware lstm for industrial process soft sensing of dynamic time sequences with irregular sampling measurements</article-title>,&#x201D; <source>IEEE Sensors Journal</source>, vol. <volume>21</volume>, no. <issue>9</issue>, pp. <fpage>10787</fpage>&#x2013;<lpage>10795</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Mandal</surname></string-name>, <string-name><given-names>A. U.</given-names> <surname>Haque</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Meng</surname></string-name>, <string-name><given-names>A. K.</given-names> <surname>Srivastava</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Martinez</surname></string-name></person-group>, &#x201C;<article-title>A novel hybrid approach using wavelet, firefly algorithm, and fuzzy ARTMAP for day-ahead electricity price forecasting</article-title>,&#x201D; <source>IEEE Transactions on Power Systems</source>, vol. <volume>28</volume>, no. <issue>2</issue>, pp. <fpage>1041</fpage>&#x2013;<lpage>1051</lpage>, <year>2013</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab>Miami University Deception Detection Database</collab></person-group>. [Online]. Available: <ext-link ext-link-type="uri" xlink:href="http://hdl.handle.net/2374.MIA/6067">http://hdl.handle.net/2374.MIA/6067</ext-link></mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Gogate</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Adeel</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Hussain</surname></string-name></person-group>, &#x201C;<article-title>Deep learning driven multimodal fusion for automated deception detection</article-title>,&#x201D; in <conf-name>2017 IEEE Symp. Series on Computational Intelligence (SSCI)</conf-name>, <publisher-loc>Honolulu, HI, United States</publisher-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <year>2017</year>. </mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L. M.</given-names> <surname>Ngo</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Mandira</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Karaoglu</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Bouma</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Identity unbiased deception detection by 2D-to-3D face reconstruction</article-title>,&#x201D; in <conf-name>2021 IEEE Winter Conf. on Applications of Computer Vision (WACV)</conf-name>, <publisher-loc>Waikoloa, HI, USA</publisher-loc>, pp. <fpage>145</fpage>&#x2013;<lpage>154</lpage>, <year>2021</year>. </mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Khan</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Crockett</surname></string-name>, <string-name><given-names>J.</given-names> <surname>O&#x2019;Shea</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Hussain</surname></string-name> and <string-name><given-names>B. M.</given-names> <surname>Khan</surname></string-name></person-group>, &#x201C;<article-title>Deception in the eyes of deceiver: A computer vision and machine learning based automated deception detection</article-title>,&#x201D; <source>Expert Systems with Applications</source>, vol. <volume>169</volume>, no. <issue>5</issue>, pp. <fpage>114341</fpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Karnati</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Seal</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Yazidi</surname></string-name> and <string-name><given-names>O.</given-names> <surname>Krejcar</surname></string-name></person-group>, &#x201C;<article-title>LieNet: A deep convolution neural networks framework for detecting deception</article-title>,&#x201D; <source>IEEE Transactions on Cognitive and Developmental Systems</source>, pp. <fpage>1</fpage>, <year>2021</year>. <pub-id pub-id-type="doi">10.1109/TCDS.2021.3086011</pub-id></mixed-citation></ref>
</ref-list>
</back>
</article>