<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CSSE</journal-id>
<journal-id journal-id-type="nlm-ta">CSSE</journal-id>
<journal-id journal-id-type="publisher-id">CSSE</journal-id>
<journal-title-group>
<journal-title>Computer Systems Science &#x0026; Engineering</journal-title>
</journal-title-group>
<issn pub-type="ppub">0267-6192</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">29603</article-id>
<article-id pub-id-type="doi">10.32604/csse.2023.029603</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Deep Learning with Natural Language Processing Enabled Sentimental Analysis on Sarcasm Classification</article-title><alt-title alt-title-type="left-running-head">Deep Learning with Natural Language Processing Enabled Sentimental Analysis on Sarcasm Classification</alt-title><alt-title alt-title-type="right-running-head">Deep Learning with Natural Language Processing Enabled Sentimental Analysis on Sarcasm Classification</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Sait</surname><given-names>Abdul Rahaman Wahab</given-names></name>
<xref ref-type="aff" rid="aff-1">1</xref><email>asait@kfu.edu.sa</email>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Ishak</surname><given-names>Mohamad Khairi</given-names></name>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<aff id="aff-1"><label>1</label><institution>Department of Documents and Archive, Center of Documents and Administrative Communication, King Faisal University</institution>, <addr-line>Al Hofuf, Al-Ahsa, 31982</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-2"><label>2</label><institution>School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM)</institution>, <addr-line>Nibong Tebal, Penang, 14300</addr-line>, <country>Malaysia</country></aff>
</contrib-group><author-notes><corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Abdul Rahaman Wahab Sait. Email: <email>asait@kfu.edu.sa</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2022-06-20"><day>20</day>
<month>06</month>
<year>2022</year></pub-date>
<volume>44</volume>
<issue>3</issue>
<fpage>2553</fpage>
<lpage>2567</lpage>
<history>
<date date-type="received"><day>07</day><month>03</month><year>2022</year></date>
<date date-type="accepted"><day>07</day><month>04</month><year>2022</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Sait and Ishak</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Sait and Ishak</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CSSE_29603.pdf"></self-uri>
<abstract>
<p>Sentiment analysis (SA) is the procedure of recognizing the emotions related to data that exists in social networking. The existence of sarcasm in textual data is a major challenge to the efficiency of SA. Earlier works on sarcasm detection in text utilize lexical as well as pragmatic cues, namely interjections, punctuation, and sentiment shifts, which are vital indicators of sarcasm. With the advent of deep learning, recent works leverage neural networks to learn lexical and contextual features, removing the need for handcrafted features. In this respect, this study designs a deep learning with natural language processing enabled SA (DLNLP-SA) technique for sarcasm classification. The proposed DLNLP-SA technique aims to detect and classify the occurrence of sarcasm in the input data. The DLNLP-SA technique comprises various sub-processes, namely preprocessing, feature vector conversion, and classification. Initially, pre-processing is performed in diverse ways such as single character removal, multi-space removal, URL removal, stopword removal, and tokenization. Secondly, the transformation into feature vectors takes place using the N-gram feature vector technique. Finally, a mayfly optimization (MFO) with multi-head self-attention based gated recurrent unit (MHSA-GRU) model is employed for the detection and classification of sarcasm. To verify the enhanced outcomes of the DLNLP-SA model, a comprehensive experimental investigation is performed on the News Headlines Dataset from the Kaggle Repository, and the results signified its superiority over existing approaches.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Sentiment analysis</kwd>
<kwd>sarcasm detection</kwd>
<kwd>deep learning</kwd>
<kwd>natural language processing</kwd>
<kwd>n-grams</kwd>
<kwd>hyperparameter tuning</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Sarcasm is a rhetorical way of expressing dislike or negative emotion through exaggerated language constructs. It is a form of mockery and false politeness that intensifies hostility without stating it plainly [<xref ref-type="bibr" rid="ref-1">1</xref>]. In face-to-face discussion, sarcasm is effortlessly identified through gestures, the tone of the speaker, and facial expressions. However, identifying sarcasm in written messages is not a trivial task, since none of these cues is easily accessible [<xref ref-type="bibr" rid="ref-2">2</xref>]. With the growth of the internet, sarcasm recognition in online communication from discussion forums, e-commerce websites, and social media platforms has become critical for sentiment analysis, opinion mining, and the identification of online trolls and cyberbullies [<xref ref-type="bibr" rid="ref-3">3</xref>]. The topic of sarcasm has gained considerable attention in fields from neuropsychology to linguistics [<xref ref-type="bibr" rid="ref-4">4</xref>]. In such approaches, features are handcrafted and may fail to generalize in the presence of figurative slang and the informal language that is extensively used in online conversation [<xref ref-type="bibr" rid="ref-5">5</xref>].</p>
<p>Sarcasm Detection (SD) on Twitter is modelled as a binary text classification process [<xref ref-type="bibr" rid="ref-6">6</xref>]. Detecting sarcasm in text is a significant task with implications for several fields, namely fitness, safety, and marketing. An SD method can assist companies in examining customer sentiment regarding their goods [<xref ref-type="bibr" rid="ref-7">7</xref>], which helps a company improve the quality of its products. In sentiment analysis, sentiment classification is the main subtask, particularly for categorizing tweets, which carry hidden information in the messages a person shares with others. The composition of a tweet can also be utilized for predicting sarcasm, and machine learning (ML) algorithms can produce effective outcomes in detecting it [<xref ref-type="bibr" rid="ref-8">8</xref>]. Constructing an efficient classification method depends on several factors. The major factor is the set of attributes utilized, and the independent attributes in the learning model that are effortlessly integrated into the class example [<xref ref-type="bibr" rid="ref-9">9</xref>]. With the emergence of deep learning (DL), current studies [<xref ref-type="bibr" rid="ref-10">10</xref>] leverage NNs for learning contextual and lexical features, eliminating the necessity for handcrafted features. While DL-based methods accomplish remarkable results, they can lack interpretability.</p>
<p>This study designs a deep learning with natural language processing enabled SA (DLNLP-SA) technique for sarcasm classification. The proposed DLNLP-SA technique comprises various sub-processes, namely pre-processing, feature vector conversion, and classification. Initially, pre-processing is performed in diverse ways such as single character removal, multi-space removal, URL removal, stopword removal, and tokenization. Secondly, the transformation into feature vectors takes place using the N-gram feature vector technique. Finally, a mayfly optimization (MFO) with multi-head self-attention based gated recurrent unit (MHSA-GRU) model is employed for the detection and classification of sarcasm. To verify the enhanced outcomes of the DLNLP-SA model, a comprehensive experimental investigation is performed on the benchmark dataset.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related works</title>
<p>Wen et al. [<xref ref-type="bibr" rid="ref-11">11</xref>] presented a sememe- and auxiliary-enhanced attention neural method, SAAG. At the word level, it introduces sememe knowledge to enhance the representation learning of Chinese words; the sememe is the minimal unit of meaning and provides a fine-grained representation of words. Bedi et al. [<xref ref-type="bibr" rid="ref-12">12</xref>] established MaSaC, a Hindi-English code-mixed dataset for multi-modal sarcasm recognition and humor classification in conversational dialog, which to their knowledge is the first dataset of its kind, and presented MSH-COMICS, a new attention-rich neural architecture for utterance classification. Ren et al. [<xref ref-type="bibr" rid="ref-13">13</xref>] introduced a multi-level memory network utilizing sentiment semantics (SS) to capture the features of sarcasm expression. This method utilizes a first-level memory network to capture SS and a second-level memory network to capture the contrast between the SS and the situation in each sentence. In addition, an improved CNN is utilized to augment the memory network in the absence of local information.</p>
<p>Zhang et al. [<xref ref-type="bibr" rid="ref-14">14</xref>] presented a complex-valued fuzzy network by leveraging the mathematical formalism of the quantum model and fuzzy logic (FL). The contextual interaction between neighboring utterances is described as the interaction between a quantum system and its surrounding environment, creating a quantum composite system, whereas the weight of the interaction is defined as a fuzzy membership function. Nayak et al. [<xref ref-type="bibr" rid="ref-15">15</xref>] evaluated different vectorization and ML techniques for detecting sarcastic headlines. Their experiments illustrate that pre-trained transformer-based embeddings integrated with an LSTM network offer the best outcomes. The authors in [<xref ref-type="bibr" rid="ref-16">16</xref>] examined negative sentiment tweets with the occurrence of hyperboles for SD. A total of 6000 and 600 preprocessed negative sentiment tweets containing #Kungflu, #Coronavirus, #Chinesevirus, #COVID19, and #Hantavirus were collected for SD. Five hyperbole features, namely capital letters, elongated words, interjections, intensifiers, and punctuation marks, were analyzed utilizing three well-known ML techniques.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>The Proposed Model</title>
<p>In this study, a new DLNLP-SA technique is presented that aims to detect and classify the occurrence of sarcasm in the input data. The proposed DLNLP-SA technique undergoes a distinct set of processes, namely pre-processing, N-gram feature extraction, MHSA-GRU based classification, and MFO based hyperparameter optimization. <xref ref-type="fig" rid="fig-1">Fig. 1</xref> demonstrates the block diagram of the DLNLP-SA technique.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Block diagram of DLNLP-SA technique</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-1.png"/>
</fig>
<sec id="s3_1">
<label>3.1</label>
<title>Pre-Processing</title>
<p>Once the dataset is obtained, the primary step is to pre-process the text data. Text pre-processing is the method of cleaning the raw text data, and a robust pre-processing pipeline is indispensable for NLP applications. Since every textual element passes through it, pre-processing is an important component in preparing the input text data. It comprises distinct techniques to translate the raw text into a well-defined form: lemmatization, removal of stopwords, and lexical analysis (removal of punctuation and special characters or symbols, word tokenization, and case normalization). The various sub-processes involved in data pre-processing are:<list list-type="bullet"><list-item>
<p>Removing numerals</p></list-item><list-item>
<p>Removing stop words</p></list-item><list-item>
<p>Removing multiple spaces</p></list-item><list-item>
<p>Removing punctuation marks</p></list-item><list-item>
<p>Removing single-letter words, and</p></list-item><list-item>
<p>Converting uppercase letters to lowercase.</p></list-item></list></p>
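As a minimal sketch of the pre-processing steps listed above (assuming Python; a small inline stopword set stands in for a full list such as NLTK's):

```python
import re

# Small illustrative stopword set; a full list (e.g., NLTK's) would be used in practice.
STOPWORDS = {"a", "an", "the", "is", "are", "and", "or", "to", "of", "in"}

def preprocess(text: str) -> list[str]:
    text = text.lower()                                # case folding
    text = re.sub(r"https?://\S+|www\.\S+", " ", text) # URL removal
    text = re.sub(r"\d+", " ", text)                   # numeral removal
    text = re.sub(r"[^\w\s]", " ", text)               # punctuation / special characters
    text = re.sub(r"\b\w\b", " ", text)                # single-letter words
    text = re.sub(r"\s+", " ", text).strip()           # multiple spaces
    tokens = text.split()                              # tokenization
    return [t for t in tokens if t not in STOPWORDS]   # stopword removal

print(preprocess("Oh GREAT, a 2nd flat tire!!! Visit https://example.com :)"))
```

The order of the steps matters: URLs are stripped before punctuation removal, otherwise the URL would be broken into spurious tokens.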
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>N-Gram Feature Extraction</title>
<p>Patterns are created by concatenating neighboring tokens into n-grams, where <inline-formula id="ieqn-1">
<mml:math id="mml-ieqn-1"><mml:mi>n</mml:mi><mml:mspace width="thickmathspace" /><mml:mo>=</mml:mo><mml:mspace width="thickmathspace" /><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mn>3...</mml:mn></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math>
</inline-formula>. In the simplest case, in which <inline-formula id="ieqn-2">
<mml:math id="mml-ieqn-2"><mml:mi>n</mml:mi></mml:math>
</inline-formula> equals one, the result is called a unigram. Employing n-grams makes it possible to estimate how likely a word is to appear in the text given the surrounding words. Usually, <inline-formula id="ieqn-3">
<mml:math id="mml-ieqn-3"><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>g</mml:mi><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>m</mml:mi><mml:mi>s</mml:mi><mml:mspace width="thickmathspace" /></mml:math>
</inline-formula> are rarely taken larger than <inline-formula id="ieqn-4">
<mml:math id="mml-ieqn-4"><mml:mi>n</mml:mi><mml:mspace width="thickmathspace" /><mml:mo>=</mml:mo><mml:mspace width="thickmathspace" /><mml:mn>3</mml:mn></mml:math>
</inline-formula>, since larger values tend to generate complex patterns that occur only rarely.</p>
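The n-gram construction described above can be sketched in a few lines (the token list is an illustrative example, not data from the paper):

```python
def ngrams(tokens, n):
    """Concatenate neighboring tokens into n-grams."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["great", "another", "rainy", "day"]
# Unigrams (n = 1) are the tokens themselves.
print(ngrams(tokens, 1))  # ['great', 'another', 'rainy', 'day']
print(ngrams(tokens, 2))  # ['great another', 'another rainy', 'rainy day']
print(ngrams(tokens, 3))  # ['great another rainy', 'another rainy day']
```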
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>MHSA-GRU Based Sarcasm Classification</title>
<p>For SD and classification, the MHSA-GRU model is employed. The GRU offers a basic unit comprising two control gates [<xref ref-type="bibr" rid="ref-17">17</xref>], the reset gate and the update gate; unlike the LSTM cell, it has no cell state and no third gate. The reset and update gates are analogous to the forget and input gates of the LSTM cell, but differ in how the outputs of these gates are used within the GRU cell. Consequently, a GRU cell has fewer training parameters than an LSTM, making training faster. Notably, the GRU was proposed for capturing dependencies at distinct time scales in machine translation tasks.</p>
<p><disp-formula id="eqn-1"><label>(1)</label>
<mml:math id="mml-eqn-1" display="block"><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>W</mml:mi><mml:mi>z</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>z</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>z</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mspace width="thickmathspace" /></mml:math>
</disp-formula></p>
<p><disp-formula id="eqn-2"><label>(2)</label>
<mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>W</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
</disp-formula></p>
<p><disp-formula id="eqn-3"><label>(3)</label>
<mml:math id="mml-eqn-3" display="block"><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2299;</mml:mo><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2299;</mml:mo><mml:mi>&#x03C8;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>W</mml:mi><mml:mi>h</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>h</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2299;</mml:mo><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>h</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
</disp-formula></p>
<p>where <inline-formula id="ieqn-5">
<mml:math id="mml-ieqn-5"><mml:mrow><mml:msub><mml:mi>W</mml:mi><mml:mi>z</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>W</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>W</mml:mi><mml:mi>h</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>M</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>z</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>R</mml:mi><mml:mi>h</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>N</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
</inline-formula> and <inline-formula id="ieqn-6">
<mml:math id="mml-ieqn-6"><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>z</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>h</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mi>N</mml:mi></mml:msup></mml:mrow></mml:math>
</inline-formula> denote the weight matrices and bias vectors, <inline-formula id="ieqn-7">
<mml:math id="mml-ieqn-7"><mml:mrow><mml:mspace width="thickmathspace" /><mml:mi mathvariant="normal">a</mml:mi><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">d</mml:mi><mml:mspace width="thickmathspace" /></mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mi>N</mml:mi></mml:msup></mml:mrow></mml:math>
</inline-formula> denote the input, update gate, reset gate, and output vector, respectively.</p>
<p>A GRU cell integrates the input and forget gates of the LSTM into a single update gate. This gate decides how much information from the preceding hidden state should be passed on to the following hidden state. The reset gate <inline-formula id="ieqn-8">
<mml:math id="mml-ieqn-8"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula> allows information that is irrelevant for the future to be dropped. Thus, the GRU simplifies the LSTM by removing certain parameters while maintaining its essential properties.</p>
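One GRU step following Eqs. (1)–(3) can be sketched in NumPy (random weights are illustrative only; shapes follow the definitions above, with hidden size N and input size M, and psi taken as tanh):

```python
import numpy as np

def gru_step(x_t, h_prev, Wz, Wr, Wh, Rz, Rr, Rh, bz, br, bh):
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z_t = sigmoid(Wz @ x_t + Rz @ h_prev + bz)       # update gate, Eq. (1)
    r_t = sigmoid(Wr @ x_t + Rr @ h_prev + br)       # reset gate, Eq. (2)
    h_cand = np.tanh(Wh @ x_t + Rh @ (r_t * h_prev) + bh)
    return (1.0 - z_t) * h_prev + z_t * h_cand       # new hidden state, Eq. (3)

rng = np.random.default_rng(0)
N, M = 4, 3  # hidden size N, input size M
Wz, Wr, Wh = [rng.standard_normal((N, M)) for _ in range(3)]
Rz, Rr, Rh = [rng.standard_normal((N, N)) for _ in range(3)]
bz, br, bh = [np.zeros(N) for _ in range(3)]

h = np.zeros(N)
for x_t in rng.standard_normal((5, M)):  # a sequence of 5 input vectors
    h = gru_step(x_t, h, Wz, Wr, Wh, Rz, Rr, Rh, bz, br, bh)
print(h.shape)  # (4,)
```

Since each step is a convex combination of the previous state and a tanh candidate, the hidden state stays bounded in (-1, 1).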
<p>Given a sentence <inline-formula id="ieqn-9">
<mml:math id="mml-ieqn-9"><mml:mi>S</mml:mi></mml:math>
</inline-formula>, we employ a standard tokenizer and utilize pre-trained methods for obtaining <inline-formula id="ieqn-11">
<mml:math id="mml-ieqn-11"><mml:mi>D</mml:mi></mml:math>
</inline-formula>-dimensional embeddings for each word in the sentence. The embedding <inline-formula id="ieqn-12">
<mml:math id="mml-ieqn-12"><mml:mi>S</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mspace width="thickmathspace" /></mml:mrow><mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:mspace width="thickmathspace" /></mml:mrow><mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mi>N</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:math>
</inline-formula> <inline-formula id="ieqn-13">
<mml:math id="mml-ieqn-13"><mml:mi>S</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>D</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
</inline-formula> forms the input to the algorithm. For detecting sarcasm in a sentence <inline-formula id="ieqn-14">
<mml:math id="mml-ieqn-14"><mml:mi>S</mml:mi></mml:math>
</inline-formula>, it is critical to identify certain words that offer important cues, such as sarcastic connotation and negative emotion. The significance of these cue-words depends on various aspects of the surrounding context. In the presented method, we leverage a multi-head self-attention model to recognize these cue-words in the input text. <xref ref-type="fig" rid="fig-2">Fig. 2</xref> depicts the framework of the GRU.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Framework of GRU</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-2.png"/>
</fig>
<p>The attention mechanism finds patterns in the input that are critical for solving the given task. In DL methods, self-attention [<xref ref-type="bibr" rid="ref-18">18</xref>] is an attention mechanism over a sequence that assists in learning the task-specific relationships among distinct components of the sequence, producing a good sequence representation. In the self-attention model, three linear projections, Key (K), Value (V), and Query (Q), of the provided input sequence are produced, in which <inline-formula id="ieqn-15">
<mml:math id="mml-ieqn-15"><mml:mi>Q</mml:mi><mml:mo>,</mml:mo><mml:mi>K</mml:mi><mml:mo>,</mml:mo><mml:mi>V</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>D</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
</inline-formula>.</p>
<p><disp-formula id="eqn-4"><label>(4)</label>
<mml:math id="mml-eqn-4" display="block"><mml:mi>A</mml:mi><mml:mo>=</mml:mo><mml:mi>s</mml:mi><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>t</mml:mi><mml:mrow><mml:mi mathvariant="normal">m</mml:mi><mml:mi mathvariant="normal">a</mml:mi><mml:mi mathvariant="normal">x</mml:mi><mml:mspace width="thickmathspace" /></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mrow><mml:mfrac><mml:mrow><mml:mi>Q</mml:mi><mml:mrow><mml:msup><mml:mi>K</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:msqrt><mml:mi>D</mml:mi></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow></mml:mstyle></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>V</mml:mi><mml:mspace width="thickmathspace" /></mml:math>
</disp-formula></p>
<p>In multi-head self-attention, several copies of the self-attention model are used in parallel. Each head captures distinct relations among the words in the input text and recognizes keywords that assist in classification.</p>
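A minimal NumPy sketch of Eq. (4) extended to multiple heads follows. Splitting the embedding dimension across heads is a common convention assumed here, and identity projections stand in for the learned linear maps that produce Q, K, and V in practice:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    D = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(D)) @ V  # scaled dot-product, Eq. (4)

def multi_head(S, num_heads):
    N, D = S.shape
    assert D % num_heads == 0
    d = D // num_heads
    # Each head attends over its own slice of the embedding dimension.
    heads = [self_attention(S[:, i*d:(i+1)*d], S[:, i*d:(i+1)*d], S[:, i*d:(i+1)*d])
             for i in range(num_heads)]
    return np.concatenate(heads, axis=-1)

S = np.random.default_rng(1).standard_normal((5, 8))  # N=5 tokens, D=8
out = multi_head(S, num_heads=2)
print(out.shape)  # (5, 8)
```

Each head's attention matrix is N×N with rows summing to one, so the output keeps the input's shape while mixing information across token positions.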
</sec>
<sec id="s3_4">
<label>3.4</label>
<title>MFO Based Hyperparameter Optimization</title>
<p>During the hyperparameter tuning process, the MFO algorithm is applied to properly tune the hyperparameters involved in the model [<xref ref-type="bibr" rid="ref-19">19</xref>]. Optimization is utilized for determining a specific and maximally accurate solution to a problem; in this case, MFO is utilized to find such solutions. Mayflies (MFs) are insects belonging to the ancient group Palaeoptera. The MFO algorithm simulates the social mating behavior of MFs: the male mayflies (MMFs) attract female mayflies (FMFs) by performing a nuptial dance above the water, making up-and-down movements to form a pattern. The technique is derived by observing three behaviors of the MFs, namely the movement of MMFs, the movement of FMFs, and the mating process of MFs. These are described below:</p>
<p>As mentioned before, the males gather in groups and dance a few meters above the water. The MFs are incapable of moving at maximum speed immediately, and the velocity with which they reach that level is computed as in <xref ref-type="disp-formula" rid="eqn-5">Eq. (5)</xref>.</p>
<p><disp-formula id="eqn-5"><label>(5)</label>
<mml:math id="mml-eqn-5" display="block"><mml:msubsup><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:msubsup><mml:mi>r</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>p</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:msubsup><mml:mi>r</mml:mi><mml:mi>g</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>g</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mspace width="thickmathspace" /></mml:math>
</disp-formula></p>
<p>where <inline-formula id="ieqn-16">
<mml:math id="mml-ieqn-16"><mml:msubsup><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup></mml:math>
</inline-formula> denotes the velocity of the MMF, <inline-formula id="ieqn-17">
<mml:math id="mml-ieqn-17"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup></mml:math>
</inline-formula> denotes the position of the MMF, <inline-formula id="ieqn-18">
<mml:math id="mml-ieqn-18"><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mi>n</mml:mi></mml:math>
</inline-formula> indexes the space dimensions, <inline-formula id="ieqn-19">
<mml:math id="mml-ieqn-19"><mml:mi>t</mml:mi></mml:math>
</inline-formula> stands for the time step and <inline-formula id="ieqn-20">
<mml:math id="mml-ieqn-20"><mml:mi>i</mml:mi></mml:math>
</inline-formula> signifies the MF number. Here, <inline-formula id="ieqn-21">
<mml:math id="mml-ieqn-21"><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math>
</inline-formula> and <inline-formula id="ieqn-22">
<mml:math id="mml-ieqn-22"><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math>
</inline-formula> are constants used to scale the contributions of the social and cognitive components. Besides, <inline-formula id="ieqn-23">
<mml:math id="mml-ieqn-23"><mml:mi>p</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula> signifies the best position visited by the MF, and <inline-formula id="ieqn-24">
<mml:math id="mml-ieqn-24"><mml:mi>N</mml:mi></mml:math>
</inline-formula> denotes the total number of MMFs. Finally, <inline-formula id="ieqn-25">
<mml:math id="mml-ieqn-25"><mml:mi>&#x03B2;</mml:mi></mml:math>
</inline-formula> denotes the visibility coefficient that limits the visibility of a mayfly to other mayflies, and <inline-formula id="ieqn-26">
<mml:math id="mml-ieqn-26"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>p</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula> and <inline-formula id="ieqn-27">
<mml:math id="mml-ieqn-27"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mi>g</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula> denote the distances of <inline-formula id="ieqn-28">
<mml:math id="mml-ieqn-28"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula> from <inline-formula id="ieqn-29">
<mml:math id="mml-ieqn-29"><mml:mi>p</mml:mi><mml:mi>b</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mrow><mml:msub><mml:mi>t</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula> and gbest, respectively.</p>
<p><disp-formula id="eqn-6"><label>(6)</label>
<mml:math id="mml-eqn-6" display="block"><mml:msubsup><mml:mi>&#x03C7;</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mi>X</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:math>
</disp-formula></p>
<p><disp-formula id="eqn-7"><label>(7)</label>
<mml:math id="mml-eqn-7" display="block"><mml:msubsup><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup><mml:mo>+</mml:mo><mml:mi>d</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>r</mml:mi><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /></mml:math>
</disp-formula></p>
<p>The new position of an MMF is estimated with <xref ref-type="disp-formula" rid="eqn-6">Eq. (6)</xref> by adding the velocity <inline-formula id="ieqn-30">
<mml:math id="mml-ieqn-30"><mml:msubsup><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:math>
</inline-formula> to the current position <inline-formula id="ieqn-31">
<mml:math id="mml-ieqn-31"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi></mml:msubsup></mml:math>
</inline-formula>. The best MMFs keep performing their nuptial dance by altering their velocities according to <xref ref-type="disp-formula" rid="eqn-7">Eq. (7)</xref>, where <inline-formula id="ieqn-32">
<mml:math id="mml-ieqn-32"><mml:mi>d</mml:mi></mml:math>
</inline-formula> denotes the nuptial dance coefficient and <inline-formula id="ieqn-33">
<mml:math id="mml-ieqn-33"><mml:mi>r</mml:mi></mml:math>
</inline-formula> is a random number in the range of &#x2212;1 to 1.</p>
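The male updates described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attraction terms with exponential distance weighting follow the standard mayfly-algorithm form that the symbol descriptions imply, and the coefficient values (a1, a2, beta, d) are placeholders.

```python
import numpy as np

def male_update(x, v, pbest, gbest, a1=1.0, a2=1.5, beta=2.0, d=0.1, rng=None):
    """One male-mayfly step: attraction-based velocity update, the nuptial
    dance of Eq. (7) for the best male, and the position update of Eq. (6)."""
    rng = np.random.default_rng() if rng is None else rng
    rp = np.linalg.norm(x - pbest)   # distance to the personal best position
    rg = np.linalg.norm(x - gbest)   # distance to the global best position
    # Cognitive and social attraction, damped by the visibility coefficient beta
    v_new = v + a1 * np.exp(-beta * rp**2) * (pbest - x) \
              + a2 * np.exp(-beta * rg**2) * (gbest - x)
    if np.allclose(x, gbest):        # the best male keeps dancing, Eq. (7)
        v_new = v + d * rng.uniform(-1.0, 1.0)
    return x + v_new, v_new          # Eq. (6): new position, new velocity
```

When a male already sits at the global best, the dance term perturbs its velocity by at most d, which keeps the best solution exploring locally instead of freezing.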
<p>Females do not swarm on their own; instead, the FMFs are attracted to males for breeding. This attraction process is modeled deterministically: the velocities of all mayflies, male and female, are computed by employing their fitness function (FF). Specifically, the FMF with the best fitness value is attracted to the MMF with the best fitness value. Therefore, the velocity of an FMF is computed as illustrated in <xref ref-type="disp-formula" rid="eqn-8">Eq. (8)</xref>:</p>
<p><disp-formula id="eqn-8"><label>(8)</label>
<mml:math id="mml-eqn-8" display="block"><mml:msubsup><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd columnalign="left"><mml:mrow><mml:msubsup><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup><mml:mo>+</mml:mo><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mn>3</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:msubsup><mml:mi>r</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>f</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mspace width="thickmathspace" /><mml:mi>f</mml:mi><mml:mspace width="thickmathspace" /><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x003E;</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd 
columnalign="left"><mml:mrow><mml:msubsup><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup><mml:mo>+</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:msup><mml:mi>l</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mspace width="thickmathspace" /><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>y</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2264;</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /></mml:math>
</disp-formula></p>
<p>Here, <inline-formula id="ieqn-34">
<mml:math id="mml-ieqn-34"><mml:msubsup><mml:mi>v</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup></mml:math>
</inline-formula> denotes the velocity of the FMF, <inline-formula id="ieqn-35">
<mml:math id="mml-ieqn-35"><mml:msubsup><mml:mi>y</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow><mml:mi>t</mml:mi></mml:msubsup></mml:math>
</inline-formula> denotes the position of the FMF, <inline-formula id="ieqn-36">
<mml:math id="mml-ieqn-36"><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:math>
</inline-formula> indexes the dimensions of the search space, <inline-formula id="ieqn-37">
<mml:math id="mml-ieqn-37"><mml:mi>t</mml:mi></mml:math>
</inline-formula> denotes the time step, and <inline-formula id="ieqn-38">
<mml:math id="mml-ieqn-38"><mml:mi>i</mml:mi></mml:math>
</inline-formula> denotes the mayfly index. In addition, <inline-formula id="ieqn-39">
<mml:math id="mml-ieqn-39"><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mn>3</mml:mn></mml:msub></mml:mrow></mml:math>
</inline-formula> is a constant that scales the contributions of the cognitive and social components. Furthermore, <inline-formula id="ieqn-40">
<mml:math id="mml-ieqn-40"><mml:mi>&#x03B2;</mml:mi></mml:math>
</inline-formula> denotes the visibility coefficient, and <inline-formula id="ieqn-41">
<mml:math id="mml-ieqn-41"><mml:mrow><mml:msub><mml:mi>r</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>f</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula> denotes the distance between the FMFs and the MMFs.</p>
<p><disp-formula id="eqn-9"><label>(9)</label>
<mml:math id="mml-eqn-9" display="block"><mml:msubsup><mml:mi>y</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:msubsup><mml:mi>y</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mspace width="thickmathspace" /></mml:math>
</disp-formula></p>
<p>Lastly, <inline-formula id="ieqn-42">
<mml:math id="mml-ieqn-42"><mml:mi>f</mml:mi><mml:mi>l</mml:mi></mml:math>
</inline-formula> denotes the random-walk coefficient, applied when the attraction between an FMF and an MMF fails, and <inline-formula id="ieqn-43">
<mml:math id="mml-ieqn-43"><mml:mi>r</mml:mi></mml:math>
</inline-formula> is a random number in the range of &#x2212;1 to 1. The new position of an FMF is obtained by adding the velocity <inline-formula id="ieqn-44">
<mml:math id="mml-ieqn-44"><mml:msubsup><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup></mml:math>
</inline-formula> to its current position, as in <xref ref-type="disp-formula" rid="eqn-9">Eq. (9)</xref>.</p>
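The female update of Eqs. (8) and (9) can be sketched in the same style. Again a hedged illustration: the branch condition mirrors Eq. (8) as written (attraction when f(y) &gt; f(x), random walk otherwise), and the coefficient values a3, beta, and fl are placeholders.

```python
import numpy as np

def female_update(y, v, x_male, f_y, f_x, a3=1.5, beta=2.0, fl=0.1, rng=None):
    """One female-mayfly step: Eq. (8) for the velocity, Eq. (9) for the
    position. x_male is the paired male; f_y, f_x are the fitness values."""
    rng = np.random.default_rng() if rng is None else rng
    if f_y > f_x:                          # first branch of Eq. (8): attraction
        rmf = np.linalg.norm(x_male - y)   # distance between the paired mayflies
        v_new = v + a3 * np.exp(-beta * rmf**2) * (x_male - y)
    else:                                  # attraction failed: random walk
        v_new = v + fl * rng.uniform(-1.0, 1.0)
    return y + v_new, v_new                # Eq. (9): new position, new velocity
```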
<p>The mating procedure between MMFs and FMFs is implemented with a function named the crossover operator. As mentioned previously, the fitness value is employed for choosing the mating partner, resulting in two offspring, offspring1 and offspring2, created as defined in <xref ref-type="disp-formula" rid="eqn-10">Eqs. (10)</xref> and <xref ref-type="disp-formula" rid="eqn-11">(11)</xref>:</p>
<p><disp-formula id="eqn-10"><label>(10)</label>
<mml:math id="mml-eqn-10" display="block"><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi><mml:mn>1</mml:mn><mml:mspace width="thickmathspace" /><mml:mo>=</mml:mo><mml:mrow><mml:msup><mml:mi>L</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>L</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow><mml:mi>f</mml:mi><mml:mi>e</mml:mi><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi></mml:math>
</disp-formula></p>
<p><disp-formula id="eqn-11"><label>(11)</label>
<mml:math id="mml-eqn-11" display="block"><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi><mml:mn>2</mml:mn><mml:mspace width="thickmathspace" /><mml:mo>=</mml:mo><mml:mrow><mml:msup><mml:mi>L</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow><mml:mi>f</mml:mi><mml:mi>e</mml:mi><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mo>+</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>L</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x2217;</mml:mo></mml:msup></mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /></mml:math>
</disp-formula></p>
<p>Here, <inline-formula id="ieqn-45">
<mml:math id="mml-ieqn-45"><mml:mi>L</mml:mi></mml:math>
</inline-formula> is a random number within a given range, <inline-formula id="ieqn-46">
<mml:math id="mml-ieqn-46"><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi></mml:math>
</inline-formula> denotes the male parent, and <inline-formula id="ieqn-47">
<mml:math id="mml-ieqn-47"><mml:mi>f</mml:mi><mml:mi>e</mml:mi><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi></mml:math>
</inline-formula> denotes the female parent. The initial velocities of <inline-formula id="ieqn-48">
<mml:math id="mml-ieqn-48"><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi><mml:mn>1</mml:mn></mml:math>
</inline-formula> and <inline-formula id="ieqn-49">
<mml:math id="mml-ieqn-49"><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mi>s</mml:mi><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi><mml:mn>2</mml:mn></mml:math>
</inline-formula> are set to zero.</p>
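The crossover of Eqs. (10) and (11) is a convex combination of the two parents. A minimal sketch, assuming L is drawn uniformly from [0, 1] (the text only says "a given range", so that interval is an assumption):

```python
import numpy as np

def mayfly_crossover(male, female, rng=None):
    """Eqs. (10)-(11): offspring1 = L*male + (1-L)*female and
    offspring2 = L*female + (1-L)*male; offspring velocities start at zero."""
    rng = np.random.default_rng() if rng is None else rng
    L = rng.uniform(0.0, 1.0)              # assumed range for L
    off1 = L * male + (1.0 - L) * female   # Eq. (10)
    off2 = L * female + (1.0 - L) * male   # Eq. (11)
    return (off1, np.zeros_like(off1)), (off2, np.zeros_like(off2))
```

Whatever L is drawn, the two offspring sum to male + female, so they always lie symmetrically between the parents along the segment joining them.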
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Performance Validation</title>
<p>The performance of the DLNLP-SA model is validated using two benchmark datasets, namely the Twitter dataset and the Dialogues dataset [<xref ref-type="bibr" rid="ref-20">20</xref>,<xref ref-type="bibr" rid="ref-21">21</xref>]. The Twitter dataset includes 308 samples in the sarcastic class and 1648 samples in the non-sarcastic class. The Dialogues dataset includes 2346 samples in each of the sarcastic and non-sarcastic classes.</p>
<p><xref ref-type="fig" rid="fig-3">Fig. 3</xref> offers a set of confusion matrices of the DLNLP-SA model on the training/testing data of the Twitter and Dialogues datasets. <xref ref-type="fig" rid="fig-3">Fig. 3a</xref> indicates that the DLNLP-SA model has recognized 166 samples as sarcastic and 1145 samples as non-sarcastic on 70% of the training data of the Twitter dataset. Also, <xref ref-type="fig" rid="fig-3">Fig. 3b</xref> represents that the DLNLP-SA algorithm has recognized 74 samples as sarcastic and 499 samples as non-sarcastic on 30% of the testing data of the Twitter dataset. Similarly, <xref ref-type="fig" rid="fig-3">Fig. 3c</xref> shows that the DLNLP-SA method has recognized 1627 samples as sarcastic and 1478 samples as non-sarcastic on 70% of the training data of the Dialogues dataset. Likewise, <xref ref-type="fig" rid="fig-3">Fig. 3d</xref> signifies that the DLNLP-SA methodology has recognized 674 samples as sarcastic and 654 samples as non-sarcastic on 30% of the testing data of the Dialogues dataset.</p>
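The headline accuracies reported later can be reproduced from these confusion-matrix counts. The split sizes used below are inferred, not stated explicitly: 1369/587 samples for the 70%/30% Twitter splits (of 1956 total) and 3284/1408 for the Dialogues splits (of 4692 total).

```python
def accuracy(correct, total):
    """Percentage of correctly classified samples, rounded to two decimals."""
    return round(100.0 * correct / total, 2)

print(accuracy(166 + 1145, 1369))   # Twitter, 70% training   -> 95.76
print(accuracy(74 + 499, 587))      # Twitter, 30% testing    -> 97.61
print(accuracy(1627 + 1478, 3284))  # Dialogues, 70% training -> 94.55
print(accuracy(674 + 654, 1408))    # Dialogues, 30% testing  -> 94.32
```

All four values agree with the accuracy columns of the result tables, which supports the inferred split sizes.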
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Confusion matrix of DLNLP-SA technique on training/testing data of Twitter dataset and Dialogues datasets</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-3.png"/>
</fig>
<p><xref ref-type="table" rid="table-1">Tab. 1</xref> provides detailed classification outcomes of the DLNLP-SA model on the Twitter dataset. The experimental results indicate that the DLNLP-SA model has attained effective outcomes on both the training and testing datasets.</p>
<table-wrap id="table-1"><label>Table 1</label>
<caption>
<title>Result analysis of DLNLP-SA model on the Twitter dataset</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th colspan="6">Twitter Dataset</th>
</tr>
<tr>
<th>Class labels</th>
<th>Accuracy</th>
<th>Precision</th>
<th>Recall</th>
<th>F-Score</th>
<th>MCC</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="6">Training (70%)</td>
</tr>
<tr>
<td>&#x2003;Sarcastic</td>
<td>95.76</td>
<td>98.22</td>
<td>75.11</td>
<td>85.13</td>
<td>83.72</td>
</tr>
<tr>
<td>&#x2003;Non Sarcastic</td>
<td>95.76</td>
<td>95.42</td>
<td>99.74</td>
<td>97.53</td>
<td>83.72</td>
</tr>
<tr>
<td>&#x2003;Average</td>
<td>95.76</td>
<td>96.82</td>
<td>87.43</td>
<td>91.33</td>
<td>83.72</td>
</tr>
<tr>
<td colspan="6">Testing (30%)</td>
</tr>
<tr>
<td>&#x2003;Sarcastic</td>
<td>97.61</td>
<td>98.67</td>
<td>85.06</td>
<td>91.36</td>
<td>90.32</td>
</tr>
<tr>
<td>&#x2003;Non Sarcastic</td>
<td>97.61</td>
<td>97.46</td>
<td>99.80</td>
<td>98.62</td>
<td>90.32</td>
</tr>
<tr>
<td>&#x2003;Average</td>
<td>97.61</td>
<td>98.06</td>
<td>92.43</td>
<td>94.99</td>
<td>90.32</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-4">Fig. 4</xref> demonstrates the overall classification outcomes of the DLNLP-SA model on 70% of the training data of the Twitter dataset. The DLNLP-SA model has classified the sarcastic samples with <inline-formula id="ieqn-50">
<mml:math id="mml-ieqn-50"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-51">
<mml:math id="mml-ieqn-51"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-52">
<mml:math id="mml-ieqn-52"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-53">
<mml:math id="mml-ieqn-53"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula>, and MCC of 95.76%, 98.22%, 75.11%, 85.13%, and 83.72% respectively. In addition, the DLNLP-SA model has categorized the non-sarcastic samples with <inline-formula id="ieqn-54">
<mml:math id="mml-ieqn-54"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-55">
<mml:math id="mml-ieqn-55"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-56">
<mml:math id="mml-ieqn-56"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-57">
<mml:math id="mml-ieqn-57"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula>, and MCC of 95.76%, 95.42%, 99.74%, 97.53%, and 83.72% respectively.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Result analysis of DLNLP-SA model on 70% of training data on Twitter dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-4.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-5">Fig. 5</xref> showcases the overall classification outcomes of the DLNLP-SA model on 30% of the testing data of the Twitter dataset. The DLNLP-SA method has classified the sarcastic samples with <inline-formula id="ieqn-58">
<mml:math id="mml-ieqn-58"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-59">
<mml:math id="mml-ieqn-59"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-60">
<mml:math id="mml-ieqn-60"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-61">
<mml:math id="mml-ieqn-61"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula>, and MCC of 97.61%, 98.67%, 85.06%, 91.36%, and 90.32% respectively. Eventually, the DLNLP-SA methodology has categorized the non-sarcastic samples with <inline-formula id="ieqn-62">
<mml:math id="mml-ieqn-62"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-63">
<mml:math id="mml-ieqn-63"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-64">
<mml:math id="mml-ieqn-64"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-65">
<mml:math id="mml-ieqn-65"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula>, and MCC of 97.61%, 97.46%, 99.80%, 98.62%, and 90.32% correspondingly.</p>
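The per-class F-scores quoted above are recoverable from the reported precision and recall as their harmonic mean. A quick check on the Twitter figures (small last-digit differences are possible, since the tables were presumably computed from unrounded values):

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall, both given in percent."""
    return round(2 * precision * recall / (precision + recall), 2)

print(f_score(95.42, 99.74))   # non-sarcastic, 70% training -> 97.53
print(f_score(98.67, 85.06))   # sarcastic, 30% testing      -> 91.36
print(f_score(97.46, 99.80))   # non-sarcastic, 30% testing  -> 98.62
```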
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Result analysis of DLNLP-SA model on 30% of testing data on Twitter dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-5.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-6">Fig. 6</xref> reports the precision-recall curve analysis of the DLNLP-SA model on 30% of the testing data of the Twitter dataset. The figure indicates that the DLNLP-SA model has attained effectual outcomes on the Twitter dataset.</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Precision-recall analysis of DLNLP-SA model on 30% of testing data on Twitter dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-6.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-7">Fig. 7</xref> demonstrates the ROC analysis of the DLNLP-SA model on 30% of the testing data of the Twitter dataset. The results indicate that the DLNLP-SA model attains maximum performance on the testing dataset over the other models.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>ROC analysis of DLNLP-SA model on 30% of testing data on Twitter dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-7.png"/>
</fig>
<p>A brief comparative study of the DLNLP-SA model with recent models on the Twitter dataset is portrayed in <xref ref-type="table" rid="table-2">Tab. 2</xref> and <xref ref-type="fig" rid="fig-8">Fig. 8</xref>. The experimental results indicate that the NBOW and ELMo-BiLSTM models have obtained lower classification outcomes than the other methods. At the same time, the Fracking sarcasm and A2Text-Net models have reached slightly improved performance. Next, the IMHSAA model has accomplished reasonable outcomes. Finally, the proposed DLNLP-SA technique demonstrates enhanced results with <inline-formula id="ieqn-66">
<mml:math id="mml-ieqn-66"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-67">
<mml:math id="mml-ieqn-67"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-68">
<mml:math id="mml-ieqn-68"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, and <inline-formula id="ieqn-69">
<mml:math id="mml-ieqn-69"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula> of 98.06%, 92.43%, 97.61%, and 94.99% respectively.</p>
<table-wrap id="table-2"><label>Table 2</label>
<caption>
<title>Comparative analysis of DLNLP-SA technique with recent approaches on Twitter dataset</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Methods</th>
<th>Precision</th>
<th>Recall</th>
<th>Accuracy</th>
<th>F-Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>NBOW Model</td>
<td>70.87</td>
<td>62.03</td>
<td>64.26</td>
<td>64.18</td>
</tr>
<tr>
<td>ELMo-BiLSTM Model</td>
<td>78.04</td>
<td>73.72</td>
<td>77.68</td>
<td>75.05</td>
</tr>
<tr>
<td>Fracking Sarcasm Model</td>
<td>88.04</td>
<td>87.63</td>
<td>88.57</td>
<td>88.29</td>
</tr>
<tr>
<td>A2Text-Net Model</td>
<td>91.59</td>
<td>91.06</td>
<td>91.49</td>
<td>89.65</td>
</tr>
<tr>
<td>IMHSAA</td>
<td>95.19</td>
<td>94.47</td>
<td>95.40</td>
<td>94.76</td>
</tr>
<tr>
<td>DLNLP-SA</td>
<td>98.06</td>
<td>92.43</td>
<td>97.61</td>
<td>94.99</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Comparative analysis of DLNLP-SA technique on Twitter dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-8.png"/>
</fig>
<p><xref ref-type="table" rid="table-3">Tab. 3</xref> offers detailed classification outcomes of the DLNLP-SA method on the Dialogues dataset. The experimental results indicate that the DLNLP-SA algorithm has attained effective outcomes on both the training and testing datasets.</p>
<table-wrap id="table-3"><label>Table 3</label>
<caption>
<title>Result analysis of DLNLP-SA model on the Dialogues dataset</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th colspan="6">Dialogues Dataset</th>
</tr>
<tr>
<th>Class labels</th>
<th>Accuracy</th>
<th>Precision</th>
<th>Recall</th>
<th>F-Score</th>
<th>MCC</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="6">Training (70%)</td>
</tr>
<tr>
<td>&#x2003;Sarcastic</td>
<td>94.55</td>
<td>91.82</td>
<td>97.95</td>
<td>94.79</td>
<td>89.29</td>
</tr>
<tr>
<td>&#x2003;Non Sarcastic</td>
<td>94.55</td>
<td>97.75</td>
<td>91.07</td>
<td>94.29</td>
<td>89.29</td>
</tr>
<tr>
<td>&#x2003;Average</td>
<td>94.55</td>
<td>94.78</td>
<td>94.51</td>
<td>94.54</td>
<td>89.29</td>
</tr>
<tr>
<td colspan="6">Testing (30%)</td>
</tr>
<tr>
<td>&#x2003;Sarcastic</td>
<td>94.32</td>
<td>90.71</td>
<td>98.39</td>
<td>94.40</td>
<td>88.95</td>
</tr>
<tr>
<td>&#x2003;Non Sarcastic</td>
<td>94.32</td>
<td>98.35</td>
<td>90.46</td>
<td>94.24</td>
<td>88.95</td>
</tr>
<tr>
<td>&#x2003;Average</td>
<td>94.32</td>
<td>94.53</td>
<td>94.43</td>
<td>94.32</td>
<td>88.95</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-9">Fig. 9</xref> depicts the overall classification outcomes of the DLNLP-SA approach on 70% of the training data of the Dialogues dataset. The DLNLP-SA model has classified the sarcastic samples with <inline-formula id="ieqn-70">
<mml:math id="mml-ieqn-70"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-71">
<mml:math id="mml-ieqn-71"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-72">
<mml:math id="mml-ieqn-72"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-73">
<mml:math id="mml-ieqn-73"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula>, and MCC of 94.55%, 91.82%, 97.95%, 94.79%, and 89.29% respectively. Moreover, the DLNLP-SA model has categorized the non-sarcastic samples with <inline-formula id="ieqn-74">
<mml:math id="mml-ieqn-74"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-75">
<mml:math id="mml-ieqn-75"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-76">
<mml:math id="mml-ieqn-76"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-77">
<mml:math id="mml-ieqn-77"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula>, and MCC of 94.55%, 97.75%, 91.07%, 94.29%, and 89.29% correspondingly.</p>
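The MCC values quoted above can be reproduced once the full confusion matrix is known. For the sarcastic class on the 70% Dialogues training split, TP = 1627 and TN = 1478 come from Fig. 3c, while FP = 145 and FN = 34 are back-calculated here from the reported precision (91.82%) and recall (97.95%); those two counts are an inference for illustration, not stated in the paper.

```python
from math import sqrt

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient, reported in percent."""
    num = tp * tn - fp * fn
    den = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return round(100.0 * num / den, 2)

# TP and TN read from Fig. 3c; FP and FN inferred from precision/recall
print(mcc(tp=1627, fp=145, fn=34, tn=1478))   # -> 89.29
```

The inferred counts are self-consistent: 1627/(1627+145) gives the reported 91.82% precision and 1627/(1627+34) the reported 97.95% recall.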
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Result analysis of DLNLP-SA model on 70% of training data on Dialogues dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-9.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-10">Fig. 10</xref> illustrates the overall classification outcomes of the DLNLP-SA technique on 30% of the testing data of the Dialogues dataset. The DLNLP-SA model has classified the sarcastic samples with <inline-formula id="ieqn-78">
<mml:math id="mml-ieqn-78"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-79">
<mml:math id="mml-ieqn-79"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-80">
<mml:math id="mml-ieqn-80"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-81">
<mml:math id="mml-ieqn-81"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula>, and MCC of 94.32%, 90.71%, 98.39%, 94.40%, and 88.95% correspondingly. Besides, the DLNLP-SA model has categorized the non-sarcastic samples with <inline-formula id="ieqn-82">
<mml:math id="mml-ieqn-82"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-83">
<mml:math id="mml-ieqn-83"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-84">
<mml:math id="mml-ieqn-84"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-85">
<mml:math id="mml-ieqn-85"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula>, and MCC of 94.32%, 98.35%, 90.46%, 94.24%, and 88.95% respectively.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Result analysis of DLNLP-SA model on 30% of testing data on Dialogues dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-10.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-11">Fig. 11</xref> demonstrates the precision-recall curve analysis of the DLNLP-SA system on 30% of the testing data of the Dialogues dataset. The figure indicates that the DLNLP-SA approach has attained effectual outcomes on the Dialogues dataset.</p>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Precision-recall analysis of DLNLP-SA model on 30% of testing data on Dialogues dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-11.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-12">Fig. 12</xref> demonstrates the ROC inspection of the DLNLP-SA approach on the 30% of the Dialogues dataset used for testing. The results indicate that the DLNLP-SA technique attained maximum performance on the testing dataset compared with the other methods.</p>
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>ROC analysis of DLNLP-SA model on 30% of testing data on Dialogues dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-12.png"/>
</fig>
<p>A detailed comparative study of the DLNLP-SA model with recent methods on the Dialogues dataset is portrayed in <xref ref-type="table" rid="table-4">Tab. 4</xref> and <xref ref-type="fig" rid="fig-13">Fig. 13</xref>. The experimental results indicate that the Attention-LSTM and SIARN models obtained lower classification outcomes than the other methods, whereas the MIARN and ELMo-BiLSTM methods reached slightly improved performance. Next, the IMHSAA technique accomplished a reasonable outcome. Finally, the proposed DLNLP-SA technique demonstrated enhanced results, with <inline-formula id="ieqn-86">
<mml:math id="mml-ieqn-86"><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-87">
<mml:math id="mml-ieqn-87"><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mi>l</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, <inline-formula id="ieqn-88">
<mml:math id="mml-ieqn-88"><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>c</mml:mi><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math>
</inline-formula>, and <inline-formula id="ieqn-89">
<mml:math id="mml-ieqn-89"><mml:mrow><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
</inline-formula> of 94.53%, 94.43%, 94.32%, and 94.32% respectively.</p>
<table-wrap id="table-4"><label>Table 4</label>
<caption>
<title>Comparative analysis of DLNLP-SA technique with recent approaches on Dialogues dataset</title></caption>
<table><colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Methods</th>
<th>Precision (%)</th>
<th>Recall (%)</th>
<th>Accuracy (%)</th>
<th>F-score (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Attention-LSTM Model</td>
<td>69.93</td>
<td>69.31</td>
<td>70.18</td>
<td>69.92</td>
</tr>
<tr>
<td>SIARN Model</td>
<td>72.62</td>
<td>72.18</td>
<td>71.74</td>
<td>72.19</td>
</tr>
<tr>
<td>MIARN Model</td>
<td>73.10</td>
<td>73.12</td>
<td>72.78</td>
<td>72.62</td>
</tr>
<tr>
<td>ELMo-BiLSTM Model</td>
<td>75.68</td>
<td>75.62</td>
<td>76.02</td>
<td>75.82</td>
</tr>
<tr>
<td>IMHSAA</td>
<td>77.92</td>
<td>77.57</td>
<td>77.22</td>
<td>77.66</td>
</tr>
<tr>
<td>DLNLP-SA</td>
<td>94.53</td>
<td>94.43</td>
<td>94.32</td>
<td>94.32</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-13">
<label>Figure 13</label>
<caption>
<title>Comparative analysis of DLNLP-SA technique on Dialogues dataset</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CSSE_29603-fig-13.png"/>
</fig>
<p>From these results and the above discussion, it can be concluded that the DLNLP-SA model accomplished maximum performance on both the Twitter and Dialogues test datasets.</p>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion</title>
<p>In this study, a novel DLNLP-SA technique was developed to detect and classify the occurrence of sarcasm in the input data. The proposed DLNLP-SA technique comprises a distinct set of processes, namely pre-processing, N-gram feature extraction, MHSA-GRU based classification, and MFO based hyperparameter optimization. The application of the MFO algorithm aids in the effectual selection of the hyperparameters involved in the MHSA-GRU model. To investigate the improved performance of the DLNLP-SA model, a comprehensive set of simulations was executed on benchmark datasets, and the outcomes signified its supremacy over the existing approaches. Therefore, the DLNLP-SA model can be utilized as a proficient tool for SD and classification. In the future, the detection efficiency can be improved by hybrid DL models.</p>
</sec>
</body>
<back>
<ack>
<p>This work was supported through the Annual Funding track by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Project No. AN000685].</p>
</ack><fn-group>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Vinoth</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Prabhavathy</surname></string-name></person-group>, &#x201C;<article-title>An intelligent machine learning-based sarcasm detection and classification model on social networks</article-title>,&#x201D; <source>The Journal of Supercomputing</source>, vol. <volume>19</volume>, no. <issue>1&#x2013;2</issue>, pp. <fpage>288</fpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>L. K.</given-names> <surname>Ahire</surname></string-name>, <string-name><given-names>S. D.</given-names> <surname>Babar</surname></string-name> and <string-name><given-names>G. R.</given-names> <surname>Shinde</surname></string-name></person-group>, <chapter-title>Sarcasm detection in online social network: Myths, realities, and issues</chapter-title>. In: <source>Security Issues and Privacy Threats in Smart Ubiquitous Computing, Studies in Systems, Decision and Control Book Series</source>. Vol. <volume>341</volume>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer</publisher-name>, pp. <fpage>227</fpage>&#x2013;<lpage>238</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L. H.</given-names> <surname>Son</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>S. R.</given-names> <surname>Sangwan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Arora</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Nayyar</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Sarcasm detection using soft attention-based bidirectional long short-term memory model with convolution network</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>7</volume>, pp. <fpage>23319</fpage>&#x2013;<lpage>23328</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Jain</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name> and <string-name><given-names>G.</given-names> <surname>Garg</surname></string-name></person-group>, &#x201C;<article-title>Sarcasm detection in mash-up language using soft-attention based bi-directional LSTM and feature-rich CNN</article-title>,&#x201D; <source>Applied Soft Computing</source>, vol. <volume>91</volume>, no. <issue>1&#x2013;2</issue>, pp. <fpage>106198</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Pawar</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Bhingarkar</surname></string-name></person-group>, &#x201C;<article-title>Machine learning based sarcasm detection on twitter data</article-title>,&#x201D; in <conf-name>2020 5th Int. Conf. on Communication and Electronics Systems (ICCES)</conf-name>, <conf-loc>Coimbatore, India</conf-loc>, pp. <fpage>957</fpage>&#x2013;<lpage>961</lpage>, <year>2020</year>. </mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>V. T.</given-names> <surname>Narapareddy</surname></string-name>, <string-name><given-names>V. A.</given-names> <surname>Srikanth</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Malapati</surname></string-name> and <string-name><given-names>L. B. M.</given-names> <surname>Neti</surname></string-name></person-group>, &#x201C;<article-title>Sarcasm detection using multi-head attention based bidirectional lstm</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>8</volume>, pp. <fpage>6388</fpage>&#x2013;<lpage>6397</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R. A.</given-names> <surname>Potamias</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Siolas</surname></string-name> and <string-name><given-names>A. G.</given-names> <surname>Stafylopatis</surname></string-name></person-group>, &#x201C;<article-title>A transformer-based approach to irony and sarcasm detection</article-title>,&#x201D; <source>Neural Computing and Applications</source>, vol. <volume>32</volume>, no. <issue>23</issue>, pp. <fpage>17309</fpage>&#x2013;<lpage>17320</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S. M.</given-names> <surname>Sarsam</surname></string-name>, <string-name><given-names>H. A.</given-names> <surname>Samarraie</surname></string-name>, <string-name><given-names>A. I.</given-names> <surname>Alzahrani</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Wright</surname></string-name></person-group>, &#x201C;<article-title>Sarcasm detection using machine learning algorithms in Twitter: A systematic review</article-title>,&#x201D; <source>International Journal of Market Research</source>, vol. <volume>62</volume>, no. <issue>5</issue>, pp. <fpage>578</fpage>&#x2013;<lpage>598</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Du</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Pathan</surname></string-name>, <string-name><given-names>H. K.</given-names> <surname>Teklehaimanot</surname></string-name> and <string-name><given-names>Z.</given-names> <surname>Yang</surname></string-name></person-group>, &#x201C;<article-title>An effective sarcasm detection approach based on sentimental context and individual expression habits</article-title>,&#x201D; <source>Cognitive Computation</source>, vol. <volume>14</volume>, no. <issue>1</issue>, pp. <fpage>78</fpage>&#x2013;<lpage>90</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Tiwari</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Wang</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>CFN: A complex-valued fuzzy network for sarcasm detection in conversations</article-title>,&#x201D; <source>IEEE Transactions on Fuzzy Systems</source>, vol. <volume>29</volume>, no. <issue>12</issue>, pp. <fpage>3696</fpage>&#x2013;<lpage>3710</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Z.</given-names> <surname>Wen</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Gui</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Guo</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Yu</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Sememe knowledge and auxiliary information enhanced approach for sarcasm detection</article-title>,&#x201D; <source>Information Processing &#x0026; Management</source>, vol. <volume>59</volume>, no. <issue>3</issue>, pp. <fpage>102883</fpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Bedi</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Akhtar</surname></string-name> and <string-name><given-names>T.</given-names> <surname>Chakraborty</surname></string-name></person-group>, &#x201C;<article-title>Multi-modal sarcasm detection and humor classification in code-mixed conversations</article-title>,&#x201D; <source>IEEE Transactions on Affective Computing</source>, pp. <fpage>1</fpage>, <year>2021</year>. <uri>http://dx.doi.org/10.1109/TAFFC.2021.3083522</uri>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Ren</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Xu</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Lin</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Liu</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Yang</surname></string-name></person-group>, &#x201C;<article-title>Sarcasm detection with sentiment semantics enhanced multi-level memory network</article-title>,&#x201D; <source>Neurocomputing</source>, vol. <volume>401</volume>, no. <issue>1&#x2013;2</issue>, pp. <fpage>320</fpage>&#x2013;<lpage>326</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Tiwari</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Wang</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>CFN: A complex-valued fuzzy network for sarcasm detection in conversations</article-title>,&#x201D; <source>IEEE Transactions on Fuzzy Systems</source>, vol. <volume>29</volume>, no. <issue>12</issue>, pp. <fpage>3696</fpage>&#x2013;<lpage>3710</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>D. K.</given-names> <surname>Nayak</surname></string-name> and <string-name><given-names>B. K.</given-names> <surname>Bolla</surname></string-name></person-group>, <chapter-title>Efficient deep learning methods for sarcasm detection of news headlines</chapter-title>. In: <source>Machine Learning and Autonomous Systems, Smart Innovation, Systems and Technologies book series</source>. Vol. <volume>269</volume>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer</publisher-name>, pp. <fpage>371</fpage>&#x2013;<lpage>382</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>V.</given-names> <surname>Govindan</surname></string-name> and <string-name><given-names>V.</given-names> <surname>Balakrishnan</surname></string-name></person-group>, &#x201C;<article-title>A machine learning approach in analysing the effect of hyperboles using negative sentiment tweets for sarcasm detection</article-title>,&#x201D; <source>Journal of King Saud University-Computer and Information Sciences</source>, vol. <volume>124</volume>, no. <issue>1</issue>, pp. <fpage>109781</fpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Zhao</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Yan</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Mao</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Shen</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Machine health monitoring using local feature-based gated recurrent unit networks</article-title>,&#x201D; <source>IEEE Transactions on Industrial Electronics</source>, vol. <volume>65</volume>, no. <issue>2</issue>, pp. <fpage>1539</fpage>&#x2013;<lpage>1548</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Akula</surname></string-name> and <string-name><given-names>I.</given-names> <surname>Garibay</surname></string-name></person-group>, &#x201C;<article-title>Interpretable multi-head self-attention architecture for sarcasm detection in social media</article-title>,&#x201D; <source>Entropy</source>, vol. <volume>23</volume>, no. <issue>4</issue>, pp. <fpage>394</fpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Zervoudakis</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Tsafarakis</surname></string-name></person-group>, &#x201C;<article-title>A mayfly optimization algorithm</article-title>,&#x201D; <source>Computers &#x0026; Industrial Engineering</source>, vol. <volume>145</volume>, no. <issue>5</issue>, pp. <fpage>106559</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>Riloff</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Qadir</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Surve</surname></string-name>, <string-name><given-names>L. D.</given-names> <surname>Silva</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Gilbert</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Sarcasm as contrast between a positive sentiment and negative situation</article-title>,&#x201D; in <conf-name>Proc. of the 2013 Conf. on Empirical Methods in Natural Language Processing</conf-name>, <conf-loc>Seattle, Washington, USA</conf-loc>, pp. <fpage>704</fpage>&#x2013;<lpage>714</lpage>, <year>2013</year>. </mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Oraby</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Harrison</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Reed</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Hernandez</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Riloff</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Creating and characterizing a diverse corpus of sarcasm in dialogue</article-title>,&#x201D; in <conf-name>Proc. of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue</conf-name>, <conf-loc>Los Angeles, CA, USA</conf-loc>, pp. <fpage>31</fpage>&#x2013;<lpage>41</lpage>, <year>2016</year>. </mixed-citation></ref>
</ref-list>
</back>
</article>