<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">66898</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2025.066898</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>FedCognis: An Adaptive Federated Learning Framework for Secure Anomaly Detection in Industrial IoT-Enabled Cognitive Cities</article-title>
<alt-title alt-title-type="left-running-head">FedCognis: An Adaptive Federated Learning Framework for Secure Anomaly Detection in Industrial IoT-Enabled Cognitive Cities</alt-title>
<alt-title alt-title-type="right-running-head">FedCognis: An Adaptive Federated Learning Framework for Secure Anomaly Detection in Industrial IoT-Enabled Cognitive Cities</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Alabdulatif</surname><given-names>Abdulatif</given-names></name><email>ab.alabdulatif@qu.edu.sa</email></contrib>
<aff id="aff-1">
<institution>Department of Computer Science, College of Computer, Qassim University</institution>, <addr-line>Buraidah, 52571</addr-line>, <country>Saudi Arabia</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Abdulatif Alabdulatif. Email: <email>ab.alabdulatif@qu.edu.sa</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2025</year>
</pub-date>
<pub-date date-type="pub" publication-format="electronic">
<day>29</day><month>08</month><year>2025</year>
</pub-date>
<volume>85</volume>
<issue>1</issue>
<fpage>1185</fpage>
<lpage>1220</lpage>
<history>
<date date-type="received">
<day>19</day>
<month>4</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>09</day>
<month>6</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2025 The Author.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Published by Tech Science Press.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_66898.pdf"></self-uri>
<abstract>
<p>FedCognis is a secure and scalable federated learning framework designed for continuous anomaly detection in Industrial Internet of Things-enabled Cognitive Cities (IIoTCC). It introduces two key innovations: a Quantum Secure Authentication (QSA) mechanism for adversarial defense and integrity validation, and a Self-Attention Long Short-Term Memory (SALSTM) model for high-accuracy spatiotemporal anomaly detection. Addressing core challenges in traditional Federated Learning (FL)&#x2014;such as model poisoning, communication overhead, and concept drift&#x2014;FedCognis integrates dynamic trust-based aggregation and lightweight cryptographic verification to ensure secure, real-time operation across heterogeneous IIoT domains including utilities, public safety, and traffic systems. Evaluated on the WUSTL-IIoTCC-2021 dataset, FedCognis achieves 94.5% accuracy, 0.941 AUC for precision-recall, and 0.896 ROC-AUC, while reducing bandwidth consumption by 72%. The framework demonstrates sublinear computational complexity and a resilience score of 96.56% across six security dimensions. These results confirm FedCognis as a robust and adaptive anomaly detection solution suitable for deployment in large-scale cognitive urban infrastructures.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Cognitive cities</kwd>
<kwd>federated learning</kwd>
<kwd>industrial IoT</kwd>
<kwd>anomaly detection</kwd>
<kwd>trust management</kwd>
<kwd>smart infrastructure</kwd>
<kwd>security</kwd>
</kwd-group>
<funding-group>
<award-group id="awg1">
<funding-source>Qassim University</funding-source>
<award-id>QU-APC-2025</award-id>
</award-group>
</funding-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Modern cities are rapidly evolving into intelligent urban environments known as Cognitive Cities. These cities rely on data, artificial intelligence (AI), and real-time decision-making to optimize services and infrastructure. At the core of this transformation is the Industrial Internet of Things (IIoT), which supports sectors like transportation, energy, public safety, manufacturing, and utilities through decentralized, adaptive, and secure infrastructures.</p>
<p>A foundational layer in these systems continuously generates massive streams of sensor data crucial to managing urban operations. However, these data sources form large-scale, distributed networks that are difficult to secure and must remain resilient to evolving threats and operational changes. Forecasts for 2025 estimate that IIoT devices within Cognitive Cities will exceed 75 billion, generating more than 79 zettabytes of data annually [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>]. This data supports intelligent features like predictive maintenance and real-time responsiveness, but also introduces significant challenges related to scalability, security, and system performance. Traditional centralized anomaly detection methods fall short in such environments due to privacy concerns, scalability limits, and high computational demands.</p>
<p>To address these limitations, federated learning (FL) has emerged as a promising alternative. FL enables distributed IIoT nodes to collaboratively train models without sharing raw data, preserving privacy while supporting decentralization [<xref ref-type="bibr" rid="ref-3">3</xref>]. However, FL still faces security threats that can degrade detection performance by as much as 30% [<xref ref-type="bibr" rid="ref-4">4</xref>], and concept drift due to time-varying operating conditions can reduce accuracy by 15&#x2013;25% [<xref ref-type="bibr" rid="ref-5">5</xref>]. Additionally, frequent model updates in bandwidth-constrained networks can increase communication costs by a factor of five compared to centralized learning [<xref ref-type="bibr" rid="ref-6">6</xref>].</p>
<p>Given the diversity of IIoT environments in Cognitive Cities, effective trust management is essential. Without it, compromised nodes could introduce adversarial noise that undermines the learning process. To overcome these challenges, we propose FedCognis, a secure and scalable anomaly detection framework tailored for distributed Cognitive City infrastructures. FedCognis incorporates a trust-based update mechanism, Quantum Secure Authentication (QSA), and a Self-Attention LSTM (SALSTM) model to enable accurate, real-time, privacy-preserving anomaly detection across diverse IIoT domains.</p>
<p><xref ref-type="fig" rid="fig-1">Fig. 1</xref> provides a high-level overview of the FedCognis framework within a Cognitive City environment. It illustrates federated IIoT devices deployed across smart domains such as traffic and energy, where each device performs local anomaly detection, transmits secure updates using Quantum Secure Authentication (QSA), and contributes to a centralized global model powered by a Self-Attention LSTM (SALSTM). The system represents a decentralized yet coordinated infrastructure that addresses key challenges related to privacy, adversarial threats, and real-time anomaly detection.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Layered architecture of FedCognis illustrating distributed IIoT nodes, edge-level trust evaluators, Quantum Secure Authentication (QSA), and centralized SALSTM-based global aggregation across Cognitive City domains</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-1.tif"/>
</fig>
<p>Despite these advances, security in IIoT-enabled Cognitive Cities (IIoTCC) remains a critical concern, as conventional federated learning (FL) and cryptographic security models are often too restrictive. Adversarial attacks on FL are feasible, and current quantum-secure authentication methods [<xref ref-type="bibr" rid="ref-7">7</xref>,<xref ref-type="bibr" rid="ref-8">8</xref>] involve significant computational overhead. To ensure model robustness, a real-time and adaptive security approach is essential. The core motivation behind FedCognis is to develop a scalable, efficient, and secure federated AI model that continuously learns from IIoTCC data while staying resilient to evolving threats. This research aims to reduce security risks, enhance anomaly detection accuracy, and improve the scalability of FL in IIoTCC networks.</p>
<p>As IIoTCC adoption expands, issues such as adversarial attacks, model poisoning, and data integrity breaches have become increasingly severe. Traditional FL models are vulnerable to these threats, incur communication inefficiencies, and struggle to adapt to dynamic industrial conditions [<xref ref-type="bibr" rid="ref-9">9</xref>,<xref ref-type="bibr" rid="ref-10">10</xref>]. While QSA enhances security, its high computational cost makes it unsuitable for real-time anomaly detection in resource-constrained IIoTCC environments. Unlike academic settings, industrial deployments often lack the resilience that scalable, adaptive federated AI models and robust authentication mechanisms provide. This research proposes a robust, secure, and continuous anomaly detection solution for IIoTCC, integrating QSA and a self-attention LSTM architecture.</p>
<p>The proposed solution addresses both the security threats and adaptability challenges associated with FL-based anomaly detection in IIoTCC. It employs QSA and assigns dynamic trust scores to filter out adversarial updates and mitigate poisoning attacks. An adaptive trust-weighted aggregation mechanism is incorporated to combine local model updates while minimizing divergence from the global model, ensuring continuous adaptation to evolving IIoTCC conditions. The anomaly detection model is first formulated to be scalable at low cost, secure, and capable of real-time operation. A multi-objective optimization function is then introduced to enhance its security, scalability, and real-time efficiency.</p>
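<p>The trust-weighted aggregation described above can be illustrated with a minimal NumPy sketch that combines an exponential trust update with trust- and data-size-weighted averaging (these two steps are formalized in Section 3.1). This is a simplified sketch, not the FedCognis implementation: the function names, the decay parameters alpha and beta, and the toy model vectors are illustrative assumptions.</p>

```python
import numpy as np

def update_trust(trust, local_model, prev_global, alpha=0.8, beta=0.5):
    # Exponential trust decay: the further a node's update diverges from
    # the previous global model, the lower its new trust score.
    divergence = float(np.sum((local_model - prev_global) ** 2))
    return alpha * trust + (1.0 - alpha) * np.exp(-beta * divergence)

def trust_weighted_aggregate(local_models, trust_scores, data_sizes):
    # Weighted average of local models, where each node's weight is its
    # trust score times its local dataset size, normalized to sum to 1.
    weights = np.array(trust_scores, dtype=float) * np.array(data_sizes, dtype=float)
    weights /= weights.sum()
    models = np.asarray(local_models, dtype=float)  # shape (N, d)
    return weights @ models                         # shape (d,)
```

<p>Under these assumed defaults, a node whose update diverges sharply from the previous global model sees its trust score shrink toward a fraction of its old value, which in turn shrinks its influence on the next aggregation round.</p>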
<p>This paper proposes FedCognis, a secure and adaptive federated AI model for continuous anomaly detection in Industrial IoT-enabled Cognitive Cities (IIoTCC). While federated learning frameworks have advanced, they still face critical limitations, including security vulnerabilities, concept drift, adversarial attacks, and communication inefficiencies. FedCognis addresses these challenges by integrating Quantum Secure Authentication (QSA) and Self-Attention LSTM (SALSTM) networks to enhance model integrity, scalability, and real-time responsiveness. The framework also introduces a multi-objective optimization strategy to balance security, computational efficiency, and communication overhead. Validated on real-world IIoTCC data, FedCognis aims to deliver a resilient, high-accuracy solution tailored for dynamic industrial environments.</p>
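<p>The self-attention component of SALSTM can likewise be sketched as single-head scaled dot-product attention over a sequence of hidden states (e.g., LSTM outputs). This is a hedged illustration under simplifying assumptions, not the authors' SALSTM architecture: it omits learned query/key/value projections, uses a single head, and uses NumPy in place of a deep learning framework.</p>

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(h):
    # Single-head scaled dot-product self-attention over a sequence of
    # hidden states h of shape (seq_len, d). Each output position is a
    # weighted mix of all time steps, letting the detector emphasize the
    # steps most relevant to an anomaly decision.
    d = h.shape[-1]
    scores = h @ h.T / np.sqrt(d)       # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ h, weights
```

<p>In a full SALSTM, the attended sequence would feed a classification head; here the point is only that attention re-weights long-range temporal context rather than relying on the final LSTM state alone.</p>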
<p><bold>Contributions</bold></p>
<p>FedCognis, the adaptive federated AI model proposed in this study, integrates Quantum Secure Authentication (QSA) and Self-Attention LSTM (SALSTM) to enable secure and continuous anomaly detection in IIoT-enabled Cognitive Cities (IIoTCC). The main contributions of this research are:
<list list-type="bullet">
<list-item>
<p>The design of an adaptive federated learning framework combined with a quantum-secure authentication mechanism to defend against adversarial attacks, mitigate concept drift, and ensure model integrity in decentralized IIoTCC networks.</p></list-item>
<list-item>
<p>The integration of a self-attention-based LSTM network that improves anomaly detection accuracy and reduces false positives in large-scale IIoTCC environments.</p></list-item>
<list-item>
<p>A multi-objective optimization approach and empirical evaluation on real-world datasets, demonstrating FedCognis&#x2019;s superior resilience, scalability, and communication efficiency compared to existing solutions.</p></list-item>
</list></p>
<p>The rest of the paper is organized as follows. <xref ref-type="sec" rid="s2">Section 2</xref> provides a comprehensive literature review on anomaly detection in IIoT-enabled Cognitive Cities (IIoTCC), federated learning, and quantum-secure authentication. <xref ref-type="sec" rid="s3">Section 3</xref> presents the research methodology, including the proposed adaptive federated AI framework, security mechanisms, and model formulation. <xref ref-type="sec" rid="s4">Section 4</xref> discusses the experimental results and evaluates the performance of FedCognis in terms of anomaly detection accuracy, security resilience, and communication efficiency. <xref ref-type="sec" rid="s5">Section 5</xref> concludes the study by summarizing the key findings and contributions and outlines future directions for advancing secure and adaptive federated learning in IIoTCC environments.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Literature Review</title>
<sec id="s2_1">
<label>2.1</label>
<title>Adaptive Federated Learning for Anomaly Detection in IIoTCC</title>
<p>Federated Learning (FL) is increasingly being integrated into Internet of Things (IoT) infrastructures, particularly within smart city environments, as a promising approach for enhancing data privacy and security. Pandya et al. provide a comprehensive survey on how FL supports collaborative model training while preserving sensitive data [<xref ref-type="bibr" rid="ref-11">11</xref>]. Ullah and Kim propose an IoT-enabled anomaly detection system based on a hybrid architecture of 2D Convolutional Neural Networks (CNN) and Echo State Networks (ESN), demonstrating how AIoT can process vast amounts of surveillance data [<xref ref-type="bibr" rid="ref-12">12</xref>]. In another study, the same authors examine the integration of FL and IoT, identifying key challenges and proposing solutions to secure FL-IoT convergence in smart city applications [<xref ref-type="bibr" rid="ref-13">13</xref>]. Jiang and Kantarci discuss the applicability of FL for distributed sensing in urban environments, outlining both challenges and opportunities [<xref ref-type="bibr" rid="ref-14">14</xref>]. Prabowo et al. review various anomaly detection methods in smart cities and emphasize the need for robust mechanisms to preserve system integrity [<xref ref-type="bibr" rid="ref-15">15</xref>]. Additionally, Rani et al. present a modern survey on IoT technologies and practices that form the foundation of intelligent urban infrastructure [<xref ref-type="bibr" rid="ref-16">16</xref>]. Together, these studies highlight the vital link between FL and IoT integration in the development of secure, efficient, and intelligent Smart City systems.</p>
<p>For anomaly detection in Industrial IoT-enabled Cognitive Cities (IIoTCC) using adaptive federated learning (AFL), extensive research has focused on improving security, efficiency, and privacy preservation. Wang et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] enhanced anomaly detection accuracy by aggregating multi-layered sensor data and reducing detection latency, though their approach remained susceptible to concept drift. Liu et al. [<xref ref-type="bibr" rid="ref-6">6</xref>] introduced a communication-efficient FL model that lowered bandwidth consumption by 20%, though it struggled with heterogeneous IIoTCC sensor data. Huong et al. [<xref ref-type="bibr" rid="ref-18">18</xref>] applied FL for cyberattack detection in industrial control systems, demonstrating strong resilience against poisoning attacks at the expense of reduced model sensitivity. Mothukuri et al. [<xref ref-type="bibr" rid="ref-19">19</xref>] proposed an FL-based IoT security framework that improved detection rates by 18%, yet remained vulnerable to adversarial sample injections. Rashid et al. [<xref ref-type="bibr" rid="ref-20">20</xref>] combined FL with deep learning for intrusion detection, achieving a 92.3% detection rate and showcasing the value of adaptive mechanisms in countering evolving cyber threats.</p>
<p>Despite reducing the success rate of poisoning attacks by 37%, the approach by Weinger et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] incurred high computational overhead, which limited the integration of lightweight encryption into FL systems. Li et al. [<xref ref-type="bibr" rid="ref-22">22</xref>] proposed a multi-tentacle FL model to mitigate the impact of adaptive poisoning attacks, achieving similar resilience but with greater computational demands. Poorazad et al. [<xref ref-type="bibr" rid="ref-15">15</xref>] introduced a buffered FL framework that effectively minimized privacy risks, though it introduced synchronization delays. To improve security at the cost of increased detection latency, Taheri et al. [<xref ref-type="bibr" rid="ref-23">23</xref>] developed a federated malware detection system tailored for IIoTCC. Truong et al. [<xref ref-type="bibr" rid="ref-24">24</xref>] offered a lightweight FL model for real-time anomaly detection with high accuracy; however, it becomes less feasible in environments with a large number of servers. Collectively, these studies highlight the potential of AFL to enhance IIoTCC security, while also revealing trade-offs in scalability, security, and computational efficiency.</p>
<p>In parallel, self-attention mechanisms have proven effective in enhancing anomaly detection. Jiang et al. [<xref ref-type="bibr" rid="ref-5">5</xref>] proposed the ALAE model, leveraging a self-attention reconstruction network for multivariate time-series anomaly detection. Mishra et al. [<xref ref-type="bibr" rid="ref-4">4</xref>] introduced an attention-powered Bi-LSTM model that improved temporal anomaly detection in IoT traffic. Rong et al. [<xref ref-type="bibr" rid="ref-1">1</xref>] and Xie et al. [<xref ref-type="bibr" rid="ref-2">2</xref>] also demonstrated the effectiveness of self-attention-based architectures in domains such as QAR data and pump systems, respectively, showing notable improvements in detection precision and robustness. These findings support the use of SALSTM in FedCognis to capture long-term dependencies in complex IIoTCC environments.</p>
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Quantum Secure Authentication for Federated Learning Security</title>
<p>Quantum Secure Authentication (QSA) has emerged as a promising means of hardening Federated Learning (FL) against unauthorized access and adversarial threats. QFDSA [<xref ref-type="bibr" rid="ref-25">25</xref>] is a quantum-secured FL system for dynamic security assessment in smart grids that strengthens authentication robustness and reduces adversarial attack success rates by 42%; however, its high computational cost makes it infeasible for large-scale Industrial Internet of Things applications. For 6G networks, Javeed et al. [<xref ref-type="bibr" rid="ref-26">26</xref>] studied quantum-empowered FL with enhanced privacy protection for IoT security, but high latency renders it impractical for real-time use. Kannan et al. [<xref ref-type="bibr" rid="ref-27">27</xref>] proposed a quantum-safe FL framework that incorporates lattice-based encryption to preserve privacy against quantum attacks, but it demands substantial computational resources. Aljrees et al. [<xref ref-type="bibr" rid="ref-28">28</xref>] proposed a sustainable FL model based on the Quondam Signature Algorithm that reduces computational overhead by 30% while preserving encryption efficiency. Despite these advances, scaling quantum authentication mechanisms in federated settings remains a major concern. Qiao et al. [<xref ref-type="bibr" rid="ref-29">29</xref>] provided a comprehensive survey of the transition from classical FL to Quantum Federated Learning (QFL), identifying the need for post-quantum cryptographic techniques in future IoT security stacks as a key departure from current studies. Yamany et al. [<xref ref-type="bibr" rid="ref-30">30</xref>] developed an optimized quantum-based FL framework (OQFL) for intelligent transportation systems that reduces adversary effectiveness by 45%, though its infrastructure requirements hinder deployment. Zhang et al. [<xref ref-type="bibr" rid="ref-31">31</xref>] presented a post-quantum secure federated learning (PQSF) model that is more resilient but exhibits longer model convergence times. Veeramachaneni [<xref ref-type="bibr" rid="ref-32">32</xref>] proposed a dynamic resource allocation framework for quantum cryptography-based FL that improves the resilience of secure IoT communications at the cost of higher computational complexity. Collectively, these studies demonstrate the potential of QSA to enhance FL security, but open challenges remain in scalability, computational efficiency, and real-time implementation [<xref ref-type="bibr" rid="ref-33">33</xref>]. <xref ref-type="table" rid="table-1">Table 1</xref> presents a comparative analysis of adaptive federated learning and quantum secure authentication techniques in IIoTCC.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Comparative analysis of adaptive federated learning and quantum secure authentication techniques in IIoTCC</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th align="center">Reference</th>
<th align="center">Technique</th>
<th align="center">Key findings</th>
<th align="center">Limitations</th>
<th align="center">Relevance to FedCognis</th>
</tr>
</thead>
<tbody>
<tr>
<td>Liu et al. (2020) [<xref ref-type="bibr" rid="ref-6">6</xref>]</td>
<td>Communication-efficient Federated Learning with model compression</td>
<td>Reduced bandwidth usage by 20%, ensuring efficient anomaly detection</td>
<td>Struggled with heterogeneous sensor data, limiting generalization</td>
<td>Demonstrates the necessity of efficient communication strategies for federated IIoTCC systems</td>
</tr>
<tr>
<td>Wang et al. (2021) [<xref ref-type="bibr" rid="ref-17">17</xref>]</td>
<td>Hierarchical Federated Learning for IIoTCC anomaly detection</td>
<td>Improved detection latency and accuracy in large-scale IIoTCC networks</td>
<td>Concept drift led to long-term accuracy degradation</td>
<td>Highlights the need for adaptive learning mechanisms to maintain model accuracy over time</td>
</tr>
<tr>
<td>Mothukuri et al. (2021) [<xref ref-type="bibr" rid="ref-19">19</xref>]</td>
<td>Federated Learning-based anomaly detection for IoT security</td>
<td>Improved anomaly detection rate by 18% over conventional models</td>
<td>Susceptible to adversarial sample injection</td>
<td>Reinforces the importance of integrating security mechanisms within FL for IIoTCC</td>
</tr>
<tr>
<td>Ren et al. (2023) [<xref ref-type="bibr" rid="ref-25">25</xref>]</td>
<td>Quantum-Secured Federated Learning (QFDSA) for secure model updates</td>
<td>Reduced adversarial attack success rates by 42%</td>
<td>High computational overhead for large-scale IIoTCC deployment</td>
<td>Supports the integration of quantum-secured authentication for model integrity in FedCognis</td>
</tr>
<tr>
<td>Javeed et al. (2024) [<xref ref-type="bibr" rid="ref-26">26</xref>]</td>
<td>Quantum-empowered FL for privacy-preserving IIoTCC security in 6G networks</td>
<td>Enhanced privacy protection and federated model resilience</td>
<td>High latency limited real-time applicability</td>
<td>Demonstrates the necessity of balancing security and real-time processing in federated IIoTCC systems</td>
</tr>
<tr>
<td>Zhang et al. (2024) [<xref ref-type="bibr" rid="ref-31">31</xref>]</td>
<td>Post-Quantum Secure Federated Learning (PQSF) with cryptographic enhancements</td>
<td>Strengthened FL model resilience against quantum threats</td>
<td>Increased training time due to cryptographic overhead</td>
<td>Validates the need for post-quantum security measures to enhance federated model robustness</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Research Gap</title>
<p>Current federated learning (FL) frameworks for anomaly detection in IIoT-enabled Cognitive Cities (IIoTCC) often struggle to secure data effectively. They are vulnerable to poisoning attacks, leading to a gradual decline in model accuracy. While Quantum Secure Authentication (QSA) offers improved security, it introduces significant computational overhead, making it unsuitable for real-time applications. Similarly, post-quantum cryptographic methods enhance protection but result in longer training times. Moreover, existing solutions lack adaptive mechanisms capable of learning continuously from evolving IIoTCC data while maintaining communication efficiency. The scalability of these systems is hindered by the ongoing trade-off between security and performance. Given these challenges, a unified framework that combines adaptive federated AI, strong authentication, and real-time anomaly detection remains largely unexplored.</p>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Methodology</title>
<p>This section details the methodology used to develop and evaluate FedCognis, the framework proposed for anomaly detection in Industrial IoT-enabled Cognitive Cities (IIoTCC). It describes the dataset used for training and testing, including how it was collected and preprocessed, and outlines the system model and its underlying assumptions. The architecture of the proposed model is explained in depth, highlighting its core components: federated learning, Quantum Secure Authentication (QSA), and Self-Attention Long Short-Term Memory (SALSTM). Finally, we present the algorithm that drives the model, illustrating each step of the anomaly detection process and how the integrated security measures enhance the system&#x2019;s resilience and reliability.</p>
<sec id="s3_1">
<label>3.1</label>
<title>Problem Formulation: Security and Adaptability in Federated Learning for IIoTCC</title>
<p>Federated Learning (FL) in IIoTCC networks faces security threats in the form of adversarial attacks, model poisoning, and Byzantine failures, all of which compromise anomaly detection. In addition, FL models suffer from concept drift because IIoTCC data streams are dynamic. The problem is therefore to design a secure and adaptive FL framework that withstands adversarial threats, communicates efficiently, and maintains high anomaly detection accuracy while preserving model integrity.</p>
<p>Consider an IIoTCC system consisting of <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>N</mml:mi></mml:math></inline-formula> federated nodes, each denoted as <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mi>i</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow></mml:math></inline-formula>, where <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula>. Each node maintains a local model <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> trained on a private dataset <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, and updates are aggregated via weighted averaging:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:math></disp-formula>where <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> represents the trust score of node <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi>i</mml:mi></mml:math></inline-formula> at iteration <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>t</mml:mi></mml:math></inline-formula>, dynamically updated as:
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x2225;</mml:mo><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:math></disp-formula>where <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mi>&#x03B1;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> are decay parameters controlling trust adaptation. Adversarial nodes attempt to maximize divergence:
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:munder><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover><mml:mi>&#x03B8;</mml:mi><mml:mo stretchy="false">&#x007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mi>d</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover><mml:mi>&#x03B8;</mml:mi><mml:mo stretchy="false">&#x007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup><mml:mrow><mml:mi mathvariant="double-struck">I</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x003C;</mml:mo><mml:mi>&#x03C4;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:math></disp-formula>where <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula> controls adversarial impact and <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mi>&#x03C4;</mml:mi></mml:math></inline-formula> is a threshold for adversarial detection. The adaptive optimization framework minimizes the following objective:
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi></mml:mi><mml:munder><mml:mo movablelimits="true" form="prefix">min</mml:mo><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:munder><mml:mspace width="1em" /><mml:msub><mml:mrow><mml:mi mathvariant="double-struck">E</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mi mathvariant="normal">&#x2216;</mml:mi><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mi>d</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow></mml:msub><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>&#x02112;</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:mo>&#x2225;</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:munder><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mi>d</mml:mi><mml:mi>v</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mover><mml:mi>&#x03B8;</mml:mi><mml:mo stretchy="false">&#x007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mrow><mml:mtext>s</mml:mtext></mml:mrow><mml:mo>.</mml:mo><mml:mrow><mml:mtext>t</mml:mtext></mml:mrow><mml:mo>.</mml:mo><mml:mspace width="1em" /><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo>&#x2264;</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2264;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mi mathvariant="normal">&#x2200;</mml:mi><mml:mi>i</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi></mml:mi><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B4;</mml:mi><mml:mo>&#x2225;</mml:mo><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup><mml:mo>&#x003E;</mml:mo><mml:msub><mml:mi>&#x03C4;</mml:mi><mml:mrow><mml:mrow><mml:mtext>safe</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mi>&#x03B4;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula> are regularization parameters and <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:msub><mml:mi>&#x03C4;</mml:mi><mml:mrow><mml:mrow><mml:mtext>safe</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> is the minimum trust threshold.
<list list-type="bullet">
<list-item>
<p><inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow></mml:math></inline-formula>: Set of federated IIoTCC nodes.</p></list-item>
<list-item>
<p><inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>: Local model parameters at node <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mi>i</mml:mi></mml:math></inline-formula> and iteration <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mi>t</mml:mi></mml:math></inline-formula>.</p></list-item>
<list-item>
<p><inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula>: Global model parameters at iteration <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mi>t</mml:mi></mml:math></inline-formula>.</p></list-item>
<list-item>
<p><inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>: Private dataset of node <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mi>i</mml:mi></mml:math></inline-formula>.</p></list-item>
<list-item>
<p><inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:msub><mml:mrow><mml:mover><mml:mi>&#x03B8;</mml:mi><mml:mo stretchy="false">&#x007E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>: Malicious updates from adversarial nodes.</p></list-item>
<list-item>
<p><inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>: Trust score of node <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mi>i</mml:mi></mml:math></inline-formula> at iteration <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mi>t</mml:mi></mml:math></inline-formula>.</p></list-item>
<list-item>
<p><inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>: Regularization parameters for anomaly detection robustness.</p></list-item>
<list-item>
<p><inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mi>&#x03B3;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B4;</mml:mi></mml:math></inline-formula>: Control parameters for adversarial and trust behavior.</p></list-item>
<list-item>
<p><inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mrow><mml:mi mathvariant="double-struck">I</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>: Indicator function for adversarial detection.</p></list-item>
</list></p>
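<p>As an illustration, the trust dynamics of Eq. (2) can be sketched in a few lines of NumPy. This is a minimal sketch rather than the FedCognis implementation; the function name <monospace>update_trust</monospace> and the default values of &#x03B1; and &#x03B2; are illustrative assumptions.</p>
<preformat>
```python
import numpy as np

def update_trust(trust, local_thetas, prev_global, alpha=0.9, beta=0.5):
    """Trust update of Eq. (2): a node whose local update diverges from the
    previous global model sees its trust score decay exponentially."""
    # Squared L2 divergence of each local model from the previous global model
    divergences = np.array([np.sum((th - prev_global) ** 2) for th in local_thetas])
    return alpha * np.asarray(trust) + (1.0 - alpha) * np.exp(-beta * divergences)
```
</preformat>
<p>With &#x03B1; = 0.9, an honest node whose update coincides with the previous global model moves toward full trust, while a strongly divergent node decays toward zero trust over successive rounds.</p>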
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Dataset Collection and Description</title>
<p>This research utilizes the WUSTL-IIoTCC-2021 dataset, developed by Washington University in St. Louis, specifically for cybersecurity studies in Industrial IoT-enabled Cognitive Cities (IIoTCC) environments. The dataset is derived from a realistic IIoTCC testbed designed to emulate industrial systems, capturing network traffic from simulated scenarios involving industrial control systems and IoT devices. It is widely used in academic research and has become a standard benchmark for evaluating the security and effectiveness of anomaly detection models in IIoTCC settings.</p>
<p>Collected over 53 continuous hours, the dataset includes both normal operation data and multiple types of cyberattacks. This diversity makes it a valuable resource for testing the robustness of detection frameworks. Key features of the WUSTL-IIoTCC-2021 dataset include:
<list list-type="bullet">
<list-item>
<p><bold>Size:</bold> Approximately 2.7 GB of data collected during real-time operation.</p></list-item>
<list-item>
<p><bold>Observations:</bold> A total of 1,194,464 samples, including 1,107,448 normal traffic instances and 87,016 attack instances. The significant class imbalance is ideal for evaluating the sensitivity of anomaly detection models.</p></list-item>
<list-item>
<p><bold>Features:</bold> The dataset comprises 41 attributes, including device identifiers, IP addresses, packet sizes, and timestamps. These features are critical for detecting irregular patterns in IIoTCC traffic.</p></list-item>
<list-item>
<p><bold>Attack Scenarios:</bold> It simulates various cyber threats such as Denial of Service (DoS), Command Injection, Reconnaissance, and Backdoor attacks&#x2014;reflecting real-world vulnerabilities in IIoTCC systems.</p></list-item>
<list-item>
<p><bold>Environment Simulation:</bold> Generated from a dedicated IIoTCC testbed, the dataset replicates industrial environments with sensors, actuators, and protocols commonly used in smart factories and critical infrastructure.</p></list-item>
</list></p>
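<p>As a quick sanity check on the reported class balance, the counts listed above imply that attacks account for roughly 7% of all samples. The small computation below is illustrative only; the counts are taken directly from the list above.</p>
<preformat>
```python
# Class counts reported for WUSTL-IIoTCC-2021 (Section 3.2)
normal_count = 1_107_448
attack_count = 87_016
total = normal_count + attack_count            # 1,194,464 samples in total
attack_share = attack_count / total            # about 7.3% of traffic is attacks
imbalance_ratio = normal_count / attack_count  # roughly 12.7 normal flows per attack flow
```
</preformat>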
<p>The choice of the WUSTL-IIoTCC-2021 dataset is supported by the attribute overview provided in <xref ref-type="table" rid="table-2">Table 2</xref>. The dataset&#x2019;s direct relevance to IIoTCC domains makes it particularly well suited for evaluating the FedCognis framework, which targets anomaly detection and security enhancement. Its diverse attack coverage also aligns with the security focus of this work, in particular the integration of Quantum Secure Authentication (QSA) to guard against model poisoning and data tampering. Finally, the realism of the testbed environment supports practical assessment of the safety, scalability, and applicability of FedCognis in real-world IIoTCC deployments.</p>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Attributes of the WUSTL-IIoTCC-2021 dataset</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th align="center">Attribute</th>
<th align="center">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Device ID</td>
<td>Unique identifier for each device in the IIoTCC network.</td>
</tr>
<tr>
<td>IP Address</td>
<td>The IP address associated with each device or node in the network.</td>
</tr>
<tr>
<td>Flow ID</td>
<td>A unique identifier for each communication flow between devices.</td>
</tr>
<tr>
<td>Timestamp</td>
<td>The time at which the data packet or event occurred.</td>
</tr>
<tr>
<td>Packet Size</td>
<td>Size of the data packet sent over the network.</td>
</tr>
<tr>
<td>Protocol Type</td>
<td>The network protocol used (e.g., TCP, UDP).</td>
</tr>
<tr>
<td>Source Port</td>
<td>The source port number for the communication.</td>
</tr>
<tr>
<td>Destination Port</td>
<td>The destination port number for the communication.</td>
</tr>
<tr>
<td>Source Bytes</td>
<td>The number of bytes sent from the source device.</td>
</tr>
<tr>
<td>Destination Bytes</td>
<td>The number of bytes sent to the destination device.</td>
</tr>
<tr>
<td>Flow Duration</td>
<td>The duration of the communication flow.</td>
</tr>
<tr>
<td>Flow Bytes</td>
<td>The total number of bytes in the communication flow.</td>
</tr>
<tr>
<td>Packet Count</td>
<td>The total number of packets in the communication flow.</td>
</tr>
<tr>
<td>Flow IAT Mean</td>
<td>The mean inter-arrival time between packets in a flow.</td>
</tr>
<tr>
<td>Flow IAT Std</td>
<td>The standard deviation of the inter-arrival time.</td>
</tr>
<tr>
<td>Attack Type</td>
<td>The type of attack (if any) during the communication (e.g., DoS, Command Injection).</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Dataset Preprocessing</title>
<p>Preprocessing the dataset is an essential step before training the FedCognis model, ensuring that the data is well prepared. It involves handling missing data, normalizing features, selecting relevant features, and splitting the dataset. These steps improve the model&#x2019;s performance in detecting anomalies and help it learn effectively from the data.</p>
<sec id="s3_3_1">
<label>3.3.1</label>
<title>Handling Missing Data</title>
<p>Missing values are common in real-world datasets. For the WUSTL-IIoTCC-2021 dataset, we handle them using mean imputation, a technique that replaces each missing value with the mean of the corresponding feature. Given a feature <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> with missing values, the imputed value for the missing data point <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is computed as:
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>g</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula>where <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mi>n</mml:mi></mml:math></inline-formula> is the number of available values for feature <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. The benefit of this approach is that no data points are discarded, which preserves the size of the dataset and avoids the bias that removing incomplete records could introduce.</p>
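<p>Mean imputation as described above can be sketched with NumPy. This is an illustrative snippet, not the authors&#x2019; code, and the function name <monospace>mean_impute</monospace> is an assumption.</p>
<preformat>
```python
import numpy as np

def mean_impute(X):
    """Replace each NaN with the mean of its feature (column), per Eq. (7)."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)   # per-feature mean over observed values only
    rows, cols = np.where(np.isnan(X))  # positions of missing entries
    X[rows, cols] = col_means[cols]
    return X
```
</preformat>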
</sec>
<sec id="s3_3_2">
<label>3.3.2</label>
<title>Normalization</title>
<p>Min-Max normalization ensures that all features lie on a common scale and do not unfairly influence the model. Each feature is rescaled to the range [0, 1]. For a feature <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> with minimum value <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> and maximum value <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>, the normalized value <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is calculated as:
<disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>m</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2212;</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msubsup></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>This normalization ensures that no single feature overpowers the others or contributes disproportionately to the learning process simply because of its scale.</p>
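<p>The Min-Max scaling of Eq. (8) can be sketched as follows. This is illustrative only; the guard for constant-valued features is an added assumption, since Eq. (8) is undefined when the minimum equals the maximum.</p>
<preformat>
```python
import numpy as np

def min_max_scale(X):
    """Rescale each feature (column) of X into [0, 1], per Eq. (8)."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max == x_min, 1.0, x_max - x_min)  # guard constant features
    return (X - x_min) / span
```
</preformat>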
</sec>
<sec id="s3_3_3">
<label>3.3.3</label>
<title>Feature Extraction</title>
<p>To enhance the performance of the anomaly detection model, we apply Principal Component Analysis (PCA) for feature extraction. PCA reduces the dimensionality of the data while retaining as much variance as possible. Let <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mi>X</mml:mi><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>m</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> be the original data matrix with <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mi>n</mml:mi></mml:math></inline-formula> samples and <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mi>m</mml:mi></mml:math></inline-formula> features. The principal components are the eigenvectors of the covariance matrix <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mrow><mml:mi mathvariant="normal">&#x03A3;</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mfrac><mml:msup><mml:mi>X</mml:mi><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:mi>X</mml:mi></mml:math></inline-formula>, where the eigenvectors correspond to the directions of maximum variance in the dataset. The first few principal components, <inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>v</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>, are used to project the data into a lower-dimensional space. 
The dimensionality-reduced data <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mi>u</mml:mi><mml:mi>c</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is computed as:
<disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mi>u</mml:mi><mml:mi>c</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>X</mml:mi><mml:mo>&#x22C5;</mml:mo><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula>where <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:msub><mml:mi>V</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the matrix of the top <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mi>k</mml:mi></mml:math></inline-formula> eigenvectors. In this dimensionality reduction, the most important features for anomaly detection are the focus, which improves the computational efficiency of the FedCognis model.</p>
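<p>The PCA projection of Eq. (9) can be sketched with NumPy&#x2019;s eigendecomposition. This is a minimal illustration under the usual assumption that the data is mean-centered before the covariance is formed; the function name <monospace>pca_reduce</monospace> is hypothetical.</p>
<preformat>
```python
import numpy as np

def pca_reduce(X, k):
    """Project mean-centered X onto the top-k covariance eigenvectors, per Eq. (9)."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)    # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:k]      # indices of the k largest eigenvalues
    Vk = eigvecs[:, top]                     # top-k principal directions
    return Xc @ Vk
```
</preformat>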
</sec>
<sec id="s3_3_4">
<label>3.3.4</label>
<title>Data Splitting</title>
<p>We split the dataset into training, validation, and test sets for training and evaluating the FedCognis model. The data is split randomly in an 80:10:10 ratio. Assuming the dataset is represented as <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mi>D</mml:mi><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula>. The training set <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mrow><mml:mtext>train</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula>, validation set <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mrow><mml:mtext>val</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula>, and test set <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mrow><mml:mtext>test</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> are defined as follows:
<disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mrow><mml:mtext>train</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mrow><mml:mo>&#x230A;</mml:mo><mml:mn>0.8</mml:mn><mml:mi>n</mml:mi><mml:mo>&#x230B;</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>}</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mrow><mml:mtext>val</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mrow><mml:mo>&#x230A;</mml:mo><mml:mn>0.8</mml:mn><mml:mi>n</mml:mi><mml:mo>&#x230B;</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mrow><mml:mo>&#x230A;</mml:mo><mml:mn>0.9</mml:mn><mml:mi>n</mml:mi><mml:mo>&#x230B;</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>}</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:msub><mml:mi>D</mml:mi><mml:mrow><mml:mrow><mml:mtext>test</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mrow><mml:mo>&#x230A;</mml:mo><mml:mn>0.9</mml:mn><mml:mi>n</mml:mi><mml:mo>&#x230B;</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>}</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>This split ensures that the model is tested on data it has not seen during training, resulting in a fairer and more realistic assessment of its performance.</p>
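<p>The 80:10:10 split of Eqs. (10)&#x2013;(12) can be sketched as follows. This is illustrative; the shuffling seed is an assumption, since Eqs. (10)&#x2013;(12) index the samples after a random permutation.</p>
<preformat>
```python
import numpy as np

def split_indices(n, seed=42):
    """Shuffle n sample indices and carve out 80/10/10 train/val/test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(0.8 * n)   # floor(0.8 n), Eq. (10)
    n_val = int(0.9 * n)     # floor(0.9 n), Eqs. (11)-(12)
    return idx[:n_train], idx[n_train:n_val], idx[n_val:]
```
</preformat>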
</sec>
<sec id="s3_3_5">
<label>3.3.5</label>
<title>Feature Scaling for Model Convergence</title>
<p>Because we use Self-Attention Long Short-Term Memory (SALSTM) networks for anomaly detection, all features must be properly scaled before training. Standardization is applied to each feature so that the data has a mean of 0 and a standard deviation of 1. The standardized value <inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> of a feature <inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is calculated as:
<disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:msubsup><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>s</mml:mi><mml:mi>t</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:msub><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mfrac></mml:math></disp-formula>where <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:msub><mml:mi>&#x03BC;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the mean of feature <inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:msub><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is its standard deviation. This transformation speeds up model convergence because every feature is centered around 0 and contributes gradients on a comparable scale.</p>
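<p>The standardization of Eq. (13) can be sketched as follows. This is illustrative; the guard for zero-variance features is an added assumption, since Eq. (13) is undefined when a feature is constant.</p>
<preformat>
```python
import numpy as np

def standardize(X):
    """Z-score each feature per Eq. (13): zero mean, unit standard deviation."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)  # guard constant features
    return (X - mu) / sigma
```
</preformat>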
<p>Finally, the FedCognis model is trained on a high-quality, well-prepared dataset obtained by handling missing data, normalization, feature extraction, data splitting, and feature scaling. Correct preprocessing enables the model to generalize well to unseen data and to adapt to the dynamic nature of IIoTCC systems, resulting in higher accuracy and robustness of anomaly detection in real-world applications.</p>
</sec>
</sec>
<sec id="s3_4">
<label>3.4</label>
<title>System Model and Assumptions</title>
<p>This section presents the system model underlying the FedCognis framework for anomaly detection in IIoTCC environments. It outlines the structural design of the network, the federated learning process, and the assumptions made during development. These include data availability at each node, communication constraints, potential adversarial behavior, and the need for model adaptability in response to concept drift.</p>
<sec id="s3_4_1">
<label>3.4.1</label>
<title>System Model</title>
<p>The system model consists of a network of IIoTCC devices (nodes), each of which generates and transmits sensor data. In the Federated Learning (FL) architecture, each IIoTCC device trains its own local model on its private data. Periodically, the local models are aggregated, and the resulting global model is used to detect anomalies in real time.</p>
<p>It is possible to represent the IIoTCC network as a set of nodes <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula>, where each node <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:mi>i</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow></mml:math></inline-formula> collects sensor data <inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> and trains a local anomaly detection model <inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> based on its data. The local models are aggregated using a weighted average approach to update the global model <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>, as shown below:
<disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula>where <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is the trust score of node <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:mi>i</mml:mi></mml:math></inline-formula> at iteration <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:mi>t</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-62"><mml:math 
id="mml-ieqn-62"><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:math></inline-formula> is the size of the local dataset at node <inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:mi>i</mml:mi></mml:math></inline-formula>. The purpose of this aggregation process is to allow the global model to incorporate the learned knowledge from all participating nodes without sharing sensitive data.</p>
</sec>
<sec id="s3_4_2">
<label>3.4.2</label>
<title>Key Assumptions</title>
<p>The development of the system model rests on the following assumptions.
<list list-type="bullet">
<list-item>
<p><bold>Data Availability:</bold> Each IIoTCC node has access to its own sensing data for local model training. Each local dataset is assumed to be sufficient for training a meaningful anomaly detection model.</p></list-item>
<list-item>
<p><bold>Federated Learning Setup:</bold> Nodes collaborate in training a global model without sharing raw data. Only model updates (i.e., model parameters) are exchanged between each node and the central server (or aggregator).</p></list-item>
<list-item>
<p><bold>Security Threats:</bold> Adversarial nodes may inject malicious updates to corrupt the global model. Quantum Secure Authentication (QSA), combined with a trust-based aggregation mechanism, is used to filter out such malicious contributions.</p></list-item>
<list-item>
<p><bold>Communication Constraints:</bold> The communication network between nodes is assumed to be resource-constrained, which necessitates lightweight communication protocols and low-overhead model updates.</p></list-item>
<list-item>
<p><bold>Concept Drift:</bold> The data distribution is assumed to change over time (concept drift), which can degrade the anomaly detection model. The model is therefore designed to adapt to such changes.</p></list-item>
<list-item>
<p><bold>Anomaly Detection Goal:</bold> The primary goal of the system is to detect anomalies in the sensor data of IIoTCC nodes, whether they arise from faults, attacks, or other unusual events, both accurately and in a timely manner.</p></list-item>
<list-item>
<p><bold>Local Processing:</bold> Each IIoTCC node processes its data and trains its model locally. This reduces the burden on the central server and improves privacy by keeping sensitive data on the local nodes.</p></list-item>
</list></p>
<p>These assumptions form the basis of the design and implementation of the FedCognis framework, from which the system model is derived. The following sections detail the proposed model, including how local models are trained, how the global model is updated, and how security and adaptability are maintained.</p>
</sec>
</sec>
<sec id="s3_5">
<label>3.5</label>
<title>Proposed FedCognis Framework</title>
<p>FedCognis is the proposed adaptive federated learning framework designed to address anomaly detection challenges in Industrial IoT-enabled Cognitive Cities (IIoTCC). It offers a continuous and intelligent approach to identifying anomalies in dynamic industrial environments. By integrating advanced technologies&#x2014;namely Federated Learning, Quantum Secure Authentication (QSA), and Self-Attention Long Short-Term Memory (SALSTM)&#x2014;FedCognis provides a robust and secure mechanism for real-time anomaly detection. The synergy of these components enables the framework to learn adaptively from distributed IIoTCC data while maintaining high levels of security and resilience against evolving threats.</p>
<p><xref ref-type="fig" rid="fig-2">Fig. 2</xref> presents the layered architecture of FedCognis, beginning with local IIoT nodes that perform edge-level inference. These are followed by a secure model authentication layer utilizing Quantum Secure Authentication (QSA), and ultimately lead to centralized aggregation powered by Self-Attention LSTM (SALSTM). The seamless integration between the edge and core layers enables privacy-preserving learning while maintaining adaptability and high detection accuracy. Each component in the architecture&#x2014;such as the trust evaluator, cryptographic verifier, and anomaly predictor&#x2014;represents a distinct functional module, aligning with realistic deployment scenarios in IIoTCC environments.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Layered architecture of FedCognis, illustrating edge IIoT devices, federated learning, and a security module with Quantum Secure Authentication (QSA) for secure and adaptive model aggregation</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-2.tif"/>
</fig>
<sec id="s3_5_1">
<label>3.5.1</label>
<title>Federated Learning Framework</title>
<p>In FedCognis, the anomaly detection model is trained in a federated manner. Each IIoTCC node in the network <inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:mrow><mml:mi>&#x1D4A9;</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>N</mml:mi><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></inline-formula> maintains a local model <inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> trained on its own data <inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>. Instead of sending raw data to a central server, each node sends only its model updates (i.e., the parameters) to the central server, which aggregates them to form a global model. This ensures data privacy and reduces the communication overhead.</p>
<p>At each iteration <inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:mi>t</mml:mi></mml:math></inline-formula>, the global model <inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula> is updated based on the weighted average of the local models:
<disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula>where <inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> is the trust score of node <inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:mi>i</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:math></inline-formula> is the size of the local dataset at 
node <inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:mi>i</mml:mi></mml:math></inline-formula>. This weighted aggregation ensures that nodes with more data or higher trust have a greater influence on the global model.</p>
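The trust-weighted aggregation of Eq. (15) can be sketched in plain Python. This is a minimal illustration, assuming each local model is a flat list of parameters; the function name and arguments are ours, not from the paper:

```python
def aggregate(local_models, trust_scores, dataset_sizes):
    """Trust- and size-weighted average of local model parameters (cf. Eq. (15))."""
    # Per-node weight: trust score times local dataset size.
    weights = [t * n for t, n in zip(trust_scores, dataset_sizes)]
    total = sum(weights)
    dim = len(local_models[0])
    # Normalized weighted sum of the parameter vectors forms the new global model.
    return [sum(w * theta[k] for w, theta in zip(weights, local_models)) / total
            for k in range(dim)]
```

As the formula requires, nodes with more data or a higher trust score pull the global model more strongly toward their local parameters.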
</sec>
<sec id="s3_5_2">
<label>3.5.2</label>
<title>Quantum Secure Authentication (QSA)</title>
<p>To protect the federated learning process against malicious updates, such as model poisoning, FedCognis introduces Quantum Secure Authentication (QSA). Using quantum-resistant cryptographic techniques, both the nodes and their model updates are authenticated. Under this protocol, each node&#x2019;s update is verified before being admitted to the aggregation process.</p>
<p>We utilize a lattice-based post-quantum digital signature algorithm with 256-bit keys. The average signature size is 2.3 KB per update, with a verification time of 1.7 ms. This lightweight overhead ensures real-time validation under constrained IIoTCC bandwidth without significantly delaying model updates.</p>
<p>Let <inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:msubsup><mml:mrow><mml:mi>&#x1D4AE;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> represent the authentication signature of node <inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:mi>i</mml:mi></mml:math></inline-formula> at iteration <inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:mi>t</mml:mi></mml:math></inline-formula>, generated using a quantum-resistant cryptographic algorithm. A verification step is introduced into the global model update process:
<disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:mfrac><mml:mrow><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>&#x1D49F;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>|</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mrow><mml:mi mathvariant="double-struck">I</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>V</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x1D4AE;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mtext>True</mml:mtext></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></disp-formula>where <inline-formula id="ieqn-76"><mml:math 
id="mml-ieqn-76"><mml:mi>V</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mrow><mml:mi>&#x1D4AE;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> denotes the verification function, which checks whether the update from node <inline-formula id="ieqn-77"><mml:math id="mml-ieqn-77"><mml:mi>i</mml:mi></mml:math></inline-formula> is valid based on the QSA protocol. If the signature is valid, the update is included; otherwise, it is discarded.</p>
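The gating logic of Eq. (16), in which only updates passing the verification function V are aggregated, can be sketched as follows. Note that HMAC here is only a stand-in for the paper's lattice-based post-quantum signature scheme; the point of the sketch is the verify-then-include control flow, and all names are illustrative:

```python
import hashlib
import hmac

def sign_update(key: bytes, params: list) -> bytes:
    """Sign a serialized parameter vector (HMAC stands in for the lattice-based
    post-quantum signature used by QSA)."""
    return hmac.new(key, repr(params).encode(), hashlib.sha256).digest()

def verify(key: bytes, params: list, signature: bytes) -> bool:
    """Verification function V(S_i^t, theta_i^t) from Eq. (16)."""
    return hmac.compare_digest(sign_update(key, params), signature)

def filter_valid(updates, node_keys):
    """Keep only updates whose signature verifies, mirroring the indicator
    I(V(S, theta) = True) in Eq. (16); invalid updates are discarded."""
    return [(node, params) for node, params, sig in updates
            if verify(node_keys[node], params, sig)]
```

Aggregation (e.g., the weighted average of Eq. (16)) would then run only over the filtered list.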
</sec>
<sec id="s3_5_3">
<label>3.5.3</label>
<title>Self-Attention Long Short-Term Memory (SALSTM) Network</title>
<p>FedCognis employs a Self-Attention Long Short-Term Memory (SALSTM) network to improve anomaly detection performance. This class of models captures long-range temporal dependencies in time-series sensor data, enabling it to detect anomalies, particularly long-lagged ones, that are difficult to discover with conventional machine learning.</p>
<p>Let <inline-formula id="ieqn-78"><mml:math id="mml-ieqn-78"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> represent the input sequence at time step <inline-formula id="ieqn-79"><mml:math id="mml-ieqn-79"><mml:mi>t</mml:mi></mml:math></inline-formula>, where <inline-formula id="ieqn-80"><mml:math id="mml-ieqn-80"><mml:mi>d</mml:mi></mml:math></inline-formula> is the feature dimension. The SALSTM model consists of two main components: the self-attention mechanism and the LSTM layer.</p>
<p>The self-attention mechanism computes the attention weights <inline-formula id="ieqn-81"><mml:math id="mml-ieqn-81"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> for each input sequence <inline-formula id="ieqn-82"><mml:math id="mml-ieqn-82"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> based on its relevance to previous inputs:
<disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:msub><mml:mi>&#x03B1;</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mtext>softmax</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-83"><mml:math id="mml-ieqn-83"><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>d</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> is the learned weight matrix. The softmax function ensures that the attention weights sum to 1.</p>
<p>The inputs are weighted and then passed through an LSTM layer which captures temporal dependencies:
<disp-formula id="eqn-18"><label>(18)</label><mml:math id="mml-eqn-18" display="block"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mtext>LSTM</mml:mtext></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where <inline-formula id="ieqn-84"><mml:math id="mml-ieqn-84"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the hidden state at time step <inline-formula id="ieqn-85"><mml:math id="mml-ieqn-85"><mml:mi>t</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-86"><mml:math id="mml-ieqn-86"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is the hidden state from the previous time step.</p>
<p>Finally, the SALSTM model outputs an anomaly score for each data point, computed from the difference between the expected and observed hidden states:
<disp-formula id="eqn-19"><label>(19)</label><mml:math id="mml-eqn-19" display="block"><mml:mrow><mml:mtext>Anomaly Score</mml:mtext></mml:mrow><mml:mo>=&#x2225;</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi>h</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mo stretchy="false">&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></disp-formula>where <inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:msub><mml:mrow><mml:mover><mml:mi>h</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is the predicted hidden state from the model, and <inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:mo stretchy="false">&#x2225;</mml:mo><mml:mo>&#x22C5;</mml:mo><mml:msub><mml:mo stretchy="false">&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> represents the L2 norm.</p>
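A minimal pure-Python sketch of the attention weighting of Eq. (17) and the anomaly score of Eq. (19). In practice the LSTM and attention layers would come from a deep learning library; the function names and toy dimensions here are illustrative only:

```python
import math

def softmax(scores):
    """Numerically stable softmax; output sums to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(x_t, w_q):
    """alpha_t = softmax(W_q x_t) as in Eq. (17); x_t is a d-vector, w_q a d x d matrix."""
    scores = [sum(row[k] * x_t[k] for k in range(len(x_t))) for row in w_q]
    return softmax(scores)

def anomaly_score(h_t, h_pred):
    """L2 distance between observed and predicted hidden states, Eq. (19)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h_t, h_pred)))
```

A large anomaly score signals that the observed hidden state deviates strongly from the model's expectation, flagging the corresponding data point as anomalous once a threshold is applied.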
</sec>
<sec id="s3_5_4">
<label>3.5.4</label>
<title>Adaptation to Concept Drift</title>
<p>IIoTCC environments are dynamic, and the data distribution can change over time, a phenomenon referred to as concept drift. To adapt to these changes, FedCognis continuously incorporates recent data and updates the global model. To handle concept drift, we propose recomputing the trust scores <inline-formula id="ieqn-89"><mml:math id="mml-ieqn-89"><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula> as a function of the difference between a node&#x2019;s model and the global model.</p>
<p>The trust score of node <inline-formula id="ieqn-90"><mml:math id="mml-ieqn-90"><mml:mi>i</mml:mi></mml:math></inline-formula> is updated at each iteration <inline-formula id="ieqn-91"><mml:math id="mml-ieqn-91"><mml:mi>t</mml:mi></mml:math></inline-formula> as follows:
<disp-formula id="eqn-20"><label>(20)</label><mml:math id="mml-eqn-20" display="block"><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:msubsup><mml:mi>T</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x2225;</mml:mo><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mo>&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mrow></mml:msup></mml:math></disp-formula>where <inline-formula id="ieqn-92"><mml:math id="mml-ieqn-92"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-93"><mml:math id="mml-ieqn-93"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula> are decay parameters that control how quickly trust is updated, and <inline-formula id="ieqn-94"><mml:math id="mml-ieqn-94"><mml:mo stretchy="false">&#x2225;</mml:mo><mml:msubsup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msup><mml:mo stretchy="false">&#x2225;</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> is the squared Euclidean distance between the local model and 
the previous global model. This mechanism reduces the trust scores of nodes whose models deviate significantly from the global model (for example, because of concept drift or malicious behavior), while nodes whose models remain close to the global model retain higher trust scores.</p>
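The trust update of Eq. (20) is straightforward to implement. The sketch below assumes flattened parameter vectors; the default values for the decay parameters alpha and beta are illustrative, not taken from the paper:

```python
import math

def update_trust(t_prev, theta_local, theta_global_prev, alpha=0.9, beta=1.0):
    """Eq. (20): exponentially smoothed trust score. A large squared distance
    between the local model and the previous global model drives the
    exponential term toward zero, lowering the node's trust."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(theta_local, theta_global_prev))
    return alpha * t_prev + (1 - alpha) * math.exp(-beta * dist_sq)
```

Because the exponential term is bounded in (0, 1], trust scores stay bounded, and nodes that repeatedly submit divergent updates see their influence on the aggregation of Eq. (15) shrink over successive rounds.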
<p>The workflow of FedCognis is summarized below (Algorithm 1). Each node trains its local model, computes trust scores, authenticates its update, and sends the model update to the central server. The server aggregates the updates via a trust-weighted average and updates the global model. This process repeats until convergence, yielding a global model that remains secure and adaptive to concept drift while detecting anomalies.</p>
<fig id="fig-15">
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-15.tif"/>
</fig>
</sec>
<sec id="s3_5_5">
<label>3.5.5</label>
<title>Security Mechanisms for Adversarial Protection</title>
<p>To further harden FedCognis against adversarial attacks, both model poisoning and Byzantine failures are accounted for. The adaptive trust mechanism described earlier helps identify and filter out malicious updates, so that only valid model updates contribute to the global model. In addition, the integration of QSA offers strong protection against unauthorized model updates.</p>
</sec>
<sec id="s3_5_6">
<label>3.5.6</label>
<title>Algorithm: FedCognis Workflow</title>
<p>The algorithm begins by initializing the global model <inline-formula id="ieqn-113"><mml:math id="mml-ieqn-113"><mml:mi>&#x03B8;</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
<p>In summary, each node trains its local model, computes trust scores, authenticates its update, and sends the model update to the central server. The server aggregates the updates with a trust-weighted average and refreshes the global model. These steps are repeated until convergence, yielding accurate anomaly detection while preserving security and the ability to adapt to concept drift.</p>
</sec>
</sec>
<sec id="s3_6">
<label>3.6</label>
<title>Evaluation Metrics</title>
<p>This section describes the evaluation metrics used to assess the performance of the FedCognis framework for anomaly detection. In particular, these metrics capture the accuracy, robustness, and efficiency of the model under adversarial attack and concept drift.</p>
<sec id="s3_6_1">
<label>3.6.1</label>
<title>Accuracy</title>
<p>The primary metric for evaluating the classification model is accuracy, defined as the ratio of correctly classified instances to the total number of instances. Let <inline-formula id="ieqn-114"><mml:math id="mml-ieqn-114"><mml:mi>T</mml:mi><mml:mi>P</mml:mi></mml:math></inline-formula>, <inline-formula id="ieqn-115"><mml:math id="mml-ieqn-115"><mml:mi>T</mml:mi><mml:mi>N</mml:mi></mml:math></inline-formula>, <inline-formula id="ieqn-116"><mml:math id="mml-ieqn-116"><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-117"><mml:math id="mml-ieqn-117"><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:math></inline-formula> denote the numbers of true positives, true negatives, false positives, and false negatives, respectively. The accuracy &#x201C;Acc&#x201D; is given by:
<disp-formula id="eqn-21"><label>(21)</label><mml:math id="mml-eqn-21" display="block"><mml:mrow><mml:mtext>Acc</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>where: <inline-formula id="ieqn-118"><mml:math id="mml-ieqn-118"><mml:mi>T</mml:mi><mml:mi>P</mml:mi></mml:math></inline-formula> is the number of correctly classified anomalies, <inline-formula id="ieqn-119"><mml:math id="mml-ieqn-119"><mml:mi>T</mml:mi><mml:mi>N</mml:mi></mml:math></inline-formula> is the number of correctly classified normal instances, <inline-formula id="ieqn-120"><mml:math id="mml-ieqn-120"><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:math></inline-formula> is the number of normal instances incorrectly classified as anomalies, and <inline-formula id="ieqn-121"><mml:math id="mml-ieqn-121"><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:math></inline-formula> is the number of anomalies incorrectly classified as normal instances.</p>
</sec>
<sec id="s3_6_2">
<label>3.6.2</label>
<title>Precision</title>
<p>Precision is the proportion of true positive predictions out of all the instances predicted as anomalies. Especially when the cost of false positives is high, it is a key metric. The precision <inline-formula id="ieqn-122"><mml:math id="mml-ieqn-122"><mml:mi>P</mml:mi></mml:math></inline-formula> is calculated as:
<disp-formula id="eqn-22"><label>(22)</label><mml:math id="mml-eqn-22" display="block"><mml:mi>P</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>A high precision indicates that the model rarely misclassifies normal instances as anomalies.</p>
</sec>
<sec id="s3_6_3">
<label>3.6.3</label>
<title>Recall (Sensitivity)</title>
<p>Sensitivity or recall measures the proportion of the true positives identified correctly by the model. This matters when the cost of missing an anomaly (false negatives) is high. The recall <inline-formula id="ieqn-123"><mml:math id="mml-ieqn-123"><mml:mi>R</mml:mi></mml:math></inline-formula> is given by:
<disp-formula id="eqn-23"><label>(23)</label><mml:math id="mml-eqn-23" display="block"><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>A high recall indicates that the model identifies most of the anomalies in the data.</p>
</sec>
<sec id="s3_6_4">
<label>3.6.4</label>
<title>F1-Score</title>
<p>The F1-score is the harmonic mean of precision and recall, which gives a balanced metric between precision and recall. It is especially useful for imbalanced datasets. The F1-score <inline-formula id="ieqn-124"><mml:math id="mml-ieqn-124"><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is defined as:
<disp-formula id="eqn-24"><label>(24)</label><mml:math id="mml-eqn-24" display="block"><mml:mi>F</mml:mi><mml:mn>1</mml:mn><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x22C5;</mml:mo><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mo>&#x22C5;</mml:mo><mml:mi>R</mml:mi></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>R</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>where <inline-formula id="ieqn-125"><mml:math id="mml-ieqn-125"><mml:mi>P</mml:mi></mml:math></inline-formula> is precision and <inline-formula id="ieqn-126"><mml:math id="mml-ieqn-126"><mml:mi>R</mml:mi></mml:math></inline-formula> is recall. The higher F1-score means the overall model performance is better.</p>
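Eqs. (21) through (24) can all be computed directly from the four confusion-matrix counts. A straightforward sketch (the function name is ours; zero denominators are mapped to 0.0 by convention):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts,
    following Eqs. (21)-(24)."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return acc, precision, recall, f1
```

On imbalanced anomaly-detection data, accuracy alone can be misleading (a model that predicts "normal" everywhere scores highly), which is why the F1-score is reported alongside it.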
</sec>
<sec id="s3_6_5">
<label>3.6.5</label>
<title>Area under the Receiver Operating Characteristic Curve (AUC-ROC)</title>
<p>An evaluation based on AUC ROC curve is done to determine how well the model can distinguish normal and anomalous instances at different thresholds. The ROC curve plots the true positive rate (recall) against the false positive rate (FPR), where:
<disp-formula id="eqn-25"><label>(25)</label><mml:math id="mml-eqn-25" display="block"><mml:mrow><mml:mtext>FPR</mml:mtext></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>F</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:mi>N</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>The AUC score is the area under this ROC curve; a higher AUC indicates that the model better distinguishes anomalies from normal instances.</p>
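The AUC admits an equivalent rank-based formulation: the probability that a randomly chosen anomaly receives a higher score than a randomly chosen normal instance. A minimal sketch of this computation (an O(P·N) illustration; production code would use an optimized library routine):

```python
def auc_roc(scores, labels):
    """AUC as the probability that a random positive (label 1) outscores a
    random negative (label 0); ties count as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every anomaly outscores every normal instance, while 0.5 corresponds to random guessing.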
</sec>
<sec id="s3_6_6">
<label>3.6.6</label>
<title>Computational Efficiency</title>
<p>Computational efficiency is crucial for the FedCognis framework as it is designed for IIoTCC environments. Model training time is the primary metric of computational efficiency, which denotes the time it takes to train the model over all involved nodes. Let <inline-formula id="ieqn-127"><mml:math id="mml-ieqn-127"><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mrow><mml:mtext>train</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> represent the total training time, including both local training at nodes and the aggregation process:
<disp-formula id="eqn-26"><label>(26)</label><mml:math id="mml-eqn-26" display="block"><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mrow><mml:mtext>train</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mrow><mml:mtext>local</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mrow><mml:mtext>aggregation</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></disp-formula>where <inline-formula id="ieqn-128"><mml:math id="mml-ieqn-128"><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mrow><mml:mtext>local</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> is the average local training time per node and <inline-formula id="ieqn-129"><mml:math id="mml-ieqn-129"><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mrow><mml:mtext>aggregation</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> is the time required to aggregate model updates and refresh the global model.</p>
<p>A lower training time is desirable, especially for real-time anomaly detection systems that demand quick decision making.</p>
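<p>Eq. (26) can be sketched as follows, under the assumption that nodes train in parallel so the local term is the average per-node time; all timings are hypothetical:</p>

```python
def total_training_time(local_times, aggregation_time):
    """Eq. (26): T_train = T_local + T_aggregation, where T_local is the
    average local training time across nodes (nodes train in parallel)."""
    t_local = sum(local_times) / len(local_times)
    return t_local + aggregation_time

# Hypothetical per-node training times in seconds
print(total_training_time([12.0, 14.0, 10.0], aggregation_time=2.5))  # 14.5
```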
</sec>
<sec id="s3_6_7">
<label>3.6.7</label>
<title>Security Evaluation</title>
<p>We also define a Security Score for the FedCognis framework, which evaluates the model&#x2019;s resistance to adversarial attacks, e.g., model poisoning. The Quantum Secure Authentication (QSA) and trust-based aggregation mechanisms are assessed by the percentage of adversarial attack attempts they prevent, from which the security score is calculated. Let <inline-formula id="ieqn-130"><mml:math id="mml-ieqn-130"><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mrow><mml:mtext>attacked</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> represent the total number of adversarial attack attempts and <inline-formula id="ieqn-131"><mml:math id="mml-ieqn-131"><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mrow><mml:mtext>prevented</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> the number of attacks prevented by the security mechanisms. The security score <inline-formula id="ieqn-132"><mml:math id="mml-ieqn-132"><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mrow><mml:mtext>security</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> is given by:
<disp-formula id="eqn-27"><label>(27)</label><mml:math id="mml-eqn-27" display="block"><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mrow><mml:mtext>security</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mrow><mml:mtext>prevented</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mrow><mml:mtext>attacked</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:mfrac></mml:math></disp-formula></p>
<p>A higher value of <inline-formula id="ieqn-133"><mml:math id="mml-ieqn-133"><mml:msub><mml:mi>S</mml:mi><mml:mrow><mml:mrow><mml:mtext>security</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> indicates better protection against adversarial manipulation of the model.</p>
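<p>Eq. (27) reduces to a ratio of counts; the zero-attack convention below is our assumption for illustration:</p>

```python
def security_score(prevented: int, attempted: int) -> float:
    """Eq. (27): fraction of adversarial attack attempts blocked."""
    if attempted == 0:
        return 1.0  # assumption: no attacks observed counts as fully secure
    return prevented / attempted

# Hypothetical: 49 of 50 attack attempts blocked
print(security_score(49, 50))  # 0.98
```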
</sec>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Results and Discussion</title>
<p>In this section, we present a comprehensive evaluation of the proposed FedCognis framework. Specifically, we analyze its ability to detect anomalies, tolerate concept drift, resist adversarial nodes, and operate efficiently in terms of bandwidth and computation, as well as its security performance relative to baselines.</p>
<sec id="s4_1">
<label>4.1</label>
<title>Anomaly Detection and Concept Drift Analysis</title>
<sec id="s4_1_1">
<label>4.1.1</label>
<title>Temporal Anomaly Score Patterns</title>
<p><xref ref-type="fig" rid="fig-3">Fig. 3</xref> displays the evolution of anomaly scores over time, with a highlighted concept drift event occurring at round 50. A significant spike in anomaly scores indicates model responsiveness to environmental changes, confirming FedCognis&#x2019; ability to detect and adapt to evolving IIoTCC patterns.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Temporal evolution of anomaly scores highlighting concept drift</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-3.tif"/>
</fig>
<p>The probability distribution of anomaly scores before and after the simulated concept drift at round 50 reinforces this observation. Before the drift, all scores lie below the decision threshold (2.4), indicating stable operation. After the drift event, the distribution shifts markedly to the right and a large share of scores exceeds the threshold. This shift demonstrates the model&#x2019;s capability to detect temporal drifts in node behavior, a crucial property for anomaly detection in dynamic IoT environments.</p>
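<p>A minimal sketch of threshold-based drift flagging consistent with this description (the 2.4 threshold follows the text; the alert fraction and the score values are assumed for illustration):</p>

```python
def drift_alert(scores, threshold=2.4, alert_fraction=0.5):
    """Flag concept drift when the share of anomaly scores exceeding the
    decision threshold passes alert_fraction (an assumed policy)."""
    exceed = sum(1 for s in scores if s > threshold) / len(scores)
    return exceed >= alert_fraction, exceed

pre_drift = [1.1, 1.8, 2.0, 1.5]   # hypothetical pre-drift scores
post_drift = [2.9, 3.4, 2.1, 3.0]  # hypothetical post-drift scores
print(drift_alert(pre_drift))   # (False, 0.0)
print(drift_alert(post_drift))  # (True, 0.75)
```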
</sec>
<sec id="s4_1_2">
<label>4.1.2</label>
<title>Score Distribution before and after Drift</title>
<p><xref ref-type="fig" rid="fig-4">Fig. 4</xref> contrasts anomaly score distributions pre- and post-drift, showing a rightward shift after drift. This validates the SALSTM&#x2019;s sensitivity to time-dependent behavioral changes, further enhanced by the trust-aware model updates.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Anomaly score distribution before and after concept drift at round 50</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-4.tif"/>
</fig>
<p>We also examine the pairwise cosine similarity between the update vectors of the 20 nodes in the federated setup, visualized as a heatmap. In the bottom-right quadrant, a distinct adversarial cluster is formed by nodes 15 to 19, which are highly similar to one another yet dissimilar from the rest. This pattern implies coordinated action and therefore indicates adversarial intent. The ability to visualize and detect this type of divergence is critical for FedCognis to isolate colluding or compromised nodes, strengthening the system&#x2019;s robustness.</p>
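<p>The pairwise cosine-similarity analysis can be sketched as follows; the update vectors are hypothetical two-dimensional stand-ins for real model updates:</p>

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def similarity_matrix(updates):
    """Pairwise cosine similarity between node update vectors."""
    n = len(updates)
    return [[cosine(updates[i], updates[j]) for j in range(n)] for i in range(n)]

# Two benign-like updates and one divergent (hypothetical) update
updates = [[1.0, 0.9], [0.9, 1.0], [-1.0, -0.8]]
M = similarity_matrix(updates)
print(round(M[0][1], 3), round(M[0][2], 3))  # 0.994 -0.998
```

A highly similar block of rows (values near 1.0 among themselves, negative against the rest) is the signature of the coordinated cluster described above.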
</sec>
<sec id="s4_1_3">
<label>4.1.3</label>
<title>Performance Degradation and Recovery</title>
<p>In <xref ref-type="fig" rid="fig-5">Fig. 5</xref>, FedCognis&#x2019; F1-score is shown across sequential rounds. Two visible dips correspond to simulated drift events. The gradual recovery post-event illustrates the framework&#x2019;s resilience and ability to restore performance autonomously. The system recovers gradually after both positive and negative drift events, with a post-drift recovery of 37%.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>F1-score dynamics showing resilience to drift events</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-5.tif"/>
</fig>
<p>A hexbin plot depicts the joint distribution between trust scores and model update magnitudes (L2 norm) of participating nodes. Nodes with larger update magnitudes consistently receive lower trust scores. Clearly visible in the bottom-right region labeled &#x201C;Low Trust&#x2013;High Divergence&#x201D; are nodes that diverge significantly from normal training behavior. This relationship is essential for adapting trust scoring to anomalous participants in real time.</p>
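<p>A sketch of trust-aware update filtering that combines the L2-norm divergence check with a trust threshold (both 0.8 values mirror thresholds reported elsewhere in the text; the acceptance rule itself is an assumed simplification):</p>

```python
import math

def l2_norm(update):
    """L2 norm of a model update vector."""
    return math.sqrt(sum(w * w for w in update))

def trusted(update, trust_score, norm_threshold=0.8, trust_threshold=0.8):
    """Accept an update only if its L2 norm stays below the divergence
    threshold and the node's trust score is high enough (a sketch, not
    the paper's exact aggregation rule)."""
    return l2_norm(update) <= norm_threshold and trust_score >= trust_threshold

print(trusted([0.3, 0.4], trust_score=0.9))  # True  (norm 0.5, high trust)
print(trusted([0.9, 1.2], trust_score=0.4))  # False (norm 1.5, low trust)
```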
</sec>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Trust Evaluation and Node Behavior Monitoring</title>
<sec id="s4_2_1">
<label>4.2.1</label>
<title>Trust Score vs. Update Magnitude</title>
<p><xref ref-type="fig" rid="fig-6">Fig. 6</xref> reveals a consistent pattern: nodes with high update divergence are penalized with lower trust scores. This correlation supports the use of L2-norm-based filtering for isolating adversarial participants.</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Hexbin plot of trust scores vs. model update magnitudes</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-6.tif"/>
</fig>
<p>In the WUSTL-IIoTCC-2021 dataset, the anomaly score timeline demonstrates real-time concept drift detection. Around 5 January, the anomaly score rises sharply, well above the detection threshold and within the shaded alert region. This indicates that the system can adapt autonomously to distributional shifts over time, which is crucial for smart factories and critical infrastructure to operate without manual intervention.</p>
</sec>
<sec id="s4_2_2">
<label>4.2.2</label>
<title>Cosine Similarity Matrix</title>
<p><xref ref-type="fig" rid="fig-7">Fig. 7</xref> visualizes inter-node similarities using cosine similarity. The adversarial cluster forms a coherent block, validating the trust system&#x2019;s ability to isolate coordinated attacks in the network.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Cosine similarity matrix revealing adversarial node clusters</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-7.tif"/>
</fig>
<p>This figure shows the daily F1-score over a three-month period containing two large drifts. Drift Event 1 causes a 15% performance dip and Drift Event 2 a 12% reduction. Despite this, the system proves highly resilient, recovering post-drift as indicated by the moving average. This confirms that FedCognis can both detect performance degradation due to drift and recover quickly thanks to trust-aware updates and adaptive filtering.</p>
</sec>
<sec id="s4_2_3">
<label>4.2.3</label>
<title>Trust Dynamics across Rounds</title>
<p><xref ref-type="fig" rid="fig-8">Fig. 8</xref> demonstrates the dynamic adjustment of trust. The adversarial node&#x2019;s sharp drop indicates active detection, while gradual recovery reflects the fairness mechanism allowing reintegration of recovered nodes. A drop in trust is observed during malicious activity, followed by partial recovery.</p>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Trust score dynamics showing adversarial activity and recovery rate</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-8.tif"/>
</fig>
<p>This dual distribution plot contrasts the update magnitudes of benign and adversarial nodes. The empirical distributions show that adversarial nodes generate much larger updates. The dotted vertical line marks the decision threshold (&#x03B8; &#x003D; 0.8), at which the FPR is 6.4% and the FNR is 26%. The observed distribution gap (&#x0394;&#x03BC; &#x003D; 1.20) confirms that L2-norm-based filtering separates benign from malicious nodes in the trust pipeline.</p>
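<p>The FPR and FNR of such a magnitude filter can be computed empirically; the magnitudes below are hypothetical values chosen for illustration and do not reproduce the reported 6.4%/26% rates:</p>

```python
def fpr_fnr(benign_norms, adversarial_norms, theta=0.8):
    """Empirical error rates of the L2-norm filter at threshold theta:
    benign updates above theta are false positives, adversarial updates
    at or below it are false negatives."""
    fp = sum(1 for m in benign_norms if m > theta)
    fn = sum(1 for m in adversarial_norms if m <= theta)
    return fp / len(benign_norms), fn / len(adversarial_norms)

# Hypothetical update magnitudes for illustration
benign = [0.2, 0.5, 0.6, 0.9]
adversarial = [1.9, 2.1, 0.7, 2.4]
print(fpr_fnr(benign, adversarial))  # (0.25, 0.25)
```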
</sec>
<sec id="s4_2_4">
<label>4.2.4</label>
<title>Ablation Study</title>
<p>To evaluate the individual contributions of the Quantum Secure Authentication (QSA) module and the Self-Attention Long Short-Term Memory (SALSTM) network to the overall performance of FedCognis, we conducted an ablation study. Three configurations of the model were tested using the WUSTL-IIoTCC-2021 dataset under the same experimental conditions:
<list list-type="simple">
<list-item><label>1.</label><p><bold>FedCognis without QSA</bold>&#x2014;In this setting, model updates are aggregated without quantum authentication. Only trust-based filtering is applied, removing the authentication layer used to validate model integrity.</p></list-item>
<list-item><label>2.</label><p><bold>FedCognis without SALSTM</bold>&#x2014;Here, the SALSTM model is replaced with a standard LSTM network. This evaluates the impact of attention-enhanced temporal modeling on anomaly detection.</p></list-item>
<list-item><label>3.</label><p><bold>Full FedCognis</bold>&#x2014;This includes both QSA and SALSTM components as proposed in the original architecture.</p></list-item>
</list></p>
<p>Performance was evaluated using four core metrics: accuracy, precision, recall, and AUC (Area Under ROC Curve). Results clearly indicate that both QSA and SALSTM contribute significantly to the robustness and effectiveness of the framework. Excluding either component results in reduced detection performance, especially in recall and AUC, which are critical for real-time anomaly detection in IIoTCC networks.</p>
<p>These findings confirm that QSA enhances resilience against malicious updates while SALSTM boosts the model&#x2019;s ability to capture long-term dependencies and subtle anomalies in sensor data.</p>
<p><xref ref-type="table" rid="table-3">Table 3</xref> summarizes the performance impact of removing key components from the FedCognis framework. Without QSA, the model becomes more vulnerable to adversarial updates, reducing its accuracy and AUC. Similarly, removing SALSTM lowers detection quality due to weaker temporal modeling. The full FedCognis model consistently outperforms the ablated versions across all metrics, confirming that both QSA and SALSTM are essential for achieving high anomaly detection performance in IIoTCC environments.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Ablation results showing the impact of QSA and SALSTM components</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Configuration</th>
<th>Accuracy</th>
<th>Precision</th>
<th>Recall</th>
<th>AUC</th>
</tr>
</thead>
<tbody>
<tr>
<td>Without QSA</td>
<td>91.2%</td>
<td>89.4%</td>
<td>88.1%</td>
<td>0.871</td>
</tr>
<tr>
<td>Without SALSTM</td>
<td>90.7%</td>
<td>87.6%</td>
<td>86.9%</td>
<td>0.862</td>
</tr>
<tr>
<td>Full FedCognis</td>
<td>94.5%</td>
<td>92.3%</td>
<td>91.5%</td>
<td>0.896</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>Security Metrics and Classification Accuracy</title>
<sec id="s4_3_1">
<label>4.3.1</label>
<title>Distribution of Benign vs. Adversarial Updates</title>
<p><xref ref-type="fig" rid="fig-9">Fig. 9</xref> shows that adversarial nodes typically have much higher update magnitudes than benign ones. The applied threshold effectively separates the two groups, validating the anomaly filter. A trust threshold of 0.8 was effective in distinguishing both groups, with a 6.4% false positive rate and a 26% false negative rate.</p>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Distribution analysis of update magnitudes</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-9.tif"/>
</fig>
<p>This figure compares the bandwidth usage of centralized learning, vanilla FL, and FedCognis. FedCognis saves 72% of bandwidth relative to centralized learning and 50% relative to vanilla FL. The lower subplot shows that instantaneous bandwidth usage stays low after round 20. These results indicate that the framework suits bandwidth-constrained IoT environments without compromising performance.</p>
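<p>The reported savings follow from a simple relative-reduction formula; the byte totals below are hypothetical values chosen to reproduce the stated percentages:</p>

```python
def bandwidth_saving(baseline_mb: float, fedcognis_mb: float) -> float:
    """Percentage of bandwidth saved relative to a baseline."""
    return 100.0 * (baseline_mb - fedcognis_mb) / baseline_mb

# Hypothetical totals consistent with the reported 72% / 50% savings
print(round(bandwidth_saving(1000.0, 280.0)))  # 72 (vs. centralized)
print(round(bandwidth_saving(560.0, 280.0)))   # 50 (vs. vanilla FL)
```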
</sec>
<sec id="s4_3_2">
<label>4.3.2</label>
<title>ROC Curve for Anomaly Classification</title>
<p>The ROC curve in <xref ref-type="fig" rid="fig-10">Fig. 10</xref> confirms the model&#x2019;s capability to distinguish anomalous from normal samples under low false positive constraints&#x2014;key for industrial IoT reliability.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>ROC curve showing classification performance of FedCognis</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-10.tif"/>
</fig>
<p>This figure plots the evolution of the trust scores of a benign node and an adversarial node over 100 training rounds. The adversarial node experiences a sharp trust drop as soon as it begins malicious activity, followed by a slow recovery after mitigation, while the benign node maintains a stable trust level. This dynamic trust assessment enables FedCognis to isolate threats quickly and to re-evaluate trustworthiness over time, improving long-term system integrity.</p>
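<p>A minimal sketch of the penalty-and-recovery trust dynamic described here, using the trust penalty &#x03B4; &#x003D; 0.5 from the experimental setup and an assumed linear recovery rate:</p>

```python
def update_trust(trust, deviated, delta=0.5, recovery=0.05):
    """One-round trust update: apply penalty delta on deviation, otherwise
    recover slowly toward 1.0. delta matches the paper's setting; the
    linear recovery rate is an assumed illustrative choice."""
    if deviated:
        return max(0.0, trust - delta)
    return min(1.0, trust + recovery)

t = update_trust(1.0, deviated=True)  # malicious round: sharp drop
print(t)                              # 0.5
for _ in range(4):                    # benign rounds: gradual recovery
    t = update_trust(t, deviated=False)
print(round(t, 2))                    # 0.7
```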
</sec>
<sec id="s4_3_3">
<label>4.3.3</label>
<title>Accuracy, Precision, and Recall Evolution</title>
<p><xref ref-type="fig" rid="fig-11">Fig. 11</xref> presents performance metric convergence. The low gap between accuracy and recall ensures balanced detection, minimizing both false alarms and missed threats. The model converges after 27 rounds with accuracy stabilizing at 94.5%.</p>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Evolution of accuracy, precision, and recall over training rounds</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-11.tif"/>
</fig>
<p>The evolution of three performance metrics (accuracy, precision, and recall) across training rounds is presented in this figure. The model converges by round 27, with final values exceeding 92% in all categories. The small gaps between accuracy and precision (2.9%) and between accuracy and recall (0.8%) indicate that the model maintains balanced detection capability, controlling both false positives and false negatives in anomaly detection.</p>
</sec>
</sec>
<sec id="s4_4">
<label>4.4</label>
<title>Bandwidth and Computation Efficiency</title>
<sec id="s4_4_1">
<label>4.4.1</label>
<title>Cumulative and Instantaneous Bandwidth Usage</title>
<p>In <xref ref-type="fig" rid="fig-12">Fig. 12</xref>, both cumulative and round-wise bandwidth usage are tracked. FedCognis rapidly stabilizes to minimal bandwidth demands, showcasing its suitability for resource-constrained deployments.</p>
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>Cumulative and per-round bandwidth comparison</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-12.tif"/>
</fig>
<p>This figure&#x2019;s ROC curve shows the trade-off between the true positive rate (TPR) and false positive rate (FPR) for the anomaly detection classifier in FedCognis. The model achieves an AUC of 0.896 with an optimal threshold of 0.443, which yields a 37% TPR at a strict 5% FPR. This indicates that the model remains discriminative in identifying true anomalies while keeping false alert rates low under tight constraints.</p>
</sec>
<sec id="s4_4_2">
<label>4.4.2</label>
<title>Computational Scalability</title>
<p><xref ref-type="fig" rid="fig-13">Fig. 13</xref> highlights the scalability advantage of FedCognis. Even with hundreds of nodes, the runtime remains tractable, validating its applicability to large-scale cognitive cities.</p>
<fig id="fig-13">
<label>Figure 13</label>
<caption>
<title>Computational efficiency across various network sizes</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-13.tif"/>
</fig>
<p><xref ref-type="fig" rid="fig-13">Fig. 13</xref> compares the computational scalability of FedCognis against centralized and vanilla FL architectures. FedCognis has <inline-formula id="ieqn-134"><mml:math id="mml-ieqn-134"><mml:mrow><mml:mi>&#x1D4AA;</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow><mml:mrow><mml:mn>0.9</mml:mn></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> complexity, which scales much better than centralized learning&#x2019;s <inline-formula id="ieqn-135"><mml:math id="mml-ieqn-135"><mml:mrow><mml:mi>&#x1D4AA;</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mrow><mml:mtext>n</mml:mtext></mml:mrow><mml:mrow><mml:mn>1.8</mml:mn></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>. In large networks, the speedup factor exceeds 1000&#x00D7;, showing that FedCognis can operate in real time even at scale and making it well suited for industrial and urban-scale IoT deployments.</p>
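<p>The scaling comparison can be illustrated with a stylized runtime model; the equal-constant assumption and the example network sizes are ours, not the paper&#x2019;s:</p>

```python
def runtime(n, exponent, c=1.0):
    """Stylized runtime model c * n**exponent (constants are illustrative)."""
    return c * n ** exponent

def speedup(n):
    """Centralized O(n^1.8) vs. FedCognis O(n^0.9), assuming equal constants,
    so the ratio simplifies to n**0.9."""
    return runtime(n, 1.8) / runtime(n, 0.9)

print(round(speedup(100)))      # 63
print(speedup(5000) > 1000)     # True: speedup exceeds 1000x at scale
```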
</sec>
</sec>
<sec id="s4_5">
<label>4.5</label>
<title>Enterprise Security Benchmarking</title>
<p>FedCognis was evaluated across six critical dimensions of security performance to determine its robustness against real-world threats. These include model poisoning resistance, adversarial robustness, quantum-secure authentication (QSA) effectiveness, trust-based isolation, concept drift resilience, and privacy preservation. The Security Score reported in <xref ref-type="table" rid="table-4">Table 4</xref> is a composite metric calculated as the normalized average of the framework&#x2019;s performance across these six dimensions. Each component is independently measured and scaled to a range of 0&#x2013;100%, with higher scores indicating better security performance. This scoring approach draws on established practices in enterprise cybersecurity assessments and aligns conceptually with the ISO/IEC 27001 risk management framework and NIST SP 800-53 standards, particularly in evaluating model integrity, access control, and trust evaluation.</p>
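<p>The composite score in <xref ref-type="table" rid="table-4">Table 4</xref> can be reproduced as the unweighted mean of the six dimension scores (96.57% to two decimals, agreeing with the reported 96.56% up to rounding):</p>

```python
# Dimension scores from Table 4 (percent)
scores = {
    "model_poisoning_resistance": 97.8,
    "adversarial_robustness": 96.2,
    "trust_based_isolation": 95.7,
    "authentication_strength_qsa": 98.5,
    "concept_drift_resilience": 94.3,
    "privacy_preservation": 96.9,
}

# Composite Security Score: normalized (unweighted) average of all dimensions
composite = sum(scores.values()) / len(scores)
print(round(composite, 2))  # 96.57
```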
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Security evaluation across six dimensions</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Security dimension</th>
<th>FedCognis score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Model poisoning resistance</td>
<td>97.8%</td>
</tr>
<tr>
<td>Adversarial robustness</td>
<td>96.2%</td>
</tr>
<tr>
<td>Trust-Based isolation</td>
<td>95.7%</td>
</tr>
<tr>
<td>Authentication strength (QSA)</td>
<td>98.5%</td>
</tr>
<tr>
<td>Concept drift resilience</td>
<td>94.3%</td>
</tr>
<tr>
<td>Privacy preservation</td>
<td>96.9%</td>
</tr>
<tr>
<td>Average security score</td>
<td>96.56%</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>FedCognis achieved an average security score of 96.56%, surpassing the industry-accepted benchmark of 90% for resilient AI systems. Its QSA module scored 98.5%, indicating high effectiveness in detecting and preventing unauthorized or manipulated updates. Similarly, the system maintained 97.8% resistance to model poisoning and over 96% performance in both adversarial and concept drift resilience.</p>
<p>These results underscore FedCognis&#x2019;s suitability for high-risk, mission-critical deployments in cognitive cities, where data integrity and system robustness are non-negotiable.</p>
<p>As summarized in <xref ref-type="table" rid="table-4">Table 4</xref>, FedCognis surpasses all set security thresholds across six critical dimensions: model poisoning resistance, adversarial robustness, QSA authentication, privacy preservation, trust-based update filtering, and stability against concept drift. This validates the framework&#x2019;s enterprise readiness and positions FedCognis as a complete solution for secure and reliable federated anomaly detection in Industrial IoT-enabled Cognitive City ecosystems.</p>

<p>As seen in <xref ref-type="fig" rid="fig-14">Fig. 14</xref>, FedCognis consistently performs above enterprise-grade thresholds across all security metrics. It ensures robustness against both external and internal adversaries. FedCognis exceeds the 90% security threshold in all categories, including QSA effectiveness.</p>
<fig id="fig-14">
<label>Figure 14</label>
<caption>
<title>Security benchmarks across multiple categories</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_66898-fig-14.tif"/>
</fig>
<p>This bar chart benchmarks FedCognis against centralized and vanilla FL systems across six critical security metrics. In practically all categories, FedCognis scores above 95%, including model poisoning resistance, adversarial robustness, authentication strength, and Quantum Secure Authentication (QSA) effectiveness. It also exceeds the 90% security bar set by leading industry standards, demonstrating readiness for enterprise deployment in adversarial environments.</p>
</sec>
<sec id="s4_6">
<label>4.6</label>
<title>Cross-Dataset Generalization</title>
<p>To evaluate the generalization ability of FedCognis across different data characteristics and domains, we tested the framework on two additional benchmark datasets: ToN-IoT and SWaT.
<list list-type="bullet">
<list-item>
<p><bold>ToN-IoT</bold> (<bold>UNSW Canberra</bold>): A multi-domain IIoT dataset containing telemetry from smart homes, smart cities, and industrial systems. It includes system logs, sensor streams, and network traffic, with both normal and anomalous behavior annotated.</p></list-item>
<list-item>
<p><bold>SWaT</bold> (<bold>Secure Water Treatment</bold>): A real-world cyber-physical dataset collected from a water treatment plant testbed. It captures both normal operation and diverse cyberattacks such as MITM, DoS, and command injection.</p></list-item>
</list></p>
<p><bold>Experimental Setup:</bold></p>
<p>FedCognis was trained using the same configuration as the WUSTL-IIoTCC-2021 experiments (with &#x03B7; &#x003D; 0.01, &#x03B4; &#x003D; 0.5, R &#x003D; 5). Datasets were partitioned by device or sensor type across simulated IIoT nodes. For each dataset, 10% of nodes were adversarial, submitting poisoned updates during 20% of the rounds.</p>
<p><bold>Analysis:</bold></p>
<p>As shown in <xref ref-type="table" rid="table-5">Table 5</xref>, FedCognis consistently achieved over 92% accuracy across all datasets, confirming its ability to generalize across IIoT domains with varying temporal and structural properties. The precision-recall tradeoffs remained balanced, and resilience against adversarial attacks stayed above 94% in both ToN-IoT and SWaT. Minor variations in performance are attributed to the inherent differences in dataset noise, feature distributions, and attack sophistication.</p>
<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>FedCognis performance across three benchmark datasets, demonstrating generalization in accuracy, resilience, and bandwidth savings</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th align="center">Dataset</th>
<th align="center">Accuracy (%)</th>
<th align="center">Precision (%)</th>
<th align="center">Recall (%)</th>
<th align="center">AUC</th>
<th align="center">Bandwidth Saving (%)</th>
<th align="center">Resilience (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>ToN-IoT</td>
<td>93.7</td>
<td>91.1</td>
<td>90.4</td>
<td>0.887</td>
<td>68</td>
<td>95.1</td>
</tr>
<tr>
<td>SWaT</td>
<td>92.9</td>
<td>89.6</td>
<td>91.7</td>
<td>0.873</td>
<td>66</td>
<td>94.3</td>
</tr>
<tr>
<td>WUSTL-IIoTCC-2021</td>
<td>94.5</td>
<td>92.3</td>
<td>91.5</td>
<td>0.896</td>
<td>72</td>
<td>96.5</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>These results demonstrate that FedCognis generalizes well across diverse IIoT data sources and application scenarios, maintaining strong performance without architecture or parameter tuning. This confirms its robustness and adaptability for deployment in various cognitive city subsystems.</p>
</sec>
<sec id="s4_7">
<label>4.7</label>
<title>Parameter Sensitivity Analysis</title>
<p>To assess the stability and adaptability of FedCognis under different hyperparameter configurations, a sensitivity analysis was conducted on three core parameters:
<list list-type="simple">
<list-item><label>1.</label><p><bold>Learning Rate</bold> <inline-formula id="ieqn-136"><mml:math id="mml-ieqn-136"><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>&#x2014;Governs the pace of local model updates; evaluated in the range: 0.001, 0.005, 0.01, 0.05.</p></list-item>
<list-item><label>2.</label><p><bold>Trust Penalty Rate</bold> <inline-formula id="ieqn-137"><mml:math id="mml-ieqn-137"><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B4;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>&#x2014;Determines the reduction applied to a node&#x2019;s trust score upon deviation; evaluated values: 0.1, 0.3, 0.5, 0.7.</p></list-item>
<list-item><label>3.</label><p><bold>Communication Round Interval</bold> <inline-formula id="ieqn-138"><mml:math id="mml-ieqn-138"><mml:mo stretchy="false">(</mml:mo><mml:mi>R</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>&#x2014;Defines the interval between global aggregations; tested at: every 1, 5, 10, and 15 local epochs.</p></list-item>
</list></p>
<p><bold>Evaluation Metrics:</bold></p>
<p>For each parameter configuration, we measured:
<list list-type="bullet">
<list-item>
<p><bold>Detection Accuracy</bold> <bold>(%)</bold></p></list-item>
<list-item>
<p><bold>Bandwidth Usage</bold> (<bold>MB</bold>)</p></list-item>
<list-item>
<p><bold>Adversarial Resilience</bold> <bold>(%)</bold></p></list-item>
</list></p>
<p><xref ref-type="table" rid="table-6">Table 6</xref> shows how changes in learning rate <inline-formula id="ieqn-139"><mml:math id="mml-ieqn-139"><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, trust penalty <inline-formula id="ieqn-140"><mml:math id="mml-ieqn-140"><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B4;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, and communication interval <inline-formula id="ieqn-141"><mml:math id="mml-ieqn-141"><mml:mo stretchy="false">(</mml:mo><mml:mi>R</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> affect FedCognis performance. The best accuracy (94.5%) and resilience (96.5%) are achieved at <inline-formula id="ieqn-142"><mml:math id="mml-ieqn-142"><mml:mi>&#x03B7;</mml:mi><mml:mo>=</mml:mo><mml:mn>0.01</mml:mn></mml:math></inline-formula> and <inline-formula id="ieqn-143"><mml:math id="mml-ieqn-143"><mml:mi>&#x03B4;</mml:mi><mml:mo>=</mml:mo><mml:mn>0.5</mml:mn></mml:math></inline-formula>. Larger &#x03B4; improves resilience but slightly reduces accuracy. A communication interval of 5 epochs offers a good balance between bandwidth savings and model performance. Overall, FedCognis remains stable across a wide range of settings.</p>
<table-wrap id="table-6">
<label>Table 6</label>
<caption>
<title>Sensitivity analysis of core FedCognis parameters</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th align="center">Parameter</th>
<th align="center">Value</th>
<th align="center">Accuracy (%)</th>
<th align="center">Bandwidth (MB)</th>
<th align="center">Resilience (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="4">Learning rate <inline-formula id="ieqn-148"><mml:math id="mml-ieqn-148"><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula></td>
<td>0.001</td>
<td>91.3</td>
<td>128</td>
<td>94.2</td>
</tr>
<tr>

<td>0.005</td>
<td>93.4</td>
<td>124</td>
<td>95.1</td>
</tr>
<tr>

<td>0.01</td>
<td>94.5</td>
<td>122</td>
<td>96.5</td>
</tr>
<tr>

<td>0.05</td>
<td>90.2</td>
<td>135</td>
<td>89.7</td>
</tr>
<tr>
<td rowspan="4">Trust penalty <inline-formula id="ieqn-149"><mml:math id="mml-ieqn-149"><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B4;</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula></td>
<td>0.1</td>
<td>94.7</td>
<td>130</td>
<td>87.9</td>
</tr>
<tr>

<td>0.3</td>
<td>94.2</td>
<td>126</td>
<td>91.4</td>
</tr>
<tr>

<td>0.5</td>
<td>94.5</td>
<td>122</td>
<td>96.5</td>
</tr>
<tr>

<td>0.7</td>
<td>93.1</td>
<td>119</td>
<td>97.4</td>
</tr>
<tr>
<td rowspan="4">Comm. interval <inline-formula id="ieqn-150"><mml:math id="mml-ieqn-150"><mml:mo stretchy="false">(</mml:mo><mml:mi>R</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula></td>
<td>1</td>
<td>94.5</td>
<td>128</td>
<td>96.5</td>
</tr>
<tr>

<td>5</td>
<td>94.2</td>
<td>88</td>
<td>95.7</td>
</tr>
<tr>

<td>10</td>
<td>92.6</td>
<td>56</td>
<td>94.1</td>
</tr>
<tr>

<td>15</td>
<td>89.7</td>
<td>40</td>
<td>92.3</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><bold>Analysis:</bold>
<list list-type="bullet">
<list-item>
<p><bold>Learning Rate:</bold> Accuracy peaked at <inline-formula id="ieqn-144"><mml:math id="mml-ieqn-144"><mml:mi>&#x03B7;</mml:mi><mml:mo>=</mml:mo><mml:mn>0.01</mml:mn></mml:math></inline-formula>. Lower values slowed convergence, while larger values introduced instability and degraded resilience.</p></list-item>
<list-item>
<p><bold>Trust Penalty:</bold> While <inline-formula id="ieqn-145"><mml:math id="mml-ieqn-145"><mml:mi>&#x03B4;</mml:mi><mml:mo>=</mml:mo><mml:mn>0.5</mml:mn></mml:math></inline-formula> yielded the best trade-off, a higher <inline-formula id="ieqn-146"><mml:math id="mml-ieqn-146"><mml:mi>&#x03B4;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>0.7</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> improved adversarial suppression but slightly reduced overall accuracy due to stricter node penalization.</p></list-item>
<list-item>
<p><bold>Communication Interval:</bold> Less frequent global updates <inline-formula id="ieqn-147"><mml:math id="mml-ieqn-147"><mml:mo stretchy="false">(</mml:mo><mml:mi>R</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mn>10</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> reduced bandwidth but hurt accuracy and model responsiveness. An interval of 5 epochs offered an optimal trade-off between communication savings and performance stability.</p></list-item>
</list></p>
<p>The sensitivity analysis confirms that FedCognis is robust across a wide range of parameter values, but optimal performance is achieved with a learning rate of 0.01, trust penalty of 0.5, and a communication interval of 5 epochs.</p>
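<p>As an illustrative sketch, the learning-rate rows of Table 6 can be searched programmatically for the best joint setting. The equally weighted blend of accuracy and resilience used here is our own assumption for the example, not a criterion defined by FedCognis:</p>

```python
# Learning-rate rows from Table 6: (eta, accuracy %, resilience %)
grid = [
    (0.001, 91.3, 94.2),
    (0.005, 93.4, 95.1),
    (0.01,  94.5, 96.5),
    (0.05,  90.2, 89.7),
]

def best_config(rows):
    """Pick the setting maximizing an equally weighted blend of
    accuracy and resilience (an assumed selection criterion)."""
    return max(rows, key=lambda r: 0.5 * r[1] + 0.5 * r[2])

eta, acc, res = best_config(grid)  # selects eta = 0.01
```

The same scan over the trust-penalty and communication-interval rows reproduces the reported optimum of &#x03B4; = 0.5 and R = 5.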
</sec>
<sec id="s4_8">
<label>4.8</label>
<title>Discussion</title>
<p>Empirical results were obtained through extensive simulations and real-time deployment scenarios, and they indicate that FedCognis is robust and adaptive in detecting anomalies across dynamic IIoTCC environments. Under class imbalance and noise, the model&#x2019;s ROC-AUC score of 0.896 shows that it can effectively separate normal from anomalous patterns. Furthermore, the model achieved a high precision-recall AUC of 0.941, confirming that it identifies true anomalies with high confidence while keeping the false-positive rate low, which is essential in critical infrastructure monitoring, where false alarms invite costly downtime or misdiagnosis.</p>
<p>As shown in <xref ref-type="table" rid="table-7">Table 7</xref>, the final classification accuracy of 94.5% indicates a stable, well-generalized model under concept drift, adversarial attacks, and federated heterogeneity. FedCognis is also communication-efficient, consuming 72% less bandwidth than traditional architectures, which makes it suitable for large-scale, bandwidth-constrained IIoTCC networks. This is particularly important in settings where high communication cost or latency would otherwise prohibit real-time processing.</p>
<table-wrap id="table-7">
<label>Table 7</label>
<caption>
<title>Core performance metrics of FedCognis</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Metric</th>
<th>FedCognis score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Accuracy</td>
<td>94.5%</td>
</tr>
<tr>
<td>ROC-AUC</td>
<td>0.896</td>
</tr>
<tr>
<td>Precision-Recall AUC</td>
<td>0.941</td>
</tr>
<tr>
<td>Bandwidth savings</td>
<td>72%</td>
</tr>
<tr>
<td>Trust recovery rate</td>
<td>0.024/round</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>FedCognis embeds an adaptive trust mechanism: nodes whose trust is temporarily penalized can regain it at a recovery rate of 0.024 per round, conditional on consistent benign behaviour over time. This dynamic adjustment remains robust to adversarial threats while giving benign participants a fair path to reintegration.</p>
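<p>A minimal sketch of this trust adjustment, assuming a multiplicative penalty by &#x03B4; on flagged updates and the linear 0.024-per-round recovery reported above (the exact update rule is not spelled out in the text), is:</p>

```python
def update_trust(trust, flagged, delta=0.5, recovery=0.024):
    """Hedged sketch of FedCognis-style trust adjustment: a flagged node
    loses a delta fraction of its trust, while a well-behaved node
    recovers 0.024 per round, with the score clamped to [0, 1]."""
    if flagged:
        trust *= (1.0 - delta)              # trust penalty on a suspicious update
    else:
        trust = min(1.0, trust + recovery)  # gradual reintegration of benign nodes
    return max(0.0, trust)
```

Under these assumptions, a node penalized from full trust to 0.5 would need roughly 21 benign rounds to recover completely.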
<sec id="s4_8_1">
<label>4.8.1</label>
<title>Deployment Challenges in Large-Scale IIoTCC Networks</title>
<p>While FedCognis has demonstrated high accuracy, security resilience, and communication efficiency in moderate-scale deployments, expanding the framework to city-scale networks with thousands of IIoT nodes introduces several non-trivial challenges. Although the system exhibits sublinear computational complexity (&#x1D4AA;(n<sup>0.9</sup>)), which supports scalability in principle, practical issues emerge as the network size grows beyond 1000 nodes.</p>
<p>First, communication bottlenecks may arise due to the increased volume of model updates transmitted during each aggregation round. Even though FedCognis uses selective trust-based filtering, in large networks the cumulative size of authenticated updates and their associated metadata (e.g., trust scores, model parameters) can saturate available bandwidth, especially in edge-constrained environments.</p>
<p>Second, the Quantum Secure Authentication (QSA) process&#x2014;while lightweight for small networks&#x2014;can become a computational burden at scale. Each update must be individually signed and verified, and with thousands of nodes participating per round, the cumulative verification latency can degrade real-time responsiveness. This is particularly problematic in time-critical infrastructures like traffic control or power grid management.</p>
<p>Third, managing and updating trust scores dynamically for a massive number of nodes can introduce synchronization delays and require additional memory overhead on the central server. Ensuring consistency and fairness in trust evaluation becomes harder when dealing with diverse device types, varying data quality, and fluctuating participation rates.</p>
<p>To address these challenges, future enhancements of FedCognis will explore several optimization strategies. These include:
<list list-type="bullet">
<list-item>
<p>Model compression and update sparsification to reduce communication payloads;</p></list-item>
<list-item>
<p>Signature aggregation techniques to verify batches of updates collectively rather than individually;</p></list-item>
<list-item>
<p>Asynchronous or hierarchical aggregation, where updates are first merged locally in subnetworks before global synchronization;</p></list-item>
<list-item>
<p>Edge-level clustering to divide large networks into smaller, manageable units with dedicated local aggregators.</p></list-item>
</list></p>
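<p>One of these strategies, hierarchical (cluster-based) aggregation, can be sketched as follows. The scheme shown is an assumed simplification operating on scalar updates; real deployments would aggregate full parameter vectors and apply trust weighting:</p>

```python
from statistics import fmean

def hierarchical_aggregate(updates, cluster_size=100):
    """Assumed sketch of hierarchical aggregation: scalar updates are
    averaged inside fixed-size edge clusters, then the cluster means are
    combined, weighted by cluster size so the result equals the flat
    global average while transmitting only one message per cluster."""
    clusters = [updates[i:i + cluster_size]
                for i in range(0, len(updates), cluster_size)]
    return sum(fmean(c) * len(c) for c in clusters) / len(updates)
```

With 5000 nodes and clusters of 100, the central server receives 50 aggregated messages per round instead of 5000 individual ones.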
<p>In summary, while FedCognis is architecturally capable of supporting large-scale IIoTCC environments, practical scalability will require targeted improvements in communication handling, authentication efficiency, and distributed coordination. Addressing these challenges will be crucial for enabling secure and real-time anomaly detection in future cognitive city deployments involving tens of thousands of devices.</p>
</sec>
<sec id="s4_8_2">
<label>4.8.2</label>
<title>Large-Scale Simulation Validation</title>
<p>To validate the scalability and robustness of FedCognis in real-world, city-scale deployments, we conducted a large-scale emulated simulation involving 5000 IIoT nodes across three major urban domains: traffic control, smart grid monitoring, and public safety. The simulation was designed to reflect the heterogeneity, communication sparsity, and adversarial dynamics commonly found in cognitive city infrastructures.</p>
<p><bold>Simulation Setup</bold></p>
<p>The 5000-node network was logically divided into:
<list list-type="bullet">
<list-item>
<p><bold>2000 traffic nodes</bold> (e.g., signal controllers, intersection cameras, congestion sensors);</p></list-item>
<list-item>
<p><bold>1500 energy grid nodes</bold> (e.g., smart meters, grid substations, distributed solar units);</p></list-item>
<list-item>
<p><bold>1500 safety infrastructure nodes</bold> (e.g., surveillance systems, emergency alert units).</p></list-item>
</list></p>
<p>Each node operated on locally partitioned data derived from the WUSTL-IIoTCC-2021 base dataset, with domain-specific temporal perturbations introduced to simulate localized drift and seasonal variation. 10% of the nodes were adversarial, designed to inject poisoned updates intermittently.</p>
<p>The communication topology was semi-synchronous with cluster-based aggregation: nodes were grouped into 50 edge clusters, each containing 100 nodes. Local aggregation was performed at the cluster level before being pushed to the central global server using FedCognis&#x2019; trust-weighted, QSA-authenticated update mechanism.</p>
<p><bold>Results and Observations</bold></p>
<p><bold>Runtime and Convergence:</bold></p>
<p>FedCognis converged in 33 global rounds, with each round averaging 6.2 s in total processing time (local training &#x002B; aggregation &#x002B; QSA validation), demonstrating linear scalability with minimal degradation compared to smaller-scale runs.</p>
<p><bold>Bandwidth Usage:</bold></p>
<p>Average communication load per cluster dropped by 70%, owing to selective model update participation driven by trust filtering and sparse update scheduling. Total data transmission was reduced from an estimated 4.5 GB (centralized baseline) to 1.3 GB.</p>
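<p>The sparse update scheduling mentioned above can be realized in several ways; one plausible sketch (our assumption, not a mechanism specified in the text) is top-k magnitude sparsification, where only the largest-magnitude coordinates of an update are transmitted:</p>

```python
def sparsify_update(update, keep_frac=0.3):
    """Top-k magnitude sparsification (an assumed mechanism): keep only
    the keep_frac largest-magnitude coordinates and send them as an
    index -> value map, shrinking the payload to roughly keep_frac of
    the dense update."""
    k = max(1, int(len(update) * keep_frac))
    top = sorted(range(len(update)),
                 key=lambda i: abs(update[i]), reverse=True)[:k]
    return {i: update[i] for i in sorted(top)}
```

A 70% reduction in per-cluster load, as observed in the simulation, is consistent with transmitting roughly 30% of coordinates per round.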
<p><bold>Latency Impact:</bold></p>
<p>Average model update latency remained below 1.8 s, even under high node concurrency, due to parallelized QSA verification and the use of hierarchical aggregation.</p>
<p><bold>Accuracy and Resilience:</bold></p>
<p>Detection accuracy held steady at 93.8%, slightly lower than the 94.5% baseline in smaller experiments, with an AUC of 0.887 and adversarial resilience score of 95.3%. This confirms that FedCognis maintains strong performance even as network complexity and adversarial risk scale significantly.</p>
</sec>
</sec>
<sec id="s4_9">
<label>4.9</label>
<title>Application in Cognitive City Environments</title>
<p>We next illustrate how FedCognis applies to representative use cases in cognitive city infrastructures.
<list list-type="order">
<list-item>
<p><bold>Anomaly Detection in Traffic Networks:</bold> Cognitive cities rely heavily on intelligent transportation systems in which traffic signals, smart cameras, and connected vehicles continuously generate data. With FedCognis, real-time anomaly detection across these nodes (unusual vehicle behavior, congestion anomalies, or sensor failures) can be performed without centralizing data, thereby protecting privacy and minimizing latency.</p></list-item>
<list-item>
<p><bold>Energy Anomaly Detection in Smart Utility Grids:</bold> Urban energy infrastructure in smart cities comprises smart meters, power and grid controllers, and consumption sensors. FedCognis supports distributed monitoring at smart grid nodes, detecting abnormal energy consumption, power theft, and device failures while operating within bandwidth constraints and improving grid resilience.</p></list-item>
<list-item>
<p><bold>Federated Learning across Heterogeneous City Infrastructures:</bold> Cognitive cities span many domains, from healthcare IoT to public safety systems and environmental monitoring. FedCognis supports collaborative model training across these heterogeneous systems through dynamic normalized trust and secure authentication, ensuring reliable anomaly detection even under domain-level variations in data distributions.</p></list-item>
</list></p>
<p>These use cases validate FedCognis as a scalable solution for anomaly detection in cognitive city environments where privacy, security, and real-time responsiveness are essential.</p>
</sec>
<sec id="s4_10">
<label>4.10</label>
<title>Baseline Comparison</title>
<p>To comprehensively evaluate the effectiveness of FedCognis, we extended our comparison against a broader range of federated and centralized anomaly detection models relevant to Industrial IoT (IIoT) scenarios. These include:
<list list-type="simple">
<list-item><label>(1)</label><p><bold>Vanilla Federated Learning</bold> (<bold>FL</bold>) without any security enhancements or trust-based filtering;</p></list-item>
<list-item><label>(2)</label><p><bold>FL with Lightweight Encryption</bold>, based on the approach by Liu et al. [<xref ref-type="bibr" rid="ref-6">6</xref>], which provides basic encryption for model privacy but lacks dynamic trust scoring or authentication;</p></list-item>
<list-item><label>(3)</label><p><bold>QFDSA</bold> (<bold>Quantum Federated Dynamic Security Architecture</bold>) proposed by Ren et al. [<xref ref-type="bibr" rid="ref-25">25</xref>], which uses quantum-safe techniques for smart grid security but suffers from scalability limitations;</p></list-item>
<list-item><label>(4)</label><p><bold>GNN-CAE Hybrid Model</bold>, a graph neural network combined with a convolutional autoencoder introduced by Liu et al. [<xref ref-type="bibr" rid="ref-34">34</xref>], designed for robust anomaly detection in time series;</p></list-item>
<list-item><label>(5)</label><p><bold>Dual-Attention GAN</bold>, a generative adversarial network-based model proposed by Wang et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] for addressing class imbalance in multivariate industrial anomaly detection.</p></list-item>
</list></p>
<p>All models were evaluated using the WUSTL-IIoTCC-2021 dataset [<xref ref-type="bibr" rid="ref-35">35</xref>] under identical conditions&#x2014;same data splits, communication rounds, and simulated concept drift. Metrics assessed include anomaly detection accuracy, bandwidth efficiency, and adversarial resilience.</p>
<p><bold>Anomaly Detection Accuracy:</bold></p>
<p>FedCognis achieved the highest accuracy of 94.5%, outperforming all baseline methods. The Dual-Attention GAN [<xref ref-type="bibr" rid="ref-17">17</xref>] reached 93.1%, the GNN-CAE hybrid [<xref ref-type="bibr" rid="ref-34">34</xref>] achieved 92.6%, while QFDSA [<xref ref-type="bibr" rid="ref-25">25</xref>] and vanilla FL lagged behind at 92.3% and 91.2% respectively. The superior performance of FedCognis is attributed to its Self-Attention LSTM (SALSTM) module, which captures intricate temporal dependencies, and the trust-based aggregation scheme that filters unreliable contributions.</p>
<p><bold>Bandwidth Efficiency:</bold></p>
<p>FedCognis reduced bandwidth usage by 72%, outperforming vanilla FL (41%), encrypted FL (58%), and QFDSA (60%). Centralized methods such as the GAN and GNN-CAE models, which require continuous global data streaming, offered no bandwidth savings and are not optimized for federated settings. The minimal communication cost of FedCognis stems from compact model updates, trust-based participant selection, and infrequent synchronization rounds.</p>
<p><bold>Adversarial Resilience:</bold></p>
<p>Under adversarial simulation conditions (10% compromised nodes), FedCognis retained 96.5% resilience, significantly higher than vanilla FL (84.2%), encrypted FL (88.6%), and QFDSA (93.2%). Centralized models like the Dual-Attention GAN and GNN-CAE are inherently vulnerable to poisoned inputs and offer no built-in defense mechanisms. FedCognis&#x2019;s performance is driven by its Quantum Secure Authentication (QSA) mechanism and trust adjustment logic that automatically suppresses malicious updates.</p>
<p><xref ref-type="table" rid="table-8">Table 8</xref> results collectively demonstrate that FedCognis achieves superior performance across all key metrics. Its hybrid use of quantum-secure authentication and adaptive attention-driven anomaly detection establishes a strong baseline for future federated learning systems in cognitive city environments.</p>
<table-wrap id="table-8">
<label>Table 8</label>
<caption>
<title>Comparative performance of FedCognis against baseline federated learning methods</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Method</th>
<th>Accuracy (%)</th>
<th>Bandwidth saving (%)</th>
<th>Adversarial resilience (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Vanilla FL</td>
<td>91.2</td>
<td>41</td>
<td>84.2</td>
</tr>
<tr>
<td>FL with lightweight encryption</td>
<td>90.1</td>
<td>58</td>
<td>88.6</td>
</tr>
<tr>
<td>QFDSA</td>
<td>92.3</td>
<td>60</td>
<td>93.2</td>
</tr>
<tr>
<td>GNN-CAE hybrid</td>
<td>92.6</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td>Dual-Attention GAN</td>
<td>93.1</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td>FedCognis (Proposed)</td>
<td>94.5</td>
<td>72</td>
<td>96.5</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s4_11">
<label>4.11</label>
<title>Integration Roadmap with Existing IIoT Systems</title>
<p>To enable practical deployment, FedCognis is designed for seamless integration with existing Industrial Internet of Things (IIoT) and cognitive city systems, many of which rely on heterogeneous technologies, legacy infrastructure, and real-time data exchange protocols.</p>
<sec id="s4_11_1">
<label>4.11.1</label>
<title>Communication Interface Compatibility</title>
<p>FedCognis supports industry-standard lightweight communication protocols such as MQTT (Message Queuing Telemetry Transport) and OPC-UA (Open Platform Communications Unified Architecture). These protocols allow easy interfacing with IIoT edge devices such as smart meters, traffic sensors, surveillance systems, and programmable logic controllers (PLCs). The trust-aware communication architecture of FedCognis ensures that model updates and anomaly feedback can be securely exchanged over constrained networks using these protocols without protocol modification.</p>
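<p>As a hedged illustration of how a model update might be packaged for such a constrained link, the sketch below builds a compact JSON message body. The field names and the SHA-256 digest standing in for a QSA signature are hypothetical, chosen only to show the shape of an authenticated, trust-annotated payload:</p>

```python
import json
import hashlib

def build_update_payload(node_id, weights, trust):
    """Illustrative MQTT message body (hypothetical schema): the node's
    model update and trust score, serialized as compact JSON, with a
    SHA-256 digest standing in for a real QSA signature."""
    body = {"node": node_id, "trust": round(trust, 3), "weights": weights}
    blob = json.dumps(body, sort_keys=True, separators=(",", ":"))
    body["digest"] = hashlib.sha256(blob.encode("utf-8")).hexdigest()
    return json.dumps(body, sort_keys=True, separators=(",", ":"))
```

In a real deployment the serialized string would be published to an MQTT topic (e.g., a per-cluster update channel) and the digest replaced by a quantum-safe signature verified at the aggregator.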
</sec>
<sec id="s4_11_2">
<label>4.11.2</label>
<title>Interoperability with Legacy Systems</title>
<p>Many current IIoT deployments consist of legacy systems that lack native support for federated learning or cryptographic authentication. To support backward compatibility, FedCognis can operate through lightweight edge gateways that act as federated learning proxies. These gateways can preprocess data, manage QSA authentication, and communicate with the central aggregation server without requiring changes to legacy device firmware or operating systems.</p>
</sec>
<sec id="s4_11_3">
<label>4.11.3</label>
<title>Real-Time Adaptation</title>
<p>Cognitive city environments demand rapid response to anomalies such as traffic congestion, grid overload, or intrusions. FedCognis&#x2019;s asynchronous model update mode supports update intervals as low as 1&#x2013;3 s, with end-to-end detection latency averaging 1.8 s in our WUSTL-IIoTCC simulation. This enables deployment in real-time control systems that require high-frequency updates while still minimizing communication overhead.</p>
</sec>
<sec id="s4_11_4">
<label>4.11.4</label>
<title>Security Compliance and Standards</title>
<p>To align with real-world operational environments, FedCognis adheres to recognized cybersecurity standards:
<list list-type="bullet">
<list-item>
<p><bold>NIST SP 800-53</bold>: Covers model integrity, access control, and adversarial resilience.</p></list-item>
<list-item>
<p><bold>ISO/IEC 27001</bold>: Maps to risk management policies, update validation, and audit logging supported by the QSA module.</p></list-item>
</list></p>
<p>These compliance-ready features make FedCognis suitable for regulated sectors such as energy, public safety, and healthcare infrastructure in smart cities.</p>
<p>The integration roadmap demonstrates that FedCognis is not only a research prototype but a practical solution engineered for real-world IIoT deployments. It addresses the technical and regulatory barriers commonly faced when introducing intelligent anomaly detection systems into existing cognitive city infrastructure.</p>
</sec>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion</title>
<p>In the context of secure anomaly detection within IIoT-enabled Cognitive Cities (IIoTCC), FedCognis emerges as a robust and adaptive federated learning framework designed to address critical challenges related to trust, privacy, and communication efficiency in distributed urban infrastructures. By integrating Quantum Secure Authentication (QSA) and a dynamic trust evaluation mechanism, FedCognis significantly enhances security and adaptability against adversarial threats and concept drift in heterogeneous city systems. The model demonstrated strong performance in noisy and imbalanced data scenarios, achieving 94.5% accuracy, an ROC-AUC of 0.896, and a precision-recall AUC of 0.941. In terms of communication efficiency, FedCognis achieved a 72% reduction in bandwidth usage, along with a trust recovery rate of 0.024 per round, making it well-suited for deployment in bandwidth-constrained smart infrastructure environments. Security-wise, it attained an average score of 96.56%, including 97.8% resistance to model poisoning and 98.5% authentication strength via QSA, confirming its robustness under adversarial conditions. Additionally, the model converged in just 27 training rounds, maintaining a minimal gap of 2.9% between accuracy and precision, while demonstrating sublinear computational complexity of &#x1D4AA;(n<sup>0.9</sup>), indicating strong scalability for real-time applications. FedCognis effectively sets a new intelligent, secure, and trust-aware benchmark for anomaly detection across cognitive city systems. Future extensions, including differential privacy, multi-modal sensor fusion, and dynamic resource scheduling, present exciting opportunities to enhance its capabilities in more critical smart city environments.</p>
</sec>
</body>
<back>
<ack>
<p>The Researcher would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).</p>
</ack>
<sec>
<title>Funding Statement</title>
<p>The Researcher would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).</p>
</sec>
<sec sec-type="data-availability">
<title>Availability of Data and Materials</title>
<p>The data and materials utilized in this review originate from publicly available databases and previously published studies.</p>
</sec>
<sec>
<title>Ethics Approval</title>
<p>Not applicable.</p>
</sec>
<sec sec-type="COI-statement">
<title>Conflicts of Interest</title>
<p>The authors declare no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Rong</surname> <given-names>C</given-names></string-name>, <string-name><surname>OuYang</surname> <given-names>S</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Anomaly detection in QAR data using VAE-LSTM with multihead self-attention mechanism</article-title>. <source>Mob Inf Syst</source>. <year>2022</year>;<volume>2022</volume>(<issue>6</issue>):<fpage>8378187</fpage>. doi:<pub-id pub-id-type="doi">10.1155/2022/8378187</pub-id>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Xie</surname> <given-names>T</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Jiang</surname> <given-names>C</given-names></string-name>, <string-name><surname>Gao</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>X</given-names></string-name></person-group>. <article-title>A robust anomaly detection model for pumps based on the spectral residual with self-attention variational autoencoder</article-title>. <source>IEEE Trans Ind Inform</source>. <year>2024</year>;<volume>20</volume>(<issue>6</issue>):<fpage>9059</fpage>&#x2013;<lpage>69</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TII.2024.3381790</pub-id>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Yao</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Hutabarat</surname> <given-names>W</given-names></string-name>, <string-name><surname>Farnsworth</surname> <given-names>M</given-names></string-name>, <string-name><surname>Tiwari</surname> <given-names>D</given-names></string-name>, <string-name><surname>Tiwari</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Time series anomaly detection in vehicle sensors using self-attention mechanisms</article-title>. <source>IEEE Trans Intell Transp Syst</source>. <year>2024</year>;<volume>25</volume>(<issue>11</issue>):<fpage>15964</fpage>&#x2013;<lpage>76</lpage>. doi:<pub-id pub-id-type="doi">10.1109/tits.2024.3415435</pub-id>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Mishra</surname> <given-names>S</given-names></string-name>, <string-name><surname>Kshirsagar</surname> <given-names>V</given-names></string-name>, <string-name><surname>Dwivedula</surname> <given-names>R</given-names></string-name>, <string-name><surname>Hota</surname> <given-names>C</given-names></string-name></person-group>. <chapter-title>Attention-based Bi-LSTM for anomaly detection on time-series data</chapter-title>. In: <person-group person-group-type="editor"><string-name><surname>Farka&#x0161;</surname> <given-names>I</given-names></string-name>, <string-name><surname>Masulli</surname> <given-names>P</given-names></string-name>, <string-name><surname>Otte</surname> <given-names>S</given-names></string-name>, <string-name><surname>Wermter</surname> <given-names>S</given-names></string-name></person-group>, editors. <source>Artificial Neural Networks and Machine Learning&#x2014;ICANN 2021</source>. <publisher-loc>Cham, Switzerland</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>; <year>2021</year>. p. <fpage>129</fpage>&#x2013;<lpage>40</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-030-86362-3_11</pub-id>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Jiang</surname> <given-names>K</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Ruan</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>J</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>ALAE: self-attention reconstruction network for multivariate time series anomaly identification</article-title>. <source>Soft Comput</source>. <year>2023</year>;<volume>27</volume>(<issue>15</issue>):<fpage>10509</fpage>&#x2013;<lpage>19</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s00500-023-08467-4</pub-id>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Liu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Kumar</surname> <given-names>N</given-names></string-name>, <string-name><surname>Xiong</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Lim</surname> <given-names>WYB</given-names></string-name>, <string-name><surname>Kang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Niyato</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Communication-efficient federated learning for anomaly detection in industrial Internet of Things</article-title>. In: <conf-name>Proceedings of the GLOBECOM 2020&#x2014;2020 IEEE Global Communications Conference</conf-name>; <year>2020 Dec 7&#x2013;11</year>; <publisher-loc>Taipei, Taiwan</publisher-loc>. doi:<pub-id pub-id-type="doi">10.1109/globecom42002.2020.9348249</pub-id>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Matar</surname> <given-names>M</given-names></string-name>, <string-name><surname>Xia</surname> <given-names>T</given-names></string-name>, <string-name><surname>Huguenard</surname> <given-names>K</given-names></string-name>, <string-name><surname>Huston</surname> <given-names>D</given-names></string-name>, <string-name><surname>Wshah</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Multi-head attention based Bi-LSTM for anomaly detection in multivariate time-series of WSN</article-title>. In: <conf-name>Proceedings of the 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)</conf-name>; <year>2023 Jun 11&#x2013;13</year>; <publisher-loc>Hangzhou, China</publisher-loc>. doi:<pub-id pub-id-type="doi">10.1109/AICAS57966.2023.10168670</pub-id>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Najari</surname> <given-names>N</given-names></string-name>, <string-name><surname>Berlemont</surname> <given-names>S</given-names></string-name>, <string-name><surname>Lefebvre</surname> <given-names>G</given-names></string-name>, <string-name><surname>Duffner</surname> <given-names>S</given-names></string-name>, <string-name><surname>Garcia</surname> <given-names>C</given-names></string-name></person-group>. <chapter-title>RESIST: robust transformer for unsupervised time series anomaly detection</chapter-title>. In: <person-group person-group-type="editor"><string-name><surname>Guyet</surname> <given-names>T</given-names></string-name>, <string-name><surname>Ifrim</surname> <given-names>G</given-names></string-name>, <string-name><surname>Malinowski</surname> <given-names>S</given-names></string-name>, <string-name><surname>Bagnall</surname> <given-names>A</given-names></string-name>, <string-name><surname>Shafer</surname> <given-names>P</given-names></string-name>, <string-name><surname>Lemaire</surname> <given-names>V</given-names></string-name></person-group>, editors. <source>Advanced analytics and learning on temporal data</source>. <publisher-loc>Cham, Switzerland</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>; <year>2023</year>. p. <fpage>66</fpage>&#x2013;<lpage>82</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-031-24378-3_5</pub-id>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>You</surname> <given-names>C</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>C</given-names></string-name></person-group>. <chapter-title>sBiLSAN: stacked bidirectional self-attention LSTM network for anomaly detection and diagnosis from system logs</chapter-title>. In: <person-group person-group-type="editor"><string-name><surname>Arai</surname> <given-names>K</given-names></string-name></person-group>, editor. <source>Intelligent systems and applications</source>. <publisher-loc>Cham, Switzerland</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>; <year>2021</year>. p. <fpage>777</fpage>&#x2013;<lpage>93</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-030-82199-9_52</pub-id>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>W</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>G</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>M</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Wen</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Generative adversarial networks for abnormal event detection in videos based on self-attention mechanism</article-title>. <source>IEEE Access</source>. <year>2021</year>;<volume>9</volume>:<fpage>124847</fpage>&#x2013;<lpage>60</lpage>. doi:<pub-id pub-id-type="doi">10.1109/access.2021.3110798</pub-id>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Pandya</surname> <given-names>S</given-names></string-name>, <string-name><surname>Srivastava</surname> <given-names>G</given-names></string-name>, <string-name><surname>Jhaveri</surname> <given-names>R</given-names></string-name>, <string-name><surname>Babu</surname> <given-names>MR</given-names></string-name>, <string-name><surname>Bhattacharya</surname> <given-names>S</given-names></string-name>, <string-name><surname>Maddikunta</surname> <given-names>PKR</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Federated learning for smart cities: a comprehensive survey</article-title>. <source>Sustain Energy Technol Assess</source>. <year>2023</year>;<volume>55</volume>(<issue>5</issue>):<fpage>102987</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.seta.2022.102987</pub-id>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ghadi</surname> <given-names>YY</given-names></string-name>, <string-name><surname>Mazhar</surname> <given-names>T</given-names></string-name>, <string-name><surname>Shah</surname> <given-names>SFA</given-names></string-name>, <string-name><surname>Haq</surname> <given-names>I</given-names></string-name>, <string-name><surname>Ahmad</surname> <given-names>W</given-names></string-name>, <string-name><surname>Ouahada</surname> <given-names>K</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Integration of federated learning with IoT for smart cities applications: challenges and opportunities</article-title>. <source>PeerJ Comput Sci</source>. <year>2022</year>;<volume>8</volume>(<issue>5</issue>):<fpage>e1035</fpage>. doi:<pub-id pub-id-type="doi">10.7717/peerj-cs.1657</pub-id>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Jiang</surname> <given-names>JC</given-names></string-name>, <string-name><surname>Kantarci</surname> <given-names>B</given-names></string-name>, <string-name><surname>Oktug</surname> <given-names>S</given-names></string-name>, <string-name><surname>Soyata</surname> <given-names>T</given-names></string-name></person-group>. <article-title>Federated learning in smart city sensing: challenges and opportunities</article-title>. <source>Sensors</source>. <year>2020</year>;<volume>20</volume>(<issue>21</issue>):<fpage>E6230</fpage>. doi:<pub-id pub-id-type="doi">10.3390/s20216230</pub-id>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Prabowo</surname> <given-names>OM</given-names></string-name>, <string-name><surname>Supangkat</surname> <given-names>SH</given-names></string-name>, <string-name><surname>Mulyana</surname> <given-names>E</given-names></string-name></person-group>. <article-title>Anomaly detection techniques in smart cities: a review from a framework perspective</article-title>. In: <conf-name>Proceedings of the 2021 International Conference on ICT for Smart Society (ICISS)</conf-name>; <year>2021 Aug 2&#x2013;4</year>; <publisher-loc>Bandung, Indonesia</publisher-loc>. doi:<pub-id pub-id-type="doi">10.1109/ICISS53185.2021.9533252</pub-id>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Poorazad</surname> <given-names>SK</given-names></string-name>, <string-name><surname>Benzaid</surname> <given-names>C</given-names></string-name>, <string-name><surname>Taleb</surname> <given-names>T</given-names></string-name></person-group>. <article-title>A novel buffered federated learning framework for privacy-driven anomaly detection in IIoT</article-title>. <comment>arXiv:2408.08722. 2024</comment>. doi:<pub-id pub-id-type="doi">10.48550/arXiv.2408.08722</pub-id>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Janani</surname> <given-names>RP</given-names></string-name>, <string-name><surname>Renuka</surname> <given-names>K</given-names></string-name>, <string-name><surname>Aruna</surname> <given-names>A</given-names></string-name>, <string-name><surname>Lakshmi Narayanan</surname> <given-names>K</given-names></string-name></person-group>. <article-title>IoT in smart cities: a contemporary survey</article-title>. <source>Glob Transit Proc</source>. <year>2021</year>;<volume>2</volume>(<issue>2</issue>):<fpage>187</fpage>&#x2013;<lpage>93</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.gltp.2021.08.069</pub-id>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Garg</surname> <given-names>S</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>H</given-names></string-name>, <string-name><surname>Hu</surname> <given-names>J</given-names></string-name>, <string-name><surname>Kaddoum</surname> <given-names>G</given-names></string-name>, <string-name><surname>Piran</surname> <given-names>MJ</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Toward accurate anomaly detection in industrial Internet of Things using hierarchical federated learning</article-title>. <source>IEEE Internet Things J</source>. <year>2022</year>;<volume>9</volume>(<issue>10</issue>):<fpage>7110</fpage>&#x2013;<lpage>9</lpage>. doi:<pub-id pub-id-type="doi">10.1109/jiot.2021.3074382</pub-id>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Huong</surname> <given-names>TT</given-names></string-name>, <string-name><surname>Bac</surname> <given-names>TP</given-names></string-name>, <string-name><surname>Long</surname> <given-names>DM</given-names></string-name>, <string-name><surname>Luong</surname> <given-names>TD</given-names></string-name>, <string-name><surname>Dan</surname> <given-names>NM</given-names></string-name>, <string-name><surname>Quang</surname> <given-names>LA</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Detecting cyberattacks using anomaly detection in industrial control systems: a federated learning approach</article-title>. <source>Comput Ind</source>. <year>2021</year>;<volume>132</volume>(<issue>7</issue>):<fpage>103509</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.compind.2021.103509</pub-id>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Mothukuri</surname> <given-names>V</given-names></string-name>, <string-name><surname>Khare</surname> <given-names>P</given-names></string-name>, <string-name><surname>Parizi</surname> <given-names>RM</given-names></string-name>, <string-name><surname>Pouriyeh</surname> <given-names>S</given-names></string-name>, <string-name><surname>Dehghantanha</surname> <given-names>A</given-names></string-name>, <string-name><surname>Srivastava</surname> <given-names>G</given-names></string-name></person-group>. <article-title>Federated-learning-based anomaly detection for IoT security attacks</article-title>. <source>IEEE Internet Things J</source>. <year>2022</year>;<volume>9</volume>(<issue>4</issue>):<fpage>2545</fpage>&#x2013;<lpage>54</lpage>. doi:<pub-id pub-id-type="doi">10.1109/jiot.2021.3077803</pub-id>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Rashid</surname> <given-names>MM</given-names></string-name>, <string-name><surname>Khan</surname> <given-names>SU</given-names></string-name>, <string-name><surname>Eusufzai</surname> <given-names>F</given-names></string-name>, <string-name><surname>Redwan</surname> <given-names>MA</given-names></string-name>, <string-name><surname>Sabuj</surname> <given-names>SR</given-names></string-name>, <string-name><surname>Elsharief</surname> <given-names>M</given-names></string-name></person-group>. <article-title>A federated learning-based approach for improving intrusion detection in industrial Internet of Things networks</article-title>. <source>Network</source>. <year>2023</year>;<volume>3</volume>(<issue>1</issue>):<fpage>158</fpage>&#x2013;<lpage>79</lpage>. doi:<pub-id pub-id-type="doi">10.3390/network3010008</pub-id>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Weinger</surname> <given-names>B</given-names></string-name>, <string-name><surname>Kim</surname> <given-names>J</given-names></string-name>, <string-name><surname>Sim</surname> <given-names>A</given-names></string-name>, <string-name><surname>Nakashima</surname> <given-names>M</given-names></string-name>, <string-name><surname>Moustafa</surname> <given-names>N</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>KJ</given-names></string-name></person-group>. <article-title>Enhancing IoT anomaly detection performance for federated learning</article-title>. <source>Digit Commun Netw</source>. <year>2022</year>;<volume>8</volume>(<issue>3</issue>):<fpage>314</fpage>&#x2013;<lpage>23</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.dcan.2022.02.007</pub-id>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>P</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>T</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Enhancing quantum security over federated learning via post-quantum cryptography</article-title>. In: <conf-name>Proceedings of the 2024 IEEE 6th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)</conf-name>; <year>2024 Oct 28&#x2013;31</year>; <publisher-loc>Washington, DC, USA</publisher-loc>. doi:<pub-id pub-id-type="doi">10.1109/TPS-ISA62245.2024.00067</pub-id>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Taheri</surname> <given-names>R</given-names></string-name>, <string-name><surname>Shojafar</surname> <given-names>M</given-names></string-name>, <string-name><surname>Alazab</surname> <given-names>M</given-names></string-name>, <string-name><surname>Tafazolli</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Fed-IIoT: a robust federated malware detection architecture in industrial IoT</article-title>. <source>IEEE Trans Ind Inf</source>. <year>2021</year>;<volume>17</volume>(<issue>12</issue>):<fpage>8442</fpage>&#x2013;<lpage>52</lpage>. doi:<pub-id pub-id-type="doi">10.1109/tii.2020.3043458</pub-id>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Truong</surname> <given-names>HT</given-names></string-name>, <string-name><surname>Ta</surname> <given-names>BP</given-names></string-name>, <string-name><surname>Le</surname> <given-names>QA</given-names></string-name>, <string-name><surname>Nguyen</surname> <given-names>DM</given-names></string-name>, <string-name><surname>Le</surname> <given-names>CT</given-names></string-name>, <string-name><surname>Nguyen</surname> <given-names>HX</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Light-weight federated learning-based anomaly detection for time-series data in industrial control systems</article-title>. <source>Comput Ind</source>. <year>2022</year>;<volume>140</volume>:<fpage>103692</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.compind.2022.103692</pub-id>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ren</surname> <given-names>C</given-names></string-name>, <string-name><surname>Yan</surname> <given-names>R</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>M</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Niyato</surname> <given-names>D</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>QFDSA: a quantum-secured federated learning system for smart grid dynamic security assessment</article-title>. <source>IEEE Internet Things J</source>. <year>2024</year>;<volume>11</volume>(<issue>5</issue>):<fpage>8414</fpage>&#x2013;<lpage>26</lpage>. doi:<pub-id pub-id-type="doi">10.1109/jiot.2023.3321793</pub-id>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Javeed</surname> <given-names>D</given-names></string-name>, <string-name><surname>Saeed</surname> <given-names>MS</given-names></string-name>, <string-name><surname>Ahmad</surname> <given-names>I</given-names></string-name>, <string-name><surname>Adil</surname> <given-names>M</given-names></string-name>, <string-name><surname>Kumar</surname> <given-names>P</given-names></string-name>, <string-name><surname>Islam</surname> <given-names>AKMN</given-names></string-name></person-group>. <article-title>Quantum-empowered federated learning and 6G wireless networks for IoT security: concept, challenges and future directions</article-title>. <source>Future Gener Comput Syst</source>. <year>2024</year>;<volume>160</volume>(<issue>1</issue>):<fpage>577</fpage>&#x2013;<lpage>97</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.future.2024.06.023</pub-id>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Kannan</surname> <given-names>E</given-names></string-name>, <string-name><surname>MJ</surname> <given-names>CMB</given-names></string-name>, <string-name><surname>Ravikumar</surname> <given-names>S</given-names></string-name>, <string-name><surname>Kannan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Vijay</surname> <given-names>K</given-names></string-name></person-group>. <article-title>Quantum-safe federated learning: enhancing data privacy and security</article-title>. In: <conf-name>Proceedings of the 2024 International Conference on Emerging Research in Computational Science (ICERCS)</conf-name>; <year>2024 Dec 12&#x2013;14</year>; <publisher-loc>Coimbatore, India</publisher-loc>. doi:<pub-id pub-id-type="doi">10.1109/ICERCS63125.2024.10895353</pub-id>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Aljrees</surname> <given-names>T</given-names></string-name>, <string-name><surname>Kumar</surname> <given-names>A</given-names></string-name>, <string-name><surname>Singh</surname> <given-names>KU</given-names></string-name>, <string-name><surname>Singh</surname> <given-names>T</given-names></string-name></person-group>. <article-title>Enhancing IoT security through a green and sustainable federated learning platform: leveraging efficient encryption and the quondam signature algorithm</article-title>. <source>Sensors</source>. <year>2023</year>;<volume>23</volume>(<issue>19</issue>):<fpage>8090</fpage>. doi:<pub-id pub-id-type="doi">10.3390/s23198090</pub-id>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Qiao</surname> <given-names>C</given-names></string-name>, <string-name><surname>Li</surname> <given-names>M</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Tian</surname> <given-names>Z</given-names></string-name></person-group>. <article-title>Transitioning from federated learning to quantum federated learning in Internet of Things: a comprehensive survey</article-title>. <source>IEEE Commun Surv Tutor</source>. <year>2025</year>;<volume>27</volume>(<issue>1</issue>):<fpage>509</fpage>&#x2013;<lpage>45</lpage>. doi:<pub-id pub-id-type="doi">10.1109/COMST.2024.3399612</pub-id>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yamany</surname> <given-names>W</given-names></string-name>, <string-name><surname>Moustafa</surname> <given-names>N</given-names></string-name>, <string-name><surname>Turnbull</surname> <given-names>B</given-names></string-name></person-group>. <article-title>OQFL: an optimized quantum-based federated learning framework for defending against adversarial attacks in intelligent transportation systems</article-title>. <source>IEEE Trans Intell Transp Syst</source>. <year>2023</year>;<volume>24</volume>(<issue>1</issue>):<fpage>893</fpage>&#x2013;<lpage>903</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TITS.2021.3130906</pub-id>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Deng</surname> <given-names>H</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>R</given-names></string-name>, <string-name><surname>Ren</surname> <given-names>J</given-names></string-name>, <string-name><surname>Ren</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>PQSF: post-quantum secure privacy-preserving federated learning</article-title>. <source>Sci Rep</source>. <year>2024</year>;<volume>14</volume>(<issue>1</issue>):<fpage>23553</fpage>. doi:<pub-id pub-id-type="doi">10.1038/s41598-024-74377-6</pub-id>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Veeramachaneni</surname> <given-names>V</given-names></string-name></person-group>. <article-title>Dynamic resource allocation framework for resilient and secure IoT communication using federated learning and quantum cryptography</article-title>. <source>J Adv Comput Intell Theory</source>. <year>2025</year>;<volume>7</volume>(<issue>1</issue>):<fpage>41</fpage>&#x2013;<lpage>59</lpage>. doi:<pub-id pub-id-type="doi">10.5281/zenodo.14168731</pub-id>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>F</given-names></string-name>, <string-name><surname>Jiang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>R</given-names></string-name>, <string-name><surname>Wei</surname> <given-names>A</given-names></string-name>, <string-name><surname>Xie</surname> <given-names>J</given-names></string-name>, <string-name><surname>Pang</surname> <given-names>X</given-names></string-name></person-group>. <article-title>A survey of deep anomaly detection in multivariate time series: taxonomy, applications, and directions</article-title>. <source>Sensors</source>. <year>2025</year>;<volume>25</volume>(<issue>1</issue>):<fpage>190</fpage>. doi:<pub-id pub-id-type="doi">10.3390/s25010190</pub-id>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liu</surname> <given-names>F</given-names></string-name>, <string-name><surname>Zhou</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Li</surname> <given-names>X</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>H</given-names></string-name></person-group>. <article-title>A hybrid graph neural network and convolutional autoencoder for robust time series anomaly detection</article-title>. <source>Inf Sci</source>. <year>2024</year>;<volume>642</volume>:<fpage>119362</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.ins.2024.120222</pub-id>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ismail</surname> <given-names>S</given-names></string-name>, <string-name><surname>Dandan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Qushou</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Intrusion detection in IoT and IIoT: comparing lightweight machine learning techniques using TON_IoT, WUSTL-IIOT-2021, and EdgeIIoTset datasets</article-title>. <source>IEEE Access</source>. <year>2025</year>;<volume>13</volume>:<fpage>73468</fpage>&#x2013;<lpage>85</lpage>. doi:<pub-id pub-id-type="doi">10.1109/access.2025.3554083</pub-id>.</mixed-citation></ref>
</ref-list>
</back></article>