<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CSSE</journal-id>
<journal-id journal-id-type="nlm-ta">CSSE</journal-id>
<journal-id journal-id-type="publisher-id">CSSE</journal-id>
<journal-title-group>
<journal-title>Computer Systems Science &#x0026; Engineering</journal-title>
</journal-title-group>
<issn pub-type="ppub">0267-6192</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">36864</article-id>
<article-id pub-id-type="doi">10.32604/csse.2023.036864</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Adaptive Learning Video Streaming with QoE in Multi-Home Heterogeneous Networks</article-title>
<alt-title alt-title-type="left-running-head">Adaptive Learning Video Streaming with QoE in Multi-Home Heterogeneous Networks</alt-title>
<alt-title alt-title-type="right-running-head">Adaptive Learning Video Streaming with QoE in Multi-Home Heterogeneous Networks</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Vijayashaarathi</surname><given-names>S.</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><email>vijayashaarathis@gmail.com</email></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>NithyaKalyani</surname><given-names>S.</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<aff id="aff-1"><label>1</label><institution>Department of Electronics and Communication, Sona College of Technology</institution>, <addr-line>Salem, 636005, Tamil Nadu</addr-line>, <country>India</country></aff>
<aff id="aff-2"><label>2</label><institution>Department of Information Technology, K. S. R. College of Engineering</institution>, <addr-line>Tiruchengode, 637215, Tamil Nadu</addr-line>, <country>India</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: S. Vijayashaarathi. Email: <email>vijayashaarathis@gmail.com</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2023</year></pub-date>
<pub-date date-type="pub" publication-format="electronic">
<day>31</day>
<month>3</month>
<year>2023</year>
</pub-date>
<volume>46</volume>
<issue>3</issue>
<fpage>2881</fpage>
<lpage>2897</lpage>
<history>
<date date-type="received">
<day>14</day>
<month>10</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>21</day>
<month>12</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Vijayashaarathi and NithyaKalyani</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Vijayashaarathi and NithyaKalyani</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CSSE_36864.pdf"></self-uri>
<abstract>
<p>In recent years, real-time video streaming has grown in popularity. The growing popularity of the Internet of Things (IoT) and other wireless heterogeneous networks mandates that network resources be carefully apportioned among versatile users in order to achieve the best Quality of Experience (QoE) and performance objectives. Most researchers have focused on Forward Error Correction (FEC) techniques when attempting to strike a balance between QoE and performance. However, as network capacity increases, the performance degrades, impacting the live visual experience. Recently, Deep Learning (DL) algorithms have been successfully integrated with FEC to stream videos across multiple heterogeneous networks, but these algorithms need to be adapted to improve the experience without increasing packet loss and delay. To address this challenge, this paper proposes a novel intelligent algorithm that streams video in multi-home heterogeneous networks based on network-centric characteristics. The proposed framework contains modules such as the Intelligent Content Extraction Module (ICEM), the Channel Status Monitor (CSM), and Adaptive FEC (AFEC). The framework adopts the Cognitive Learning-based Scheduling (CLS) Module, which works on the deep Reinforced Gated Recurrent Networks (RGRN) principle and embeds these networks alongside the FEC to achieve better performance. The complete framework was developed using the Objective Modular Network Testbed in C&#x002B;&#x002B; (OMNET&#x002B;&#x002B;), the Internet networking (INET) framework, and Python 3.10, with Keras as the front end and TensorFlow 2.10 as the back end. With extensive experimentation, the proposed model outperforms the other existing intelligent models in terms of improving the QoE, minimizing the End-to-End Delay (EED), and maintaining the highest accuracy (98%) and a lower Root Mean Square Error (RMSE) value of 0.001.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Real-time video streaming</kwd>
<kwd>IoT</kwd>
<kwd>multi-home heterogeneous networks</kwd>
<kwd>forward error coding</kwd>
<kwd>deep reinforced gated recurrent networks</kwd>
<kwd>QoE</kwd>
<kwd>prediction accuracy</kwd>
<kwd>RMSE</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>In recent years, the increasing popularity of IoT and wireless communication networks has enabled users to access their networks and stream videos anywhere. The proliferating wireless infrastructure offers a wide range of access technologies, including Wireless Fidelity (Wi-Fi), Wireless Local-Area Networks (WLAN), Worldwide Interoperability for Microwave Access (Wi-MAX), Institute of Electrical and Electronics Engineers 802.11 (IEEE 802.11), and even mobile cellular communication [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>].</p>
<p>According to a report [<xref ref-type="bibr" rid="ref-3">3</xref>], due to the exponential growth of these wireless technologies, real-time video will most likely account for 90% of the increase in network traffic by 2023. A single wireless network cannot deliver good video-sharing quality, given its limited capacity, fragility, and irregular coverage. While cellular networks like Universal Mobile Telecommunications Service (UMTS) and Global System for Mobile (GSM) communication might offer a stable connection, they fall short when it comes to providing the highest Quality of Service (QoS). Despite having excellent coverage and faster data rates, Long Term Evolution (LTE) and Wi-MAX are not widely used [<xref ref-type="bibr" rid="ref-4">4</xref>]. Consequently, users must equip their devices with several interfaces to connect to several networks simultaneously and obtain multi-home access. To deliver a better QoE in heterogeneous networks with multi-homed clients, effective coding schemes and Adaptive Forward Error Coding (AFEC) methods are used in existing work [<xref ref-type="bibr" rid="ref-5">5</xref>&#x2013;<xref ref-type="bibr" rid="ref-7">7</xref>]. Nonetheless, all existing approaches use only static network patterns to predict future behavior, ignoring the relationship between current and future states; this causes these algorithms to fail as the network changes dynamically [<xref ref-type="bibr" rid="ref-8">8</xref>&#x2013;<xref ref-type="bibr" rid="ref-10">10</xref>].</p>
<p>To achieve a better QoE and higher performance, recent studies have explored the advantages of machine learning and DL architectures in AFEC for real-time video transmission in multi-path environments. Long Short-Term Memory (LSTM) [<xref ref-type="bibr" rid="ref-11">11</xref>] has recently attracted many researchers to embed such networks with AFEC. Although Deep Learning (DL) methods improve performance, video streaming applications for multi-homed clients continue to suffer from packet losses, distortion, low Packet Delivery Ratio (PDR), and latency [<xref ref-type="bibr" rid="ref-12">12</xref>&#x2013;<xref ref-type="bibr" rid="ref-15">15</xref>]. Motivated by the above drawbacks, this paper proposes a novel intelligent network- and content-aware framework to achieve a better QoE with high performance. Adaptive Reinforced Gated Recurrent Neural Networks (ARGRN) are introduced for scheduling packets according to different network conditions. This is the first framework to incorporate a Gated Recurrent Neural Network (GRNN) with AFEC to improve QoE and performance, and it may open a new gateway for multi-path video streaming research.</p>
<p>The main contributions of this research are listed as follows:
<list list-type="order">
<list-item>
<p>Embed the Reinforced Gated Recurrent Networks (RGRN) with FEC to solve the problem of streaming video packets in a network that changes over time.</p></list-item>
<list-item>
<p>Network Content-Aware Transmission (NCAT) is adopted to save bandwidth, thus reducing the transmission delay and increasing performance.</p></list-item>
<list-item>
<p>Extensive experiments and novel evaluation measures have been adopted to prove the excellence of the proposed model when compared with other existing algorithms.</p></list-item>
</list></p>
<p>The paper&#x2019;s organization is as follows: related works by different authors are reviewed in Section 2. The detailed operation of the proposed model is discussed in Section 3. The experimental setup, performance analysis, and comparisons with current systems are described in Section 4. Finally, Section 5 presents the conclusion and future improvements.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Works</title>
<p>By assessing the continued results of Adaptive Video Streaming (AVS) for heterogeneous video transmission, the author [<xref ref-type="bibr" rid="ref-16">16</xref>] investigates the crucial concept of QoE design. By incorporating a coordinated telecom framework, this study brings the theoretical models closer to practical deployment. However, the community still requires modern approaches: in areas like video coding, multiuser communication, and broadcasting networks, academia and industry will need to collaborate to develop effective techniques.</p>
<p>Author [<xref ref-type="bibr" rid="ref-17">17</xref>] presents LSTM-QoE, a recurrent neural network-based QoE prediction model built on an LSTM network. LSTM-QoE is a cascade of LSTM units designed to capture the complex nonlinear dependencies and temporal dynamics associated with time-varying QoE. Based on an analysis of several publicly available continuous QoE datasets, the method is shown to represent QoE characteristics well. A comparison with the top-performing QoE prediction models demonstrates the proposed model&#x2019;s remarkable performance across these datasets. Additionally, this work illustrates the practicality of the state-space perspective for LSTM-QoE. Nevertheless, the framework&#x2019;s main flaw is that it has trouble handling heterogeneous information over time.</p>
<p>Author [<xref ref-type="bibr" rid="ref-18">18</xref>] presented Video Quality Aware Resource Allocation (ViQARA), a perceptual QoE-based Resource Allocation (RA) method for video streaming in cellular networks. ViQARA employs the most recent continuous QoE models and combines generalized and practical RA strategies. This work illustrates how ViQARA, compared to conventional throughput-based RA methods, may offer a notable increase in the users&#x2019; perceived QoE and a significant drop in rebuffering. The proposed algorithm also enables better QoE optimization of the available resources when the mobile network lacks resources or suffers long packet-transmission delays. However, network investment and Content Delivery Network (CDN) usage costs increase with this technique.</p>
<p>Another quality-aware multi-source video streaming scheme for Content-Centric Networking (CCN) is suggested by [<xref ref-type="bibr" rid="ref-19">19</xref>]. First, various storage methods are considered for delivering video recordings between CCN nodes. Second, an adaptive real-time video system with adequate caching is built: the Adaptive Video Streaming with Distributed Caching (AVSDC) algorithm is designed to maintain QoE while sharing data between good sources. The AVSDC algorithm accounts for the delivery of AVS and accordingly alters the layers in video transmission when source switching occurs, based on a QoE model that captures the effect of stalling. In terms of QoE determined by human subjective tests, experimental results show that the AVSDC algorithm works better than Dynamic Adaptive Streaming over HTTP (DASH) on the CCN platform. The fundamental drawback of this arrangement is its increased computational complexity.</p>
<p>A novel cooperative QoE-based adaptive video streaming scheme deployed on Mobile Edge Computing (MEC) servers is proposed in [<xref ref-type="bibr" rid="ref-20">20</xref>]. The suggested plan can be implemented to maintain a suitable QoE level for each client throughout an entire video-conferencing session. Extensive simulations have investigated how the proposed scheme performs. In contrast to prior methods, the results demonstrate that high productivity is obtained through a coordinated effort across MEC servers, leveraging explicit window-size adaptation, cooperative prefetching, and handover among the edges. However, this system&#x2019;s drawback is the requirement for increased transmission capacity for streaming.</p>
<p>The author [<xref ref-type="bibr" rid="ref-21">21</xref>] proposes CNN-QoE, a potent Temporal Convolutional Network (TCN)-based model for continuously predicting QoE from sequential data features. The proposed model exploits the advantages of the TCN to further increase the accuracy of QoE prediction while overcoming computational complexity limitations. A thorough investigation shows that the presented framework can provide high QoE prediction performance on both PCs and mobile devices, outperforming existing methods. In either scenario, however, the arrangement results in handover delays.</p>
<p>Flex-Steward (FS) is a program made by [<xref ref-type="bibr" rid="ref-22">22</xref>] that improves the joint QoE of adaptive video for multiple clients in real time while sharing bottleneck bandwidth. The term <italic>&#x2018;Joint QoE Improvement&#x2019;</italic> refers to improving QoE consistency among users of different video devices who use separate services and have multiple demands. FS facilitates learning at the network edge and deploys an adaptive bitrate algorithm based on Neural Networks (NN). A trained NN model recommends the right bitrate for video chunks requested by clients sharing the same bottleneck transmission capacity. In terms of joint QoE enhancement, the results show that FS reduces unfairness by between 10.9% and 41.7%. This system&#x2019;s primary constraint is that it incurs additional time complexity and resources throughout the cycle.</p>
<p>The author [<xref ref-type="bibr" rid="ref-23">23</xref>] examines and develops the processing and data-transmission capabilities of HTTP Adaptive Streaming (HAS) for live web-based video with different frame rates and resolutions. To evaluate the viewer experience of live video channels, this work also presents a resource-aware QoE model. The framework then provides a QoE-driven HAS channel paradigm to enhance the overall client QoE. Using a heuristic solution, the framework transforms the maximization issue into a Multidimensional Knapsack Problem (MKP). The results of the experimental analysis demonstrated the suggested method&#x2019;s viability compared to benchmark setups. However, this system requires greater computational energy.</p>
<p>Author [<xref ref-type="bibr" rid="ref-24">24</xref>] offers a novel Adaptive Bitrate Algorithm (ABR) that can reduce traffic volume while keeping QoE above a target. Customers&#x2019; desires or CDN budgets may be considered when streaming service providers set the target QoE. Each segment selects an acceptable bitrate by assessing QoE and traffic volume, so that all bitrate decisions account for the upcoming chunks based on predicted future throughput and buffer occupancy changes. According to flow-based simulation, the QoE is better than that of existing algorithms, while network traffic is reduced by an average of 18.3% to 51.2% in the mobile environment and by 1.2% to 38.3% in the broadband environment. However, this scheme suffers handover problems over long-range communication. Based on the related works, the research gap is that AVS applications for multi-homed clients still suffer from packet loss, low PDR, high EED, and other issues (<xref ref-type="table" rid="table-1">Table 1</xref>), and hence require an intelligent system that meets the following objectives:</p>
<list list-type="bullet">
<list-item>
<p>Maximize the average user&#x2019;s QoE.</p>
</list-item>
<list-item>
<p>Improve long-term video quality without degrading delay performance.</p></list-item>
<list-item>
<p>Reduce the amount of time that services are unavailable.</p></list-item>
<list-item>
<p>Handle the streaming of different video bitrates under congested network conditions while providing sufficient video quality.</p></list-item>
<list-item>
<p>Ensure near-optimal satisfaction and efficiency.</p></list-item>
<list-item>
<p>Minimize resource consumption, improve throughput, and reduce delay.</p></list-item>
</list>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Summary of related works</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Authors</th>
<th>Methodology</th>
<th>Merits</th>
<th>Demerits</th>
</tr>
</thead>
<tbody>
<tr>
<td>Liu et al.</td>
<td>QoE-driven HTTP adaptive streaming (HAS)</td>
<td>Maximized average user QoE</td>
<td>High energy consumption</td>
</tr>
<tr>
<td>Kimura et al.</td>
<td>ABR</td>
<td>Minimized network traffic</td>
<td>High bandwidth</td>
</tr>
<tr>
<td>Feng et al.</td>
<td>Long-term rate control scheme</td>
<td>Less delay</td>
<td>High energy consumption</td>
</tr>
<tr>
<td>Haotong et al.</td>
<td>Virtual network embedding (VNE)</td>
<td>Minimized service interruption time</td>
<td>Increased resource utilization</td>
</tr>
<tr>
<td>Samira et al.</td>
<td>Content-aware and path-aware (CAPA)</td>
<td>High video quality</td>
<td>High energy consumption</td>
</tr>
<tr>
<td>Eksert et al.</td>
<td>Intra and inter-cluster link scheduling</td>
<td>Near-optimal satisfaction and efficiency</td>
<td>Increased delay and bandwidth</td>
</tr>
<tr>
<td>Guanyu et al.</td>
<td>Video transcoding in ABR streaming</td>
<td>Less resource consumption</td>
<td>Increased time complexity</td>
</tr>
<tr>
<td>Xiongli et al.</td>
<td>Blind DL-driven method</td>
<td>High throughput</td>
<td>High resource complexity</td>
</tr>
<tr>
<td>Ghadiyaram et al.</td>
<td>QoE-live mobile stall video database-II</td>
<td>Less delay</td>
<td>High computational complexity</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s3">
<label>3</label>
<title>Proposed Methodology</title>
<p><xref ref-type="fig" rid="fig-1">Fig. 1</xref> shows the proposed model for intelligent and AFEC-based video transmission over multiple communication interfaces. The proposed system consists of (a) an Intelligent Content Extraction module, (b) a Cognitive Learning-based Scheduling and FEC (CLS-FEC) module, and (c) a Channel Monitoring module.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Proposed model for intelligent and AFEC for video transmission</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CSSE_36864-fig-1.tif"/>
</fig>
<p>In the first module, the transmitted video is converted into corresponding frames by a frame splitter, in which the principle of saliency mapping extracts the actual contents. In parallel, the channel properties of the multiple paths are monitored and collected by the channel monitor in a pipelined fashion. Finally, the combined parameters from the above modules are passed to the CLS-FEC module, which consists of AFEC for forward error correction and a DL architecture for scheduling. The video transmitter then transmits the packets to the different users. Decoding of the video stream is done at the receiver: when packets arrive, the decoder reassembles them in the correct order and delivers the stream to the user&#x2019;s video player.</p>
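<p>As an illustration of the sender-side flow described above, the following minimal Python sketch wires the three modules together. All class and function names are hypothetical stand-ins, not the authors&#x2019; implementation; in particular, the round-robin scheduler is a placeholder for the RGRN-plus-AFEC decision logic.</p>

```python
# Hypothetical sketch of the sender pipeline: frame splitter plus saliency
# split, channel status monitor, and a CLS-FEC-style scheduler. Names and
# values are illustrative stand-ins for the paper's modules.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    salient: bytes   # foreground content from the saliency mapper
    static: bytes    # background content

def frame_splitter(video: List[bytes]) -> List[Frame]:
    # Placeholder split: treat the first half of each frame's bytes as
    # salient (foreground) content and the rest as static background.
    return [Frame(f[: len(f) // 2], f[len(f) // 2 :]) for f in video]

def channel_status(paths: int) -> List[dict]:
    # Placeholder channel monitor: one parameter record per path.
    return [{"bandwidth": 10.0, "rss": -60.0, "eed": 0.02, "snr": 25.0}
            for _ in range(paths)]

def cls_fec_schedule(frames, channels):
    # Round-robin stand-in for the CLS-FEC scheduler: assign each frame a
    # path index; the real module uses RGRN predictions plus AFEC coding.
    return [(i % len(channels), frm) for i, frm in enumerate(frames)]

video = [b"framedata-0", b"framedata-1", b"framedata-2"]
schedule = cls_fec_schedule(frame_splitter(video), channel_status(2))
print([path for path, _ in schedule])   # prints [0, 1, 0]
```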
<sec id="s3_1">
<label>3.1</label>
<title>Network Model</title>
<p>Consider a heterogeneous wireless network with <italic>P</italic> routing paths; two transmission pathways are assumed in this work. The Gilbert simulator is used to model the losses on each path. The path state <italic>x(t)</italic> at time <italic>&#x2018;t&#x2019;</italic> is either 1 (good) or 0 (bad). If <italic>x(t) &#x003D; 1</italic>, the packet is delivered successfully; if <italic>x(t) &#x003D; 0</italic>, the packet is lost. Suppose that <italic>M</italic><sub><italic>t</italic></sub> is the maximum transmission unit and that &#x2018;<italic>O&#x2019;</italic> represents the frame&#x2019;s output bits. Then &#x2018;N&#x2019;, the number of packets per video frame, is expressed as <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref></p>
<p><disp-formula id="eqn-1">
<label>(1)</label>
<mml:math id="mml-eqn-1" display="block"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mi>O</mml:mi><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>M</mml:mi><mml:mi>t</mml:mi></mml:math></disp-formula></p>
<p>The transmission loss rate of each content frame <italic>&#x201C;g&#x201D;</italic> on path <italic>&#x201C;p&#x201D;</italic>, computed over the packet trace of path <italic>&#x201C;p&#x201D;</italic> with size <italic>&#x201C;N</italic><sub><italic>p</italic></sub><italic>&#x201D;</italic>, is expressed as <xref ref-type="disp-formula" rid="eqn-2">Eq. (2)</xref></p>
<p><disp-formula id="eqn-2">
<label>(2)</label>
<mml:math id="mml-eqn-2" display="block"><mml:mi>F</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>g</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mrow><mml:mtext>N</mml:mtext><mml:mi>p</mml:mi></mml:mrow><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mi>p</mml:mi></mml:mrow></mml:msubsup><mml:mi>&#x03B2;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>C</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>p</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msup><mml:mo>==</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula>where C(p) is the binary packet-delivery trace of path <italic>&#x201C;p&#x201D;</italic>, e.g., {1,0,1,0,&#x2026;,1}, and <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi mathvariant="bold-italic">&#x03B2;</mml:mi><mml:mo mathvariant="bold" stretchy="false">&#x2192;</mml:mo></mml:math></inline-formula> the indicator function. In practical networks, the exponential distribution described in [<xref ref-type="bibr" rid="ref-25">25</xref>] can be used to estimate the probability that packets on path &#x201C;p&#x201D; are lost beyond the deadline &#x201C;T&#x201D;, as given in <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>.</p>
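<p>The Gilbert loss model and the empirical rate of Eq. (2) can be sketched as follows. The transition probabilities are illustrative, not values from the paper, and the summand mirrors Eq. (2) as written: the indicator is averaged over the delivery trace C(p).</p>

```python
# Hedged sketch of the two-state Gilbert model: a binary delivery trace
# C(p) is generated, then the per-path rate of Eq. (2) is computed as the
# average of the indicator over the trace. Probabilities are illustrative.
import random

def gilbert_trace(n, p_good_to_bad=0.1, p_bad_to_good=0.5, seed=7):
    """Return a delivery trace C(p): 1 while the path is good, 0 while bad."""
    random.seed(seed)
    state, trace = 1, []
    for _ in range(n):
        trace.append(state)
        if state == 1:
            # leave the good state with probability p_good_to_bad
            state = 1 if random.random() > p_good_to_bad else 0
        else:
            # leave the bad state with probability p_bad_to_good
            state = 0 if random.random() > p_bad_to_good else 1
    return trace

def rate_eq2(trace):
    # Eq. (2) as written: (1/Np) * sum of the indicator beta(C(p)_i == 1).
    return sum(1 for c in trace if c == 1) / len(trace)

trace = gilbert_trace(1000)
rate = rate_eq2(trace)   # fraction of slots in the good (delivering) state
```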
<p><disp-formula id="eqn-3">
<label>(3)</label>
<mml:math id="mml-eqn-3" display="block"><mml:msup><mml:mi>F</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup><mml:mo stretchy="false">(</mml:mo><mml:mi>g</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>D</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>g</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x003E;</mml:mo><mml:mi>T</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula>where D(g, p) is the EED for content frame <italic>&#x201C;g&#x201D;</italic>, including delivery, processing, and propagation latency. The total delay is calculated for each path that carries the video streaming flow and is expressed as <xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref></p>
<p><disp-formula id="eqn-4">
<label>(4)</label>
<mml:math id="mml-eqn-4" display="block"><mml:mi>X</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>T</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>X</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>p</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:mi>T</mml:mi><mml:mo>&#x2217;</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>g</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>Here, X(T) <bold>&#x2192;</bold> the size of the cumulative sub-streaming flow over [0, t] on path p, and X(p) <bold>&#x2192;</bold> the long-term average video streaming rate on that path. As mentioned in [<xref ref-type="bibr" rid="ref-26">26</xref>], the model employs a work-conserving queueing system, which is then used to calculate the overall delay. As a result, the overall EED for frame <italic>&#x201C;g&#x201D;</italic> over the full path is the sum of all queuing delays associated with path <italic>&#x201C;p&#x201D;</italic>. Mathematically, the overall EED is expressed as <xref ref-type="disp-formula" rid="eqn-5">Eq. (5)</xref>:</p>
<p><disp-formula id="eqn-5">
<label>(5)</label>
<mml:math id="mml-eqn-5" display="block"><mml:mi>D</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>g</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>F</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>g</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>X</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>p</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>g</mml:mi><mml:mo>,</mml:mo><mml:mi>p</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>d</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula>where F(d) is the fixed delay on each path. The proposed system uses the distortion model from [<xref ref-type="bibr" rid="ref-24">24</xref>].</p>
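<p>A small numerical illustration of these delay quantities, assuming (as in the discussion around Eq. (3)) an exponentially distributed end-to-end delay; the per-hop delays, fixed delay, and deadline below are made-up values for demonstration only.</p>

```python
# Illustrative computation under the exponential-delay assumption cited
# around Eq. (3): the probability that a frame misses its deadline T is
# the exponential tail P(D > T) = exp(-T / mean_delay).
import math

def deadline_miss_prob(mean_delay, deadline):
    """Tail probability P(D > T) for exponentially distributed delay D."""
    return math.exp(-deadline / mean_delay)

def total_eed(queue_delays, fixed_delay):
    # Overall EED for a frame: the sum of per-hop queuing delays on the
    # path plus the fixed delay F(d), as described for Eq. (5).
    return sum(queue_delays) + fixed_delay

eed = total_eed([0.010, 0.004, 0.006], fixed_delay=0.030)  # 0.05 s
print(round(deadline_miss_prob(eed, deadline=0.1), 4))     # prints 0.1353
```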
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Intelligent Content Extraction Module</title>
<p>The content extraction module employs the visual saliency technique to extract the visual content from the video frames. To determine the salient objects in the videos, a saliency model incorporating the cues of object intensity, color, and motion has been developed [<xref ref-type="bibr" rid="ref-27">27</xref>,<xref ref-type="bibr" rid="ref-28">28</xref>]. This simple statistical model works on both videos and images; it combines conditional random fields with local information such as color and motion signals to create saliency maps. Several bottom-up techniques have previously been presented to identify salient objects in videos, including a multiscale method for video saliency map computation that combines motion cues to extract features. This article employs one-dimensional Gated Recurrent Units (GRU) to reduce pixel processing; a full explanation of the GRU is given in Section 3.4. Each frame of the video consists of static content (background) and specific content (foreground). In this GRU-based process, saliency maps represent each frame&#x2019;s different contents based on the pixels&#x2019; color, intensity, and luminance values. Pixels with saliency content are labeled &#x201C;higher-pixels&#x201D;, whereas those without saliency content are labeled &#x201C;lower-pixels&#x201D;. Both categories of pixels are stored in the same buffer, which is used for encoding and transmission according to the network characteristics. <xref ref-type="table" rid="table-2">Table 2</xref> shows the GRU specifications used for saliency extraction.</p>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Specification of the GRU used for the extraction of saliency maps</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Parameters used for GRU training</th>
<th>Specification</th>
</tr>
</thead>
<tbody>
<tr>
<td>Number of cells</td>
<td>10</td>
</tr>
<tr>
<td>Learning rate</td>
<td>0.001</td>
</tr>
<tr>
<td>Dropout ratio</td>
<td>0.2</td>
</tr>
<tr>
<td>Number of hidden layers</td>
<td>100</td>
</tr>
</tbody>
</table>
</table-wrap>
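<p>The pixel-labeling step described above can be sketched as follows; the thresholding rule is an illustrative stand-in for the GRU-based saliency mapper, not the authors&#x2019; exact scoring.</p>

```python
# Toy sketch of the "higher-pixel" / "lower-pixel" labeling: pixels whose
# score exceeds a threshold are marked salient. A single grayscale value
# stands in for the combined color/intensity/luminance cue.
def label_pixels(frame, threshold=128):
    """frame: 2-D list of grayscale values; returns a parallel label grid."""
    labels = []
    for row in frame:
        labels.append(["higher" if px > threshold else "lower" for px in row])
    return labels

frame = [[200, 30], [150, 90]]
print(label_pixels(frame))   # prints [['higher', 'lower'], ['higher', 'lower']]
```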
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Network Channel Status Monitor</title>
<p>The network channel monitor is responsible for collecting the path status information from the multiple heterogeneous paths and directing it to the cognitive learner, where these properties are used to train the proposed DL method. Based on the network parameters, the proposed model predicts the best QoS path among the multiple paths and schedules the frame packets accordingly. The network characteristics, such as Available Bandwidth (B<sub>a</sub>), Received Signal Strength (RSS), End-to-End Delay (EED), and the noise and distortion level (Signal-to-Noise Ratio (SNR)), are measured and then used to train the proposed model.</p>
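<p>A minimal sketch of the monitor&#x2019;s output, assuming a simple per-path record of the four measured characteristics; the field names and sample values are illustrative, not the paper&#x2019;s data format.</p>

```python
# Hedged sketch of the channel status monitor's output: a per-path feature
# vector of available bandwidth, RSS, EED, and SNR, stacked into a matrix
# suitable for training the scheduler. Values are illustrative.
from dataclasses import dataclass, astuple

@dataclass
class PathStatus:
    bandwidth_mbps: float   # available bandwidth B_a
    rss_dbm: float          # received signal strength
    eed_s: float            # end-to-end delay
    snr_db: float           # signal-to-noise ratio

def feature_matrix(paths):
    # Stack per-path measurements into one training matrix row per path.
    return [list(astuple(p)) for p in paths]

paths = [PathStatus(12.0, -58.0, 0.021, 27.5),
         PathStatus(6.5, -71.0, 0.043, 18.2)]
print(feature_matrix(paths))
```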
</sec>
<sec id="s3_4">
<label>3.4</label>
<title>Cognitive-Based Scheduling and Adaptive Forward Error Correction</title>
<p>The proposed AFEC ensemble predicts the best paths based on network and content properties by combining Q-Learning and GRU networks. Q-learning and the GRU are described in the following subsections.</p>
<sec id="s3_4_1">
<label>3.4.1</label>
<title>Q-Learning Concepts</title>
<p>A saliency model, including the signals of object intensity, color, and motion, has been constructed in order to identify the salient items from the videos [<xref ref-type="bibr" rid="ref-29">29</xref>]. A straightforward statistical method applies to both images and videos. This method combines local information like color and motion signals with conditionally random fields to produce the saliency maps. The most significant elements in movies have been identified using different bottom-up approaches. A multiscale video saliency map computation method derives features from movies by combining motion cues. This article uses a one-dimensional GRU to reduce pixel processing. The aim of RL as feedback to the learning model is to maximize reward. According to Q-learning, significant reinforcement learning success It is the procedural version of the off-policy model-free method, often known as the Q-learning algorithm. The standard algorithm for resolving related problems is Q-learning. Using samples collected during interactions with the environment, the Q-function can approximate the state of action pairs [<xref ref-type="bibr" rid="ref-30">30</xref>,<xref ref-type="bibr" rid="ref-31">31</xref>]. <xref ref-type="disp-formula" rid="eqn-6">Eq. (6)</xref> represents the discrete-time Q-function. A Markov Decision Process (MDP) generates Q-learning as a reinforcement learning method. 
This MDP defines the state set, action set, transition probabilities, and rewards as <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mrow><mml:mo>(</mml:mo><mml:mi>S</mml:mi><mml:mo>,</mml:mo><mml:mi>A</mml:mi><mml:mo>,</mml:mo><mml:mi>P</mml:mi><mml:mo>,</mml:mo><mml:mi>R</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>, with transition probability <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:msubsup><mml:mi>P</mml:mi><mml:mrow><mml:mi>z</mml:mi><mml:msup><mml:mi>z</mml:mi><mml:mrow><mml:msup><mml:mi></mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mi>a</mml:mi></mml:mrow></mml:msubsup></mml:math></inline-formula>. Let <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>z</mml:mi></mml:math></inline-formula> be the current state and <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:msup><mml:mi>z</mml:mi><mml:mrow><mml:msup><mml:mi></mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:mrow></mml:msup></mml:math></inline-formula> the next state under action <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mo>&#x2018;</mml:mo><mml:msup><mml:mi>a</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:math></inline-formula>.</p>
<p><disp-formula id="eqn-6">
<label>(6)</label>
<mml:math id="mml-eqn-6" display="block"><mml:msubsup><mml:mi>P</mml:mi><mml:mrow><mml:mi>z</mml:mi><mml:msup><mml:mi>z</mml:mi><mml:mrow><mml:msup><mml:mi></mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mi>a</mml:mi></mml:mrow></mml:msubsup><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>b</mml:mi><mml:mrow><mml:mo>{</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mi>z</mml:mi><mml:mrow><mml:msup><mml:mi></mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>z</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>a</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>The state reward function for the transition [ <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi>z</mml:mi></mml:math></inline-formula> , <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:msup><mml:mi>z</mml:mi><mml:mrow><mml:msup><mml:mi></mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:mrow></mml:msup><mml:mo stretchy="false">]</mml:mo></mml:math></inline-formula> is given as <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mo stretchy="false">[</mml:mo><mml:msubsup><mml:mi>R</mml:mi><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>a</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x22C5;</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">]</mml:mo></mml:math></inline-formula>. For the current state, the overall reward function is <xref ref-type="disp-formula" rid="eqn-7">Eq. (7)</xref>.</p>
<p><disp-formula id="eqn-7">
<label>(7)</label>
<mml:math id="mml-eqn-7" display="block"><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:mi>Z</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mi>P</mml:mi><mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msubsup><mml:msubsup><mml:mi>R</mml:mi><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>z</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msubsup></mml:math></disp-formula></p>
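<p>A minimal tabular Q-learning sketch of the off-policy update described above. The state and action spaces (standing in for candidate paths), the toy reward, and the hyperparameters are illustrative assumptions, not the paper&#x2019;s configuration:</p>

```python
import numpy as np

n_states, n_actions = 4, 4       # e.g., 4 candidate heterogeneous paths
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(z, a):
    """Hypothetical environment: returns next state z' and reward R^a_{zz'}."""
    z_next = rng.integers(n_states)
    reward = 1.0 if a == z_next else 0.0   # reward for guessing the best path
    return z_next, reward

z = 0
for t in range(5000):
    # epsilon-greedy action selection over the current Q estimates
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[z]))
    z_next, r = step(z, a)
    # Standard off-policy Q-learning update
    Q[z, a] += alpha * (r + gamma * Q[z_next].max() - Q[z, a])
    z = z_next
```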
</sec>
<sec id="s3_4_2">
<label>3.4.2</label>
<title>Gated Recurrent Units</title>
<p>GRU is the most appealing LSTM variant [<xref ref-type="bibr" rid="ref-32">32</xref>,<xref ref-type="bibr" rid="ref-33">33</xref>]. Proposed in [<xref ref-type="bibr" rid="ref-34">34</xref>,<xref ref-type="bibr" rid="ref-35">35</xref>], it merges the forget gate and input gate into a single update gate. The network supports long input sequences and retains long-term memory, while its complexity is greatly reduced compared with the LSTM network. The following <xref ref-type="disp-formula" rid="eqn-8">Eqs. (8)</xref>&#x2013;<xref ref-type="disp-formula" rid="eqn-11">(11)</xref> describe the characteristics of the GRU:</p>
<p><disp-formula id="eqn-8">
<label>(8)</label>
<mml:math id="mml-eqn-8" display="block"><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2299;</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2299;</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p><disp-formula id="eqn-9">
<label>(9)</label>
<mml:math id="mml-eqn-9" display="block"><mml:mrow><mml:mover><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x007E;</mml:mo></mml:mover></mml:mrow><mml:mo>=</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>h</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>U</mml:mi><mml:mrow><mml:mi>h</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>r</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2299;</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>h</mml:mi></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p><disp-formula id="eqn-10">
<label>(10)</label>
<mml:math id="mml-eqn-10" display="block"><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>h</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>U</mml:mi><mml:mrow><mml:mi>z</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>z</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p><disp-formula id="eqn-11">
<label>(11)</label>
<mml:math id="mml-eqn-11" display="block"><mml:msub><mml:mi>r</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>h</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>U</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>b</mml:mi><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>The overall GRU characteristic is represented by <xref ref-type="disp-formula" rid="eqn-12">Eq. (12)</xref>:</p>
<p><disp-formula id="eqn-12">
<label>(12)</label>
<mml:math id="mml-eqn-12" display="block"><mml:mi>P</mml:mi><mml:mo>=</mml:mo><mml:mi>G</mml:mi><mml:mi>R</mml:mi><mml:mi>U</mml:mi><mml:mstyle scriptlevel="0"><mml:mrow><mml:mo maxsize="2.047em" minsize="2.047em">(</mml:mo></mml:mrow></mml:mstyle><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mo stretchy="false">[</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo></mml:mrow></mml:msub><mml:msub><mml:mi>h</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo></mml:mrow></mml:msub><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>,</mml:mo></mml:mrow></mml:msub><mml:msub><mml:mi>r</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>W</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mi>B</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mi>&#x03B7;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="italic">t</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">n</mml:mi><mml:mi mathvariant="italic">h</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo stretchy="false">]</mml:mo><mml:mstyle scriptlevel="0"><mml:mrow><mml:mo maxsize="2.047em" minsize="2.047em">)</mml:mo></mml:mrow></mml:mstyle></mml:math></disp-formula>where W<sub>t</sub> <bold>&#x02192;</bold> weights and B<sub>t</sub> <bold>&#x02192;</bold> bias weights at the current instant, <italic>z</italic><sub><italic>t</italic></sub>, <italic>r</italic><sub><italic>t</italic></sub> <bold>&#x02192;</bold> update and reset gates, <italic>x</italic><sub><italic>t</italic></sub> <bold>&#x02192;</bold> input feature at the current state, <italic>y</italic><sub><italic>t</italic></sub> <bold>&#x02192;</bold> output state, and <italic>h</italic><sub><italic>t</italic></sub> <bold>&#x02192;</bold> the module&#x2019;s output at the current instant.</p>
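<p>Eqs. (8)&#x2013;(11) can be sketched as a single NumPy GRU step. The weight shapes and the toy input sequence are assumptions; the interpolation gate of Eq. (8) is written here as the update gate z<sub>t</sub> of Eq. (10), the standard GRU formulation:</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU step following Eqs. (8)-(11); weight shapes are assumptions."""
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev + bz)               # Eq. (10): update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev + br)               # Eq. (11): reset gate
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev) + bh)   # Eq. (9): candidate, g = tanh
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde               # Eq. (8): interpolation
    return h_t

# Toy dimensions and randomly initialized parameters (illustrative only)
d_in, d_h = 4, 8
rng = np.random.default_rng(1)
params = [rng.standard_normal(s) * 0.1
          for s in [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]

h = np.zeros(d_h)
for t in range(10):                 # run a short hypothetical input sequence
    h = gru_cell(rng.standard_normal(d_in), h, *params)
```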
</sec>
<sec id="s3_4_3">
<label>3.4.3</label>
<title>Adaptive Learning and Path Prediction</title>
<p>The proposed learning model ensembles Q-learning with the GRU for scheduling and uses a Markov Decision Process (MDP) to select a proper path and schedule transmission in line with the available bandwidth, EED, SNR, and signal intensity. To avoid random exploration in the initial phase, the algorithm is initialized with a partially pre-computed policy applied to the different values using <xref ref-type="disp-formula" rid="eqn-13">Eqs. (13)</xref> and <xref ref-type="disp-formula" rid="eqn-14">(14)</xref>. The proposed Q-based GRU network receives input from the channel status monitor, evaluates the best QoS path, and compares it against pre-calculated rules consisting of several heterogeneous network reward functions. Based on this computation, Q-learning ranks the different paths, schedules the content, and sends it to the FEC coder. Algorithm 1 represents the Q-learning-based selection of QoS-aware paths. Specifically, the MDP for the QoS path environment defines a set <italic>&#x201C;S&#x201D;</italic> of states (nodes) and a group <italic>&#x201C;A&#x201D;</italic> of actions that allow an agent to move between states, where the states represent QoS-aware paths. The reward policy &#x201C;R&#x201D; defines the reward given by an action that selects the best path to transmit the video content without distortion. The main goal of the MDP is to find the optimal paths. More specifically, the MDP consists of a series of &#x201C;n&#x201D; discrete steps t &#x003D; 0, 1, 2, &#x2026;, n, in which the agent observes each part of the network and chooses the best way to move from one network to another. The agent receives an immediate reward once the best path is selected. Rewards are modeled based on <xref ref-type="disp-formula" rid="eqn-15">Eq. (15)</xref>, and the reward function for this decision on the QoS-aware path is modified accordingly.</p>
<p><disp-formula id="eqn-13">
<label>(13)</label>
<mml:math id="mml-eqn-13" display="block"><mml:mi>R</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>P</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2208;</mml:mo><mml:mi>Z</mml:mi></mml:mrow></mml:msub><mml:msubsup><mml:mi>P</mml:mi><mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msubsup><mml:msubsup><mml:mi>R</mml:mi><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>z</mml:mi><mml:mo>,</mml:mo><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>A</mml:mi></mml:mrow><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msubsup></mml:math></disp-formula>where,</p>
<p><disp-formula id="eqn-14">
<label>(14)</label>
<mml:math id="mml-eqn-14" display="block"><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>s</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mi>M</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>B</mml:mi><mml:mo>,</mml:mo><mml:mi>R</mml:mi><mml:mi>S</mml:mi><mml:mi>S</mml:mi><mml:mi>I</mml:mi><mml:mo>,</mml:mo><mml:mi>S</mml:mi><mml:mi>N</mml:mi><mml:mi>R</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></disp-formula></p>
<p><disp-formula id="eqn-15">
<label>(15)</label>
<mml:math id="mml-eqn-15" display="block"><mml:msub><mml:mi>s</mml:mi><mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>s</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mo fence="false" stretchy="false">{</mml:mo><mml:mi>M</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>B</mml:mi><mml:mo>,</mml:mo><mml:mi>R</mml:mi><mml:mi>S</mml:mi><mml:mi>S</mml:mi><mml:mi>I</mml:mi><mml:mo>,</mml:mo><mml:mi>S</mml:mi><mml:mi>N</mml:mi><mml:mi>R</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>M</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo fence="false" stretchy="false">}</mml:mo></mml:math></disp-formula></p>
<p>The mathematical models above, which describe how each state accrues reward, ensure that the best path is chosen. Here the bandwidth, Received Signal Strength Indication (RSSI), SNR, and EED play the major roles in selecting the best path. The proposed DL model is embedded alongside FEC coders that encode the data and correct the bit frames according to the nature of the path selected by the DL model. Finally, each content frame is sent to the video transmitter, which transmits the video frames over the chosen path.</p>
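<p>A hedged sketch of how the state model of Eqs. (13)&#x2013;(15) could rank candidate paths: bandwidth, RSSI, and SNR are maximized while delay is minimized. The normalization ranges (taken loosely from Table 3), the equal weighting, and the sample readings are illustrative assumptions, not the paper&#x2019;s exact policy:</p>

```python
def path_score(b_kbps, rssi_dbm, snr_db, delay_ms):
    """Score a path: higher bandwidth/RSSI/SNR and lower delay are better."""
    # Normalize each metric to roughly [0, 1] against assumed operating ranges
    b = b_kbps / 350.0                      # bandwidth up to 350 kbps (Table 3)
    rssi = (rssi_dbm + 90.0) / 60.0         # assumed RSSI range: -90..-30 dBm
    snr = (snr_db - 15.0) / 15.0            # SNR range 15-30 dB (Table 3)
    d = 1.0 - min(delay_ms / 700.0, 1.0)    # lower delay -> higher score
    return b + rssi + snr + d               # equal weighting (assumption)

# Hypothetical measurements for three heterogeneous interfaces
paths = {
    "wifi":  path_score(300, -50, 28, 120),
    "lte":   path_score(250, -65, 22, 180),
    "wimax": path_score(150, -75, 18, 300),
}
ranked = sorted(paths, key=paths.get, reverse=True)  # best QoS path first
```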
</sec>
<sec id="s3_4_4">
<label>3.4.4</label>
<title>Algorithm-1 for Proposed Model</title>
<p><bold>Step 1.</bold> Initiate the streaming rate X(p), Fix the EED, channel</p>
<p><bold>Step 2.</bold> Compute the bandwidth, RSSI, D, and SNR</p>
<p><bold>Step 3.</bold> Compute the paths and arrange them in descending order</p>
<p><bold>Step 4.</bold> If (R(p) &#x003D;&#x003D; D(t, p)) &#x003C; T, where T &#x003D; threshold reward function, then</p>
<p>Select the best path from the stored data</p>
<p><bold>Step 5.</bold> Allocate the Saliency Contents in the path selected</p>
<p><bold>Step 6.</bold> Encode and transmit to the video controller</p>
<p><bold>Step 7.</bold> Else</p>
<p><bold>Step 8. </bold>Go to Step 3</p>
<p><bold>Step 9. </bold>End</p>
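<p>The steps above can be sketched as a simple control loop. The path records, the threshold, and the encode/transmit hooks below are hypothetical placeholders, not the paper&#x2019;s implementation:</p>

```python
def select_and_transmit(paths, threshold, encode, transmit):
    """paths: list of dicts with 'reward' and 'delay' per candidate path."""
    # Step 3: rank the paths in descending order of reward
    ranked = sorted(paths, key=lambda p: p["reward"], reverse=True)
    for p in ranked:
        # Step 4: accept the first path whose delay stays under the threshold
        if p["delay"] < threshold:
            frames = encode(p)       # Steps 5-6: allocate salient content, encode
            transmit(frames, p)      # hand off to the video controller
            return p
    return None                      # Steps 7-8: no path qualified; re-measure

# Hypothetical usage with two candidate paths and stub hooks
best = select_and_transmit(
    [{"name": "p1", "reward": 0.9, "delay": 120},
     {"name": "p2", "reward": 0.7, "delay": 80}],
    threshold=100,
    encode=lambda p: ["frame"],
    transmit=lambda frames, p: None,
)
```

Here the higher-reward path p1 is considered first but rejected for exceeding the delay threshold, so p2 is selected.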
</sec>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Result and Discussion</title>
<p>The simulation experiments were conducted using OMNET&#x002B;&#x002B; 5.6 interfaced with the INET framework. The INET framework is a software plug-in for OMNET&#x002B;&#x002B; 5.6 that supports most wireless communication network interfaces, such as Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Internet Protocol version 4 (IPv4), IEEE 802.11, and even IPv6; hence, it is used for emulating heterogeneous wireless networks. A Python 3.10-based video codec was developed and explored for various input applications. All the network properties are collected offline and used for training the proposed model. The DL model was developed using the TensorFlow and Keras libraries. The server has one connection, and the client has three connection interfaces. An end-to-end connection is set up between the client and server by binding a pair of Internet Protocol (IP) addresses from the server to the clients. <xref ref-type="table" rid="table-3">Table 3</xref> lists the experiment&#x2019;s parameters, which can be found in the previous work [37].</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Parameters used for the experimentation</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Experimental parameters</th>
<th>Specification</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bandwidth</td>
<td>350 kbps</td>
</tr>
<tr>
<td>Number of routes</td>
<td>4</td>
</tr>
<tr>
<td>Capacity</td>
<td>50&#x2013;350 kbps</td>
</tr>
<tr>
<td>SNR ranges</td>
<td>15&#x2013;30 dB</td>
</tr>
<tr>
<td>Number of wireless networks</td>
<td>4</td>
</tr>
<tr>
<td>Loss rate</td>
<td>3%&#x2013;5%</td>
</tr>
<tr>
<td>Bit rates</td>
<td>600 kbps</td>
</tr>
<tr>
<td>Channel power</td>
<td>30&#x2013;45 dBm</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>To demonstrate the merit of the proposed algorithm, it was compared against existing algorithms such as Earliest Delivery Path First (EDPF), Round-Robin (RR), the Local Balancing (LB) algorithm, and Recurrent Neural Network (RNN)-based Region of Interest (ROI) detectors. The performance evaluation is carried out in three parts: Video Quality Analysis (VQA), End-to-End Delay (EED) analysis, and Path Prediction Analysis (PPA).</p>
<sec id="s4_1">
<label>4.1</label>
<title>Video Quality Analysis</title>
<p>For the analysis of video quality at the receiver side, Peak Signal-to-Noise Ratio (PSNR) and Mean Opinion Score (MOS) are used to evaluate the proposed model. <xref ref-type="table" rid="table-4">Tables 4</xref>&#x2013;<xref ref-type="table" rid="table-6">6</xref> show the PSNR of the different models at different video streaming rates. At the 600 kbps streaming rate, the proposed model already exceeds the PSNR of the other models. As the streaming rate increases, algorithms such as LB, RR, and EDPF degrade drastically, whereas RNN-ROI and the proposed model maintain very good PSNR values in the range of 33&#x2013;40 dB. Even so, the use of Q-GRU gives the proposed network better PSNR than RNN-ROI at the increased streaming rates. The results show that PSNR decreases as the bit rate increases, which affects the video quality received on the user&#x2019;s side. Hence, the experimentation also measures MOS values at the receiver&#x2019;s end.</p>
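<p>For reference, PSNR for 8-bit frames can be computed per frame in the standard way. This sketch shows the metric reported in Tables 4&#x2013;6, not the paper&#x2019;s codec pipeline; the toy frames are hypothetical:</p>

```python
import numpy as np

def psnr(reference, received, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shaped frames."""
    mse = np.mean((reference.astype(float) - received.astype(float)) ** 2)
    if mse == 0:
        return float("inf")              # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 4x4 frame with a single corrupted pixel
ref = np.full((4, 4), 100, dtype=np.uint8)
recv = ref.copy()
recv[0, 0] = 110
```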
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Comparison of the various models at a video streaming rate of 600 kbps</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Algorithms</th>
<th>PSNR (dB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>EDPF</td>
<td>26</td>
</tr>
<tr>
<td>RR</td>
<td>27</td>
</tr>
<tr>
<td>LB</td>
<td>24</td>
</tr>
<tr>
<td>RNN-ROI</td>
<td>34</td>
</tr>
<tr>
<td>Proposed model</td>
<td>37</td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>A comparison of the various models at 850 kbps video streaming rate</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Algorithms</th>
<th>PSNR (dB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>EDPF</td>
<td>25.78</td>
</tr>
<tr>
<td>RR</td>
<td>25.4</td>
</tr>
<tr>
<td>LB</td>
<td>24</td>
</tr>
<tr>
<td>RNN-ROI</td>
<td>33.5</td>
</tr>
<tr>
<td>Proposed model</td>
<td>36.5</td>
</tr>
</tbody>
</table>
</table-wrap><table-wrap id="table-6">
<label>Table 6</label>
<caption>
<title>Comparative analysis of the different models at video streaming rates of 1 Mbps</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Algorithms</th>
<th>PSNR (dB)</th>
</tr>
</thead>
<tbody>
<tr>
<td>EDPF</td>
<td>20</td>
</tr>
<tr>
<td>RR</td>
<td>20.4</td>
</tr>
<tr>
<td>LB</td>
<td>23</td>
</tr>
<tr>
<td>RNN-ROI</td>
<td>32.5</td>
</tr>
<tr>
<td>Proposed model</td>
<td>36.5</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-2">Figs. 2a</xref>&#x2013;<xref ref-type="fig" rid="fig-2">2c</xref> show the MOS performance of the algorithms at different streaming rates. The output indicates that the MOS scores of LB, RR, and EDPF degrade to between 1.2 and 3 as the number of clients and the streaming rate increase, while RNN-ROI and the proposed model remain suitable for the increased clients and streaming rates. The proposed model produces good video quality (MOS &#x003D; 4.7) and edges out the RNN-ROI method (MOS &#x003D; 4.2) at the increased streaming rates.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>MOS performance with algorithms (a) Streaming rate &#x003D; 650 kbps; (b) Streaming rate &#x003D; 1 Mbps; (c) Streaming rate &#x003D; 2 Mbps</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CSSE_36864-fig-2.tif"/>
</fig>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>End-to-End Delay Analysis</title>
<p><xref ref-type="fig" rid="fig-3">Figs. 3a</xref>&#x2013;<xref ref-type="fig" rid="fig-3">3c</xref> plot the average EED, in Group of Pictures (GOP) units, of the various algorithms at varying streaming rates. <xref ref-type="fig" rid="fig-3">Fig. 3a</xref> shows the EED analysis at the lower bit rate, and <xref ref-type="fig" rid="fig-3">Fig. 3b</xref> at the higher bit rates. From <xref ref-type="fig" rid="fig-3">Fig. 3c</xref>, it is clear that the EED of all five algorithms ranges from 100 to 300 ms. As the bit rates increase, the EED of LB, RR, and EDPF rises drastically, even to 700 ms, whereas the EED of the proposed model stays below 500 ms even as the streaming rate increases. The EED therefore grows with the streaming rate, but the received quality improves, as shown by the MOS results.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Average EED analysis for different algorithms (a) streaming rate of 650 kbps; (b) streaming rate of 1 Mbps; (c) streaming rate of 2 Mbps</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CSSE_36864-fig-3.tif"/>
</fig>
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>Path Prediction Analysis</title>
<p><xref ref-type="fig" rid="fig-4">Figs. 4a</xref>&#x2013;<xref ref-type="fig" rid="fig-4">4c</xref> depict the PPA for learning models such as the proposed Q-GRU, RNN, and Hidden Markov Models (HMM) at different epochs, where prediction accuracy is used to assess each algorithm&#x2019;s strength. The proposed model produces the highest prediction accuracy, 98%, with the lowest RMSE of 0.001, whereas RNN reaches 92% accuracy with an RMSE of 0.2678 and HMM 84% with an RMSE of 0.458. The proposed model outperforms the existing algorithms owing to its adaptive learning with less complex GRU networks. RNN suffers from vanishing gradients, which keeps it from learning the data features; this drawback significantly affects its performance, as is evident from the output.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Path prediction analysis for the different algorithms: (a) proposed model; (b) RNN (without reinforcement); (c) Markov hidden models</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CSSE_36864-fig-4.tif"/>
</fig>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion and Future Work</title>
<p>This research article presents a new framework for content-aware and network-aware video transmission in heterogeneous networks with multi-homed terminals. Even at lower bandwidths, the framework provides a higher QoE. The different network and channel properties were collected and calculated using the wireless network model. The system integrates Q-adaptive learning to enhance the QoE and schedules the packets based on the network characteristics. Additionally, saliency contents are extracted from the video pixels by 1-D GRU networks, and these pixels are transmitted over the path decided by the intelligent learning algorithms. Adaptive FEC codes are used for error correction, making decisions based on the bit frames and networks. A novel experimental environment was created using OMNET&#x002B;&#x002B;, INET, and Python 3.10 to support the different wireless terminals and to deploy the DL systems. Experiments show that the proposed method improves QoE more effectively than the existing models.</p>
<p>In future work, this framework can be enhanced by paying more attention to the video encoders and AFECs, which trade off error bits against redundant bits.</p>
</sec>
</body>
<back>
<sec><title>Funding Statement</title>
<p>The authors received no specific funding for this study.</p>
</sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Mariem</surname></string-name>, <string-name><given-names>E. A.</given-names> <surname>Adnen</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Abderrazak</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Fran&#x00E7;ois</surname></string-name></person-group>, &#x201C;<article-title>Learning-based metaheuristic approach for home healthcare optimization problem</article-title>,&#x201D; <source>Computer Systems Science and Engineering</source>, vol. <volume>45</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>19</lpage>, <year>2023</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Prabhu</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Rajesh</surname></string-name></person-group>, &#x201C;<article-title>An advanced dynamic scheduling for achieving optimal resource allocation</article-title>,&#x201D; <source>Computer Systems Science and Engineering</source>, vol. <volume>44</volume>, no. <issue>1</issue>, pp. <fpage>281</fpage>&#x2013;<lpage>295</lpage>, <year>2023</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Noorul Ameen</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Jabeen Begum</surname></string-name></person-group>, &#x201C;<article-title>Evolutionary algorithm based adaptive load balancing (EA-ALB) in cloud computing framework</article-title>,&#x201D; <source>Intelligent Automation &#x0026; Soft Computing</source>, vol. <volume>34</volume>, no. <issue>2</issue>, pp. <fpage>1281</fpage>&#x2013;<lpage>1294</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Dinesh</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Neetesh</surname></string-name></person-group>, &#x201C;<article-title>Machine learning techniques in emerging cloud computing integrated paradigms: A survey and taxonomy</article-title>,&#x201D; <source>Journal of Network and Computer Applications</source>, vol. <volume>205</volume>, no. <issue>103419</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>39</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Jiang</surname></string-name>, <string-name><given-names>G. M.</given-names> <surname>Muntean</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Zou</surname></string-name></person-group>, &#x201C;<article-title>Learning-based joint QoE optimization for adaptive video streaming based on smart edge</article-title>,&#x201D; <source>IEEE Transactions on Network and Service Management</source>, vol. <volume>19</volume>, no. <issue>2</issue>, pp. <fpage>1789</fpage>&#x2013;<lpage>1806</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Zhong</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Ji</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Qin</surname></string-name> and <string-name><given-names>G. M.</given-names> <surname>Muntean</surname></string-name></person-group>, &#x201C;<article-title>A Q-learning driven energy-aware multipath transmission solution for 5G media services</article-title>,&#x201D; <source>IEEE Transactions on Broadcasting</source>, vol. <volume>68</volume>, no. <issue>2</issue>, pp. <fpage>559</fpage>&#x2013;<lpage>571</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Jingfu</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Jiuling</surname></string-name></person-group>, &#x201C;<article-title>Two-phase sample average approximation for video distribution strategy of edge computing in heterogeneous network</article-title>,&#x201D; <source>Computer Communications</source>, vol. <volume>182</volume>, no. <issue>11</issue>, pp. <fpage>255</fpage>&#x2013;<lpage>267</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C. R.</given-names> <surname>Debanjan</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Sukumar</surname></string-name> and <string-name><given-names>G.</given-names> <surname>Diganta</surname></string-name></person-group>, &#x201C;<article-title>Video streaming over IoV using IP multicast</article-title>,&#x201D; <source>Journal of Network and Computer Applications</source>, vol. <volume>197</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>15</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Tasnim</surname></string-name>, <string-name><given-names>B. L.</given-names> <surname>Asma</surname></string-name> and <string-name><given-names>A. E.</given-names> <surname>Sadok</surname></string-name></person-group>, &#x201C;<article-title>User behavior-ensemble learning based improving QoE fairness in HTTP adaptive streaming over SDN approach</article-title>,&#x201D; <source>Advances in Computers</source>, vol. <volume>123</volume>, no. <issue>1</issue>, pp. <fpage>245</fpage>&#x2013;<lpage>269</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Mahmoud</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Amin</surname></string-name> and <string-name><given-names>T.</given-names> <surname>Amir</surname></string-name></person-group>, &#x201C;<article-title>Deep learning for network traffic monitoring and analysis (NTMA): A survey</article-title>,&#x201D; <source>Computer Communications</source>, vol. <volume>170</volume>, no. <issue>3</issue>, pp. <fpage>19</fpage>&#x2013;<lpage>41</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Si</surname></string-name> and <string-name><given-names>B.</given-names> <surname>He</surname></string-name></person-group>, &#x201C;<article-title>Sliding-window forward error correction based on reference order for real-time video streaming</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>10</volume>, pp. <fpage>34288</fpage>&#x2013;<lpage>34295</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D. S.</given-names> <surname>Rajput</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Somula</surname></string-name> and <string-name><given-names>R. K.</given-names> <surname>Poluru</surname></string-name></person-group>, &#x201C;<article-title>A novel architectural model for dynamic updating and verification of data storage in cloud environment</article-title>,&#x201D; <source>International Journal of Grid and High-Performance Computing</source>, vol. <volume>13</volume>, no. <issue>4</issue>, pp. <fpage>75</fpage>&#x2013;<lpage>83</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Vashishtha</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Chouksey</surname></string-name>, <string-name><given-names>D. S.</given-names> <surname>Rajput</surname></string-name>, <string-name><given-names>S. R.</given-names> <surname>Reddy</surname></string-name>, <string-name><given-names>M. P. K.</given-names> <surname>Reddy</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Security and detection mechanism in IoT-based cloud computing using hybrid approach</article-title>,&#x201D; <source>International Journal of Internet Technology and Secured Transactions</source>, vol. <volume>11</volume>, no. <issue>5</issue>, pp. <fpage>436</fpage>&#x2013;<lpage>451</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Du</surname></string-name> and <string-name><given-names>Q.</given-names> <surname>Zheng</surname></string-name></person-group>, &#x201C;<article-title>QoE-driven HAS live video channel placement in the media cloud</article-title>,&#x201D; <source>IEEE Transactions on Multimedia</source>, vol. <volume>23</volume>, pp. <fpage>1530</fpage>&#x2013;<lpage>1541</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T.</given-names> <surname>Kimura</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Kimura</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Matsumoto</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Yamagishi</surname></string-name></person-group>, &#x201C;<article-title>Balancing quality of experience and traffic volume in adaptive bitrate streaming</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>9</volume>, pp. <fpage>15530</fpage>&#x2013;<lpage>15547</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Feng</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Jie</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Mingkui</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Jiyan</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Nam</surname></string-name></person-group>, &#x201C;<article-title>Long-term rate control for concurrent multipath real-time video transmission in heterogeneous wireless networks</article-title>,&#x201D; <source>Journal of Visual Communication and Image Representation</source>, vol. <volume>77</volume>, no. <issue>102999</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>13</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Haotong</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Yue</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Xiang</surname></string-name></person-group>, &#x201C;<article-title>Towards intelligent virtual resource allocation in UAVs-assisted 5G networks</article-title>,&#x201D; <source>Computer Networks</source>, vol. <volume>185</volume>, no. <issue>107660</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Samira</surname></string-name>, <string-name><given-names>E. R.</given-names> <surname>Christian</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Vanessa</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Prakash</surname></string-name> and <string-name><given-names>B.</given-names> <surname>Imed</surname></string-name></person-group>, &#x201C;<article-title>Multipath MMT-based approach for streaming high-quality video over multiple wireless access network</article-title>,&#x201D; <source>Computer Networks</source>, vol. <volume>185</volume>, no. <issue>107638</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. L.</given-names> <surname>Eksert</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Hamdullah</surname></string-name> and <string-name><given-names>O.</given-names> <surname>Ertan</surname></string-name></person-group>, &#x201C;<article-title>Intra- and inter-cluster link scheduling in CUPS-based ad hoc networks</article-title>,&#x201D; <source>Computer Networks</source>, vol. <volume>185</volume>, no. <issue>10765</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Guanyu</surname></string-name> and <string-name><given-names>W.</given-names> <surname>Yonggang</surname></string-name></person-group>, &#x201C;<article-title>Video transcoding for adaptive bitrate streaming over edge-cloud continuum</article-title>,&#x201D; <source>Digital Communications and Networks</source>, vol. <volume>7</volume>, no. <issue>4</issue>, pp. <fpage>598</fpage>&#x2013;<lpage>604</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Xiongli</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Feng</surname></string-name></person-group>, &#x201C;<article-title>Blind quality assessment of omnidirectional videos using spatio-temporal convolutional neural networks</article-title>,&#x201D; <source>Optik</source>, vol. <volume>226</volume>, no. <issue>1</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Ghadiyaram</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Pan</surname></string-name> and <string-name><given-names>A. C.</given-names> <surname>Bovik</surname></string-name></person-group>, &#x201C;<article-title>A subjective and objective study of stalling events in mobile streaming videos</article-title>,&#x201D; <source>IEEE Transactions on Circuits and Systems for Video Technology</source>, vol. <volume>29</volume>, no. <issue>1</issue>, pp. <fpage>183</fpage>&#x2013;<lpage>197</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Eswara</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Chakraborty</surname></string-name>, <string-name><given-names>H. P.</given-names> <surname>Sethuram</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Kuchi</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Perceptual QoE-optimal resource allocation for adaptive video streaming</article-title>,&#x201D; <source>IEEE Transactions on Broadcasting</source>, vol. <volume>66</volume>, no. <issue>2</issue>, pp. <fpage>346</fpage>&#x2013;<lpage>358</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. F.</given-names> <surname>Tuysuz</surname></string-name> and <string-name><given-names>M. E.</given-names> <surname>Aydin</surname></string-name></person-group>, &#x201C;<article-title>QoE-based mobility-aware collaborative video streaming on the edge of 5G</article-title>,&#x201D; <source>IEEE Transactions on Industrial Informatics</source>, vol. <volume>16</volume>, no. <issue>11</issue>, pp. <fpage>7115</fpage>&#x2013;<lpage>7125</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T. N.</given-names> <surname>Duc</surname></string-name>, <string-name><given-names>C. T.</given-names> <surname>Minh</surname></string-name>, <string-name><given-names>T. P.</given-names> <surname>Xuan</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Kamioka</surname></string-name></person-group>, &#x201C;<article-title>Convolutional neural networks for continuous QoE prediction in video streaming services</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>8</volume>, pp. <fpage>116268</fpage>&#x2013;<lpage>116278</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Yang</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Yu</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Zhou</surname></string-name></person-group>, &#x201C;<article-title>LSTM and GRU neural network performance comparison study: Taking yelp review dataset as an example</article-title>,&#x201D; in <conf-name>Proc. of Int. Workshop on Electronic Communication and Artificial Intelligence</conf-name>, <publisher-loc>Shanghai, China</publisher-loc>, pp. <fpage>98</fpage>&#x2013;<lpage>101</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. E.</given-names> <surname>Omer</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Hassan</surname></string-name> and <string-name><given-names>M. E.</given-names> <surname>Tarhuni</surname></string-name></person-group>, &#x201C;<article-title>An integrated scheme for streaming scalable encoded video-on-demand over CR networks</article-title>,&#x201D; <source>Physical Communication</source>, vol. <volume>35</volume>, no. <issue>100701</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Hao</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Weimin</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Wei</surname></string-name> and <string-name><given-names>G.</given-names> <surname>Yunchong</surname></string-name></person-group>, &#x201C;<article-title>A joint optimization method of coding and transmission for conversational HD video service</article-title>,&#x201D; <source>Computer Communications</source>, vol. <volume>145</volume>, no. <issue>7</issue>, pp. <fpage>243</fpage>&#x2013;<lpage>262</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M. A.</given-names> <surname>Samsuden</surname></string-name>, <string-name><given-names>N. M.</given-names> <surname>Diah</surname></string-name> and <string-name><given-names>N. A.</given-names> <surname>Rahman</surname></string-name></person-group>, &#x201C;<article-title>A review paper on implementing reinforcement learning technique in optimising games performance</article-title>,&#x201D; in <conf-name>Proc. of IEEE Int. Conf. on System Engineering and Technology</conf-name>, <publisher-loc>Shah Alam, Malaysia</publisher-loc>, pp. <fpage>258</fpage>&#x2013;<lpage>263</lpage>, <year>2019</year>. </mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhan</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Sheng</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Xiao</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Reinforcement learning-based sensor access control for WBANs</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>7</volume>, pp. <fpage>8483</fpage>&#x2013;<lpage>8494</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Ghermezcheshmeh</surname></string-name>, <string-name><given-names>V. S.</given-names> <surname>Mansouri</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Ghanbari</surname></string-name></person-group>, &#x201C;<article-title>Analysis and performance evaluation of scalable video coding over heterogeneous cellular networks</article-title>,&#x201D; <source>Computer Networks</source>, vol. <volume>148</volume>, no. <issue>9</issue>, pp. <fpage>151</fpage>&#x2013;<lpage>163</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. E.</given-names> <surname>Omer</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Hassan</surname></string-name> and <string-name><given-names>M. E.</given-names> <surname>Tarhuni</surname></string-name></person-group>, &#x201C;<article-title>An integrated scheme for streaming scalable encoded video-on-demand over CR networks</article-title>,&#x201D; <source>Physical Communication</source>, vol. <volume>35</volume>, no. <issue>2</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Lahby</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Essouiri</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Sekkaki</surname></string-name></person-group>, &#x201C;<article-title>A novel modeling approach for vertical handover based on dynamic k-partite graph in heterogeneous networks</article-title>,&#x201D; <source>Digital Communications and Networks</source>, vol. <volume>5</volume>, no. <issue>4</issue>, pp. <fpage>297</fpage>&#x2013;<lpage>307</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. M.</given-names> <surname>Hassan</surname></string-name>, <string-name><given-names>I. K. T.</given-names> <surname>Tan</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Selvaretnam</surname></string-name> and <string-name><given-names>K. H.</given-names> <surname>Poo</surname></string-name></person-group>, &#x201C;<article-title>SINR-based conversion and prediction approach for handover performance evaluation of video communication in proxy mobile ipv6</article-title>,&#x201D; <source>Computers and Electrical Engineering</source>, vol. <volume>74</volume>, no. <issue>8</issue>, pp. <fpage>164</fpage>&#x2013;<lpage>183</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M. A.</given-names> <surname>Samsuden</surname></string-name>, <string-name><given-names>N. M.</given-names> <surname>Diah</surname></string-name> and <string-name><given-names>N. A.</given-names> <surname>Rahman</surname></string-name></person-group>, &#x201C;<article-title>A review paper on implementing reinforcement learning technique in optimising games performance</article-title>,&#x201D; in <conf-name>Proc. of IEEE Int. Conf. on System Engineering and Technology</conf-name>, <publisher-loc>Shah Alam, Malaysia</publisher-loc>, pp. <fpage>258</fpage>&#x2013;<lpage>263</lpage>, <year>2019</year>. </mixed-citation></ref>
</ref-list>
</back>
</article>