<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">30531</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2023.030531</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>SP-DSTS-MIMO Scheme-Aided H.266 for Reliable High Data Rate Mobile Video Communication</article-title>
<alt-title alt-title-type="left-running-head">SP-DSTS-MIMO Scheme-Aided H.266 for Reliable High Data Rate Mobile Video Communication</alt-title>
<alt-title alt-title-type="right-running-head">SP-DSTS-MIMO Scheme-Aided H.266 for Reliable High Data Rate Mobile Video Communication</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Ullah</surname><given-names>Khadem</given-names>
</name><xref ref-type="aff" rid="aff-1">1</xref><email>khademullah-ncbc@uetpeshawar.edu.pk</email>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Minallah</surname><given-names>Nasru</given-names>
</name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Nayab</surname><given-names>Durre</given-names>
</name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Ahmed</surname><given-names>Ishtiaque</given-names>
</name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>Frnda</surname><given-names>Jaroslav</given-names>
</name><xref ref-type="aff" rid="aff-3">3</xref>
<xref ref-type="aff" rid="aff-4">4</xref></contrib>
<contrib id="author-6" contrib-type="author">
<name name-style="western"><surname>Nedoma</surname><given-names>Jan</given-names>
</name><xref ref-type="aff" rid="aff-4">4</xref></contrib>
<aff id="aff-1"><label>1</label><institution>Department of Computer Systems Engineering, University of Engineering and Technology Peshawar</institution>, <addr-line>Peshawar, 25000</addr-line>, <country>Pakistan</country></aff>
<aff id="aff-2"><label>2</label><institution>National Centre in Big Data and Cloud Computing, University of Engineering and Technology Peshawar (NCBC-UETP)</institution>, <addr-line>Peshawar, 25000</addr-line>, <country>Pakistan</country></aff>
<aff id="aff-3"><label>3</label><institution>Department of Quantitative Methods and Economic Informatics, Faculty of Operation and Economics of Transport and Communications, University of Zilina</institution>, <addr-line>010 26, Zilina</addr-line>, <country>Slovakia</country></aff>
<aff id="aff-4"><label>4</label><institution>Department of Telecommunications, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava</institution>, <addr-line>Ostrava-Poruba</addr-line>, <country>Czech Republic</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Khadem Ullah. Email: <email>khademullah-ncbc@uetpeshawar.edu.pk</email></corresp>
</author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2022-08-16"><day>16</day>
<month>08</month>
<year>2022</year></pub-date>
<volume>74</volume>
<issue>1</issue>
<fpage>995</fpage>
<lpage>1010</lpage>
<history>
<date date-type="received">
<day>28</day>
<month>3</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>6</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Ullah et al.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Ullah et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_30531.pdf"></self-uri>
<abstract>
<p>With the ever-growing number of Internet users, video applications, and massive data traffic across the network, there is a growing need for reliable, bandwidth-efficient multimedia communication. Versatile Video Coding (VVC/H.266), finalized in September 2020, provides significantly greater compression efficiency than High Efficiency Video Coding (HEVC) while remaining versatile and effective for Ultra-High Definition (UHD) videos. This article analyzes the quality performance of convolutional codes, turbo codes and self-concatenated convolutional (SCC) codes on the basis of performance metrics for reliable future video communication. The advent of turbo codes was a significant achievement in the era of wireless communication, approaching the Shannon limit. Turbo codes operate by deploying an interleaver between two Recursive Systematic Convolutional (RSC) encoders in a parallel fashion. The constituent RSC encoders may operate with the same or different architectures and code rates. The proposed work utilizes the latest source compression standards, H.266 and H.265, together with Sphere Packing modulation aided differential Space Time Spreading (SP-DSTS) for video transmission, in order to provide bandwidth-efficient wireless video communication. Moreover, simulation results show that turbo codes outperform convolutional codes with an average <italic>E</italic><sub><italic>b</italic></sub><italic>/N</italic><sub>0</sub> gain of 1.5 dB, while convolutional codes outperform SCC codes with an <italic>E</italic><sub><italic>b</italic></sub><italic>/N</italic><sub>0</sub> gain of 3.5 dB at a Bit Error Rate (BER) of <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:msup><mml:mn>10</mml:mn><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>4</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>. The Peak Signal to Noise Ratio (PSNR) results of convolutional codes with the latest source coding standard H.266 are plotted against those of convolutional codes with H.265, and it is concluded that H.266 outperforms H.265 with a PSNR gain of about 6 dB at an <italic>E</italic><sub><italic>b</italic></sub><italic>/N</italic><sub>0</sub> value of 4.5 dB.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>H.265</kwd>
<kwd>RSC</kwd>
<kwd>turbo codes</kwd>
<kwd>SCC</kwd>
<kwd>SP-DSTS</kwd>
<kwd>BP-CNN</kwd>
<kwd>BER</kwd>
<kwd>PSNR</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Shannon characterized the behavior of a noisy channel to show the upper limit of data rates achievable on a specific channel [<xref ref-type="bibr" rid="ref-1">1</xref>]. This theorem predicts the highest data rate and specifies the bound on error-free information achievable over a given bandwidth on a noisy communication channel upon the addition of redundant bits to the transmitted messages. A source encoder is embedded in such systems to deal effectively with the noisy channel constraints, although compression makes the resulting bit-stream more vulnerable to transmission errors. A noisy channel allows us to transmit only limited information over an allocated bandwidth. Therefore, a source compression standard is required to transmit more content within an allocated bitstream. Data integrity is another important parameter, ensuring that the data is delivered accurately to its intended user. Attenuation, shadowing, fading and multi-user interference are the major factors that cause time-varying and location-varying channel conditions. Numerous techniques, i.e., diversity techniques, Forward Error Correction (FEC), interleaving, fast power control, Multiple-Input Multiple-Output (MIMO) systems and broadband access, are used to overcome variations in channel conditions [<xref ref-type="bibr" rid="ref-2">2</xref>&#x2013;<xref ref-type="bibr" rid="ref-7">7</xref>]. Some errors may be tolerated in low-delay applications due to the noisy communication channel. Considering the practical scenario, Joint Source Channel Decoding (JSCD) has received significant research interest because it provides the lowest possible Bit Error Rate (BER) on realistic channels [<xref ref-type="bibr" rid="ref-8">8</xref>&#x2013;<xref ref-type="bibr" rid="ref-10">10</xref>]. 
A series of JSCD schemes operate on residual redundancy as a prime source of error protection in the coded video bitstream [<xref ref-type="bibr" rid="ref-11">11</xref>,<xref ref-type="bibr" rid="ref-12">12</xref>]. To cope with the noisy behavior of wireless channels, Data Partitioning (DP) for error resilience is incorporated in Advanced Video Coding (AVC). In DP, each stream is divided into three different layers, each with its own importance and set of parameters. Several error-resilient schemes exist, but with a trade-off of increased computational complexity and reduced compression efficiency [<xref ref-type="bibr" rid="ref-13">13</xref>]. Similarly, motivated by concatenated codes, the authors in [<xref ref-type="bibr" rid="ref-9">9</xref>] proposed Iterative Source and Channel Decoding (ISCD) for improving the error robustness of digital systems by manipulating residual and artificial redundancy. The number of profitable iterations is determined by Extrinsic Information Transfer (EXIT) chart analysis [<xref ref-type="bibr" rid="ref-14">14</xref>]. In [<xref ref-type="bibr" rid="ref-9">9</xref>], the error-correcting and concealment capabilities of ISCD are evaluated by the EXIT chart. In [<xref ref-type="bibr" rid="ref-15">15</xref>], the authors show that the EXIT chart is a versatile tool for designing different serial concatenated codes. The source coding part in ISCD extracts the spectral coefficients from the multimedia content (audio or video signal). Natural residual redundancy remains in the spectral coefficients after passing through the source codec, in the form of a non-uniform distribution. This residual redundancy is exploited at the receiver side to overcome transmission errors. 
Furthermore, a soft-input decoder based on exploiting the residual redundancy in the compressed bits and the A-Posteriori Probability (APP) of each symbol was presented in [<xref ref-type="bibr" rid="ref-9">9</xref>]. In [<xref ref-type="bibr" rid="ref-10">10</xref>], Irregular Variable-Length Coding (IVLC) is presented, which performs close to the capacity of joint source coding. Burst-by-burst adaptive transceivers for cordless video telephony and interactive cellular applications are designed in [<xref ref-type="bibr" rid="ref-16">16</xref>]. Sphere Packing modulation aided Differential Space Time Spreading (SP-DSTS) is briefly presented in [<xref ref-type="bibr" rid="ref-3">3</xref>&#x2013;<xref ref-type="bibr" rid="ref-5">5</xref>]. An iterative Belief Propagation aided convolutional neural network (BP-CNN) architecture is presented in [<xref ref-type="bibr" rid="ref-17">17</xref>]. The deployment of 5G and 6G wireless communication is underway, and the wireless research community is taking great interest in providing novel solutions. For this purpose, the authors in [<xref ref-type="bibr" rid="ref-18">18</xref>] discussed the evolution of mobile generations from the earlier First Generation (1G) of mobile communication to the latest Fifth Generation (5G) and Sixth Generation (6G) by comparing their challenges and features. Future wireless communication will be transformative and will revolutionize the evolution from &#x201C;connected things&#x201D; to &#x201C;connected intelligence&#x201D;, promising very high data rates of up to 1 terabit per second (Tb/s), very high energy efficiency supporting battery-free Internet of Things (IoT) devices, low latency, and broad frequency bands [<xref ref-type="bibr" rid="ref-19">19</xref>]. 
The work in [<xref ref-type="bibr" rid="ref-20">20</xref>] provides an overview and outlook on the architecture, modeling, design, and performance of massively distributed antenna systems (DAS) with nonideal optical fronthauls. In [<xref ref-type="bibr" rid="ref-21">21</xref>], the authors provide an overview and outlook on the application of sparse code multiple access (SCMA) for 6G wireless communication systems, an emerging disruptive non-orthogonal multiple access (NOMA) scheme for enabling massive connectivity. Moreover, the authors propose to use SCMA to support massively distributed access systems (MDASs) in 6G for faster, more scalable, more reliable, and more efficient massive access. High Efficiency Video Coding (HEVC) is a source compression standard specially designed to provide parallel processing, coding gain, and error resilience efficiency. The main target of the HEVC development was to reduce the bitrate by up to 50% at the same quality as the existing standards. Versatile Video Coding (VVC), in turn, was finalized in September 2020, providing significantly greater compression efficiency than HEVC [<xref ref-type="bibr" rid="ref-22">22</xref>&#x2013;<xref ref-type="bibr" rid="ref-24">24</xref>]. The novelty of the proposed work lies in providing bandwidth-efficient communication: the latest source encoding standards, VVC and HEVC, are used for compression, while SP-DSTS serves as a MIMO scheme to provide reliable high data rate video communication. Implementing this research architecture to transmit the highly compressed packets of the HEVC &#x0026; VVC encoding standards over a correlated Rayleigh fading channel is a challenging task. The error rate on a Rayleigh fading channel cannot be reduced much by simply increasing the transmission power or the allocated bandwidth, as this is contrary to the requirements of next-generation systems. 
A single bit error in the highly compressed stream may affect the correct decoding of a number of frames. Therefore, a clever and intelligent system is designed in the proposed work in order to attain reliability along with a highly compressed bitstream. Moreover, a Belief Propagation aided Convolutional Neural Network (BP-CNN) architecture is included to evaluate the results of the proposed system on the said architecture with the transmission of a video sequence. In order to find the effect of this neural network on stochastic channel noise estimation, the results of the proposed work are further compared with the same system utilizing the H.265 source encoding standard and the BP-CNN architecture at the decoding side. The motivation and contributions of the proposed research are itemized below:
<list list-type="bullet">
<list-item>
<p>To the best of our knowledge, this is the first scheme that transmits video sequences compressed by the VVC video encoding standard using the SP-DSTS encoder and analyzes the received video sequences from the wireless network using objective and subjective performance metrics.</p></list-item>
<list-item>
<p>Turbo codes approach the ultimate theoretical limits and can be used to implement real-time, highly energy-efficient, low-latency transceivers enabling high-speed data transmission and exploiting transmitter diversity gain, advanced modulations, self-concatenated codes, and differential codes to meet the required quality of service demand.</p></list-item>
<list-item>
<p>The results of convolutional codes with the H.266 (VVC) source encoding standard have been compared to those of the same system when the H.265 source encoder is utilized.</p></list-item>
<list-item>
<p>The objective and subjective video quality performance of the proposed system is measured with AVC, HEVC and VVC. From the subjective video quality, it can be visualized that HEVC preserves a large number of frames at the receiving end and the frame dropout rate is very low. The same can be observed for the VVC video coding standard. Moreover, VVC maintains good PSNR values for all the frames, even at a lower E<sub>b</sub>/N<sub>0</sub> value. It is clearly shown that the system with VVC preserves a large number of frames at the receiving side with high quality.</p></list-item>
<list-item>
<p>Moreover, the performance of BP-CNN is compared with the benchmark system using H.265 as a source encoding standard while transmitting the Akiyo video sequence.</p></list-item>
</list></p>
<p>The overall structure of the proposed work has been organized as follows. Section 2 briefly presents the preliminaries and system design criteria for the proposed work. Section 3 presents different parameters of the utilized channel codes. The system model has been presented in Section 4. Simulation results of the proposed work are given in Section 5. Finally, a conclusion of the proposed work is presented in Section 6.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Preliminaries &#x0026; System Design Criteria</title>
<p>The model between two parties comprises the sender, receiver, message, channel and the underlying protocol. The protocol is a set of rules on which both parties agree, specifying different layers and their functionalities to provide application-oriented communication. The communication channel adds an unwanted signal to the original signal as the signal passes through the channel. Decoding the received signal requires modeling the behavior of the noise signal. In the case of Rayleigh fading or multipath channels, a statistical model is used to specify the behavior of the channel, because there is randomness in the location of the objects and in the multipath propagation. The transmitting antennas transmit a signal to the receiver, but the signal at the decoding side is not the original signal; it is received as a sum of different replicas, i.e., reflected, scattered and diffracted versions arriving from walls, trees and buildings. In the absence of a line of sight, if <italic>I</italic> versions of the original signal exist, then the received signal is the sum of the <italic>I</italic> components plus Gaussian noise, as follows [<xref ref-type="bibr" rid="ref-25">25</xref>]:</p>
<p><disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi>r</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:munderover><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x03C0;</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03B7;</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula>where <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> represents the amplitude, <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:mrow></mml:msub></mml:math></inline-formula> the carrier frequency, and <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> the phase of the corresponding component, while <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mrow><mml:mi mathvariant="normal">&#x03B7;</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> represents the Gaussian noise. Furthermore,</p>
<p><disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mi>r</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x03C0;</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msubsup><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>sin</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>2</mml:mn><mml:mi>&#x03C0;</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mrow><mml:mtext>c</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:msubsup><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mi>sin</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03B7;</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></disp-formula></p>
<p>The above equation represents a summation of the terms <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:munderover><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>I</mml:mi></mml:mrow></mml:munderover><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mi>sin</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03D5;</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>, i.e., the contributions of the <italic>I</italic> replicas. A Rayleigh random variable <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mrow><mml:mtext>X</mml:mtext></mml:mrow></mml:math></inline-formula> can be defined as the square root of the sum of squares of two independent and identically distributed zero-mean Gaussian random variables <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, and thus has two degrees of freedom [<xref ref-type="bibr" rid="ref-25">25</xref>].</p>
<p><disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:mi>X</mml:mi><mml:mo>=</mml:mo><mml:msqrt><mml:msubsup><mml:mi>X</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>X</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msubsup></mml:msqrt></mml:math></disp-formula></p>
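As a numerical illustration of Eqs. (1)–(3), the following minimal sketch (assuming Python with NumPy; all parameter values are arbitrary choices for illustration) sums randomly phased multipath replicas: the in-phase and quadrature sums of Eq. (2) become approximately Gaussian by the central limit theorem, so the envelope of Eq. (3) is approximately Rayleigh distributed.

```python
import numpy as np

rng = np.random.default_rng(0)
I = 50            # number of multipath replicas (illustrative)
trials = 200_000  # independent channel realizations

# Random amplitudes and uniform phases for the I replicas of Eq. (1);
# the 1/sqrt(I) scaling keeps the total received power bounded.
a = rng.uniform(0.5, 1.5, size=(trials, I)) / np.sqrt(I)
phi = rng.uniform(0.0, 2.0 * np.pi, size=(trials, I))

# In-phase and quadrature sums from the decomposition of Eq. (2)
X1 = np.sum(a * np.cos(phi), axis=1)
X2 = np.sum(a * np.sin(phi), axis=1)

# Envelope per Eq. (3): approximately Rayleigh distributed
X = np.sqrt(X1**2 + X2**2)

# Both component sums are approximately zero-mean Gaussian
print(X1.mean(), X2.mean())  # both close to 0
```

The carrier term of Eq. (1) is omitted here, since the envelope statistics depend only on the amplitudes and phases of the replicas.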
<p>The Probability Density Function (PDF) of <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mo stretchy="false">(</mml:mo><mml:mi>X</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is given below:</p>
<p><disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mi>P</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>X</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left left" rowspacing=".2em" columnspacing="1em" displaystyle="false"><mml:mtr><mml:mtd><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mi>X</mml:mi><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mfrac></mml:mstyle><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mi>X</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mtd><mml:mtd><mml:mi>X</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mrow><mml:mtext>otherwise</mml:mtext></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>In <xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref>, <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> represents the variance of <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>, while <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mn>2</mml:mn><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> represents the sum of the variances of <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>. The mean and variance of the variable <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mi>X</mml:mi></mml:math></inline-formula> are given by the following equations.</p>
<p><disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mrow><mml:mi>E</mml:mi><mml:mo stretchy='false'>[</mml:mo><mml:mi>X</mml:mi><mml:mo stretchy='false'>]</mml:mo><mml:mo>=</mml:mo><mml:mi>&#x03C3;</mml:mi><mml:msqrt><mml:mrow><mml:mfrac><mml:mi>&#x03C0;</mml:mi><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:msqrt></mml:mrow></mml:math></disp-formula></p>
<p><disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mi>V</mml:mi><mml:mi>A</mml:mi><mml:mi>R</mml:mi><mml:mo stretchy="false">[</mml:mo><mml:mi>X</mml:mi><mml:mo stretchy="false">]</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mfrac><mml:mi>&#x03C0;</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p>The Cumulative Distribution Function (CDF) of <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mi>X</mml:mi></mml:math></inline-formula> can be obtained by integrating the PDF and is represented as follows:</p>
<p><disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mi>F</mml:mi><mml:mo stretchy="false">[</mml:mo><mml:mi>X</mml:mi><mml:mo stretchy="false">]</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left left" rowspacing=".2em" columnspacing="1em" displaystyle="false"><mml:mtr><mml:mtd><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mi>X</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mtd><mml:mtd><mml:mi>X</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#xA0;otherwise&#xA0;</mml:mtext></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
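The closed-form moments and CDF of Eqs. (5)–(7) can be checked against direct sampling of the Rayleigh variable of Eq. (3); the sketch below (assuming Python with NumPy, and an arbitrary value of σ) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0   # illustrative standard deviation of each Gaussian component
n = 500_000

# Two i.i.d. zero-mean Gaussian variables with variance sigma^2
X1 = rng.normal(0.0, sigma, n)
X2 = rng.normal(0.0, sigma, n)
X = np.sqrt(X1**2 + X2**2)  # Rayleigh variable, Eq. (3)

# Eq. (5): E[X] = sigma * sqrt(pi / 2)
print(X.mean(), sigma * np.sqrt(np.pi / 2))

# Eq. (6): VAR[X] = (2 - pi / 2) * sigma^2
print(X.var(), (2 - np.pi / 2) * sigma**2)

# Eq. (7): F(x) = 1 - exp(-x^2 / (2 sigma^2)), checked at x = sigma
print(np.mean(X <= sigma), 1 - np.exp(-0.5))
```

Each printed pair agrees to within sampling error, confirming that the empirical moments and CDF match the closed forms.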
<p>The shadowing effect on the original signal arises due to objects and large blocks, such as buildings, in the communication channel. It can be modelled with the lognormal distribution <inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mo stretchy="false">(</mml:mo><mml:mi>Y</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> as given below:</p>
<p><disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mi>Y</mml:mi><mml:mo>=</mml:mo><mml:mi>ln</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mi>S</mml:mi><mml:mo>&#x2228;</mml:mo><mml:mi>S</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>Y</mml:mi></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p>Then the PDF of <inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mi>S</mml:mi></mml:math></inline-formula> is given as:</p>
<p><disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mi>P</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>S</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mtable columnalign="left left" rowspacing=".2em" columnspacing="1em" displaystyle="false"><mml:mtr><mml:mtd><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>S</mml:mi><mml:msqrt><mml:mn>2</mml:mn><mml:mi>&#x03C0;</mml:mi><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:msqrt></mml:mrow></mml:mfrac></mml:mstyle><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mfrac><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>ln</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mi>S</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>m</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mtd><mml:mtd><mml:mi>S</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mrow><mml:mtext>&#xA0;otherwise</mml:mtext></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>where <inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mrow><mml:mtext>m</mml:mtext></mml:mrow></mml:math></inline-formula> and <inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mi>&#x03C3;</mml:mi></mml:math></inline-formula> represent the mean and the standard deviation, respectively.</p>
<p><disp-formula id="eqn-10"><label>(10)</label><mml:math id="mml-eqn-10" display="block"><mml:mrow><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo> <mml:mi>S</mml:mi> <mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:mfrac><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:msup></mml:mrow></mml:math></disp-formula></p>
<p><disp-formula id="eqn-11"><label>(11)</label><mml:math id="mml-eqn-11" display="block"><mml:mrow><mml:mtext>VAR</mml:mtext><mml:mrow><mml:mo>[</mml:mo> <mml:mi>S</mml:mi> <mml:mo>]</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mi>m</mml:mi><mml:mo>+</mml:mo><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msup><mml:mi>e</mml:mi><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:msup><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula></p>
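As a quick numerical check, the sketch below (assuming NumPy is available) samples the shadowing gain S = e^Y from the underlying normal variable Y and compares its empirical mean and variance against the standard lognormal moment formulas; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma = 0.0, 0.5                  # mean and standard deviation of the normal Y
Y = rng.normal(m, sigma, 200_000)
S = np.exp(Y)                        # lognormal shadowing gain, S = e^Y

# Standard lognormal moments of S in terms of m and sigma of Y
mean_theory = np.exp(m + sigma**2 / 2)
var_theory = np.exp(2 * m + sigma**2) * (np.exp(sigma**2) - 1)

print(abs(S.mean() - mean_theory))   # small sampling error
print(abs(S.var() - var_theory))
```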
</sec>
<sec id="s3">
<label>3</label>
<title>Channel Codes Performance Parameters</title>
<p>The proposed work compares three different channel codes to provide reliable, high data rate video communication over a communication channel with the aid of the H.265 and H.266 source encoding standards and a BP-CNN architecture for H.265 compressed video decoding. The first channel code, valued for its performance and ease of implementation, is the Recursive Systematic Convolutional (RSC) code. The characteristics of a convolutional code are determined by the following generator polynomial with constraint length <italic>v</italic>, as in [<xref ref-type="bibr" rid="ref-13">13</xref>].</p>
<p><disp-formula id="eqn-12"><label>(12)</label><mml:math id="mml-eqn-12" display="block"><mml:msup><mml:mi>G</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mi>g</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>g</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mi>D</mml:mi><mml:mo>+</mml:mo><mml:msubsup><mml:mi>g</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:msup><mml:mi>D</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>&#x2026;</mml:mo><mml:msubsup><mml:mi>g</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>v</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:msup><mml:mi>D</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>v</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p>The input bitstream of a convolutional code is expressed in polynomial form with the following equation.</p>
<p><disp-formula id="eqn-13"><label>(13)</label><mml:math id="mml-eqn-13" display="block"><mml:msubsup><mml:mi>U</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mi>u</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>u</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msubsup><mml:mi>D</mml:mi><mml:mo>+</mml:mo><mml:mo>&#x2026;</mml:mo><mml:msubsup><mml:mi>u</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msubsup><mml:msup><mml:mi>D</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:math></disp-formula></p>
<p>For an input bitstream, the output can be obtained with the following equation.</p>
<p><disp-formula id="eqn-14"><label>(14)</label><mml:math id="mml-eqn-14" display="block"><mml:mrow><mml:msup><mml:mi>Y</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mi>G</mml:mi><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x22C5;</mml:mo><mml:msubsup><mml:mi>U</mml:mi><mml:mi>n</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula></p>
<p><disp-formula id="eqn-15"><label>(15)</label><mml:math id="mml-eqn-15" display="block"><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mo>=</mml:mo><mml:msubsup><mml:mi>Y</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi>Y</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:mi>D</mml:mi><mml:mo>+</mml:mo><mml:msubsup><mml:mi>Y</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:msup><mml:mi>D</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>+</mml:mo><mml:msubsup><mml:mi>Y</mml:mi><mml:mrow><mml:mi>v</mml:mi><mml:mo>+</mml:mo><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>i</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msubsup><mml:msup><mml:mi>D</mml:mi><mml:mrow><mml:mi>v</mml:mi><mml:mo>+</mml:mo><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>Convolutional codes take <italic>k</italic> input bits and output <italic>n</italic> bits, where the output depends on the generator polynomial function. The behaviour of convolutional codes can be drawn with the help of a state machine diagram, in which the next state is determined by the current state together with the input applied to the encoder. The design example of <xref ref-type="fig" rid="fig-1">Fig. 1</xref> can be explained as follows: let the generator polynomials be <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:msub><mml:mi>G</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:msub><mml:mi>G</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. For the initial state 00, the next state remains the same as the current state if 0 is input to the encoder. The input and output are represented in the state diagram as <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mn>0</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mn>00</mml:mn></mml:math></inline-formula> (input/output). Similarly, for input 1, the next state is 10 while the output is 11, expressed as 1/11. All the remaining states and their corresponding outputs can be read from the state diagram accordingly. 
Code rate is the primary parameter for comparing the performance of different channel codes. It is the ratio of the <italic>k</italic> input bits to the <italic>n</italic> output bits at a given instance of time, and can be expressed with the following equation.</p>
<p><disp-formula id="eqn-16"><label>(16)</label><mml:math id="mml-eqn-16" display="block"><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:mfrac></mml:math></disp-formula></p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>State machine diagram of convolutional coding</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-1.png"/>
</fig>
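The state-machine behaviour described above can be sketched in a few lines of Python. The generator taps G1 = (1,1,1) and G2 = (1,1,0) are those of the design example; the helper name `conv_encode` is illustrative.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 1, 0)):
    """Rate-1/2 convolutional encoder with constraint length 3."""
    r = [0, 0]                      # shift-register state (most recent bit first)
    out = []
    for u in bits:
        window = [u] + r            # current input plus register contents
        y1 = sum(a * b for a, b in zip(g1, window)) % 2
        y2 = sum(a * b for a, b in zip(g2, window)) % 2
        out.append((y1, y2))
        r = [u, r[0]]               # shift the register
    return out

# From state 00: input 0 keeps the state and outputs 00 (the 0/00 transition),
# while input 1 moves to state 10 and outputs 11 (the 1/11 transition).
print(conv_encode([0]))   # [(0, 0)]
print(conv_encode([1]))   # [(1, 1)]
```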
<p>Since the number of output bits <italic>n</italic> is greater than the number of input bits <italic>k</italic>, the ratio results in a code rate of less than 1. The code rate is measured in bits/transmission and represents the information carried in each individual instance of time. The number of M-ary symbols <italic>L</italic> transmitted per codeword can be expressed with the following equation. In <xref ref-type="disp-formula" rid="eqn-17">Eq. (17)</xref>, the length of the codeword is represented by <italic>n</italic> while the constellation size is represented by <italic>M</italic>.</p>
<p><disp-formula id="eqn-17"><label>(17)</label><mml:math id="mml-eqn-17" display="block"><mml:mi>L</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mi>n</mml:mi><mml:mrow><mml:msub><mml:mi>log</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2061;</mml:mo><mml:mi>M</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>The transmission rate (<italic>R</italic>) can be expressed with the following equation. <xref ref-type="disp-formula" rid="eqn-18">Eq. (18)</xref> gives the rate at which <italic>k</italic> information bits are transmitted when the symbol duration is <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula>.</p>
<p><disp-formula id="eqn-18"><label>(18)</label><mml:math id="mml-eqn-18" display="block"><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mi>k</mml:mi><mml:mrow><mml:mi>L</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>The transmission rate can be further derived by substituting the value of <italic>L</italic> from <xref ref-type="disp-formula" rid="eqn-17">Eq. (17)</xref> into <xref ref-type="disp-formula" rid="eqn-18">Eq. (18)</xref>, yielding the following equation.</p>
<p><disp-formula id="eqn-19"><label>(19)</label><mml:math id="mml-eqn-19" display="block"><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:mi>k</mml:mi><mml:mo>&#x2217;</mml:mo><mml:msub><mml:mi>log</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2061;</mml:mo><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2217;</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mstyle></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p><disp-formula id="eqn-20"><label>(20)</label><mml:math id="mml-eqn-20" display="block"><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>c</mml:mi></mml:mrow></mml:msub><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:msub><mml:mi>log</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2061;</mml:mo><mml:mi>M</mml:mi></mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mfrac></mml:mstyle><mml:mi>b</mml:mi><mml:mi>p</mml:mi><mml:mi>s</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>Spectral efficiency (r), also called bandwidth efficiency, is the ratio of the bitrate of the encoding scheme to the bandwidth utilized, and can be obtained with the following equation.</p>
<p><disp-formula id="eqn-21"><label>(21)</label><mml:math id="mml-eqn-21" display="block"><mml:mi>r</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>R</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>b</mml:mi><mml:mi>p</mml:mi><mml:mi>s</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mi>W</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>H</mml:mi><mml:mi>z</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>The efficiency of the transmitting system can be measured in terms of spectral efficiency. A signal occupying bandwidth W can be recovered at the receiving end if the sampling rate is not less than 2W samples per second. The following equation gives the number of degrees of freedom for duration T and bandwidth W.</p>
<p><disp-formula id="eqn-22"><label>(22)</label><mml:math id="mml-eqn-23" display="block"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mi>W</mml:mi><mml:mi>T</mml:mi></mml:math></disp-formula></p>
<p>The threshold bandwidth requirement for a transmission can be expressed with the following equation.</p>
<p><disp-formula id="eqn-23"><label>(23)</label><mml:math id="mml-eqn-24" display="block"><mml:mi>W</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mi>N</mml:mi><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mfrac></mml:math></disp-formula></p>
<p>Substituting the value of <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> gives:</p>
<p><disp-formula id="eqn-24"><label>(24)</label><mml:math id="mml-eqn-25" display="block"><mml:mi>W</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>R</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>C</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>log</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2061;</mml:mo><mml:mi>M</mml:mi></mml:mrow></mml:mfrac><mml:mi>b</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>s</mml:mi><mml:mrow><mml:mrow><mml:mo>/</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mtext>sec</mml:mtext></mml:mrow></mml:math></disp-formula></p>
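Eqs. (16) through (24) can be checked with a small worked example; the parameter values below (a rate-1/2 code, QPSK, 1 µs symbols) are illustrative and are not taken from the paper's simulations.

```python
from math import log2

k, n = 1, 2              # info bits in, coded bits out -> Rc = 1/2, Eq. (16)
M = 4                    # constellation size (QPSK)
Ts = 1e-6                # symbol duration in seconds
N = 2                    # degrees of freedom, N = 2WT, Eq. (22)

Rc = k / n               # code rate
L = n / log2(M)          # M-ary symbols per codeword, Eq. (17)
R = k / (L * Ts)         # transmission rate in bps, Eq. (18)
W = R * N / (2 * Rc * log2(M))   # threshold bandwidth, Eq. (24)
r = R / W                # spectral efficiency, Eq. (21)

print(R, W, r)
```

Note that R computed from Eq. (18) agrees with the closed form R = Rc·log2(M)/Ts of Eq. (20), as the assertions below verify.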
<p>With the passage of time and advancements in communication, the concept of turbo codes has become widely accepted and mature [<xref ref-type="bibr" rid="ref-25">25</xref>]. This powerful class of codes operates by deploying an interleaver between two Recursive Systematic Convolutional (RSC) encoders in parallel. The constituent RSC encoders may operate with the same or different architectures and rates. These codes find extensive application in scenarios demanding a low Bit-Error-Rate (BER) without additional power requirements. The advent of turbo codes has also paved the way for researchers to design efficient codes that can be decoded with low complexity [<xref ref-type="bibr" rid="ref-26">26</xref>]. Built on RSC codes, turbo codes approach the theoretical capacity limit highlighted in the pioneering work of Shannon. The schematics of a turbo encoder and decoder are given in <xref ref-type="fig" rid="fig-2">Figs. 2</xref> and <xref ref-type="fig" rid="fig-3">3</xref>. Code puncturing and multiplexing techniques are helpful in achieving the desired rate for the scheme. Turbo codes introduce randomness into the coding through the interleaver placed between the member encoders, whereas convolutional coding lacks an interleaver. Turbo codes are recursive and systematic with a parallel structure, whereas convolutional codes are non-recursive and non-systematic. For the decoding of turbo codes, a divide-and-conquer approach is applied, and it is worth mentioning that the constituent decoders rely on sharing mutual information with each other. Mutual information (<italic>I</italic>) and entropy are two correlated terminologies commonly used where information sharing is the primary resource of performance. 
Mutual information is the amount of information one variable carries about another, while the self-information of a random variable is called entropy.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Turbo encoder</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-2.png"/>
</fig>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Turbo decoder</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-3.png"/>
</fig>
<p>Mutual information can also be viewed as a relative entropy, in the sense that it measures the distance between two probability distributions. For a transmitted symbol <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mi>Y</mml:mi></mml:math></inline-formula> and channel output Z, the mutual information can be expressed with the following equation [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>].</p>
<p><disp-formula id="eqn-26"><label>(25)</label><mml:math id="mml-eqn-26" display="block"><mml:mi>I</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>Y</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>;</mml:mo><mml:mi>Z</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>A</mml:mi></mml:mrow></mml:msub></mml:mfrac><mml:mo>&#x22C5;</mml:mo><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>A</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munderover><mml:mspace width="thinmathspace" /><mml:msubsup><mml:mo>&#x222B;</mml:mo><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x221E;</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x221E;</mml:mi></mml:mrow></mml:mrow></mml:msubsup><mml:mspace width="thinmathspace" /><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mrow><mml:mo>&#x2223;</mml:mo></mml:mrow><mml:msub><mml:mi>Y</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mi>l</mml:mi><mml:mi>d</mml:mi><mml:mfrac><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mrow><mml:mo>&#x2223;</mml:mo></mml:mrow><mml:msub><mml:mi>Y</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac><mml:mi>d</mml:mi><mml:mi>z</mml:mi></mml:math></disp-formula></p>
<p>with the conditional probability density function (PDF)</p>
<p><disp-formula id="eqn-27"><label>(26)</label><mml:math id="mml-eqn-27" display="block"><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mrow><mml:mo>&#x2223;</mml:mo></mml:mrow><mml:msub><mml:mi>Y</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mn>2</mml:mn><mml:mi>&#x03C0;</mml:mi></mml:msqrt><mml:mi>&#x03C3;</mml:mi></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>&#x22C5;</mml:mo><mml:mi>exp</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mspace width="thinmathspace" /><mml:mo>&#x2212;</mml:mo><mml:mspace width="thinmathspace" /><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msup><mml:mi>&#x03C3;</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mstyle><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p><disp-formula id="eqn-28"><label>(27)</label><mml:math id="mml-eqn-28" display="block"><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true" scriptlevel="0"><mml:mfrac><mml:mn>1</mml:mn><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>A</mml:mi></mml:mrow></mml:msub></mml:mfrac></mml:mstyle><mml:munderover><mml:mo>&#x2211;</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>A</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munderover><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mi>p</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>z</mml:mi><mml:mrow><mml:mo>&#x2223;</mml:mo></mml:mrow><mml:msub><mml:mi>Y</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
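Eqs. (25) through (27) can be evaluated numerically. The sketch below (assuming NumPy) computes the mutual information for a BPSK alphabet a_n ∈ {−1, +1} over a Gaussian channel with σ = 1, taking "ld" in Eq. (25) to denote the base-2 logarithm; the grid limits are illustrative.

```python
import numpy as np

sigma = 1.0
alphabet = np.array([-1.0, 1.0])      # N_A = 2 equiprobable symbols
z = np.linspace(-8, 8, 4001)          # integration grid for the channel output
dz = z[1] - z[0]

# Conditional PDFs p(z | Y1 = a_n), Eq. (26): one row per symbol
p_cond = (np.exp(-(z[None, :] - alphabet[:, None])**2 / (2 * sigma**2))
          / (np.sqrt(2 * np.pi) * sigma))
p_z = p_cond.mean(axis=0)             # mixture density p(z), Eq. (27)

# Eq. (25): average over symbols of the integral of p(z|a_n) * log2(ratio)
integrand = p_cond * np.log2(p_cond / p_z)
I = (integrand.sum(axis=1) * dz).mean()
print(round(I, 3))                    # roughly 0.5 bit per channel use at this SNR
```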
<p>These decoders accept Soft-Input (SI) and yield Soft-Output (SO) [<xref ref-type="bibr" rid="ref-27">27</xref>]. The stream of information that is iteratively shared between the decoders is mostly in the form of Logarithmic Likelihood Ratios (LLRs). The input LLRs accepted by the SISO decoder are processed to increase the reliability of the transmitted data, using the concept of redundancy [<xref ref-type="bibr" rid="ref-28">28</xref>]. The output LLR from the SISO decoder is expressed by the following equation.</p>
<p><disp-formula id="eqn-29"><label>(28)</label><mml:math id="mml-eqn-29" display="block"><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mrow><mml:mtext>out&#xA0;</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>d</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mrow><mml:mtext>input&#xA0;</mml:mtext></mml:mrow></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>d</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>d</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>The letter <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mrow><mml:mtext>L</mml:mtext></mml:mrow></mml:math></inline-formula> stands for the LLR corresponding to the appropriate subscript, and <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mi>E</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>d</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is the extrinsic information of bit d. The block diagram of the turbo decoder is given in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>. In the decoding process, LLRs are appropriately interleaved and deinterleaved to continue the iterations. This iterative behavior primarily solves the issue of convergence, while negative feedback governs the stability of the overall decoding process. After several iterations, both decoders achieve stability and convergence. Another interesting class is Self-Concatenated Convolutional Coding (SCC), which is based on a much simpler approach. Turbo codes achieve near-capacity performance, but they involve two RSC encoders and separate decoders. Keeping in view the complexity of turbo codes, a much simpler approach is offered by the SCC scheme [<xref ref-type="bibr" rid="ref-29">29</xref>]. It involves only a single <inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mrow><mml:mtext>RSC</mml:mtext></mml:mrow></mml:math></inline-formula> encoder and a single decoder. The block diagram is depicted in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>. The decoding side is divided into component decoders for the deployment of iterative decoding. As the iterations continue, each component decoder feeds the other, improving the knowledge of both decoders [<xref ref-type="bibr" rid="ref-30">30</xref>]. 
The mathematical equation governing the overall code rate <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mi>R</mml:mi></mml:math></inline-formula> of the SCC scheme in terms of <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is given by <xref ref-type="disp-formula" rid="eqn-30">Eq. (29)</xref>, where <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is the rate of the RSC encoder used and <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is the puncturing rate used in the SCC scheme [<xref ref-type="bibr" rid="ref-31">31</xref>].</p>
<p><disp-formula id="eqn-30"><label>(29)</label><mml:math id="mml-eqn-30" display="block"><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x2217;</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
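A minimal sketch of the SISO update of Eq. (28) and the SCC rate of Eq. (29); the function names are illustrative, not from the paper.

```python
def siso_output(L_input, extrinsic):
    """Eq. (28): output LLR = input LLR + extrinsic information, per bit d."""
    return [li + e for li, e in zip(L_input, extrinsic)]

def scc_rate(R1, R2):
    """Eq. (29): overall SCC code rate from RSC rate R1 and puncturing rate R2."""
    return R1 / (2 * R2)

# A positive extrinsic term reinforces a positive input LLR, increasing
# the decoder's confidence in that bit across iterations.
print(siso_output([2.0, -1.0], [1.0, -0.5]))
print(scc_rate(0.5, 1.0))   # 0.25
```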
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Self-concatenated convolutional codes encoder and decoder</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-4.png"/>
</fig>
</sec>
<sec id="s4">
<label>4</label>
<title>Proposed System Model</title>
<p>The performance of the proposed system is analyzed using H.266 and H.265 source encoding aided by a BP-CNN architecture at the decoding side, as shown in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>. Initially, a Standard Definition or High Definition video is provided at the input of the H.266 or H.265 source encoder. The performance parameters for the proposed work are given in <xref ref-type="table" rid="table-1">Tab. 1</xref>. H.266 is the latest source compression standard, with 50% better compression efficiency compared to H.265. The H.266 or H.265 encoder generates a compressed bitstream <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:mi>x</mml:mi></mml:math></inline-formula>, which is then presented to the channel encoder. The channel encoder adds redundant bits to the bitstream, resulting in a stream <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mi>u</mml:mi></mml:math></inline-formula>. Shannon's information theory treats messages and signals as functions in a signal space, with modulation acting as a mapping from one space into another. The bitstream is therefore forwarded to an SP modulation block, where a symbol <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mi>s</mml:mi></mml:math></inline-formula> is generated from the constellation points. The symbol is then forwarded to the DSTS block, which produces the differentially encoded symbols y<sub>1</sub> and y<sub>2</sub>. The differential symbols are then transmitted over a channel, which adds noise <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mrow><mml:mi mathvariant="normal">&#x03B7;</mml:mi></mml:mrow></mml:math></inline-formula> to the signal. The error rate over a Rayleigh fading channel cannot be substantially reduced simply by increasing transmission power or bandwidth allocation, as this is contrary to the requirements of next-generation systems. 
An iterative BP-CNN architecture is presented in [<xref ref-type="bibr" rid="ref-17">17</xref>]. The proposed work uses the BP-CNN architecture for noise estimation from the H.265 encoded video sequence at the receiver side. The received signal <inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:mi>y</mml:mi></mml:math></inline-formula> from the channel is first presented to the DSTS decoder. The DSTS-decoded symbol s is then presented to the SP demodulator, which produces the symbol <italic>u</italic>. The symbol <italic>u</italic> is then presented to the BP-CNN. Finally, a resultant bitstream <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mrow><mml:mover><mml:mi>x</mml:mi><mml:mo stretchy="false">&#x005E;</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> is provided to the H.266 or H.265 source decoder, where the video stream is reconstructed and the different performance parameters are computed.</p>
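The fading-plus-noise transmission step can be sketched as below (assuming NumPy). This is only an illustrative sketch: a unit-energy QPSK constellation stands in for the SP mapper, uncorrelated Rayleigh fading replaces the correlated channel of the paper, and perfect channel knowledge is assumed at the equalizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sym = 10_000
const = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)  # unit-energy QPSK stand-in
s = rng.choice(const, n_sym)                               # mapped symbols

h = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2)  # Rayleigh gains
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)                           # symbol energy is 1
eta = np.sqrt(noise_var / 2) * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

y = h * s + eta                # received signal: fading plus additive noise
s_hat = y / h                  # zero-forcing equalization with perfect CSI

# Nearest-neighbour decisions and raw symbol error rate
det = const[np.argmin(np.abs(s_hat[:, None] - const[None, :]), axis=1)]
ser = np.mean(det != s)
print(ser)                     # a few percent at 10 dB over Rayleigh fading
```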
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Convolutional codes using H.265 aided CNN at the decoding</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-5.png"/>
</fig>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Performance parameters of the proposed scheme</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Source-code parameters</th>
<th>Value</th>
<th>Channel-related parameters</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Compression standard</td>
<td>VVC, HEVC and AVC</td>
<td>Modulation scheme</td>
<td>SP</td>
</tr>
<tr>
<td>Data rate</td>
<td>64 kbps</td>
<td><inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>T</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> antennas</td>
<td>2</td>
</tr>
<tr>
<td>Frame rate</td>
<td>15 fps</td>
<td><inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>R</mml:mi><mml:mi>x</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> antennas</td>
<td>1</td>
</tr>
<tr>
<td>Slices per frame</td>
<td>9</td>
<td>Channel</td>
<td>Correlated Rayleigh fading channel</td>
</tr>
<tr>
<td>Slice mode</td>
<td>1</td>
<td>Channel codes</td>
<td>RSC, SCC, Turbo codes and LDPC codes</td>
</tr>
<tr>
<td>Number of macroblocks per frame</td>
<td>11</td>
<td>Maximum coding unit width</td>
<td>64</td>
</tr>
<tr>
<td>Sample adaptive offset (SAO) mode</td>
<td>1</td>
<td>MIMO scheme</td>
<td>DSTS</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s5">
<label>5</label>
<title>Simulation Results</title>
<p>The objective quality of the proposed system is assessed using standard performance metrics: BER and PSNR curves are plotted to visualize the performance of the proposed system for objective video quality assessment. The proposed system utilizes VVC and HEVC as source encoders, and convolutional, turbo, and self-concatenated convolutional codes as channel encoders. There are two implementations of the currently released VVC standard: VVCSoftware_VTM_master and VVenc-master. VVCSoftware_VTM takes much longer to encode, while VVenc-master is the faster implementation. The compression efficiency of VVC over HEVC is given in <xref ref-type="table" rid="table-2">Tab. 2</xref>.</p>
<p><table-wrap id="table-2"><label>Table 2</label>
<caption>
<title>Compression efficiency of H.266 source encoding standard over H.265</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Encoding standard</th>
<th>Video sequence</th>
<th>Resolution</th>
<th>Frames</th>
<th>Original video sequence size</th>
<th>Compressed video sequence size</th>
<th>Compression time [sec]</th>
<th>Compression efficiency</th>
</tr>
</thead>
<tbody>
<tr>
<td>HEVC</td>
<td>Akiyo low</td>
<td>176 &#x00D7; 144</td>
<td>300</td>
<td>11 MB</td>
<td>408 kb</td>
<td>35.357</td>
<td rowspan="3">97%<break/> 91%</td>
</tr>
<tr>
<td>VVC<break/>Software_VTM</td>
<td>Akiyo_low</td>
<td>176 &#x00D7; 144</td>
<td>300</td>
<td>11 MB</td>
<td>12.8 kB</td>
<td>1460.530</td>
</tr>
<tr>
<td>VVenc</td>
<td>Akiyo_low</td>
<td>176 &#x00D7; 144</td>
<td>300</td>
<td>11 MB</td>
<td>36.7 kb</td>
<td>11.414</td>
</tr>
<tr>
<td>HEVC</td>
<td>Johnny_1280 &#x00D7; 720_60.yuv<break/>Video sequence</td>
<td>1280 &#x00D7; 720</td>
<td>300</td>
<td>829 MB</td>
<td>4.3 MB</td>
<td>1019.714</td>
<td rowspan="3">48.8%</td>
</tr>
<tr>
<td>VVC</td>
<td>Johnny_1280 &#x00D7; 720_60.yuv<break/>Video sequence</td>
<td>1280 &#x00D7; 720</td>
<td>40</td>
<td>829 MB</td>
<td>49.1 kb</td>
<td></td>
</tr>
<tr>
<td>VVenc</td>
<td>Johnny_1280 &#x00D7; 720_60.yuv<break/>Video sequence</td>
<td>1280 &#x00D7; 720</td>
<td>300</td>
<td>829 MB</td>
<td>2.2 MB</td>
<td>1033.05</td>
</tr>
</tbody>
</table>
</table-wrap></p>
<p>From the results given in <xref ref-type="table" rid="table-2">Tab. 2</xref>, VVC achieves over 97% higher compression efficiency than HEVC for a low-resolution video sequence, while for a high-resolution video sequence, VVC achieves about 48.8% higher compression efficiency than the HEVC encoding standard. Moreover, a BP-CNN architecture is included to evaluate the proposed system on that architecture during the transmission of a video sequence. The proposed system uses the same code rate for all three channel codes and utilizes the Akiyo video sequence for transmission over the system. The video bitstream is also transmitted over the system with the BP-CNN architecture at the decoding side. To perform a fair analysis, suitable performance parameters are required for comparison. Two approaches are mainly utilized for comparing system results, i.e., objective and subjective analysis. The proposed work is evaluated on objective performance parameters, namely BER and PSNR. The simulation results are plotted by varying <inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mtext>&#xA0;dB</mml:mtext></mml:mrow></mml:math></inline-formula> at a constant code rate of <inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:mn>1</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mn>3</mml:mn></mml:math></inline-formula> while transmitting the bitstream.</p>
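<p>The efficiency figures quoted above are consistent with the usual relative size-reduction measure; a small sketch (assuming the compressed sizes in Tab. 2 are expressed in the same unit within each comparison):</p>

```python
def compression_gain(reference_size, test_size):
    """Size reduction of test_size relative to reference_size, in percent."""
    return 100.0 * (1.0 - test_size / reference_size)

# Tab. 2, Akiyo (176x144): VTM vs. HEVC and VVenc vs. HEVC compressed sizes
print(round(compression_gain(408, 12.8)))    # ~97 (%)
print(round(compression_gain(408, 36.7)))    # ~91 (%)
# Tab. 2, Johnny (1280x720): VVenc (2.2 MB) vs. HEVC (4.3 MB)
print(round(compression_gain(4.3, 2.2), 1))  # ~48.8 (%)
```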
<p>As the value of <inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mtext>&#xA0;dB</mml:mtext></mml:mrow></mml:math></inline-formula> increases, the BER decreases, and the BER of turbo codes quickly approaches zero, as shown in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>. Turbo codes on the developed system are also compared with convolutional codes and self-concatenated convolutional (SCC) codes. Convolutional codes perform better than SCC codes in terms of BER on the developed system. With perfect knowledge of the channel state information, turbo codes and convolutional codes show a steep drop to lower BER values. An <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> gain of 1.5 dB is achieved by turbo codes compared to convolutional codes at an approximate BER of <inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:msup><mml:mn>10</mml:mn><mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>5</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>, while convolutional codes gain an <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> of 3.2 dB compared to SCC.
In order to determine the effect of the neural network on stochastic channel noise estimation, the results of the proposed work are further compared with the same system utilizing the H.265 source encoding standard and the BP-CNN architecture at the decoding side. The BER <italic>vs.</italic> <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> dB curve, plotted while transmitting a compressed video sequence, is shown in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>. The BER decreases with increasing <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> dB values, as seen in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>. With the BP-CNN architecture applied at the decoding side, the channel codes achieve an accurate estimate of the stochastic channel noise, and a steep drop to lower BER values is observed. A gain of 0.8 <inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> dB is achieved compared to the benchmark system.
The <inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> <italic>vs.</italic> PSNR curve is plotted for the H.266, H.265, and H.264 source encoding standards, with convolutional codes as the channel code, as shown in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>. PSNR increases with increasing <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> values in dB. AVC performs better at lower E<sub>b</sub>/N<sub>0</sub>, but its performance degrades at higher values. It is clear from the results that convolutional codes with the latest H.266 standard outperform with a PSNR gain of <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mn>6</mml:mn><mml:mrow><mml:mtext>&#xA0;dB</mml:mtext></mml:mrow></mml:math></inline-formula> at about 3 dB E<sub>b</sub>/N<sub>0</sub> compared to H.265, while H.265 outperforms H.264 with a PSNR gain of about 5.5 dB. The subjective video quality for the considered AVC, HEVC, and VVC standards is given in <xref ref-type="fig" rid="fig-9">Fig. 9</xref>, while the validation for each corresponding frame is provided in <xref ref-type="table" rid="table-3">Tab. 3</xref>.</p>
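<p>The BER values underlying these curves are simply the fraction of mismatched bits between the transmitted and recovered bitstreams; a minimal sketch:</p>

```python
def bit_error_rate(tx_bits, rx_bits):
    """Fraction of positions where the received bit differs from the sent bit."""
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)

print(bit_error_rate([1, 0, 1, 1], [1, 1, 1, 0]))  # 0.5 (2 errors in 4 bits)
```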
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>BER <italic>vs.</italic> <italic>E</italic><sub><italic>b</italic></sub><italic>/N</italic><sub><italic>0</italic></sub> comparison of convolutional, turbo and self concatenated codes</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-6.png"/>
</fig>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Channel codes utilizing CNN and HEVC</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-7.png"/>
</fig>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>E<sub>b</sub>/N<sub>0</sub> <italic>vs.</italic> PSNR for H.264, H.265, and H.266</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-8.png"/>
</fig>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Subjective video performance of AVC, HEVC, and VVC using the proposed model</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_30531-fig-9.png"/>
</fig>
<p>From the subjective video quality of the proposed work, it can be clearly observed that AVC performs better for a lower frame index, i.e., the 10<sup>th</sup> frame, but drops the remaining frames afterwards, as seen in <xref ref-type="fig" rid="fig-9">Figs. 9a</xref> and <xref ref-type="fig" rid="fig-9">9d</xref>. For HEVC, about 300 frames are transmitted and all frames are received with the PSNR values given in <xref ref-type="table" rid="table-3">Tab. 3</xref>. The HEVC subjective results for frames 74 and 251 are given in <xref ref-type="fig" rid="fig-9">Figs. 9b</xref> and <xref ref-type="fig" rid="fig-9">9e</xref>, respectively. It is clear that HEVC delivers a large number of frames to the receiving end with a very low frame dropout rate. The same can be observed for the VVC video coding standard. Frames 20 and 44 are given in <xref ref-type="fig" rid="fig-9">Figs. 9c</xref> and <xref ref-type="fig" rid="fig-9">9f</xref>. VVC maintains an approximately constant PSNR for all frames, even at a lower E<sub>b</sub>/N<sub>0</sub> value.</p>
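<p>The per-frame PSNR values reported in Tab. 3 follow the standard definition over the mean squared error of a frame's samples (assuming 8-bit video, so a peak sample value of 255), computed separately for the luma and chroma components:</p>

```python
import math

def psnr(original, decoded, peak=255):
    """PSNR in dB between two equal-length lists of 8-bit samples."""
    mse = sum((o - d) ** 2 for o, d in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)
```

<p>For example, two short sample lists differing by 2 at every position give an MSE of 4 and hence a PSNR of roughly 42.1 dB.</p>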
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>PSNR values for each frame of the original and decoded video sequence at an E<sub>b</sub>/N<sub>0</sub> value of 2.5 dB</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Encoding standard</th>
<th>Frame number</th>
<th>PSNR over luma</th>
<th>PSNR over two chroma components</th>
</tr>
</thead>
<tbody>
<tr>
<td>AVC</td>
<td>10<sup>th</sup> frame of Akiyo Video Sequence</td>
<td>36.48656</td>
<td>40.97436</td>
</tr>
<tr>
<td/>
<td>15<sup>th</sup> frame of Akiyo Video Sequence</td>
<td>14.24479</td>
<td>13.18357</td>
</tr>
<tr>
<td>HEVC</td>
<td>74<sup>th</sup> frame of Akiyo Video Sequence</td>
<td>32.57907</td>
<td>34.137251</td>
</tr>
<tr>
<td/>
<td>251<sup>st</sup> frame of Akiyo Video Sequence</td>
<td>32.9236</td>
<td>34.7027</td>
</tr>
<tr>
<td>VVC</td>
<td>20<sup>th</sup> frame of Akiyo Video Sequence</td>
<td>37.47395633</td>
<td>40.0042</td>
</tr>
<tr>
<td/>
<td>44<sup>th</sup> frame of Akiyo Video Sequence</td>
<td>37.1205126</td>
<td>39.41405</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s6">
<label>6</label>
<title>Conclusion</title>
<p>This article compares the architecture, performance, and design of three extensively used channel codes utilizing the latest source encoders VVC, HEVC, and AVC. Moreover, a BP-CNN architecture is added at the decoding side to attain a lower BER compared to the benchmark system. We proceed by comparing their block structures to distinguish among the said codes. Further, after simulating the codes and evaluating their BER performance, it is concluded that turbo codes exceed the others by a considerable margin. Turbo codes outperform with an <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> gain of 1.5 dB compared to convolutional codes, while convolutional codes outperform SCC with an average <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:msub><mml:mi>E</mml:mi><mml:mrow><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> gain of 3.2 dB. The PSNR results of convolutional codes with the H.266 source coding standard are plotted against convolutional codes with HEVC/H.265, and it is concluded that VVC outperforms with about a 6 dB PSNR gain at an E<sub>b</sub>/N<sub>0</sub> of 3 dB. From the subjective video quality assessment, it is concluded that HEVC and VVC preserve a higher number of frames at the receiving end while keeping the frame dropout rate very low. Furthermore, the BP-CNN based system produces an <italic>E</italic><sub><italic>b</italic></sub><italic>/N</italic><sub><italic>0</italic></sub> gain of 0.8 dB compared to the benchmark system while transmitting the HEVC compressed video sequence.</p>
</sec>
</body>
<back>
<ack>
<p>The financial support of NCBC-UETP, under the auspices of Higher Education Commission, Pakistan is gratefully acknowledged.</p>
</ack>
<fn-group>
<fn fn-type="other"><p><bold>Funding Statement:</bold> This article was supported by the Ministry of Education of the Czech Republic (Project No. SP2022/18 and No. SP2022/5) and by the European Regional Development Fund in the Research Centre of Advanced Mechatronic Systems project, project number CZ.02.1.01/0.0/0.0/16 019/0000867 within the Operational Programme Research, Development, and Education.</p>
</fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that there is no conflict of interest regarding the publication of this paper.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C. E.</given-names> <surname>Shannon</surname></string-name></person-group>, &#x201C;<article-title>A mathematical theory of communication</article-title>,&#x201D; <source>Bell System Technical Journal</source>, vol. <volume>27</volume>, no. <issue>3</issue>, pp. <fpage>379</fpage>&#x2013;<lpage>423</lpage>, <year>1948</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Hero</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Ma</surname></string-name> and <string-name><given-names>O.</given-names> <surname>Michel</surname></string-name></person-group>, &#x201C;<article-title>Imaging applications of stochastic minimal graphs</article-title>,&#x201D; <source>Proceedings 2001 Int. Conf. on Image Processing (Cat. No. 01CH37205)</source>, vol. <volume>3</volume>, pp. <fpage>573</fpage>&#x2013;<lpage>576</lpage>, <year>2001</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Minallah</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Ullah</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Frnda</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Hasan</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Nedoma</surname></string-name></person-group>, &#x201C;<article-title>On the performance of video resolution, motion and dynamism in transmission using near-capacity transceiver for wireless communication</article-title>,&#x201D; <source>Entropy</source>, vol. <volume>23</volume>, no. <issue>5</issue>, pp. <fpage>562</fpage>&#x2013;<lpage>586</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Minallah</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Ullah</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Frnda</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Cengiz</surname></string-name> and <string-name><given-names>M. A.</given-names> <surname>Javed</surname></string-name></person-group>, &#x201C;<article-title>Transmitter diversity gain technique aided irregular channel coding for mobile video transmission</article-title>,&#x201D; <source>Entropy</source>, vol. <volume>23</volume>, no. <issue>2</issue>, pp. <fpage>235</fpage>&#x2013;<lpage>256</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Khalil</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Minallah</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Ahmed</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Ullah</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Frnda</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Robust mobile video transmission using DSTS-SP via three-stage iterative joint source-channel decoding</article-title>,&#x201D; <source>Human Centric Computing and Information Sciences</source>, vol. <volume>11</volume>, no. <issue>42</issue>, pp. <fpage>343</fpage>&#x2013;<lpage>359</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Minallah</surname></string-name>, <string-name><given-names>M. F. U.</given-names> <surname>Butt</surname></string-name>, <string-name><given-names>I. U.</given-names> <surname>Khan</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Ahmed</surname></string-name>, <string-name><given-names>K. S.</given-names> <surname>Khattak</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Analysis of near-capacity iterative decoding schemes for wireless communication using EXIT charts</article-title>,&#x201D; <source>IEEE Access</source>, vol. <volume>8</volume>, pp. <fpage>124424</fpage>&#x2013;<lpage>124436</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Minallah</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Ahmed</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Ijaz</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Khan</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Hasan</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>On the performance of self-concatenated coding for wireless mobile video transmission using DSTS-SP-assisted smart antenna system</article-title>,&#x201D; <source>Wireless Communications and Mobile Computing</source>, vol. <volume>5</volume>, no. <issue>11</issue>, pp. <fpage>1530</fpage>&#x2013;<lpage>8669</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Guyader</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Fabre</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Guillemot</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Robert</surname></string-name></person-group>, &#x201C;<article-title>Joint source channel turbo decoding of entropy-coded sources</article-title>,&#x201D; <source>IEEE Journal on Selected Areas in Communications</source>, vol. <volume>19</volume>, no. <issue>9</issue>, pp. <fpage>1680</fpage>&#x2013;<lpage>1696</lpage>, <year>2001</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Kliewer</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Thobaben</surname></string-name></person-group>, &#x201C;<article-title>Iterative joint source-channel decoding of variable-length codes using residual source redundancy</article-title>,&#x201D; <source>IEEE Transactions on Wireless Communications</source>, vol. <volume>4</volume>, no. <issue>3</issue>, pp. <fpage>919</fpage>&#x2013;<lpage>929</lpage>, <year>2005</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R. G.</given-names> <surname>Maunder</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>S. X.</given-names> <surname>Ng</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Yang</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Hanzo</surname></string-name></person-group>, &#x201C;<article-title>On the performance and complexity of irregular variable length codes for near-capacity joint source and channel coding</article-title>,&#x201D; <source>IEEE Transactions on Wireless Communications</source>, vol. <volume>7</volume>, no. <issue>4</issue>, pp. <fpage>1338</fpage>&#x2013;<lpage>1347</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T.</given-names> <surname>Fingscheidt</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Vary</surname></string-name></person-group>, &#x201C;<article-title>Softbit speech decoding: A new approach to error concealment</article-title>,&#x201D; <source>IEEE Transactions on Speech and Audio Processing</source>, vol. <volume>9</volume>, no. <issue>3</issue>, pp. <fpage>240</fpage>&#x2013;<lpage>251</lpage>, <year>2001</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Adrat</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Vary</surname></string-name></person-group>, &#x201C;<article-title>Iterative source-channel decoding: Improved system design using exit charts</article-title>,&#x201D; <source>EURASIP Journal on Applied Signal Processing</source>, vol. <volume>2005</volume>, pp. <fpage>928</fpage>&#x2013;<lpage>941</lpage>, <year>2005</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Ostermann</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Bormans</surname></string-name>, <string-name><given-names>P.</given-names> <surname>List</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Marpe</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Narroschke</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Video coding with H.264/AVC: Tools, performance, and complexity</article-title>,&#x201D; <source>IEEE Circuits and Systems Magazine</source>, vol. <volume>4</volume>, no. <issue>1</issue>, pp. <fpage>7</fpage>&#x2013;<lpage>28</lpage>, <year>2004</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Hagenauer</surname></string-name></person-group>, &#x201C;<article-title>The exit chart-introduction to extrinsic information transfer in iterative processing</article-title>,&#x201D; in <conf-name>2004 12th European Signal Processing Conf.</conf-name>, <publisher-name>IEEE</publisher-name>, pp. <fpage>1541</fpage>&#x2013;<lpage>1548</lpage>, <year>2004</year>. </mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S. T.</given-names> <surname>Brink</surname></string-name></person-group>, &#x201C;<article-title>Designing iterative decoding schemes with the extrinsic information transfer chart</article-title>,&#x201D; <source>AEU Int. J. Electron Commun.</source>, vol. <volume>54</volume>, no. <issue>6</issue>, pp. <fpage>389</fpage>&#x2013;<lpage>398</lpage>, <year>2000</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Hanzo</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Cherriman</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Kuan</surname></string-name></person-group>, &#x201C;<article-title>Interactive cellular and cordless video telephony: State-of-the-art system design principles and expected performance</article-title>,&#x201D; <source>Proceedings of the IEEE</source>, vol. <volume>88</volume>, no. <issue>9</issue>, pp. <fpage>1388</fpage>&#x2013;<lpage>1413</lpage>, <year>2000</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Liang</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Shen</surname></string-name> and <string-name><given-names>F.</given-names> <surname>Wu</surname></string-name></person-group>, &#x201C;<article-title>An iterative bp-cnn architecture for channel decoding</article-title>,&#x201D; <source>IEEE Journal of Selected Topics in Signal Processing</source>, vol. <volume>12</volume>, no. <issue>1</issue>, pp. <fpage>144</fpage>&#x2013;<lpage>159</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. U.</given-names> <surname>Gawas</surname></string-name></person-group>, &#x201C;<article-title>An overview on evolution of mobile wireless communication networks: 1G-6G</article-title>,&#x201D; <source>International Journal on Recent and Innovation Trends in Computing and Communication</source>, vol. <volume>3</volume>, no. <issue>5</issue>, pp. <fpage>3130</fpage>&#x2013;<lpage>3133</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Khaled</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Shi</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name> and <string-name><given-names>Y. A.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>The roadmap to 6G: AI empowered wireless networks</article-title>,&#x201D; <source>IEEE Communications Magazine</source>, vol. <volume>57</volume>, no. <issue>8</issue>, pp. <fpage>84</fpage>&#x2013;<lpage>90</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Lisu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Wu</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Zhou</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Larsson</surname></string-name> and <string-name><given-names>P.</given-names> <surname>Fan</surname></string-name></person-group>, &#x201C;<article-title>Massively distributed antenna systems with nonideal optical fiber fronthauls: A promising technology for 6G wireless communication systems</article-title>,&#x201D; <source>IEEE Vehicular Technology Magazine</source>, vol. <volume>15</volume>, no. <issue>4</issue>, pp. <fpage>43</fpage>&#x2013;<lpage>51</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Yu</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Zilong</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Miaowen</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Donghong</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Shuping</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Sparse code multiple access for 6G wireless communication networks: Recent advances and future directions</article-title>,&#x201D; <source>IEEE Communications Standards Magazine</source>, vol. <volume>5</volume>, no. <issue>2</issue>, pp. <fpage>92</fpage>&#x2013;<lpage>99</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Viitanen</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Sainio</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Mercat</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Lemmetti</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Vanne</surname></string-name></person-group>, &#x201C;<article-title>From HEVC to VVC: The first development steps of a practical intra video encoder</article-title>,&#x201D; <source>IEEE Transactions on Consumer Electronics</source>, vol. <volume>68</volume>, no. <issue>2</issue>, pp. <fpage>139</fpage>&#x2013;<lpage>148</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>B.</given-names> <surname>Bross</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Ye</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Chen</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Overview of the versatile video coding (VVC) standard and its applications</article-title>,&#x201D; <source>IEEE Transactions on Circuits and Systems for Video Technology</source>, vol. <volume>31</volume>, no. <issue>10</issue>, pp. <fpage>3736</fpage>&#x2013;<lpage>3764</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>X.</given-names> <surname>Zhao</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Kim</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Zhao</surname></string-name>, <string-name><given-names>H. E.</given-names> <surname>Egilmez</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Koo</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Transform coding in the VVC standard</article-title>,&#x201D; <source>IEEE Transactions on Circuits and Systems for Video Technology</source>, vol. <volume>31</volume>, no. <issue>10</issue>, pp. <fpage>3878</fpage>&#x2013;<lpage>3890</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>J. G.</given-names> <surname>Proakis</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Salehi</surname></string-name></person-group>, <source>Digital Communications</source>, <edition>4th ed.</edition> <publisher-loc>New York</publisher-loc>: <publisher-name>McGraw-Hill</publisher-name>, <year>2001</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Urrea</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Kern</surname></string-name> and <string-name><given-names>R. L.</given-names> <surname>Escobar</surname></string-name></person-group>, &#x201C;<article-title>Design of an interleaver with criteria to improve the performance of turbo codes in short block lengths</article-title>,&#x201D; <source>Wireless Networks</source>, vol. <volume>28</volume>, no. <issue>90</issue>, pp. <fpage>1428</fpage>&#x2013;<lpage>1429</lpage>, <year>2022</year>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>L. A.</given-names> <surname>Perisoara</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Stoian</surname></string-name></person-group>, &#x201C;<article-title>The decision reliability of MAP, Log-MAP, Max-Log-MAP and SOVA algorithms for turbo codes</article-title>,&#x201D; <source>International Journal of Communications</source>, vol. <volume>2</volume>, no. <issue>1</issue>, pp. <fpage>65</fpage>&#x2013;<lpage>74</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Berrou</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Pyndiah</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Adde</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Douillard</surname></string-name> and <string-name><given-names>R. L.</given-names> <surname>Bidan</surname></string-name></person-group>, &#x201C;<article-title>An overview of turbo codes and their applications</article-title>,&#x201D; in <conf-name>The European Conf. on Wireless Technology</conf-name>, <publisher-name>IEEE</publisher-name>, pp. <fpage>1</fpage>&#x2013;<lpage>9</lpage>, <year>2005</year>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S. X.</given-names> <surname>Ng</surname></string-name>, <string-name><given-names>M. F. U.</given-names> <surname>Butt</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Hanzo</surname></string-name></person-group>, &#x201C;<article-title>On the union bounds of self-concatenated convolutional codes</article-title>,&#x201D; <source>IEEE Signal Processing Letters</source>, vol. <volume>16</volume>, no. <issue>9</issue>, pp. <fpage>754</fpage>&#x2013;<lpage>757</lpage>, <year>2009</year>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. F. U.</given-names> <surname>Butt</surname></string-name>, <string-name><given-names>S. X.</given-names> <surname>Ng</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Hanzo</surname></string-name></person-group>, &#x201C;<article-title>Self-concatenated code design and its application in power-efficient cooperative communications</article-title>,&#x201D; <source>IEEE Communications Surveys &#x0026; Tutorials</source>, vol. <volume>14</volume>, no. <issue>3</issue>, pp. <fpage>858</fpage>&#x2013;<lpage>883</lpage>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><given-names>M. F. U.</given-names> <surname>Butt</surname></string-name></person-group>, &#x201C;<article-title>Self-concatenated coding for wireless communication systems</article-title>,&#x201D; <comment>PhD thesis, University of Southampton</comment>, <year>2010</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>
