<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">IASC</journal-id>
<journal-id journal-id-type="nlm-ta">IASC</journal-id>
<journal-id journal-id-type="publisher-id">IASC</journal-id>
<journal-title-group>
<journal-title>Intelligent Automation &#x0026; Soft Computing</journal-title>
</journal-title-group>
<issn pub-type="epub">2326-005X</issn><issn pub-type="ppub">1079-8587</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">17607</article-id>
<article-id pub-id-type="doi">10.32604/iasc.2021.017607</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Face Image Compression and Reconstruction Based on Improved PCA</article-title>
<alt-title alt-title-type="left-running-head">Face Image Compression and Reconstruction Based on Improved PCA</alt-title>
<alt-title alt-title-type="right-running-head">Face Image Compression and Reconstruction Based on Improved PCA</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western">
<surname>Xue</surname>
<given-names>Yu</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
<xref ref-type="aff" rid="aff-2">2</xref>
<email>xueyu@nuist.edu.cn</email>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western">
<surname>Chen</surname>
<given-names>Chen</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western">
<surname>Wang</surname>
<given-names>ChiShe</given-names>
</name>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western">
<surname>Li</surname>
<given-names>Linguo</given-names>
</name>
<xref ref-type="aff" rid="aff-3">3</xref>
</contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western">
<surname>Mansour</surname>
<given-names>Romany F.</given-names>
</name>
<xref ref-type="aff" rid="aff-4">4</xref>
</contrib>
<aff id="aff-1">
<label>1</label><institution>School of Computer and Software, Nanjing University of Information Science and Technology</institution>, <addr-line>Nanjing, 210044</addr-line>, <country>China</country></aff>
<aff id="aff-2">
<label>2</label><institution>Jiangsu Key Laboratory of Data Science and Smart Software, Jinling Institute of Technology</institution>, <addr-line>Nanjing, 211169</addr-line>, <country>China</country></aff>
<aff id="aff-3">
<label>3</label><institution>College of Information Engineering, Fuyang Normal University</institution>, <addr-line>Fuyang, 236041</addr-line>, <country>China</country></aff>
<aff id="aff-4">
<label>4</label><institution>Department of Mathematics, Faculty of Science, New Valley University</institution>, <addr-line>El-Kharga, 72511</addr-line>, <country>Egypt</country></aff>
</contrib-group><author-notes><corresp id="cor1">&#x002A;Corresponding Author: Yu Xue. Email: <email>xueyu@nuist.edu.cn</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-08-11">
<day>11</day>
<month>8</month>
<year>2021</year>
</pub-date>
<volume>30</volume>
<issue>3</issue>
<fpage>973</fpage>
<lpage>982</lpage>
<history>
<date date-type="received">
<day>04</day>
<month>2</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>02</day>
<month>7</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2021 Xue et al.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Xue et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_IASC_17607.pdf"></self-uri>
<abstract>
<p>Face recognition technology has many uses in real-world applications, and it has attracted extensive interest in recent years. However, the amount of data in a digital image is growing explosively, consuming considerable storage and transmission resources, and an image data representation contains a great deal of redundancy. Image compression has therefore become a hot topic. Principal component analysis (PCA) can effectively remove the correlation within an image and condense the image information into a characteristic image composed of several main components. At the same time, it can restore data images at different fidelities according to the number of principal components retained, meeting the needs of image compression and reconstruction at diverse levels. This paper introduces an improved PCA algorithm. The covariance matrix calculated from a batch of training samples is only an approximation of the real covariance matrix: relative to the dimension of the covariance matrix, the number of training samples is often too small, so it is difficult to estimate the covariance matrix accurately. The improved PCA algorithm, called 2DPCA, solves this problem effectively. By comparing it with several other PCA improvement algorithms, we show that 2DPCA has a better dimensionality-reduction effect. Compared with the PCA algorithm, 2DPCA also achieves a lower root-mean-square error under constant noise conditions.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Image compression</kwd>
<kwd>PCA</kwd>
<kwd>feature extraction</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>At present, the amount of digital image data is soaring, occupying a large amount of storage space and transmission bandwidth [<xref ref-type="bibr" rid="ref-1">1</xref>]. Due to the high correlation between adjacent pixels, there is a lot of redundancy in image data representation. The principal component analysis (PCA) method can remove the correlation within the image data [<xref ref-type="bibr" rid="ref-2">2</xref>] and effectively compress the image information into several main components. At the same time, it can restore data images at different fidelities according to the number of principal components retained, thus meeting the needs of image compression and reconstruction at different levels. Moreover, PCA is often used for feature selection [<xref ref-type="bibr" rid="ref-3">3</xref>&#x2013;<xref ref-type="bibr" rid="ref-5">5</xref>].</p>
<p>Among subspace methods, face images are the researchers&#x2019; top concern; they have attracted wide attention and have been deeply studied by the academic community. Feature extraction and dimension reduction are the key steps of face compression [<xref ref-type="bibr" rid="ref-6">6</xref>]. However, the PCA algorithm has many shortcomings. The common PCA compression method cannot achieve good results under external conditions such as changes in facial expression and strong light. Another important factor to consider is the dimension of the pictures [<xref ref-type="bibr" rid="ref-7">7</xref>]. Therefore, it is necessary to study an improved PCA algorithm that can enhance compression efficiency and improve reconstruction accuracy [<xref ref-type="bibr" rid="ref-8">8</xref>]. Image compression and reconstruction can also be applied in drones [<xref ref-type="bibr" rid="ref-9">9</xref>]. It should be noted that self-adaptive parameters are a promising direction for optimizing PCA [<xref ref-type="bibr" rid="ref-10">10</xref>&#x2013;<xref ref-type="bibr" rid="ref-13">13</xref>].</p>
<p>The main work of this paper is to study and analyze the PCA algorithm for image compression and reconstruction. The paper focuses on improved PCA algorithms, including 2DPCA, Mat PCA, and Module PCA. The rest of the article is structured as follows: Section 2 introduces related work. Section 3 describes PCA and its improved variants. Section 4 presents the experimental design and analyzes the experimental results. Finally, Section 5 provides the conclusions.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Related Work</title>
<p>PCA, or principal component analysis, is a statistical method that converts the original variables into several new composite variables [<xref ref-type="bibr" rid="ref-14">14</xref>]. These new variables are uncorrelated with each other and effectively represent the information of the original variables. PCA can remove the correlation between image data and condense the image information into a characteristic image consisting of several main components, so it is an effective way to realize image compression. At the same time, it can recover data images at different fidelities according to the number of principal components retained, meeting the needs of image compression and reconstruction at diverse levels. Deep learning can also be used to conduct such data analysis [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>], and PCA can be used to preprocess multi-objective optimization algorithms [<xref ref-type="bibr" rid="ref-17">17</xref>]. The basic PCA image compression algorithm can achieve an ideal compression ratio, but it lacks a good criterion for selecting the number of retained features, its signal-to-noise ratio is low, and it does not consider non-linear or non-stationary image signals. The algorithm can also be optimized by evolutionary algorithms and deep learning [<xref ref-type="bibr" rid="ref-18">18</xref>,<xref ref-type="bibr" rid="ref-19">19</xref>].</p>
</sec>
<sec id="s3">
<label>3</label>
<title>Improved PCA Algorithms</title>
<p>Adjacent pixels in a face image carry redundant information, which predictive coding exploits by subtracting the predicted value <inline-formula id="ieqn-1">
<!--<alternatives><inline-graphic xlink:href="ieqn-1.png"/><tex-math id="tex-ieqn-1"><![CDATA[${I_P}$]]></tex-math>--><mml:math id="mml-ieqn-1"><mml:mrow><mml:msub><mml:mi>I</mml:mi><mml:mi>P</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> from the actual value <inline-formula id="ieqn-2">
<!--<alternatives><inline-graphic xlink:href="ieqn-2.png"/><tex-math id="tex-ieqn-2"><![CDATA[$I$]]></tex-math>--><mml:math id="mml-ieqn-2"><mml:mi>I</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>, which yields the difference value <inline-formula id="ieqn-3">
<!--<alternatives><inline-graphic xlink:href="ieqn-3.png"/><tex-math id="tex-ieqn-3"><![CDATA[$\Delta I$]]></tex-math>--><mml:math id="mml-ieqn-3"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:mi>I</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>, and the value <inline-formula id="ieqn-4">
<!--<alternatives><inline-graphic xlink:href="ieqn-4.png"/><tex-math id="tex-ieqn-4"><![CDATA[$\Delta I$]]></tex-math>--><mml:math id="mml-ieqn-4"><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:mi>I</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> is known as the prediction error. Only the prediction error is then compressed and encoded. Since the prediction of each pixel uses only previously encoded pixels, this coding process is said to be causal. The decoding process based on causal encoding is shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>A classical predictive based compression system</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_17607-fig-1.png"/>
</fig>
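<p>As a minimal sketch of this predictive scheme (illustrative only; the function names and the left-neighbor predictor are our assumptions, not the paper&#x2019;s implementation), the simplest causal predictor uses the previously encoded pixel in the same row, so only the prediction error needs to be stored:</p>

```python
import numpy as np

def predictive_encode(image):
    """Causal prediction: each pixel is predicted by its left neighbor.

    Returns the prediction error, which is typically much easier to
    compress than the raw pixel values.
    """
    img = image.astype(np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]      # predicted value I_P (left neighbor)
    return img - pred              # prediction error dI = I - I_P

def predictive_decode(error):
    """Invert the causal predictor by a cumulative sum along each row."""
    return np.cumsum(error, axis=1).astype(np.int32)

img = np.array([[10, 12, 13], [50, 51, 49]])
err = predictive_encode(img)
assert np.array_equal(predictive_decode(err), img)  # lossless round trip
```

<p>Because the predictor is causal, the decoder can regenerate every prediction from already-decoded pixels, and the round trip is lossless.</p>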
<p>Another approach to image compression is transformation. In the encoding process, the image is first mapped to a set of coefficients by some transform (linear or nonlinear), and these coefficients are then quantized to obtain the compressed image. At decoding time, the encoded coefficients are inverse-quantized, and the reconstructed image is produced by the inverse transform. A typical transformation-based compression system is shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>A typical transformation-based compression system</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_17607-fig-2.png"/>
</fig>
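<p>The pipeline of Fig. 2 can be sketched as follows (a hedged illustration: we use an orthonormal DCT as the linear transform and a uniform quantizer; the paper does not prescribe these particular choices):</p>

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis: rows are cosine basis vectors."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * x + 1) / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def transform_encode(block, step=10.0):
    """Transform the block, then uniformly quantize the coefficients."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T           # forward 2-D transform
    return np.round(coeffs / step)     # quantization (the lossy step)

def transform_decode(q, step=10.0):
    """Inverse-quantize, then apply the inverse transform."""
    C = dct_matrix(q.shape[0])
    return C.T @ (q * step) @ C        # C is orthogonal: inverse = C.T

block = np.arange(64, dtype=float).reshape(8, 8)
rec = transform_decode(transform_encode(block))
# rec is close to, but not exactly equal to, the original block
```

<p>Quantization is the only lossy step; with a smaller quantization step the reconstruction approaches the original at the cost of a lower compression ratio.</p>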
<p>Both predictive coding and transform coding have their own advantages. The former is relatively simple to implement, and the algorithm itself adapts to the original information of the image. The latter generally achieves a higher compression ratio, but at the cost of a more complex transform computation, which also makes the implementation more involved. Image compression is usually evaluated from two aspects: compression performance and compressed image quality. Compression performance is usually measured by the compression ratio <inline-formula id="ieqn-5">
<!--<alternatives><inline-graphic xlink:href="ieqn-5.png"/><tex-math id="tex-ieqn-5"><![CDATA[${C_R}$]]></tex-math>--><mml:math id="mml-ieqn-5"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>R</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> or relative data redundancy <inline-formula id="ieqn-6">
<!--<alternatives><inline-graphic xlink:href="ieqn-6.png"/><tex-math id="tex-ieqn-6"><![CDATA[$R$]]></tex-math>--><mml:math id="mml-ieqn-6"><mml:mi>R</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>; the compression ratio is defined as the ratio between the total amount of original data <italic>b</italic> and the total amount of compressed data <inline-formula id="ieqn-7">
<!--<alternatives><inline-graphic xlink:href="ieqn-7.png"/><tex-math id="tex-ieqn-7"><![CDATA[$b&#x2019;$]]></tex-math>--><mml:math id="mml-ieqn-7"><mml:msup><mml:mi>b</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:math>
<!--</alternatives>--></inline-formula>. Relative data redundancy <inline-formula id="ieqn-8">
<!--<alternatives><inline-graphic xlink:href="ieqn-8.png"/><tex-math id="tex-ieqn-8"><![CDATA[$R$]]></tex-math>--><mml:math id="mml-ieqn-8"><mml:mi>R</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> is defined as the fraction of the original data removed by compression:</p>
<p><disp-formula id="eqn-1">
<label>(1)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-1.png"/><tex-math id="tex-eqn-1"><![CDATA[$${C_R} = {{b} \over {b&#x2019;}}$$]]></tex-math>--><mml:math id="mml-eqn-1" display="block"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>R</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>b</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mi>b</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-2">
<label>(2)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-2.png"/><tex-math id="tex-eqn-2"><![CDATA[$$R = {{\Delta b} \over b} = {{b - b&#x2019;} \over b} = 1 - {1 \over {{C_R}}}$$]]></tex-math>--><mml:math id="mml-eqn-2" display="block"><mml:mi>R</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi mathvariant="normal">&#x0394;</mml:mi><mml:mi>b</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:mfrac></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mi>b</mml:mi><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mi>b</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:mrow><mml:mi>b</mml:mi></mml:mfrac></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mi>R</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-3">
<label>(3)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-3.png"/><tex-math id="tex-eqn-3"><![CDATA[$$bpp = {{b&#x2019;} \over {pixels}}$$]]></tex-math>--><mml:math id="mml-eqn-3" display="block"><mml:mi>b</mml:mi><mml:mi>p</mml:mi><mml:mi>p</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:msup><mml:mi>b</mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mi>i</mml:mi><mml:mi>x</mml:mi><mml:mi>e</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-5">
<label>(4)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-5.png"/><tex-math id="tex-eqn-5"><![CDATA[$${e_{rms}} = \sqrt {{1 \over {MN}}\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{{\left[ {\hat I\left( {x,y} \right) - I\left( {x,y} \right)} \right]}^2}} } }$$]]></tex-math>--><mml:math id="mml-eqn-5" display="block"><mml:mrow><mml:msub><mml:mi>e</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mi>m</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:msqrt><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi>M</mml:mi><mml:mi>N</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mrow><mml:mover><mml:mi>I</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow></mml:mrow></mml:msqrt></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-7">
<label>(5)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-7.png"/><tex-math id="tex-eqn-7"><![CDATA[$$SNR = {{\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {\hat I} } {{\left( {x,y} \right)}^2}} \over {\sum\limits_{x = 0}^{M - 1} {\sum\limits_{y = 0}^{N - 1} {{{\left[ {\hat I\left( {x,y} \right) - I\left( {x,y} \right)} \right]}^2}} } }}$$]]></tex-math>--><mml:math id="mml-eqn-7" display="block"><mml:mi>S</mml:mi><mml:mi>N</mml:mi><mml:mi>R</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mrow><mml:mover><mml:mi>I</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:munderover><mml:mo 
movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:munderover><mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mrow><mml:mover><mml:mi>I</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow></mml:mrow></mml:mrow></mml:mfrac></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-8">
<label>(6)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-8.png"/><tex-math id="tex-eqn-8"><![CDATA[$$PSNR = 10\log {{{{\left( {{2^n} - 1} \right)}^2}} \over {MSE}}$$]]></tex-math>--><mml:math id="mml-eqn-8" display="block"><mml:mi>P</mml:mi><mml:mi>S</mml:mi><mml:mi>N</mml:mi><mml:mi>R</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>10</mml:mn><mml:mi>log</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msup><mml:mn>2</mml:mn><mml:mi>n</mml:mi></mml:msup></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:mi>M</mml:mi><mml:mi>S</mml:mi><mml:mi>E</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>The quality of a compressed image can be evaluated either subjectively or objectively. Common objective quality evaluation methods include the root mean square error, the SNR (signal-to-noise ratio), and the PSNR (peak signal-to-noise ratio).</p>
<p>The <inline-formula id="ieqn-9">
<!--<alternatives><inline-graphic xlink:href="ieqn-9.png"/><tex-math id="tex-ieqn-9"><![CDATA[$I\left( {x,y} \right)$]]></tex-math>--><mml:math id="mml-ieqn-9"><mml:mi>I</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-10">
<!--<alternatives><inline-graphic xlink:href="ieqn-10.png"/><tex-math id="tex-ieqn-10"><![CDATA[$\hat I\left( {x,y} \right)$]]></tex-math>--><mml:math id="mml-ieqn-10"><mml:mrow><mml:mover><mml:mi>I</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represent the original image and the decompressed image, respectively. In the <italic>PSNR</italic> formula, <italic>n</italic> is the number of bits per pixel, usually 8, and <italic>MSE</italic> denotes the mean square error. Although objective quality assessment is convenient and feasible, it cannot truly reflect people&#x2019;s subjective perception of an image, so subjective quality assessment is more accurate. In this paper, the root mean square error is used primarily to evaluate the quality of image reconstruction.</p>
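<p>The evaluation measures above can be computed directly (a sketch; we take the base-10 logarithm in the PSNR formula, as is standard when PSNR is reported in decibels):</p>

```python
import numpy as np

def compression_metrics(original_bits, compressed_bits, num_pixels):
    """Compression ratio C_R, relative redundancy R, and bits per pixel."""
    c_r = original_bits / compressed_bits          # Eq. (1)
    r = 1.0 - 1.0 / c_r                            # Eq. (2)
    bpp = compressed_bits / num_pixels             # Eq. (3)
    return c_r, r, bpp

def quality_metrics(original, reconstructed, n_bits=8):
    """Root mean square error (Eq. 4) and PSNR (Eq. 6) between two images."""
    diff = reconstructed.astype(float) - original.astype(float)
    mse = np.mean(diff ** 2)                       # mean square error
    rmse = np.sqrt(mse)
    peak = (2 ** n_bits - 1) ** 2                  # (2^n - 1)^2
    psnr = 10.0 * np.log10(peak / mse)             # base-10 log, in dB
    return rmse, psnr

c_r, r, bpp = compression_metrics(8 * 64 * 64, 2 * 64 * 64, 64 * 64)
# c_r = 4.0, r = 0.75, bpp = 2.0
```
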
<sec id="s3_1">
<label>3.1</label>
<title>PCA</title>
<p>The Karhunen&#x2013;Lo&#x00E8;ve (K-L) transformation is one of the main steps of the PCA method, and it is what realizes facial image compression and reconstruction. The K-L transformation is classical and easy to implement. Before compression, the basic PCA method first selects some images as training images. Assuming that each training image has a size of <inline-formula id="ieqn-11">
<!--<alternatives><inline-graphic xlink:href="ieqn-11.png"/><tex-math id="tex-ieqn-11"><![CDATA[$N \times N$]]></tex-math>--><mml:math id="mml-ieqn-11"><mml:mi>N</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>N</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>, the pixels of all its columns can be joined end to end. In this way, each image can be stretched into a column vector of length <inline-formula id="ieqn-12">
<!--<alternatives><inline-graphic xlink:href="ieqn-12.png"/><tex-math id="tex-ieqn-12"><![CDATA[${N^2}$]]></tex-math>--><mml:math id="mml-ieqn-12"><mml:mrow><mml:msup><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> which can be viewed as a point in <inline-formula id="ieqn-13">
<!--<alternatives><inline-graphic xlink:href="ieqn-13.png"/><tex-math id="tex-ieqn-13"><![CDATA[${N^2}$]]></tex-math>--><mml:math id="mml-ieqn-13"><mml:mrow><mml:msup><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>-dimensional space. Because the training images are very similar to one another, their vectors are not distributed randomly or chaotically in this high-dimensional space: they are strongly correlated, which allows a low-dimensional subspace to describe the images. Assuming that the training set contains <inline-formula id="ieqn-14">
<!--<alternatives><inline-graphic xlink:href="ieqn-14.png"/><tex-math id="tex-ieqn-14"><![CDATA[$m$]]></tex-math>--><mml:math id="mml-ieqn-14"><mml:mi>m</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> images, let <inline-formula id="ieqn-15">
<!--<alternatives><inline-graphic xlink:href="ieqn-15.png"/><tex-math id="tex-ieqn-15"><![CDATA[${X_i},i \in \left\{ {1,2,3, \ldots ,m} \right\}$]]></tex-math>--><mml:math id="mml-ieqn-15"><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>m</mml:mi></mml:mrow><mml:mo>}</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> be the image vector of the <inline-formula id="ieqn-16">
<!--<alternatives><inline-graphic xlink:href="ieqn-16.png"/><tex-math id="tex-ieqn-16"><![CDATA[$i$]]></tex-math>--><mml:math id="mml-ieqn-16"><mml:mi>i</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>-th training sample, and <inline-formula id="ieqn-17">
<!--<alternatives><inline-graphic xlink:href="ieqn-17.png"/><tex-math id="tex-ieqn-17"><![CDATA[$x = [{x_1},{x_2}, \ldots ,{x_m}]$]]></tex-math>--><mml:math id="mml-ieqn-17"><mml:mi>x</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>m</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:math>
<!--</alternatives>--></inline-formula>, <inline-formula id="ieqn-18">
<!--<alternatives><inline-graphic xlink:href="ieqn-18.png"/><tex-math id="tex-ieqn-18"><![CDATA[$u$]]></tex-math>--><mml:math id="mml-ieqn-18"><mml:mi>u</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> is the average image vector of all training sample images, namely:</p>
<p><disp-formula id="eqn-10">
<label>(7)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-10.png"/><tex-math id="tex-eqn-10"><![CDATA[$${{u}} = {1 \over {{M}}}\sum\limits_{{{i}} = 1}^{{M}} {{{{x}}_{{i}}}}$$]]></tex-math>--><mml:math id="mml-eqn-10" display="block"><mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mrow></mml:mfrac></mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mrow></mml:munderover><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>PCA requires the population dispersion matrix of the training sample set, which is called the covariance matrix:</p>
<p><disp-formula id="eqn-11">
<label>(8)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-11.png"/><tex-math id="tex-eqn-11"><![CDATA[$$\sum = E[(x - u){(x - u)^T}]$$]]></tex-math>--><mml:math id="mml-eqn-11" display="block"><mml:mo>&#x2211;</mml:mo><mml:mo>&#x003D;</mml:mo><mml:mi>E</mml:mi><mml:mo stretchy="false">[</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>u</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>u</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mo stretchy="false">]</mml:mo></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>It is a matrix with dimension <inline-formula id="ieqn-19">
<!--<alternatives><inline-graphic xlink:href="ieqn-19.png"/><tex-math id="tex-ieqn-19"><![CDATA[${N^2} \times {N^2}$]]></tex-math>--><mml:math id="mml-ieqn-19"><mml:mrow><mml:msup><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:msup><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and the principal component analysis method needs to calculate its eigenvalues and orthogonal normalized eigenvectors. Since <inline-formula id="ieqn-20">
<!--<alternatives><inline-graphic xlink:href="ieqn-20.png"/><tex-math id="tex-ieqn-20"><![CDATA[${N^2}$]]></tex-math>--><mml:math id="mml-ieqn-20"><mml:mrow><mml:msup><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is very large in practical applications, it is very difficult to calculate the eigendecomposition directly. SVD decomposition solves this problem well.</p>
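<p>The following sketch shows how the mean vector of Eq. (7) and the principal components can be obtained with an SVD of the centered data matrix instead of the huge covariance matrix (a hedged illustration: the variable names and the numpy route are our assumptions, not the paper&#x2019;s code):</p>

```python
import numpy as np

def pca_fit(X):
    """Fit PCA to training images.

    X: (num_pixels, num_samples) matrix whose columns are the stretched
    image vectors x_i.  The left singular vectors of the centered data
    are the eigenvectors of the covariance matrix, so the huge
    num_pixels x num_pixels covariance matrix is never formed.
    """
    u = X.mean(axis=1, keepdims=True)      # mean image vector, Eq. (7)
    U, s, _ = np.linalg.svd(X - u, full_matrices=False)
    return u, U

def pca_compress(x, u, U, k):
    """Project an image vector onto the first k principal components."""
    return U[:, :k].T @ (x - u)

def pca_reconstruct(code, u, U, k):
    """Rebuild an approximate image from its k-dimensional code."""
    return u + U[:, :k] @ code

rng = np.random.default_rng(0)
X = rng.standard_normal((4096, 20))        # 20 synthetic 64x64 "images"
u, U = pca_fit(X)
code = pca_compress(X[:, [0]], u, U, 5)    # keep 5 principal components
rec = pca_reconstruct(code, u, U, 5)
rmse = np.sqrt(np.mean((rec - X[:, [0]]) ** 2))
```

<p>Storing only the mean, the first <italic>k</italic> eigenvectors, and each image&#x2019;s <italic>k</italic>-dimensional code realizes the compression; larger <italic>k</italic> lowers the reconstruction error at the cost of a lower compression ratio.</p>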
<sec id="s3_1_1">
<label>3.1.1</label>
<title>SVD Decomposition</title>
<p>SVD decomposition is a common method for dealing with high-dimensional matrices: it effectively maps a high-dimensional matrix into a low-dimensional space, and through it the eigenvalues of the high-dimensional matrix can be obtained easily. The relevant theory is as follows.</p>
<p>If <inline-formula id="ieqn-21">
<!--<alternatives><inline-graphic xlink:href="ieqn-21.png"/><tex-math id="tex-ieqn-21"><![CDATA[$A$]]></tex-math>--><mml:math id="mml-ieqn-21"><mml:mi>A</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> is an <inline-formula id="ieqn-22">
<!--<alternatives><inline-graphic xlink:href="ieqn-22.png"/><tex-math id="tex-ieqn-22"><![CDATA[$n \times r$]]></tex-math>--><mml:math id="mml-ieqn-22"><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>r</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> matrix of rank <italic>r</italic>, then there exist two column-orthogonal matrices and a diagonal matrix:</p>
<p><disp-formula id="eqn-12">
<label>(9)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-12.png"/><tex-math id="tex-eqn-12"><![CDATA[$$U = \left( {{u_1},{u_2}, \cdots ,{u_r}} \right) \in {\Re ^{n \times r}}$$]]></tex-math>--><mml:math id="mml-eqn-12" display="block"><mml:mi>U</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mi mathvariant="normal">&#x211C;</mml:mi><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-13">
<label>(10)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-13.png"/><tex-math id="tex-eqn-13"><![CDATA[$$V = \left( {{v_1},{v_2}, \cdots ,{v_r}} \right) \in {\Re ^{r \times r}}$$]]></tex-math>--><mml:math id="mml-eqn-13" display="block"><mml:mi>V</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mi mathvariant="normal">&#x211C;</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-14">
<label>(11)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-14.png"/><tex-math id="tex-eqn-14"><![CDATA[$$\Lambda = {\rm diag}\left( {{\lambda _1},{\lambda _2}, \cdots ,{\lambda _r}} \right) \in {\Re ^{r \times r}}$$]]></tex-math>--><mml:math id="mml-eqn-14" display="block"><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mi>diag</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mi>r</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mi mathvariant="normal">&#x211C;</mml:mi><mml:mrow><mml:mi>r</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-16">
<label>(12)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-16.png"/><tex-math id="tex-eqn-16"><![CDATA[$$A = U{\Lambda ^{{1 \over 2}}}{V^T}$$]]></tex-math>--><mml:math id="mml-eqn-16" display="block"><mml:mi>A</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mi>U</mml:mi><mml:mrow><mml:msup><mml:mi mathvariant="normal">&#x039B;</mml:mi><mml:mrow><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mi>V</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>where, <inline-formula id="ieqn-23">
<!--<alternatives><inline-graphic xlink:href="ieqn-23.png"/><tex-math id="tex-ieqn-23"><![CDATA[${\lambda _i}(i = 1,2, \cdots ,r)$]]></tex-math>--><mml:math id="mml-ieqn-23"><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mi>r</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
<!--</alternatives>--></inline-formula> are the non-zero eigenvalues of the matrices <inline-formula id="ieqn-24">
<!--<alternatives><inline-graphic xlink:href="ieqn-24.png"/><tex-math id="tex-ieqn-24"><![CDATA[$A{A^T}$]]></tex-math>--><mml:math id="mml-ieqn-24"><mml:mi>A</mml:mi><mml:mrow><mml:msup><mml:mi>A</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-25">
<!--<alternatives><inline-graphic xlink:href="ieqn-25.png"/><tex-math id="tex-ieqn-25"><![CDATA[${A^T}A$]]></tex-math>--><mml:math id="mml-ieqn-25"><mml:mrow><mml:msup><mml:mi>A</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mi>A</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>, <inline-formula id="ieqn-26">
<!--<alternatives><inline-graphic xlink:href="ieqn-26.png"/><tex-math id="tex-ieqn-26"><![CDATA[${u_i}$]]></tex-math>--><mml:math id="mml-ieqn-26"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-27">
<!--<alternatives><inline-graphic xlink:href="ieqn-27.png"/><tex-math id="tex-ieqn-27"><![CDATA[${v_i}$]]></tex-math>--><mml:math id="mml-ieqn-27"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> are eigenvectors of <inline-formula id="ieqn-28">
<!--<alternatives><inline-graphic xlink:href="ieqn-28.png"/><tex-math id="tex-ieqn-28"><![CDATA[$A{A^T}$]]></tex-math>--><mml:math id="mml-ieqn-28"><mml:mi>A</mml:mi><mml:mrow><mml:msup><mml:mi>A</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-29">
<!--<alternatives><inline-graphic xlink:href="ieqn-29.png"/><tex-math id="tex-ieqn-29"><![CDATA[${A^T}A$]]></tex-math>--><mml:math id="mml-ieqn-29"><mml:mrow><mml:msup><mml:mi>A</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mi>A</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>, respectively. The above decomposition is called the Singular Value Decomposition (SVD) of matrix <italic>A</italic>, and the <inline-formula id="ieqn-30">
<!--<alternatives><inline-graphic xlink:href="ieqn-30.png"/><tex-math id="tex-ieqn-30"><![CDATA[$\sqrt {{\lambda _{\rm{i}}}}$]]></tex-math>--><mml:math id="mml-ieqn-30"><mml:msqrt><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mrow><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:msqrt></mml:math>
<!--</alternatives>--></inline-formula> are its singular values. Applying this to PCA, the covariance matrix of the training set can be expressed as:</p>
<p><disp-formula id="eqn-18">
<label>(13)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-18.png"/><tex-math id="tex-eqn-18"><![CDATA[$$\sum = {1 \over {{M}}}\sum\limits_{{\rm{i}} = 1}^{{M}} {({{{x}}_{{i}}} - {{u}})} {({{{x}}_{{i}}} - {{u}})^{{T}}} = {1 \over {{M}}}{{X}}{{{X}}^{{T}}}$$]]></tex-math>--><mml:math id="mml-eqn-18" display="block"><mml:mo>&#x2211;</mml:mo><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mrow></mml:mfrac></mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mrow></mml:munderover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mrow><mml:mi>x</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mrow><mml:mi>u</mml:mi></mml:mrow></mml:mrow><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:mrow></mml:mfrac></mml:mrow><mml:mrow><mml:mrow><mml:mi>X</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mi>X</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>Therefore, we construct the much smaller matrix:</p>
<p><disp-formula id="eqn-20">
<label>(14)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-20.png"/><tex-math id="tex-eqn-20"><![CDATA[$$R = {X^T}X \in {\Re ^{M \times M}}$$]]></tex-math>--><mml:math id="mml-eqn-20" display="block"><mml:mi>R</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:msup><mml:mi>X</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mi>X</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mi mathvariant="normal">&#x211C;</mml:mi><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>M</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>It is easy to compute its eigenvalues <inline-formula id="ieqn-31">
<!--<alternatives><inline-graphic xlink:href="ieqn-31.png"/><tex-math id="tex-ieqn-31"><![CDATA[${\lambda _i}$]]></tex-math>--><mml:math id="mml-ieqn-31"><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and the corresponding orthonormal eigenvectors <inline-formula id="ieqn-32">
<!--<alternatives><inline-graphic xlink:href="ieqn-32.png"/><tex-math id="tex-ieqn-32"><![CDATA[${v_i}(i = 1,2, \cdots ,M)$]]></tex-math>--><mml:math id="mml-ieqn-32"><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mi>M</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
<!--</alternatives>--></inline-formula>. From the SVD relations above, the orthonormal eigenvectors of the original covariance matrix are:</p>
<p><disp-formula id="eqn-22">
<label>(15)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-22.png"/><tex-math id="tex-eqn-22"><![CDATA[$${u_i} = {1 \over {\sqrt {{\lambda _i}} }}X{v_i},i = 1,2, \cdots ,M$$]]></tex-math>--><mml:math id="mml-eqn-22" display="block"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:msqrt><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac></mml:mrow><mml:mi>X</mml:mi><mml:mrow><mml:msub><mml:mi>v</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mi>M</mml:mi></mml:math>
<!--</alternatives>--></disp-formula></p>
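The construction in Eqs. (14) and (15) can be sketched in NumPy as follows. This is an illustrative sketch under assumed sizes (N² = 1024 pixels per image, M = 8 images), not the paper's implementation: it eigendecomposes the small M × M matrix R = XᵀX instead of the huge N² × N² covariance, then maps the eigenvectors back.

```python
import numpy as np

# Sketch of Eqs. (14)-(15): rather than eigendecomposing the huge
# N^2 x N^2 covariance (1/M) X X^T, eigendecompose the small M x M
# matrix R = X^T X and map its eigenvectors back. Sizes are assumed.
rng = np.random.default_rng(1)
N2, M = 1024, 8                     # N^2 pixels per image, M training images
faces = rng.standard_normal((N2, M))

u = faces.mean(axis=1, keepdims=True)
X = faces - u                       # centered data, one image per column

R = X.T @ X                         # Eq. (14): M x M instead of N^2 x N^2
lam, V = np.linalg.eigh(R)
order = np.argsort(lam)[::-1]       # sort eigenvalues in descending order
lam, V = lam[order], V[:, order]

keep = lam > 1e-10 * lam[0]         # centering makes one eigenvalue ~0
lam, V = lam[keep], V[:, keep]

U = X @ V / np.sqrt(lam)            # Eq. (15): u_i = X v_i / sqrt(lambda_i)

# The columns of U are orthonormal eigenvectors of X X^T.
assert np.allclose(U.T @ U, np.eye(U.shape[1]), atol=1e-8)
assert np.allclose((X @ X.T) @ U[:, 0], lam[0] * U[:, 0])
```

Note that centering removes one degree of freedom, so one eigenvalue of R is numerically zero and is dropped before dividing by its square root.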
<p>Arranging the eigenvalues in descending order: <inline-formula id="ieqn-33">
<!--<alternatives><inline-graphic xlink:href="ieqn-33.png"/><tex-math id="tex-ieqn-33"><![CDATA[${\lambda _1} \ge {\lambda _2} \ge \cdots \ge {\lambda _M},$]]></tex-math>--><mml:math id="mml-ieqn-33"><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>&#x2265;</mml:mo><mml:mrow><mml:msub><mml:mi>&#x03BB;</mml:mi><mml:mi>M</mml:mi></mml:msub></mml:mrow><mml:mo>,</mml:mo></mml:math>
<!--</alternatives>--></inline-formula> the corresponding eigenvectors are <inline-formula id="ieqn-34">
<!--<alternatives><inline-graphic xlink:href="ieqn-34.png"/><tex-math id="tex-ieqn-34"><![CDATA[${u_i}$]]></tex-math>--><mml:math id="mml-ieqn-34"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>. In this way, each face image can be projected into the subspace spanned by {<inline-formula id="ieqn-35">
<!--<alternatives><inline-graphic xlink:href="ieqn-35.png"/><tex-math id="tex-ieqn-35"><![CDATA[${u_1},{u_2}, \cdots ,{u_M}$]]></tex-math>--><mml:math id="mml-ieqn-35"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>M</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>}. Therefore, each face image corresponds to a point in this subspace and, conversely, any point in the subspace corresponds to an image. Given the subspace spanned by {<inline-formula id="ieqn-36">
<!--<alternatives><inline-graphic xlink:href="ieqn-36.png"/><tex-math id="tex-ieqn-36"><![CDATA[${u_1},{u_2}, \cdots ,{u_M}$]]></tex-math>--><mml:math id="mml-ieqn-36"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>M</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>}, any face image can be projected onto it to obtain a set of coordinate coefficients, which give the position of the image in the subspace. In other words, any face image can be represented as a linear combination of {<inline-formula id="ieqn-37">
<!--<alternatives><inline-graphic xlink:href="ieqn-37.png"/><tex-math id="tex-ieqn-37"><![CDATA[${u_1},{u_2}, \cdots ,{u_M}$]]></tex-math>--><mml:math id="mml-ieqn-37"><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>M</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>}, whose weighting coefficients are the expansion coefficients of the K-L transform and can also be called the algebraic features of the image.</p>
<p>For any face image <inline-formula id="ieqn-38">
<!--<alternatives><inline-graphic xlink:href="ieqn-38.png"/><tex-math id="tex-ieqn-38"><![CDATA[$f$]]></tex-math>--><mml:math id="mml-ieqn-38"><mml:mi>f</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> to be compressed, its coefficient vector can be obtained by projecting it into the feature subspace:</p>
<p><disp-formula id="eqn-24">
<label>(16)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-24.png"/><tex-math id="tex-eqn-24"><![CDATA[$$y = {U^T}\left( {f - u} \right),U = \left( {{u_1},{u_2}, \cdots ,{u_M}} \right)$$]]></tex-math>--><mml:math id="mml-eqn-24" display="block"><mml:mi>y</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:msup><mml:mi>U</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>f</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>u</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>U</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>u</mml:mi><mml:mi>M</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>The resulting coefficient vector can be thought of as a compressed representation of the image. Since its dimension <inline-formula id="ieqn-39">
<!--<alternatives><inline-graphic xlink:href="ieqn-39.png"/><tex-math id="tex-ieqn-39"><![CDATA[$m$]]></tex-math>--><mml:math id="mml-ieqn-39"><mml:mi>m</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> is usually much smaller than the dimension of <italic>f</italic>, storage space is greatly reduced. The image can be approximately reconstructed from the coefficients and the mean <inline-formula id="ieqn-40">
<!--<alternatives><inline-graphic xlink:href="ieqn-40.png"/><tex-math id="tex-ieqn-40"><![CDATA[$u$]]></tex-math>--><mml:math id="mml-ieqn-40"><mml:mi>u</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>:</p>
<p><disp-formula id="eqn-26">
<label>(17)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-26.png"/><tex-math id="tex-eqn-26"><![CDATA[$$\mathop { f^{\hskip -.5pc\vskip -.3pc\frown}= Uy + u}$$]]></tex-math>--><mml:math id="mml-eqn-26" display="block"><mml:mover><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x2322;</mml:mo></mml:mover><mml:mo>&#x003D;</mml:mo><mml:mi>U</mml:mi><mml:mi>y</mml:mi><mml:mo>&#x002B;</mml:mo><mml:mi>u</mml:mi></mml:math>
<!--</alternatives>--></disp-formula></p>
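The compression and reconstruction of Eqs. (16) and (17) amount to a projection and back-projection. Below is a minimal sketch with illustrative sizes and names (not the paper's code); keeping only the first k basis vectors makes the compression lossy:

```python
import numpy as np

# Sketch of Eqs. (16)-(17): compress a face f to y = U^T (f - u) and
# reconstruct f_hat = U y + u. Keeping only k basis vectors gives lossy
# compression. Sizes and names are illustrative, not from the paper.
rng = np.random.default_rng(2)
N2, M, k = 256, 10, 4
faces = rng.standard_normal((N2, M))
u = faces.mean(axis=1)

X = faces - u[:, None]
U_full, _, _ = np.linalg.svd(X, full_matrices=False)
U = U_full[:, :k]                   # first k principal directions

f = faces[:, 0]                     # image to compress
y = U.T @ (f - u)                   # Eq. (16): k coefficients instead of N^2
f_hat = U @ y + u                   # Eq. (17): approximate reconstruction

assert y.shape == (k,)
# Orthogonal projection never increases the centered norm.
assert np.linalg.norm(f - f_hat) <= np.linalg.norm(f - u) + 1e-9
```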
</sec>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Improved Algorithm</title>
<p>The basic principal component analysis method has several disadvantages. When the illumination or position of a face image changes, basic PCA cannot capture these changes effectively. Studies have shown that basic PCA can hardly capture even simple invariances between images unless this information is included in the training images. In addition, basic PCA stretches the pixels of an image (usually by concatenating its columns head to tail) into a high-dimensional vector. When the image is large, the dimension of the stretched vector becomes very large, and the covariance matrix of the training images is larger still. Although SVD decomposition can be used to approximate the eigen-images and thereby avoid forming the large covariance matrix, it is inaccurate in many cases. To address these deficiencies of PCA, an improved method named 2DPCA is proposed in this paper.</p>
<p>Let&#x2019;s <inline-formula id="ieqn-41">
<!--<alternatives><inline-graphic xlink:href="ieqn-41.png"/><tex-math id="tex-ieqn-41"><![CDATA[$X$]]></tex-math>--><mml:math id="mml-ieqn-41"><mml:mi>X</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> denote an <inline-formula id="ieqn-42">
<!--<alternatives><inline-graphic xlink:href="ieqn-42.png"/><tex-math id="tex-ieqn-42"><![CDATA[$n$]]></tex-math>--><mml:math id="mml-ieqn-42"><mml:mi>n</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>-dimensional normalized column vector. The 2DPCA algorithm transforms the image <inline-formula id="ieqn-43">
<!--<alternatives><inline-graphic xlink:href="ieqn-43.png"/><tex-math id="tex-ieqn-43"><![CDATA[$A$]]></tex-math>--><mml:math id="mml-ieqn-43"><mml:mi>A</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> (<inline-formula id="ieqn-44">
<!--<alternatives><inline-graphic xlink:href="ieqn-44.png"/><tex-math id="tex-ieqn-44"><![CDATA[$A$]]></tex-math>--><mml:math id="mml-ieqn-44"><mml:mi>A</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> is an <inline-formula id="ieqn-45">
<!--<alternatives><inline-graphic xlink:href="ieqn-45.png"/><tex-math id="tex-ieqn-45"><![CDATA[$m \times n$]]></tex-math>--><mml:math id="mml-ieqn-45"><mml:mi>m</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>n</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> matrix) according to the formula:</p>
<p><disp-formula id="eqn-28">
<label>(18)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-28.png"/><tex-math id="tex-eqn-28"><![CDATA[$${Y} = {AX}$$]]></tex-math>--><mml:math id="mml-eqn-28" display="block"><mml:mrow><mml:mi>Y</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mi>X</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>That is, we project it onto <inline-formula id="ieqn-46">
<!--<alternatives><inline-graphic xlink:href="ieqn-46.png"/><tex-math id="tex-ieqn-46"><![CDATA[$X$]]></tex-math>--><mml:math id="mml-ieqn-46"><mml:mi>X</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>. Thus, we get <inline-formula id="ieqn-47">
<!--<alternatives><inline-graphic xlink:href="ieqn-47.png"/><tex-math id="tex-ieqn-47"><![CDATA[$m$]]></tex-math>--><mml:math id="mml-ieqn-47"><mml:mi>m</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> dimensional projection vector <inline-formula id="ieqn-48">
<!--<alternatives><inline-graphic xlink:href="ieqn-48.png"/><tex-math id="tex-ieqn-48"><![CDATA[$Y$]]></tex-math>--><mml:math id="mml-ieqn-48"><mml:mi>Y</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>, which is called the projection eigenvector of image <inline-formula id="ieqn-49">
<!--<alternatives><inline-graphic xlink:href="ieqn-49.png"/><tex-math id="tex-ieqn-49"><![CDATA[$A$]]></tex-math>--><mml:math id="mml-ieqn-49"><mml:mi>A</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>. Finding a good projection direction <inline-formula id="ieqn-50">
<!--<alternatives><inline-graphic xlink:href="ieqn-50.png"/><tex-math id="tex-ieqn-50"><![CDATA[$X$]]></tex-math>--><mml:math id="mml-ieqn-50"><mml:mi>X</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> in the 2DPCA method is a key step, and the quality of the projection vector <inline-formula id="ieqn-51">
<!--<alternatives><inline-graphic xlink:href="ieqn-51.png"/><tex-math id="tex-ieqn-51"><![CDATA[$X$]]></tex-math>--><mml:math id="mml-ieqn-51"><mml:mi>X</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> can be measured by the degree of dispersion of the training samples after they are projected onto it. The higher the dispersion of the projected samples, the better the projection direction <inline-formula id="ieqn-52">
<!--<alternatives><inline-graphic xlink:href="ieqn-52.png"/><tex-math id="tex-ieqn-52"><![CDATA[$X$]]></tex-math>--><mml:math id="mml-ieqn-52"><mml:mi>X</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>.</p>
<p>The trace of the covariance matrix of the projected vectors can be used to describe the dispersion of the projected samples. That is:</p>
<p><disp-formula id="eqn-30">
<label>(19)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-30.png"/><tex-math id="tex-eqn-30"><![CDATA[$$J\left( X \right) = tr\left( {{S_x}} \right)$$]]></tex-math>--><mml:math id="mml-eqn-30" display="block"><mml:mi>J</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>X</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mi>t</mml:mi><mml:mi>r</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>where, <inline-formula id="ieqn-53">
<!--<alternatives><inline-graphic xlink:href="ieqn-53.png"/><tex-math id="tex-ieqn-53"><![CDATA[${S_x}$]]></tex-math>--><mml:math id="mml-ieqn-53"><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> denotes the covariance matrix of the projected training samples; <inline-formula id="ieqn-54">
<!--<alternatives><inline-graphic xlink:href="ieqn-54.png"/><tex-math id="tex-ieqn-54"><![CDATA[$tr\left( {{S_x}} \right)$]]></tex-math>--><mml:math id="mml-ieqn-54"><mml:mi>t</mml:mi><mml:mi>r</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is the trace of <inline-formula id="ieqn-55">
<!--<alternatives><inline-graphic xlink:href="ieqn-55.png"/><tex-math id="tex-ieqn-55"><![CDATA[${S_x}$]]></tex-math>--><mml:math id="mml-ieqn-55"><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>.</p>
<p>The physical meaning of maximizing this criterion is to find a projection direction that maximizes the dispersion among the vectors obtained by projecting all training samples onto it. The covariance matrix <inline-formula id="ieqn-56">
<!--<alternatives><inline-graphic xlink:href="ieqn-56.png"/><tex-math id="tex-ieqn-56"><![CDATA[${S_x}$]]></tex-math>--><mml:math id="mml-ieqn-56"><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> can be expressed as:</p>
<p><disp-formula id="eqn-32">
<label>(20)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-32.png"/><tex-math id="tex-eqn-32"><![CDATA[$${S_x} = E\left( {Y - E\left( Y \right)} \right){\left( {Y - E\left( Y \right)} \right)^T}$$]]></tex-math>--><mml:math id="mml-eqn-32" display="block"><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>Y</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>Y</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>Y</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>Y</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-34">
<label>(21)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-34.png"/><tex-math id="tex-eqn-34"><![CDATA[$$\eqalign{{S_x} = E\left[ {AX - E\left( {AX} \right)} \right]{\left[ {AX - E\left( {AX} \right)} \right]^T} \cr = E\left[ {\left( {A - E\left( A \right)} \right)X} \right]{\left[ {\left( {A - E\left( A \right)} \right)X} \right]^T} \cr tr\left( {{S_x}} \right) = {X^T}E\left[ {{{\left( {A - E\left( A \right)} \right)}^T}\left( {A - E\left( A \right)} \right)} \right]X}$$]]></tex-math>--><mml:math id="mml-eqn-34" display="block"><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mi>X</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mi>X</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mi>X</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mi>X</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x003D;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>t</mml:mi><mml:mi>r</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:msub><mml:mi>S</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:msup><mml:mi>X</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mi>X</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-36">
<label>(22)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-36.png"/><tex-math id="tex-eqn-36"><![CDATA[$${G_t} = E\left[ {{{\left( {A - E\left( A \right)} \right)}^T}\left( {A - E\left( A \right)} \right)} \right]$$]]></tex-math>--><mml:math id="mml-eqn-36" display="block"><mml:mrow><mml:msub><mml:mi>G</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>A</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>E</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>A</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
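The image covariance matrix of Eq. (22) can be estimated directly from training images, as the following sketch shows (sizes and names are assumed for illustration, not taken from the paper); 2DPCA then uses the leading eigenvectors of G_t as projection directions for Eq. (18):

```python
import numpy as np

# Sketch of Eq. (22): estimate the n x n image covariance matrix
# G_t = E[(A - E(A))^T (A - E(A))] from a set of m x n training images,
# then take its leading eigenvectors as 2DPCA projection directions.
# Sizes and names are assumed for illustration.
rng = np.random.default_rng(3)
m, n, M = 32, 24, 50                # image size m x n, M training images
images = rng.standard_normal((M, m, n))

A_bar = images.mean(axis=0)         # mean image E(A)
Gt = sum((A - A_bar).T @ (A - A_bar) for A in images) / M

lam, vecs = np.linalg.eigh(Gt)      # eigenvalues in ascending order
X = vecs[:, ::-1][:, :5]            # top 5 projection directions

Y = images[0] @ X                   # Eq. (18): m x 5 projected features
assert Gt.shape == (n, n)
assert Y.shape == (m, 5)
assert lam.min() > -1e-8            # G_t is nonnegative definite
```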
<p>The matrix <inline-formula id="ieqn-57">
<!--<alternatives><inline-graphic xlink:href="ieqn-57.png"/><tex-math id="tex-ieqn-57"><![CDATA[${G_t}$]]></tex-math>--><mml:math id="mml-ieqn-57"><mml:mrow><mml:msub><mml:mi>G</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is called the image covariance matrix. By definition, <inline-formula id="ieqn-58">
<!--<alternatives><inline-graphic xlink:href="ieqn-58.png"/><tex-math id="tex-ieqn-58"><![CDATA[${G_t}$]]></tex-math>--><mml:math id="mml-ieqn-58"><mml:mrow><mml:msub><mml:mi>G</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is a nonnegative definite matrix of size <inline-formula id="ieqn-59">
<!--<alternatives><inline-graphic xlink:href="ieqn-59.png"/><tex-math id="tex-ieqn-59"><![CDATA[$n \times n$]]></tex-math>--><mml:math id="mml-ieqn-59"><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>n</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>. The training sample image can be used to directly calculate <inline-formula id="ieqn-60">
<!--<alternatives><inline-graphic xlink:href="ieqn-60.png"/><tex-math id="tex-ieqn-60"><![CDATA[${G_t}$]]></tex-math>--><mml:math id="mml-ieqn-60"><mml:mrow><mml:msub><mml:mi>G</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>. Suppose there are <inline-formula id="ieqn-61">
<!--<alternatives><inline-graphic xlink:href="ieqn-61.png"/><tex-math id="tex-ieqn-61"><![CDATA[$M$]]></tex-math>--><mml:math id="mml-ieqn-61"><mml:mi>M</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> training image samples, <inline-formula id="ieqn-62">
<!--<alternatives><inline-graphic xlink:href="ieqn-62.png"/><tex-math id="tex-ieqn-62"><![CDATA[$m \times n$]]></tex-math>--><mml:math id="mml-ieqn-62"><mml:mi>m</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>n</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> matrix <inline-formula id="ieqn-63">
<!--<alternatives><inline-graphic xlink:href="ieqn-63.png"/><tex-math id="tex-ieqn-63"><![CDATA[${A_j}\left( {j = 1,2,3, \ldots ,M} \right)$]]></tex-math>--><mml:math id="mml-ieqn-63"><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>M</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represents the <inline-formula id="ieqn-64">
<!--<alternatives><inline-graphic xlink:href="ieqn-64.png"/><tex-math id="tex-ieqn-64"><![CDATA[$j$]]></tex-math>--><mml:math id="mml-ieqn-64"><mml:mi>j</mml:mi></mml:math>
<!--</alternatives>--></inline-formula>-th training image, and <inline-formula id="ieqn-65">
<!--<alternatives><inline-graphic xlink:href="ieqn-65.png"/><tex-math id="tex-ieqn-65"><![CDATA[$\bar A$]]></tex-math>--><mml:math id="mml-ieqn-65"><mml:mrow><mml:mover><mml:mi>A</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represents the average image of all training samples.</p>
<p><disp-formula id="eqn-38">
<label>(23)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-38.png"/><tex-math id="tex-eqn-38"><![CDATA[$$\bar A = {1 \over M}\sum\limits_{j = 1}^M {{A_j}}$$]]></tex-math>--><mml:math id="mml-eqn-38" display="block"><mml:mrow><mml:mover><mml:mi>A</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>M</mml:mi></mml:mfrac></mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:munderover><mml:mrow><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><disp-formula id="eqn-40">
<label>(24)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-40.png"/><tex-math id="tex-eqn-40"><![CDATA[$${G_t} = {1 \over M}\sum\limits_{j = 1}^M {{{\left( {{A_j} - \bar A} \right)}^T}} \left( {{A_j} - \bar A} \right)$$]]></tex-math>--><mml:math id="mml-eqn-40" display="block"><mml:mrow><mml:msub><mml:mi>G</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mi>M</mml:mi></mml:mfrac></mml:mrow><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>j</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>M</mml:mi></mml:munderover><mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mover><mml:mi>A</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mi>T</mml:mi></mml:msup></mml:mrow></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>A</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mover><mml:mi>A</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
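As an illustration, Eqs. (23) and (24) can be computed directly with NumPy. This is a sketch under the assumption that the M training images are stacked in an array of shape (M, m, n); the sizes and random values are stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
M, m, n = 8, 112, 92                # illustrative sizes (112 x 92 as in ORL)
A = rng.random((M, m, n))           # stand-in for the M training images A_j

# Eq. (23): average image of all training samples.
A_bar = A.mean(axis=0)              # shape (m, n)

# Eq. (24): image covariance matrix G_t, an n x n matrix.
G_t = sum((Aj - A_bar).T @ (Aj - A_bar) for Aj in A) / M

# G_t is symmetric and nonnegative definite by construction.
```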
<p><disp-formula id="eqn-42">
<label>(25)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-42.png"/><tex-math id="tex-eqn-42"><![CDATA[$$J\left( X \right) = {X^T}{G_t}X$$]]></tex-math>--><mml:math id="mml-eqn-42" display="block"><mml:mi>J</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>X</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:msup><mml:mi>X</mml:mi><mml:mi>T</mml:mi></mml:msup></mml:mrow><mml:mrow><mml:msub><mml:mi>G</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow><mml:mi>X</mml:mi></mml:math>
<!--</alternatives>--></disp-formula></p>
<p><xref ref-type="disp-formula" rid="eqn-42">Eq. (25)</xref> is the generalized criterion. The normalized vector <inline-formula id="ieqn-66">
<!--<alternatives><inline-graphic xlink:href="ieqn-66.png"/><tex-math id="tex-ieqn-66"><![CDATA[${X_{opt}}$]]></tex-math>--><mml:math id="mml-ieqn-66"><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>p</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> that maximizes <inline-formula id="ieqn-67">
<!--<alternatives><inline-graphic xlink:href="ieqn-67.png"/><tex-math id="tex-ieqn-67"><![CDATA[$J(X)$]]></tex-math>--><mml:math id="mml-ieqn-67"><mml:mi>J</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>X</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
<!--</alternatives>--></inline-formula> is called the optimal projection axis. According to the literature, <inline-formula id="ieqn-68">
<!--<alternatives><inline-graphic xlink:href="ieqn-68.png"/><tex-math id="tex-ieqn-68"><![CDATA[${X_{opt}}$]]></tex-math>--><mml:math id="mml-ieqn-68"><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mrow><mml:mi>o</mml:mi><mml:mi>p</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is the eigenvector corresponding to the maximum eigenvalue of <inline-formula id="ieqn-69">
<!--<alternatives><inline-graphic xlink:href="ieqn-69.png"/><tex-math id="tex-ieqn-69"><![CDATA[${G_t}$]]></tex-math>--><mml:math id="mml-ieqn-69"><mml:mrow><mml:msub><mml:mi>G</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>. Generally, a single optimal projection direction is not enough; it is necessary to find a set of projection directions <inline-formula id="ieqn-70">
<!--<alternatives><inline-graphic xlink:href="ieqn-70.png"/><tex-math id="tex-ieqn-70"><![CDATA[$\{ {X_1},{X_2}, \cdots ,{X_d}\}$]]></tex-math>--><mml:math id="mml-ieqn-70"><mml:mo stretchy="false" fence="false">{</mml:mo><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mi>d</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false" fence="false">}</mml:mo></mml:math>
<!--</alternatives>--></inline-formula> that satisfies the principle of maximizing <inline-formula id="ieqn-71">
<!--<alternatives><inline-graphic xlink:href="ieqn-71.png"/><tex-math id="tex-ieqn-71"><![CDATA[$J(X)$]]></tex-math>--><mml:math id="mml-ieqn-71"><mml:mi>J</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>X</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math>
<!--</alternatives>--></inline-formula>:</p>
<p><disp-formula id="eqn-44">
<label>(26)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-44.png"/><tex-math id="tex-eqn-44"><![CDATA[$$\left\{ {\matrix{ {\left\{ {{X_1},{X_2}, \cdots ,{X_d}} \right\} = {\rm{argmax}}J\left( X \right)} \hfill \cr {X_i^T{X_j} = 0,i \ne j,i,j = 1,2, \cdots ,d} \hfill \cr } } \right.$$]]></tex-math>--><mml:math id="mml-eqn-44" display="block"><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd columnalign="left"><mml:mrow><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mi>d</mml:mi></mml:msub></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mrow><mml:mi mathvariant="normal">a</mml:mi><mml:mi mathvariant="normal">r</mml:mi><mml:mi mathvariant="normal">g</mml:mi><mml:mi mathvariant="normal">m</mml:mi><mml:mi mathvariant="normal">a</mml:mi><mml:mi mathvariant="normal">x</mml:mi></mml:mrow></mml:mrow><mml:mi>J</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>X</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd columnalign="left"><mml:mrow><mml:msubsup><mml:mi>X</mml:mi><mml:mi>i</mml:mi><mml:mi>T</mml:mi></mml:msubsup><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x2260;</mml:mo><mml:mi>j</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mi>d</mml:mi></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo fence="true" stretchy="true" symmetric="true"></mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>In fact, the projection direction set <inline-formula id="ieqn-72">
<!--<alternatives><inline-graphic xlink:href="ieqn-72.png"/><tex-math id="tex-ieqn-72"><![CDATA[$\{ {X_1},{X_2}, \cdots ,{X_d}\}$]]></tex-math>--><mml:math id="mml-ieqn-72"><mml:mo stretchy="false" fence="false">{</mml:mo><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mi>X</mml:mi><mml:mi>d</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false" fence="false">}</mml:mo></mml:math>
<!--</alternatives>--></inline-formula> satisfying the above principle consists of the orthonormal eigenvectors corresponding to the <italic>d</italic> largest eigenvalues of <inline-formula id="ieqn-73">
<!--<alternatives><inline-graphic xlink:href="ieqn-73.png"/><tex-math id="tex-ieqn-73"><![CDATA[${G_t}$]]></tex-math>--><mml:math id="mml-ieqn-73"><mml:mrow><mml:msub><mml:mi>G</mml:mi><mml:mi>t</mml:mi></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>.</p>
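To make the procedure concrete, here is a minimal NumPy sketch (with illustrative sizes, not the paper's exact configuration) of extracting the d optimal projection axes from G_t and using them to compress and reconstruct one image:

```python
import numpy as np

rng = np.random.default_rng(0)
M, m, n, d = 8, 112, 92, 30
A = rng.random((M, m, n))           # stand-in training images
A_bar = A.mean(axis=0)
G_t = sum((Aj - A_bar).T @ (Aj - A_bar) for Aj in A) / M   # Eq. (24)

# eigh handles the symmetric matrix G_t and returns eigenvalues in
# ascending order, so the last d columns are the optimal axes X.
eigvals, eigvecs = np.linalg.eigh(G_t)
X = eigvecs[:, -d:]                 # n x d, orthonormal columns

Y = A[0] @ X                        # m x d feature (compressed) matrix
A_hat = Y @ X.T                     # m x n reconstruction of A[0]
```

Choosing d closer to n lowers the reconstruction error at the cost of a lower compression ratio.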
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Experimental Results</title>
<p>This section summarizes the experimental results of the above algorithms, including the image compression ratio and the root-mean-square error of image reconstruction. The quality of the algorithms is assessed mainly by the size of the reconstruction error.</p>
<p>From <xref ref-type="table" rid="table-1">Tab. 1</xref>, we can see that the PCA algorithm achieves an image compression ratio of about 3, with a very stable effect across images from the ORL database. As shown in <xref ref-type="table" rid="table-2">Tab. 2</xref>, adding noise makes almost no difference to the compression ratio.</p>
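A compression ratio of about 3 is consistent with a simple storage count: an m &#x00D7; n image is replaced by its m &#x00D7; d feature matrix (the n &#x00D7; d projection matrix is shared across images), giving a per-image ratio of roughly n/d. The following sketch uses assumed values: 112 &#x00D7; 92 is the standard ORL image size, and d = 30 is purely illustrative:

```python
m, n = 112, 92        # assumed ORL face image size
d = 30                # assumed number of retained projection axes

original = m * n      # values stored for one raw image
compressed = m * d    # values stored for its feature matrix Y
cr = original / compressed
print(round(cr, 2))   # -> 3.07, close to the ratios reported in Tab. 1
```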
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Image compression ratio</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>IMAGE</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>CR</td>
<td>3.05</td>
<td>3.01</td>
<td>3.00</td>
<td>2.97</td>
<td>2.98</td>
<td>3.03</td>
<td>3.05</td>
<td>3.00</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Image compression ratio after adding noise</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>IMAGE</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<td>GN</td>
<td>3.00</td>
<td>3.05</td>
<td>3.02</td>
<td>2.99</td>
<td>3.00</td>
<td>3.02</td>
<td>3.01</td>
<td>3.00</td>
</tr>
<tr>
<td>SAPN</td>
<td>3.00</td>
<td>3.01</td>
<td>3.00</td>
<td>2.99</td>
<td>2.99</td>
<td>3.03</td>
<td>3.02</td>
<td>3.00</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The above tables show that, no matter what kind of noise is added, PCA image compression is not significantly affected: the compression ratio stays essentially unchanged. Adding noise, however, has a large impact on image reconstruction. <xref ref-type="table" rid="table-3">Tab. 3</xref> shows the root-mean-square error of image reconstruction after noise is added.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>The root-mean-square error of the image after adding noise</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>IMAGE</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
</tr>
</thead>
<tbody>
<tr>
<td>SAPN</td>
<td>0.05</td>
<td>30.30</td>
<td>29.18</td>
<td>29.36</td>
<td>29.67</td>
<td>30.31</td>
<td>31.19</td>
<td>29.88</td>
<td>29.27</td>
</tr>
<tr>
<td>SAPN</td>
<td>0.10</td>
<td>43.51</td>
<td>42.82</td>
<td>42.59</td>
<td>43.66</td>
<td>42.85</td>
<td>42.18</td>
<td>43.64</td>
<td>42.22</td>
</tr>
<tr>
<td>SAPN</td>
<td>0.15</td>
<td>54.72</td>
<td>54.33</td>
<td>53.37</td>
<td>54.58</td>
<td>54.50</td>
<td>54.68</td>
<td>55.08</td>
<td>54.68</td>
</tr>
<tr>
<td>GN</td>
<td>0.01</td>
<td>25.43</td>
<td>25.61</td>
<td>25.44</td>
<td>25.77</td>
<td>25.78</td>
<td>23.67</td>
<td>23.42</td>
<td>25.68</td>
</tr>
<tr>
<td>GN</td>
<td>0.02</td>
<td>35.21</td>
<td>35.54</td>
<td>34.78</td>
<td>34.50</td>
<td>34.69</td>
<td>34.46</td>
<td>36.67</td>
<td>36.21</td>
</tr>
<tr>
<td>GN</td>
<td>0.03</td>
<td>41.43</td>
<td>42.12</td>
<td>42.22</td>
<td>42.41</td>
<td>42.57</td>
<td>40.19</td>
<td>40.68</td>
<td>42.45</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In <xref ref-type="table" rid="table-4">Tab. 4</xref>, the mean value of the Gaussian noise is 0 by default. From <xref ref-type="table" rid="table-4">Tab. 4</xref>, we can see that the 2DPCA algorithm also performs well when processing noisy images. Compared with the PCA algorithm, 2DPCA has a lower root-mean-square error under the same noise conditions. From <xref ref-type="table" rid="table-5">Tab. 5</xref>, we can see that, compared with Mat PCA, 2DPCA also has a lower root-mean-square error under the same noise.</p>
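The root-mean-square error used throughout Tabs. 3&#x2013;5 can be computed as follows; the image and noise here are synthetic stand-ins with an illustrative noise level, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((112, 92)) * 255        # stand-in for a face image

# Zero-mean Gaussian noise (GN) with an illustrative standard deviation
# of 0.02 * 255 = 5.1 on the 0-255 pixel range.
reconstructed = original + rng.normal(0.0, 0.02 * 255, original.shape)

def rmse(a, b):
    """Root-mean-square error between two images."""
    return np.sqrt(np.mean((a - b) ** 2))

err = rmse(original, reconstructed)
# err is close to the noise standard deviation here, since the two
# images differ only by the added noise.
```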

<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>2DPCA root-mean-square error after adding noise</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>IMAGE</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
</tr>
</thead>
<tbody>
<tr>
<td>SAPN</td>
<td>0.05</td>
<td>16.95</td>
<td>18.94</td>
<td>17.99</td>
<td>20.08</td>
<td>18.17</td>
<td>16.76</td>
<td>18.86</td>
<td>19.81</td>
</tr>
<tr>
<td>SAPN</td>
<td>0.10</td>
<td>28.96</td>
<td>28.95</td>
<td>29.08</td>
<td>27.05</td>
<td>27.54</td>
<td>27.88</td>
<td>28.12</td>
<td>29.64</td>
</tr>
<tr>
<td>SAPN</td>
<td>0.15</td>
<td>35.10</td>
<td>35.16</td>
<td>35.48</td>
<td>35.08</td>
<td>32.66</td>
<td>34.17</td>
<td>34.84</td>
<td>34.84</td>
</tr>
<tr>
<td>GN</td>
<td>0.01</td>
<td>21.18</td>
<td>19.96</td>
<td>21.27</td>
<td>20.11</td>
<td>19.28</td>
<td>21.53</td>
<td>21.69</td>
<td>17.26</td>
</tr>
<tr>
<td>GN</td>
<td>0.02</td>
<td>24.07</td>
<td>22.99</td>
<td>25.52</td>
<td>24.25</td>
<td>25.16</td>
<td>22.47</td>
<td>24.54</td>
<td>24.22</td>
</tr>
<tr>
<td>GN</td>
<td>0.03</td>
<td>27.97</td>
<td>27.59</td>
<td>27.16</td>
<td>28.56</td>
<td>29.00</td>
<td>24.19</td>
<td>29.14</td>
<td>28.64</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>Mat PCA root-mean-square error after adding noise</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>IMAGE</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
</tr>
</thead>
<tbody>
<tr>
<td>SAPN</td>
<td>0.05</td>
<td>24.55</td>
<td>23.43</td>
<td>24.65</td>
<td>23.24</td>
<td>23.33</td>
<td>22.01</td>
<td>24.73</td>
<td>24.25</td>
</tr>
<tr>
<td>SAPN</td>
<td>0.10</td>
<td>33.62</td>
<td>33.57</td>
<td>33.43</td>
<td>33.92</td>
<td>32.54</td>
<td>33.12</td>
<td>32.65</td>
<td>32.31</td>
</tr>
<tr>
<td>SAPN</td>
<td>0.15</td>
<td>40.31</td>
<td>40.76</td>
<td>40.25</td>
<td>39.45</td>
<td>40.12</td>
<td>40.36</td>
<td>39.94</td>
<td>39.18</td>
</tr>
<tr>
<td>GN</td>
<td>0.01</td>
<td>19.55</td>
<td>19.65</td>
<td>20.18</td>
<td>19.58</td>
<td>20.52</td>
<td>20.12</td>
<td>20.13</td>
<td>19.26</td>
</tr>
<tr>
<td>GN</td>
<td>0.02</td>
<td>26.46</td>
<td>27.21</td>
<td>27.84</td>
<td>27.98</td>
<td>27.36</td>
<td>27.35</td>
<td>27.42</td>
<td>27.43</td>
</tr>
<tr>
<td>GN</td>
<td>0.03</td>
<td>32.32</td>
<td>32.02</td>
<td>32.15</td>
<td>32.52</td>
<td>33.26</td>
<td>33.65</td>
<td>33.32</td>
<td>32.21</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusion</title>
<p>This article presents image compression and reconstruction algorithms based on principal component analysis and its improved variants. PCA effectively reduces the dimensionality of data while minimizing the error between the extracted components and the original data, so it is well suited to data compression and feature extraction. With the development of multimedia technology, image media carry a large amount of information, and storing and transmitting these image data efficiently has drawn increasing attention to image compression techniques. The experimental results demonstrate that the proposed method is simple to implement, compresses images effectively, and can reconstruct images at different fidelity levels according to the number of principal components retained, satisfying the needs of different levels of image compression and reconstruction.</p>
</sec>
</body>
<back><fn-group>
<fn fn-type="other">
<p><bold>Funding Statement:</bold> This work was partially supported by the National Natural Science Foundation of China (61876089, 61876185, 61902281, 61375121), the Opening Project of Jiangsu Key Laboratory of Data Science and Smart Software (No. 2019DS301), the Science and Technology Program of Jiangsu Province Construction System (2020JH08), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.</p>
</fn>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1">
<label>1</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>D.</given-names> 
<surname>Tsiotas</surname></string-name> and <string-name>
<given-names>S.</given-names> 
<surname>Polyzos</surname></string-name>
</person-group>, &#x201C;
<article-title>The complexity in the study of spatial networks: An epistemological approach</article-title>,&#x201D; 
<source>Networks &#x0026; Spatial Economics</source>, vol. 
<volume>18</volume>, no. 
<issue>1</issue>, pp. 
<fpage>1</fpage>&#x2013;
<lpage>32</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-2">
<label>2</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>B. Y.</given-names> 
<surname>Cai</surname></string-name> and <string-name>
<given-names>J.</given-names> 
<surname>Xu</surname></string-name>
</person-group>, &#x201C;
<article-title>Digital image compression based on BEMD and PCA</article-title>,&#x201D; 
<source>Computer Engineering and Applications</source>, vol. 
<volume>22</volume>, no. 
<issue>1</issue>, pp. 
<fpage>335</fpage>&#x2013;
<lpage>337</lpage>, 
<year>2011</year>.</mixed-citation>
</ref>
<ref id="ref-3">
<label>3</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Zhang</surname></string-name>, <string-name>
<given-names>X. F.</given-names> 
<surname>Song</surname></string-name> and <string-name>
<given-names>D. W.</given-names> 
<surname>Gong</surname></string-name>
</person-group>, &#x201C;
<article-title>A return-cost-based binary firefly algorithm for feature selection</article-title>,&#x201D; 
<source>Information Sciences</source>, vol. 
<volume>11</volume>, no. 
<issue>2</issue>, pp. 
<fpage>418</fpage>&#x2013;
<lpage>419</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-4">
<label>4</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Zhang</surname></string-name>, <string-name>
<given-names>D. W.</given-names> 
<surname>Gong</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Hu</surname></string-name> and <string-name>
<given-names>W. Q.</given-names> 
<surname>Zhang</surname></string-name>
</person-group>, &#x201C;
<article-title>Feature selection algorithm based on bare bones particle swarm optimization</article-title>,&#x201D; 
<source>Neurocomputing</source>, vol. 
<volume>14</volume>, no. 
<issue>8</issue>, pp. 
<fpage>150</fpage>&#x2013;
<lpage>157</lpage>, 
<year>2015</year>.</mixed-citation>
</ref>
<ref id="ref-5">
<label>5</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Zhang</surname></string-name>, <string-name>
<given-names>D. W.</given-names> 
<surname>Gong</surname></string-name> and <string-name>
<given-names>J.</given-names> 
<surname>Cheng</surname></string-name>
</person-group>, &#x201C;
<article-title>Multi-objective particle swarm optimization approach for cost-based feature selection in classification</article-title>,&#x201D; 
<source>IEEE-ACM Transactions on Computational Biology and Bioinformatics</source>, vol. 
<volume>14</volume>, no. 
<issue>1</issue>, pp. 
<fpage>64</fpage>&#x2013;
<lpage>75</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-6">
<label>6</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>M. L.</given-names> 
<surname>Zheng</surname></string-name>, <string-name>
<given-names>P. Z.</given-names> 
<surname>Zhang</surname></string-name> and <string-name>
<given-names>W. W.</given-names> 
<surname>Guo</surname></string-name>
</person-group>, &#x201C;
<article-title>Super resolution reconstruction of face image based on learning</article-title>,&#x201D; 
<source>Computer Engineering and Applications</source>, vol. 
<volume>20</volume>, no. 
<issue>5</issue>, pp. 
<fpage>122</fpage>&#x2013;
<lpage>130</lpage>, 
<year>2013</year>.</mixed-citation>
</ref>
<ref id="ref-7">
<label>7</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>H.</given-names> 
<surname>Jiang</surname></string-name>
</person-group>, &#x201C;
<article-title>Image compression and reconstruction of principal component analysis</article-title>,&#x201D; 
<source>Electronic Design Engineering</source>, vol. 
<volume>20</volume>, no. 
<issue>5</issue>, pp. 
<fpage>14</fpage>&#x2013;
<lpage>18</lpage>, 
<year>2012</year>.</mixed-citation>
</ref>
<ref id="ref-8">
<label>8</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>D.</given-names> 
<surname>Zheng</surname></string-name>, <string-name>
<given-names>Z.</given-names> 
<surname>Ran</surname></string-name>, <string-name>
<given-names>Z.</given-names> 
<surname>Liu</surname></string-name>, <string-name>
<given-names>L.</given-names> 
<surname>Li</surname></string-name> and <string-name>
<given-names>L.</given-names> 
<surname>Tian</surname></string-name>
</person-group>, &#x201C;
<article-title>An efficient bar code image recognition algorithm for sorting system</article-title>,&#x201D; 
<source>Computers Materials &#x0026; Continua</source>, vol. 
<volume>64</volume>, no. 
<issue>3</issue>, pp. 
<fpage>1885</fpage>&#x2013;
<lpage>1895</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-9">
<label>9</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>D.</given-names> 
<surname>Wu</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Liu</surname></string-name>, <string-name>
<given-names>Z.</given-names> 
<surname>Xu</surname></string-name> and <string-name>
<given-names>W.</given-names> 
<surname>Shang</surname></string-name>
</person-group>, &#x201C;
<article-title>Design and development of unmanned surface vehicle for meteorological monitoring</article-title>,&#x201D; 
<source>Intelligent Automation &#x0026; Soft Computing</source>, vol. 
<volume>26</volume>, no. 
<issue>5</issue>, pp. 
<fpage>1123</fpage>&#x2013;
<lpage>1138</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-10">
<label>10</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Xue</surname></string-name>, <string-name>
<given-names>T.</given-names> 
<surname>Tang</surname></string-name>, <string-name>
<given-names>W.</given-names> 
<surname>Pang</surname></string-name> and <string-name>
<given-names>A. X.</given-names> 
<surname>Liu</surname></string-name>
</person-group>, &#x201C;
<article-title>Self-adaptive parameter and strategy based particle swarm optimization for large-scale feature selection problems with multiple classifiers</article-title>,&#x201D; 
<source>Applied Soft Computing</source>, vol. 
<volume>88</volume>, no. 
<issue>4</issue>, pp. 
<fpage>1</fpage>&#x2013;
<lpage>12</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-11">
<label>11</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Xue</surname></string-name>, <string-name>
<given-names>B.</given-names> 
<surname>Xue</surname></string-name> and <string-name>
<given-names>M.</given-names> 
<surname>Zhang</surname></string-name>
</person-group>, &#x201C;
<article-title>Self-adaptive particle swarm optimization for large-scale feature selection in classification</article-title>,&#x201D; 
<source>ACM Transactions on Knowledge Discovery from Data</source>, vol. 
<volume>13</volume>, no. 
<issue>5</issue>, pp. 
<fpage>1</fpage>&#x2013;
<lpage>27</lpage>, 
<year>2019</year>.</mixed-citation>
</ref>
<ref id="ref-12">
<label>12</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Xue</surname></string-name>, <string-name>
<given-names>J. M.</given-names> 
<surname>Jiang</surname></string-name>, <string-name>
<given-names>B. P.</given-names> 
<surname>Zhao</surname></string-name> and <string-name>
<given-names>T. H.</given-names> 
<surname>Ma</surname></string-name>
</person-group>, &#x201C;
<article-title>A self-adaptive artificial bee colony algorithm based on global best for global optimization</article-title>,&#x201D; 
<source>Soft Computing</source>, vol. 
<volume>22</volume>, no. 
<issue>9</issue>, pp. 
<fpage>2935</fpage>&#x2013;
<lpage>2952</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-13">
<label>13</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>X.</given-names> 
<surname>Yu</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Chu</surname></string-name>, <string-name>
<given-names>F.</given-names> 
<surname>Jiang</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Guo</surname></string-name> and <string-name>
<given-names>D. W.</given-names> 
<surname>Gong</surname></string-name>
</person-group>, &#x201C;
<article-title>SVMs classification based two-side cross domain collaborative filtering by inferring intrinsic user and item features</article-title>,&#x201D; 
<source>Knowledge-Based Systems</source>, vol. 
<volume>14</volume>, no. 
<issue>1</issue>, pp. 
<fpage>80</fpage>&#x2013;
<lpage>91</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-14">
<label>14</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>T.</given-names> 
<surname>Liu</surname></string-name> and <string-name>
<given-names>F. B.</given-names> 
<surname>Yang</surname></string-name>
</person-group>, &#x201C;
<article-title>Application of principal component analysis in image compression</article-title>,&#x201D; 
<source>Journal of Natural Science</source>, vol. 
<volume>24</volume>, no. 
<issue>4</issue>, pp. 
<fpage>24</fpage>&#x2013;
<lpage>28</lpage>, 
<year>2008</year>.</mixed-citation>
</ref>
<ref id="ref-15">
<label>15</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Xue</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Tang</surname></string-name>, <string-name>
<given-names>X.</given-names> 
<surname>Xu</surname></string-name>, <string-name>
<given-names>J.</given-names> 
<surname>Liang</surname></string-name> and <string-name>
<given-names>F.</given-names> 
<surname>Neri</surname></string-name>
</person-group>, &#x201C;
<article-title>Multi-objective feature selection with missing data in classification</article-title>,&#x201D; 
<source>IEEE Transactions on Emerging Topics in Computational Intelligence.</source> 
<comment>DOI 10.1109/TETCI.2021.3074147</comment>.</mixed-citation>
</ref>
<ref id="ref-16">
<label>16</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Xue</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Wang</surname></string-name>, <string-name>
<given-names>J.</given-names> 
<surname>Liang</surname></string-name> and <string-name>
<given-names>A.</given-names> 
<surname>Slowik</surname></string-name>
</person-group>, &#x201C;
<article-title>A self-adaptive mutation neural architecture search algorithm based on blocks</article-title>,&#x201D; 
<source>IEEE Computational Intelligence Magazine</source>, 
<comment>DOI 10.1109/MCI.2021.3084435</comment>.</mixed-citation>
</ref>
<ref id="ref-17">
<label>17</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Xue</surname></string-name>, <string-name>
<given-names>H.</given-names> 
<surname>Zhu</surname></string-name> and <string-name>
<given-names>J.</given-names> 
<surname>Liang</surname></string-name>
</person-group>, &#x201C;
<article-title>Adaptive crossover operator based multi-objective binary genetic algorithm for feature selection in classification</article-title>,&#x201D; 
<source>Knowledge-Based Systems</source>, 
<comment>DOI 10.1016/j.knosys.2021.107218</comment>.</mixed-citation>
</ref>
<ref id="ref-18">
<label>18</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>A.</given-names> 
<surname>Adebayo</surname></string-name>, <string-name>
<given-names>S.</given-names> 
<surname>Misra</surname></string-name>, <string-name>
<given-names>L.</given-names> 
<surname>Fern&#x00E1;ndez-Sanz</surname></string-name> and <string-name>
<given-names>A.</given-names> 
<surname>Olusola</surname></string-name>
</person-group>, &#x201C;
<article-title>Genetic algorithm and tabu search memory with course sandwiching (GATS_CS) for university examination timetabling</article-title>,&#x201D; 
<source>Intelligent Automation &#x0026; Soft Computing</source>, vol. 
<volume>26</volume>, no. 
<issue>3</issue>, pp. 
<fpage>385</fpage>&#x2013;
<lpage>396</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-19">
<label>19</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>B. T.</given-names> 
<surname>Hu</surname></string-name> and <string-name>
<given-names>J. W.</given-names> 
<surname>Wang</surname></string-name>
</person-group>, &#x201C;
<article-title>Deep learning for distinguishing computer generated images and natural images: A survey</article-title>,&#x201D; 
<source>Journal of Information Hiding and Privacy Protection</source>, vol. 
<volume>2</volume>, no. 
<issue>2</issue>, pp. 
<fpage>37</fpage>&#x2013;
<lpage>47</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>