<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CSSE</journal-id>
<journal-id journal-id-type="nlm-ta">CSSE</journal-id>
<journal-id journal-id-type="publisher-id">CSSE</journal-id>
<journal-title-group>
<journal-title>Computer Systems Science &#x0026; Engineering</journal-title>
</journal-title-group>
<issn pub-type="ppub">0267-6192</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">16340</article-id>
<article-id pub-id-type="doi">10.32604/csse.2021.016340</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A Holographic Diffraction Label Recognition Algorithm Based on Fusion Double Tensor Features</article-title><alt-title alt-title-type="left-running-head">A Holographic Diffraction Label Recognition Algorithm Based on Fusion Double Tensor Features</alt-title><alt-title alt-title-type="right-running-head">A Holographic Diffraction Label Recognition Algorithm Based on Fusion Double Tensor Features</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western">
<surname>Li</surname>
<given-names>Li</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western">
<surname>Cui</surname>
<given-names>Chen</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western">
<surname>Lu</surname>
<given-names>Jianfeng</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-4" contrib-type="author" corresp="yes">
<name name-style="western">
<surname>Zhang</surname>
<given-names>Shanqing</given-names>
</name>
<xref ref-type="aff" rid="aff-1">1</xref>
<email>sqzhang@hdu.edu.cn</email>
</contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western">
<surname>Chang</surname>
<given-names>Ching-Chun</given-names>
</name>
<xref ref-type="aff" rid="aff-3">3</xref>
</contrib>
<aff id="aff-1">
<label>1</label><institution>Hangzhou Dianzi University</institution>, <addr-line>Hangzhou, 310018</addr-line>, <country>China</country></aff>
<aff id="aff-2">
<label>2</label><institution>Zhejiang Police College</institution>, <addr-line>Hangzhou, 310018</addr-line>, <country>China</country></aff>
<aff id="aff-3">
<label>3</label><institution>University of Warwick</institution>, <addr-line>Coventry, CV4 7AL</addr-line>, <country>UK</country></aff>
</contrib-group><author-notes><corresp id="cor1">&#x002A;Corresponding Author: Shanqing Zhang. Email: <email>sqzhang@hdu.edu.cn</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-05-12">
<day>12</day>
<month>5</month>
<year>2021</year>
</pub-date>
<volume>38</volume>
<issue>3</issue>
<fpage>291</fpage>
<lpage>303</lpage>
<history>
<date date-type="received">
<day>30</day>
<month>12</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>2</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2021 Li et al.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Li et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CSSE_16340.pdf"></self-uri>
<abstract>
<p>As an efficient anti-counterfeiting technique, holographic diffraction labels have been widely applied in various fields. Due to their unique optical features, traditional image recognition algorithms are not ideal for holographic diffraction label recognition. Since a tensor preserves the spatiotemporal features of the original sample during feature extraction, in this paper we propose a new holographic diffraction label recognition algorithm that combines two tensor features. The HSV (Hue Saturation Value) tensor and the HOG (Histogram of Oriented Gradient) tensor are used to represent the color information and gradient information of the holographic diffraction label, respectively. The tensors are decomposed by high-order singular value decomposition, yielding tensor decomposition matrices. Taking into consideration the different recognition capabilities of the decomposition matrices, we design a decomposition matrix similarity fusion strategy: canonical correlation analysis is used to compute similarity vectors for the different decomposition matrices, the similarity vectors are projected into the PCA (Principal Component Analysis) sub-space, and KNN (K-Nearest Neighbors) classification is then performed in the sub-space. The effectiveness of our fusion strategy is verified by experiments. Our double tensor recognition algorithm lets the recognition capabilities of the two tensors complement each other, producing better recognition performance for the holographic diffraction label system.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Label recognition</kwd>
<kwd>holographic diffraction</kwd>
<kwd>fusion double tensor</kwd>
<kwd>matrix similarity</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>With the rapid development of printing technology, new types of product labels have emerged. Holographic diffraction labels have been adopted by many manufacturers for their unique anti-counterfeiting features. With the popularity of smartphones, there is an increasing demand for image recognition on mobile phones. Because of their unique physical properties, holographic diffraction labels show different image features under different illumination environments, so traditional image recognition algorithms are not ideal for holographic diffraction label recognition.</p>
<p>In this study, tensors are used to represent data in order to preserve the optically variable information of a diffraction image. Tensors have been widely used in signal and image processing [<xref ref-type="bibr" rid="ref-1">1</xref>&#x2013;<xref ref-type="bibr" rid="ref-3">3</xref>], factor analysis [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-5">5</xref>], and voice communication [<xref ref-type="bibr" rid="ref-6">6</xref>]. Since a tensor can maintain the structure of the original data, tensor analysis has attracted many researchers. Most datasets can be represented by matrices and efficiently analyzed by singular value decomposition (SVD) [<xref ref-type="bibr" rid="ref-7">7</xref>]. However, some specific datasets, such as image sequences, video, and text, cannot be represented by matrices directly, so additional operations are required. For example, when SVD cannot be applied directly, a tensor decomposition such as the Tucker decomposition is required [<xref ref-type="bibr" rid="ref-8">8</xref>]. Vasilescu et al. [<xref ref-type="bibr" rid="ref-9">9</xref>] constructed face images into two-dimensional tensors for face recognition.</p>
<p>Low-dimension sub-space learning methods have been expanded to tensor representation, such as tensor principal component analysis [<xref ref-type="bibr" rid="ref-10">10</xref>], tensor linear discriminant analysis [<xref ref-type="bibr" rid="ref-11">11</xref>], and multilinear discriminant analysis [<xref ref-type="bibr" rid="ref-12">12</xref>]. Stoudenmire et al. [<xref ref-type="bibr" rid="ref-13">13</xref>] proposed a supervised tensor-learning framework that can directly process high order tensor data.</p>
<p>Information from the changing illumination of holographic diffraction labels is lost if they are represented using a matrix [<xref ref-type="bibr" rid="ref-14">14</xref>]. In addition, images captured by smartphone cameras suffer from jitter and rotational interference. A double tensor is therefore used herein to represent the features of holographic diffraction labels. Most tensor-based image recognition methods directly represent the original data of the image as a tensor and do not include feature extraction. An appropriate feature extraction method makes it possible to represent the original image as a tensor of new features [<xref ref-type="bibr" rid="ref-15">15</xref>], which results in better recognition performance. Moreover, the original data and the extracted features can complement each other, further improving classification accuracy [<xref ref-type="bibr" rid="ref-16">16</xref>]. Taking into consideration the features of holographic images under different illumination, we propose a holographic label classification method that combines an HSV (Hue Saturation Value) tensor with a HOG (Histogram of Oriented Gradient) feature tensor. Accurate classification and identification of label images are achieved by a similarity measurement over both tensors.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Proposed Method</title>
<p>A color image has three RGB channels and can intrinsically be represented as a tensor. A holographic label shows different color information under different illuminations because of its light-varying feature. In order to preserve the color information of an image, the holographic image is converted from the RGB to the HSV color space and further represented as an HSV tensor [<xref ref-type="bibr" rid="ref-17">17</xref>]. The HOG tensor of a label is constructed from the extracted HOG features [<xref ref-type="bibr" rid="ref-18">18</xref>]. In order to measure the similarity between the tensors, high-order singular value decomposition (HOSVD) is used to obtain the decomposition matrices of the tensor expansion matrices [<xref ref-type="bibr" rid="ref-19">19</xref>], and the canonical correlation coefficients of the decomposition matrices are calculated using canonical correlation analysis (CCA) to obtain the similarity vector. Because different decomposition matrices have different classification capabilities, we propose a fusion strategy that performs principal component analysis (PCA) dimensionality reduction on the similarity vector. Finally, the nearest neighbor algorithm is used for classification. The flowchart of our algorithm is shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Flowchart of double tensor recognition algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-1.png"/>
</fig>
<sec id="s2_1">
<label>2.1</label>
<title>Image Preprocessing</title>
<p>The tilt and rotation of an image taken by a mobile phone often cause incorrect recognition. In order to ensure the accuracy of the classification, rotation correction using edge detection and the Hough transform is performed on all input images.</p>
<p>In order to remove interference from the background of a label image, the grayscale image is converted into a binary image using the OTSU (maximum between-class variance) method. Canny edge detection is then performed on the binary image. The traditional Canny operator applies Gaussian smoothing to the original image before edge detection. However, after Gaussian smoothing the influence of a noise point is related to its distance from the center, which blurs image edges [<xref ref-type="bibr" rid="ref-20">20</xref>] and impairs the image correction effect [<xref ref-type="bibr" rid="ref-21">21</xref>]. Median filtering is therefore used instead to preserve the edge information of the image. The test results are shown in <xref ref-type="fig" rid="fig-2">Fig. 2c</xref>. The binary image obtained by edge detection then goes through Hough rotation correction, and the results are shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>The image with rotation correction (a) Grayscale (b) Traditional Canny algorithm (c) Improved Canny algorithm (d) Binary map (e) Hough detection line (f) Rotation correction</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-2.png"/>
</fig>
<p>After the rotation correction, the image is converted from the RGB color space into the HSV color space and normalized into a third-order HSV tensor <inline-formula id="ieqn-1">
<!--<alternatives><inline-graphic xlink:href="ieqn-1.tif"/><tex-math id="tex-ieqn-1"><![CDATA[{\rm \; C} \in {{\rm i}^{{\rm w} \times {\rm h} \times 3}}]]></tex-math>--><mml:math id="mml-ieqn-1"><mml:mrow><mml:mspace width="thickmathspace"></mml:mspace><mml:mi mathvariant="normal">C</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">w</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mi mathvariant="normal">h</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, where <inline-formula id="ieqn-2">
<!--<alternatives><inline-graphic xlink:href="ieqn-2.tif"/><tex-math id="tex-ieqn-2"><![CDATA[{\rm w}]]></tex-math>--><mml:math id="mml-ieqn-2"><mml:mrow><mml:mi mathvariant="normal">w</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-3">
<!--<alternatives><inline-graphic xlink:href="ieqn-3.tif"/><tex-math id="tex-ieqn-3"><![CDATA[{\rm h}]]></tex-math>--><mml:math id="mml-ieqn-3"><mml:mrow><mml:mi mathvariant="normal">h</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represent the width and height of the image, respectively.</p>
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Generation of HOG Tensor</title>
<p>Image features are extracted using HOG descriptors. In contrast to traditional HOG feature extraction algorithms, a faster HOG feature extraction method [<xref ref-type="bibr" rid="ref-22">22</xref>] is used to obtain the same descriptors as the original HOG.</p>
<p>The size of a normalized image is given as <inline-formula id="ieqn-4">
<!--<alternatives><inline-graphic xlink:href="ieqn-4.tif"/><tex-math id="tex-ieqn-4"><![CDATA[{\rm w} \times {\rm h}]]></tex-math>--><mml:math id="mml-ieqn-4"><mml:mrow><mml:mi mathvariant="normal">w</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mi mathvariant="normal">h</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, and the image is divided into <inline-formula id="ieqn-5">
<!--<alternatives><inline-graphic xlink:href="ieqn-5.tif"/><tex-math id="tex-ieqn-5"><![CDATA[\left( {{\rm w}/{\rm bsize}} \right) \times \left( {{\rm h}/{\rm bsize}} \right)]]></tex-math>--><mml:math id="mml-ieqn-5"><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:mi mathvariant="normal">w</mml:mi></mml:mrow><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="normal">b</mml:mi><mml:mi mathvariant="normal">s</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mi mathvariant="normal">z</mml:mi><mml:mi mathvariant="normal">e</mml:mi></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:mi mathvariant="normal">h</mml:mi></mml:mrow><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="normal">b</mml:mi><mml:mi mathvariant="normal">s</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mi mathvariant="normal">z</mml:mi><mml:mi mathvariant="normal">e</mml:mi></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> sub-blocks, where <inline-formula id="ieqn-6">
<!--<alternatives><inline-graphic xlink:href="ieqn-6.tif"/><tex-math id="tex-ieqn-6"><![CDATA[{\rm bsize}]]></tex-math>--><mml:math id="mml-ieqn-6"><mml:mrow><mml:mi mathvariant="normal">b</mml:mi><mml:mi mathvariant="normal">s</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mi mathvariant="normal">z</mml:mi><mml:mi mathvariant="normal">e</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is the size of each sub-block. The gradient of the image is calculated, and the gradient-direction histogram of each sub-block is constructed using the four-way normalization method shown in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>. Each block is generated from four adjacent sub-blocks, whose histograms are superposed into a vector <inline-formula id="ieqn-7">
<!--<alternatives><inline-graphic xlink:href="ieqn-7.tif"/><tex-math id="tex-ieqn-7"><![CDATA[{\rm v}]]></tex-math>--><mml:math id="mml-ieqn-7"><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>. The normalization of a block is defined as follows:</p>
<p><disp-formula id="eqn-1">
<label>(1)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-1.png"/><tex-math id="tex-eqn-1"><![CDATA[{\rm v} = {\rm v}/\sqrt {\left| {\left| {\rm v} \right|} \right|_2^2 + {{\epsilon }^2}}]]></tex-math>--><mml:math id="mml-eqn-1" display="block"><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:msqrt><mml:msubsup><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mn>2</mml:mn><mml:mn>2</mml:mn></mml:msubsup><mml:mo>&#x002B;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi>&#x03B5;</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:msqrt></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>where <inline-formula id="ieqn-8">
<!--<alternatives><inline-graphic xlink:href="ieqn-8.tif"/><tex-math id="tex-ieqn-8"><![CDATA[{\left| {\left| {\rm v} \right|} \right|_2}]]></tex-math>--><mml:math id="mml-ieqn-8"><mml:mrow><mml:msub><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow><mml:mo>|</mml:mo></mml:mrow></mml:mrow><mml:mo>|</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is the second norm of vector <inline-formula id="ieqn-9">
<!--<alternatives><inline-graphic xlink:href="ieqn-9.tif"/><tex-math id="tex-ieqn-9"><![CDATA[{\rm v}]]></tex-math>--><mml:math id="mml-ieqn-9"><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>,<inline-formula id="ieqn-10">
<!--<alternatives><inline-graphic xlink:href="ieqn-10.tif"/><tex-math id="tex-ieqn-10"><![CDATA[{ \; \epsilon } > 0]]></tex-math>--><mml:math id="mml-ieqn-10"><mml:mrow><mml:mspace width="thickmathspace"></mml:mspace><mml:mi>&#x03B5;</mml:mi></mml:mrow><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:math>
<!--</alternatives>--></inline-formula>.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>The normalization of blocks in the image</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-3.png"/>
</fig>
<p>Each block yields four different normalization results. Here, <inline-formula id="ieqn-11">
<!--<alternatives><inline-graphic xlink:href="ieqn-11.tif"/><tex-math id="tex-ieqn-11"><![CDATA[{\rm nOrients}]]></tex-math>--><mml:math id="mml-ieqn-11"><mml:mrow><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">O</mml:mi><mml:mi mathvariant="normal">r</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mi mathvariant="normal">e</mml:mi><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">t</mml:mi><mml:mi mathvariant="normal">s</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is the number of directions (bins) in the histogram, and each block gets one HOG descriptor of <inline-formula id="ieqn-12">
<!--<alternatives><inline-graphic xlink:href="ieqn-12.tif"/><tex-math id="tex-ieqn-12"><![CDATA[{\rm nOrients} \times 4]]></tex-math>--><mml:math id="mml-ieqn-12"><mml:mrow><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">O</mml:mi><mml:mi mathvariant="normal">r</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mi mathvariant="normal">e</mml:mi><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">t</mml:mi><mml:mi mathvariant="normal">s</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn></mml:math>
<!--</alternatives>--></inline-formula> in length, as shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>HOG feature extraction</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-4.png"/>
</fig>
<p>A third-order tensor <inline-formula id="ieqn-13">
<!--<alternatives><inline-graphic xlink:href="ieqn-13.tif"/><tex-math id="tex-ieqn-13"><![CDATA[{\rm G} \in {{\rm i}^{{\rm W} \times {\rm H} \times \left( {{\rm nOrients} \times 4} \right)}}]]></tex-math>--><mml:math id="mml-ieqn-13"><mml:mrow><mml:mi mathvariant="normal">G</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">W</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mi mathvariant="normal">H</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">O</mml:mi><mml:mi mathvariant="normal">r</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mi mathvariant="normal">e</mml:mi><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">t</mml:mi><mml:mi mathvariant="normal">s</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mn>4</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is obtained using HOG feature extraction on the original image, where <inline-formula id="ieqn-14">
<!--<alternatives><inline-graphic xlink:href="ieqn-14.tif"/><tex-math id="tex-ieqn-14"><![CDATA[{\rm W} = {\rm w}/{\rm bsize}]]></tex-math>--><mml:math id="mml-ieqn-14"><mml:mrow><mml:mi mathvariant="normal">W</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mi mathvariant="normal">w</mml:mi></mml:mrow><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="normal">b</mml:mi><mml:mi mathvariant="normal">s</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mi mathvariant="normal">z</mml:mi><mml:mi mathvariant="normal">e</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, <inline-formula id="ieqn-15">
<!--<alternatives><inline-graphic xlink:href="ieqn-15.tif"/><tex-math id="tex-ieqn-15"><![CDATA[{\rm H} = {\rm h}/{\rm bsize}]]></tex-math>--><mml:math id="mml-ieqn-15"><mml:mrow><mml:mi mathvariant="normal">H</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mi mathvariant="normal">h</mml:mi></mml:mrow><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="normal">b</mml:mi><mml:mi mathvariant="normal">s</mml:mi><mml:mi mathvariant="normal">i</mml:mi><mml:mi mathvariant="normal">z</mml:mi><mml:mi mathvariant="normal">e</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>.</p>
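The HOG tensor construction can be illustrated with a simplified NumPy sketch: per-sub-block orientation histograms followed by the four-way block normalization of Eq. (1). This is a simplified illustration, not the fast extraction method of [22]; the sub-block size, bin count, and epsilon are assumed values:

```python
import numpy as np

def hog_tensor(gray, bsize=8, n_orients=9, eps=1e-3):
    """Simplified HOG tensor sketch of shape (H, W, n_orients*4), where
    H = h/bsize and W = w/bsize. Each sub-block's orientation histogram is
    normalized four ways, once per 2x2 block it belongs to, using
    v = v / sqrt(||v||_2^2 + eps^2) as in Eq. (1)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)           # unsigned orientation
    H, W = gray.shape[0] // bsize, gray.shape[1] // bsize
    bins = np.minimum((ang / np.pi * n_orients).astype(int), n_orients - 1)
    cells = np.zeros((H, W, n_orients))
    for i in range(H):
        for j in range(W):
            m = mag[i * bsize:(i + 1) * bsize, j * bsize:(j + 1) * bsize]
            b = bins[i * bsize:(i + 1) * bsize, j * bsize:(j + 1) * bsize]
            cells[i, j] = np.bincount(b.ravel(), m.ravel(), minlength=n_orients)
    # four-way normalization: one result per 2x2 neighborhood of sub-blocks
    G = np.zeros((H, W, n_orients * 4))
    padded = np.pad(cells, ((1, 1), (1, 1), (0, 0)), mode="edge")
    for k, (di, dj) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        block = (padded[di:di + H, dj:dj + W]
                 + padded[di:di + H, dj + 1:dj + 1 + W]
                 + padded[di + 1:di + 1 + H, dj:dj + W]
                 + padded[di + 1:di + 1 + H, dj + 1:dj + 1 + W])
        norm = np.sqrt(np.sum(block ** 2, axis=-1, keepdims=True) + eps ** 2)
        G[..., k * n_orients:(k + 1) * n_orients] = cells / norm  # Eq. (1)
    return G
```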
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Similarity Between the Double Tensors</title>
<p>The obtained HOG tensor and HSV tensor are the primary features of an image. These primary features are decomposed into orthogonal matrices using the HOSVD algorithm. The similarity between the decomposition matrices of the test sample and those of a training sample is measured using CCA [<xref ref-type="bibr" rid="ref-23">23</xref>,<xref ref-type="bibr" rid="ref-24">24</xref>].</p>
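The CCA-based similarity between two orthogonal decomposition matrices can be sketched as the canonical correlations between their column spaces (the cosines of the principal angles). This is an illustrative reading of the similarity measure, not the authors' exact formulation; the scalar `similarity` helper and its mean aggregation are assumptions:

```python
import numpy as np

def canonical_correlations(U1, U2):
    """Canonical correlations between the column spaces of U1 and U2:
    the singular values of Q1^T Q2, i.e., cosines of the principal angles."""
    Q1, _ = np.linalg.qr(U1)
    Q2, _ = np.linalg.qr(U2)
    return np.linalg.svd(Q1.T @ Q2, compute_uv=False)

def similarity(U1, U2):
    """Hypothetical scalar similarity: the mean canonical correlation."""
    return float(canonical_correlations(U1, U2).mean())
```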
<sec id="s2_3_1">
<label>2.3.1</label>
<title>Generation of Decomposition Matrix</title>
<p>A tensor is decomposed into decomposition matrices using HOSVD. First, a high-order tensor is expanded into a two-dimensional matrix. An <inline-formula id="ieqn-16">
<!--<alternatives><inline-graphic xlink:href="ieqn-16.tif"/><tex-math id="tex-ieqn-16"><![CDATA[{{\rm N}^{{\rm th}}}]]></tex-math>--><mml:math id="mml-ieqn-16"><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">N</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">t</mml:mi><mml:mi mathvariant="normal">h</mml:mi></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> order tensor <inline-formula id="ieqn-17">
<!--<alternatives><inline-graphic xlink:href="ieqn-17.tif"/><tex-math id="tex-ieqn-17"><![CDATA[{\rm A} \in {{\rm i}^{{{\rm s}_1} \times {{\rm s}_2} \times {{\rm s}_3} \times \cdots \times {{\rm s}_{\rm N}}}}]]></tex-math>--><mml:math id="mml-ieqn-17"><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">s</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">s</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">s</mml:mi></mml:mrow><mml:mn>3</mml:mn></mml:msub></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">s</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is expanded into a series of two-dimensional matrices <inline-formula id="ieqn-18">
<!--<alternatives><inline-graphic xlink:href="ieqn-18.tif"/><tex-math id="tex-ieqn-18"><![CDATA[\left\{ {{{\rm A}_{\left( {\rm k} \right)}}} \right\}_{{\rm k} = 1}^{\rm N}]]></tex-math>--><mml:math id="mml-ieqn-18"><mml:msubsup><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="normal">N</mml:mi></mml:mrow></mml:msubsup></mml:math>
<!--</alternatives>--></inline-formula> using the modulo-N expansion, where the size of <inline-formula id="ieqn-19">
<!--<alternatives><inline-graphic xlink:href="ieqn-19.tif"/><tex-math id="tex-ieqn-19"><![CDATA[{{\rm A}_{\left( {\rm k} \right)}}]]></tex-math>--><mml:math id="mml-ieqn-19"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is <inline-formula id="ieqn-20">
<!--<alternatives><inline-graphic xlink:href="ieqn-20.tif"/><tex-math id="tex-ieqn-20"><![CDATA[{{\rm s}_{\rm k}} \times {\prod }_{{\rm i} \ne {\rm k}} {{\rm S}_{\rm i}}]]></tex-math>--><mml:math id="mml-ieqn-20"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">s</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:munder><mml:mrow><mml:mo movablelimits="false">&#x220F;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow><mml:mo>&#x2260;</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow></mml:mrow></mml:munder><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">S</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>. For example, a third-order tensor <inline-formula id="ieqn-21">
<!--<alternatives><inline-graphic xlink:href="ieqn-21.tif"/><tex-math id="tex-ieqn-21"><![CDATA[{\rm A} \in {{\rm i}^{4 \times 3 \times 5}}]]></tex-math>--><mml:math id="mml-ieqn-21"><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow><mml:mrow><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>5</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is expanded into the matrices <inline-formula id="ieqn-22">
<!--<alternatives><inline-graphic xlink:href="ieqn-22.tif"/><tex-math id="tex-ieqn-22"><![CDATA[{{\rm A}_{\left( 1 \right)}} \in {{\rm {\mathbb R}}^{4 \times 15}},{{\rm A}_{\left( 2 \right)}} \in {{\rm {\mathbb R}}^{3 \times 20}},{{\rm A}_{\left( 3 \right)}} \in {{\rm {\mathbb R}}^{5 \times 12}}]]></tex-math>--><mml:math id="mml-ieqn-22"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mn>4</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>15</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mn>3</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>20</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mn>3</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:mn>5</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>12</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, as shown in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Modulo-N unfold of a third-order tensor</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-5.png"/>
</fig>
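The modulo-N expansion and the extraction of the factor matrices can be sketched in NumPy as follows. The column ordering of the unfolding follows one common convention and is illustrative; each factor matrix is taken as the left singular vectors of the corresponding unfolding:

```python
import numpy as np

def unfold(T, mode):
    """Modulo-k expansion of tensor T: move axis `mode` to the front and
    flatten the remaining axes, giving an s_k x (prod of other sizes) matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_factors(T):
    """Orthogonal factor matrices U_(k): left singular vectors of each
    unfolding, as in the HOSVD decomposition of the expanded matrices."""
    return [np.linalg.svd(unfold(T, k), full_matrices=False)[0]
            for k in range(T.ndim)]
```

For the 4 x 3 x 5 example above, the three unfoldings have sizes 4 x 15, 3 x 20, and 5 x 12.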
<p>HOSVD decomposition of an expanded matrix is represented as follows:</p>
<p><disp-formula id="eqn-2">
<label>(2)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-2.png"/><tex-math id="tex-eqn-2"><![CDATA[{{\rm A}_{\left( {\rm k} \right)}} = {{\rm U}_{\left( {\rm k} \right)}}\mathop \sum \nolimits_{\left( {\rm k} \right)} {\rm V}_{\left( {\rm k} \right)}^{\rm T}]]></tex-math>--><mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">U</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow><mml:msub><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub><mml:mo>&#x2061;</mml:mo><mml:msubsup><mml:mrow><mml:mi mathvariant="normal">V</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msubsup></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>where <inline-formula id="ieqn-23">
<!--<alternatives><inline-graphic xlink:href="ieqn-23.tif"/><tex-math id="tex-ieqn-23"><![CDATA[\mathop \sum \nolimits_{\left( k \right)}]]></tex-math>--><mml:math id="mml-ieqn-23"><mml:msub><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:math>
<!--</alternatives>--></inline-formula> represents the diagonal matrix of singular values, <inline-formula id="ieqn-24">
<!--<alternatives><inline-graphic xlink:href="ieqn-24.tif"/><tex-math id="tex-ieqn-24"><![CDATA[{{\rm U}_{\left( {\rm k} \right)}}]]></tex-math>--><mml:math id="mml-ieqn-24"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">U</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-25">
<!--<alternatives><inline-graphic xlink:href="ieqn-25.tif"/><tex-math id="tex-ieqn-25"><![CDATA[{{\rm V}_{\left( {\rm k} \right)}}]]></tex-math>--><mml:math id="mml-ieqn-25"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">V</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> are orthogonal matrices whose columns span the column space and the row space of <inline-formula id="ieqn-26">
<!--<alternatives><inline-graphic xlink:href="ieqn-26.tif"/><tex-math id="tex-ieqn-26"><![CDATA[{{\rm A}_{\left( {\rm k} \right)}}]]></tex-math>--><mml:math id="mml-ieqn-26"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, respectively. Factor matrix <inline-formula id="ieqn-27">
<!--<alternatives><inline-graphic xlink:href="ieqn-27.tif"/><tex-math id="tex-ieqn-27"><![CDATA[{{\rm V}_{\left( {\rm k} \right)}}]]></tex-math>--><mml:math id="mml-ieqn-27"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">V</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is an orthogonal matrix associated with the non-zero singular values of <inline-formula id="ieqn-28">
<!--<alternatives><inline-graphic xlink:href="ieqn-28.tif"/><tex-math id="tex-ieqn-28"><![CDATA[{{\rm A}_{\left( {\rm k} \right)}}]]></tex-math>--><mml:math id="mml-ieqn-28"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">A</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>. The decomposition matrix <inline-formula id="ieqn-29">
<!--<alternatives><inline-graphic xlink:href="ieqn-29.tif"/><tex-math id="tex-ieqn-29"><![CDATA[{{\rm V}_{\left( {\rm k} \right)}}]]></tex-math>--><mml:math id="mml-ieqn-29"><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">V</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is regarded as a point on the Grassmann manifold, so it maps the primary feature tensor onto the Grassmann manifold, where tensor similarity can be computed for classification. Three such mappings onto the manifold are obtained for each of the HSV and HOG tensors.</p>
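The mode-k unfolding and its SVD can be sketched in a few lines of Python/NumPy (a stand-in for the paper's MATLAB implementation; the unfolding convention and all function names are illustrative assumptions):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-k unfolding: move mode k to the front, then flatten the
    remaining modes into columns (one common convention)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def factor_matrices(tensor):
    """Return the right singular-vector matrix V_(k) of each unfolding,
    the factors used as points on the Grassmann manifold."""
    factors = []
    for k in range(tensor.ndim):
        A_k = unfold(tensor, k)
        # economy-size SVD of the unfolding: A_(k) = U_(k) diag(s) V_(k)^T
        U, s, Vt = np.linalg.svd(A_k, full_matrices=False)
        factors.append(Vt.T)
    return factors

A = np.random.rand(4, 3, 5)          # third-order tensor, as in the text
V1, V2, V3 = factor_matrices(A)
print(V1.shape, V2.shape, V3.shape)  # (15, 4) (20, 3) (12, 5)
```

For the 4 × 3 × 5 example above, the unfoldings have the sizes stated in the text, and each V_(k) has orthonormal columns.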
</sec>
<sec id="s2_3_2">
<label>2.3.2</label>
<title>CCA Similarity Measurement</title>
<p>CCA is used to measure the similarity between tensor decomposition matrices. For random vectors <inline-formula id="ieqn-30">
<!--<alternatives><inline-graphic xlink:href="ieqn-30.tif"/><tex-math id="tex-ieqn-30"><![CDATA[{\rm x} \in {{\mathbb R}^{\rm m}}]]></tex-math>--><mml:math id="mml-ieqn-30"><mml:mrow><mml:mi mathvariant="normal">x</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">m</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-31">
<!--<alternatives><inline-graphic xlink:href="ieqn-31.tif"/><tex-math id="tex-ieqn-31"><![CDATA[{\rm y} \in {{\mathbb R}^{\rm n}}]]></tex-math>--><mml:math id="mml-ieqn-31"><mml:mrow><mml:mi mathvariant="normal">y</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">n</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, the optimization goal of CCA [<xref ref-type="bibr" rid="ref-23">23</xref>] is to find vectors <inline-formula id="ieqn-32">
<!--<alternatives><inline-graphic xlink:href="ieqn-32.tif"/><tex-math id="tex-ieqn-32"><![CDATA[{\rm u} \in {{\mathbb R}^{\rm m}}]]></tex-math>--><mml:math id="mml-ieqn-32"><mml:mrow><mml:mi mathvariant="normal">u</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">m</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-33">
<!--<alternatives><inline-graphic xlink:href="ieqn-33.tif"/><tex-math id="tex-ieqn-33"><![CDATA[{\rm v} \in {{\mathbb R}^{\rm n}}]]></tex-math>--><mml:math id="mml-ieqn-33"><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">n</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, so that the correlation between <inline-formula id="ieqn-34">
<!--<alternatives><inline-graphic xlink:href="ieqn-34.tif"/><tex-math id="tex-ieqn-34"><![CDATA[{{\rm u}^{\rm T}}{\rm x}]]></tex-math>--><mml:math id="mml-ieqn-34"><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">u</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mi mathvariant="normal">x</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-35">
<!--<alternatives><inline-graphic xlink:href="ieqn-35.tif"/><tex-math id="tex-ieqn-35"><![CDATA[{{\rm v}^{\rm T}}y]]></tex-math>--><mml:math id="mml-ieqn-35"><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mi>y</mml:mi></mml:math>
<!--</alternatives>--></inline-formula> is maximized. It is defined in <xref ref-type="disp-formula" rid="eqn-3">Eq. (3)</xref>.</p>
<p><disp-formula id="eqn-3">
<label>(3)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-3.png"/><tex-math id="tex-eqn-3"><![CDATA[{\rm \rho } = \max {\rm corr}\left( {{{\rm u}^{\rm T}}{\rm x},{{\rm v}^{\rm T}}{\rm y}} \right)]]></tex-math>--><mml:math id="mml-eqn-3" display="block"><mml:mrow><mml:mi>&#x03C1;</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mi mathvariant="normal">c</mml:mi><mml:mi mathvariant="normal">o</mml:mi><mml:mi mathvariant="normal">r</mml:mi><mml:mi mathvariant="normal">r</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">u</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mi mathvariant="normal">x</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mi mathvariant="normal">y</mml:mi></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>where <inline-formula id="ieqn-36">
<!--<alternatives><inline-graphic xlink:href="ieqn-36.tif"/><tex-math id="tex-ieqn-36"><![CDATA[{\rm u}]]></tex-math>--><mml:math id="mml-ieqn-36"><mml:mrow><mml:mi mathvariant="normal">u</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-37">
<!--<alternatives><inline-graphic xlink:href="ieqn-37.tif"/><tex-math id="tex-ieqn-37"><![CDATA[{\rm v}]]></tex-math>--><mml:math id="mml-ieqn-37"><mml:mrow><mml:mi mathvariant="normal">v</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represent the canonical weight vectors, <inline-formula id="ieqn-38">
<!--<alternatives><inline-graphic xlink:href="ieqn-38.tif"/><tex-math id="tex-ieqn-38"><![CDATA[{\rm \rho }]]></tex-math>--><mml:math id="mml-ieqn-38"><mml:mrow><mml:mi>&#x03C1;</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is the canonical correlation, and <inline-formula id="ieqn-39">
<!--<alternatives><inline-graphic xlink:href="ieqn-39.tif"/><tex-math id="tex-ieqn-39"><![CDATA[{\rm corr}\left( {{\rm X},{\rm Y}} \right)]]></tex-math>--><mml:math id="mml-ieqn-39"><mml:mrow><mml:mi mathvariant="normal">c</mml:mi><mml:mi mathvariant="normal">o</mml:mi><mml:mi mathvariant="normal">r</mml:mi><mml:mi mathvariant="normal">r</mml:mi></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:mi mathvariant="normal">X</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mi mathvariant="normal">Y</mml:mi></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is the correlation between <inline-formula id="ieqn-40">
<!--<alternatives><inline-graphic xlink:href="ieqn-40.tif"/><tex-math id="tex-ieqn-40"><![CDATA[{\rm X}]]></tex-math>--><mml:math id="mml-ieqn-40"><mml:mrow><mml:mi mathvariant="normal">X</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-41">
<!--<alternatives><inline-graphic xlink:href="ieqn-41.tif"/><tex-math id="tex-ieqn-41"><![CDATA[{\rm Y}]]></tex-math>--><mml:math id="mml-ieqn-41"><mml:mrow><mml:mi mathvariant="normal">Y</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>. The canonical correlation between matrices <inline-formula id="ieqn-42">
<!--<alternatives><inline-graphic xlink:href="ieqn-42.tif"/><tex-math id="tex-ieqn-42"><![CDATA[{\rm X} \in {{\mathbb R}^{{\rm N} \times {{\rm m}_1}}}]]></tex-math>--><mml:math id="mml-ieqn-42"><mml:mrow><mml:mi mathvariant="normal">X</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">N</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">m</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-43">
<!--<alternatives><inline-graphic xlink:href="ieqn-43.tif"/><tex-math id="tex-ieqn-43"><![CDATA[{\rm Y} \in {{\mathbb R}^{{\rm N} \times {{\rm m}_2}}}]]></tex-math>--><mml:math id="mml-ieqn-43"><mml:mrow><mml:mi mathvariant="normal">Y</mml:mi></mml:mrow><mml:mo>&#x2208;</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="double-struck">R</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">N</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">m</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> is defined in <xref ref-type="disp-formula" rid="eqn-4">Eq. (4)</xref>. The canonical correlation of two decomposition matrices is calculated using the MATLAB function <inline-formula id="ieqn-44">
<!--<alternatives><inline-graphic xlink:href="ieqn-44.tif"/><tex-math id="tex-ieqn-44"><![CDATA[{\rm \; canoncorr}()]]></tex-math>--><mml:math id="mml-ieqn-44"><mml:mrow><mml:mspace width="thickmathspace"></mml:mspace><mml:mi mathvariant="normal">c</mml:mi><mml:mi mathvariant="normal">a</mml:mi><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">o</mml:mi><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">c</mml:mi><mml:mi mathvariant="normal">o</mml:mi><mml:mi mathvariant="normal">r</mml:mi><mml:mi mathvariant="normal">r</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mo stretchy="false">)</mml:mo></mml:math>
<!--</alternatives>--></inline-formula>.</p>
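As an illustration of this measurement step, the canonical correlations of Eq. (4) can be computed in Python/NumPy (a stand-in for MATLAB's canoncorr; the QR-based formulation and the toy data are assumptions of this sketch):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the columns of X (N x m1) and
    Y (N x m2); a NumPy stand-in for MATLAB's canoncorr. Returns the
    correlations in decreasing order."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # the singular values of Qx^T Qy are the canonical correlations
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
# Y shares one (noisy) column with X; the rest is unrelated noise
Y = np.hstack([X[:, :1] + 0.01 * rng.standard_normal((50, 1)),
               rng.standard_normal((50, 3))])
rho = canonical_correlations(X, Y)
print(rho[0])   # close to 1: the shared direction is recovered
```

In the paper's setting, applying this to each of the six pairs of decomposition matrices yields the six entries of the similarity vector ψ.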
<p><disp-formula id="eqn-4">
<label>(4)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-4.png"/><tex-math id="tex-eqn-4"><![CDATA[{\rm \rho } = \max {{\rm X}^{{\rm &#x0027;T}}}{\rm {Y}^{\prime}},{\rm where\; {X}^{\prime}} = {\rm Xu},{\rm {Y}^{\prime}} = {\rm Yv}]]></tex-math>--><mml:math id="mml-eqn-4" display="block"><mml:mrow><mml:mi>&#x03C1;</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">X</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:msup><mml:mi></mml:mi><mml:mo>&#x2032;</mml:mo></mml:msup><mml:mi>T</mml:mi></mml:mrow></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:mi mathvariant="normal">w</mml:mi><mml:mi mathvariant="normal">h</mml:mi><mml:mi mathvariant="normal">e</mml:mi><mml:mi mathvariant="normal">r</mml:mi><mml:mi mathvariant="normal">e</mml:mi><mml:mspace width="thickmathspace"></mml:mspace><mml:msup><mml:mrow><mml:mi mathvariant="normal">X</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mi mathvariant="normal">X</mml:mi><mml:mi mathvariant="normal">u</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">Y</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">&#x2032;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mi mathvariant="normal">Y</mml:mi><mml:mi mathvariant="normal">v</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>Six decomposition matrices <inline-formula id="ieqn-45">
<!--<alternatives><inline-graphic xlink:href="ieqn-45.tif"/><tex-math id="tex-ieqn-45"><![CDATA[\left\{ {{{\rm V}_{\left( {\rm i} \right)}}} \right\}_{{\rm i} = 1}^6]]></tex-math>--><mml:math id="mml-ieqn-45"><mml:msubsup><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi mathvariant="normal">V</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">i</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mn>6</mml:mn></mml:msubsup></mml:math>
<!--</alternatives>--></inline-formula> are obtained from the HSV and HOG tensors (three each), and the similarity between two samples is then represented as a 6-dimensional vector <inline-formula id="ieqn-46">
<!--<alternatives><inline-graphic xlink:href="ieqn-46.tif"/><tex-math id="tex-ieqn-46"><![CDATA[{\rm \psi } = \left( {{{\rm \varphi }_1},{{\rm \varphi }_2}, \cdots ,{{\rm \varphi }_6}} \right)]]></tex-math>--><mml:math id="mml-ieqn-46"><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C6;</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C6;</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C6;</mml:mi></mml:mrow><mml:mn>6</mml:mn></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>.</p>
</sec>
<sec id="s2_3_3">
<label>2.3.3</label>
<title>Similarity Fusion</title>
<p>Calculating the similarity of holographic labels as described in the previous sections yields six canonical correlations. The sum of all six could simply serve as the similarity between two samples. However, the different decomposition matrices of a tensor carry different information and have different discriminative power. Therefore, each decomposition matrix is treated as an independent unit, and an effective method is proposed to fuse these similarities. The process is shown in <xref ref-type="fig" rid="fig-6">Fig. 6</xref>.</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>The process of the decomposition matrix similarity fusion</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-6.png"/>
</fig>
<p>The similarity vectors between the test sample and the training samples are represented as <inline-formula id="ieqn-47">
<!--<alternatives><inline-graphic xlink:href="ieqn-47.tif"/><tex-math id="tex-ieqn-47"><![CDATA[{{\rm \psi }_1},{{\rm \psi }_2}, \cdots ,{{\rm \psi }_{\rm t}}]]></tex-math>--><mml:math id="mml-ieqn-47"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, respectively. Principal component analysis (PCA) is used to find the projection sub-space in which the fused similarity vectors are best separated, and classification is then carried out in this PCA sub-space. For a series of similarity vectors, the mean vector is defined using <xref ref-type="disp-formula" rid="eqn-5">Eq. (5)</xref>:</p>
<p><disp-formula id="eqn-5">
<label>(5)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-5.png"/><tex-math id="tex-eqn-5"><![CDATA[ \bar{\rm \psi } = \displaystyle{1 \over {\rm t}}\mathop \sum \limits_{{\rm k} = 1}^{\rm t} {{\rm \psi }_{\rm k}}]]></tex-math>--><mml:math id="mml-eqn-5" display="block"><mml:mrow><mml:mrow><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mstyle scriptlevel="0" displaystyle="true"><mml:mrow><mml:mfrac><mml:mn>1</mml:mn><mml:mrow><mml:mi mathvariant="normal">t</mml:mi></mml:mrow></mml:mfrac></mml:mrow><mml:munderover><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="normal">t</mml:mi></mml:mrow></mml:munderover><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mstyle></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>The scatter matrix is calculated using <xref ref-type="disp-formula" rid="eqn-6">Eq. (6)</xref>:</p>
<p><disp-formula id="eqn-6">
<label>(6)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-6.png"/><tex-math id="tex-eqn-6"><![CDATA[{\rm S} = \mathop \sum \limits_{{\rm k} = 1}^{\rm t} \left( {{{\rm \psi }_{\rm k}} - \bar {\rm \psi }} \right){\left( {{{\rm \psi }_{\rm k}} - \bar {\rm \psi }} \right)^{\rm T}}]]></tex-math>--><mml:math id="mml-eqn-6" display="block"><mml:mrow><mml:mi mathvariant="normal">S</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:munderover><mml:mrow><mml:mo movablelimits="false">&#x2211;</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi mathvariant="normal">t</mml:mi></mml:mrow></mml:munderover><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mrow><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">k</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x2212;</mml:mo><mml:mrow><mml:mrow><mml:mover><mml:mi>&#x03C8;</mml:mi><mml:mo stretchy="false">&#x00AF;</mml:mo></mml:mover></mml:mrow></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>The scatter matrix is decomposed using <xref ref-type="disp-formula" rid="eqn-7">Eq. (7)</xref>:</p>
<p><disp-formula id="eqn-7">
<label>(7)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-7.png"/><tex-math id="tex-eqn-7"><![CDATA[{\rm S} = { \Phi }{\rm {\mathbb A}}{{\Phi }^{\rm T}}]]></tex-math>--><mml:math id="mml-eqn-7" display="block"><mml:mrow><mml:mi mathvariant="normal">S</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03A6;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">A</mml:mi></mml:mrow></mml:mrow></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="normal">&#x03A6;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>The diagonal matrix <inline-formula id="ieqn-48">
<!--<alternatives><inline-graphic xlink:href="ieqn-48.tif"/><tex-math id="tex-ieqn-48"><![CDATA[{\rm {\mathbb A}}]]></tex-math>--><mml:math id="mml-ieqn-48"><mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="double-struck">A</mml:mi></mml:mrow></mml:mrow></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> consists of eigenvalues of <inline-formula id="ieqn-49">
<!--<alternatives><inline-graphic xlink:href="ieqn-49.tif"/><tex-math id="tex-ieqn-49"><![CDATA[{\rm S}]]></tex-math>--><mml:math id="mml-ieqn-49"><mml:mrow><mml:mi mathvariant="normal">S</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>, and the column vectors of <inline-formula id="ieqn-50">
<!--<alternatives><inline-graphic xlink:href="ieqn-50.tif"/><tex-math id="tex-ieqn-50"><![CDATA[{ \Phi }]]></tex-math>--><mml:math id="mml-ieqn-50"><mml:mrow><mml:mi mathvariant="normal">&#x03A6;</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> are the corresponding eigenvectors. The PCA sub-space is spanned by the eigenvectors corresponding to the <inline-formula id="ieqn-51">
<!--<alternatives><inline-graphic xlink:href="ieqn-51.tif"/><tex-math id="tex-ieqn-51"><![CDATA[{\rm \rho }]]></tex-math>--><mml:math id="mml-ieqn-51"><mml:mrow><mml:mi>&#x03C1;</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> largest eigenvalues. A test similarity vector is generated using <xref ref-type="disp-formula" rid="eqn-8">Eq. (8)</xref>:</p>
<p><disp-formula id="eqn-8">
<label>(8)</label>
<!--<alternatives>
<graphic mimetype="image" mime-subtype="png" xlink:href="eqn-8.png"/><tex-math id="tex-eqn-8"><![CDATA[{{\rm \psi }_{\rm T}} = {\rm \{ }\max \left( {{{\rm \psi }_{{\rm j},1}}} \right),\max \left( {{{\rm \psi }_{{\rm j},2}}} \right), \cdots ,\max \left( {{{\rm \psi }_{{\rm j},6}}} \right){\rm |\; j} = 1,2, \cdots ,{\rm t}\}]]></tex-math>--><mml:math id="mml-eqn-8" display="block"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mrow><mml:mo stretchy="false" fence="false">{</mml:mo></mml:mrow><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">j</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">j</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">j</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mn>6</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mspace width="thickmathspace"></mml:mspace><mml:mi 
mathvariant="normal">j</mml:mi></mml:mrow><mml:mo>&#x003D;</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:mi mathvariant="normal">t</mml:mi></mml:mrow><mml:mo stretchy="false" fence="false">}</mml:mo></mml:math>
<!--</alternatives>--></disp-formula></p>
<p>where <inline-formula id="ieqn-52">
<!--<alternatives><inline-graphic xlink:href="ieqn-52.tif"/><tex-math id="tex-ieqn-52"><![CDATA[\max \left( {{{\rm \psi }_{{\rm j},1}}} \right)]]></tex-math>--><mml:math id="mml-ieqn-52"><mml:mo movablelimits="true" form="prefix">max</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="normal">j</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represents the maximum of the first dimension over the training similarity vectors. Because a larger <inline-formula id="ieqn-53">
<!--<alternatives><inline-graphic xlink:href="ieqn-53.tif"/><tex-math id="tex-ieqn-53"><![CDATA[{\rm \psi }]]></tex-math>--><mml:math id="mml-ieqn-53"><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represents higher similarity, the maximum value of each dimension serves as the test vector. All vectors <inline-formula id="ieqn-54">
<!--<alternatives><inline-graphic xlink:href="ieqn-54.tif"/><tex-math id="tex-ieqn-54"><![CDATA[{{\rm \psi }_1},{{\rm \psi }_2}, \cdots ,{{\rm \psi }_{\rm t}}]]></tex-math>--><mml:math id="mml-ieqn-54"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> and <inline-formula id="ieqn-55">
<!--<alternatives><inline-graphic xlink:href="ieqn-55.tif"/><tex-math id="tex-ieqn-55"><![CDATA[{{\rm \psi }_{\rm T}}]]></tex-math>--><mml:math id="mml-ieqn-55"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">T</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> are projected to the PCA sub-space and classified using the nearest neighbor algorithm.</p>
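The fusion pipeline of Eqs. (5)-(8) can be sketched as follows in Python/NumPy (a stand-in for the paper's MATLAB code; the toy data, the value of p, and the function names are illustrative assumptions):

```python
import numpy as np

def fuse_and_classify(train_psis, labels, p=3):
    """train_psis: (t x 6) training similarity vectors; labels: (t,)."""
    mean = train_psis.mean(axis=0)                     # Eq. (5): mean vector
    D = train_psis - mean
    S = D.T @ D                                        # Eq. (6): scatter matrix
    eigvals, eigvecs = np.linalg.eigh(S)               # Eq. (7): S = Phi A Phi^T
    Phi_p = eigvecs[:, np.argsort(eigvals)[::-1][:p]]  # top-p eigenvectors
    psi_T = train_psis.max(axis=0)                     # Eq. (8): column-wise maxima
    proj = D @ Phi_p                                   # training vectors in sub-space
    proj_T = (psi_T - mean) @ Phi_p                    # test vector in sub-space
    nearest = np.argmin(np.linalg.norm(proj - proj_T, axis=1))
    return labels[nearest]

# toy run: the third training sample dominates every dimension, so the
# Eq. (8) test vector coincides with it and nearest neighbor returns its label
train_psis = np.array([[0.2, 0.1, 0.3, 0.2, 0.1, 0.2],
                       [0.4, 0.3, 0.2, 0.5, 0.3, 0.4],
                       [0.9, 0.8, 0.9, 0.9, 0.8, 0.9]])
print(fuse_and_classify(train_psis, labels=np.array([0, 1, 2])))   # 2
```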
</sec>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Experimental Results</title>
<p>The dataset used in this study contains 200 holographic diffraction labels with an image size of <inline-formula id="ieqn-56">
<!--<alternatives><inline-graphic xlink:href="ieqn-56.tif"/><tex-math id="tex-ieqn-56"><![CDATA[512 \times 512]]></tex-math>--><mml:math id="mml-ieqn-56"><mml:mn>512</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mn>512</mml:mn></mml:math>
<!--</alternatives>--></inline-formula>. Because the unique physical properties of holographic diffraction labels make them show different image features under different illumination, we expand the dataset to 800 images by applying 90-degree affine transformations to the original labels. These images are reduced to 256 &#x00D7; 256 and converted to the HSV color space to form the original sample tensors, while HOG tensor extraction is performed on the original images. The diffraction labels in the dataset are divided into 8 categories according to lighting environment, with each category containing 100 labels. Some test images under different lighting environments are shown in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>. The algorithm is implemented in MATLAB R2012a.</p>
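As one possible preprocessing sketch (the paper does not state which resampling method it uses; 2 × 2 block averaging here is an assumption), the size reduction to 256 × 256 can be written as:

```python
import numpy as np

def downsample_2x(img):
    """Halve each spatial dimension by 2x2 block averaging; one simple
    way to reduce a 512x512 label image to 256x256."""
    h, w = img.shape[:2]
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3)).squeeze()

img = np.random.rand(512, 512, 3)   # stand-in for a label image
small = downsample_2x(img)
print(small.shape)                  # (256, 256, 3)
```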
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Samples with different lighting environments</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-7.png"/>
</fig>
<sec id="s3_1">
<label>3.1</label>
<title>The Advantages of the Fusion Decomposition Matrix</title>
<p>This section analyzes the advantages of the similarity algorithm based on fused decomposition matrices. First, a classification experiment is performed using each decomposition matrix of the HSV tensor alone, and the recognition results are shown in <xref ref-type="table" rid="table-1">Tab. 1</xref>. The summation strategy of [<xref ref-type="bibr" rid="ref-25">25</xref>] is then evaluated for comparison. As shown in <xref ref-type="table" rid="table-1">Tab. 1</xref>, the recognition rate of the summation strategy is 87.96%, better than that of any decomposition matrix alone, while the recognition rate of our fusion algorithm reaches 93.61%, indicating a good recognition effect.</p>

<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Comparison of recognition rate</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Recognition algorithm</th>
<th>Recognition rate</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>decomposition matrix 1</bold></td>
<td><bold>37.68%</bold></td>
</tr>
<tr>
<td><bold>decomposition matrix 2</bold></td>
<td><bold>38.26%</bold></td>
</tr>
<tr>
<td><bold>decomposition matrix 3</bold></td>
<td><bold>73.33%</bold></td>
</tr>
<tr>
<td><bold>sum [<xref ref-type="bibr" rid="ref-25">25</xref>]</bold></td>
<td><bold>87.96%</bold></td>
</tr>
<tr>
<td><bold>our algorithm</bold></td>
<td><bold>93.61%</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The recognition abilities of the decomposition matrices differ, but this was not considered in [<xref ref-type="bibr" rid="ref-25">25</xref>]. Our strategy overcomes this shortcoming. As shown in <xref ref-type="table" rid="table-1">Tab. 1</xref>, the recognition rates of decomposition matrices 1, 2, and 3 differ from one another. In our method, the canonical correlation coefficients of the individual decomposition matrices are combined into one similarity vector, which is projected onto the PCA sub-space and classified by the K-nearest neighbor algorithm. The similarities of the decomposition matrices under our algorithm and the summation method are compared in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>, using five training samples and one test sample.</p>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>The similarity of the decomposition matrices between two algorithms</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-8.png"/>
</fig>
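<p>The fusion step described above can be sketched as follows. This is a minimal illustration with made-up similarity values, not the authors' implementation: each training sample contributes a three-dimensional similarity vector (one typical correlation coefficient per decomposition matrix), the vectors are projected into a PCA sub-space, and the test sample is assigned by the nearest neighbor.</p>

```python
import numpy as np

# Hypothetical similarity vectors psi_1..psi_5: one typical correlation
# coefficient per decomposition matrix (3 matrices) for each of 5 training samples.
train_sims = np.array([
    [0.91, 0.40, 0.75],
    [0.89, 0.42, 0.73],
    [0.90, 0.41, 0.74],
    [0.60, 0.85, 0.55],
    [0.58, 0.83, 0.57],
])
train_labels = np.array([0, 0, 0, 1, 1])
test_sim = np.array([0.61, 0.84, 0.56])  # similarity vector of the test sample

# PCA: center the training vectors and project onto the leading principal
# axes of the covariance matrix (here we keep 2 components).
mean = train_sims.mean(axis=0)
centered = train_sims - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:2]                      # principal axes, shape (2, 3)
train_proj = centered @ components.T     # training vectors in the PCA sub-space
test_proj = (test_sim - mean) @ components.T

# 1-nearest-neighbor classification in the sub-space.
dists = np.linalg.norm(train_proj - test_proj, axis=1)
predicted = train_labels[np.argmin(dists)]
print(predicted)  # the test vector lies next to the label-1 samples
```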
<p>Calculating the typical correlations between the test sample and the training samples, we obtain five three-dimensional similarity vectors <inline-formula id="ieqn-57">
<!--<alternatives><inline-graphic xlink:href="ieqn-57.tif"/><tex-math id="tex-ieqn-57"><![CDATA[{{\rm \psi }_1},{{\rm \psi }_2}, \cdots ,{{\rm \psi }_5}]]></tex-math>--><mml:math id="mml-ieqn-57"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x22EF;</mml:mo><mml:mo>,</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mn>5</mml:mn></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula>. Each element of a vector is the similarity between the test sample and one training sample for an individual decomposition matrix. As shown in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>, the similarity results obtained using the summation strategy are very close to each other: nearly identical similarity values are produced for all five training samples, which makes them difficult to distinguish. In contrast, our algorithm uses PCA to project the similarity vectors into another sub-space, which enlarges the distance between different samples. The test sample is then assigned to the training sample with the largest similarity. <inline-formula id="ieqn-58">
<!--<alternatives><inline-graphic xlink:href="ieqn-58.tif"/><tex-math id="tex-ieqn-58"><![CDATA[{{\rm \psi }_1}]]></tex-math>--><mml:math id="mml-ieqn-58"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represents the sample vectors in the sub-space, <inline-formula id="ieqn-59">
<!--<alternatives><inline-graphic xlink:href="ieqn-59.tif"/><tex-math id="tex-ieqn-59"><![CDATA[{{\rm \psi }_{\rm t}}]]></tex-math>--><mml:math id="mml-ieqn-59"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x03C8;</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">t</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math>
<!--</alternatives>--></inline-formula> represents the maximum-similarity vector in the sub-space. The projected sample vectors do not overlap with each other. Our fusion strategy achieves good performance, and its recognition rate reaches 93.61% on the dataset.</p>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>The Complementarity of the Double Tensor Feature</title>
<p>The complementarity of the double tensor is tested. First, only the HSV tensors of the original data are used for holographic image recognition on the dataset. The confusion matrix of the HSV tensor recognition is shown in <xref ref-type="fig" rid="fig-9">Fig. 9a</xref>. The HSV tensor does not distinguish well between &#x201C;Label 10&#x201D; and the other holographic labels; many &#x201C;Label 10&#x201D; samples are mis-identified as &#x201C;Label 6&#x201D; or &#x201C;Label 7&#x201D;. Second, the tensors constructed from the HOG features are tested on the same dataset, and the confusion matrix is shown in <xref ref-type="fig" rid="fig-9">Fig. 9b</xref>. The HOG tensor has a high recognition rate (95%) for &#x201C;Label 10&#x201D;. Although the HOG tensor performs worse than the HSV tensor in the other categories, the two tensors have a complementary effect.</p>
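<p>A confusion matrix such as those in Fig. 9 simply counts, for each true label (row), how often the samples were assigned to each predicted label (column). A minimal sketch with hypothetical labels, where one class mirrors the &#x201C;Label 10&#x201D; behaviour of the HSV tensor:</p>

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, n_classes):
    """Count how often each true class (row) is predicted as each class (column)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        cm[t, p] += 1
    return cm

# Hypothetical results: class 2 (a stand-in for "Label 10") is often
# confused with class 0 by one feature set.
y_true = [0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 0, 1, 1, 0, 0, 2, 2]
cm = confusion_matrix(y_true, y_pred, 3)
print(cm)

# Per-class recognition rate = diagonal counts divided by row sums.
rates = cm.diagonal() / cm.sum(axis=1)
print(rates)  # class 2 is recognized only half the time
```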
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>The confusion matrices for identification using different tensors (a) Confusion matrix using the HSV tensor (b) Confusion matrix using the HOG tensor (c) Confusion matrix using the double tensor</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="fig-9.png"/>
</fig>
<p>Based on the above experiment, the typical correlation coefficients of the HSV tensor and the HOG tensor are combined. A confusion matrix is obtained using this double tensor, as shown in <xref ref-type="fig" rid="fig-9">Fig. 9c</xref>. The double tensor balances the inconsistent recognition results of the holographic labels; to some extent, the misidentification of one tensor can be compensated by the other, so the combined effect is generally better than that of either tensor alone. The recognition results using each single tensor and the double tensor are shown in <xref ref-type="table" rid="table-2">Tab. 2</xref>. The recognition accuracy of the double tensor is greatly improved.</p>
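<p>The complementary effect of combining the two tensors' coefficients can be sketched with hypothetical per-label similarity scores (the values below are illustrative assumptions, not the paper's measured coefficients): a label that one tensor scores ambiguously can still be separated by the other tensor's confident score.</p>

```python
import numpy as np

# Hypothetical similarity scores of one test image against three candidate
# labels, one score vector per tensor (illustrative values only).
hsv_score = np.array([0.70, 0.68, 0.55])  # HSV tensor weakly favours label 0
hog_score = np.array([0.40, 0.45, 0.90])  # HOG tensor clearly favours label 2

# Each tensor alone can misjudge; combining the coefficients lets the
# confident tensor dominate the decision, which is the complementary effect.
combined = hsv_score + hog_score
print(int(np.argmax(hsv_score)), int(np.argmax(combined)))  # prints: 0 2
```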

<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Recognition results using single tensor and the double tensor</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Method</th>
<th>Recognition rate</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>HSV tensor</bold></td>
<td><bold>77.68%</bold></td>
</tr>
<tr>
<td><bold>HOG tensor</bold></td>
<td><bold>68.26%</bold></td>
</tr>
<tr>
<td><bold>Double tensor</bold></td>
<td><bold>93.61%</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Algorithm Comparison</title>
<p>Our algorithm is compared with the algorithms proposed in [<xref ref-type="bibr" rid="ref-17">17</xref>,<xref ref-type="bibr" rid="ref-26">26</xref>] on the same dataset after scaling, cropping, and illumination change of the samples, respectively. The recognition results for the holographic labels are shown in <xref ref-type="table" rid="table-3">Tabs. 3</xref>&#x2013;<xref ref-type="table" rid="table-5">5</xref>. The experimental results show that our algorithm is robust to scaling and illumination changes. The HSV tensor in the double tensor contains the color information of the sample, and the HOG tensor represents the gradient information; a higher recognition rate is achieved because of their complementarity. However, cropping causes a loss of sample information, which results in misjudgment. When the cropping ratio is small, our recognition accuracy remains higher than that of the compared algorithms [<xref ref-type="bibr" rid="ref-17">17</xref>,<xref ref-type="bibr" rid="ref-26">26</xref>].</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Comparison of recognition after sample scaling</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th></th>
<th>50%</th>
<th>70%</th>
<th>90%</th>
<th>130%</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>Wu et al. [<xref ref-type="bibr" rid="ref-26">26</xref>]</bold></td>
<td><bold>89.92%</bold></td>
<td><bold>90.23%</bold></td>
<td><bold>92.00%</bold></td>
<td><bold>92.00%</bold></td>
</tr>
<tr>
<td><bold>Danapur et al. [<xref ref-type="bibr" rid="ref-17">17</xref>]</bold></td>
<td><bold>91.86%</bold></td>
<td><bold>92.66%</bold></td>
<td><bold>93.04%</bold></td>
<td><bold>93.36%</bold></td>
</tr>
<tr>
<td><bold>Our algorithm</bold></td>
<td><bold>93.21%</bold></td>
<td><bold>93.55%</bold></td>
<td><bold>93.17%</bold></td>
<td><bold>94.36%</bold></td>
</tr>
</tbody>
</table>
</table-wrap>

<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Comparison of recognition after sample cropping</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th></th>
<th>10%</th>
<th>20%</th>
<th>50%</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>Wu et al. [<xref ref-type="bibr" rid="ref-26">26</xref>]</bold></td>
<td><bold>91.60%</bold></td>
<td><bold>90.13%</bold></td>
<td><bold>80.49%</bold></td>
</tr>
<tr>
<td><bold>Danapur et al. [<xref ref-type="bibr" rid="ref-17">17</xref>]</bold></td>
<td><bold>91.80%</bold></td>
<td><bold>86.64%</bold></td>
<td><bold>70.59%</bold></td>
</tr>
<tr>
<td><bold>Our algorithm</bold></td>
<td><bold>92.99%</bold></td>
<td><bold>87.37%</bold></td>
<td><bold>75.64%</bold></td>
</tr>
</tbody>
</table>
</table-wrap>

<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>Comparison of recognition after sample illumination angle change</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th></th>
<th>0<inline-formula id="ieqn-60">
<!--<alternatives><inline-graphic xlink:href="ieqn-60.tif"/><tex-math id="tex-ieqn-60"><![CDATA[^\circ]]></tex-math>--><mml:math id="mml-ieqn-60"><mml:msup><mml:mi></mml:mi><mml:mo>&#x2218;</mml:mo></mml:msup></mml:math>
<!--</alternatives>--></inline-formula></th>
<th>45<inline-formula id="ieqn-61">
<!--<alternatives><inline-graphic xlink:href="ieqn-61.tif"/><tex-math id="tex-ieqn-61"><![CDATA[^\circ]]></tex-math>--><mml:math id="mml-ieqn-61"><mml:msup><mml:mi></mml:mi><mml:mo>&#x2218;</mml:mo></mml:msup></mml:math>
<!--</alternatives>--></inline-formula></th>
<th>90<inline-formula id="ieqn-62">
<!--<alternatives><inline-graphic xlink:href="ieqn-62.tif"/><tex-math id="tex-ieqn-62"><![CDATA[^\circ]]></tex-math>--><mml:math id="mml-ieqn-62"><mml:msup><mml:mi></mml:mi><mml:mo>&#x2218;</mml:mo></mml:msup></mml:math>
<!--</alternatives>--></inline-formula></th>
<th>135<inline-formula id="ieqn-63">
<!--<alternatives><inline-graphic xlink:href="ieqn-63.tif"/><tex-math id="tex-ieqn-63"><![CDATA[^\circ]]></tex-math>--><mml:math id="mml-ieqn-63"><mml:msup><mml:mi></mml:mi><mml:mo>&#x2218;</mml:mo></mml:msup></mml:math>
<!--</alternatives>--></inline-formula></th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>Wu et al. [<xref ref-type="bibr" rid="ref-26">26</xref>]</bold></td>
<td><bold>92.24%</bold></td>
<td><bold>93.30%</bold></td>
<td><bold>93.06%</bold></td>
<td><bold>92.43%</bold></td>
</tr>
<tr>
<td><bold>Danapur et al. [<xref ref-type="bibr" rid="ref-17">17</xref>]</bold></td>
<td><bold>94.10%</bold></td>
<td><bold>94.99%</bold></td>
<td><bold>95.01%</bold></td>
<td><bold>93.32%</bold></td>
</tr>
<tr>
<td><bold>Our algorithm</bold></td>
<td><bold>94.45%</bold></td>
<td><bold>95.08%</bold></td>
<td><bold>95.08%</bold></td>
<td><bold>93.41%</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Conclusions</title>
<p>An algorithm for holographic diffraction label recognition using a complementary double tensor is proposed. First, an approach is proposed to generate the HOG feature tensor, which is combined with the HSV tensor of the original data to obtain the double tensor. Then, the double tensor is decomposed using HOSVD to obtain the decomposition matrices. Finally, typical correlation analysis is used to calculate the similarity between the decomposition matrices. The similarities of the decomposition matrices are fused according to their different recognition capabilities, and the similarity vectors are projected into a PCA sub-space for classification. The algorithm compensates for the deficiency of the original data tensor, improves the recognition rate, does not require a training process in advance, and has high computational efficiency. The experimental results show that the double tensor fusion algorithm performs efficient recognition of holographic diffraction labels.</p>
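<p>The HOSVD step in the pipeline summarized above can be sketched as computing, for each mode, the left singular vectors of the tensor unfolding. This is a generic illustration on a random toy tensor, not the paper's implementation:</p>

```python
import numpy as np

def hosvd_factors(tensor):
    """Mode-n factor matrices of a higher-order SVD: for each mode,
    unfold the tensor along that mode and take the left singular vectors."""
    factors = []
    for mode in range(tensor.ndim):
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u)
    return factors

# A toy 3rd-order "image tensor" (e.g. height x width x channels).
rng = np.random.default_rng(0)
t = rng.standard_normal((8, 6, 3))
u1, u2, u3 = hosvd_factors(t)
print(u1.shape, u2.shape, u3.shape)  # one orthogonal factor matrix per mode
```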
</sec>
</body>
<back>
<ack>
<p>We thank the anonymous reviewers and editors for their very constructive comments.</p>
</ack><fn-group>
<fn fn-type="other">
<p><bold>Funding Statement:</bold> This work was mainly supported by the Public Welfare Technology and Industry Project of the Zhejiang Provincial Science and Technology Department (Nos. LGG18F020013, LGG19F020016 and LGF21F020006).</p>
</fn>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1">
<label>[1]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>L. D.</given-names> 
<surname>Lathauwer</surname></string-name>, <string-name>
<given-names>D. M.</given-names> 
<surname>Bart</surname></string-name> and <string-name>
<given-names>J.</given-names> 
<surname>Vandewalle</surname></string-name>
</person-group>, &#x201C;
<article-title>A multilinear singular value decomposition</article-title>,&#x201D; 
<source>SIAM Journal on Matrix Analysis and Applications</source>, vol. 
<volume>21</volume>, no. 
<issue>4</issue>, pp. 
<fpage>1253</fpage>&#x2013;
<lpage>1278</lpage>, 
<year>2000</year>.</mixed-citation>
</ref>
<ref id="ref-2">
<label>[2]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>L.</given-names> 
<surname>Geng</surname></string-name>, <string-name>
<given-names>C.</given-names> 
<surname>Cui</surname></string-name>, <string-name>
<given-names>Q.</given-names> 
<surname>Guo</surname></string-name>, <string-name>
<given-names>S.</given-names> 
<surname>Niu</surname></string-name>, <string-name>
<given-names>G.</given-names> 
<surname>Zhang</surname></string-name> <etal>et al.</etal>
</person-group><italic>,</italic> &#x201C;
<article-title>Robust core tensor dictionary learning with modified gaussian mixture model for multispectral image restoration</article-title>,&#x201D; 
<source>Computers, Materials &#x0026; Continua</source>, vol. 
<volume>65</volume>, no. 
<issue>1</issue>, pp. 
<fpage>913</fpage>&#x2013;
<lpage>928</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-3">
<label>[3]</label><mixed-citation publication-type="book">
<person-group person-group-type="author"><string-name>
<given-names>L. T.</given-names> 
<surname>Thanh</surname></string-name> and <string-name>
<given-names>D. N. H.</given-names> 
<surname>Thanh</surname></string-name>
</person-group>, &#x201C;<chapter-title>An adaptive local thresholding roads segmentation method for satellite aerial images with normalized HSV and LAB color models</chapter-title>,&#x201D; 
<source>Intelligent Computing in Engineering</source>. 
<publisher-loc>Singapore</publisher-loc>: 
<publisher-name>Springer</publisher-name>, pp. 
<fpage>865</fpage>&#x2013;
<lpage>872</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-4">
<label>[4]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>L. R.</given-names> 
<surname>Tucker</surname></string-name>
</person-group>, &#x201C;
<article-title>Some mathematical notes on three-mode factor analysis</article-title>,&#x201D; 
<source>Psychometrika</source>, vol. 
<volume>31</volume>, no. 
<issue>3</issue>, pp. 
<fpage>279</fpage>&#x2013;
<lpage>311</lpage>, 
<year>1966</year>.</mixed-citation>
</ref>
<ref id="ref-5">
<label>[5]</label><mixed-citation publication-type="conf-proc">
<person-group person-group-type="author"><string-name>
<given-names>X. Y.</given-names> 
<surname>Zhang</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Xin</surname></string-name> and <string-name>
<given-names>C.</given-names> 
<surname>Lawrence</surname></string-name>
</person-group>, &#x201C;
<article-title>Nonlocal low-rank tensor factor analysis for image restoration</article-title>,&#x201D; in <conf-name>Proc. IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, 
<conf-loc>Salt Lake City, Utah, USA</conf-loc>, pp. 
<fpage>8232</fpage>&#x2013;
<lpage>8241</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-6">
<label>[6]</label><mixed-citation publication-type="conf-proc">
<person-group person-group-type="author"><string-name>
<given-names>P.</given-names> 
<surname>Roy</surname></string-name>, <string-name>
<given-names>B.</given-names> 
<surname>Parabattina</surname></string-name> and <string-name>
<given-names>D.</given-names> 
<surname>Pradip</surname></string-name>
</person-group>, &#x201C;
<article-title>Gender detection from human voice using tensor analysis</article-title>,&#x201D; in <conf-name>Proc. SLTU &#x0026; CCURL</conf-name>, 
<conf-loc>Marseille, France</conf-loc>, pp. 
<fpage>211</fpage>&#x2013;
<lpage>217</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-7">
<label>[7]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>B.</given-names> 
<surname>Wang</surname></string-name> and <string-name>
<given-names>P.</given-names> 
<surname>Zhao</surname></string-name>
</person-group>, &#x201C;
<article-title>An adaptive image watermarking method combining SVD and wang-landau sampling in DWT domain</article-title>,&#x201D; 
<source>Mathematics</source>, vol. 
<volume>8</volume>, no. 
<issue>5</issue>, pp. 
<fpage>691</fpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-8">
<label>[8]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>O. A.</given-names> 
<surname>Malik</surname></string-name> and <string-name>
<given-names>S.</given-names> 
<surname>Becker</surname></string-name>
</person-group>, &#x201C;
<article-title>Low-rank tucker decomposition of large tensors using tensorsketch</article-title>,&#x201D; 
<source>Advances in Neural Information Processing Systems</source>, vol. 
<volume>31</volume>, pp. 
<fpage>10096</fpage>&#x2013;
<lpage>10106</lpage>, 
<year>2018</year>.</mixed-citation>
</ref>
<ref id="ref-9">
<label>[9]</label><mixed-citation publication-type="conf-proc">
<person-group person-group-type="author"><string-name>
<given-names>M. A. O.</given-names> 
<surname>Vasilescu</surname></string-name> and <string-name>
<given-names>D.</given-names> 
<surname>Terzopoulos</surname></string-name>
</person-group>, &#x201C;
<article-title> Multilinear subspace analysis of image ensembles</article-title>,&#x201D; in <conf-name>Proc. IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, 
<conf-loc>Madison, Wisconsin, USA</conf-loc>, pp. 
<fpage>II</fpage>&#x2013;
<lpage>93</lpage>, 
<year>2003</year>.</mixed-citation>
</ref>
<ref id="ref-10">
<label>[10]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>L.</given-names> 
<surname>Feng</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Liu</surname></string-name>, <string-name>
<given-names>L.</given-names> 
<surname>Chen</surname></string-name>, <string-name>
<given-names>X.</given-names> 
<surname>Zhang</surname></string-name> and <string-name>
<given-names>C.</given-names> 
<surname>Zhu</surname></string-name>
</person-group>, &#x201C;
<article-title>Robust block tensor principal component analysis</article-title>,&#x201D; 
<source>Signal Processing</source>, vol. 
<volume>166</volume>, no. 
<issue>13</issue>, pp. 
<fpage>107271</fpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-11">
<label>[11]</label><mixed-citation publication-type="other">
<person-group person-group-type="author"><string-name>
<given-names>D.</given-names> 
<surname>Cai</surname></string-name>, <string-name>
<given-names>X. F.</given-names> 
<surname>He</surname></string-name> and <string-name>
<given-names>J. W.</given-names> 
<surname>Han</surname></string-name>
</person-group>, 
<article-title>Subspace Learning Based on Tensor Analysis</article-title>. 
<year>2005</year>. [Online]. Available at: <uri>http://hdl.handle.net/2142/11025</uri>.</mixed-citation>
</ref>
<ref id="ref-12">
<label>[12]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>D. T.</given-names> 
<surname>Tran</surname></string-name>, <string-name>
<given-names>M.</given-names> 
<surname>Gabbouj</surname></string-name> and <string-name>
<given-names>A.</given-names> 
<surname>Iosifidis</surname></string-name>
</person-group>, &#x201C;
<article-title>Multilinear class-specific discriminant analysis</article-title>,&#x201D; 
<source>Pattern Recognition Letters</source>, vol. 
<volume>100</volume>, pp. 
<fpage>131</fpage>&#x2013;
<lpage>136</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-13">
<label>[13]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>E.</given-names> 
<surname>Stoudenmire</surname></string-name> and <string-name>
<given-names>D. J.</given-names> 
<surname>Schwab</surname></string-name>
</person-group>, &#x201C;
<article-title>Supervised learning with tensor networks</article-title>,&#x201D; 
<source>Advances in Neural Information Processing Systems</source>, vol. 
<volume>29</volume>, pp. 
<fpage>4799</fpage>&#x2013;
<lpage>4807</lpage>, 
<year>2016</year>.</mixed-citation>
</ref>
<ref id="ref-14">
<label>[14]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>J.</given-names> 
<surname>Hagemann</surname></string-name> and <string-name>
<given-names>T.</given-names> 
<surname>Salditt</surname></string-name>
</person-group>, &#x201C;
<article-title>The fluence-resolution relationship in holographic and coherent diffractive imaging</article-title>,&#x201D; 
<source>Journal of Applied Crystallography</source>, vol. 
<volume>50</volume>, no. 
<issue>2</issue>, pp. 
<fpage>531</fpage>&#x2013;
<lpage>538</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-15">
<label>[15]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>R. Y.</given-names> 
<surname>Chen</surname></string-name>, <string-name>
<given-names>L. L.</given-names> 
<surname>Pan</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Zhou</surname></string-name> and <string-name>
<given-names>Q. H.</given-names> 
<surname>Lei</surname></string-name>
</person-group>, &#x201C;
<article-title>Image retrieval based on deep feature extraction and reduction with improved CNN and PCA</article-title>,&#x201D; 
<source>Journal of Information Hiding and Privacy Protection</source>, vol. 
<volume>2</volume>, no. 
<issue>2</issue>, pp. 
<fpage>67</fpage>&#x2013;
<lpage>76</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-16">
<label>[16]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>S.</given-names> 
<surname>Bakheet</surname></string-name>
</person-group>, &#x201C;
<article-title>An SVM framework for malignant melanoma detection based on optimized HOG features</article-title>,&#x201D; 
<source>Computation</source>, vol. 
<volume>5</volume>, no. 
<issue>4</issue>, pp. 
<fpage>4</fpage>&#x2013;
<lpage>17</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-17">
<label>[17]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>N.</given-names> 
<surname>Danapur</surname></string-name>, <string-name>
<given-names>S. A. A.</given-names> 
<surname>Dizaj</surname></string-name> and <string-name>
<given-names>V.</given-names> 
<surname>Rostami</surname></string-name>
</person-group>, &#x201C;
<article-title>An efficient image retrieval based on an integration of HSV, RLBP, and CENTRIST features using ensemble classifier learning</article-title>,&#x201D; 
<source>Multimedia Tools and Applications</source>, vol. 
<volume>79</volume>, no. 
<issue>33</issue>, pp. 
<fpage>24463</fpage>&#x2013;
<lpage>24486</lpage>, 
<year>2020</year>.</mixed-citation>
</ref>
<ref id="ref-18">
<label>[18]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>Y.</given-names> 
<surname>Wang</surname></string-name>, <string-name>
<given-names>X.</given-names> 
<surname>Zhu</surname></string-name> and <string-name>
<given-names>B.</given-names> 
<surname>Wu</surname></string-name>
</person-group>, &#x201C;
<article-title>Automatic detection of individual oil palm trees from UAV images using HOG features and an SVM classifier</article-title>,&#x201D; 
<source>International Journal of Remote Sensing</source>, vol. 
<volume>40</volume>, no. 
<issue>19</issue>, pp. 
<fpage>7356</fpage>&#x2013;
<lpage>7370</lpage>, 
<year>2019</year>.</mixed-citation>
</ref>
<ref id="ref-19">
<label>[19]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>N.</given-names> 
<surname>Jayashree</surname></string-name> and <string-name>
<given-names>R. S.</given-names> 
<surname>Bhuvaneswaran</surname></string-name>
</person-group>, &#x201C;
<article-title>A robust image watermarking scheme using z-transform, discrete wavelet transform and bidiagonal singular value decomposition</article-title>,&#x201D; 
<source>Computers, Materials &#x0026; Continua</source>, vol. 
<volume>58</volume>, no. 
<issue>1</issue>, pp. 
<fpage>263</fpage>&#x2013;
<lpage>285</lpage>, 
<year>2019</year>.</mixed-citation>
</ref>
<ref id="ref-20">
<label>[20]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>L. L.</given-names> 
<surname>Liu</surname></string-name> and <string-name>
<given-names>X. H.</given-names> 
<surname>Liang</surname></string-name>
</person-group>, &#x201C;
<article-title>QR code image correction based on improved canny operator and Hough transform</article-title>,&#x201D; 
<source>Electronic Design Engineering</source>, vol. 
<volume>25</volume>, no. 
<issue>19</issue>, pp. 
<fpage>183</fpage>&#x2013;
<lpage>186</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
<ref id="ref-21">
<label>[21]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>W.</given-names> 
<surname>Ma</surname></string-name>, <string-name>
<given-names>J.</given-names> 
<surname>Qin</surname></string-name>, <string-name>
<given-names>X.</given-names> 
<surname>Xiang</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Tan</surname></string-name>, <string-name>
<given-names>Y.</given-names> 
<surname>Luo</surname></string-name> <etal>et al.</etal>
</person-group><italic>,</italic> &#x201C;
<article-title>Adaptive median filtering algorithm based on divide and conquer and its application in captcha recognition</article-title>,&#x201D; 
<source>Computers, Materials &#x0026; Continua</source>, vol. 
<volume>58</volume>, no. 
<issue>3</issue>, pp. 
<fpage>665</fpage>&#x2013;
<lpage>677</lpage>, 
<year>2019</year>.</mixed-citation>
</ref>
<ref id="ref-22">
<label>[22]</label><mixed-citation publication-type="conf-proc">
<person-group person-group-type="author"><string-name>
<given-names>T.</given-names> 
<surname>Vo</surname></string-name>, <string-name>
<given-names>D.</given-names> 
<surname>Tran</surname></string-name>, <string-name>
<given-names>W.</given-names> 
<surname>Ma</surname></string-name> and <string-name>
<given-names>K.</given-names> 
<surname>Nguyen</surname></string-name>
</person-group>, &#x201C;
<article-title>Improved HOG descriptors in image classification with CP decomposition</article-title>,&#x201D; in <conf-name>Proc. ICNIP</conf-name>, 
<conf-loc>Berlin, Heidelberg, Germany</conf-loc>, pp. 
<fpage>384</fpage>&#x2013;
<lpage>391</lpage>, 
<year>2013</year>.</mixed-citation>
</ref>
<ref id="ref-23">
<label>[23]</label><mixed-citation publication-type="conf-proc">
<person-group person-group-type="author"><string-name>
<given-names>T. K.</given-names> 
<surname>Kim</surname></string-name> and <string-name>
<given-names>R.</given-names> 
<surname>Cipolla</surname></string-name>
</person-group>, &#x201C;
<article-title>Canonical correlation analysis of video volume tensors for action categorization and detection</article-title>,&#x201D; in <conf-name>Proc. IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, 
<conf-loc>Anchorage, Alaska, USA</conf-loc>, pp. 
<fpage>1415</fpage>&#x2013;
<lpage>1428</lpage>, 
<year>2008</year>.</mixed-citation>
</ref>
<ref id="ref-24">
<label>[24]</label><mixed-citation publication-type="conf-proc">
<person-group person-group-type="author"><string-name>
<given-names>T. K.</given-names> 
<surname>Kim</surname></string-name>, <string-name>
<given-names>S. F.</given-names> 
<surname>Wong</surname></string-name> and <string-name>
<given-names>R.</given-names> 
<surname>Cipolla</surname></string-name>
</person-group>, &#x201C;
<article-title>Tensor canonical correlation analysis for action classification</article-title>,&#x201D; in <conf-name>Proc. IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, 
<conf-loc>Minneapolis, Minnesota, USA</conf-loc>, pp. 
<fpage>1</fpage>&#x2013;
<lpage>8</lpage>, 
<year>2007</year>.</mixed-citation>
</ref>
<ref id="ref-25">
<label>[25]</label><mixed-citation publication-type="journal">
<person-group person-group-type="author"><string-name>
<given-names>A.</given-names> 
<surname>Ross</surname></string-name> and <string-name>
<given-names>A. K.</given-names> 
<surname>Jain</surname></string-name>
</person-group>, &#x201C;
<article-title>Information fusion in biometrics</article-title>,&#x201D; 
<source>Pattern Recognition Letters</source>, vol. 
<volume>24</volume>, no. 
<issue>13</issue>, pp. 
<fpage>2115</fpage>&#x2013;
<lpage>2125</lpage>, 
<year>2003</year>.</mixed-citation>
</ref>
<ref id="ref-26">
<label>[26]</label><mixed-citation publication-type="conf-proc">
<person-group person-group-type="author"><string-name>
<given-names>T.</given-names> 
<surname>Wu</surname></string-name>, <string-name>
<given-names>X.</given-names> 
<surname>Li</surname></string-name>, <string-name>
<given-names>B.</given-names> 
<surname>Wang</surname></string-name>, <string-name>
<given-names>J.</given-names> 
<surname>Yu</surname></string-name>, <string-name>
<given-names>P.</given-names> 
<surname>Li</surname></string-name> <etal>et al.</etal>
</person-group><italic>,</italic> &#x201C;
<article-title>A classification algorithm for hologram label based on improved SIFT features</article-title>,&#x201D; in <conf-name>Proc. ISPACS</conf-name>, 
<conf-loc>Xiamen, Fujian, China</conf-loc>, pp. 
<fpage>257</fpage>&#x2013;
<lpage>260</lpage>, 
<year>2017</year>.</mixed-citation>
</ref>
</ref-list>
</back>
</article>