<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">IASC</journal-id>
<journal-id journal-id-type="nlm-ta">IASC</journal-id>
<journal-id journal-id-type="publisher-id">IASC</journal-id>
<journal-title-group>
<journal-title>Intelligent Automation &#x0026; Soft Computing</journal-title>
</journal-title-group>
<issn pub-type="epub">2326-005X</issn>
<issn pub-type="ppub">1079-8587</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">24379</article-id>
<article-id pub-id-type="doi">10.32604/iasc.2022.024379</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Selective Cancellable Multi-Biometric Template Generation Scheme Based on Multi-Exposure Feature Fusion</article-title><alt-title alt-title-type="left-running-head">Selective Cancelable Multi-Biometric Template Generation Scheme Based on Multi-Exposure Feature Fusion</alt-title><alt-title alt-title-type="right-running-head">Selective Cancelable Multi-Biometric Template Generation Scheme Based on Multi-Exposure Feature Fusion</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Ayoup</surname><given-names>Ahmed M.</given-names></name>
<xref ref-type="aff" rid="aff-1">1</xref><email>ayoup.2012@hotmail.com</email>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Khalaf</surname><given-names>Ashraf A. M.</given-names></name>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Alraddady</surname><given-names>Fahad</given-names></name>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Abd El-Samie</surname><given-names>Fathi E.</given-names></name>
<xref ref-type="aff" rid="aff-3">3</xref>
</contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>El-Safai</surname><given-names>Walid</given-names></name>
<xref ref-type="aff" rid="aff-3">3</xref>
<xref ref-type="aff" rid="aff-5">5</xref>
</contrib>
<contrib id="author-6" contrib-type="author">
<name name-style="western"><surname>Serag Eldin</surname><given-names>Salwa M.</given-names></name>
<xref ref-type="aff" rid="aff-2">2</xref>
<xref ref-type="aff" rid="aff-4">4</xref>
</contrib>
<aff id="aff-1"><label>1</label><institution>Electrical Communications Engineering Department, Faculty of Engineering, Minia University</institution>, <addr-line>Minia, 61111</addr-line>, <country>Egypt</country></aff>
<aff id="aff-2"><label>2</label><institution>Department of Computer Engineering, College of Computers and Information Technology, Taif University</institution>, <addr-line>Taif 21944</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-3"><label>3</label><institution>Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University</institution>, <addr-line>Menoufia, 32952</addr-line>, <country>Egypt</country></aff>
<aff id="aff-4"><label>4</label><institution>Department of Electronics and Electrical Communications Engineering, Faculty of Engineering, Tanta University</institution>, <addr-line>Tanta</addr-line>, <country>Egypt</country></aff>
<aff id="aff-5"><label>5</label><institution>Security Engineering Laboratory, Department of Computer Science, Prince Sultan University</institution>, <addr-line>Riyadh, 11586</addr-line>, <country>Saudi Arabia</country></aff>
</contrib-group><author-notes><corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Ahmed M. Ayoup. Email: <email>ayoup.2012@hotmail.com</email></corresp></author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-12-24"><day>24</day>
<month>12</month>
<year>2021</year></pub-date>
<volume>33</volume>
<issue>1</issue>
<fpage>549</fpage>
<lpage>565</lpage>
<history>
<date date-type="received"><day>15</day><month>10</month><year>2021</year></date>
<date date-type="accepted"><day>16</day><month>11</month><year>2021</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2022 Ayoup et al.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Ayoup et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_IASC_24379.pdf"></self-uri>
<abstract>
<p>This article introduces a new cancellable multi-biometric system based on the combination of a selective encryption method and a deep-learning-based fusion technique. The biometric face image is processed with an automatic face segmentation algorithm (Viola-Jones), and the image of the selected eye is XORed with a PRNG (Pseudo-Random Number Generator) matrix. The output array is used to create a primary biometric template. This process changes the histogram of the selected eye image. Arnold&#x2019;s Cat Map is then used to scramble the pixels of the primary image only. The Arnold&#x2019;s-cat-map-deformed eye image is encrypted with the Advanced Encryption Standard (AES) before the biometric data are stored in the database. In addition, the same AES master key is used for the same person in the identity verification process to verify the biometric identity. This key is created by fusing the right-hand fingerprint and the right iris using deep learning technology. The deep learning fusion process can prevent attacks on the biometric system as a whole. In order to tolerate damage to the eye or fingerprint images, the design considers the other eye and fingerprint images as alternatives.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Viola-Jones Algorithm</kwd>
<kwd>PRNG</kwd>
<kwd>AES</kwd>
<kwd>Arnold&#x2019;s Cat Map</kwd>
<kwd>Deep Learning Fusion</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Biometric systems automatically recognize individuals based on unique characteristics or combinations of characteristics they possess. The development of biometric systems is based on fingerprints, facial features, voice, hand geometry, handwriting, and the retina. The biometric template database is protected to prevent unauthorized access. It is possible to realize secure dynamic transmission of biometric templates through strong encryption methods such as the Advanced Encryption Standard (AES) [<xref ref-type="bibr" rid="ref-1">1</xref>].</p>
<p>Encrypting all biometric data takes a long time. To speed up the encryption process, increase the strength of the encrypted templates, and achieve high security, partial or selective encryption of some parts of the original biometric image can be adopted. In addition, the biometric acquisition device is not perfectly reliable and may suffer a malfunction. Two types of error may exist, quantified by the False Acceptance Rate (FAR) and the False Rejection Rate (FRR) of the system. When the device rejects an authorized person, the FRR is incremented, and when the device accepts an unauthorized person, the FAR is incremented.</p>
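The FAR/FRR bookkeeping described above can be sketched as follows (a minimal illustration with hypothetical counts; the function name and numbers are not from the paper's experiments):

```python
def far_frr(impostor_accepts, impostor_attempts, genuine_rejects, genuine_attempts):
    """FAR: fraction of unauthorized attempts that were accepted.
    FRR: fraction of authorized attempts that were rejected."""
    return (impostor_accepts / impostor_attempts,
            genuine_rejects / genuine_attempts)

# Hypothetical counts, for illustration only
far, frr = far_frr(impostor_accepts=2, impostor_attempts=1000,
                   genuine_rejects=5, genuine_attempts=500)
```

A system tuned for security drives FAR down at the cost of a higher FRR, and vice versa; the Equal Error Rate (EER) is the operating point where the two coincide.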
<p>A multi-modal biometric identification system aims to combine multiple physical traits to reduce the FRR and FAR. Multiple biometrics are mandatory in large international biometric databases to accommodate newly developed security requirements that uni-modal biometrics cannot meet. A uni-modal biometric system recognizes people based on a single source of biometric information [<xref ref-type="bibr" rid="ref-2">2</xref>]. These types of systems are often affected by several problems such as noisy sensor data, lack of individuality, static representation, circumvention, and lack of universality. Cancellable biometrics is one of the main trends for protecting biometric templates.</p>
<p>This article is split into five parts. In the first part, we introduce the concept of cancellable biometrics, in addition to biometric protection and privacy problems. In the second part, we introduce some associated research works in this field. The foremost contributions of the suggested approach are presented in the third part. The fourth part gives the simulation results, and finally, the fifth part presents the conclusion and future work.</p>
<p>Biometric structures are stable authentication structures. Unfortunately, fraudsters have devised new methods for compromising the integrity of biometric systems. The first difficulty with biometric authentication is that anyone can forge a fake hand from prosthetic wax. It is possible to show a photograph of a certain person in front of a digital camera to pass the face verification tool, to use a patterned contact lens to pass the iris scanner, and so on. There are eight applicable attacks on a biometric system. <xref ref-type="fig" rid="fig-1">Fig. 1</xref> shows the various points of attack on the modules of a biometric device and indicates the applicable attacks on biometric devices and the applicable solutions.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Verification of different biometric attacks [<xref ref-type="bibr" rid="ref-3">3</xref>]</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-1.png"/>
</fig>
<p>This work presents a cancellable multi-biometric system. The novelty of the system compared to previous research is summarized as follows:<list list-type="order"><list-item>
<p>The fingerprint and iris features are used as the initial key for the AES standard. In addition, the merged key is converted to hexadecimal (HEX key) format to be added to the AES code.</p></list-item><list-item>
<p>The evaluation metrics such as correlation, entropy, etc., are higher and more favorable in security tests.</p></list-item><list-item>
<p>The system performance in the presence of noise, measured in terms of the Receiver Operating Characteristic (ROC) and the Area under the ROC (AROC), is good. This makes the proposed technology more suitable for real-time applications.</p></list-item><list-item>
<p>The proposed technology depends on the Viola-Jones algorithm, which is a fast face detection technique. It produces good detection results suitable for machine learning applications. Moreover, the proposed algorithm&#x2019;s enrollment time is 3.28 s, compared to an enrollment time of 7.42 s for the algorithm in [<xref ref-type="bibr" rid="ref-4">4</xref>]. In addition, the proposed algorithm is better than the algorithm in [<xref ref-type="bibr" rid="ref-5">5</xref>] in terms of Equal Error Rate (EER) and AROC.</p></list-item><list-item>
<p>The proposed algorithm speeds up the database search process by generating the applicant key and using the key stored in the database to find out if the person is authenticated or not, and in the case of unauthorized persons, the authentication rejects the person.</p></list-item></list></p>
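Contribution 1 above states that the fused fingerprint-iris features are converted to hexadecimal (HEX) format to serve as the AES key. A minimal sketch of that conversion step is shown below; the paper does not specify how the fused feature vector is reduced to a fixed key length, so the SHA-256 truncation used here is an assumption for illustration:

```python
import hashlib

def fused_features_to_aes_key(fused_features: bytes) -> str:
    """Reduce a fused fingerprint-iris feature vector to a 128-bit AES key
    in HEX format. The reduction step is not specified in the text;
    SHA-256 truncation is an assumption used here for illustration."""
    digest = hashlib.sha256(fused_features).digest()
    return digest[:16].hex()  # 16 bytes -> 32 hex characters (AES-128)

key_hex = fused_features_to_aes_key(b"example fused feature bytes")
```

During verification, the same fusion and conversion are repeated for the applicant, and the resulting HEX key is compared against the stored one.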
</sec>
<sec id="s2">
<label>2</label>
<title>Related Work</title>
<p>Numerous efforts have been made by different researchers in the field of biometric security. Neha Singh et al. [<xref ref-type="bibr" rid="ref-3">3</xref>] introduced the idea of biometric security and used biometrics to confirm and check individual authentication. Such a template can be compromised if any unauthorized person obtains it.</p>
<p>Rupesh Wagh et al. [<xref ref-type="bibr" rid="ref-4">4</xref>] presented a security framework that uses a proprietary coding technology to scramble biometric templates. Cryptography is used for information security, and it is applied within the biometric framework. Biometric security is the main concern of that paper. The multimodal biometrics depend on two modalities: fingerprint and iris recognition. The features are extracted from the biometric traits, projected using an accentuation plane combination, and the concatenated vector is blended using distinct security innovations. The fusion was completed using methods of salient feature separation [<xref ref-type="bibr" rid="ref-4">4</xref>].</p>
<p>Mamta Ahlawat et al. [<xref ref-type="bibr" rid="ref-5">5</xref>] suggested a multiple-biometrics framework depending on eye and face colors and showed it to be effective compared with current uni-biometric systems. The combination of multiple biometrics enhances the performance efficiency of the biometric system [<xref ref-type="bibr" rid="ref-5">5</xref>]. Shweta Malhotra et al. [<xref ref-type="bibr" rid="ref-6">6</xref>] offered a way to address the issues of invisible watermarking combined with cryptography. The biometric characteristics are protected using undetectable watermarks and cryptography. The encryption algorithm utilized is strong and appropriate for multimedia and text information. It is possible to use encryption methods like AES and the Modified Advanced Encryption Standard (MAES) [<xref ref-type="bibr" rid="ref-6">6</xref>].</p>
<p>Vincenzo Conti et al. [<xref ref-type="bibr" rid="ref-7">7</xref>] presented an innovative multiple-trait identity framework depending on iris and fingerprints. Ahmed Ayoup et al. [<xref ref-type="bibr" rid="ref-8">8</xref>] presented an Efficient Selective Image Encryption (ESIE) method based on a combination of pseudo-random number sequences, Arnold&#x2019;s Cat Map, and AES technology. This method exploits the randomness and good correlation properties of pseudo-random number sequences, the speed of Arnold&#x2019;s cat map, and the reliability of the AES. Therefore, the presented ESIE technique aims to decrease the execution time of the encryption process. The selective encryption techniques are based on image compression. Moreover, the total runtime for the presented method is 7.44&#x2005;s for the selected 64 combinations of input face biometrics.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>The Proposed Cancellable Multi-Biometric Algorithm</title>
<p>The proposed technique follows six steps. First, the input face image is captured from the face sensor and processed by the Viola-Jones automatic face detection algorithm, which determines the position of the human face, mouth, nose, and left and right eyes, regardless of size. In this paper, we select the left or right eye to create the templates to be stored. To avoid attacks during the transfer of data from the sensor, the selected eye is XORed with a PRNG matrix generated from a unique seed value to produce the primary image [<xref ref-type="bibr" rid="ref-8">8</xref>]. This process reduces the correlation between the original eye pixels and the encrypted eye pixels. Moreover, it changes the histogram of the primary image and spreads it over the entire band. The primary image is then encrypted with Arnold&#x2019;s cat map, which achieves more security, although it has the drawback of needing several rounds to do so.</p>
<p>To further scramble the image, the AES encryption algorithm is applied to produce the final template. The proposed initial-key generation technique is based on fusing the person&#x2019;s right iris and fingerprint features using deep learning to obtain the initial key of the AES, which is stored with the cancellable eye template in the database, as shown in <xref ref-type="fig" rid="fig-2">Figs. 2</xref> and <xref ref-type="fig" rid="fig-3">3</xref>.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Layout of the proposed cancellable biometric generation technique</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-2.png"/>
</fig>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>The proposed key generation technology</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-3.png"/>
</fig>
<p><italic>Step 1: Viola-Jones Face Detection Algorithm</italic></p>
<p>The first object recognition framework is the Viola-Jones segmentation algorithm, which provides a competitive real-time object recognition performance. It was proposed by Paul Viola and Michael Jones in 2001 [<xref ref-type="bibr" rid="ref-9">9</xref>&#x2013;<xref ref-type="bibr" rid="ref-13">13</xref>].</p>
<p>The Viola-Jones algorithm is utilized for the automatic segmentation of the face image; it identifies and localizes the human face regardless of its size, position, and surroundings. Face detection is a technique that detects the human face and ignores anything else, such as trees, bodies, and buildings. It is commonly used in mobile phone cameras, safety perimeters, and many other applications. The arithmetic speed increases due to the Haar features and the AdaBoost machine learning method, so the face can be detected in a frame within milliseconds [<xref ref-type="bibr" rid="ref-14">14</xref>]. A model can be trained to detect objects from various categories. First, the pixel values of a grayscale image inside the black rectangles are summed. Then, this sum is subtracted from the sum over the white rectangles.</p>
<p>Finally, the result is compared with a specified threshold. If the condition is met, then the feature is considered a hit. In this paper, a MATLAB built-in Application Programming Interface (API) is used to detect the face, upper body, nose, mouth, eyes, etc. For the Viola-Jones face detection algorithm, the Computer Vision System Toolbox contains the vision.CascadeObjectDetector system object, which detects objects based on the above-mentioned algorithm. MATLAB 2014 is used in this work, with the Computer Vision System Toolbox in the default toolbox list. This method of object recognition combines four key concepts:<list list-type="simple"><list-item><label>a)</label>
<p>Simple rectangular elements, known as Haar features, are used.</p></list-item><list-item><label>b)</label>
<p>The integral image is used for rapid feature computation.</p></list-item><list-item><label>c)</label>
<p>The AdaBoost (adaptive boosting) machine learning method is used.</p></list-item><list-item><label>d)</label>
<p>A cascaded classifier is used to combine several features efficiently.</p></list-item></list></p>
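The black-minus-white rectangle sums described above become cheap once the integral image is available. The sketch below illustrates the trick (the 4 × 4 toy image and the two-rectangle layout are arbitrary choices for illustration, not the paper's implementation):

```python
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[0:r+1, 0:c+1]
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] in O(1) using four integral-image lookups
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

# A two-rectangle Haar feature on a toy 4 x 4 "image":
# sum over the left (white) half minus sum over the right (black) half
img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 4, 2) - rect_sum(ii, 0, 2, 4, 4)
```

Because every rectangle sum costs only four lookups regardless of its size, thousands of Haar features can be evaluated per window, which is what makes the cascade fast.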
<p><italic>Step 2: Generation of Pseudo-Random Numbers</italic></p>
<p>The PRNG depends on a Linear Feedback Shift Register (LFSR). It should have unpredictable values [<xref ref-type="bibr" rid="ref-15">15</xref>]. Under normal circumstances, an observer cannot predict the next number in the sequence. The initial seed yields unpredictable outputs (assuming that no one knows the initial conditions). This characteristic is sufficient to meet the requirements of randomness.</p>
<p>The generator is pseudo-random. It is used in various cryptographic applications such as data and media encryption keys, and banking security. A maximum-length linear feedback shift register produces an m-sequence (i.e., it iterates over all possible 2<italic><sup>n</sup></italic>&#x2212;1 nonzero states). The sequence generated by this method appears random, and the sequence period is (2<italic><sup>n</sup></italic>&#x2212;1), where <italic>n</italic> is the number of shift register stages in the circuit, which equals the order of the generator polynomial <italic>P</italic>(<italic>x</italic>). The total number of internal random states generated by the LFSR is therefore 2<italic><sup>n</sup></italic>&#x2212;1.</p>
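A maximal-length LFSR of the kind described above can be sketched as follows. The tap configuration (stages 5 and 3) is an assumption chosen because it is a known maximal-length setup for a 5-stage register; the text's own generator polynomial appears later in Eq. (5):

```python
def lfsr_states(seed, taps, n):
    """Fibonacci LFSR: return the sequence of internal n-bit states.
    A maximal-length tap configuration visits all 2**n - 1 nonzero states."""
    state = seed
    states = []
    for _ in range(2 ** n - 1):
        states.append(state)
        fb = 0
        for t in taps:                      # feedback = XOR of the tapped stages
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & (2 ** n - 1)
    return states

# 5-stage register, taps at stages 5 and 3 -- a maximal-length
# configuration for n = 5, used here for illustration
states = lfsr_states(seed=0b00001, taps=(5, 3), n=5)
```

With a maximal-length configuration the register cycles through all 31 nonzero states before repeating, matching the 2<sup>n</sup>&#x2212;1 period stated above.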
<p>The state of the shift register in clock pulse <italic>i</italic> is a vector <italic>b<sub>i</sub></italic> of finite length;<disp-formula id="eqn-1"><label>(1)</label>
<mml:math id="mml-eqn-1" display="block"><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula></p>
<p>The output <italic>c<sub>i</sub></italic> at the i-th clock pulse is a combination (AND) of these states.<disp-formula id="eqn-2"><label>(2)</label>
<mml:math id="mml-eqn-2" display="block"><mml:msub><mml:mi>c</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mi mathvariant="normal">&#x0026;</mml:mi><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mi mathvariant="normal">&#x0026;</mml:mi><mml:mspace width="thickmathspace" /><mml:mo>&#x2026;</mml:mo><mml:mo>.</mml:mo><mml:mi mathvariant="normal">&#x0026;</mml:mi><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mi mathvariant="normal">&#x0026;</mml:mi><mml:msub><mml:mi>b</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula></p>
<p>The pseudo-random number sequence <italic>c<sub>i</sub></italic> of length <italic>M</italic> is expressed as follows:<disp-formula id="eqn-3"><label>(3)</label>
<mml:math id="mml-eqn-3" display="block"><mml:msub><mml:mi>C</mml:mi><mml:mi>M</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:msub><mml:mi>c</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:msub><mml:mi>c</mml:mi><mml:mn>3</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:msub><mml:mi>c</mml:mi><mml:mi>M</mml:mi></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math>
</disp-formula></p>
<p>For an <italic>M</italic>&#x2009;&#x00D7;&#x2009;<italic>M</italic> input image, a <italic>C<sub>M</sub></italic><sub>&#x00D7;<italic>M</italic></sub> pseudo-random number matrix must be generated, and the rows of the coding matrix <italic>C</italic><italic><sub>M</sub></italic><sub>&#x00D7;<italic>M</italic></sub> are pseudo-random numbers. The length of <italic>C<sub>M</sub></italic> in the random number sequence is <italic>M</italic>.<disp-formula id="eqn-4"><label>(4)</label>
<mml:math id="mml-eqn-4" display="block"><mml:msub><mml:mi>C</mml:mi><mml:mrow><mml:mi>M</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>M</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mspace width="thickmathspace" /><mml:mo>&#x003A;</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>r</mml:mi><mml:mi>c</mml:mi><mml:mi>l</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>h</mml:mi><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mi>t</mml:mi><mml:mi>b</mml:mi><mml:mi>y</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>&#x2217;</mml:mo></mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mtext>&#xA0;</mml:mtext><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mtext>&#xA0;</mml:mtext><mml:msub><mml:mi>C</mml:mi><mml:mi>M</mml:mi></mml:msub><mml:mspace width="thickmathspace" /><mml:mspace width="thickmathspace" /></mml:math>
</disp-formula>where <italic>i</italic> is the row index of the <italic>C<sub>M&#x00D7;M</sub></italic> matrix, and <italic>L</italic> is the circular shift step applied to the pseudo-random sequence.</p>
<p>The output shifted versions have low auto-correlation, which improves the encryption process. When performing the XOR operation on the <italic>C</italic><italic><sub>M&#x00D7;M</sub> </italic>matrix, this very important property plays an important role in encrypting the input image, because it tends to increase entropy. The encrypted image has a low autocorrelation with the original image, and a uniform histogram.</p>
<p>An <italic>n</italic>-stage linear feedback shift register is used to generate C<italic><sub>M</sub></italic> from the PRN sequence. For a simple 32&#x2009;&#x00D7;&#x2009;32 image, a 32-bit PRN sequence must be generated. In this case, <italic>n&#x2009;</italic>&#x003D;<italic>&#x2009;</italic>5 LFSR stages are used. The LFSR satisfies the generator polynomial <italic>P(x)</italic> in <xref ref-type="disp-formula" rid="eqn-5">Eq. (5)</xref> for the maximum-length sequence. We generate 2<sup>5</sup>&#x2212;1 &#x003D; 31 internal random states. <xref ref-type="fig" rid="fig-4">Fig. 4</xref> shows the block diagram of the developed LFSR with <italic>n&#x2009;&#x003D;&#x2009;</italic>5 stages. The <italic>C<sub>32</sub></italic> sequence has 31 random states, so the first state is repeated to complete the 32-sample sequence.<disp-formula id="eqn-5"><label>(5)</label>
<mml:math id="mml-eqn-5" display="block"><mml:mi>P</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>x</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mtext>&#xA0;</mml:mtext><mml:msup><mml:mi>x</mml:mi><mml:mn>7</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mn>5</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mi>x</mml:mi><mml:mn>3</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:mi>x</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:math>
</disp-formula></p>
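The row-wise circular shifting of Eq. (4) can be sketched as follows (a minimal illustration; the stand-in sequence `np.arange(8)` replaces a real LFSR output):

```python
import numpy as np

def coding_matrix(c_m, shift_step):
    """Eq. (4): row i of the M x M coding matrix is the PRN sequence c_m
    circularly shifted by (i - 1) * shift_step samples."""
    M = len(c_m)
    return np.stack([np.roll(c_m, (i - 1) * shift_step)
                     for i in range(1, M + 1)])

c = np.arange(8)                     # stand-in PRN sequence, M = 8
C = coding_matrix(c, shift_step=5)   # the text uses a shift step of 5
```

Because adjacent rows differ by a fixed circular shift, their cross-correlation is low, which is the property exploited when XORing the matrix with the image.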
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Layout of the designed <italic>n&#x2009;</italic>&#x003D;<italic>&#x2009;</italic>5 stage LFSR for generating a polynomial [<xref ref-type="bibr" rid="ref-15">15</xref>]</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-4.png"/>
</fig>
<p>We have a 32&#x2009;&#x00D7;&#x2009;32 PRN coding matrix. Each row of the <italic>C</italic><sub>32<italic>&#x00D7;</italic>32</sub> coding matrix is a circularly shifted version of the original <italic>C</italic><sub>32</sub> sequence, with a shift step of 5. The difference between two adjacent PRN sequences in the coding matrix is thus a shift of 5 samples, which minimizes the correlation between adjacent pixels in the same column. <xref ref-type="fig" rid="fig-5">Fig. 5</xref> shows the autocorrelation between the original PRN sequence and its consecutively shifted versions.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Correlation among the original PRN sequence and its 5 consecutive modified versions [<xref ref-type="bibr" rid="ref-15">15</xref>]</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-5.png"/>
</fig>
<p><italic>Step 3: Utilization of PRN sequence for image encryption</italic></p>
<p>First, we generate the PRN code. The <italic>M</italic>&#x2009;&#x00D7;&#x2009;<italic>M</italic> eye image is XORed with the generated C<italic><sub>M&#x00D7;M</sub></italic> PRN matrix to generate the primary encrypted image. This process reduces the correlation between the biometric eye pixels and the encrypted eye pixels. It also changes the histogram and spreads it over the entire range.</p>
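The XOR masking step can be sketched as follows (the NumPy `default_rng` generator here is only a stand-in for the LFSR-based PRN matrix of the previous step, and the 32 × 32 array is a stand-in for the selected eye region):

```python
import numpy as np

rng = np.random.default_rng(42)  # stand-in for the LFSR-based PRN generator
eye_img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)     # selected eye region
prn_matrix = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # C_{MxM}

primary = eye_img ^ prn_matrix    # primary encrypted image
recovered = primary ^ prn_matrix  # XOR is self-inverse
```

Since XOR is its own inverse, an authorized party holding the same PRN matrix recovers the eye image exactly, while the stored template remains decorrelated from the original.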
<p><italic>Step 4: Arnold&#x2019;s Cat Map</italic></p>
<p>Arnold&#x2019;s cat map is used for rearranging the digital image matrix, in a process called scrambling. When encoding images, we adopt Arnold&#x2019;s cat map technology [<xref ref-type="bibr" rid="ref-15">15</xref>&#x2013;<xref ref-type="bibr" rid="ref-17">17</xref>]. It is a safe and effective technique. The image undergoes a random-looking change in the original pixel arrangement. After a certain number of iterations, the original image reappears, and since the map only permutes pixels, the histogram of the coded image matches the histogram of the original one [<xref ref-type="bibr" rid="ref-16">16</xref>].</p>
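The permutation performed by Arnold's cat map can be sketched as follows (a minimal per-pixel implementation on a toy 4 × 4 image; the image contents are arbitrary):

```python
import numpy as np

def arnold_cat_map(img, rounds=1):
    """Arnold's cat map on a square N x N image:
    pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    N = img.shape[0]
    out = img
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                nxt[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = nxt
    return out

scrambled_img = arnold_cat_map(np.arange(16).reshape(4, 4))
```

The transform matrix has determinant 1, so the map is a bijection: pixel values are only moved, never changed, which is exactly why the histogram is preserved and why several rounds (followed by AES in this scheme) are needed for real security.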
<p><italic>Step 5: Image Fusion Using Deep Learning</italic></p>
<p>The merging process is performed between the fingerprint and iris images to form the initial AES key. In this paper, the fusion of iris and fingerprint images based on deep learning is performed with Convolutional Neural Networks (CNNs) [<xref ref-type="bibr" rid="ref-18">18</xref>]. The convolutional layer contains filters used to perform two-dimensional (2D) convolution on the input image. The resulting features of the convolutional layer vary depending on the convolution filters used. This concept is very suitable for tumor detection, because it can capture any subtle changes in the local activity levels in the image. <xref ref-type="fig" rid="fig-6">Fig. 6</xref> shows an example of what happens in the convolutional layer.</p>
<p>It is believed that the CNN can learn complex input-output associations given sufficient training data. The CNN obtains the optimal model parameters through an optimization process, so that the prediction is as close as possible to the desired target under the loss function. With the help of the transformation function <italic>f</italic>, the input <italic>x</italic> is mapped to the desired output <italic>y</italic>. The CNN can be trained to approximate the function <italic>f</italic> by minimizing the error between the expected output and the obtained output <italic>y</italic>. The goal is to minimize the loss function through the optimization process to achieve sufficient learning.</p>
<p><xref ref-type="fig" rid="fig-7">Fig. 7</xref> shows a general overview of the proposed fusion method. First, the input images are converted to the YCbCr color space. The CNN is applied to the luminance channel of the input image, because the structural details are displayed in this channel. Another reason is that brightness changes are more pronounced in the luminance channel than in the chrominance channels (Cb and Cr). On the other hand, weighted blending is applied to the chrominance channels. The resulting luminance channel is combined with the chrominance channels generated by weighted blending.</p>
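The weighted blending applied to the chrominance channels can be sketched as follows. The text does not spell out the blending weights, so the weight |c &#x2212; 128| (distance from the neutral chroma value, a choice used by DeepFuse-style methods) is an assumption here:

```python
import numpy as np

def fuse_chrominance(c1, c2, neutral=128.0):
    """Weighted blending of two chrominance channels (Cb or Cr).
    Weight = |c - 128|, i.e., how far each value is from neutral chroma.
    This particular weighting is an assumption, not taken from the text."""
    c1 = np.asarray(c1, dtype=np.float64)
    c2 = np.asarray(c2, dtype=np.float64)
    w1 = np.abs(c1 - neutral)
    w2 = np.abs(c2 - neutral)
    denom = w1 + w2
    safe = np.where(denom == 0, 1.0, denom)      # avoid division by zero
    fused = (w1 * c1 + w2 * c2) / safe
    return np.where(denom == 0, neutral, fused)  # both neutral -> neutral

fused = fuse_chrominance(np.array([[140.0]]), np.array([[120.0]]))
```

Pixels with stronger chroma (further from 128) contribute more to the fused channel, while two neutral inputs simply stay neutral.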
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Example of the process taking place in the convolutional layer [<xref ref-type="bibr" rid="ref-19">19</xref>]</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-6.png"/>
</fig>
<p>The following subsections describe the network architecture, loss function, and learning process. The DeepFuse CNN fusion method shows that the learning ability of the CNN is largely determined by the architecture and the choice of the loss function. The proposed image fusion network architecture is shown in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>. The proposed architecture consists of three parts: feature extraction layer, merging layer, and reconstruction layer. <xref ref-type="fig" rid="fig-8">Fig. 8</xref> illustrates the fusion process with deep learning.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Suggested deep learning fusion [<xref ref-type="bibr" rid="ref-19">19</xref>]</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-7.png"/>
</fig>
<p>As shown in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>, the underexposed and overexposed images (Y1 and Y2) are sent through separate branches (branch 1 (C1) is composed of C11 and C21, and branch 2 (C2) is composed of C12 and C22). The first level (C11 and C12) contains 5&#x2009;&#x00D7;&#x2009;5 filters for extracting low-level features such as edges and corners. The weights of the two branches are tied: C11 and C12 (and likewise C21 and C22) share the same weights.</p>
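<p>Weight tying can be illustrated with the following toy fragment (assumptions only, not the paper&#x2019;s code): a single 5&#x2009;&#x00D7;&#x2009;5 filter is applied to both exposures, so the two branches extract the same kind of low-level features, and a uniform brightness offset only shifts the responses by a constant.</p>

```python
import numpy as np

# Sketch of weight tying: the same 5x5 filter is applied to both the
# underexposed (Y1) and overexposed (Y2) luminance inputs.
def conv2d(img, kern):
    # valid-mode 2D convolution (no flipping; cross-correlation form)
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

rng = np.random.default_rng(1)
shared_filter = rng.standard_normal((5, 5))   # one filter, shared by branches
y1 = rng.uniform(size=(8, 8))                 # underexposed input
y2 = y1 + 0.3                                 # same scene, brighter exposure

f1 = conv2d(y1, shared_filter)                # C11 output
f2 = conv2d(y2, shared_filter)                # C12 output (shared weights)
```

Because convolution is linear, the brightness offset appears in the tied responses only as the constant <italic>0.3 &#x00D7; sum(filter)</italic>, which is consistent with the brightness-invariance argument below.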
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Suggested three-part architecture: feature extraction layer, fusion layer, and reconstruction layer [<xref ref-type="bibr" rid="ref-19">19</xref>]</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-8.png"/>
</fig>
<p>This architecture has three advantages. First, the network is forced to learn the same features for both input images, so that the output feature maps of the C1 and C2 branches contain the same type of features. Hence, we can simply combine the corresponding feature maps by addition. That is, the first feature map (F11) in <xref ref-type="fig" rid="fig-1">Fig. 1</xref> and the first feature map (F21) in <xref ref-type="fig" rid="fig-2">Fig. 2</xref> are added together, and the same process applies to the remaining feature maps. We can combine these feature maps directly as needed; the network must then only learn the weights to combine them.</p>
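<p>The fusion-by-addition rule can be sketched in a few lines (synthetic feature maps, for illustration only): corresponding maps from the two branches are summed elementwise.</p>

```python
import numpy as np

# Sketch of the fusion layer: corresponding feature maps from the two
# branches (F11 with F21, F12 with F22, ...) are summed elementwise.
rng = np.random.default_rng(2)
feats_1 = rng.standard_normal((16, 32, 32))  # maps from branch 1
feats_2 = rng.standard_normal((16, 32, 32))  # maps from branch 2

fused = feats_1 + feats_2                    # F_k = F1k + F2k for every k
```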
<p>Experiments show that similar results can also be achieved by increasing the number of filters and layers after C3 and increasing the number of training iterations. This is understandable, because the network needs more iterations to determine the appropriate merging weights. With this configuration of tied weights, we force the network to learn filters that are invariant to brightness changes. In the case of coupled weights, the centers of several highly active filters surround the receptive field. These filters learn to remove the mean value of the neighborhood, which effectively keeps the brightness of the object constant. The number of learnable filters is halved, and convergence is quick, because the network has fewer parameters. The fusion layer combines the features derived from C21 and C22.</p>
<p><italic>Step 6: AES Encryption</italic></p>
<p>The Advanced Encryption Standard (AES) is a symmetric encryption algorithm. For a 128-bit key, a brute-force attack requires 2<sup>128</sup> trials. In addition, the structure of the algorithm and the round function used in it ensure a high degree of resistance to linear and differential cryptanalysis.</p>
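<p>A back-of-the-envelope calculation makes the 2<sup>128</sup> figure concrete: even at one billion key trials per second, an exhaustive search remains infeasible.</p>

```python
# Brute-force infeasibility of a 128-bit AES key: keyspace divided by an
# (optimistic) trial rate, expressed in years.
keyspace = 2 ** 128
trials_per_second = 10 ** 9
seconds_per_year = 365 * 24 * 3600

years = keyspace / trials_per_second / seconds_per_year
print(f"{years:.2e} years")   # roughly 1.08e+22 years
```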
<p>The proposed authentication system addresses most of the threats to biometric authentication systems. First, the right-fingerprint and right-eye features of the input subject are fused using deep learning, and an initial key is generated from them. This initial key is matched against the stored initial key (IK) templates in the database. If it matches a stored initial key, the input subject passes the first authentication stage. For further authentication, the PRNG codes are fetched from the database; the corresponding template is generated, and, finally, the generated template is matched with the stored ones. <xref ref-type="fig" rid="fig-9">Fig. 9</xref> shows a schematic diagram of the suggested authentication technique.</p>
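<p>The two-stage check described above can be sketched as follows. All names, helpers, and the use of SHA-256 digests as stand-ins for template matching are hypothetical illustrations, not the paper&#x2019;s implementation.</p>

```python
import hashlib

# Hypothetical sketch: stage 1 matches the fused initial key (IK) against the
# stored IK template; stage 2 regenerates a template from the stored PRNG code
# and matches it. Digests stand in for real template comparison.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

stored_ik = digest(b"fused fingerprint+iris features")
stored_template = digest(b"template from PRNG code 42")

def authenticate(ik: bytes, prng_code: int) -> bool:
    if digest(ik) != stored_ik:                  # stage 1: initial-key match
        return False
    regenerated = digest(f"template from PRNG code {prng_code}".encode())
    return regenerated == stored_template        # stage 2: template match

print(authenticate(b"fused fingerprint+iris features", 42))   # True
print(authenticate(b"impostor features", 42))                 # False
```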
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>Proposed authentication system</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-9.png"/>
</fig>
</sec>
<sec id="s4">
<label>4</label>
<title>Simulation Experiments and Results</title>
<p>The proposed cancellable multi-biometric technique can be applied to any image format. The performance evaluation and analysis depend on metrics such as genuine and impostor distributions, ROC curves, and encryption parameters. The 256&#x2009;&#x00D7;&#x2009;256 Lena image, resized to 128&#x2009;&#x00D7;&#x2009;128, is used for the unauthorized data in <xref ref-type="fig" rid="fig-10">Fig. 10</xref>, and samples of faces for the authorized data simulations are given in <xref ref-type="fig" rid="fig-11">Fig. 11</xref>. Examples of the cancellable face templates created using the suggested scheme are shown in <xref ref-type="table" rid="table-1">Tab. 1</xref>.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Lena unauthorized image</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-10.png"/>
</fig>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Samples of 128&#x2009;&#x00D7;&#x2009;128 face images [<xref ref-type="bibr" rid="ref-20">20</xref>]</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-11.png"/>
</fig>
<table-wrap id="table-1"><label>Table 1</label>
<caption>
<title>Output stages of the cancellable face template generation of the proposed cancellable biometric scheme</title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Input Face Biometrics</th>
<th align="left">Output Auto Segmentation</th>
<th align="left">Generated Right Eye Encrypted PRN</th>
<th align="left">Output of Arnold Permutation</th>
<th align="left">Output AES Encryption</th>
<th align="left">Generated Initial Key (IK) Fusion</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-1.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-2.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-3.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-4.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-5.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-6.png"/></td>
</tr>
<tr>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-7.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-8.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-9.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-10.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-11.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-12.png"/></td>
</tr>
<tr>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-13.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-14.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-15.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-16.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-17.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-18.png"/></td>
</tr>
<tr>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-19.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-20.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-21.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-22.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-23.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-24.png"/></td>
</tr>
<tr>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-25.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-26.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-27.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-28.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-29.png"/></td>
<td align="left"><inline-graphic xlink:href="IASC_24379-inline-30.png"/></td>
</tr>
</tbody>
</table>
</table-wrap>
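<p>The Arnold permutation stage listed in <xref ref-type="table" rid="table-1">Tab. 1</xref> can be sketched with a toy implementation of the classical cat map (the paper&#x2019;s image size and iteration count are not specified here): pixel positions of an <italic>N</italic>&#x2009;&#x00D7;&#x2009;<italic>N</italic> image are scrambled by (<italic>x</italic>, <italic>y</italic>) &#x2192; (<italic>x</italic>&#x2009;+&#x2009;<italic>y</italic>, <italic>x</italic>&#x2009;+&#x2009;2<italic>y</italic>) mod <italic>N</italic>.</p>

```python
import numpy as np

# Toy Arnold cat-map permutation: scrambles pixel positions of an N x N
# image; the map is a bijection, so no pixel values are lost.
def arnold(img, iterations=1):
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
scrambled = arnold(img, iterations=2)
```

Because the map&#x2019;s matrix has determinant 1, it is invertible modulo <italic>N</italic>, which is what makes the permutation reversible for authorized template regeneration.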
<sec id="s4_1">
<label>4.1</label>
<title>Quality Assessment</title>
<p>Statistical tests of the proposed cancellable biometric system are applied on the individual face templates.</p>
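<p>The assessment metrics reported in the following tables (entropy, plain/cipher correlation, NPCR, and UACI) can be computed as sketched below; these are illustrative implementations of the standard definitions, applied here to synthetic images rather than the actual templates.</p>

```python
import numpy as np

# Illustrative implementations of the standard cipher-quality metrics.
def entropy(img):
    # Shannon entropy of the 8-bit histogram (ideal cipher: close to 8)
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def correlation(a, b):
    # correlation between plain and cipher images (ideal: close to 0)
    return np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1]

def npcr(c1, c2):
    # Number of Pixel Change Rate, as a percentage (ideal: close to 100)
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    # Unified Average Changing Intensity, as a percentage (ideal: ~33)
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

rng = np.random.default_rng(3)
plain = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cipher = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
```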
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Performance Evaluation</title>
<p>The genuine and impostor distributions and the ROC curves are created in the presence of noise with different levels. The ROC curve reveals the relation between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different threshold values. The FRR measures the probability that a genuine face is falsely rejected as an impostor facial sample, and the FPR measures the probability of accepting an impostor facial sample as being genuine [<xref ref-type="bibr" rid="ref-21">21</xref>,<xref ref-type="bibr" rid="ref-22">22</xref>]. <xref ref-type="fig" rid="fig-12">Fig. 12</xref> shows the original and ciphered templates, and <xref ref-type="fig" rid="fig-13">Fig. 13</xref> reveals the closeness of the ciphered template histograms to uniformity over a wide bandwidth, which is a preferred property for a high degree of protection. <xref ref-type="fig" rid="fig-14">Fig. 14</xref> shows the good properties of the impostor and genuine distributions of the proposed scheme. <xref ref-type="fig" rid="fig-15">Fig. 15</xref> shows the ROC curve for the proposed cancellable biometric scheme. <xref ref-type="table" rid="table-2">Tab. 2</xref> gives the measurement index for the unauthorized template in <xref ref-type="fig" rid="fig-10">Fig. 10</xref>. <xref ref-type="table" rid="table-3">Tabs. 3</xref>&#x2013;<xref ref-type="table" rid="table-6">6</xref> give the measurement indices for the authorized templates of <xref ref-type="fig" rid="fig-11">Fig. 11</xref> (a, c, b, and d). <xref ref-type="table" rid="table-7">Tab. 7</xref> shows the assessment metrics in the presence of noise.</p>
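<p>The TPR/FPR sweep and the EER can be derived from the two score distributions as sketched below (synthetic, well-separated scores for illustration; the EER is the operating point where FPR and FRR are approximately equal).</p>

```python
import numpy as np

# Sketch: derive TPR/FPR over thresholds and the EER from genuine and
# impostor similarity-score distributions (synthetic scores).
rng = np.random.default_rng(4)
genuine = rng.normal(0.9, 0.05, 1000)    # high similarity scores
impostor = rng.normal(0.3, 0.05, 1000)   # low similarity scores

thresholds = np.linspace(0.0, 1.0, 501)
tpr = np.array([(genuine >= t).mean() for t in thresholds])
fpr = np.array([(impostor >= t).mean() for t in thresholds])
frr = 1.0 - tpr                          # false rejection rate

# EER: threshold where FPR and FRR are closest to each other
eer_idx = np.argmin(np.abs(fpr - frr))
eer = (fpr[eer_idx] + frr[eer_idx]) / 2.0
```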
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>Biometric templates. (a) Original templates [<xref ref-type="bibr" rid="ref-20">20</xref>] (b) Output ciphered templates</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-12.png"/>
</fig>
<fig id="fig-13">
<label>Figure 13</label>
<caption>
<title>(a) Original template histograms, (b) Ciphered template histograms</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-13.png"/>
</fig>
<fig id="fig-14">
<label>Figure 14</label>
<caption>
<title>Imposter and genuine distributions of the proposed scheme</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-14.png"/>
</fig>
<fig id="fig-15">
<label>Figure 15</label>
<caption>
<title>ROC curve for the proposed cancellable biometric scheme</title></caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="IASC_24379-fig-15.png"/>
</fig>
<table-wrap id="table-2"><label>Table 2</label>
<caption>
<title>Measurement index for the unauthorized template</title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Assessment metrics</th>
<th align="center" colspan="2">Proposed scheme</th>
</tr>
<tr>
<th align="left">Right eye</th>
<th align="left">Left eye</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Ciphering time (sec)</td>
<td align="left">3.28</td>
<td align="left">3.30</td>
</tr>
<tr>
<td align="left">Entropy [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td align="left">7.4502</td>
<td align="left">7.9977</td>
</tr>
<tr>
<td align="left">Correlation among the original biometric and encrypted template [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td align="left">0.0603</td>
<td align="left">0.0013</td>
</tr>
<tr>
<td align="left">ID [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td align="left">0.9089</td>
<td align="left">0.9583</td>
</tr>
<tr>
<td align="left">NPCR [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td align="left">100</td>
<td align="left">100</td>
</tr>
<tr>
<td align="left">UACI [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td align="left">0</td>
<td align="left">0</td>
</tr>
<tr>
<td align="left">MDMF [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td align="left">0.9240</td>
<td align="left">0.9240</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-3"><label>Table 3</label>
<caption>
<title>Measurement index of the authorized templates in <xref ref-type="fig" rid="fig-11">Fig. 11a</xref></title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Assessment metrics</th>
<th align="center" colspan="2">Proposed scheme</th>
</tr>
<tr>
<th align="left">Right eye</th>
<th align="left">Left eye</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Ciphering time (sec)</td>
<td align="left">3.30</td>
<td align="left">3.35</td>
</tr>
<tr>
<td align="left">Entropy</td>
<td align="left">7.4594</td>
<td align="left">7.4423</td>
</tr>
<tr>
<td align="left">Correlation among the original biometric and encrypted template</td>
<td align="left">0.0190</td>
<td align="left">0.0013</td>
</tr>
<tr>
<td align="left">ID</td>
<td align="left">0.9661</td>
<td align="left">0.9714</td>
</tr>
<tr>
<td align="left">NPCR</td>
<td align="left">100</td>
<td align="left">99.6246</td>
</tr>
<tr>
<td align="left">UACI</td>
<td align="left">0</td>
<td align="left">27.7787</td>
</tr>
<tr>
<td align="left">MDMF</td>
<td align="left">0.9240</td>
<td align="left">0.9240</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-4"><label>Table 4</label>
<caption>
<title>Measurement index of the authorized templates in <xref ref-type="fig" rid="fig-11">Fig. 11c</xref></title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Assessment metrics</th>
<th align="center" colspan="2">Proposed scheme</th>
</tr>
<tr>
<th align="left">Right eye</th>
<th align="left">Left eye</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Ciphering time (sec)</td>
<td align="left">3.30</td>
<td align="left">3.35</td>
</tr>
<tr>
<td align="left">Entropy</td>
<td align="left">7.4235</td>
<td align="left">7.4826</td>
</tr>
<tr>
<td align="left">Correlation among the Original Biometric and Encrypted Template</td>
<td align="left">0.0649</td>
<td align="left">0.0278</td>
</tr>
<tr>
<td align="left">ID</td>
<td align="left">0.9036</td>
<td align="left">0.9505</td>
</tr>
<tr>
<td align="left">NPCR</td>
<td align="left">100</td>
<td align="left">100</td>
</tr>
<tr>
<td align="left">UACI</td>
<td align="left">0</td>
<td align="left">0</td>
</tr>
<tr>
<td align="left">MDMF</td>
<td align="left">0.9240</td>
<td align="left">0.7836</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-5"><label>Table 5</label>
<caption>
<title>Measurement index of the authorized templates in <xref ref-type="fig" rid="fig-11">Fig. 11b</xref></title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Assessment metrics</th>
<th align="center" colspan="2">Proposed scheme</th>
</tr>
<tr>
<th align="left">Right eye</th>
<th align="left">Left eye</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Ciphering time (sec)</td>
<td align="left">3.30</td>
<td align="left">3.35</td>
</tr>
<tr>
<td align="left">Entropy</td>
<td align="left">7.4454</td>
<td align="left">7.5130</td>
</tr>
<tr>
<td align="left">Correlation among the original biometric and encrypted template</td>
<td align="left">0.0041</td>
<td align="left">0.0102</td>
</tr>
<tr>
<td align="left">ID</td>
<td align="left">0.9531</td>
<td align="left">0.9427</td>
</tr>
<tr>
<td align="left">NPCR</td>
<td align="left">100</td>
<td align="left">100</td>
</tr>
<tr>
<td align="left">UACI</td>
<td align="left">0</td>
<td align="left">0</td>
</tr>
<tr>
<td align="left">MDMF</td>
<td align="left">0.6900</td>
<td align="left">0.7836</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-6"><label>Table 6</label>
<caption>
<title>Measurement index of the authorized templates in <xref ref-type="fig" rid="fig-11">Fig. 11d</xref></title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="2">Assessment metrics</th>
<th align="center" colspan="2">Proposed scheme</th>
</tr>
<tr>
<th align="left">Right eye</th>
<th align="left">Left eye</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Ciphering time (sec)</td>
<td align="left">3.30</td>
<td align="left">3.35</td>
</tr>
<tr>
<td align="left">Entropy</td>
<td align="left">7.3950</td>
<td align="left">7.4550</td>
</tr>
<tr>
<td align="left">Correlation among the original biometric and encrypted template</td>
<td align="left">0.0255</td>
<td align="left">0.0533</td>
</tr>
<tr>
<td align="left">ID</td>
<td align="left">0.9453</td>
<td align="left">0.9245</td>
</tr>
<tr>
<td align="left">NPCR</td>
<td align="left">100</td>
<td align="left">100</td>
</tr>
<tr>
<td align="left">UACI</td>
<td align="left">0</td>
<td align="left">0</td>
</tr>
<tr>
<td align="left">MDMF</td>
<td align="left">0.9240</td>
<td align="left">0.9240</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="table-7"><label>Table 7</label>
<caption>
<title>Score index of the proposed biometric scheme with different noise levels</title></caption>
<table><colgroup><col align="left"/><col align="left"/><col align="left"/>
</colgroup>
<thead>
<tr>
<th align="left">Variance of noise</th>
<th align="left">EER</th>
<th align="left">AROC</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">0.05</td>
<td align="left">0.0008</td>
<td align="left">0.999</td>
</tr>
<tr>
<td align="left">0.04</td>
<td align="left">0.0009</td>
<td align="left">0.996</td>
</tr>
<tr>
<td align="left">0.03</td>
<td align="left">0.0013</td>
<td align="left">0.995</td>
</tr>
<tr>
<td align="left">0.02</td>
<td align="left">0.0019</td>
<td align="left">0.996</td>
</tr>
<tr>
<td align="left">0.01</td>
<td align="left">0.0016</td>
<td align="left">0.995</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Conclusions and Future Work</title>
<p>This article investigated the current trends and open research issues of cancellable and hybrid biometric encryption systems. The combination of pseudo-random number sequences, Arnold&#x2019;s cat map, and the AES technique creates the cancellable templates, exploiting the randomness and good correlation properties of these three tools. Moreover, we have strengthened the enrollment phase using a selective encryption technique that encodes a portion of the face biometrics rather than the whole face image, so that only a small storage space is needed to store the personal data in the database. It is assumed that no attacker knows enough initial conditions to break the randomness of the codes. In addition, the simulation results show that the proposed scheme offers higher entropy and lower correlation between the original eye and the encrypted template. The cancellable biometric results guarantee a low EER. Deep learning integrates the two biometrics into more representative, reliable, and detailed outputs. In future work, we can apply the proposed framework to medical image communication using machine learning techniques.</p>
</sec>
</body>
<back>
<ack>
<p>The authors thank the researchers of Taif University for their support through Project Number (TURSP-2020/214), Taif University, Taif, Saudi Arabia.</p>
</ack><fn-group>
<fn fn-type="other">
<p><bold>Funding Statement:</bold> This research was supported by Taif University Researchers Supporting Project Number (TURSP-2020/214), Taif University, Taif, Saudi Arabia (<uri xlink:href="https://www.tu.edu.sa">www.tu.edu.sa</uri>).</p>
</fn>
<fn fn-type="conflict">
<p><bold>Conflicts of Interest:</bold> The authors state that they have not disclosed any conflicts of interest relating to this research.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Stallings</surname></string-name></person-group>, <source>Cryptography and Network Security Principles and Practice</source>, <edition>5<sup>th</sup> ed</edition>, <publisher-loc>New York, USA</publisher-loc>, <publisher-name>Prentice Hall</publisher-name>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>POP</surname></string-name></person-group>, &#x201C;<article-title>Multi-modal biometric systems overview</article-title>,&#x201D; <source>Acta Technical Napocensis Electronics and Telecommunication</source>, vol. <volume>49</volume>, no. <issue>3</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>11</lpage>, <year>2008</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Sheena</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Sheena</surname></string-name></person-group>, &#x201C;<article-title>Review paper optimizing security of multi-modal biometric system</article-title>,&#x201D; <source>International Journal of Advanced Research in Computer Science and Software Engineering</source>, vol. <volume>15</volume>, no. <issue>3</issue>, pp. <fpage>93</fpage>&#x2013;<lpage>98</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Wagh</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Choudhari</surname></string-name></person-group>, &#x201C;<article-title>Analysis of multi-modal biometrics with security key</article-title>,&#x201D; <source>International Journal of Advanced Research in Computer Science and Software Engineering</source>, vol. <volume>8</volume>, no. <issue>3</issue>, pp. <fpage>1363</fpage>&#x2013;<lpage>1365</lpage>, <year>2013</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Ahlawat</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Kant</surname></string-name></person-group>, &#x201C;<article-title>A multi-modal approaches to enhance the performance of biometric system</article-title>,&#x201D; <source>International Journal of Innovations &#x0026; Advancement in Computer Science</source>, vol. <volume>6</volume>, no. <issue>4</issue>, pp. <fpage>41</fpage>&#x2013;<lpage>46</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Ratha</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Chikkerur</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Connell</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Bolle</surname></string-name></person-group>, &#x201C;<article-title>Generating cancellable fingerprint templates</article-title>,&#x201D; <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>29</volume>, no. <issue>4</issue>, pp. <fpage>561</fpage>&#x2013;<lpage>572</lpage>, <year>2007</year>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>V.</given-names> <surname>Conti</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Militello</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Sorbello</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Vitabile</surname></string-name></person-group>, &#x201C;<article-title>A frequency-based approach for features fusion in fingerprint and iris multi-modal biometric identification systems</article-title>,&#x201D; <source>IEEE Transactions on Systems, Man, and Cybernetics Part C: Applications and Reviews</source>, vol. <volume>40</volume>, no. <issue>4</issue>, pp. <fpage>384</fpage>&#x2013;<lpage>395</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Ayoup</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Hussein</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Attia</surname></string-name></person-group>, &#x201C;<article-title>Efficient selective image encryption</article-title>,&#x201D; <source>Multimedia Tools and Applications</source>, vol. <volume>75</volume>, no. <issue>24</issue>, pp. <fpage>17171</fpage>&#x2013;<lpage>17186</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Asker</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Elsharkawy</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Nassar</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Ayad</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Abd El-Samie</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>A novel cancellable Iris template generation based on salting approach</article-title>,&#x201D; in <source>Multimedia Tools and Applications</source>, vol. <volume>80</volume>, no. <issue>3</issue>, pp. <fpage>3703</fpage>&#x2013;<lpage>3727</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Kamaldeep</surname></string-name></person-group>, &#x201C;<article-title>A review of various attack on biometrics system and their known solutions</article-title>,&#x201D; <source>International Journal of Computer Technology and Application</source>, vol. <volume>6</volume>, no. <issue>2</issue>, pp. <fpage>1980</fpage>&#x2013;<lpage>1992</lpage>, <year>2011</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Rein-Lien</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Abdel-Mottaleb</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Jain</surname></string-name></person-group>, &#x201C;<article-title>Face detection in color images</article-title>,&#x201D; <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>24</volume>, no. <issue>5</issue>, pp. <fpage>696</fpage>&#x2013;<lpage>706</lpage>, <year>2002</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Chaudhari</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Vanjare</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Thakkar</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Shah</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Kadam</surname></string-name></person-group>, &#x201C;<article-title>Intelligent surveillance and security system</article-title>,&#x201D; <source>International Journal of Innovative Research in Computer and Communication Engineering</source>, vol. <volume>3</volume>, no. <issue>3</issue>, pp. <fpage>2291</fpage>&#x2013;<lpage>2299</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Georghiades</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Belhumeur</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Kriegman</surname></string-name></person-group>, &#x201C;<article-title>From few to many: Illumination cone models for face recognition under variable lighting and pose</article-title>,&#x201D; <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>23</volume>, no. <issue>6</issue>, pp. <fpage>643</fpage>&#x2013;<lpage>660</lpage>, <year>2001</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Wilson</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Fernandez</surname></string-name></person-group>,&#x201C;<article-title>Facial feature detection using haar classifiers</article-title>,&#x201D; <source>Journal of Computing Sciences in Colleges</source>, vol. <volume>21</volume>, no. <issue>4</issue>, pp. <fpage>127</fpage>&#x2013;<lpage>1330</lpage>, <year>2006</year>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Rowley</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Baluja</surname></string-name> and <string-name><given-names>T.</given-names> <surname>Kanade</surname></string-name></person-group>, &#x201C;<article-title>Neural network-based face detection</article-title>,&#x201D; <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, vol. <volume>20</volume>, no. <issue>1</issue>, pp. <fpage>23</fpage>&#x2013;<lpage>38</lpage>, <year>1998</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Schindler</surname></string-name></person-group>, &#x201C;<article-title>Functionality classes and evaluation methodology for deterministic random number generators</article-title>,&#x201D; <source>Anwendungshinweise and Interpretation</source>, vol. <volume>2</volume>, no. <issue>5</issue>, pp. <fpage>5</fpage>&#x2013;<lpage>11</lpage>, <year>1999</year>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Lian</surname></string-name></person-group>, &#x201C;<article-title>Multimedia content encryption techniques and applications,&#x201D;</article-title> <source>CRC press, Taylor &#x0026; Francis Group</source>, <year>2009</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>T.</given-names> <surname>Li</surname></string-name></person-group>, &#x201C;<article-title>Study on image encryption algorithm based on arnold transformation and chaotic system</article-title>,&#x201D; in <conf-name>Proc. Int. Conf. on Intelligent System Design and Engineering Application (ISDEA)</conf-name>, <conf-loc>Changsha, China</conf-loc>, pp. <fpage>449</fpage>&#x2013;<lpage>451</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L.</given-names><surname>Wu</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Deng</surname></string-name> and <string-name><given-names>D.</given-names> <surname>He</surname></string-name></person-group>, &#x201C;<article-title>Arnold transformation algorithm and anti-arnold transformation algorithm</article-title>,&#x201D; in <conf-name>Proc. First IEEE Int. Conf. on Information Science and Engineering</conf-name>, <conf-loc>Nanjing, China</conf-loc>, pp. <fpage>1164</fpage>&#x2013;<lpage>1167</lpage>, <year>2009</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Ma</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Zeng</surname></string-name> and <string-name><given-names>Z.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Perceptual quality assessment for multi-exposure image fusion</article-title>,&#x201D; <source>IEEE Transactions on Image Processing</source>, vol. <volume>24</volume>, no. <issue>11</issue>, pp. <fpage>3345</fpage>&#x2013;<lpage>3356</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="web"><person-group person-group-type="author"><string-name><given-names>ORL</given-names> <surname>Database</surname></string-name></person-group>, [Online]. Available: <uri xlink:href="https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html">https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html</uri>, last access on 1&#x2013;06&#x2013;<year>2020</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>H.</given-names> <surname>Ahmed</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Kalash</surname></string-name> and <string-name><given-names>O.</given-names> <surname>Allah</surname></string-name></person-group>, &#x201C;<article-title>An efficient chaos-based feedback stream cipher (ECBFSC) for image encryption and decryption</article-title>,&#x201D; <source>Informatica</source>, vol. <volume>31</volume>, no. <issue>1</issue>, pp. <fpage>121</fpage>&#x2013;<lpage>120</lpage>, <year>2007</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>