<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">22524</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2022.022524</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Diabetic Retinopathy Detection Using Classical-Quantum Transfer Learning Approach and Probability Model</article-title>
<alt-title alt-title-type="left-running-head">Diabetic Retinopathy Detection Using Classical-Quantum Transfer Learning Approach and Probability Model</alt-title>
<alt-title alt-title-type="right-running-head">Diabetic Retinopathy Detection Using Classical-Quantum Transfer Learning Approach and Probability Model</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Mir</surname><given-names>Amna</given-names>
</name><xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Yasin</surname><given-names>Umer</given-names>
</name><xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Khan</surname><given-names>Salman Naeem</given-names>
</name><xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib id="author-4" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Athar</surname><given-names>Atifa</given-names>
</name><xref ref-type="aff" rid="aff-3">3</xref><email>atifaathar@cuilahore.edu.pk</email>
</contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>Jabeen</surname><given-names>Riffat</given-names>
</name><xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib id="author-6" contrib-type="author">
<name name-style="western"><surname>Aslam</surname><given-names>Sehrish</given-names>
</name><xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<aff id="aff-1"><label>1</label><institution>Department of Physics, Comsats University Islamabad, Lahore Campus</institution>, <addr-line>Lahore, 54000</addr-line>, <country>Pakistan</country></aff>
<aff id="aff-2"><label>2</label><institution>Department of Statistics, Comsats University Islamabad, Lahore Campus</institution>, <addr-line>Lahore, 54000</addr-line>, <country>Pakistan</country></aff>
<aff id="aff-3"><label>3</label><institution>Department of Computer Science, Comsats University Islamabad, Lahore Campus</institution>, <addr-line>Lahore, 54000</addr-line>, <country>Pakistan</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1">Corresponding Author: Atifa Athar. Email: <email>atifaathar@cuilahore.edu.pk</email></corresp>
</author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2021-11-29">
<day>29</day>
<month>11</month>
<year>2021</year>
</pub-date>
<volume>71</volume>
<issue>2</issue>
<fpage>3733</fpage>
<lpage>3746</lpage>
<history>
<date date-type="received">
<day>10</day>
<month>8</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>10</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2022 Mir et al.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Mir et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_22524.pdf"></self-uri>
<abstract>
<p>Diabetic Retinopathy (DR) is a common complication of diabetes mellitus that causes lesions on the retina that affect vision. Late detection of DR can lead to irreversible blindness. The manual diagnosis of DR retina fundus images by ophthalmologists is time consuming and costly. Classical transfer learning models are extensively used for computer-aided detection of DR; however, their maintenance costs limit their detection performance. Quantum transfer learning is therefore a better option to address this problem in an optimized manner, one significance of the hybrid quantum transfer learning approach being that it performs heuristically. Thus, our proposed methodology aims to detect DR using a hybrid quantum transfer learning approach. To build our model, we take the APTOS 2019 Blindness Detection dataset from Kaggle, use the Inception-V3 pre-trained classical neural network for feature extraction and a variational quantum classifier for classification, and train the model on the PennyLane default device, the IBM Qiskit BasicAer device and the Google Cirq simulator device. Both models are built on the PyTorch machine learning library. We compare the performance of the classical and quantum models: our proposed hybrid quantum model achieves an accuracy of 93%&#x2013;96%, against an 85% accuracy rate for the classical model. Thus, quantum computing can harness quantum machine learning to work with a power and efficiency that is not possible for classical computers.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Diabetic Retinopathy (DR)</kwd>
<kwd>quantum transfer learning</kwd>
<kwd>inception-V3</kwd>
<kwd>variational quantum circuit</kwd>
<kwd>image classification</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Diabetic retinopathy (DR) is the most common form of diabetic eye disease [<xref ref-type="bibr" rid="ref-1">1</xref>]. Diabetic retinopathy usually affects only people who have chronic diabetes (diagnosed or undiagnosed). It causes an array of long-term complications that have a considerable impact on patients, as the disease typically affects individuals in their most productive years. The World Health Organization projects that by 2030 diabetes will be among the most serious diseases and the 7th leading cause of death worldwide. DR occurs when chronic diabetes damages the tiny blood vessels in the retina. This may cause hemorrhages, exudates and even swelling of the retina, which can produce blind spots and blurry vision. Diabetic retinopathy is a major cause of vision loss and blindness, affecting millions of people across the globe. If DR is diagnosed early, it can be managed using available treatments. Regular eye fundus examination is necessary because DR does not present any symptoms at early stages. The retinal abnormalities in DR include Hemorrhages (HM), &#x201C;Cotton wool&#x201D; spots, Microaneurysms (MA), retinal neovascularization and hard exudates, which are clearly presented in <?A3B2 "fig1",5,"anchor"?><xref ref-type="fig" rid="fig-1">Fig. 1</xref>.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Normal eye and Infected eye [<xref ref-type="bibr" rid="ref-2">2</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-1.png"/>
</fig>
<p>In recent years, classical transfer learning approaches have been used in the field of image classification, segmentation and screening for DR. However, their limited detection performance rates are a hindrance to computer-aided diagnosis. A breakthrough in the field of quantum computing can help give the ophthalmologist a second opinion by using a hybrid quantum transfer learning approach. This quantum approach can result in more efficient detection of DR in patients compared to classical transfer learning [<xref ref-type="bibr" rid="ref-3">3</xref>,<xref ref-type="bibr" rid="ref-4">4</xref>]. Quantum transfer learning and Principal Component Analysis (PCA) are currently used in various medical diagnostics [<xref ref-type="bibr" rid="ref-2">2</xref>]. Zhang [<xref ref-type="bibr" rid="ref-5">5</xref>] used pathological images for Non-Hodgkin Lymphoma analysis. Similarly, classification of Arabic sign language has been done using the same hybrid approach [<xref ref-type="bibr" rid="ref-6">6</xref>]. Therefore, this work presents a hybrid quantum learning model for DR detection.</p>
<p>This paper presents a hybrid approach for early detection of DR. We have compared the results of our three hybrid quantum transfer learning models with one classical transfer learning model. We have labeled our dataset into two categories, i.e., no DR or DR. To build the quantum transfer learning model, we have used the Inception-V3 [<xref ref-type="bibr" rid="ref-7">7</xref>] pre-trained neural network for feature extraction and a quantum variational circuit for classification. Further, our model is trained on the PennyLane default device, the IBM Qiskit BasicAer device and the Google Cirq simulator device. We have built the classical transfer model with the same parameters and learning rates as defined in the quantum transfer learning model. Moreover, both models are based on the PyTorch machine learning library. Our proposed model achieves an accuracy of 93%&#x2013;96% on the hybrid models and an 85% accuracy rate on the classical model. The quantum transfer learning approach has several advantages over conventional diagnostic techniques: it has a lower probability of human error, and it is found to be a more efficient and rapid way of finding lesions in the retina. Quantum computing approaches are also well suited to solving optimization problems compared to classical computing approaches.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Literature Review</title>
<p>Transfer learning refers to a technique in which a model developed for one problem is reused, partially or entirely, to accelerate training and improve performance on a different but related problem. It can train a deep neural network with a comparatively small amount of data. If a previously trained artificial neural network is successful in solving a particular problem, it can be reused with some additional training to solve a new problem. Consider a pre-trained deep neural network together with the dataset used for the solution of a problem: transfer learning can accelerate the training of neural networks either as a weight initialization scheme or as a feature extraction method that is retrained to solve a different or similar problem with a new dataset.</p>
<p>Quantum machine learning extends the concept of transfer learning, widely applied in modern machine learning algorithms, to the emerging context of hybrid neural networks composed of classical and quantum elements. In quantum transfer learning we focus mainly on the paradigm in which a pre-trained classical network is modified and augmented by a final quantum layer. Any pre-trained classical neural network suited to the problem can be used for feature extraction. To classify these features with the help of a &#x201C;dressed quantum circuit&#x201D;, the network's output feature vector must be reduced to the final number of dimensions with a linear transformation [<xref ref-type="bibr" rid="ref-8">8</xref>]. A variational quantum classifier is commonly used to classify the output features of the built-in neural network, as presented in <?A3B2 "fig2",5,"anchor"?><xref ref-type="fig" rid="fig-2">Fig. 2</xref>.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Variational Quantum Classifier with embedding layers U(x), variational circuit V(<inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>) and final measurements yielding the classical output f(x)</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-2.png"/>
</fig>
<p>In <xref ref-type="fig" rid="fig-2">Fig. 2</xref> we have presented the three basic components (embedding layers U(x), variational circuit V(<inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>) and final measurements) on which a Variational Quantum Classifier (VQC) is built [<xref ref-type="bibr" rid="ref-9">9</xref>]. Several methods exist to get classical data into quantum computers; the four major data encoding techniques are Basis encoding, Amplitude encoding, Angle encoding and Higher-order embedding.</p>
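<p>As a minimal sketch of angle encoding (written in plain Python rather than a quantum SDK, so the arithmetic is explicit): a classical feature x is written into a qubit as the rotation R<sub>y</sub>(x&#x03C0;/2) applied to |0&#x27E9;, after which the Pauli-Z expectation value &#x27E8;Z&#x27E9; = cos(x&#x03C0;/2) carries the encoded feature.</p>

```python
import math

def angle_encode_expectation(x):
    """Encode feature x via Ry(x*pi/2) on |0> and return <Z>."""
    theta = x * math.pi / 2
    # Ry(theta)|0> = [cos(theta/2), sin(theta/2)]
    a = math.cos(theta / 2)  # amplitude of |0>
    b = math.sin(theta / 2)  # amplitude of |1>
    # <Z> = |a|^2 - |b|^2 = cos(theta)
    return a * a - b * b

# <Z> = cos(x*pi/2): approximately 1, 0 and -1 for x = 0, 1, 2
print(angle_encode_expectation(0.0), angle_encode_expectation(1.0), angle_encode_expectation(2.0))
```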
<p>One of the most important components of a VQC is the variational circuit. <?A3B2 "fig3",5,"anchor"?><xref ref-type="fig" rid="fig-3">Fig. 3</xref> presents a variational circuit for a one-qubit operation.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Simple case of one Qubit</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-3.png"/>
</fig>
<p><disp-formula id="ueqn-1">
<mml:math id="mml-ueqn-1" display="block"><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:msub><mml:mi>&#x03C3;</mml:mi><mml:mi>z</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mn>1</mml:mn></mml:mtd><mml:mtd><mml:mn>0</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mspace width="1em" /><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x003E;=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mtable rowspacing="4pt"><mml:mtr><mml:mtd><mml:mn>1</mml:mn></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mi mathvariant="normal">R</mml:mi><mml:mi>x</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>&#x2212;</mml:mo><mml:mi>i</mml:mi><mml:mi>sin</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x2212;</mml:mo><mml:mi>i</mml:mi><mml:mi>sin</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mi mathvariant="normal">R</mml:mi><mml:mi>y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mtable rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>&#x2212;</mml:mo><mml:mi>sin</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>sin</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>/</mml:mo><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>State after operation:</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>&#x03C8;</mml:mi><mml:mo>&#x003E;=</mml:mo><mml:msub><mml:mi mathvariant="normal">R</mml:mi><mml:mi>y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi mathvariant="normal">R</mml:mi><mml:mi>x</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x003E;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>So now we can calculate the expectation value:</mml:mtext></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x003C;</mml:mo><mml:mi>&#x03C8;</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:msub><mml:mi>&#x03C3;</mml:mi><mml:mi>z</mml:mi></mml:msub><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>&#x03C8;</mml:mi><mml:mo>&#x003E;=&#x003C;</mml:mo><mml:mn>0</mml:mn><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:msub><mml:mi mathvariant="normal">R</mml:mi><mml:mi>x</mml:mi></mml:msub><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2020;</mml:mo></mml:msup><mml:msub><mml:mi mathvariant="normal">R</mml:mi><mml:mi>y</mml:mi></mml:msub><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2020;</mml:mo></mml:msup><mml:msub><mml:mi>&#x03C3;</mml:mi><mml:mi>z</mml:mi></mml:msub><mml:msub><mml:mi mathvariant="normal">R</mml:mi><mml:mi>y</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:msub><mml:mi mathvariant="normal">R</mml:mi><mml:mi>x</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mn>0</mml:mn><mml:mo>&#x003E;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>=</mml:mo><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mi>cos</mml:mi><mml:mo>&#x2061;</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x03B8;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula> This shows that the variational circuit depends on the two parameters &#x03B8;<sub>1</sub> and &#x03B8;<sub>2</sub>.</p>
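<p>The closed form above can be checked numerically. The following plain-Python sketch (no quantum SDK assumed) builds |&#x03C8;&#x27E9; = R<sub>y</sub>(&#x03B8;<sub>2</sub>)R<sub>x</sub>(&#x03B8;<sub>1</sub>)|0&#x27E9; with explicit 2&#x00D7;2 complex matrices and confirms that the Pauli-Z expectation value equals cos&#x03B8;<sub>1</sub> cos&#x03B8;<sub>2</sub>.</p>

```python
import math

def mat_vec(m, v):
    """Apply a 2x2 complex matrix to a 2-component state vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def rx(t):
    """Single-qubit rotation about the x-axis."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -1j * s], [-1j * s, c]]

def ry(t):
    """Single-qubit rotation about the y-axis."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]]

def expectation_z(theta1, theta2):
    """<psi| sigma_z |psi> for |psi> = Ry(theta2) Rx(theta1) |0>."""
    psi = mat_vec(ry(theta2), mat_vec(rx(theta1), [1 + 0j, 0j]))
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2

theta1, theta2 = 0.7, 1.3
print(expectation_z(theta1, theta2))        # numerical expectation value
print(math.cos(theta1) * math.cos(theta2))  # closed form: the two agree
```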
<p>In the past, many works have been reported that address the DR problem using classical machine learning approaches on different datasets. Mansour [<xref ref-type="bibr" rid="ref-10">10</xref>] used a deep convolutional neural network with transfer learning for feature extraction when building a computer-aided diagnosis system for DR.</p>
<p>To avoid excessive time and resource consumption, Mohammadian [<xref ref-type="bibr" rid="ref-11">11</xref>] fine-tuned the Inception-V3 and Xception pre-trained models to classify the DR dataset into two classes. After using data augmentation to balance the dataset, an accuracy score of 87.12% on the Inception-V3 and 74.49% on the Xception model was reported. Wan et al. [<xref ref-type="bibr" rid="ref-12">12</xref>] implemented transfer learning and hyper-parameter tuning on the pre-trained models AlexNet, VggNet-s, VggNet-16, VggNet-19, GoogleNet and ResNet using the Kaggle dataset and compared their performances. The highest accuracy score was that of the VggNet-s model, which reached 95.68% when training with hyper-parameter tuning. Transfer learning has also been used to work around the problem of an insufficient training dataset for retinal vessel segmentation. Dutta et al. [<xref ref-type="bibr" rid="ref-13">13</xref>] used 2000 fundus images to train a shallow feed-forward neural network, a deep neural network and a VggNet-16 model. On a test dataset of 300 images, the shallow neural network scored an accuracy of 42% and the deep neural network scored 86.3%, while the VggNet-16 scored 78.3%.</p>
<p>It is quite evident that the majority of the work in diabetic retinopathy detection revolves around the use of various transfer learning models and performance comparison of these models. It is also observed that less emphasis has been given to improving the quality of the diabetic retinopathy dataset, which could lead to more accurate results. It is important to highlight that the reliability of results generated from a transfer learning model depends on the features of the dataset. Google&#x2019;s recent achievement of quantum supremacy marked the first glimpse of the promised power of quantum computing. This is reminiscent of how machine learning evolved towards deep learning with the advent of new computational capabilities. The new algorithms use parameterized quantum transformations called parameterized quantum circuits (PQCs), Quantum Neural Networks (QNNs), variational quantum circuits and dressed quantum circuits. In analogy with classical transfer learning, the parameters of a variational quantum circuit are then optimized with respect to a cost function <italic>via</italic> either black-box optimization heuristics or gradient-based methods.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>Limitations of Existing Works and Contributions</title>
<p>A tabular comparison (<xref ref-type="table" rid="table-1">Tab. 1</xref>) outlines the limitations and contributions of the existing works.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Comparative analysis of existing works and their limitations and contributions</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Author</th>
<th>Adopted models</th>
<th>Experimental out-comes</th>
</tr>
</thead>
<tbody>
<tr>
<td>DL, Mohammadian [<xref ref-type="bibr" rid="ref-11">11</xref>]</td>
<td>Inception-V3 and Xception pre-trained models</td>
<td>Accuracy score of 87.12% and 74.49% achieved</td>
</tr>
<tr>
<td>Wan et al. [<xref ref-type="bibr" rid="ref-12">12</xref>]</td>
<td>Pre-trained models AlexNet, VggNet-s, VggNet-16, VggNet-19, GoogleNet and ResNet</td>
<td>Highest accuracy score was that of VggNet-s model, which reached 95.68%</td>
</tr>
<tr>
<td>Dutta et al. [<xref ref-type="bibr" rid="ref-13">13</xref>]</td>
<td>Shallow feed forward neural network, deep neural network and VggNet-16 model.</td>
<td>Accuracies of 42%, 86.3% and 78.3%, respectively, achieved</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s4">
<label>4</label>
<title>Material and Method</title>
<sec id="s4_1">
<label>4.1</label>
<title>Dataset</title>
<p>The dataset used in our study is a publicly available retinal fundus image database from Kaggle (APTOS 2019 Blindness Detection) [<xref ref-type="bibr" rid="ref-14">14</xref>] consisting of 3662 images. This database was formed by technicians who traveled to rural areas to take images for ophthalmologists&#x2019; review and diagnosis, a time- and resource-consuming process. Therefore, in the current study this dataset is used to obtain a computer-aided ability to screen images for timely detection of the disease without the help of ophthalmologists. The obtained dataset was weeded and a clean dataset was created, labeled into two categories. To train our model, we have used 789 non-DR and 738 DR images. Validation of our model is carried out using 384 images, of which 198 are non-DR and 186 are DR images. We test our model on 1738 different, randomly selected images to evaluate the performance of both types of models, i.e., the classical transfer learning-based model and the quantum transfer learning-based model. The distribution of training data with labels of Non-DR and DR is presented in <?A3B2 "fig4",5,"anchor"?><xref ref-type="fig" rid="fig-4">Fig. 4</xref>.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Distribution of training data with labels of Non-DR and DR</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-4.png"/>
</fig>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Quantum Devices</title>
<p>The quantum computing devices used in our study are the PennyLane default device, the IBM Qiskit BasicAer device and the Google Cirq simulator device. These simulators are noiseless to avoid any error.</p>
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>Image Pre-Processing</title>
<p>In the current work, the APTOS 2019 Blindness Detection dataset is taken from Kaggle and labelled into two categories with the help of the file provided in the Kaggle documentation. Furthermore, the unbalanced images are resized to 350 by 350 pixels. These images are further processed to remove the extra black-pixel border so that each image can serve as input to our Inception-V3 pre-trained neural network. After this, we convert the images into tensors, because machine learning models always take input data in the form of tensors. We then normalize the channels with parameters such as the means [0.485, 0.456, 0.406] to remove any imbalance introduced during resizing [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>].</p>
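<p>The per-channel normalization step can be sketched in plain Python as follows. The mean values are the ones quoted above; the standard deviations [0.229, 0.224, 0.225] are the usual ImageNet statistics and are an assumption here, since the text does not list them.</p>

```python
# Per-channel normalization: pixel' = (pixel - mean) / std, applied to
# RGB values already scaled to the [0, 1] range.
MEAN = [0.485, 0.456, 0.406]  # channel means quoted in the text
STD = [0.229, 0.224, 0.225]   # assumed ImageNet standard deviations

def normalize_pixel(rgb):
    """Normalize one RGB pixel (values in [0, 1]) channel by channel."""
    return [(v - m) / s for v, m, s in zip(rgb, MEAN, STD)]

def normalize_image(image):
    """Normalize a nested-list image of shape [H][W][3]."""
    return [[normalize_pixel(px) for px in row] for row in image]

# A pixel equal to the channel means maps to zero in every channel.
print(normalize_pixel([0.485, 0.456, 0.406]))  # -> [0.0, 0.0, 0.0]
```

In the actual pipeline this corresponds to a torchvision `Normalize` transform applied after the tensor conversion.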
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Proposed Hybrid Quantum Transfer Learning Model</title>
<p>We have proposed a hybrid quantum transfer learning model in which the Inception-V3 pre-trained neural network is used for feature extraction. Inception-V3 is a pre-trained convolutional neural network, 48 layers deep, that is used here to reduce each image to a 2048-dimensional feature vector [<xref ref-type="bibr" rid="ref-1">1</xref>].</p>
<p>To classify these features with the help of a 4-qubit &#x201C;dressed quantum circuit&#x201D;, we have reduced the 2048-dimensional feature vector to 4 dimensions with a linear transformation [<xref ref-type="bibr" rid="ref-8">8</xref>]. The variational quantum classifier built for our problem is presented in <?A3B2 "fig5",5,"anchor"?><xref ref-type="fig" rid="fig-5">Fig. 5</xref>.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Variational quantum classifier of (four qubits)</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-5.png"/>
</fig>
<p>The following steps are performed to build the quantum classifier:</p>
<p>1. First, we initialize 4 qubits in the |0&#x27E9; state and then apply a Hadamard (H) gate to each of the 4 qubits to put them into an equal superposition of zero and one [<xref ref-type="bibr" rid="ref-1">1</xref>].</p>
<p>2. Then we apply an additional transformation to encode our classical data with a unitary circuit. To perform this operation, we encode our 4-dimensional feature vector as parameters (weights) into a circuit consisting of R<sub>y</sub>(&#x03C6;) gates and a U(&#x03B1;, &#x03B2;, &#x03B8;, &#x03B3;) circuit.
<disp-formula id="ueqn-2">
<mml:math id="mml-ueqn-2" display="block"><mml:mi>&#x03F5;</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msubsup><mml:mo>&#x2297;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mi>&#x03C0;</mml:mi><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mn>2</mml:mn><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2223;</mml:mo><mml:mn>0000</mml:mn><mml:mo>&#x003E;</mml:mo><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>3. We have a sequence of trainable variational layers, each consisting of an entanglement layer and a data-encoding circuit. The entanglement layer contains 3 CNOT gates, which entangle all four qubits
<disp-formula id="ueqn-3">
<mml:math id="mml-ueqn-3" display="block"><mml:mi>Q</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mn>6</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2218;</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mn>5</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2218;</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2218;</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2218;</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2218;</mml:mo><mml:msub><mml:mi>L</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></disp-formula></p>
<p><disp-formula id="ueqn-4">
<mml:math id="mml-ueqn-4" display="block"><mml:mi>L</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>w</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x003A;</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mo>&#x2192;</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mo>=</mml:mo><mml:mi>K</mml:mi><mml:msubsup><mml:mo>&#x2297;</mml:mo><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msubsup><mml:msub><mml:mi>R</mml:mi><mml:mrow><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>x</mml:mi><mml:mo>&#x003E;</mml:mo></mml:math></disp-formula></p>
<p><disp-formula id="ueqn-5">
<mml:math id="mml-ueqn-5" display="block"><mml:mi>K</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mi mathvariant="normal">CNOT</mml:mi><mml:mo>&#x2297;</mml:mo><mml:msub><mml:mi>&#x1D7D9;</mml:mi><mml:mrow><mml:mn>3</mml:mn><mml:mo>,</mml:mo><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x1D7D9;</mml:mi><mml:mrow><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2297;</mml:mo><mml:mi mathvariant="normal">CNOT</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>&#x1D7D9;</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x2297;</mml:mo><mml:mi mathvariant="normal">CNOT</mml:mi><mml:mo>&#x2297;</mml:mo><mml:msub><mml:mi>&#x1D7D9;</mml:mi><mml:mrow><mml:mn>4</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>4. Finally, we measure each of the 4 qubits to obtain the expectation value of the Pauli-Z operator.</p>
<p><disp-formula id="ueqn-6">
<mml:math id="mml-ueqn-6" display="block"><mml:mi>M</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnalign="left" rowspacing="4pt" columnspacing="1em"><mml:mtr><mml:mtd><mml:mo>&#x003C;</mml:mo><mml:mi>y</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>Z</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003E;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x003C;</mml:mo><mml:mi>y</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>Z</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003E;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x003C;</mml:mo><mml:mi>y</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>Z</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003E;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x003C;</mml:mo><mml:mi>y</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>&#x1D7D9;</mml:mi><mml:mo>&#x2297;</mml:mo><mml:mi>Z</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x003E;</mml:mo></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p><?A3B2 "fig6",5,"anchor"?><xref ref-type="fig" rid="fig-6">Fig. 6</xref> presents the complete flow of the proposed model, from the first block A (Inception-v3) to the final measurement block (Non-DR or DR (0, 1)).</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Flowchart of the proposed quantum transfer learning model</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-6.png"/>
</fig>
</sec>
<sec id="s6">
<label>6</label>
<title>Experimental Evaluation</title>
<p>We trained two models with the same training data set and hyperparameters: a classical model using classical transfer learning and a quantum model using quantum transfer learning. We set the learning rate to 0.0004 for both models, used the Adam optimization algorithm, and used cross-entropy as the loss function. The models were run in an online Google Colab notebook.</p>
<p>We evaluated our models with five standard metrics, accuracy, precision, recall, F1-score and specificity, computed with the following formulas.</p>
<p><disp-formula id="eqn-1">
<label>(1)</label>
<mml:math id="mml-eqn-1" display="block"><mml:mrow><mml:mi mathvariant="italic">A</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">u</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p><disp-formula id="eqn-2">
<label>(2)</label>
<mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:mi mathvariant="italic">P</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">n</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p><disp-formula id="eqn-3">
<label>(3)</label>
<mml:math id="mml-eqn-3" display="block"><mml:mrow><mml:mi mathvariant="italic">R</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p><disp-formula id="eqn-4">
<label>(4)</label>
<mml:math id="mml-eqn-4" display="block"><mml:mrow><mml:mi mathvariant="italic">S</mml:mi><mml:mi mathvariant="italic">p</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">f</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">t</mml:mi><mml:mi mathvariant="italic">y</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:msub><mml:mi>T</mml:mi><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mi>p</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p><disp-formula id="eqn-5">
<label>(5)</label>
<mml:math id="mml-eqn-5" display="block"><mml:mi>F</mml:mi><mml:mn>1</mml:mn><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mfrac><mml:mrow><mml:mrow><mml:mi mathvariant="italic">R</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mo>&#x00D7;</mml:mo><mml:mrow><mml:mi mathvariant="italic">P</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">n</mml:mi></mml:mrow></mml:mrow><mml:mrow><mml:mrow><mml:mi mathvariant="italic">R</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">a</mml:mi><mml:mi mathvariant="italic">l</mml:mi><mml:mi mathvariant="italic">l</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mi mathvariant="italic">P</mml:mi><mml:mi mathvariant="italic">r</mml:mi><mml:mi mathvariant="italic">e</mml:mi><mml:mi mathvariant="italic">c</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">s</mml:mi><mml:mi mathvariant="italic">i</mml:mi><mml:mi mathvariant="italic">o</mml:mi><mml:mi mathvariant="italic">n</mml:mi></mml:mrow></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
<p>Where T<sub>p</sub> = True Positive, F<sub>p</sub> = False Positive, T<sub>N</sub> = True Negative, F<sub>N</sub> = False Negative.</p>
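<p>The five metrics in Eqs. (1)&#x2013;(5) follow directly from the confusion-matrix counts; a small self-check with made-up counts (not the paper's results):</p>

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, specificity and F1 from raw counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, precision, recall, specificity, f1

# Illustrative counts only, not taken from Fig. 7.
acc, prec, rec, spec, f1 = metrics(tp=90, fp=10, tn=85, fn=15)
print(f"acc={acc:.4f} prec={prec:.4f} rec={rec:.4f} "
      f"spec={spec:.4f} f1={f1:.4f}")
```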
<p><?A3B2 "fig7",5,"anchor"?><xref ref-type="fig" rid="fig-7">Fig. 7</xref> shows the confusion matrices of our four models. The first matrix is from the classical transfer learning model; the other three are from quantum transfer learning models trained on the Google Cirq simulator, the IBM Qiskit BasicAer device and the PennyLane default device [<xref ref-type="bibr" rid="ref-14">14</xref>].</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>Confusion matrices of the classical model and the hybrid quantum models in (a), (b), (c), (d)</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-7.png"/>
</fig>
<p>The performance of the classical and hybrid quantum models over 5 epochs is presented in <?A3B2 "fig8",5,"anchor"?><xref ref-type="fig" rid="fig-8">Figs. 8</xref> and <?A3B2 "fig9",5,"anchor"?><xref ref-type="fig" rid="fig-9">9</xref>. Our results show that the accuracy of the classical model, the PennyLane default device, the IBM Qiskit BasicAer device and the Google Cirq simulator is 85.14%, 91.48%, 93.25% and 94.11% respectively. The Google Cirq simulator therefore achieves the highest accuracy, 94.11%, among the hybrid and classical models. The validation accuracy of our models is presented in <xref ref-type="fig" rid="fig-9">Fig. 9</xref> [<xref ref-type="bibr" rid="ref-15">15</xref>].</p>
<p><?A3B2 "tbl2",5,"anchor"?><xref ref-type="table" rid="table-2">Tab. 2</xref> compares the classical model and the hybrid quantum models (trained on PennyLane, Qiskit BasicAer and the Cirq simulator) on five standard metrics, accuracy, precision, recall, F1-score and specificity, over just 5 epochs. <xref ref-type="table" rid="table-2">Tab. 2</xref> also shows that the Cirq simulator has superior performance on all of these metrics compared with the other simulators and models, including the classical computer and PennyLane.</p>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Accuracy rate of classical model and hybrid quantum model</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-8.png"/>
</fig>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>(a) Validation accuracy vs. number of epochs. (b) Validation loss of the quantum devices and the classical computer vs. number of epochs</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-9.png"/>
</fig>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Comparison based on standard test</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th colspan="6"><bold>Quantum models and Classical models</bold><break/>(41.69% of data used during training)</th>
</tr>
<tr>
<td>Sr</td>
<td>Evaluation</td>
<td>PennyLane</td>
<td>BasicAer Qiskit</td>
<td>Cirq simulator</td>
<td>Classical computer</td>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Accuracy rate</td>
<td>91.48%</td>
<td>93.25%</td>
<td>94.11%</td>
<td>85.14%</td>
</tr>
<tr>
<td>2</td>
<td>Precision rate</td>
<td>94.74%</td>
<td>97.43%</td>
<td>95.59%</td>
<td>97.43%</td>
</tr>
<tr>
<td>3</td>
<td>Recall</td>
<td>87.96%</td>
<td>89.14%</td>
<td>92.10%</td>
<td>76.93%</td>
</tr>
<tr>
<td>4</td>
<td>F1 Score</td>
<td>91.22%</td>
<td>93.09%</td>
<td>93.80%</td>
<td>85.95%</td>
</tr>
<tr>
<td>5</td>
<td>Specificity</td>
<td>88.62%</td>
<td>89.59%</td>
<td>92.81%</td>
<td>74.35%</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In <?A3B2 "tbl3",5,"anchor"?><xref ref-type="table" rid="table-3">Tab. 3</xref>, we compare the overall model accuracy for the DR and Non-DR labels across PennyLane, Qiskit BasicAer, the Cirq simulator and the classical computer. <xref ref-type="table" rid="table-3">Tab. 3</xref> shows that the overall accuracy is found to be 95% for 0-No DR and 88% for 1-DR. This shows that our transfer learning approach can detect DR with high accuracy and few errors. The technique could further be linked to mobile applications to enable DR screening at the local level with the help of a specialist doctor.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Comparison of per-class accuracy (No DR vs. DR)</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th colspan="6"><bold>Quantum models and Classical model</bold><break/>(41.69% of data used during training)</th>
</tr>
<tr>
<td>Sr</td>
<td>Evaluation</td>
<td>PennyLane</td>
<td>BasicAer Qiskit</td>
<td>Cirq simulator</td>
<td>Classical computer</td>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0-No DR</td>
<td>95%</td>
<td>96%</td>
<td>95%</td>
<td>97%</td>
</tr>
<tr>
<td>2</td>
<td>1-DR</td>
<td>88%</td>
<td>89%</td>
<td>92%</td>
<td>76%</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s7">
<label>7</label>
<title>Statistical Distribution Study for Diabetic Retinopathy (DR) Data</title>
<p>Probability distributions are used in statistics to detect changes in the trend of data. If a probability distribution fits the data accurately, it helps to detect such changes at an early stage. In this section we fit five probability distributions, namely the Reflected Power function distribution (RPFD), the Kumaraswamy Lehmann-2 Power function distribution (KL2PFD), the Beta Lehmann-2 Power function distribution (BL2PFD), the Weighted Power function distribution (WPFD) and the Exponentiated Generalized Power function distribution (EGPFD), which have been developed and used for medical diagnosis in the literature [<xref ref-type="bibr" rid="ref-17">17</xref>&#x2013;<xref ref-type="bibr" rid="ref-20">20</xref>], to find the one that best describes the diabetic retinopathy diagnostic data. Early diagnosis of patients suffering from retinopathy enables the medical team to treat the disease at an early stage.</p>
<p>The probability density function (pdf) of the proposed Reflected Power function distribution (RPFD) for the diabetic retinopathy data is given as</p>
<p><disp-formula id="eqn-6">
<label>(6)</label>
<mml:math id="mml-eqn-6" display="block"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>x</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:msup><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup></mml:mfrac><mml:mo>,</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>,</mml:mo><mml:mrow><mml:mi mathvariant="normal">a</mml:mi><mml:mi mathvariant="normal">n</mml:mi><mml:mi mathvariant="normal">d</mml:mi></mml:mrow><mml:mspace width="thickmathspace" /><mml:mi>&#x03B2;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B8;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0.</mml:mn></mml:math></disp-formula></p>
<p>The probability density function (pdf) of the Kumaraswamy Lehmann-2 Power function distribution (KL2PFD) is given as</p>
<p><disp-formula id="eqn-7">
<label>(7)</label>
<mml:math id="mml-eqn-7" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mi>&#x03C6;</mml:mi><mml:mi>&#x03B8;</mml:mi><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mi>x</mml:mi><mml:mi>&#x03B2;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msup><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mi>x</mml:mi><mml:mi>&#x03B2;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B8;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mi>x</mml:mi><mml:mi>&#x03B2;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B8;</mml:mi></mml:mrow></mml:msup><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi></mml
:mrow></mml:msup><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03C6;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mtd></mml:mtr><mml:mtr><mml:mtd /><mml:mtd><mml:mspace width="1em" /><mml:mfrac><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:msup><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup></mml:mfrac><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo>&#x003C;</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
<p>Where &#x201C;&#x03B3;&#x201D; and <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mo>&quot;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03B2;</mml:mi></mml:mrow><mml:mo>&quot;</mml:mo></mml:math></inline-formula> are the shape and scale parameters, respectively. Also, the parameters &#x201C;<inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula>&#x201D; and &#x201C;<inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mi>&#x03C6;</mml:mi></mml:math></inline-formula>&#x201D; are the tuning shape parameters.</p>
<p>The probability density function (pdf) of the Beta Lehmann-2 Power function distribution (BL2PFD) is given as</p>
<p><disp-formula id="eqn-8">
<label>(8)</label>
<mml:math id="mml-eqn-8" display="block"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mi>x</mml:mi><mml:mi>&#x03B2;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>a</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mi>x</mml:mi><mml:mi>&#x03B2;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>b</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi>&#x03B1;</mml:mi><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mi>x</mml:mi><mml:mi>&#x03B2;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mfrac><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:msup><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup></mml:mfrac></mml:mrow><mml:mrow><mml:mi>B</mml:
mi><mml:mo stretchy="false">(</mml:mo><mml:mi>a</mml:mi><mml:mo>,</mml:mo><mml:mi>b</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mfrac><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo>&#x003C;</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn></mml:math></disp-formula>where &#x201C;&#x03B3;&#x201D; and <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mo>&quot;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03B2;</mml:mi></mml:mrow><mml:mo>&quot;</mml:mo></mml:math></inline-formula> are the shape and scale parameters, respectively. Also, the parameters &#x201C;a&#x201D; and &#x201C;b&#x201D; are the tuning shape parameters.</p>
<p>The probability density function (pdf) of the Weighted Power function distribution (WPFD) is given as</p>
<p><disp-formula id="eqn-9">
<label>(9)</label>
<mml:math id="mml-eqn-9" display="block"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x03B3;</mml:mi><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:msup><mml:mi>&#x03B2;</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup></mml:mfrac><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo>&#x003C;</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0.</mml:mn></mml:math></disp-formula>where &#x201C;&#x03B3;&#x201D; and <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mo>&quot;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03B2;</mml:mi></mml:mrow><mml:mo>&quot;</mml:mo></mml:math></inline-formula> are the shape and scale parameters.</p>
<p>The probability density function (pdf) of the Exponentiated Generalized Power function distribution (EGPFD) is given as</p>
<p><disp-formula id="eqn-10">
<label>(10)</label>
<mml:math id="mml-eqn-10" display="block"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mi>&#x03B2;</mml:mi><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mi>x</mml:mi><mml:mi>&#x03C6;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msup><mml:mrow><mml:mo>[</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x2212;</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:mfrac><mml:mi>x</mml:mi><mml:mi>&#x03C6;</mml:mi></mml:mfrac><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup><mml:mo>}</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B1;</mml:mi></mml:mrow></mml:msup><mml:mo>]</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mfrac><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:msup><mml:mi>x</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x2212;</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:msup><mml:mi>&#x03C6;</mml:mi><mml:mrow><mml:mi>&#x03B3;</mml:mi></mml:mrow></mml:msup></mml:mfrac><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo>&#x003C;</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mi>&#x03C6;</mml:mi><mml:mo>,</mml:mo><mml:mi>&#x03B1;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>&#x03B2;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>&#x03B3;</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>0.</mml:mn></mml:math></disp-formula>where &#x201C;&#x03B3;&#x201D; and <inline-formula id="ieqn-9"><mml:math 
id="mml-ieqn-9"><mml:mo>&quot;</mml:mo><mml:mrow><mml:mi mathvariant="normal">&#x03C6;</mml:mi></mml:mrow><mml:mo>&quot;</mml:mo></mml:math></inline-formula> are the shape and scale parameters. Also, the parameters &#x201C;&#x03B1;&#x201D; and &#x201C;&#x03B2;&#x201D; are the tuning shape parameters.</p>
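<p>As a quick numerical sanity check that the simpler densities above integrate to one over their stated supports, a short sketch implementing Eq. (6) for the RPFD and Eq. (9) for the WPFD; the parameter values are arbitrary, chosen only for illustration:</p>

```python
import numpy as np

def rpfd_pdf(x, gamma, beta, theta):
    """Reflected Power function density, Eq. (6): theta - beta < x < theta."""
    return gamma * (theta - x) ** (gamma - 1) / beta ** gamma

def wpfd_pdf(x, gamma, beta):
    """Weighted Power function density, Eq. (9): 0 < x < beta."""
    return 2 * gamma * x ** (2 * gamma - 1) / beta ** (2 * gamma)

gamma, beta, theta = 2.5, 1.0, 1.0   # arbitrary illustrative values

# Midpoint-rule integration over each support.
edges = np.linspace(theta - beta, theta, 200_001)
mids, dx = (edges[:-1] + edges[1:]) / 2, edges[1] - edges[0]
area_rpfd = float(np.sum(rpfd_pdf(mids, gamma, beta, theta)) * dx)

edges = np.linspace(0.0, beta, 200_001)
mids, dx = (edges[:-1] + edges[1:]) / 2, edges[1] - edges[0]
area_wpfd = float(np.sum(wpfd_pdf(mids, gamma, beta)) * dx)

print(round(area_rpfd, 4), round(area_wpfd, 4))  # both should be ≈ 1.0
```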
<p>In this section, we analyze the diabetic retinopathy data using statistical modeling. We derive the parameter estimates and their standard errors for each distribution using the modified maximum likelihood method; the results are presented in <?A3B2 "tbl4",5,"anchor"?><xref ref-type="table" rid="table-4">Tab. 4</xref>. We compare the proposed probability distributions using goodness-of-fit measures: the Akaike information criterion (AIC), the corrected Akaike information criterion (CAIC), the Bayesian information criterion (BIC), the Hannan-Quinn information criterion (HQIC) and the log-likelihood (LogL).</p>
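<p>These ranking criteria are simple functions of the maximized log-likelihood, the number of fitted parameters k and the sample size n. A sketch using the standard textbook forms (the exact variants used in [17&#x2013;20] may differ slightly), with hypothetical input values:</p>

```python
import math

def information_criteria(loglik, k, n):
    """Standard forms of AIC, CAIC, BIC and HQIC for model comparison."""
    aic = 2 * k - 2 * loglik
    caic = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = k * math.log(n) - 2 * loglik
    hqic = 2 * k * math.log(math.log(n)) - 2 * loglik
    return {"AIC": aic, "CAIC": caic, "BIC": bic, "HQIC": hqic}

# Hypothetical values: a one-parameter fit (like the WPFD) on n = 100 points.
ic = information_criteria(loglik=-120.5, k=1, n=100)
print(ic)  # lower values indicate a better-fitting distribution
```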
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>The estimates for parameters of distribution and their standard errors (in parentheses)</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Distribution</th>
<th colspan="4">Estimates</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td><inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi>&#x03B3;</mml:mi></mml:math></inline-formula></td>
<td><inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mi>&#x03B2;</mml:mi></mml:math></inline-formula></td>
<td><inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>&#x03B1;</mml:mi></mml:math></inline-formula></td>
<td><inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mi>&#x03B8;</mml:mi></mml:math></inline-formula></td>
</tr>
<tr>
<td>RPFD</td>
<td>8.271522<break/>(0.7339789)</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td>KL2PFD</td>
<td>0.3475206<break/>(0.1753739)</td>
<td>2.4412303<break/>(0.9420926)</td>
<td>4.2730098<break/>(3.0516881)</td>
<td>2.1817279<break/>(1.1363113)</td>
</tr>
<tr>
<td>BL2PFD</td>
<td>0.3678653<break/>(0.3853524)</td>
<td>5.0156976<break/>(6.1546671)</td>
<td>1.7911449<break/>(5.5816309)</td>
<td>2.8191529<break/>(7.4576940)</td>
</tr>
<tr>
<td>WPFD</td>
<td>0.1895936<break/>(0.01682325)</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td>EGPFD</td>
<td>0.4088129<break/>(0.2152832)</td>
<td>4.9477360<break/>(4.5178964)</td>
<td>4.6112238<break/>(1.2363104)</td>
<td>&#x2013;</td>
</tr>
</tbody>
</table>
</table-wrap>
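<p>As an illustration of the likelihood fitting behind Tab. 4, consider the special case &#x03B1; = &#x03B2; = 1 of Eq. (10), which reduces to the plain power function density; with the scale &#x03C6; treated as known, the maximum likelihood estimator of &#x03B3; then has a closed form. The sketch below (simulated data with hypothetical parameter values, not the study data) recovers the true shape parameter:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
phi, gamma_true = 2.0, 1.5

# Simulate from the power function distribution by inverse-CDF sampling,
# using F(x) = (x / phi) ** gamma on (0, phi).
x = phi * rng.uniform(size=5000) ** (1.0 / gamma_true)

# With alpha = beta = 1, maximizing the log-likelihood of Eq. (10) in gamma
# gives the closed-form MLE: gamma_hat = n / sum(log(phi / x_i)).
gamma_hat = len(x) / np.log(phi / x).sum()
```

For the full four-parameter model no closed form exists, which is why Tab. 4 relies on numerical (modified) maximum likelihood estimation.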
<p>The TTT plot is presented in <?A3B2 "fig10",5,"anchor"?><xref ref-type="fig" rid="fig-10">Fig. 10</xref>. The scaled TTT transform is first concave and then convex, which indicates an upside-down bathtub-shaped hazard rate function (HRF). Hence, the above-mentioned probability distributions are suitable candidates for the diabetic retinopathy data.</p>
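<p>The TTT plot in Fig. 10 is based on the empirical scaled total-time-on-test transform. A brief sketch (our own illustration, shown here on simulated Weibull data) computes the transform from a sorted sample so its concave-then-convex shape can be inspected:</p>

```python
import numpy as np

def scaled_ttt(sample):
    """Return (i/n, T(i/n)), the empirical scaled TTT transform.
    A concave-then-convex curve suggests an upside-down bathtub HRF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    total = x.sum()
    # T(i/n) = (sum of the i smallest order statistics + (n - i) * x_(i)) / total
    g = np.array([(x[:i].sum() + (n - i) * x[i - 1]) / total
                  for i in range(1, n + 1)])
    return np.arange(1, n + 1) / n, g

u, g = scaled_ttt(np.random.default_rng(0).weibull(2.0, size=200))
# By construction the transform is nondecreasing and ends at 1.
```

Plotting g against u and comparing the curve with the 45-degree line yields the diagnostic shown in Fig. 10.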
<p><xref ref-type="table" rid="table-4">Tabs. 4</xref> and <?A3B2 "tbl5",5,"anchor"?><xref ref-type="table" rid="table-5">5</xref> and <?A3B2 "fig11",5,"anchor"?><xref ref-type="fig" rid="fig-11">Fig. 11</xref> compare the performances of the proposed distributions on the diabetic retinopathy diagnostic data. The reflected power function distribution (RPFD) describes the data best and can therefore be adopted for further statistical analysis. Accordingly, we combine the hybrid quantum transfer learning approach of Section 6 with the reflected power function distribution to provide a statistical basis for the early detection of patients at risk of developing diabetic retinopathy.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>TTT plot and Boxplot for DR data</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-10.png"/>
</fig>
<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>Goodness of fit measures for Diabetic Retinopathy diagnostic data</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Distribution</th>
<th>AIC</th>
<th>CAIC</th>
<th>BIC</th>
<th>HQIC</th>
<th>&#x2212;logL</th>
</tr>
</thead>
<tbody>
<tr>
<td>RPFD</td>
<td>798.6368</td>
<td>798.6688</td>
<td>801.481</td>
<td>799.7923</td>
<td>398.3184</td>
</tr>
<tr>
<td>EGPFD</td>
<td>808.0444</td>
<td>808.2395</td>
<td>816.577</td>
<td>811.5111</td>
<td>400.0222</td>
</tr>
<tr>
<td>KL2PFD</td>
<td>810.2859</td>
<td>810.6137</td>
<td>821.6626</td>
<td>814.9081</td>
<td>400.6429</td>
</tr>
<tr>
<td>BL2PFD</td>
<td>810.5365</td>
<td>810.8644</td>
<td>821.9133</td>
<td>815.1588</td>
<td>401.2683</td>
</tr>
<tr>
<td>WPFD</td>
<td>942.4546</td>
<td>942.4866</td>
<td>945.2988</td>
<td>943.6102</td>
<td>470.2273</td>
</tr>
</tbody>
</table>
</table-wrap>
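<p>The criteria in Tab. 5 follow the standard definitions based on the maximized log-likelihood, the number of estimated parameters k and the sample size n. As a sketch, the RPFD row can be reproduced; note that the value n = 127 used below is an assumption inferred from the reported criteria, not a figure stated in the text:</p>

```python
import math

def gof_criteria(neg_logL, k, n):
    """AIC, CAIC, BIC and HQIC from the minimized -logL,
    the number of parameters k and the sample size n."""
    aic = 2 * k + 2 * neg_logL
    return {
        "AIC": aic,
        "CAIC": aic + 2 * k * (k + 1) / (n - k - 1),
        "BIC": k * math.log(n) + 2 * neg_logL,
        "HQIC": 2 * k * math.log(math.log(n)) + 2 * neg_logL,
    }

# RPFD has one shape parameter and -logL = 398.3184 (Tab. 5);
# n = 127 is an assumed sample size, chosen to match the reported criteria.
crit = gof_criteria(neg_logL=398.3184, k=1, n=127)
```

Lower values of each criterion indicate a better fit, which is why the RPFD row dominates Tab. 5.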
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>Estimated pdf and cdf plots for Train Diagnosis Data</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_22524-fig-11.png"/>
</fig>
</sec>
<sec id="s8">
<label>8</label>
<title>Conclusion</title>
<p>A hybrid quantum transfer learning approach is adopted to model early DR detection. Our results show that the Google Cirq simulator achieves the highest model accuracy, whereas the previously used classical training model exhibits a noticeably lower accuracy rate. We therefore report the superiority of the quantum models in terms of both performance and speed. During training, the PennyLane default device requires considerably less time than the other models, so its overall time performance is very good. This work suggests that, although there is some variation in performance among these quantum devices, they all achieve a high performance rate compared with the classical model, a finding that is also verified by statistical methods. This performance analysis indicates that the proposed computer-aided technique can be deployed in mobile applications for the timely detection of DR in rural areas.</p>
</sec>
</body>
<back>
<ack>
<p>The authors would like to thank PennyLane, IBM Qiskit and Google Cirq for providing access to the resources and quantum devices used to simulate our models.</p>
</ack>
<fn-group>
<fn fn-type="other"><p><bold>Funding Statement:</bold> The authors received no specific funding for this study.</p></fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p></fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Mari</surname></string-name>, <string-name><given-names>T. R.</given-names> <surname>Bromley</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Izaac</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Schuld</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Killoran</surname></string-name></person-group>, &#x201C;<article-title>Transfer learning in hybrid classical-quantum neural networks</article-title>,&#x201D; <source>Quantum</source>, vol. <volume>4</volume>, pp. <fpage>340</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>T. R.</given-names> <surname>Gadekallu</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Khare</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Bhattacharya</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Singh</surname></string-name>, <string-name><given-names>P. K. R.</given-names> <surname>Maddikunta</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Early detection of diabetic retinopathy using PCA-Firefly based deep learning Model</article-title>,&#x201D; <source>Electronics</source>, vol. <volume>9</volume>, no. <issue>2</issue>, pp. <fpage>274</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Ramchandre</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Patil</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Pharande</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Javali</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Pande</surname></string-name></person-group>, &#x201C;<article-title>A deep learning approach for diabetic retinopathy detection using transfer learning</article-title>,&#x201D; in <conf-name>IEEE Int. Conf. for Innovation in Technology (INOCON)</conf-name>, <publisher-loc>Bangalore, India</publisher-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>5</lpage>, <year>2020</year>. </mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R.</given-names> <surname>Raina</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Battle</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Packer</surname></string-name> and <string-name><given-names>A. Y.</given-names> <surname>Ng</surname></string-name></person-group>, &#x201C;<article-title>Self-taught learning: Transfer learning from unlabeled data</article-title>,&#x201D; in <conf-name>Proc. of The 24th Int. Conf. on Machine Learning</conf-name>, <publisher-loc>Corvallis Oregon, USA</publisher-loc>, pp. <fpage>759</fpage>&#x2013;<lpage>766</lpage>, <year>2007</year>. </mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Cui</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Guo</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Wang</surname></string-name> and <string-name><given-names>Z.</given-names> <surname>Wang</surname></string-name></person-group>, &#x201C;<article-title>Classification of digital pathological images of non-Hodgkin&#x0027;s lymphoma subtypes based on the fusion of transfer learning and principal component analysis</article-title>,&#x201D; <source>Medical Physics</source>, vol. <volume>47</volume>, no. <issue>9</issue>, pp. <fpage>4241</fpage>&#x2013;<lpage>4253</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M. K. A.</given-names> <surname>Kabbar</surname></string-name></person-group>, &#x201C;<article-title>The three different data analysis methods for finding the best classification rates for Arabic sign language data</article-title>,&#x201D; in <conf-name>Proc. of The AMS Sectional Meeting: AMS Special Session</conf-name>, <publisher-loc>El Paso, USA</publisher-loc>, <year>2020</year>. </mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M. T.</given-names> <surname>Hagos</surname></string-name> and <string-name><given-names>S.</given-names> <surname>Kant</surname></string-name></person-group>, &#x201C;<article-title>Transfer learning based detection of diabetic retinopathy from small dataset</article-title>,&#x201D; <year>2019</year>. Available: <uri>http://arxiv.org/abs/1905.07203v2</uri>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>Acar</surname></string-name> and <string-name><given-names>I.</given-names> <surname>Yilmaz</surname></string-name></person-group>, &#x201C;<article-title>COVID-19 detection on IBM quantum computer with classical-quantum transfer learning</article-title>,&#x201D; <source>Turkish Journal of Electrical Engineering &#x0026; Computer Sciences</source>, vol. <volume>29</volume>, no. <issue>1</issue>, pp. <fpage>46</fpage>&#x2013;<lpage>61</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>I.</given-names> <surname>Griol-Barres</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Milla</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Cebri&#x00E1;n</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Mansoori</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Millet</surname></string-name></person-group>, &#x201C;<article-title>Variational quantum circuits for machine learning. An application for the detection of weak Signals</article-title>,&#x201D; <source>Applied Science</source>, vol. <volume>11</volume>, no. <issue>14</issue>, pp. <fpage>6427</fpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>R. F.</given-names> <surname>Mansour</surname></string-name></person-group>, &#x201C;<article-title>Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy</article-title>,&#x201D; <source>Biomedical Engineering Letters</source>, vol. <volume>8</volume>, no. <issue>1</issue>, pp. <fpage>41</fpage>&#x2013;<lpage>57</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Mohammadian</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Karsaz</surname></string-name> and <string-name><given-names>Y. M.</given-names> <surname>Roshan</surname></string-name></person-group>, &#x201C;<article-title>Comparative study of fine-tuning of pre-trained convolutional neural networks for diabetic retinopathy screening</article-title>,&#x201D; in <conf-name>24th National and 2nd Int. Iranian Conf. on Biomedical Engineering (ICBME)</conf-name>, <publisher-loc>Tehran, Iran</publisher-loc>, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <year>2017</year>. </mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Wan</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Liang</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name></person-group>, &#x201C;<article-title>Deep convolutional neural networks for diabetic retinopathy detection by image classification</article-title>,&#x201D; <source>Computers &#x0026; Electrical Engineering</source>, vol. <volume>72</volume>, no. <issue>10</issue>, pp. <fpage>274</fpage>&#x2013;<lpage>282</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Dutta</surname></string-name>, <string-name><given-names>B. C.</given-names> <surname>Manideep</surname></string-name>, <string-name><given-names>S. M.</given-names> <surname>Basha</surname></string-name>, <string-name><given-names>R. D.</given-names> <surname>Caytiles</surname></string-name> and <string-name><given-names>N. C. S.</given-names> <surname>Iyengar</surname></string-name></person-group>, &#x201C;<article-title>Classification of diabetic retinopathy images by using deep learning models</article-title>,&#x201D; <source>Int. Journal of Grid and Distributed Computing</source>, vol. <volume>11</volume>, no. <issue>1</issue>, pp. <fpage>99</fpage>&#x2013;<lpage>106</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab>Asia Pacific Tele-Ophthalmology Society (APTOS)</collab></person-group>, &#x201C;<article-title>APTOS 2019 blindness detection: Detect diabetic retinopathy to stop blindness before it&#x0027;s too late</article-title>,&#x201D; Available: <uri xlink:href="https://www.kaggle.com/c/aptos2019-blindness-detection/data">https://www.kaggle.com/c/aptos2019-blindness-detection/data</uri>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Adhikary</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Dangwal</surname></string-name> and <string-name><given-names>D.</given-names> <surname>Bhowmik</surname></string-name></person-group>, &#x201C;<article-title>Supervised learning with a quantum classifier using multi-level systems</article-title>,&#x201D; <source>Quantum Information Processing</source>, vol. <volume>19</volume>, no. <issue>3</issue>, pp. <fpage>1895</fpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>He</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Ren</surname></string-name> and <string-name><given-names>J.</given-names> <surname>Sun</surname></string-name></person-group>, &#x201C;<article-title>Deep residual learning for image recognition</article-title>,&#x201D; in <conf-name>Proc. of The IEEE Conf. on Computer Vision and Pattern Recognition</conf-name>, <publisher-loc>Las Vegas, USA</publisher-loc>, pp. <fpage>770</fpage>&#x2013;<lpage>778</lpage>, <year>2016</year>. </mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Zaka</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Akhter</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Jabeen</surname></string-name></person-group>, &#x201C;<article-title>The new reflected power function distribution: Theory, simulation &#x0026; application</article-title>,&#x201D; <source>AIMS Mathematics</source>, vol. <volume>5</volume>, no. <issue>5</issue>, pp. <fpage>5031</fpage>&#x2013;<lpage>5054</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Zaka</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Akhter</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Jabeen</surname></string-name></person-group>, &#x201C;<article-title>Beta Lehmann-2 power function distribution with application to bladder cancer susceptibility and failure times of air-conditioned system</article-title>,&#x201D; <source>Indian Journal of Science and Technology</source>, vol. <volume>13</volume>, no. <issue>23</issue>, pp. <fpage>2371</fpage>&#x2013;<lpage>2386</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Zaka</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Akhter</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Jabeen</surname></string-name></person-group>, &#x201C;<article-title>The exponentiated generalized power function distribution</article-title>,&#x201D; <source>Advances and Applications in Statistics</source>, vol. <volume>61</volume>, no. <issue>1</issue>, pp. <fpage>33</fpage>&#x2013;<lpage>63</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Zaka</surname></string-name>, <string-name><given-names>A. S.</given-names> <surname>Akhter</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Jabeen</surname></string-name></person-group>, &#x201C;<article-title>A view on characterizations of the J shaped statistical distribution</article-title>,&#x201D; <source>Indian Journal of Science and Technology</source>, vol. <volume>13</volume>, no. <issue>32</issue>, pp. <fpage>3327</fpage>&#x2013;<lpage>3338</lpage>, <year>2020</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>
