<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">28190</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2022.028190</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Robust and High Accuracy Algorithm for Detection of Pupil Images</article-title>
<alt-title alt-title-type="left-running-head">Robust and High Accuracy Algorithm for Detection of Pupil Images</alt-title>
<alt-title alt-title-type="right-running-head">Robust and High Accuracy Algorithm for Detection of Pupil Images</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Nahal</surname><given-names>Waleed El</given-names>
</name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Zaini</surname><given-names>Hatim G.</given-names>
</name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Zaini</surname><given-names>Raghad H.</given-names>
</name><xref ref-type="aff" rid="aff-3">3</xref></contrib>
<contrib id="author-4" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Ghoneim</surname><given-names>Sherif S. M.</given-names>
</name><xref ref-type="aff" rid="aff-4">4</xref><email>s.ghoneim@tu.edu.sa</email>
</contrib>
<contrib id="author-5" contrib-type="author">
<name name-style="western"><surname>Hassan</surname><given-names>Ashraf Mohamed Ali</given-names>
</name><xref ref-type="aff" rid="aff-5">5</xref></contrib>
<aff id="aff-1"><label>1</label><institution>Electronics and Communications Engineering Department, Faculty of Engineering, MSA University</institution>, <addr-line>CO, 12585</addr-line>, <country>Egypt</country></aff>
<aff id="aff-2"><label>2</label><institution>Computer Engineering Department, College of Computer and Information Technology, Taif University</institution>, <addr-line>Taif, 21944</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-3"><label>3</label><institution>Faculty of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University</institution>, <addr-line>Riyadh</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-4"><label>4</label><institution>Department of Electrical Engineering, College of Engineering, Taif University</institution>, <addr-line>Taif, 21944</addr-line>, <country>Saudi Arabia</country></aff>
<aff id="aff-5"><label>5</label><institution>Electronics and Communications Engineering Department, Faculty of Engineering, Sinai University</institution>, <addr-line>Arish, CO, 45511</addr-line>, <country>Egypt</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Sherif S. M. Ghoneim. Email: <email>s.ghoneim@tu.edu.sa</email></corresp>
</author-notes>
<pub-date pub-type="epub" date-type="pub" iso-8601-date="2022-05-16"><day>16</day>
<month>05</month>
<year>2022</year></pub-date>
<volume>73</volume>
<issue>1</issue>
<fpage>33</fpage>
<lpage>50</lpage>
<history>
<date date-type="received">
<day>04</day>
<month>2</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>21</day>
<month>3</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2022 Nahal et al.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Nahal et al.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_28190.pdf"></self-uri>
<abstract>
<p>Recently, many researchers have tried to develop a robust, fast, and accurate algorithm for eye tracking and pupil position detection in applications such as head-mounted eye tracking, gaze-based human-computer interaction, medical applications (for example, for deaf and diabetic patients), and attention analysis. Many real-world conditions challenge the eye's appearance, such as illumination changes, reflections, and occlusions, as do individual differences in eye physiology and other sources of noise, such as contact lenses or make-up. The present work introduces a pupil detection algorithm that is more robust and more accurate than previous attempts and is suitable for real-time analytics applications. The proposed circular Hough transform with morphing Canny edge detection for pupillometry (CHMCEP) algorithm can handle even blurred or noisy images: it applies different filtering methods in a pre-processing phase to remove blur and noise, and a second filtering process before the circular Hough transform fits the center, to ensure better accuracy. The performance of the proposed CHMCEP algorithm was tested against recent pupil detection methods. Simulations show that the proposed CHMCEP algorithm achieved detection rates of 87.11%, 78.54%, 58%, and 78% on the &#x015A;wirski, ExCuSe, ElSe, and labeled pupils in the wild (LPW) data sets, respectively. These results show that the proposed approach outperforms the other pupil detection methods by a large margin, providing exact and robust pupil positions on challenging ordinary eye images.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Pupil detection</kwd>
<kwd>eye tracking</kwd>
<kwd>pupil edge</kwd>
<kwd>morphing techniques</kwd>
<kwd>eye images dataset</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>Pupil detection has recently become one of the most important research topics, with many indoor and outdoor applications in real life [<xref ref-type="bibr" rid="ref-1">1</xref>]. Pupil detection algorithms can be classified into (1) indoor algorithms, which use eye images captured under lab conditions with infrared light [<xref ref-type="bibr" rid="ref-2">2</xref>], and (2) outdoor algorithms, which use eye images captured in natural environments or real-world conditions; much work has been devoted to outdoor eye-tracking applications such as driving [<xref ref-type="bibr" rid="ref-3">3</xref>] and shopping [<xref ref-type="bibr" rid="ref-4">4</xref>]. A pupil detection algorithm can also be divided into two main parts: (1) a pupil detection or segmentation part, which may use operations such as down-sampling, bright/dark pupil difference, image thresholding/binarization, morphing techniques, edge detection, and blob detection; and (2) a part that calculates the pupil center and radius, using methods such as circle/ellipse fitting or a proposed center-of-mass algorithm [<xref ref-type="bibr" rid="ref-5">5</xref>]. The large body of work on detecting and calculating the pupil center and radius can be summarized as follows. Kumar et al. [<xref ref-type="bibr" rid="ref-6">6</xref>] performed pupil detection or segmentation using a morphing technique, edge detection, and blob detection for edges. Lin et al. [<xref ref-type="bibr" rid="ref-7">7</xref>] (1) performed pupil detection or segmentation using down-sampling, image thresholding/binarization, a morphing technique, and edge detection, and (2) calculated the pupil center and radius using a proposed parallelogram center-of-mass algorithm. Agustin et al. [<xref ref-type="bibr" rid="ref-8">8</xref>] (1) performed pupil detection or segmentation using image thresholding/binarization and (2) calculated the pupil center and radius by ellipse fitting with a random sample consensus (RANSAC) algorithm. Keil et al. [<xref ref-type="bibr" rid="ref-9">9</xref>] (1) performed pupil detection or segmentation using image thresholding/binarization and glint edge detection, and (2) calculated the pupil center and radius using a proposed center-of-mass algorithm [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-5">5</xref>]. Sari et al. [<xref ref-type="bibr" rid="ref-10">10</xref>] presented, in 2016, a study of pupil diameter measurement algorithms in which (1) pupil detection or segmentation was done by image thresholding/binarization and (2) the pupil center and radius were calculated using the least-squares method to fit a circle equation, the Hough transform to create the pupil circle, and a proposed center-of-mass algorithm. Saif et al. [<xref ref-type="bibr" rid="ref-11">11</xref>] proposed, in 2017, a study of pupil orientation and detection in which (1) pupil detection or segmentation was performed and (2) the pupil center and its contour were detected using a circle-equation algorithm. From these previous attempts, we can conclude that there is a trade-off between the complexity of the code, which increases accuracy, and the time consumed. Calculating the pupil center and radius with methods such as circle fitting, ellipse fitting, RANSAC, or a proposed center-of-mass algorithm, possibly combined with an edge detection method, consumes more processing time to obtain accurate results. The rest of the paper is organized as follows: Section 2 reviews recent pupil detection methods, Section 3 discusses the proposed algorithm, Section 4 presents the data sets, Section 5 illustrates the results and their discussion, and Section 6 draws the main conclusions.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Recent Pupil Detection Methods</title>
<p>In this section, we discuss six recent pupil detection methods: ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>], ExCuSe [<xref ref-type="bibr" rid="ref-13">13</xref>], Pupil Labs [<xref ref-type="bibr" rid="ref-14">14</xref>], SET [<xref ref-type="bibr" rid="ref-15">15</xref>], Starburst [<xref ref-type="bibr" rid="ref-16">16</xref>], and &#x015A;wirski et al. [<xref ref-type="bibr" rid="ref-17">17</xref>].</p>
<p>&#x015A;wirski et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] proposed an algorithm that follows the steps shown in <xref ref-type="fig" rid="fig-1">Fig. 1</xref>: (1) the eye image is convolved with a Haar-like center-surround feature; (2) the region of maximum response identifies the pupil region; (3) for pupil edge detection, the pupil region is segmented by k-means clustering of its histogram; (4) the segmented pupil region is passed through the Canny edge detector; (5) random sample consensus (RANSAC) ellipse fitting is employed; and (6) the pupil is detected.</p>
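<p>As an illustration of step (1), the Haar-like center-surround response can be computed efficiently with an integral image: the feature compares the mean intensity inside an inner box with the mean of the surrounding ring, and for a dark pupil the response peaks where the inner box covers the pupil. The following is a minimal NumPy sketch of this idea; the function name and the brute-force scan are our own illustration, not the authors' code.</p>

```python
import numpy as np

def center_surround_response(img, r):
    """Position of the strongest Haar-like center-surround response.

    For a dark pupil, the response (mean of the surrounding ring minus the
    mean of the inner box) is maximal when the inner box covers the pupil.
    """
    # Integral image with a zero row/column prepended, so that
    # box_sum(y0, x0, y1, x1) equals img[y0:y1, x0:x1].sum().
    ii = np.pad(img.astype(np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

    def box_sum(y0, x0, y1, x1):
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    h, w = img.shape
    best, best_pos = -np.inf, None
    for y in range(2 * r, h - 2 * r):
        for x in range(2 * r, w - 2 * r):
            inner = box_sum(y - r, x - r, y + r, x + r)
            outer = box_sum(y - 2 * r, x - 2 * r, y + 2 * r, x + 2 * r) - inner
            # Normalize by area (inner box 4r^2, ring 12r^2) to compare means.
            resp = outer / (12.0 * r * r) - inner / (4.0 * r * r)
            if resp > best:
                best, best_pos = resp, (y, x)
    return best_pos
```

<p>In practice the response would be evaluated at several scales and the maximum over scales kept; the integral image makes each box sum O(1) regardless of r.</p>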
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>&#x015A;wirski et al. approach [<xref ref-type="bibr" rid="ref-17">17</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-1.png"/>
</fig>
<p>Pupil Labs [<xref ref-type="bibr" rid="ref-14">14</xref>] introduced a detector used in the head-mounted eye-tracking platform Pupil, as shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>. (1) The algorithm starts by converting the eye image to grayscale; the user region of interest (white stroke rectangle) and the initial estimate of the pupil region (white square and dashed-line square) are found via the strongest response to a center-surround feature, as proposed in &#x015A;wirski et al. [<xref ref-type="bibr" rid="ref-17">17</xref>]. (2) Canny edge detection (green lines) is used to detect the edges in the eye image and filter them according to neighboring pixel intensity. (3) Based on a dark value specified by a user offset (where the dark value is the lowest peak in the histogram of pixel intensities in the eye image), the algorithm looks for darker areas. (4) Edge filtering identifies &#x201C;dark&#x201D; areas (blue) and excludes spectral reflections (yellow). (5) The remaining edges are grouped into sub-contours (multi-colored lines) based on curvature continuity criteria [<xref ref-type="bibr" rid="ref-18">18</xref>]. (6) Ellipse fitting is applied to the candidate pupil ellipses (blue). (7) Finally, the pupil ellipse and its center are shown in red.</p>
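<p>The dark-area edge filtering of steps (3) and (4) can be sketched as follows: take the lowest occupied intensity of the image histogram plus a user offset as the &#x201C;dark&#x201D; value, and keep only those edge pixels whose immediate neighborhood touches a pixel darker than that value. This is our own hedged NumPy illustration of the idea, not the Pupil Labs source code:</p>

```python
import numpy as np

def filter_dark_edges(gray, edges, offset=10):
    """Keep only edge pixels that border 'dark' areas.

    `gray` is a uint8 intensity image and `edges` a boolean edge map
    (e.g. from a Canny detector). The dark value is the lowest occupied
    intensity of the histogram plus a user offset.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    lowest_peak = np.flatnonzero(hist)[0]      # darkest occupied intensity
    dark = gray <= lowest_peak + offset        # mask of dark pixels
    # Dilate the dark mask by one pixel (3x3 neighborhood) using shifts.
    near_dark = dark.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            near_dark |= np.roll(np.roll(dark, dy, axis=0), dx, axis=1)
    # An edge pixel survives only if its neighborhood touches a dark pixel.
    return edges & near_dark
```

<p>The same masking pattern, with a brightness test instead of a darkness test, could be used to discard edges around the specular reflections mentioned above.</p>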
<p>Starburst [<xref ref-type="bibr" rid="ref-16">16</xref>] introduced an algorithm that follows the steps shown in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>. (1) The input image is filtered and smoothed with a Gaussian filter to remove noise. (2) An adaptive threshold is employed to identify the location of the corneal reflection, and candidate pupil edge contour points are estimated by casting rays from a starting position and marking the locations where the intensity change along a ray exceeds an edge threshold. (3) Each detected point is then used as a new starting position from which secondary rays are cast back in the opposite direction, yielding further candidate contour points. (4) The searching process is repeated iteratively until convergence is reached. (5) Finally, RANSAC ellipse fitting estimates the pupil center.</p>
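<p>The ray-casting step above can be sketched in a few lines: starting from a guess of the pupil center, rays are marched outward pixel by pixel, and a candidate contour point is recorded where the intensity jumps by more than an edge threshold. The code below is our own simplified illustration (a single pass from a fixed seed) rather than the published implementation:</p>

```python
import numpy as np

def starburst_candidates(gray, seed, n_rays=18, thresh=40, max_len=30):
    """Cast rays from `seed` and return, for each ray, the first position
    where the intensity rises by more than `thresh` (a dark pupil against
    a brighter iris)."""
    h, w = gray.shape
    y0, x0 = seed
    points = []
    for a in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(a), np.cos(a)
        prev = float(gray[y0, x0])
        for t in range(1, max_len):
            y = int(round(y0 + t * dy))
            x = int(round(x0 + t * dx))
            if not (0 <= y < h and 0 <= x < w):
                break                      # ray left the image
            if float(gray[y, x]) - prev > thresh:
                points.append((y, x))      # crossed the pupil boundary
                break
            prev = float(gray[y, x])
    return points
```

<p>In the full algorithm these candidate points seed the secondary (reversed) rays, the process repeats with the mean of the detected points as the new start until it converges, and the final point set is passed to RANSAC ellipse fitting.</p>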
<p>The Exclusive Curve Selector (ExCuSe) approach [<xref ref-type="bibr" rid="ref-13">13</xref>], proposed by Fuhl et al. in 2015, is a recent method in which pupil detection is based on edge detection [<xref ref-type="bibr" rid="ref-19">19</xref>] and a morphing technique, and the pupil center and radius are calculated using a RANSAC ellipse fitting algorithm, as shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>. First, the input image is normalized and its histogram is calculated; the pupil is then estimated from the maximum of the bright histogram area. <xref ref-type="fig" rid="fig-4">Fig. 4</xref> shows the following: (1) input image with many reflections; (2) filtering of the image by Canny edge detection; (3) morphing operators refine thick edges into thinner ones; (4) all remaining edges are smoothed and analyzed with respect to their curvature, and orthogonal edges are removed using morphing operators; (5) for each remaining edge curve, the enclosed mean intensity value is calculated in order to choose the pupil curve with the lowest value (best edge), an ellipse is fitted to this curve, and its center is taken as the pupil center; (6) input image without reflections; (7) the coarse pupil is estimated based on the angular integral projection function (AIPF) [<xref ref-type="bibr" rid="ref-19">19</xref>]; (8) Canny edge detection is used to refine the pupil position; (9) the pupil edge position (white line) is estimated using the optimized edges, shown as white dots (ray hits); (10) the pupil center is estimated. The Sinusoidal Eye Tracker (SET) approach [<xref ref-type="bibr" rid="ref-15">15</xref>], proposed by Javadi et al., is a recent method that combines manual and automatic estimation of the pupil center. SET starts by manually setting two parameters before the pupil detection process: (a) the threshold used to convert the input image into a binary image, <xref ref-type="fig" rid="fig-5">Fig. 5(2)</xref>, and (b) the size of the segments used to detect the pupil, <xref ref-type="fig" rid="fig-5">Fig. 5(3)</xref>. The image is thresholded and then segmented. The convex hull method presented in [<xref ref-type="bibr" rid="ref-20">20</xref>] is employed to compute the borders of the segments (into which the image pixels are grouped) for segments whose threshold values are larger than a manually predefined threshold value. An ellipse is then fitted to each extracted segment, <xref ref-type="fig" rid="fig-5">Fig. 5(4)</xref>, and the pupil edge is estimated by selecting the ellipse that is closest to a circle, <xref ref-type="fig" rid="fig-5">Fig. 5(5)</xref>.</p>
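<p>The segment-selection idea of SET can be illustrated with plain flood-fill segmentation: threshold the image, collect connected dark segments above a minimum size, and keep the segment whose shape is closest to a circle. In this hedged sketch of ours, circularity is judged from the bounding box and the pixel count instead of the full ellipse fit used by the published method:</p>

```python
import numpy as np
from collections import deque

def set_pupil_segment(gray, thresh, min_size=30):
    """Threshold `gray`, extract 4-connected dark segments of at least
    `min_size` pixels, and return the segment closest to a circle."""
    binary = gray < thresh                   # dark pixels = pupil candidates
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    best, best_score = None, 0.0
    for sy in range(h):
        for sx in range(w):
            if not binary[sy, sx] or seen[sy, sx]:
                continue
            # Breadth-first flood fill collecting one connected segment.
            q, seg = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while q:
                y, x = q.popleft()
                seg.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if len(seg) < min_size:
                continue
            ys, xs = zip(*seg)
            dy, dx = max(ys) - min(ys) + 1, max(xs) - min(xs) + 1
            # A circle fills pi/4 of a square bounding box; combine the
            # bounding-box aspect ratio with the area ratio as the score.
            circle_area = np.pi / 4.0 * dy * dx
            score = (min(dy, dx) / max(dy, dx)) * \
                    (min(len(seg), circle_area) / max(len(seg), circle_area))
            if score > best_score:
                best_score, best = score, seg
    return best
```

<p>An ellipse fitted to the winning segment's border would then yield the pupil center and radius, as in the original method.</p>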
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>The Pupil Labs approach [<xref ref-type="bibr" rid="ref-14">14</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-2.png"/>
</fig>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>The Starburst approach [<xref ref-type="bibr" rid="ref-15">15</xref>,<xref ref-type="bibr" rid="ref-16">16</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-3.png"/>
</fig>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Exclusive Curve Selector (ExCuSe) approach [<xref ref-type="bibr" rid="ref-13">13</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-4.png"/>
</fig>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>SET approach [<xref ref-type="bibr" rid="ref-15">15</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-5.png"/>
</fig>
<p>ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>], proposed by Fuhl et al. in 2016, is a recent method in which pupil detection is based on edge detection (the Canny edge detection algorithm) and morphing operators, and the pupil center and radius are calculated using an ellipse fitting algorithm. In <xref ref-type="fig" rid="fig-6">Fig. 6</xref>: (1) input image; (2) a Canny edge filter is applied to the input eye image, and morphing operators remove the edges that cannot satisfy the curvature property of the pupil edge; (3) the remaining edges that satisfy the intensity degree and the curvature or elliptic property are collected and an ellipse is fitted; (4) the edges with lower intensity values and higher circular-ellipse values are chosen as the pupil boundaries; (5) if ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>] fails to obtain a valid ellipse describing the pupil edge, a second analysis is adopted, in which the input image in (6) is downscaled to preserve the dark regions, as in (7), and to reduce the blurred regions and the noise caused by eyelashes in the eye image; (8) the image is convolved with two cascaded filters, (a) a surface filter that calculates the area difference between an inner circle and a surrounding box, and (b) a mean filter; the ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>] algorithm calculates the maximum threshold value of a pupil region to serve as the starting point of the refinement process for pupil area estimation; (9) this starting point is optimized on the full-scale image to avoid a distance error in the pupil center, since it was selected in the downscaled image; (10) the pupil center and radius are calculated using an ellipse fitting algorithm based on a decision-based approach, as described in [<xref ref-type="bibr" rid="ref-4">4</xref>].</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Ellipse Selector (ElSe) approach [<xref ref-type="bibr" rid="ref-12">12</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-6.png"/>
</fig>
</sec>
<sec id="s3">
<label>3</label>
<title>The Proposed CHMCEP Algorithm</title>
<p>This paper introduces the CHMCEP algorithm to detect the pupil edge and calculate the pupil center and radius with more accurate results than the other attempts. The proposed CHMCEP algorithm can handle even blurred or noisy images by applying different filtering methods at the first stage, to remove blur and noise, and a second filtering process at the last stage, before the circular Hough transform fits the center, to ensure better accuracy. Initially, the input image is restored by removing the blurred content and the additive noise, as seen in <xref ref-type="fig" rid="fig-7">Fig. 7(1)</xref>; this is done by convolving the image separately with three different filters for better performance: a motion filter to remove the blurred content, a mean filter for image noise cancellation, and a Wiener filter for final smoothing, giving a better restoration from blur and noise, as seen in <xref ref-type="fig" rid="fig-7">Fig. 7(2)</xref>. The restored input image is then converted from RGB to a grayscale (intensity) image, as seen in <xref ref-type="fig" rid="fig-7">Fig. 7(3)</xref>, which enables the bright/dark pupil difference to reveal the eye limbus in the image in the fixed case. Next, the intensity image is converted to a binary image using a gray threshold found with Otsu&#x2019;s method, which is based on adaptive threshold selection [<xref ref-type="bibr" rid="ref-21">21</xref>] and provides the threshold value for the conversion, as seen in <xref ref-type="fig" rid="fig-7">Fig. 7(4)</xref>. The proposed CHMCEP algorithm then uses the Canny edge detection algorithm [<xref ref-type="bibr" rid="ref-22">22</xref>] to refine the pupil position by detecting the edges in the eye image and filtering them according to neighboring pixel intensity. 
The algorithm next employs morphing operators to remove the edges that do not satisfy the curvature or elliptic property of the pupil edge, and collects the remaining edges that satisfy the intensity degree and the curvature or elliptic property to fit an ellipse. The edges with lower intensity values and higher circular or elliptic values are selected as the pupil edge contour, as seen in <xref ref-type="fig" rid="fig-7">Fig. 7(5)</xref>. The proposed CHMCEP algorithm uses four morphological operations to thin and smooth the edges; these enhance the pupil edge detection performance by removing the edge connections surrounding the pupil edge. They are performed with four morphing operators to ensure higher accuracy: skeleton, clean, fill, and spur, as seen in <xref ref-type="fig" rid="fig-7">Fig. 7(5)</xref>. Finally, the proposed CHMCEP algorithm convolves the image with two cascaded filters, a mean filter and a surface difference filter, as in the ElSe algorithm [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-6">6</xref>], and then thresholds the extracted pupil region to obtain the center-of-mass calculation. The pupil center coordinates are then detected using the circular Hough transform for center fitting [<xref ref-type="bibr" rid="ref-23">23</xref>], as seen in <xref ref-type="fig" rid="fig-7">Fig. 7(6)</xref>. The calculation of the pupil radius is illustrated in <xref ref-type="fig" rid="fig-7">Fig. 7(7)</xref>, and the resulting image with pupil edge and center is shown in <xref ref-type="fig" rid="fig-7">Fig. 7(8)</xref>. <xref ref-type="fig" rid="fig-8">Fig. 8</xref> shows the proposed algorithm detecting a normal image and a blurred and noisy image.</p>
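<p>Two steps of the pipeline can be stated precisely in code: Otsu&#x2019;s adaptive threshold selection, which maximizes the between-class variance of the intensity histogram, and the circular Hough transform used for center fitting, in which every edge pixel votes for all candidate centers at distance r and the accumulator maximum gives the pupil center and radius. The following NumPy sketch is our own illustration of these two textbook components, not the authors&#x2019; implementation:</p>

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # probability of class 0 (<= k)
    mu = np.cumsum(p * np.arange(256))    # cumulative mean of class 0
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def hough_circle_center(edge_points, shape, r_range, n_angles=60):
    """Vote each edge point onto candidate circle centers; the accumulator
    maximum gives the center and radius of the best-supported circle."""
    h, w = shape
    radii = list(range(*r_range))
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for (y, x) in edge_points:
        for i, r in enumerate(radii):
            cy = np.round(y - r * np.sin(angles)).astype(int)
            cx = np.round(x - r * np.cos(angles)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc[i], (cy[ok], cx[ok]), 1)   # accumulate votes
    i, cy, cx = np.unravel_index(int(acc.argmax()), acc.shape)
    return (int(cy), int(cx)), radii[i]
```

<p>In the full pipeline, the edge points fed to the Hough stage are the Canny edges that survived the morphological cleaning, so the accumulator peak stays sharp even for partly occluded pupils.</p>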
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>The proposed algorithm process for pupil edge detection and center calculation for a blurred image. (1) blurred image with noise, (2) image restoration, (3) intensity image, (4) binary image, (5) edge detection with morphing techniques applied, (6) pupil center calculation, (7) pupil radius calculation, (8) resulting image with pupil edge and center</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-7.png"/>
</fig>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>The proposed algorithm for detecting the normal image and the blurred and noisy image: (1) blurred and noisy image, (2) resulting image with pupil edge and center, (3) input image, (4) resulting image with pupil edge and center</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-8.png"/>
</fig>
</sec>
<sec id="s4">
<label>4</label>
<title>Data sets</title>
<p>The &#x015A;wirski et al. data set [<xref ref-type="bibr" rid="ref-17">17</xref>] (indoor images) contains 600 manually labeled, high-resolution (640 &#x00D7; 480 pixels) eye images from two subjects; its pupil detection challenges come from the highly off-axial camera position and occlusion of the pupil by the eyelid. The ExCuSe data set [<xref ref-type="bibr" rid="ref-2">2</xref>] (outdoor images collected during on-road driving [<xref ref-type="bibr" rid="ref-23">23</xref>] and a supermarket search task [<xref ref-type="bibr" rid="ref-4">4</xref>]) contains 38,401 high-quality, manually labeled eye images (384 &#x00D7; 288 pixels) from 17 different subjects. It poses greater pupil detection challenges, since it includes varying illumination conditions and many reflections due to eyeglasses and contact lenses. The ElSe data set [<xref ref-type="bibr" rid="ref-4">4</xref>] contains 55,712 eye images (384 &#x00D7; 288 pixels) from 7 subjects wearing a Dikablis eye tracker during various tasks; its pupil detection challenges come from occlusion of the pupil, or shadows cast on it, by eyelids and eyelashes. In the ElSe data set [<xref ref-type="bibr" rid="ref-4">4</xref>], the XVIII, XIX, XX, XXI, and XXII eye images have low pupil contrast, motion blur, and reflections. LPW [<xref ref-type="bibr" rid="ref-22">22</xref>] contains 66 high-quality eye videos recorded from 22 subjects using a head-mounted eye tracker [<xref ref-type="bibr" rid="ref-16">16</xref>], for a total of 130,856 video frames; its challenge is that it covers a wide range of realistic indoor and outdoor illumination conditions, subjects wearing glasses and eye make-up, and variable skin tones, eye colors, and face shapes.</p>
</sec>
<sec id="s5">
<label>5</label>
<title>Simulations and Results</title>
<p>We compare the proposed CHMCEP algorithm with the previous state-of-the-art algorithms Starburst [<xref ref-type="bibr" rid="ref-16">16</xref>], SET [<xref ref-type="bibr" rid="ref-15">15</xref>], &#x015A;wirski et al. [<xref ref-type="bibr" rid="ref-17">17</xref>], Pupil Labs [<xref ref-type="bibr" rid="ref-14">14</xref>], ExCuSe [<xref ref-type="bibr" rid="ref-2">2</xref>], and ElSe [<xref ref-type="bibr" rid="ref-4">4</xref>]. The performance is evaluated in terms of the detection rate for different pixel errors, as in [<xref ref-type="bibr" rid="ref-24">24</xref>], where the detection rate is the number of detected images divided by the total number of images in each data set used, and the pixel error (e) is the Euclidean distance between the detected pupil-center point (PCP), p<sub>d</sub>(x<sub>d</sub>, y<sub>d</sub>), and the manually selected pupil-center point, p<sub>m</sub>(x<sub>m</sub>, y<sub>m</sub>), as shown in <xref ref-type="disp-formula" rid="eqn-1">Eq. (1)</xref>:</p>
<p><disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi>e</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>|</mml:mo><mml:msqrt><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>d</mml:mi></mml:mrow></mml:msub><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>y</mml:mi><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:msqrt><mml:mo>|</mml:mo></mml:mrow></mml:math></disp-formula></p>
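<p>Eq. (1) and the detection rate can be expressed directly in code; the short, self-contained sketch below (function names are ours) also makes explicit that an image counts as detected when its pixel error does not exceed the chosen pixel distance:</p>

```python
import math

def pixel_error(detected, manual):
    """Euclidean distance of Eq. (1) between the detected pupil-center
    point (xd, yd) and the manually labeled point (xm, ym)."""
    (xd, yd), (xm, ym) = detected, manual
    return math.hypot(xd - xm, yd - ym)

def detection_rate(detected_centers, manual_centers, max_error=5.0):
    """Percentage of images whose pixel error is at most `max_error`
    (e.g. a pixel distance of 5, as used in the comparisons of this section)."""
    hits = sum(1 for d, m in zip(detected_centers, manual_centers)
               if pixel_error(d, m) <= max_error)
    return 100.0 * hits / len(manual_centers)
```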
<p><xref ref-type="fig" rid="fig-9">Fig. 9</xref> illustrates the performance of the introduced CHMCEP algorithm on the &#x015A;wirski et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] data set, following [<xref ref-type="bibr" rid="ref-24">24</xref>]. The detection rates of ExCuSe (86.17%) and ElSe (80.83%) are better than those of the other state-of-the-art methods, while the proposed CHMCEP algorithm provides a better detection rate of 87.11% at a pixel distance of 5 and an overall detection rate of approximately 84.3%, whereas the other state-of-the-art methods remain below 80%, as shown in <xref ref-type="table" rid="table-1">Tab. 1</xref>.</p>
<fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>The proposed CHMCEP algorithm performance for the &#x015A;wirski et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] data set</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-9.png"/>
</fig>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>The performance of the proposed CHMCEP algorithm for the &#x015A;wirski et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] data set</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Approach</th>
<th>Detection rate % at a pixel distance of 5</th>
<th>Detection rate %</th>
</tr>
</thead>
<tbody>
<tr>
<td>Starburst</td>
<td>72</td>
<td>72</td>
</tr>
<tr>
<td>&#x015A;wirski et al.</td>
<td>76</td>
<td>77</td>
</tr>
<tr>
<td>ElSe</td>
<td>80.83</td>
<td>76</td>
</tr>
<tr>
<td>ExCuSe</td>
<td>86.17</td>
<td>74</td>
</tr>
<tr>
<td>Pupil Labs</td>
<td>72</td>
<td>79</td>
</tr>
<tr>
<td>SET</td>
<td>72</td>
<td>72</td>
</tr>
<tr>
<td>Proposed Algorithm</td>
<td>87.11</td>
<td>84.3</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-10">Fig. 10</xref> illustrates the performance of the proposed CHMCEP algorithm on the ExCuSe [<xref ref-type="bibr" rid="ref-13">13</xref>] data set, following [<xref ref-type="bibr" rid="ref-24">24</xref>]. The detection rates of ExCuSe (70%) and ElSe (55%) are better than those of the other state-of-the-art methods, whose detection rates stay below 50% at a pixel error of 5. The proposed CHMCEP algorithm provides a better detection rate of 78.54% at a pixel error of 5, as shown in <xref ref-type="table" rid="table-2">Tab. 2</xref>.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>The proposed CHMCEP algorithm performance for ExCuSe [<xref ref-type="bibr" rid="ref-13">13</xref>] data set</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-10.png"/>
</fig>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>The performance of the proposed CHMCEP algorithm for the ExCuSe [<xref ref-type="bibr" rid="ref-13">13</xref>] data set</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Approach</th>
<th>Detection rate % at a pixel distance of 5</th>
</tr>
</thead>
<tbody>
<tr>
<td>Starburst</td>
<td>32</td>
</tr>
<tr>
<td>&#x015A;wirski et al.</td>
<td>33</td>
</tr>
<tr>
<td>ElSe</td>
<td>55</td>
</tr>
<tr>
<td>ExCuSe</td>
<td>70</td>
</tr>
<tr>
<td>Pupil Labs</td>
<td>47</td>
</tr>
<tr>
<td>SET</td>
<td>48</td>
</tr>
<tr>
<td>Proposed Algorithm</td>
<td>78.54</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-11">Fig. 11</xref> shows the performance of the proposed CHMCEP algorithm for the ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>] data set as reported in [<xref ref-type="bibr" rid="ref-25">25</xref>]. The detection rates of ExCuSe and ElSe are better than those of the other state-of-the-art algorithms, with ExCuSe at 50% and ElSe at 35%, whereas the remaining algorithms show detection rates of at most 33% at a pixel error of 5. The proposed CHMCEP algorithm provides a better detection rate of 58% at a pixel error of 5, as shown in <xref ref-type="table" rid="table-3">Tab. 3</xref>.</p>
<fig id="fig-11">
<label>Figure 11</label>
<caption>
<title>The proposed CHMCEP algorithm performance for the ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>] data set</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-11.png"/>
</fig>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>The performance of the proposed CHMCEP algorithm for the ElSe [<xref ref-type="bibr" rid="ref-11">11</xref>,<xref ref-type="bibr" rid="ref-12">12</xref>] data set</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Approach</th>
<th>Detection rate % at a pixel distance of 5</th>
</tr>
</thead>
<tbody>
<tr>
<td>Starburst</td>
<td>15</td>
</tr>
<tr>
<td>&#x015A;wirski et al.</td>
<td>18</td>
</tr>
<tr>
<td>ElSe</td>
<td>35</td>
</tr>
<tr>
<td>ExCuSe</td>
<td>50</td>
</tr>
<tr>
<td>Pupil Labs</td>
<td>19</td>
</tr>
<tr>
<td>SET</td>
<td>33</td>
</tr>
<tr>
<td>Proposed Algorithm</td>
<td>58</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="fig-12">Fig. 12</xref> illustrates the performance of the proposed CHMCEP algorithm for the LPW [<xref ref-type="bibr" rid="ref-22">22</xref>] data set, as reported in [<xref ref-type="bibr" rid="ref-24">24</xref>]. ElSe can be considered the most robust algorithm in outdoor challenges: its detection rate is 70%, compared with 50% for ExCuSe and 49% for &#x015A;wirski et al., whereas the remaining algorithms show detection rates below 60% at a pixel error of 5. The proposed CHMCEP algorithm provides a better detection rate of 76% at a pixel error of 5, as shown in <xref ref-type="table" rid="table-4">Tab. 4</xref>.</p>
<p>In <xref ref-type="fig" rid="fig-9">Figs. 9</xref> to <xref ref-type="fig" rid="fig-12">12</xref>, we compared the performance of the proposed CHMCEP algorithm with the performance shown in <xref ref-type="fig" rid="fig-11">Fig. 11</xref> of [<xref ref-type="bibr" rid="ref-24">24</xref>], which shows the limitations of the six algorithms. <xref ref-type="fig" rid="fig-13">Fig. 13</xref> shows the limitations of the six algorithms for Data set XIX. Data set XIX (<xref ref-type="fig" rid="fig-11">Fig. 11</xref> in [<xref ref-type="bibr" rid="ref-25">25</xref>]) is characterized by various reflections, which cause edges on the pupil but not at its boundary. Since most of the six approaches are based on edge filtering, they are very likely to fail in detecting the pupil boundary. As a result, the detection rates achieved here are very poor.</p>
<fig id="fig-12">
<label>Figure 12</label>
<caption>
<title>The proposed CHMCEP algorithm performance for the LPW [<xref ref-type="bibr" rid="ref-22">22</xref>] data set</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-12.png"/>
</fig>
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>The performance of the proposed CHMCEP algorithm for the LPW [<xref ref-type="bibr" rid="ref-22">22</xref>] data set</title>
</caption>
<table frame="hsides">
<colgroup>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th>Approach</th>
<th>Detection rate % at a pixel distance of 5</th>
</tr>
</thead>
<tbody>
<tr>
<td>Starburst</td>
<td>45</td>
</tr>
<tr>
<td>&#x015A;wirski et al.</td>
<td>49</td>
</tr>
<tr>
<td>ElSe</td>
<td>70</td>
</tr>
<tr>
<td>ExCuSe</td>
<td>50</td>
</tr>
<tr>
<td>Pupil Labs</td>
<td>53</td>
</tr>
<tr>
<td>SET</td>
<td>58</td>
</tr>
<tr>
<td>Proposed Algorithm</td>
<td>76</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig-13">
<label>Figure 13</label>
<caption>
<title>The proposed CHMCEP algorithm performance for XIX data set</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-13.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-14">Fig. 14</xref> shows the limitations of the six algorithms for Data set XXI. This data set (<xref ref-type="fig" rid="fig-11">Fig. 11</xref> in [<xref ref-type="bibr" rid="ref-25">25</xref>]) poses challenges related to poor illumination conditions, leading to an iris with low intensity values. This makes it very difficult to separate the pupil from the iris (e.g., in the Canny edge detection the responses are discarded since they are too low). Moreover, this data set contains reflections, which have a negative effect on the subsequent edge response. Whereas the algorithms ElSe and ExCuSe achieve detection rates of around 45%, the remaining approaches can identify the pupil center in only 10% of the eye images. <xref ref-type="fig" rid="fig-15">Fig. 15</xref> shows the limitations of the six algorithms for Data set XXVIII (<xref ref-type="fig" rid="fig-11">Fig. 11</xref> in [<xref ref-type="bibr" rid="ref-25">25</xref>]), which is recorded from a highly off-axial camera position.</p>
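<p>The failure mode described above, where weak pupil/iris gradients fall below the edge detector&#x2019;s thresholds, can be reproduced with a minimal one-dimensional sketch (the intensity values and thresholds are hypothetical, chosen only to illustrate the effect):</p>

```python
import numpy as np

# One image row under poor illumination: a dark pupil (intensity 40)
# inside an almost equally dark iris (intensity 60).
row = np.array([60] * 10 + [40] * 10 + [60] * 10, dtype=float)

# Gradient magnitude along the row, standing in for the gradient stage
# of a Canny-style edge detector.
grad = np.abs(np.diff(row))

strong = np.flatnonzero(grad > 50)  # threshold tuned for well-lit images
weak = np.flatnonzero(grad > 10)    # lower threshold recovers the boundary

print(len(strong), len(weak))  # 0 2
```

<p>With the higher threshold, the pupil/iris transition (a step of only 20 gray levels) is discarded entirely, mirroring the behavior reported for Data set XXI; lowering the threshold recovers both boundary crossings, at the cost of more noise responses on real images.</p>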
<fig id="fig-14">
<label>Figure 14</label>
<caption>
<title>The proposed CHMCEP algorithm performance for XXI data set</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-14.png"/>
</fig>
<fig id="fig-15">
<label>Figure 15</label>
<caption>
<title>The proposed CHMCEP algorithm performance for XXVIII data set</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-15.png"/>
</fig>
<p><xref ref-type="fig" rid="fig-16">Fig. 16</xref> shows the performance of the proposed CHMCEP algorithm for the last challenging Data set XXIX. In addition, the frame of the subjects&#x2019; glasses covers the pupil, and most of the images are heavily blurred. This leads to unsuitable responses from the Canny edge detector. As a result, the detection rates are very poor; e.g., ElSe (the best performing algorithm) can detect the pupil in only 25% of the eye images at a pixel error of 5. The proposed CHMCEP algorithm provides a better detection rate of 60% at a pixel error of 5.</p>
<fig id="fig-16">
<label>Figure 16</label>
<caption>
<title>The proposed CHMCEP algorithm performance for XXIX data set</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-16.png"/>
</fig>
<p>There is another contribution of the proposed CHMCEP algorithm. <xref ref-type="fig" rid="fig-17">Fig. 17</xref> shows the success cases of the proposed CHMCEP algorithm in detecting the pupil on eye images from Data sets XXI and XXIX, where the other attempts fail. The proposed CHMCEP algorithm has been tested and compared with the pupil detection algorithm (PDA) proposed in [<xref ref-type="bibr" rid="ref-26">26</xref>] using two databases of eye images. Database (A) has 400 infrared (IR) eye images with a resolution of 640 &#x00D7; 480 pixels, captured with the head-mounted device developed in [<xref ref-type="bibr" rid="ref-26">26</xref>], and Database (B) has 400 IR eye images with a resolution of 640 &#x00D7; 480 pixels from the publicly available Casia-Iris-Lamp database [<xref ref-type="bibr" rid="ref-27">27</xref>]. The performance was tested by running the Python 3 bindings of OpenCV on the same machine as in [<xref ref-type="bibr" rid="ref-26">26</xref>].</p>
<p><xref ref-type="fig" rid="fig-18">Fig. 18</xref> illustrates the performance of the introduced CHMCEP algorithm for databases A and B in [<xref ref-type="bibr" rid="ref-26">26</xref>,<xref ref-type="bibr" rid="ref-27">27</xref>]. The proposed CHMCEP algorithm provides better performance than the PDA algorithm [<xref ref-type="bibr" rid="ref-1">1</xref>] and ExCuSe [<xref ref-type="bibr" rid="ref-2">2</xref>]: it achieves a detection rate of approximately 100% at 8 pixels, whereas the PDA algorithm [<xref ref-type="bibr" rid="ref-28">28</xref>&#x2013;<xref ref-type="bibr" rid="ref-31">31</xref>] achieves a detection rate of approximately 100% at 10 pixels [<xref ref-type="bibr" rid="ref-32">32</xref>&#x2013;<xref ref-type="bibr" rid="ref-37">37</xref>].</p>
<fig id="fig-17">
<label>Figure 17</label>
<caption>
<title>Success cases of the proposed CHMCEP algorithm for Data sets XXI and XXIX. The left column presents the input images and the right column presents the proposed CHMCEP algorithm</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-17.png"/>
</fig>
<fig id="fig-18">
<label>Figure 18</label>
<caption>
<title>The proposed CHMCEP algorithm performance for databases A&#x0026;B in [<xref ref-type="bibr" rid="ref-26">26</xref>,<xref ref-type="bibr" rid="ref-27">27</xref>]</title>
</caption>
<graphic mimetype="image" mime-subtype="png" xlink:href="CMC_28190-fig-18.png"/>
</fig>
</sec>
<sec id="s6">
<label>6</label>
<title>Conclusions</title>
<p>The present work introduces a robust pupil detection algorithm with higher accuracy than the previous attempts ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>], ExCuSe [<xref ref-type="bibr" rid="ref-13">13</xref>], Pupil Labs [<xref ref-type="bibr" rid="ref-14">14</xref>], SET [<xref ref-type="bibr" rid="ref-15">15</xref>], Starburst [<xref ref-type="bibr" rid="ref-16">16</xref>], and &#x015A;wirski et&#x00A0;al. [<xref ref-type="bibr" rid="ref-17">17</xref>], and it can be used in real-time analysis applications. The proposed CHMCEP algorithm can successfully detect the pupil in blurred or noisy images by applying different filtering methods in its first stage, which removes blur and noise, followed by a second filtering process before the circular Hough transform for center fitting, which ensures better accuracy. From the simulations, we can conclude that the introduced CHMCEP algorithm performs better than ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>] and the other attempts. Because it uses two filtering stages with different filtering methods, the proposed CHMCEP algorithm detects the pupils of Data sets XXI (bad illumination, reflections, ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>]) and XXIX (border of glasses covering the pupil, blurred images, LPW [<xref ref-type="bibr" rid="ref-22">22</xref>]) more successfully than the other attempts, especially ElSe [<xref ref-type="bibr" rid="ref-12">12</xref>]. The problem of ElSe may stem from choosing the pupil center position in the downscaled image, which causes a distance error in the pupil center position in the full-scale image, since the position optimization may be accomplished with error. The proposed algorithm shows good performance on these challenging data sets, which are distinguished by reflections, blur, or poor illumination conditions.</p>
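<p>The center-fitting step via the circular Hough transform can be sketched as follows (a minimal single-radius version for illustration; the radius, image size, and synthetic edge points are assumptions, and a practical implementation such as OpenCV&#x2019;s HoughCircles also searches over a range of radii and uses gradient information):</p>

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Single-radius circular Hough transform: every edge point votes for
    all candidate centers at the given radius; the accumulator peak wins."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # accumulate votes in bounds
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic pupil boundary: 120 edge points on a circle of radius 15
# centered at (50, 50) in a 100 x 100 image.
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
edges = np.column_stack([50 + 15 * np.sin(t), 50 + 15 * np.cos(t)])
center = hough_circle_center(edges, radius=15, shape=(100, 100))
print(center)
```

<p>As described above, the second filtering stage runs before this voting step, which helps keep spurious votes from reflections and blur low.</p>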
</sec>
</body>
<back>
<ack><p>The authors appreciate &#x201C;TAIF UNIVERSITY RESEARCHERS SUPPORTING PROJECT, grant number TURSP-2020/345&#x201D;, Taif University, Taif, Saudi Arabia for supporting this work.</p></ack>
<fn-group>
<fn fn-type="other"><p><bold>Funding Statement:</bold> This research was funded by &#x201C;TAIF UNIVERSITY RESEARCHERS SUPPORTING PROJECT, grant number TURSP-2020/345&#x201D;, Taif University, Taif, Saudi Arabia.</p>
</fn>
<fn fn-type="conflict"><p><bold>Conflicts of Interest:</bold> The authors declare that they have no conflicts of interest to report regarding the present study.</p>
</fn>
</fn-group>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. L.</given-names> <surname>Bergera</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Garde</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Porta</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Cabeza</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Villanueva</surname></string-name></person-group>, &#x201C;<article-title>Accurate pupil center detection in off-the-shelf eye tracking systems using convolutional neural networks</article-title>,&#x201D; <source>Sensors</source>, vol. <volume>21</volume>, no. <issue>6847</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>14</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Pasarica</surname></string-name>, <string-name><given-names>R. G.</given-names> <surname>Bozomitu</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Cehan</surname></string-name>, <string-name><given-names>R. G.</given-names> <surname>Lupu</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Rotariu</surname></string-name></person-group>, &#x201C;<article-title>Pupil detection algorithms for eye tracking applications</article-title>,&#x201D; in <conf-name>Int. Symp. for Design and Technology in Electronic Packaging, 21st Int. Conf. on</conf-name>, Brasov, Romania, <publisher-name>IEEE</publisher-name>, <volume>21</volume>, pp. <fpage>161</fpage>&#x2013;<lpage>164</lpage>, <year>2015</year>. </mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>C.</given-names> <surname>Braunagel</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Stolzmann</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Kasneci</surname></string-name> and <string-name><given-names>W.</given-names> <surname>Rosenstiel</surname></string-name></person-group>, &#x201C;<article-title>Driver-activity recognition in the context of conditionally autonomous driving</article-title>,&#x201D; in <conf-name>IEEE 18th Int. Conf. on Intelligent Transportation Systems</conf-name>, Gran Canaria, Spain, vol. <volume>18</volume>, pp. <fpage>1652</fpage>&#x2013;<lpage>1657</lpage>, <year>2015</year>. </mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>K.</given-names> <surname>Sippel</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Kasneci</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Aehling</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Heister</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Rosenstiel</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Binocular glaucomatous visual field loss and its impact on visual exploration-a supermarket study</article-title>,&#x201D; <source>PLOS</source>, vol. <volume>9</volume>, no. <issue>8</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>7</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>Q.</given-names> <surname>Ali</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Heldal</surname></string-name>, <string-name><given-names>C. G.</given-names> <surname>Helgesen</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Krumina</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Costescu</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Current challenges supporting school-aged children with vision problems: A rapid review</article-title>,&#x201D; <source>Applied Sciences</source>, vol. <volume>11</volume>, no. <issue>9673</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>23</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Kohlbecher</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Schneider</surname></string-name></person-group>, &#x201C;<article-title>A novel approach to video- based pupil tracking</article-title>,&#x201D; in <conf-name>Proc. of IEEE SMC</conf-name>, San Antonio, TX, USA, vol. <volume>2</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>8</lpage>, <year>2009</year>. </mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>Lin</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Pan</surname></string-name>, <string-name><given-names>L. F.</given-names> <surname>Wei</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Yu</surname></string-name></person-group>, &#x201C;<article-title>A robust and accurate detection of pupil images</article-title>,&#x201D; in <conf-name>Proc. of IEEE Biomedical Engineering and Informatics (BMEI)</conf-name>, Yantai, China, vol. <volume>3</volume>, pp. <fpage>70</fpage>&#x2013;<lpage>74</lpage>, <year>2010</year>. </mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>J. S.</given-names> <surname>Agustin</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Mollenbach</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Barret</surname></string-name></person-group>, &#x201C;<article-title>Evaluation of a low-cost open-source gaze tracker</article-title>,&#x201D; in <conf-name>Proc. of ETRA, ACM</conf-name>, Austin, TX, USA, vol. <volume>2010</volume>, pp. <fpage>77</fpage>&#x2013;<lpage>80</lpage>, <year>2010</year>. </mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>A.</given-names> <surname>Keil</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Albuquerque</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Berger</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Andreas</surname></string-name></person-group>, &#x201C;<article-title>Real-time gaze tracking with a consumer-grade video camera</article-title>,&#x201D; in <conf-name>Proc. of WSCG</conf-name>, Plzen, Czech Republic, vol. <volume>2</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <year>2010</year>. </mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>J. N.</given-names> <surname>Sari</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Adi</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Edi</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Santosa</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Ferdiana</surname></string-name></person-group>, &#x201C;<article-title>A study of algorithms of pupil diameter measurement</article-title>,&#x201D; in <conf-name>2nd Int. Conf. on Science and Technology-Computer (ICST)</conf-name>, <publisher-loc>Yogyakarta. Indonesia</publisher-loc>, <volume>2</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>6</lpage>, <year>2016</year>. </mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. M. S.</given-names> <surname>Saif</surname></string-name> and <string-name><given-names>M. S.</given-names> <surname>Hossain</surname></string-name></person-group>, &#x201C;<article-title>A study of pupil orientation and detection of pupil using circle algorithm: A review</article-title>,&#x201D; <source>International Journal of Engineering Trends and Technology (IJETT)</source>, vol. <volume>54</volume>, no. <issue>1</issue>, pp. <fpage>12</fpage>&#x2013;<lpage>16</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Fuhl</surname></string-name>, <string-name><given-names>T. C.</given-names> <surname>Santini</surname></string-name>, <string-name><given-names>T.</given-names> <surname>K&#x00FC;bler</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Kasneci</surname></string-name></person-group>, &#x201C;<article-title>Else: Ellipse selection for robust pupil detection in real-world environments</article-title>,&#x201D; in <conf-name>Proc. of 9th Biennial ACM Symp. on Eye Tracking Research &#x0026; Applications. ACM</conf-name>, <publisher-loc>New York. NY. USA</publisher-loc>, <publisher-name>ETRA</publisher-name>, <volume>9</volume>, pp. <fpage>123</fpage>&#x2013;<lpage>130</lpage>, <year>2016</year>. </mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Fuhl</surname></string-name>, <string-name><given-names>T.</given-names> <surname>K&#x00FC;bler</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Sippel</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Rosenstiel</surname></string-name> and <string-name><given-names>E.</given-names> <surname>Kasneci</surname></string-name></person-group>, &#x201C;<article-title>ExCuSe: Robust pupil detection in real-world scenarios</article-title>,&#x201D; in <conf-name>Int. Conf. on Computer Analysis of Images and Patterns</conf-name>, Valletta, Malta, <publisher-name>Springer</publisher-name>, <volume>2015</volume>, pp. <fpage>39</fpage>&#x2013;<lpage>51</lpage>, <year>2015</year>. </mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Kassner</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Patera</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Bulling</surname></string-name></person-group>, &#x201C;<article-title>An open-source platform for pervasive eye tracking and mobile gazebased interaction</article-title>,&#x201D; in <conf-name>Adjunct Proc. of the 2014 ACM Int. Joint Conf. on Pervasive and Ubiquitous Computing (UbiComp)</conf-name>, Seattle Washington, USA, vol. <volume>2014</volume>, pp. <fpage>1151</fpage>&#x2013;<lpage>1160</lpage>, <year>2014</year>. </mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. H.</given-names> <surname>Javadi</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Hakimi</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Barati</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Walsh</surname></string-name> and <string-name><given-names>L.</given-names> <surname>Tcheang</surname></string-name></person-group>, &#x201C;<article-title>Set: A pupil detection method using sinusoidal approximation</article-title>,&#x201D; <source>Frontiers in neuroengineering</source>, vol. <volume>8</volume>, no. <issue>4</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>10</lpage>, <year>2015</year>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>D.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Winfield</surname></string-name> and <string-name><given-names>D. J.</given-names> <surname>Parkhurst</surname></string-name></person-group>, &#x201C;<article-title>Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches</article-title>,&#x201D; in <conf-name>CVPR Workshops. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition-Workshops, 2005</conf-name>, San Diego, CA, USA, <publisher-name>IEEE</publisher-name>, vol. <volume>2005</volume>, pp. <fpage>79</fpage>&#x2013;<lpage>86</lpage>, <year>2005</year>. </mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>L.</given-names> <surname>&#x015A;wirski</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Bulling</surname></string-name> and <string-name><given-names>N.</given-names> <surname>Dodgson</surname></string-name></person-group>, &#x201C;<article-title>Robust real-time pupil tracking in highly off-axis images</article-title>,&#x201D; in <conf-name>Proc. of the Symp. on Eye Tracking Research and Applications (ETRA)</conf-name>, Santa Barbara California, USA, <publisher-name>ACM</publisher-name>, <volume>2012</volume>, pp. <fpage>173</fpage>&#x2013;<lpage>176</lpage>, <year>2012</year>. </mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Suzuki</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Abe</surname></string-name></person-group>, &#x201C;<article-title>Topological structural analysis of digitized binary images by border following</article-title>,&#x201D; <source>Computer Vision. Graphics, and Image Processing</source>, vol. <volume>1</volume>, no. <issue>32</issue>, pp. <fpage>32</fpage>&#x2013;<lpage>46</lpage>, <year>1985</year>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>G. J.</given-names> <surname>Mohammed</surname></string-name>, <string-name><given-names>B. R.</given-names> <surname>Hong</surname></string-name> and <string-name><given-names>A. A.</given-names> <surname>Jarjes</surname></string-name></person-group>, &#x201C;<article-title>Accurate pupil features extraction based on new projection function</article-title>,&#x201D; <source>Computing and Informatics</source>, vol. <volume>29</volume>, no. <issue>4</issue>, pp. <fpage>663</fpage>&#x2013;<lpage>680</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>T. H.</given-names> <surname>Cormen</surname></string-name>, <string-name><given-names>C. E.</given-names> <surname>Leiserson</surname></string-name>, <string-name><given-names>R. L.</given-names> <surname>Rivest</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Stein</surname></string-name></person-group>, <source>Introduction to algorithms</source>, <edition>Third</edition> edition, <publisher-loc>Cambridge</publisher-loc>: <publisher-name>The MIT press</publisher-name>, pp. <fpage>1014</fpage>&#x2013;<lpage>1047</lpage>, <year>2009</year>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Otsu</surname></string-name></person-group>, &#x201C;<article-title>A threshold selection method from gray-level histograms</article-title>,&#x201D; <source>IEEE Transactions on Systems, Man and Cybernetics</source>, vol. <volume>9</volume>, no. <issue>1</issue>, pp. <fpage>62</fpage>&#x2013;<lpage>66</lpage>, <year>1979</year>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>Canny</surname></string-name></person-group>, &#x201C;<article-title>A computational approach to edge detection</article-title>,&#x201D; <source>IEEE Transactions on Pattern Analysis and Machine Intelligence. PAMI</source>, vol. <volume>8</volume>, no. <issue>6</issue>, pp. <fpage>679</fpage>&#x2013;<lpage>689</lpage>, <year>1986</year>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Cherabit</surname></string-name>, <string-name><given-names>F. Z.</given-names> <surname>Chelali</surname></string-name> and <string-name><given-names>A.</given-names> <surname>Djeradi</surname></string-name></person-group>, &#x201C;<article-title>Circular hough transform for iris localization</article-title>,&#x201D; <source>Science and Technology</source>, vol. <volume>2</volume>, no. <issue>5</issue>, pp. <fpage>114</fpage>&#x2013;<lpage>121</lpage>, <year>2012</year>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>E.</given-names> <surname>Kasneci</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Sippel</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Aehlinh</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Heister</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Rosenstiel</surname></string-name> <etal>et al.</etal></person-group><italic>,</italic> &#x201C;<article-title>Driving with binocular visual field loss? A study on a supervised on-road parcours with simultaneous eye and head tracking</article-title>,&#x201D; <source>PLoS ONE</source>, vol. <volume>9</volume>, no. <issue>2</issue>, pp. <fpage>122</fpage>&#x2013;<lpage>136</lpage>, <year>2014</year>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>F.</given-names> <surname>Wolfgang</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Marc</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Andreas</surname></string-name> and <string-name><given-names>K.</given-names> <surname>Enkelejda</surname></string-name></person-group>, &#x201C;<article-title>Pupil detection in the wild: An evaluation of the state of the art in mobile head-mounted eye tracking</article-title>,&#x201D; <source>Machine Vision and Applications</source>, vol. <volume>27</volume>, no. <issue>8</issue>, pp. <fpage>1275</fpage>&#x2013;<lpage>1288</lpage>, <year>2016</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>P.</given-names> <surname>Bonteanu</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Cracan</surname></string-name>, <string-name><given-names>R. G.</given-names> <surname>Bozomitu</surname></string-name> and <string-name><given-names>G.</given-names> <surname>Bonteanu</surname></string-name></person-group>, &#x201C;<article-title>A new robust pupil detection algorithm for eye tracking based human-computer interface</article-title>,&#x201D; in <conf-name>Int. Symp. on Signals, Circuits and Systems</conf-name>, Iasi, Romania, vol. <volume>2019</volume>, pp. <fpage>1</fpage>&#x2013;<lpage>4</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>J.</given-names> <surname>&#x017D;uni&#x0107;</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Hirota</surname></string-name> and <string-name><given-names>P. L.</given-names> <surname>Rosin</surname></string-name></person-group>, &#x201C;<article-title>A Hu moment invariant as a shape circularity measure</article-title>,&#x201D; <source>Pattern Recognition</source>, vol. <volume>43</volume>, no. <issue>1</issue>, pp. <fpage>47</fpage>&#x2013;<lpage>57</lpage>, <year>2010</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>G. Z.</given-names> <surname>Dai</surname></string-name>, <string-name><given-names>X. R.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>X. Z.</given-names> <surname>He</surname></string-name> and <string-name><given-names>X.</given-names> <surname>Chen</surname></string-name></person-group>, &#x201C;<article-title>TBE-Net: A three-branch embedding network with part-aware ability and feature complementary learning for vehicle re-identification</article-title>,&#x201D; <source>IEEE Transactions on Intelligent Transportation Systems</source>, vol. <volume>4</volume>, no. <issue>9</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>13</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>W.</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Dai</surname></string-name>, <string-name><given-names>X. R.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>P. S.</given-names> <surname>Chang</surname></string-name> and <string-name><given-names>X. Z.</given-names> <surname>He</surname></string-name></person-group>, &#x201C;<article-title>RSOD: Real-time small object detection algorithm in UAV-based traffic monitoring</article-title>,&#x201D; <source>Applied Intelligence</source>, vol. <volume>9</volume>, no. <issue>10</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>16</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>N.</given-names> <surname>Zdarsky</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Treue</surname></string-name> and <string-name><given-names>M.</given-names> <surname>Esghaei</surname></string-name></person-group>, &#x201C;<article-title>A deep learning-based approach to video-based eye tracking for human psychophysics</article-title>,&#x201D; <source>Frontiers in Human Neuroscience</source>, vol. <volume>15</volume>, no. <issue>10</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Kim</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Jeong</surname></string-name> and <string-name><given-names>B. C.</given-names> <surname>Ko</surname></string-name></person-group>, &#x201C;<article-title>Energy efficient pupil tracking based on rule distillation of cascade regression forest</article-title>,&#x201D; <source>Sensors</source>, vol. <volume>20</volume>, no. <issue>12</issue>, pp. <fpage>1</fpage>&#x2013;<lpage>18</lpage>, <year>2020</year>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>K. I.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>J. H.</given-names> <surname>Jeon</surname></string-name> and <string-name><given-names>B. C.</given-names> <surname>Song</surname></string-name></person-group>, &#x201C;<article-title>Deep learning-based pupil center detection for fast and accurate eye tracking system</article-title>,&#x201D; in <conf-name>European Conf. on Computer Vision</conf-name>, <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>, <publisher-name>Springer</publisher-name>, <volume>12</volume>, pp. <fpage>36</fpage>&#x2013;<lpage>52</lpage>, <year>2020</year>. </mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>A. F.</given-names> <surname>Klaib</surname></string-name>, <string-name><given-names>N. O.</given-names> <surname>Alsrehin</surname></string-name>, <string-name><given-names>W. Y.</given-names> <surname>Melhem</surname></string-name>, <string-name><given-names>H. O.</given-names> <surname>Bashtawi</surname></string-name> and <string-name><given-names>A. A.</given-names> <surname>Magableh</surname></string-name></person-group>, &#x201C;<article-title>Eye tracking algorithms, techniques, tools, and applications with an emphasis on machine learning and Internet of Things technologies</article-title>,&#x201D; <source>Expert Systems With Applications</source>, vol. <volume>166</volume>, no. <issue>21</issue>, pp. <fpage>166</fpage>&#x2013;<lpage>181</lpage>, <year>2021</year>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><given-names>R. G.</given-names> <surname>Lupu</surname></string-name>, <string-name><given-names>R. G.</given-names> <surname>Bozomitu</surname></string-name>, <string-name><given-names>A.</given-names> <surname>P&#x103;s&#x103;ric&#x103;</surname></string-name> and <string-name><given-names>C.</given-names> <surname>Rotariu</surname></string-name></person-group>, &#x201C;<article-title>Eye tracking user interface for Internet access used in assistive technology</article-title>,&#x201D; in <conf-name>Proc. of the 2017 E-Health and Bioengineering Conf. (EHB)</conf-name> <conf-date>22&#x2013;24 June 2017</conf-date>, <publisher-loc>Piscataway, NJ, USA</publisher-loc>, <publisher-name>IEEE</publisher-name>, <volume>2017</volume>, pp. <fpage>659</fpage>&#x2013;<lpage>662</lpage>, <year>2017</year>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>S.</given-names> <surname>Said</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Kork</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Beyrouthy</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Hassan</surname></string-name>, <string-name><given-names>O.</given-names> <surname>Abdellatif</surname></string-name> <etal>et al.</etal></person-group>, &#x201C;<article-title>Real time eye tracking and detection&#x2014;A driving assistance system</article-title>,&#x201D; <source>Advances in Science, Technology and Engineering Systems Journal</source>, vol. <volume>3</volume>, no. <issue>6</issue>, pp. <fpage>446</fpage>&#x2013;<lpage>454</lpage>, <year>2018</year>.</mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Wedel</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Pieters</surname></string-name> and <string-name><given-names>R. V.</given-names> <surname>Lans</surname></string-name></person-group>, <chapter-title>Eye tracking methodology for research in consumer psychology</chapter-title>. In: <source>Handbook of Research Methods in Consumer Psychology</source>. Vol. <volume>2019</volume>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>Routledge</publisher-name>, pp. <fpage>276</fpage>&#x2013;<lpage>292</lpage>, <year>2019</year>.</mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><given-names>M.</given-names> <surname>Mei&#x00DF;ner</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Pfeiffer</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Pfeiffer</surname></string-name> and <string-name><given-names>H.</given-names> <surname>Oppewal</surname></string-name></person-group>, &#x201C;<article-title>Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research</article-title>,&#x201D; <source>Journal of Business Research</source>, vol. <volume>100</volume>, no. <issue>4</issue>, pp. <fpage>445</fpage>&#x2013;<lpage>458</lpage>, <year>2019</year>.</mixed-citation></ref>
</ref-list>
</back>
</article>
