Automated Pupillary Light Reflex Test on a Portable Platform

Dogancan Temel, Melvin J. Mathew, and Ghassan AlRegib
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA, USA
{cantemel, mmathew31, alregib}@gatech.edu

Yousuf M. Khalifa
School of Medicine, Ophthalmology
Emory University
Atlanta, GA, USA
yousuf.khalifa@emoryhealthcare.org

This project is sponsored by the Georgia Research Alliance (GRA).
Abstract

In this paper, we introduce a portable eye imaging device denoted as lab-on-a-headset, which can automatically perform a swinging flashlight test. We utilized this device in a clinical study to obtain high-resolution recordings of eyes while they are exposed to varying light stimuli. Half of the participants had relative afferent pupillary defect (RAPD) while the other half was a control group. In case of positive RAPD, a patient’s pupils constrict less or do not constrict when the light stimulus swings from the unaffected eye to the affected eye. To automatically diagnose RAPD, we propose an algorithm based on pupil localization, pupil size measurement, and pupil size comparison of the right and left eye during the light reflex test. We validate the algorithmic performance over a dataset obtained from 22 subjects and show that the proposed algorithm can achieve a sensitivity of 93.8% and a specificity of 87.5%.

*This study is supported by the Georgia Research Alliance based in Atlanta, Georgia. The purpose of the Georgia Research Alliance support is to explore the commercialization of the technology being studied. The authors of this paper, Georgia Institute of Technology, and Emory University are entitled to royalties related to this research in case of the commercialization of the developed technology.

Index Terms—eye imaging, pupil detection, pupil tracking, pupillary light reflex, RAPD screening, ocular diseases, vision loss prevention
  • D. Temel, M. J. Mathew, G. AlRegib and Y. M. Khalifa, “Automated Pupillary Light Reflex Test on a Portable Platform,” International Symposium on Medical Robotics (ISMR), Atlanta, GA, USA, 2019, pp. 1-7.

  • Date added to IEEE Xplore: 09 May 2019

  • @INPROCEEDINGS{Temel2019_ISMR,
    author={D. Temel and M. J. Mathew and G. AlRegib and Y. M. Khalifa},
    booktitle={2019 International Symposium on Medical Robotics (ISMR)},
    title={Automated Pupillary Light Reflex Test on a Portable Platform},
    year={2019},
    pages={1-7},
    doi={10.1109/ISMR.2019.8710182},
    month={April},}

  • ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Fig. 1: Lab-on-a-headset platform: Front view, top view, and side view while in use, respectively.

I Introduction

The term robot was introduced by the Czech author Karel Čapek in his science fiction play Rossum’s Universal Robots, published in 1920 [1]. The word originates from the Czech robota, which means tedious and monotonous routine work. More than half a century after the emergence of the word, the field of medicine welcomed its first robot in 1984 with the Arthrobot, which was utilized to position the knee joints of patients during knee replacement surgery [2]. After FDA approval in 2000, medical experts started to use the da Vinci robotic surgical system to perform operations through small incisions with the guidance of its high-resolution vision system and display [3]. In 2016, University of Oxford surgeons performed the world’s first operation inside the eye using a robot [4]. In addition to surgical automation and assistance systems in ophthalmology, there has been significant advancement in imaging and diagnosis systems, including but not limited to automated detection of diabetic retinopathy in fundus photographs [5] and automated referral recommendation based on OCT images [6]. Advancements in algorithmic analysis have not only affected the research community but also started to transform clinical practice. On April 11, 2018, the U.S. Food and Drug Administration (FDA) authorized for marketing the first device that can provide a screening decision without the need for a clinician. The authorized device can detect diabetic retinopathy in adults if the condition level is mild or greater [7].

Fig. 2: RAPD detection framework: We display a light sequence to subjects with lab-on-a-headset and capture their pupillary light reflex synchronously. Then, we measure the dissimilarity between pupillary light reflexes to obtain an index, which can be used to identify patients with RAPD condition.

The aforementioned ophthalmology-related studies [5, 6, 7] focused on imaging the back of the eye and the cross-section of the retina to perform frame-level anomaly detection. In this study, however, we need to image the surface of the eye and utilize sequential information to detect abnormalities in the pupillary light reflex, which can be an early indicator of ocular and neurological diseases that can eventually lead to vision loss [8]. Specifically, we focus on relative afferent pupillary defect (RAPD), which is based on the difference between the light reflexes of the pupils. In healthy subjects, pupils constrict equally when they are exposed to light and dilate equally when there is no light stimulus. In case of positive RAPD, a patient’s pupils do not constrict, or constrict less, when the light stimulus swings from the unaffected eye to the affected eye [9]. Currently, RAPD assessment is performed by a clinician who asks the patient to fixate on a distant point in a relatively dark environment, swings a flashlight between the eyes of the patient, and observes the relative pupil size change. Although subjective RAPD assessment is practical in that it requires only a flashlight, variations in the test setup and subjective opinion can affect the reliability of the test results. Even the type of the light source can significantly affect RAPD assessment performance, as shown in a study conducted with experienced nurses [10]. Moreover, subtle abnormalities are not easy to detect subjectively in case of dark irises or small or poorly reactive pupils [11, 12].

To standardize the testing environment and eliminate subjectivity, we introduce a portable eye imaging and light reflex test device denoted as lab-on-a-headset, as shown in Fig. 1. The introduced device can be considered a medical imaging device that performs tedious and monotonous routine work, which corresponds to the swinging flashlight test in this study, as shown in Fig. 2. The remainder of this paper is organized as follows: We analyze existing pupil dataset studies in Section II and describe the introduced dataset in Section III. We introduce an RAPD detection algorithm in Section IV and report the results in Section V. Finally, we conclude our work in Section VI.

II Related Work

There are numerous studies in the literature that investigate the pupillary light reflex of subjects with off-the-shelf digital pupillometers [13, 14, 15]. However, these studies cannot be used as a baseline to investigate RAPD detection because they disclose neither the utilized algorithms nor the acquired data. Similarly, capturing a new dataset with an off-the-shelf pupillography device is not preferable because of limited control over the stimuli and limited access to raw data. Therefore, we analyzed publicly available pupil datasets in the literature to understand whether they can be utilized for automated RAPD assessment. Kasneci et al. [16] conducted a study to assess the on-road driving performance of subjects during a 40-minute driving task on a specific route. The study resulted in a dataset of closely captured eye movements, which were utilized to investigate the visual exploration ability of subjects while driving. Half of the participants were healthy control subjects whereas the remaining half had homonymous visual field defects or glaucoma. Sippel et al. [17] performed a study to investigate the impact of visual field loss on everyday living activities. Specifically, subjects were asked to collect certain products in a drugstore and their eye movements were recorded during the search task with a closely mounted setup. Half of the participants were healthy control subjects and the remaining half had binocular glaucoma.

Fuhl et al. [18] investigated the robustness of pupil detection in real-world challenging scenarios, including varying illumination conditions and reflections on eyeglasses and contact lenses. In addition to the dataset introduced by Świrski et al. [19], Fuhl et al. [18] evaluated their algorithm over nine image sets from the on-road experiments [16] and eight image sets from the supermarket experiments [17]. Fuhl et al. [20] extended the dataset in [18] with more challenging scenarios to test pupil detection performance in real-world environments. Five new image sets were obtained from the on-road experiments [16] with motion blur, reflections, and low pupil contrast, and two new image sets were obtained indoors from Asian subjects whose eyelids and eyelashes partially covered the pupils, along with reflections in certain images. Fuhl et al. [21] focused on pupil detection tailored to microscope images and introduced a pupil dataset whose images were obtained with an unmodified microscope ocular. Challenging conditions in the microscope dataset include blur, exposure, contrast, irregular pupil shape, light gradients, and reflections. Tonsen et al. [22] introduced the labeled pupils in the wild (LPW) dataset to study pupil detection in unconstrained environments, which includes indoor/outdoor scenes and participants wearing glasses and eye make-up, from different ethnicities with variable skin tones, eye colors, and face shapes.

Existing pupil datasets cannot be directly used for pupillary light reflex assessment for three main reasons. First, the majority of existing datasets are based on images, which lack the sequential information necessary to assess the light reflex. The Kasneci [16] and Sippel [17] datasets were acquired as video sequences, but only static images were provided in the related studies [18, 20]. Second, limited control over acquisition conditions makes it impossible to assess the reflex based solely on the light stimulus because of simultaneous exposure to varying conditions. Third, there is no medical metadata corresponding to the RAPD condition. Because of these limitations, we decided to obtain a new dataset for pupillary light reflex assessment. An intuitive option would be capturing pupil data with the off-the-shelf portable eye trackers [23, 24] that are commonly used in related research studies. Even though eye trackers are optimized for gaze and eye tracking, their design leads to inherent limitations for measuring constriction and dilation motion. The imaging sensors of eye trackers are located around the eyes and have an indirect view of the pupils, which makes it more challenging to accurately measure pupil size. Moreover, integrating an automated light stimulus with an off-the-shelf eye tracker is not straightforward because of its black-box nature and limited control over the device. Thus, we decided to develop a custom acquisition and test platform that can be used in a clinical study to obtain an RAPD dataset.

III RAPD Dataset

We utilized the lab-on-a-headset [25, 26] to perform the automated light reflex test and generate our RAPD dataset. The lab-on-a-headset is an ultra-portable device with on-board processing capability, which enabled us to capture and preview patient videos during the clinical study. We predefined the test sequence by connecting to the headset and utilizing the graphical user interface that provides full control over the test stimuli. Tested subjects were stimulated with the automated light sequences and their pupillary reactions were recorded simultaneously as high-definition streams. In Fig. 3, we show sample images captured with the introduced headset. We used the infrared frames to assess relative afferent pupillary defect in this study.

Fig. 3: Sample pupil images obtained with lab-on-a-headset.
Fig. 4: Pupil detection pipeline based on Starburst algorithm.

We obtained approvals from the Institutional Review Board committees of Emory University and Georgia Institute of Technology. The RAPD conditions of the subjects were determined by clinicians with the manual swinging flashlight test as well as the neutral density filter test. The demographics of the clinical subjects are summarized in Table I. The RAPD dataset used in this study included 22 subjects, half of whom correspond to a control group without RAPD whereas the other half have positive RAPD. Four out of ten males and seven out of twelve females have positive RAPD. The average age of the participants is around 52 years for subjects without RAPD and 56 years for RAPD-positive subjects, with standard deviations of approximately 14 and 10 years, respectively.

|                             | No RAPD       | RAPD +ve      |
| # Subjects (% of total)     | 11 (50.00%)   | 11 (50.00%)   |
| # Males (% of group)        | 6 (54.55%)    | 4 (36.36%)    |
| # Females (% of group)      | 5 (45.45%)    | 7 (63.64%)    |
| Age (mean ± std, in years)  | 52.27 ± 14.48 | 56.73 ± 10.53 |

TABLE I: Subject statistics in the clinical study.

IV RAPD Detection Algorithm

IV-A Pupil Detection

We utilize the Starburst algorithm for pupil localization [27]. Specifically, we implemented our own version based on the details in the paper [27] and the website [28]. The main components of the algorithm can be grouped into four stages: (1) initialization of the pupil center, (2) ray-feature projection, (3) reiteration, and (4) mean calculation, as shown in Fig. 4. With the lab-on-a-headset setup, pupils are usually located within the central region of the video frames, as shown in Fig. 5. Therefore, we initialize the pupil center as the center of the video frame.

Fig. 5: Heat map of the estimated pupil centers in the test sequences. The x-axis corresponds to the horizontal pupil location, the y-axis corresponds to the vertical pupil location, and the color represents the density, which switches from white to dark blue as more pupils are detected over the same region. Distributions of the pupil centers are also shown on the right side and the top of the heat map.

In Stage 2, rays are projected outward from the initialized pupil center to determine feature points, which are used to update the pupil center. We provide a detailed illustration of the progression of Stage 2 in Fig. 6. The initialized pupil center is shown in Fig. 6(a) and the first ray projected from the pupil center in Fig. 6(b). We compare the gradient value of every point along this ray against a gradient threshold value, and ray projection continues until either a valid feature point is determined or the ray reaches the border of the image frame. For the first ray, the gradient threshold test does not result in a detected feature point. The same process is repeated for rays projected in all directions, as shown in Fig. 6(c), in which the green point represents a valid feature point. After determining the feature points on all rays, the mean of the feature points is used to update the pupil center, as shown in Fig. 6(d)-(f). In Stage 3, we repeat the steps in Stage 2 using the updated pupil center and a new threshold value to determine the next updated pupil center. This is repeated until all gradient threshold values, ranging from 255 to 0, are swept through, which leads to the feature points shown in Fig. 6(g) and the estimated pupil centers in Fig. 6(h). In Stage 4, we calculate the mean of the feature points to determine the final pupil location, as shown in Fig. 6(i).

Fig. 6: Visualization of the stages in the Starburst algorithm including center initialization (a), ray-feature projection (b-f), reiteration (g-h), and averaging of the feature centers (i).
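To make the four stages concrete, the following is a minimal sketch of the ray-based center estimation described above, assuming a single-channel grayscale frame given as a NumPy array. The fixed gradient threshold and the simple convergence check are simplifications; the full Starburst algorithm [27] sweeps the threshold from 255 down to 0 and refines the feature points further.

```python
import numpy as np

def starburst_center(gray, n_rays=36, grad_thresh=20.0, n_iters=10):
    """Sketch of Starburst-style pupil center estimation.
    Stage 1: initialize at the frame center. Stage 2: march rays
    outward and keep high-gradient feature points. Stage 3: reiterate
    from the updated center. Stage 4: return the mean feature point."""
    h, w = gray.shape
    img = gray.astype(np.float32)
    center = np.array([w / 2.0, h / 2.0])       # Stage 1: frame center
    for _ in range(n_iters):                    # Stage 3: reiteration
        feats = []
        for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
            step = np.array([np.cos(ang), np.sin(ang)])
            p = center.copy()
            prev = img[int(p[1]), int(p[0])]
            while True:                         # Stage 2: ray projection
                p += step
                x, y = int(p[0]), int(p[1])
                if not (0 <= x < w and 0 <= y < h):
                    break                       # ray reached the border
                if img[y, x] - prev > grad_thresh:
                    feats.append((x, y))        # dark-to-bright transition
                    break
                prev = img[y, x]
        if not feats:
            break
        new_center = np.mean(feats, axis=0)     # Stage 4: mean of features
        if np.linalg.norm(new_center - center) < 1.0:
            return new_center                   # center stopped moving
        center = new_center
    return center
```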

IV-B Pupil Size Measurement

We can consider the pupil as a circular region and utilize the Circular Hough Transform (CHT) to measure the pupil size. We use the CHT to transform the pixels in the image plane (x, y) into right circular cones in the Hough space. In Fig. 7, we provide an example in which red and orange points over the pupil circumference are transformed into the Hough domain. The transformed points correspond to cones in the Hough domain, and the intersection of the cones leads to the parameters of the pupil in the spatial domain.

Fig. 7: Circular Hough Transform from the image plane to the Hough space.

We need to determine a minimum value and a maximum value for the circle search range. In this study, we set the maximum search range as the entire image to account for pupil size variation among subjects, and the minimum as 5% of the maximum value to eliminate smaller regions that can correspond to glints or other point-like regions. We perform a parameter sweep to determine the accumulator threshold, the accumulator array size, and the Canny threshold. The accumulator array is used to determine the points of intersection between the cones in the Hough parameter space. The accumulator threshold determines the number of votes required for a point of intersection to be considered a circle in the image plane. The Canny threshold is used to transform the image into a binary map and separate the brighter regions from the dark regions, which should correspond to pupils. We perform pupil size measurement over entire eye images (Fig. 8(a)) as well as over images cropped around the estimated pupil center (Fig. 8(b)). Specifically, we crop the image around the estimated pupil location to a quarter of the input image size and measure the pupil size over the cropped region. We observe that surrounding structures in the eye image corrupt the measurement over the full image, whereas we obtain a more accurate pupil measurement in the cropped scenario.

(a) Original image (b) Highlighted cropped region
Fig. 8: Pupil size measurement over entire image versus cropped image.
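As a rough illustration of this measurement step, the sketch below uses OpenCV's Hough gradient detector. The `dp`, `param1` (Canny threshold), and `param2` (accumulator threshold) values are placeholders standing in for the swept parameters described above, not the values used in the study.

```python
import cv2

def measure_pupil(gray, center=None):
    """Estimate the pupil circle with the Circular Hough Transform.
    `gray` is an 8-bit grayscale eye image; if a pupil center estimate
    is given, crop to a quarter-size window around it first."""
    h, w = gray.shape
    if center is not None:
        cx, cy = int(center[0]), int(center[1])
        gray = gray[max(cy - h // 4, 0):cy + h // 4,
                    max(cx - w // 4, 0):cx + w // 4]
    max_r = min(gray.shape) // 2          # max: bounded by the image
    min_r = max(int(0.05 * max_r), 1)     # min: 5% of max rejects glints
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT,
        dp=2,                   # accumulator array resolution (placeholder)
        minDist=gray.shape[0],  # expect a single pupil per (cropped) frame
        param1=100,             # Canny edge threshold (placeholder)
        param2=30,              # accumulator vote threshold (placeholder)
        minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return None
    x, y, r = circles[0, 0]     # first (highest-vote) circle
    return float(x), float(y), float(r)
```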

IV-C RAPD Assessment

Inaccurate pupil size measurements can mislead the RAPD assessment process. In Fig. 9, we provide a sample from the conducted swinging flashlight test. The horizontal motion of the pupil is shown in Fig. 9(a), in which the x-axis corresponds to the frame number of the video, the y-axis corresponds to the horizontal location of the pupil, and the green line indicates the mean horizontal location of the pupil. Based on a preliminary analysis of the test sequences, we observed that horizontal/vertical pupil motion is usually restricted to within 5% of the mean location, which is shown with the orange lines in Fig. 9(a). We highlight the data points exceeding the normal motion range in red in Fig. 9(a) and provide the corresponding pupil size measurements in Fig. 9(b). We show a pupil localization result corresponding to one of these erroneous points in Fig. 9(c), along with the detected pupil in Fig. 9(d). Even though the radius of the measurement is in a normal range, the detected pupil corresponding to the erroneous motion is inaccurate. Therefore, we filter out the detection results that exceed the pupil motion limit, both horizontally and vertically, before comparing the right and left pupillary light reflexes.

(a) Pupil horizontal motion (b) Pupil size measurement (c) Deviation from mean (d) Invalid measurement
Fig. 9: An erroneous pupil detection example in which false pupil detection can be determined from the horizontal motion of the pupils.
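A minimal sketch of this motion-based postprocessing follows, assuming per-frame pupil center coordinates and radii as arrays; the 5% band mirrors the limit described above, applied in both the horizontal and vertical directions.

```python
import numpy as np

def filter_by_motion(xs, ys, radii, band=0.05):
    """Keep pupil size measurements whose center stays within `band`
    (5%) of the mean horizontal and vertical locations; measurements
    outside either band are treated as erroneous detections."""
    xs, ys, radii = (np.asarray(a, dtype=float) for a in (xs, ys, radii))
    in_x = np.abs(xs - xs.mean()) <= band * xs.mean()  # horizontal band
    in_y = np.abs(ys - ys.mean()) <= band * ys.mean()  # vertical band
    valid = in_x & in_y
    return radii[valid], valid
```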
(a) Dissimilarity: 1-SRCC (b) Dissimilarity: 1-PLCC
Fig. 10: Receiver operating characteristic (ROC) curves for the RAPD detection algorithms. There are 16 algorithm configurations, half of which use the residual of the Spearman correlation (a) as the dissimilarity measure while the remaining ones use the residual of the Pearson correlation (b). In each subfigure, there are 8 ROC curves that correspond to distinct configurations based on cropping, motion-based postprocessing, and smoothing, as shown in the color legend.

We assess the RAPD condition by measuring the dissimilarity between the right and left pupillary reflexes. First, we measure the similarity between the right and left reflexes in terms of the Spearman rank correlation coefficient (SRCC) and the Pearson linear correlation coefficient (PLCC). Then, we calculate one minus the absolute correlation to obtain the dissimilarity. For each dissimilarity measure, we evaluate combinations of cropping, motion-based postprocessing, and smoothing. Cropping refers to measuring pupils over cropped images as described in Section IV-B. Smoothing refers to median filtering the pupil size curves with a window size of three before the dissimilarity comparison. Motion-based postprocessing refers to the removal of pupil size measurements that correspond to erroneous pupil motion, as described in the previous paragraph.

Dissimilarity measures approach zero as the correlation between the relative pupil size measurements increases. Spearman correlation captures the monotonic relationship between the compared signals, whereas Pearson correlation captures the linear relationship. When a subject does not have RAPD, the pupillary reactions should be almost identical, which leads to strong monotonic and linear relationships between the compared signals. Therefore, the dissimilarity measures should be close to zero as the correlation measures are close to one. In case of RAPD, the monotonic and linear relationships should be weaker, which leads to low correlation values and high dissimilarity values.
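The dissimilarity index itself reduces to a few lines; the sketch below assumes temporally aligned pupil size signals for the right and left eye and uses SciPy for the correlation coefficients and the window-3 median filter.

```python
import numpy as np
from scipy.signal import medfilt
from scipy.stats import pearsonr, spearmanr

def rapd_index(right, left, use_spearman=True, smooth=False):
    """Dissimilarity between right and left pupillary light reflexes:
    one minus the absolute correlation of the pupil size signals."""
    right = np.asarray(right, dtype=float)
    left = np.asarray(left, dtype=float)
    if smooth:                           # median filter, window size three
        right, left = medfilt(right, 3), medfilt(left, 3)
    if use_spearman:
        rho, _ = spearmanr(right, left)  # monotonic relationship (SRCC)
    else:
        rho, _ = pearsonr(right, left)   # linear relationship (PLCC)
    return 1.0 - abs(rho)  # near 0 for healthy, larger under RAPD
```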

V Experimental Results and Discussion

| Term                | Description                          |
| Positive (P)        | Number of RAPD-positive subjects     |
| Negative (N)        | Number of subjects without RAPD      |
| True positive (TP)  | Number of correct RAPD detections    |
| True negative (TN)  | Number of correct no-RAPD detections |
| False positive (FP) | Number of false RAPD detections      |
| False negative (FN) | Number of undetected RAPD patients   |

TABLE II: Description of the terms used for detection performance.
| Crop | Motion | Smoothing | Dissimilarity | Sensitivity | Specificity | Precision | AUC   | F2    | F1    | F0.5  |
|      |        |           | 1-SRCC        | 81.3%       | 68.8%       | 72.2%     | 0.738 | 1.912 | 0.765 | 0.478 |
|      |        |           | 1-PLCC        | 68.8%       | 56.3%       | 61.1%     | 0.580 | 1.618 | 0.647 | 0.404 |
|      |        | x         | 1-SRCC        | 75.0%       | 68.8%       | 70.6%     | 0.738 | 1.818 | 0.727 | 0.455 |
|      |        | x         | 1-PLCC        | 75.0%       | 62.5%       | 66.7%     | 0.627 | 1.765 | 0.706 | 0.441 |
|      | x      |           | 1-SRCC        | 81.3%       | 50.0%       | 61.9%     | 0.527 | 1.757 | 0.703 | 0.439 |
|      | x      |           | 1-PLCC        | 56.3%       | 50.0%       | 52.9%     | 0.473 | 1.364 | 0.545 | 0.341 |
|      | x      | x         | 1-SRCC        | 62.5%       | 50.0%       | 55.6%     | 0.596 | 1.471 | 0.588 | 0.368 |
|      | x      | x         | 1-PLCC        | 56.3%       | 50.0%       | 52.9%     | 0.570 | 1.364 | 0.545 | 0.341 |
| x    |        |           | 1-SRCC        | 87.5%       | 75.0%       | 77.8%     | 0.885 | 2.059 | 0.824 | 0.515 |
| x    |        |           | 1-PLCC        | 93.8%       | 87.5%       | 88.2%     | 0.916 | 2.273 | 0.909 | 0.568 |
| x    |        | x         | 1-SRCC        | 87.5%       | 75.0%       | 77.8%     | 0.861 | 2.059 | 0.824 | 0.515 |
| x    |        | x         | 1-PLCC        | 81.3%       | 68.8%       | 72.2%     | 0.840 | 1.912 | 0.765 | 0.478 |
| x    | x      |           | 1-SRCC        | 93.8%       | 81.3%       | 83.3%     | 0.676 | 2.206 | 0.882 | 0.551 |
| x    | x      |           | 1-PLCC        | 87.5%       | 75.0%       | 77.8%     | 0.820 | 2.059 | 0.824 | 0.515 |
| x    | x      | x         | 1-SRCC        | 87.5%       | 81.3%       | 82.4%     | 0.635 | 2.121 | 0.848 | 0.530 |
| x    | x      | x         | 1-PLCC        | 75.0%       | 62.5%       | 66.7%     | 0.537 | 1.765 | 0.706 | 0.441 |

TABLE III: Overall evaluation of RAPD detection algorithms.

At first, we analyze the receiver operating characteristic (ROC) curves of the RAPD detection algorithms to evaluate their diagnostic capability. We obtain the ROC curves in Fig. 10 by plotting the true positive rate (TPR) on the y-axis versus the false positive rate (FPR) on the x-axis for each RAPD detection algorithm while sweeping the classification threshold. We calculate TPR as TP/(TP+FN) and FPR as FP/(FP+TN), whose terms are described in Table II. For each algorithm configuration, we determine the optimal operation points that maximize sensitivity and specificity, which are shown with a filled circle over each curve. We calculate sensitivity as TP/(TP+FN) and specificity as TN/(TN+FP). In addition to sensitivity (TPR) and specificity, we evaluate the RAPD detection performance in terms of precision (TP/(TP+FP)), the area under the curve (AUC), and F_β scores. We obtain the AUC values by calculating the area under the ROC curves in Fig. 10 for each algorithm configuration. Finally, we obtain the F_β scores by calculating a weighted combination of precision and sensitivity as

F_β = (1 + β²) · (Precision · Sensitivity) / (Precision + Sensitivity)    (1)

where β is set to 2, 1, and 0.5 to set the relative importance of precision and sensitivity. We report the RAPD detection performance of each algorithm configuration around its optimal operation point in Table III.
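For concreteness, the following minimal sketch computes the reported metrics from the confusion counts described in Table II, with the F_β combination written as in (1); the example counts (TP = 15, TN = 14, FP = 2, FN = 1) are those implied by the best configuration discussed below.

```python
def detection_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, precision, and the F_beta scores of
    Eq. (1), computed from the confusion counts of Table II."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision = tp / (tp + fp)     # positive predictive value
    f_scores = {beta: (1 + beta ** 2) * precision * sensitivity
                / (precision + sensitivity)
                for beta in (2, 1, 0.5)}  # Eq. (1) with beta = 2, 1, 0.5
    return sensitivity, specificity, precision, f_scores

# Counts from the best configuration: TP=15, TN=14, FP=2, FN=1
# -> sensitivity 93.8%, specificity 87.5%, precision 88.2%
print(detection_metrics(15, 14, 2, 1))
```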

Fig. 11: Ground truth versus estimated RAPD conditions.

The baseline algorithm combined with cropping and Pearson-based dissimilarity leads to the highest RAPD detection performance in all evaluation categories, as shown in Table III. Based on the highest-performing algorithm, we provide a scatter plot of ground-truth versus estimated RAPD conditions in Fig. 11. In the scatter plot, the x-axis corresponds to the RAPD index obtained from the best-performing algorithm and the y-axis corresponds to the ground-truth RAPD classes. The top-right region corresponds to true positives and the bottom-left region to true negatives, whereas the top-left region corresponds to false negatives and the bottom-right region to false positives. Overall, there are only 2 false RAPD detections (FP) and 1 missed RAPD detection (FN) out of 32 test cases.

VI Conclusion

We automated the manual swinging flashlight test with a portable imaging device and introduced an algorithm to objectively detect relative afferent pupillary defect (RAPD). Preliminary results show that the introduced algorithm can correctly identify the RAPD condition in 29 out of 32 cases. In this study, we focused only on the RAPD condition. In future work, however, we plan to utilize the lab-on-a-headset and the developed algorithms for other clinical conditions related to pupillary assessment. Our objective is to develop portable systems backed by artificial intelligence that can operate in remote locations and serve underprivileged communities and patients with mobility constraints.

References

  • [1] K. Čapek, R.U.R. - Rossum’s Universal Robots, Aventinum, 1920.
  • [2] G. P. Moustris, S. C. Hiridis, K. M. Deliparaschos, and K. M. Konstantinidis, “Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature,” The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 7, no. 4, pp. 375–392, 2011.
  • [3] N. G. Hockstein, C. G. Gourin, R. A. Faust, and D. J. Terris, “A history of robots: from science fiction to surgical robotics,” Journal of Robotic Surgery, vol. 1, no. 2, pp. 113–118, Jul 2007.
  • [4] T. L. Edwards, K. Xue, H. C. M. Meenink, M. J. Beelen, G. J. L. Naus, M. P. Simunovic, M. Latasiewicz, A. D. Farmery, M. D. de Smet, and R. E. MacLaren, “First-in-human study of the safety and viability of intraocular robotic surgery,” Nature Biomedical Engineering, vol. 2, no. 9, pp. 649–656, 2018.
  • [5] V. Gulshan, L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, and et al., “Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs,” JAMA, vol. 316, no. 22, pp. 2402–2410, 2016.
  • [6] J. De Fauw, J. R. Ledsam, B. Romera-Paredes, S. Nikolov, N. Tomasev, S. Blackwell, H. Askham, X. Glorot, B. O’Donoghue, D. Visentin, and et al, “Clinically applicable deep learning for diagnosis and referral in retinal disease,” Nature Medicine, vol. 24, no. 9, pp. 1342–1350, 2018.
  • [7] The U.S. Food and Drug Administration, “FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems,” [Online] Available: https://www.fda.gov/newsevents/newsroom/pressannouncements/ucm604357.htm. [Accessed: 5-Nov-2018].
  • [8] C. A. Hall and R. P. Chilcott, “Eyeing up the Future of the Pupillary Light Reflex in Neurodiagnostics,” Diagnostics, vol. 8, no. 1, Mar 2018.
  • [9] D. C. Broadway, “How to test for a relative afferent pupillary defect (RAPD),” Community Eye Health, vol. 25, no. 79-80, pp. 58–59, 2012.
  • [10] L. Omburo, S. Stutzman, C. Supnet, M. Choate, and D. M. Olson, “High variance in pupillary examination findings among postanesthesia care unit nurses,” Journal of PeriAnesthesia Nursing, vol. 32, no. 3, pp. 219–224, 2017.
  • [11] I. E. Loewenfeld and D. A. Newsome, “Iris Mechanics I. Influences of Pupil Size on Dynamics of Pupillary Movements,” American Journal of Ophthalmology, vol. 71, no. 1, pp. 347–362, 1971.
  • [12] A. Kawasaki, P. Moore, and R. H. Kardon, “Variability of the relative afferent pupillary defect,” American Journal of Ophthalmology, vol. 120, no. 5, pp. 622 – 633, 1995.
  • [13] N. J. Volpe, E. S. Plotkin, M. G. Maguire, R. Hariprasad, and S. L. Galetta, “Portable pupillography of the swinging flashlight test to detect afferent pupillary defects,” Ophthalmology, vol. 107, no. 10, pp. 1913 – 1921, 2000.
  • [14] A. Miki, A. Iijima, M. Takagi, K. Yaoeda, T. Usui, S. Hasegawa, H. Abe, and T. Bando, “Pupillography of automated swinging flashlight test in amblyopia,” Clinical Ophthalmology, vol. 2, no. 4, pp. 781–786, 2008.
  • [15] M. Waisbourd, B. Lee, M. H. Ali, L. Lu, P. Martinez, B. Faria, A. Williams, M. R. Moster, L. J. Katz, and G. L. Spaeth, “Detection of asymmetric glaucomatous damage using automated pupillography, the swinging flashlight method and the magnified-assisted swinging flashlight method,” Nature Eye, vol. 29, pp. 1321–1328, 2015.
  • [16] E. Kasneci, K. Sippel, K. Aehling, M. Heister, W. Rosenstiel, U. Schiefer, and E. Papageorgiou, “Driving with binocular visual field loss? a study on a supervised on-road parcours with simultaneous eye and head tracking,” PLOS ONE, vol. 9, no. 2, pp. 1–13, 02 2014.
  • [17] K. Sippel, E. Kasneci, K. Aehling, M. Heister, W. Rosenstiel, U. Schiefer, and E. Papageorgiou, “Binocular glaucomatous visual field loss and its impact on visual exploration - a supermarket study,” PLOS ONE, vol. 9, no. 8, pp. 1–7, 08 2014.
  • [18] W. Fuhl, T. Kübler, K. Sippel, W. Rosenstiel, and E. Kasneci, “Excuse: Robust pupil detection in real-world scenarios,” in Computer Analysis of Images and Patterns, George Azzopardi and Nicolai Petkov, Eds., Cham, 2015, pp. 39–51, Springer International Publishing.
  • [19] L. Świrski, A. Bulling, and N. Dodgson, “Robust real-time pupil tracking in highly off-axis images,” in Proceedings of the Symposium on Eye Tracking Research and Applications, 2012, pp. 173–176.
  • [20] W. Fuhl, T. C. Santini, T. Kübler, and E. Kasneci, “Else: Ellipse selection for robust pupil detection in real-world environments,” in Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, New York, NY, USA, 2016, pp. 123–130.
  • [21] W. Fuhl, T. Santini, C. Reichert, D. Claus, A. Herkommer, H. Bahmani, K. Rifai, S. Wahl, and E. Kasneci, “Non-intrusive practitioner pupil detection for unmodified microscope oculars,” Computers in Biology and Medicine, vol. 79, pp. 36 – 44, 2016.
  • [22] M. Tonsen, X. Zhang, Y. Sugano, and A. Bulling, “Labelled pupils in the wild: A dataset for studying pupil detection in unconstrained environments,” in Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, New York, NY, USA, 2016, ETRA ’16, pp. 139–142, ACM.
  • [23] M. Kassner, W. Patera, and A. Bulling, “Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction,” in arXiv:1405.0006, 2014.
  • [24] Tobii Technology, “An introduction to eye tracking and tobii eye trackers,” 2010.
  • [25] D. Temel, M. J. Mathew, G. AlRegib, and Y. M. Khalifa, “Lab-on-a-headset: Multi-purpose ocular monitoring,” in U.S. Provisional Patent, No: 62/642,279, March 2018.
  • [26] D. Temel, M. J. Mathew, G. AlRegib, and Y. M. Khalifa, “Ocular monitoring headset,” in U.S. Non-Provisional Patent, No: 16/382,363, March 2019.
  • [27] D. Li, D. Winfield, and D. J. Parkhurst, “Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, June 2005, pp. 79–79.
  • [28] University of Tübingen, “Pupil detection,” [Online] Available: http://www.ti.uni-tuebingen.de/Pupil-detection.1827.0.html.