Biometrics is the science of authenticating individuals using their biological traits. Given their natural user experience and broad acceptance, fingerprints, faces, and eyes are among the most popular biometric modalities on mobile devices. Ocular biometrics in the visible spectrum has gained significant traction in recent years thanks to its selfie-like acquisition through the front-facing cameras of mobile phones and its perceived higher level of security. For ocular biometrics in visible light, a user is usually authenticated using the vasculature seen on the white of the eye, the eyebrows, or the periocular region encompassing the eye.

We collected the largest visible-light mobile ocular dataset, VISOB, and introduced a subset of it to the public during a competition at IEEE ICIP 2016. However, that competition focused mainly on short-term data and single-frame matching experiments. In contrast, the larger superset in our possession contains stacks of burst images for each capture series, in addition to long-term data. Given the varying lighting conditions, motion blur, and prescription glasses encountered on mobile devices, ocular biometrics may benefit significantly from multi-frame image enhancement: it is known that image stacks from burst captures can be used to resolve the finer features essential to selfie ocular biometrics, which often suffer from subpar front-facing cameras. In the proposed challenge, we will not decouple the image enhancement stage from the matching stage, thereby allowing end-to-end learning to provide the best fusion of biometric information from bursts of input images. Part II of the VISOB competition will also focus on long-term user verification, a more challenging but also more realistic scenario.
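As a toy illustration of multi-frame fusion (not a prescribed method for the challenge), per-frame features from a burst can be averaged before matching; in this sketch, `embed` is a hypothetical stand-in for whatever feature extractor a contestant might supply:

```python
import numpy as np

def fuse_and_match(stack_a, stack_b, embed):
    """Fuse per-frame embeddings by averaging, then compare the two fused
    vectors with cosine similarity.

    stack_a, stack_b: lists of eye images (numpy arrays) from one burst each.
    embed: any function mapping a single image to a 1-D feature vector.
    """
    fa = np.mean([embed(img) for img in stack_a], axis=0)
    fb = np.mean([embed(img) for img in stack_b], axis=0)
    return float(np.dot(fa, fb) / (np.linalg.norm(fa) * np.linalg.norm(fb)))
```

An end-to-end system would instead learn the fusion jointly with the matcher, but the averaging baseline above shows why burst inputs help: frame-level noise is suppressed before comparison.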

Figure: example eye image with the iris, conjunctiva and episcleral vasculature, eyebrow, and periocular regions labeled.

VISOB 2.0 Dataset

The VISible light mobile Ocular Biometric (VISOB) 2.0 dataset (WCCI/IJCNN 2020 challenge version) is a new subset of the original dataset, consisting of eye images from 250 healthy adult volunteers acquired using two different smartphones, a Samsung Note 4 and an Oppo N1, capturing bursts of still images at 1080p resolution using pixel binning. Volunteers' data were collected during two visits (Visit 1 and Visit 2), 2 to 4 weeks apart, with all 250 participants returning for the second visit (long-term data). During each visit, volunteers were asked to take selfie-like captures using the front-facing cameras of the aforementioned devices in two sessions (Session 1 and Session 2) about 10 to 15 minutes apart. The volunteers used the phones naturally, holding the devices 8 to 12 inches from their faces. For each session, images were captured under three lighting conditions: regular office light, dim light (office lights off but dim ambient lighting still present), and natural daylight (next to large sunlit windows).

During the development stage, contestants will be provided with five consecutive eye crops (see figure below) from 150 participants, covering both visits. The remaining participants' data will be withheld for our evaluation (open-set testing), in which contestants will be asked to submit an executable that accepts two eye image stacks and generates a matching score.

| Visit | Office light | Dim light | Daylight | Office light | Dim light | Daylight |
| --- | --- | --- | --- | --- | --- | --- |
| Visit 1 | 4,542 | 4,788 | 4,868 | 7,848 | 5,138 | 5,314 |
| Visit 2 | 6,138 | 6,158 | 6,148 | 10,546 | 7,076 | 6,864 |

The table shows the number of left- and right-eye samples in the development set provided to the contestants, broken down by device (the two column groups), lighting condition, and visit.

Sample eye stacks from the dataset.

As the performance metric, we will compile genuine match rates (GMR) at 1/1,000, 1/10,000, and 1/100,000 false match rates (FMR) on the embargoed test dataset: (a) using the same lighting condition for enrollment and testing, (b) using different lighting conditions for enrollment and testing, and (c) mixing all combinations of lighting conditions for enrollment and testing. The winner will be decided by the best GMR at 1/100,000 FMR for the latter case (c, all vs. all). GMRs at higher FMRs will be used as tie-breakers only if and when needed. For completeness, we will also report the average execution time on our hardware setup (please see the evaluation section below), but it won't affect the rankings.
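For illustration, a GMR at a fixed FMR can be estimated from lists of genuine and impostor similarity scores roughly as follows (a minimal sketch with simplified threshold selection, not the official evaluation code):

```python
import numpy as np

def gmr_at_fmr(genuine, impostor, fmr=1e-5):
    """Genuine match rate at a given false match rate.

    genuine, impostor: 1-D arrays of similarity scores (higher = more similar).
    The decision threshold is chosen so that at most `fmr` of the impostor
    comparisons score above it.
    """
    impostor = np.sort(np.asarray(impostor, dtype=float))
    # Highest impostor score that still leaves at most an fmr fraction above it.
    thr = impostor[int(np.ceil((1.0 - fmr) * len(impostor))) - 1]
    return float(np.mean(np.asarray(genuine, dtype=float) > thr))
```

Note that resolving a 1/100,000 FMR reliably requires on the order of at least 100,000 impostor comparisons, which is why the metric is computed on the full embargoed test set.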

Registration Instructions

To register for the competition and download the VISOB 2.0 dataset (WCCI/IJCNN 2020 challenge version), please first fill out and sign the VISOB general Data Sharing Agreement and send it to Narsi Reddy (sdhy7@mail.umkc.edu), Mark Nguyen (hdnf39@mail.umkc.edu), and Dr. Reza Derakhshani (derakhshanir@umkc.edu) with the subject line "VISOB 2.0 WCCI/IJCNN2020 challenge". You will receive a download link upon approval of your usage of the database.

*For VISOB dataset ICIP2016 challenge version, please click here.


Evaluation

For evaluation, contestants are asked to provide an executable that accepts a .csv file with one test case per row (i.e., the paths of the two stacks of ocular images to be matched).

An example of how a submitted executable could be used:

cibitlab@umkc:~$ executable match_file.csv

Inside the match_file.csv:

/media/pc/test_data/img_1.png, /media/pc/test_data/img_9.png
/media/pc/test_data/img_2.png, /media/pc/test_data/img_65.png

The submitted executable is expected to write its results to another .csv file, in the same order as the image-stack pairs in the input match_file.csv.

If your code fails to generate a match score or otherwise raises an error, it should return a NaN in place of the match score. For simplicity, we will count all NaNs as false rejects.

For evaluation, please provide your model to the organizers via email.

Competition Results

Participating teams:

| Participant | Members | Institution | Features | Matcher | Rank |
| --- | --- | --- | --- | --- | --- |
| Team 1 | Luiz Zanlorensi, Diego Rafael Lucio, and David Menotti | Federal University of Parana | ResNet-50 | Cosine similarity | 1st |
| Team 2 | Ritesh Vyas | Bennett University | Directional threshold local binary pattern | Chi-square distance | 2nd |
| Team 3 | Anonymous | Anonymous | GoogLeNet | Euclidean distance and LSTM | 3rd |

Results (Note 4 device) in EER (%):

| Participant | Dark vs Dark | Dark vs Daylight | Dark vs Office | Daylight vs Dark | Daylight vs Daylight | Daylight vs Office | Office vs Dark | Office vs Daylight | Office vs Office |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Team 1 | 7.462 | 10.025 | 6.659 | 11.456 | 7.763 | 6.722 | 12.102 | 8.063 | 5.256 |
| Team 2 | 35.014 | 40.468 | 42.153 | 41.499 | 30.679 | 34.403 | 43.651 | 34.309 | 27.05 |
| Team 3 | 42.074 | 44.688 | 43.435 | 44.41 | 40.685 | 42.514 | 46.085 | 42.686 | 39.772 |

Results (Oppo device) in EER (%):

| Participant | Dark vs Dark | Dark vs Daylight | Dark vs Office | Daylight vs Dark | Daylight vs Daylight | Daylight vs Office | Office vs Dark | Office vs Daylight | Office vs Office |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Team 1 | 6.394 | 9.397 | 8.082 | 8.282 | 8.112 | 6.672 | 9.757 | 8.654 | 6.487 |
| Team 2 | 34.334 | 40.362 | 40.898 | 41.993 | 29.697 | 31.911 | 42.945 | 31.785 | 26.208 |
| Team 3 | 40.301 | 44.943 | 43.705 | 45.411 | 42.46 | 45.137 | 46.679 | 45.698 | 42.047 |
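For reference, EER values like those above can be computed from raw genuine and impostor score lists with a simple threshold sweep; a minimal sketch (not the official scoring code):

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate: the operating point where the false match rate (FMR)
    and false non-match rate (FNMR) are as close as possible. Scores are
    similarities, so higher means a better match."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    fmr = np.array([np.mean(impostor >= t) for t in thresholds])
    fnmr = np.array([np.mean(genuine < t) for t in thresholds])
    i = int(np.argmin(np.abs(fmr - fnmr)))
    return float((fmr[i] + fnmr[i]) / 2.0)
```

The "Dark vs Daylight" style cells correspond to using one lighting condition for enrollment and another for testing, which is why the cross-lighting entries are consistently harder than the same-lighting diagonal.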

Important Dates

January 7, 2020: Open registration

January 7, 2020: Instructions for the training dataset.

January 7, 2020: Instructions for model submission for evaluation.

June 8, 2020 (extended from May 8, 2020): Deadline for model submissions.

June 15, 2020 (extended from May 15, 2020): Notification of results (via e-mail).

June 22, 2020 (extended from May 17, 2020): Publication of the results on the website.

July 19-24, 2020: Competition Session and Awarding Ceremony at WCCI 2020

Due to the delays and disruptions caused by COVID-19, and at the request of our participants, we are extending the WCCI VISOB 2.0 competition deadlines by one month.

Please note that these dates are final and won’t be extended.

We are also planning to compile and publish a more detailed version of the competition results as a peer-reviewed paper at a later date. To facilitate this, it would be very helpful if you could submit a one-page document describing your methods and models along with your final submission; it may also help us better evaluate your entry. Please do not hesitate to reach out to us should you have any questions.

Organizer/Contact Information

Reza Derakhshani University of Missouri Kansas City reza@umkc.edu
Ajita Rattani Wichita State University
Mark Nguyen University of Missouri Kansas City hdnf39@mail.umkc.edu
Narsi Reddy University of Missouri Kansas City sdhy7@mail.umkc.edu