Competition in a Nutshell

About the Data

The dataset consists of both high- and low-resolution driver video data prepared by Oak Ridge National Laboratory for this Driver Video Privacy Task. The data for this competition were gathered using the same data acquisition system as the much larger SHRP 2 dataset, which currently has limited access in a secure enclave. The SHRP 2 dataset was collected as part of the Naturalistic Driving Study (NDS) and contains millions of hours and petabytes of driver video data that researchers can use to gain a better understanding of the underlying causes of car crashes. The data for this Task feature 10 drivers in choreographed situations designed to emulate different naturalistic driving environments. Actions include talking, coughing, singing, dancing, waving, and eating, among others.

Evaluation Process

The evaluation process includes a preliminary automated evaluation as well as a human evaluation, assessing the de-identification of faces and measuring how consistently driver actions and emotions are preserved. An initial automated pass will be run using a deep learning-based gaze estimator: the difference between the gaze vectors predicted on the original, unfiltered video and on the de-identified video will be used as an initial score. Human evaluators will then use the method described here to produce another score.
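As an illustration only (the organizers' gaze estimator and exact scoring formula are not specified here), the automated comparison might resemble the following sketch, which treats the score as the mean angular difference between per-frame gaze vectors predicted on the original and de-identified videos:

```python
import numpy as np

def angular_error_deg(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Per-frame angle (degrees) between two sets of gaze vectors, shape (N, 3)."""
    g1 = g1 / np.linalg.norm(g1, axis=1, keepdims=True)
    g2 = g2 / np.linalg.norm(g2, axis=1, keepdims=True)
    cos_sim = np.clip(np.sum(g1 * g2, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_sim))

def gaze_consistency_score(gaze_original: np.ndarray, gaze_deidentified: np.ndarray) -> float:
    """Mean angular error across frames; lower values mean gaze behavior is better preserved."""
    return float(np.mean(angular_error_deg(gaze_original, gaze_deidentified)))

# Placeholder predictions; in practice these would come from the gaze estimator
# run on matching frames of the original and de-identified videos.
rng = np.random.default_rng(0)
original = rng.normal(size=(100, 3))
deidentified = original + 0.05 * rng.normal(size=(100, 3))
print(f"Mean gaze angular error: {gaze_consistency_score(original, deidentified):.2f} deg")
```

Whether the official score uses mean angular error or some other distance between gaze vectors is an assumption here; the key point is that a smaller difference indicates the de-identification has preserved gaze behavior.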

The scores for each of these areas will be combined into an overall assessment, with priority given to the human assessment of de-identification. PLEASE NOTE that this Task relies heavily on human evaluation, and we encourage participants to include in their submissions any ideas, methods, and results from their own evaluation approaches. Participants' descriptions of methodology, assumptions, and results will be shared with reviewers and the project organizers for additional discussion and for potential seed-funding opportunities for further research.
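The exact weighting of the automated and human scores is not specified in this description. Purely as a hypothetical illustration (the weights and score names below are assumptions, not the official formula), a combination that prioritizes the human de-identification assessment might look like:

```python
def combined_score(gaze_error: float, human_deid: float, human_utility: float,
                   w_deid: float = 0.6, w_utility: float = 0.25, w_gaze: float = 0.15) -> float:
    """Hypothetical weighted combination; weights are illustrative assumptions only.

    gaze_error    -- automated gaze-consistency error, normalized to [0, 1] (lower is better)
    human_deid    -- human-rated de-identification quality in [0, 1] (higher is better)
    human_utility -- human-rated preservation of actions/emotions in [0, 1] (higher is better)
    """
    return w_deid * human_deid + w_utility * human_utility + w_gaze * (1.0 - gaze_error)

print(combined_score(gaze_error=0.1, human_deid=0.9, human_utility=0.8))
```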