This research will develop an automated facial masking technique to deidentify face images while preserving the facial behaviors of the drivers. Facial deidentification is designed to be complete and irreversible, so that the driver's identity cannot be re-established. At the core of the research is a new concept referred to as Facial Action Transfer (FAT). FAT clones the facial actions from the video of one person (the source, i.e., the driver to be masked) to another person (the target, i.e., the person whose face replaces the driver's). Two important distinctions of FAT, compared to other image distortion methods, are: (1) the ability to replace the person-specific facial features (identity information) of the subject to be protected (the source) with those of the target; and (2) the ability to preserve facial actions by generating video-realistic facial shape and appearance changes on the target's face. This method produces photo-realistic and video-realistic deidentified video that preserves spontaneous and subtle facial movements while deidentifying the driver. The proposed system has two main components: (1) real-time facial feature tracking, referred to as the SDM (supervised descent method) tracker; and (2) FAT-based face deidentification (masking), which replaces identity-specific facial features while preserving other information (e.g., head pose and facial action).
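The shape-transfer idea at the heart of FAT can be sketched in simplified form: the source's shape change relative to a neutral frame is applied to the target's neutral landmarks. This is a minimal illustration using hypothetical landmark coordinates (not the actual FAT method, which also synthesizes photo-realistic appearance changes; a real system would obtain landmarks from a tracker such as the SDM tracker described above):

```python
import numpy as np

# Hypothetical landmark sets, shape (n_landmarks, 2): x, y image coordinates.
# In practice these would come from a facial feature tracker per video frame.
source_neutral = np.array([[10.0, 20.0], [30.0, 20.0], [20.0, 35.0]])
source_frame   = np.array([[10.0, 21.0], [30.0, 21.0], [20.0, 38.0]])  # source opens mouth
target_neutral = np.array([[12.0, 22.0], [28.0, 22.0], [20.0, 34.0]])

def transfer_action(source_neutral, source_frame, target_neutral):
    """Clone the source's facial action onto the target by adding the
    source's shape displacement to the target's neutral landmarks."""
    displacement = source_frame - source_neutral
    return target_neutral + displacement

target_frame = transfer_action(source_neutral, source_frame, target_neutral)
```

The displacement here is purely geometric; rendering the deidentified output additionally requires warping and regenerating the target's face appearance to match the transferred shape.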
Develop automated identity masking tools that can measurably protect the personally identifiable information (PII) of participants in the Naturalistic Driving Study.