The goal of the project is to develop a system that can analyze naturalistic driving videos and automatically produce annotations and descriptors for events, behaviors, and driving scenarios relevant to transportation safety. Using large datasets obtained in naturalistic driving situations, the system will apply innovative machine-learning techniques to recognize safety-related aspects of the driver, passengers, and outside environment. The project has four primary objectives: 1) characterization of high-level driver behavior, such as eating or attending to a phone; 2) classification of extra-vehicular context, such as construction zones; 3) characterization of interactions and dependencies between drivers and the surrounding environment, such as looking at a passing vehicle or a billboard; and 4) demonstration of how the video analytics techniques developed under the other objectives enable human factors researchers to address key research questions in novel ways.
This project represents a collaborative university/industry effort by Virginia Tech and SmartDrive Systems, Inc. Both organizations will contribute technical expertise as well as extensive in-vehicle video datasets. In particular, the Virginia Tech Transportation Institute will provide access to the SHRP2 dataset, and SmartDrive Systems will provide annotations and epochs of naturalistic driving video obtained from more than 8 billion driving miles.