U.S. Department of Transportation
FHWA Highway Safety Programs

DATA QUALITY

One of the key findings of this report was the importance of data quality. The research team organized the management of the system development process into three parts to ensure data quality for the MIRE MIS:

  • Quality Planning: selecting standards and how to meet them.
  • Quality Assurance: evaluations during project development.
  • Quality Control: monitoring project results and improving performance as needed.

Quality Planning

Data quality decisions may vary from State to State depending on their priorities. Quality planning involves deciding what level of quality to expect in the resulting system. The Data Quality Performance Metrics section of this document discusses these data standards. Agencies should build the system's quality standards into its design from the earliest possible point in the design cycle.

As part of the quality planning effort, the design team should also conduct a benefit/cost analysis of achieving the desired level of quality. Weighing the trade-offs of increased quality can be difficult for designers unfamiliar with roadway data reporting and analysis in general, or for system users unfamiliar with system development. It is therefore important that the system design team include personnel from both IT and user management functions, so the team can decide in an informed manner how high to set the quality bar and ensure that the resulting system is sustainable over time. Benefit/cost analysis is discussed in further detail in the following section.

Quality Assurance

The quality assurance process includes techniques the system development team uses to ensure that the system meets expectations. After the design is complete, the development team constructs and tests the system before implementation, and quality assurance continues after implementation as well. The measurements that apply to the system's performance may not be familiar to those used to managing data quality, but they are very important for confirming that the system itself is not causing data problems. For example, does the system design allow access for all users who are responsible for analyzing safety data and setting project priorities? The percentage of critical users with flexible access to the MIRE MIS for analysis is a quality assurance metric: it tells system managers whether there is a problem with the system's accessibility. Such "down time" may not have a measurable impact on more traditional data quality measures (e.g., timeliness, accuracy, completeness), but it can have a major impact on how well the system is perceived by users and how willing they are to continue using it.

Following system implementation, the system developers and managers are the primary users of the quality assurance measures. These measures allow tracking of how well the hardware and software are performing and let managers know if there are problems that need to be addressed. They can also serve as an early warning of problems that could affect data quality – the bottom line for most users of a system.
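The accessibility metric described above could be computed as a simple share. The sketch below is illustrative only; the user records, field names, and the definition of a "critical" user are assumptions, not part of any MIRE MIS specification:

```python
# Minimal sketch of a quality assurance metric: the percentage of critical
# users who currently have analysis access to the MIRE MIS.
# The record layout ("critical", "has_access") is hypothetical.

def critical_user_access_rate(users):
    """Return the share of critical users with working MIS access."""
    critical = [u for u in users if u["critical"]]
    if not critical:
        return 0.0
    with_access = sum(1 for u in critical if u["has_access"])
    return with_access / len(critical)

users = [
    {"name": "safety analyst",    "critical": True,  "has_access": True},
    {"name": "district engineer", "critical": True,  "has_access": False},
    {"name": "summer intern",     "critical": False, "has_access": True},
]
print(f"critical-user access: {critical_user_access_rate(users):.0%}")
```

A rate well below 100 percent would signal the accessibility problem the text describes, even when traditional data quality measures look healthy.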

Quality Control

For data managers and users, quality control is the most familiar part of the project planning process. Quality control is the set of measurements and procedures put in place to ensure that data quality is meeting expectations. The measures of data quality can cover a wide variety of issues at a wide range of "levels" – from global indicators of overall quality, to micro-level indicators of the validity of data in one particular field of the database. Quality control processes are the responses of the system (the software and the people working with it) to quality problems that arise. For example, in the MIRE MIS, the data quality metrics would show if location data were not meeting expectations (e.g., location coding is not precise enough, or features cannot be matched to a location on the roadway network). The quality control processes are the response to these findings – what do the data managers and collectors do about the problems?
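A location check like the one in the example above might be sketched as follows. The field names, sample records, and matching rule are invented for illustration; a real check would match route and milepost against the State's linear referencing system:

```python
# Hypothetical quality control check for MIRE location data: flag records
# whose coordinates are missing or whose route cannot be matched to the
# roadway network. Field names and sample records are illustrative only.

def location_quality(records, known_routes):
    """Return (share of records that locate cleanly, list of problems)."""
    problems = []
    for r in records:
        if r.get("lat") is None or r.get("lon") is None:
            problems.append((r["id"], "missing coordinates"))
        elif r.get("route") not in known_routes:
            problems.append((r["id"], "unmatched route"))
    if not records:
        return 1.0, problems
    return (len(records) - len(problems)) / len(records), problems

records = [
    {"id": 1, "lat": 35.1, "lon": -80.8, "route": "SR-1"},
    {"id": 2, "lat": None, "lon": -80.9, "route": "SR-1"},
    {"id": 3, "lat": 35.2, "lon": -80.7, "route": "SR-99"},
]
rate, issues = location_quality(records, known_routes={"SR-1", "SR-2"})
print(f"cleanly located: {rate:.0%}; problems: {issues}")
```

The metric (the rate) tells managers that a problem exists; the problem list is the starting point for the quality control response the text describes.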

SYSTEM PERFORMANCE METRICS

The three most desired types of system-level performance measures for the MIRE MIS are those that would:

1) Relate the availability of the MIRE data to lives saved;

2) Accurately track the cost of maintaining the MIRE MIS; and

3) Reflect good customer service, both intra- and interagency.

In other words, these are measurements of the benefits and costs associated with maintaining a complete and accurate record of the MIRE MIS data elements for the State.

Benefit/Cost

The MIRE MIS may have distributed databases; if so, defining and obtaining the cost of entering data into the system may be difficult. For example, the planning division may establish the base network from design and as-built plans, while inventory updates may be made in the maintenance or other divisions. These distributed costs make it more difficult to obtain a reliable cost estimate. The research team did not find agreed-upon standards for the components to include, or exclude, in the cost metric. Questions to consider include: should the time of the data collection van's driver be included? Should the cost of supervisory review be a factor? If a roadway feature or crash report cannot be located accurately, should the cost of correction be included in the overall cost of operating the MIRE MIS?

If a State initially collects much of the data imported into the MIRE MIS for other uses, separating the costs and benefits attributable to the MIRE MIS from those of the State DOT's operational roadway databases becomes more difficult. A standard method for explicitly tracking the system cost could include the following components:

  • Time spent collecting initial data (if not already included in operational roadway databases).
  • Cost for transmission of imports of data from existing systems.
  • Costs of initial software purchase and implementation or in-house development.
  • Costs for annual maintenance, including any licensing and support.
  • Separate line items for life-cycle costs of the hardware and software.
  • Total number of roadway segments and features obtained electronically, and the percentage that represents of total data collection for the entire system.
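The components listed above could be tallied as simple line items. The sketch below is purely illustrative; the categories follow the list, but every dollar figure and segment count is invented:

```python
# Illustrative tally of the cost components listed above. All figures are
# invented for the example; they do not come from any State or FHWA source.
cost_items = {
    "initial data collection":            120_000,
    "data import/transmission":            15_000,
    "software purchase & implementation":  80_000,
    "annual maintenance & licensing":      25_000,
    "hardware life-cycle":                 40_000,
    "software life-cycle":                 30_000,
}
total_cost = sum(cost_items.values())

# Share of roadway segments/features obtained electronically (hypothetical).
segments_total = 50_000
segments_electronic = 42_000
pct_electronic = segments_electronic / segments_total

print(f"total system cost: ${total_cost:,}")
print(f"collected electronically: {pct_electronic:.0%}")
```

Keeping the life-cycle items as separate lines, as the list suggests, lets a State report annual operating cost and capital cost without re-deriving them.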

The FHWA Office of Safety has developed a methodology to estimate the costs and benefits of investing in data and data systems for safety. This methodology was published in October 2012 as a guidebook, Benefit-Cost Analysis of Investing in Data Systems and Processes for Data-Driven Safety Programs: Decision-Making Guidebook (9).

Customer Service

States may wish to measure the impact of their MIRE MIS on service delivery to users of roadway and other safety-related data. Customer service, in particular, may be affected by the use of a decision support system that consolidates safety-related data. A State may wish to measure how well it can meet customers' needs (both internal to the agency and external) once the MIRE MIS data are available electronically. For example, States could measure the proportion of data requests met via a web portal (thus requiring little or no direct staff time), as well as the time it takes to deliver data following a customer request. States wishing to show the full impact and utility of their system will measure customer service as well as costs.
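The two example measures above – portal-fulfilled share and delivery time – could be computed from a request log. The log layout below is a hypothetical sketch, not a prescribed format:

```python
# Sketch of the two customer service measures mentioned above: the share of
# data requests satisfied through a self-service web portal, and the median
# time to deliver data after a request. Request records are hypothetical.
from statistics import median

requests = [
    {"channel": "portal", "days_to_deliver": 0},
    {"channel": "portal", "days_to_deliver": 0},
    {"channel": "staff",  "days_to_deliver": 5},
    {"channel": "staff",  "days_to_deliver": 12},
]

portal_share = sum(r["channel"] == "portal" for r in requests) / len(requests)
turnaround = median(r["days_to_deliver"] for r in requests)

print(f"portal-fulfilled: {portal_share:.0%}; median delivery: {turnaround} days")
```

Tracking these before and after the MIRE MIS goes live would show the service improvement the text anticipates.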

DATA QUALITY PERFORMANCE METRICS

Performance metrics/measures for data systems are tools for helping measure data quality and establishing goals for data improvement. The National Highway Traffic Safety Administration (NHTSA) defined performance measures for timeliness, accuracy, completeness, uniformity, integration, and accessibility for each of the six core traffic safety data systems – crash, vehicle, driver, roadway, citation/adjudication, and injury surveillance. An FHWA report, Performance Metrics for Roadway Inventory Data, conducted as part of the overall MIRE MIS project, builds on those metrics, providing a detailed review of each of the metrics proposed for roadway data and suggesting modifications of and possible additions to that original NHTSA list. The measurement of data quality using metrics is of little value without follow-up efforts to correct problems identified. The report also provides suggestions for data-related business practices that can lead to the successful use of metrics and to improvements in roadway data (10).
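Two of the NHTSA-defined dimensions – completeness and timeliness – lend themselves to direct computation over an inventory file. The sketch below assumes an invented record layout and an invented set of critical fields; it is not the metric definition from the FHWA report:

```python
# Hedged sketch of two roadway data quality metrics: completeness (share of
# records with all critical fields populated) and timeliness (median days
# from field collection to database entry). Record layout is illustrative.
from datetime import date
from statistics import median

CRITICAL_FIELDS = ("route", "milepost", "surface_type")  # assumed, not official

records = [
    {"route": "SR-1", "milepost": 4.2, "surface_type": "asphalt",
     "collected": date(2023, 5, 1), "entered": date(2023, 5, 4)},
    {"route": "SR-1", "milepost": 5.0, "surface_type": None,
     "collected": date(2023, 5, 1), "entered": date(2023, 5, 20)},
]

complete = sum(all(r[f] is not None for f in CRITICAL_FIELDS) for r in records)
completeness = complete / len(records)
timeliness = median((r["entered"] - r["collected"]).days for r in records)

print(f"completeness: {completeness:.0%}; median entry lag: {timeliness} days")
```

As the text notes, values like these are of little use on their own; the point is the follow-up – identifying which records and which collection steps produced the gaps.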