
U.S. Department of Transportation
FHWA Highway Safety Programs

Chapter 7 Monitoring, Evaluation, and Feedback

Monitoring, evaluation, and feedback are methods for measuring SHSP progress, understanding its impact on safety, identifying and institutionalizing lessons learned, improving decision-making, and providing the information necessary to make course corrections and update the SHSP. Some States have already updated or begun updating their SHSP based on the experience they have acquired to date. Effective monitoring and evaluation require an engaged SHSP management team, action plan implementers who provide regular status updates, and a procedure or system to collect, organize, and display progress.

Monitor SHSP Implementation

Establishing a formalized reporting system with standard elements provides timely and consistent information to SHSP managers and stakeholders to improve decision-making and accountability. In some States, emphasis area teams provide quarterly, semiannual, or annual reports to a Statewide highway safety commission or coalition. While progress is typically reported on a periodic basis, the internal process of monitoring implementation and results is continuous and ongoing. Several States have developed tools to formalize and streamline the monitoring and reporting process. Tracking tools range from relatively simple spreadsheets showing fatalities by emphasis area to customized Web-based tools and programs identifying location-specific crashes by emphasis area. One State is developing a formalized process for updating action plans that specifies who is responsible for emphasis area updates, when updates are required, what information is provided, and how the updates will drive the SHSP decision-making process.
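The simplest tracking tool described above, a spreadsheet of fatalities by emphasis area, can be approximated with a short script. The sketch below is illustrative only; the file layout and column names (`emphasis_area`, `quarter`, `fatalities`) are hypothetical and would need to match a State's actual crash data export.

```python
import csv
from collections import defaultdict

def fatality_summary(path):
    """Total fatalities by (emphasis area, quarter) from a crash-record CSV.

    Assumes hypothetical columns: emphasis_area, quarter, fatalities.
    Real State data systems will differ; this only sketches the idea of a
    simple, repeatable tracking tool for periodic progress reports.
    """
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["emphasis_area"], row["quarter"])
            totals[key] += int(row["fatalities"])
    return dict(totals)
```

A summary like this, regenerated each reporting cycle from the same data source with the same definitions, gives emphasis area teams a consistent basis for their quarterly or annual reports.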

Evaluate SHSP Implementation Efforts

Evaluation depends on collecting baseline data reflecting the situation prior to implementation, as well as continued data collection during the implementation period and after project completion. Waiting until after the project has started to develop an evaluation plan can result in missed opportunities to collect data that are critical to evaluating the impact of a project. To avoid this, establish an evaluation plan to track progress and evaluate effectiveness as an integral part of the process. Define what constitutes “success” prior to implementation to ensure appropriate data are collected for the evaluation.

SHSP evaluations determine project effectiveness in terms of fatality and serious injury reductions. The data to collect may include:

  • Costs of safety countermeasures.
  • Benefits of safety countermeasures.
  • Incidence of crashes before and after strategy implementation.
  • Expected incidence of crashes without strategy implementation.
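The four data elements above support a basic effectiveness calculation. The sketch below shows a deliberately naive before-after comparison and benefit-cost ratio; a rigorous SHSP evaluation would use more robust methods (for example, techniques that control for regression to the mean), and the function names here are illustrative, not from any standard tool.

```python
def simple_before_after(crashes_before, crashes_after, expected_without):
    """Naive before-after comparison for a single countermeasure.

    crashes_before:  observed crashes in the baseline period
    crashes_after:   observed crashes in an equal period after implementation
    expected_without: expected crashes had the strategy not been implemented
    Returns (estimated crashes prevented, percent reduction vs. expectation).
    Note: ignores regression to the mean and traffic-volume changes, so it
    only illustrates how the listed data elements fit together.
    """
    prevented = expected_without - crashes_after
    pct = 100.0 * prevented / expected_without if expected_without else 0.0
    return prevented, pct

def benefit_cost_ratio(benefits, costs):
    """Ratio of countermeasure benefits to costs; costs must be nonzero."""
    return benefits / costs
```

For example, if 45 crashes were expected without the strategy and 30 occurred after implementation, the naive estimate is 15 crashes prevented, a roughly 33 percent reduction relative to expectation.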

While it may take several years to develop valid conclusions about the effectiveness of a complex project, preliminary judgments can often be made more quickly when based upon suitable data. These may provide an early indication of likely success or failure and enable managers to react accordingly. Some engineering and behavioral countermeasures (e.g., low-cost safety improvements, enforcement) tend to generate early or interim results that are accurate predictors of longer-term effects. Other countermeasures (e.g., graduated driver licensing) may take several years of performance monitoring and reporting cycles to begin to reveal their actual effectiveness in a particular implementation.

Develop SHSP Performance Measures

Many States develop action plans with measurable objectives and track quarterly progress on them. Performance measures or indicators are used to streamline the tracking and evaluation process by defining consistent data, metrics, and reporting methods from one period to the next. Performance measurement provides quantifiable evidence of progress and helps managers determine whether the project met its stated objectives or needs to be modified. Even “permanent” installations (e.g., rumble strips) require decisions about future maintenance investments. An evaluation plan should specify the measures that will be used to track progress in each emphasis area, and the data required to support those measures. Evaluation results should be retained to improve future estimates of effectiveness as well as to identify trends over longer time periods.

Evaluation of performance can use measures of “output” or “outcome,” and preferably will include both. Output measures indicate the level of activity or effort that was put into a particular countermeasure, for example, miles of rumble strip installed or guardrail replaced. These are appropriate to track the cost and productivity of SHSP implementation. Outcome measures are direct indicators of the effectiveness of a countermeasure in meeting the fundamental objectives of the SHSP, for example, crash rates or fatality rates. These require different types of data than output measures, and professional judgment must be exercised before concluding there is causality between the countermeasure and the outcome.
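The output/outcome distinction above can be made concrete by recording each measure's type alongside its data. This is a hypothetical structure, not a standard SHSP schema; the field names and sample values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    """One tracked measure; kind is 'output' (effort) or 'outcome' (result)."""
    name: str
    kind: str   # "output" = activity/effort; "outcome" = safety result
    unit: str
    value: float

# Illustrative measures mirroring the examples in the text above.
measures = [
    PerformanceMeasure("Rumble strip installed", "output", "miles", 120.0),
    PerformanceMeasure("Guardrail replaced", "output", "miles", 14.5),
    PerformanceMeasure("Roadway departure fatalities", "outcome",
                       "fatalities/year", 85.0),
]

def by_kind(all_measures, kind):
    """Filter measures so output and outcome can be reported separately."""
    return [m for m in all_measures if m.kind == kind]
```

Keeping the two kinds separate in the tracking system reinforces the caution in the text: output measures show what was done, while any causal claim linking a countermeasure to an outcome measure still requires professional judgment.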

Evaluate Projects and Programs

Evaluation can take place at the project level (e.g., a specific implementation of a countermeasure) or at the program level. Project-level evaluation focuses on the impacts, benefits, and cost-effectiveness of a particular project or set of projects, and therefore requires data that are specific to the project location(s). Project-level information is most useful in determining whether the appropriate countermeasures have been selected and effectively implemented. This should also inform the review of, and decisions about, strategies and countermeasures identified in the SHSP. In contrast, program-level evaluation provides managers and stakeholders with a broad picture of the efficiency and effectiveness of the SHSP implementation effort. Program-level performance measures may include evaluation of administrative aspects, such as whether the programmed projects have been implemented in a timely fashion and according to budget. Program-level evaluations can also include qualitative analysis of decision-making processes and consistency in the application of policies or procedures. Some strategies (e.g., data improvements) are best evaluated only at the program level, as their direct impact and effectiveness in reducing fatalities and serious injuries cannot be measured.

Provide Feedback to the Planning and Implementation Process

The working group meets periodically to review the SHSP, examine progress toward goals, suggest changes or modifications, and brief the leadership. By regularly reexamining its data, evaluating the effectiveness of its countermeasures and strategies, and monitoring its progress in accomplishing the SHSP goals, the working group can better determine which elements of the plan, if any, should be updated or revised. The review process is conducted at least annually, but in reality it is continuous. Treat the SHSP as a living document that evolves and progresses as goals, strategies, and safety data change. Measuring the success of the overall SHSP effort is the key to maintaining momentum and advancing implementation to higher levels.

Key Monitoring, Evaluation, and Feedback Strategies

  • Establish a timeline for reviewing and updating the SHSP, identifying key data inputs, reporting cycles, and other schedules (e.g., STIP) with which the review should be coordinated.
  • Monitor the implementation effort and issue periodic, standardized progress reports for each emphasis area.
  • Use a tracking tool, at least a spreadsheet, to organize and manage the monitoring process and to formalize reporting and sharing of information.
  • Use data-driven evaluation techniques and collect baseline data prior to implementation; consult standard data collection and analysis references as necessary to ensure credible results.
  • Define performance objectives that determine what constitutes “success” prior to countermeasure selection and implementation.
  • Select suitable performance measures that are clearly related to performance objectives to make sure the appropriate data are collected pre- and post-implementation.
  • Make sure the SHSP implementation team is familiar with safety-related performance measuring tools.
  • Use the results of monitoring and evaluation to identify opportunities to update or revise the SHSP.


Answering these questions will help stakeholders review their current SHSP monitoring, evaluation, and feedback processes and identify opportunities for improvement.

  • Does your State have procedures for monitoring and evaluating the SHSP? Who is responsible?
  • What tools do you use to assemble and analyze data? To create reports?
  • Do you utilize performance measures? Are they clearly linked to or derived from SHSP objectives?
  • Are performance measures tied to future program funding? If so, how?
  • What procedures are in place for ongoing SHSP update and revision? Who is responsible for leading the effort? Who participates?
  • What data are used to update or revise the SHSP?