
Public Roads - Spring 2019

Date: Spring 2019
Issue No: Vol. 83 No. 1
Publication Number: FHWA-HRT-19-003

Measuring Maturity

by Nate Deshmukh Towery, David Kuehn, and Kirsten Van Fossen

Technology readiness level assessments can help transportation agencies enhance program management and project results. Here's how.

Researchers demonstrate FHWA's in situ scour testing device in Nebraska. A technology readiness level assessment helped the research team determine the next steps for the project.

New technologies to solve transportation challenges are emerging constantly, and existing technologies can be applied in new ways. For example, early government research during the 1970s looked at ways that radio-frequency identification could be used in the movement of hazardous materials or agricultural products. Later, researchers realized that this technology could be applied to transponders to solve the problem of slowed traffic and congestion at highway toll booths. In the 1980s, private companies ran demonstrations with test vehicles before early adopters made toll transponders available for the first time for public use in 1989. The technology has continued to improve, making way for completely automated toll lanes and highways with no booths at all.

State and local agencies need to be able to assess new technologies for deployment or further development. But how can agencies make the best, most informed decisions about which technologies will be most beneficial for reaching their goals? Understanding the maturity of a technology is critical to making investment and policy decisions for transportation research. However, assessing maturity can be complex, costly, and time-consuming.

Determining the technology readiness level (TRL) is a way for everyone involved in a project to use a defined scale to assess a technology's maturity. TRL assessments provide guidance to simplify the process, determine the maturity of technologies, and identify next steps in the research process. The Federal Highway Administration has applied TRL assessments to a broad range of research across industries and project types. Using TRLs offers several benefits for transportation researchers and program managers, including improved project communications, project outcomes, and research program management.

The TRL Scale

TRLs are formal metrics that support assessments of the maturity of a particular technology and create a framework for comparing levels of maturity between different technologies. The National Aeronautics and Space Administration (NASA) implemented the initial scale in 1974 to assist in the selection of technologies for further development and mission deployment in the space program. However, the appeal of TRLs as a structured evaluation tool has extended their use to the fields of energy, software, and manufacturing, among others. The TRL scale is the predominant tool for assessing technology maturity across disciplines and industries.

Over time, the scale has been modified and refined. The current TRL assessment process ranks a technology using a nine-point scale grouped into four categories of progression from concept to implementation: basic research, applied research, development, and implementation.

The TRL scale assesses technology in terms of certain characteristics, such as whether system performance metrics have been established, end-user requirements documented, or a prototype tested in a realistic and relevant environment outside the laboratory. Researchers measure these characteristics through completed tests appropriate to the specific technology.

The scale considers two aspects of conducting the tests. The first is how complete the technology was when it was tested: Was it a paper-and-pen concept, a system of equations, a component, a subsystem, or the complete system? The second is the representativeness of the test environment: Was it a computer simulation, a controlled laboratory experiment, a demonstration at a proving ground, or a real-world test? The second aspect also considers how similar the tester was to the ultimate technology user: Was the tester the developer of the technology, another expert in the field, or a user with no more specific knowledge than the typical user?

Each level of the scale offers requirements framed as questions to help determine where a particular technology should fall. By focusing on completed tests and a typical progression of testing toward technology adoption, the TRL scale facilitates a structured approach for indicating immediate next steps for a research project. Considering and debating the questions that make up the TRL scale guides all parties involved in TRL assessments to a shared understanding of the technical state of the project. The discussion involved in assigning a readiness level to a project can uncover technical gaps and questions that point toward next steps in the technology's development.

The nine technology readiness levels, grouped by category, with their descriptions and requirement questions:

Basic Research

TRL 1: Basic principles and research
  • Do basic scientific principles support the concept?
  • Has the technology development methodology or approach been developed?

TRL 2: Application formulated
  • Are potential system applications identified?
  • Are system components and the user interface at least partly described?
  • Do preliminary analyses or experiments confirm that the application might meet the user need?

TRL 3: Proof of concept
  • Are system performance metrics established?
  • Is system feasibility fully established?
  • Do experiments or modeling and simulation validate performance predictions of system capability?
  • Does the technology address a need or introduce an innovation in the field of transportation?

Applied Research

TRL 4: Components validated in laboratory environment
  • Are end-user requirements documented?
  • Does a plausible draft integration plan exist, and is component compatibility demonstrated?
  • Were individual components successfully tested in a laboratory environment (a fully controlled test environment where a limited number of critical functions are tested)?

TRL 5: Integrated components demonstrated in laboratory environment
  • Are external and internal system interfaces documented?
  • Are target and minimum operational requirements developed?
  • Is component integration demonstrated in a laboratory environment (fully controlled setting)?

Development

TRL 6: Prototype demonstrated in relevant environment
  • Is the operational environment (that is, user community, physical environment, and input data characteristics, as appropriate) fully known?
  • Was the prototype tested in a realistic and relevant environment outside the laboratory?
  • Does the prototype satisfy all operational requirements when confronted with realistic problems?

TRL 7: Prototype demonstrated in operational environment
  • Are available components representative of production components?
  • Is the fully integrated prototype demonstrated in an operational environment (real-world conditions, including the user community)?
  • Are all interfaces tested individually under stressed and anomalous conditions?

TRL 8: Technology proven in operational environment
  • Are all system components form-, fit-, and function-compatible with each other and with the operational environment?
  • Is the technology proven in an operational environment (that is, does it meet target performance measures)?
  • Was a rigorous test and evaluation process completed successfully?
  • Does the technology meet its stated purpose and functionality as designed?

Implementation

TRL 9: Technology refined and adopted
  • Is the technology deployed in its intended operational environment?
  • Is information about the technology disseminated to the user community?
  • Is the technology adopted by the user community?
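
For program offices that track many projects, the scale and its requirement questions also lend themselves to a simple software representation. The following minimal sketch is illustrative only: it is not FHWA code, only the first two levels from the table above are filled in, and the cumulative rule that every lower level must also be satisfied is an assumption made for the example.

```python
# Minimal sketch of the TRL scale as a data structure -- illustrative only,
# not FHWA code. Only levels 1 and 2 are encoded; the remaining levels
# follow the same pattern using the requirement questions in the table above.
from dataclasses import dataclass


@dataclass(frozen=True)
class TrlLevel:
    level: int
    category: str        # Basic Research, Applied Research, Development, or Implementation
    description: str
    requirements: tuple  # requirement questions for this level


TRL_SCALE = (
    TrlLevel(1, "Basic Research", "Basic principles and research", (
        "Do basic scientific principles support the concept?",
        "Has the technology development methodology or approach been developed?",
    )),
    TrlLevel(2, "Basic Research", "Application formulated", (
        "Are potential system applications identified?",
        "Are system components and the user interface at least partly described?",
        "Do preliminary analyses or experiments confirm that the application might meet the user need?",
    )),
    # ... TRL 3 through TRL 9 continue in the same way ...
)


def assigned_trl(level_satisfied: dict) -> int:
    """Return the highest TRL whose requirements -- and those of every lower
    level -- the panel agrees are satisfied (an assumed cumulative rule)."""
    achieved = 0
    for entry in TRL_SCALE:
        if level_satisfied.get(entry.level, False):
            achieved = entry.level
        else:
            break
    return achieved
```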

The EAR Program and TRLs

FHWA's Exploratory Advanced Research (EAR) Program focuses on high-risk, high-reward research that fills the gap between basic and applied research. It also supports the development of transformative research tools with potential highway benefits.

Capturing the core of a research project succinctly while offering tangible next steps is a difficult undertaking. The EAR Program needed a system to describe the maturity of highway research projects. In 2014, EAR Program researchers began using TRL assessments as a tool to improve project outcomes, project communications, and program management.

By codifying completed tests and proposing a sequence of future research and development activities, a TRL assessment can support strong project outcomes. The assessment results usefully indicate what sort of development may be necessary to mature the technology to the next readiness level, although the level of effort required will vary across TRLs and projects.

TRLs improve project communications by providing a common language across industries and disciplines. The structured process used to assign a TRL to a technology helps to create a common understanding, rather than relying on terms such as “market ready” or “deployment ready,” which may mean different things to different audiences. This common language for discussing technology readiness is particularly helpful when handing off technology between different groups involved in development, or for communicating the technological readiness of a project to decisionmakers who are determining where and how to invest research funding.

Research program management also can benefit from introducing TRLs. Some agencies choose to use TRLs as a decision gate, requiring that a particular TRL must be achieved to advance the technology to the next stage of funding or implementation. Alternatively, project teams can use TRLs as an informal check-in to bring experts and stakeholders together to discuss appropriate next steps.
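
As a hedged illustration of the decision-gate idea, the check can be as simple as comparing the assigned level to the level required for the next stage. The threshold below is hypothetical, not an FHWA or agency policy value.

```python
# Hypothetical decision-gate check -- the required level is illustrative only.
REQUIRED_TRL_FOR_NEXT_STAGE = 6


def passes_gate(assigned_trl: int, required_trl: int = REQUIRED_TRL_FOR_NEXT_STAGE) -> bool:
    """Advance a project to the next funding or implementation stage only
    when its assessed TRL meets or exceeds the required level."""
    return assigned_trl >= required_trl
```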

The cross-disciplinary nature of the assessment is useful as a portfolio tool for larger research programs. Research program managers can use TRLs to understand if investment is occurring at the intended stages of development, and to assess the general level of progress.

Preparing for and Conducting the Assessment

When preparing for a TRL assessment, those involved should consider several key elements. Project leaders must first agree that the TRL assessment is an appropriate tool for their purposes. Preparation then consists of four major components: selecting panelists, establishing clear goals, formalizing timing and location, and creating materials.

Convening a well-rounded panel of experts to assess the maturity of a technology is essential to the success of a TRL assessment. For most research projects, a panel of four to six stakeholders, researchers, and subject matter experts provides an effective balance. Panelists should at a minimum be knowledgeable about the technology, the potential users of the technology, or the application environment.

Goal setting includes defining why the assessment is being conducted and exactly what is being evaluated. For example, TRL assessment organizers may decide to conduct assessments separately for different components of a project.

After the panel is selected and the goals are determined, the team can set the timing and location of the assessment and prepare and distribute the materials. A thorough materials packet should include all relevant background information, studies, and test results. These should be provided to the panel with plenty of time for review prior to the assessment meeting.

The TRL assessment itself is straightforward. The typical framework includes a brief presentation of the technology by the project's principal investigator, a period of deliberation by the panel, and a discussion with the principal investigator about the results. The last step is an important opportunity for the principal investigator to learn the panel's thoughts on the technology's current maturity and how it can be advanced.
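
One hedged way to picture the output of this process is as a simple record of who assessed the technology, why, what level was assigned, and what should happen next. The field names and example values below are hypothetical and are not drawn from the FHWA guidebook.

```python
# Hypothetical record of a completed TRL assessment -- field names and values
# are illustrative and not taken from the FHWA guidebook.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TrlAssessmentRecord:
    project: str
    panelists: List[str]        # typically four to six stakeholders, researchers, and experts
    goal: str                   # why the assessment is being conducted and what is evaluated
    assigned_trl: int           # level agreed on during panel deliberation
    next_steps: List[str] = field(default_factory=list)  # suggested actions toward the next level


record = TrlAssessmentRecord(
    project="Hypothetical sensor prototype",
    panelists=["Stakeholder A", "Researcher B", "Subject matter expert C", "End user D"],
    goal="Decide whether the prototype is ready for testing outside the laboratory",
    assigned_trl=5,
    next_steps=["Document the operational environment", "Plan a field demonstration"],
)
```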


To help with the preparation and assessment process, FHWA published the Technology Readiness Level Guidebook (FHWA-HRT-17-047) in September 2017. The guidebook provides the necessary information for conducting an assessment, offers background on the TRL scale, walks through every aspect of preparing for and conducting a TRL assessment, and provides helpful tools and tips throughout.

Using Assessment Results

To maximize the value of its research and efficiently communicate results, the EAR Program uses TRL assessments along with other tools to help identify research products for further support and audiences that would be interested in the results. TRL assessments also provide a useful foundation for other research planning and evaluation tools, such as logic models and mind maps.

TRL assessments can help project teams identify stakeholders who would be interested in research results at a given stage of maturity. For example, practitioners likely will be more interested in technologies that can be piloted in real-world environments than in basic research without a clearly defined application.

Still, the TRL scale has limits. It describes technology maturity, which is only one factor for continued investment in research and development. Others to consider include the benefits of a technology over existing technologies, development costs, and risks from continued investment. Accordingly, it is important to complement a TRL assessment with other methods of assessing the potential of new technologies, such as market and barrier analyses, determination of level of effort, and logic models.

Documenting results through reports, peer-reviewed papers, and professional presentations is important. In addition, FHWA uses workshops, demonstrations, training, and other communications activities to transfer results from one audience to another. In-person contact and demonstrations give stakeholders an opportunity to see and touch the actual work, ask questions, and interact with the research team, which encourages continued commitment to the work and improved knowledge transfer.

TRL assessments are a valuable tool for a wide range of highway-related technologies and research. Following are four examples of projects that completed TRL assessments and the insights generated through the process.

Web-based Wildlife Observation

The Road Ecology Center at the University of California, Davis (UC Davis) developed the Wildlife Observer Network, a Web-based system that detects wildlife and captures and transmits images without the need for manual operation of the camera. Remote operation minimizes the required time and personnel resources. Potential users include wildlife camera operators working with State and local transportation agencies.

A motion-activated camera in UC Davis' Wildlife Observer Network captured this image of a gray fox, wirelessly transmitted it to a database, and tagged it with the time and location.

The technology is a system of components that includes hardware (such as cameras), a Web-based system (online project management platform, database), and people (camera operators). Motion-activated and cell- or Wi-Fi-enabled cameras detect, capture, and automatically upload images of animal sightings to the Wildlife Observer Network's database. The system automatically tags the images with location and time information. Camera operators then analyze the images to identify the animal and discard false positives.
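
A toy sketch of this workflow appears below. It is not the Wildlife Observer Network's actual schema or code; it only illustrates the described sequence of an automatically tagged upload followed by an operator's review.

```python
# Toy sketch of the workflow described above -- not the Wildlife Observer
# Network's actual schema or code. A camera upload is tagged automatically
# with time and location; a human operator later labels the species or
# discards the image as a false positive.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Sighting:
    image_path: str
    camera_id: str
    latitude: float
    longitude: float
    captured_at: datetime          # tagged automatically on upload
    species: Optional[str] = None  # filled in by the camera operator
    false_positive: bool = False   # set when no animal is actually present


def review(sighting: Sighting, label: Optional[str]) -> Sighting:
    """Operator step: record the identified animal or mark a false positive."""
    if label is None:
        sighting.false_positive = True
    else:
        sighting.species = label
    return sighting
```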

The project team completed a TRL assessment of the wireless technology in August 2016. A meeting at the National Highway Institute convened both in-person and remote attendees, including principal investigator Dr. Fraser Shilling of UC Davis, staff from the U.S. Department of Transportation, and representatives from four State departments of transportation. The State DOT representatives served as the TRL assessment panel and provided the perspectives of both technology experts and potential early adopters.

The panel determined the technology to be at level 6, the earliest stage of the development category in the TRL scale. As next steps to mature the project, the panel suggested that the team clearly define performance metrics (such as adoption rates, time savings, and website usability), seek user feedback through a virtual focus group, and integrate the feedback from the focus group into future versions of the system. The TRL assessment also provided the developers with useful feedback regarding the need to explore image transmission methods that lead to higher quality images, enabling improved animal identification.

“The TRL assessment was useful because it was a structured critique of the system [and] approach we have developed,” says Shilling. “This allowed both a detailed self-assessment of specific aspects of the technology and an idea from practitioners of where we are in development and how far we have left to go.”

Minnesota's HIL Test Bed

The University of Minnesota conducted research with a hardware-in-the-loop (HIL) test bed. HIL technology enables a system to have physical hardware components that interact with simulated components in a simulated environment. The goal is to create an environment to safely test scenarios that would be too dangerous or costly to pilot in the real world.

The University of Minnesota's HIL test bed uses a laboratory powertrain research platform--a real engine, an engine-loading device (hydrostatic dynamometer), and a virtual powertrain model to represent a vehicle--connected remotely to a traffic simulator. The HIL test bed captures actual fuel and emissions measurements, which researchers cannot calculate precisely using fuel and emission maps in simulations.
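
The sketch below is a greatly simplified, hypothetical rendering of one hardware-in-the-loop cycle; it is not the University of Minnesota test bed's software, and the object names and methods are invented for illustration. It only shows the general pattern the paragraph describes: the simulator supplies a driving target, the virtual powertrain model converts it into engine commands, the real engine and dynamometer execute them, and measured fuel and emissions are logged.

```python
# Simplified, hypothetical hardware-in-the-loop cycle -- illustrative only,
# not the University of Minnesota test bed's software.
def run_hil_step(traffic_sim, powertrain_model, engine, dyno, log):
    """Execute one time step: simulated traffic drives real engine hardware."""
    vehicle_speed = traffic_sim.next_speed_target()              # from the remote traffic simulator
    engine_speed, engine_torque = powertrain_model.to_engine_command(vehicle_speed)
    engine.set_speed(engine_speed)                               # real engine hardware
    dyno.apply_load(engine_torque)                               # hydrostatic dynamometer
    log.append({
        "vehicle_speed": vehicle_speed,
        "fuel_rate": engine.measure_fuel_rate(),                 # measured, not simulated
        "emissions": engine.measure_emissions(),
    })
```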

In May 2017, a TRL assessment for the HIL test bed brought the developers together with representatives from the U.S. Department of Energy's Advanced Research Projects Agency-Energy and National Renewable Energy Laboratory, the Minnesota Department of Transportation, and other agencies. After considering the completeness of the technology and the representativeness of the test environment, the panel debated whether to assign the HIL test bed a TRL 5 or TRL 6. While the prototype appeared to satisfy all operational requirements, panel members were uncertain if factors such as grade, weight, and classification of the simulated vehicles were truly representative of real-world use.

Shown here is a gasoline engine, part of the University of Minnesota's HIL test bed.

The debate over the level proved helpful to the researchers in identifying next steps and advancing development. “We were quite encouraged by the high TRL assessment of the HIL system,” says Zongxuan Sun, director of the Center for Compact and Efficient Fluid Power at the University of Minnesota. “One area we felt will further enhance the HIL system is to compare [it] directly with actual vehicle tests. With the support from FHWA, we were able to install a new engine in the HIL system that is the same as the engine used in the fleet vehicles at FHWA.”

In Situ Scour Testing Device

Traditionally, bridge engineers and researchers have found it difficult to accurately estimate the erosion of fine-grained, cohesive soils around bridge foundations. This erosion, called scour, impacts the structural integrity of a bridge. Established methods use empirical models that assume uniformly graded, noncohesive sands and represent worst-case conditions. However, bridge engineers and researchers consider this approach overly conservative, and it may lead to design and maintenance practices that are potentially unnecessary and resource intensive.

Researchers demonstrate the in situ scour testing device in Michigan.

To address these concerns, a team of researchers at FHWA has developed an in situ scour testing device, which works in place (in situ) in conjunction with a standard geotechnical drill rig to measure the erosion resistance of fine-grained cohesive soils in terms of erosion rate and critical soil resistance. Instead of conducting a typical geotechnical soil investigation, researchers lower the erosion head of the in situ device into the borehole to conduct the erosion test.
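
A common way to relate the quantities the device measures is the linear excess shear stress model of cohesive-soil erosion, sketched below. This is illustrative only; the article does not state which erosion model the FHWA device or its analysis actually uses.

```python
# Linear excess shear stress model of cohesive-soil erosion -- a common
# formulation, shown for illustration; not confirmed as the model used by
# the FHWA in situ scour testing device.
def erosion_rate(shear_stress_pa: float,
                 critical_shear_stress_pa: float,
                 erodibility_coeff: float) -> float:
    """Erosion rate (e.g., mm/hr) for a given applied bed shear stress.

    Below the critical shear stress (the soil's resistance threshold) no
    erosion occurs; above it, the rate grows in proportion to the excess.
    """
    excess = shear_stress_pa - critical_shear_stress_pa
    return erodibility_coeff * excess if excess > 0.0 else 0.0
```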

The TRL assessment for the device, held in November 2017, convened a panel of FHWA staff who represented potential users. As the panelists learned about how the testing device would be used in practice, they raised concerns about how the device's output compares to the output of existing laboratory-based testing methods. Panelists suggested that the developers clarify what the output means in comparison to the output from existing tests that scour analysts are trained to interpret.

While the TRL assessment's primary value is enabling the technology's developers to accelerate their progress, the in situ device's assessment also provided a positive experience for the panel members. Panelists learned not only about this cutting-edge technology, but also about the TRL assessment process itself. They noted that the experience gave them the confidence to carry out TRL assessments on their own research and other projects in their labs, as well as to promote the use of TRL assessments as a valuable exercise within their organizations.

“The standard format of the TRL [scale] offers a commonsense understanding of the requirements for the progression of technology maturity,” says panel member Mike Adams from FHWA's Office of Infrastructure Research and Development. “The TRL assessment establishes the level of maturity, and provides unbiased feedback to the researchers. It is a helpful process to streamline the advancement of technology.”

SeeBridge: Semantic Enrichment Engine

The SeeBridge project is one of nine research projects in the Infravation Program, funded by FHWA in partnership with 10 other countries and the European Commission. The Infravation Program is a cooperative research initiative between FHWA and other national road administrations that enables U.S. entities such as academic institutions, State DOTs, and businesses to participate in the research along with entities from other countries. The SeeBridge project sought to develop a powerful and comprehensive approach to revolutionize surveying and inspection of bridges.

With SeeBridge, researchers collect structural information on bridges using terrestrial laser scanning, or laser scanners mounted on vehicles with high-resolution cameras and video recorders. The technology produces high-density graphic representations of bridges, called point clouds. Analysts then develop 3D models of the bridges from the point clouds using advanced algorithms. To test the technology, the project team scanned bridges in Georgia, working with the Georgia Department of Transportation, as well as bridges in England, Germany, and Israel.
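
The sketch below is a toy stand-in for the project's reconstruction algorithms, which are not described in detail here. It assumes the scan is available as an array of XYZ points and uses generic density-based clustering merely to illustrate the idea of grouping scanned points into candidate structural elements for downstream modeling; the parameter values are arbitrary.

```python
# Toy stand-in for SeeBridge's reconstruction pipeline -- not the project's
# algorithms. It only illustrates grouping a scanned point cloud into
# candidate structural elements and summarizing each element's extent.
import numpy as np
from sklearn.cluster import DBSCAN


def segment_elements(points_xyz: np.ndarray, eps: float = 0.25, min_points: int = 50):
    """Cluster an (N, 3) point cloud into candidate bridge elements."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points_xyz)
    elements = []
    for label in sorted(set(labels)):
        if label == -1:                     # noise points left unassigned
            continue
        cluster = points_xyz[labels == label]
        elements.append({
            "element_id": int(label),
            "point_count": len(cluster),
            "bbox_min": cluster.min(axis=0).tolist(),   # axis-aligned bounding box
            "bbox_max": cluster.max(axis=0).tolist(),
        })
    return elements
```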

The TRL assessment of SeeBridge, held in October 2017, recommended that researchers obtain more feedback on integrating the system with agencies' bridge inspection records and management systems. Specifically, the assessment team recommended extensions to current asset management systems to include historic records in the SeeBridge system at the structural element level. The panel also recommended obtaining more data to test the ability to recognize exceptions to common structural elements and further development of the 3D reconstruction steps.

This image of a bridge is created via photogrammetry to permit 3D analysis of the structure for virtual bridge inspection.

Since the TRL assessment, the research team has continued to obtain feedback from asset owners and examine how this new technology can integrate with current records and systems. Researchers are planning to pilot the technology on more structures. With increased experience and more data, the team will be able to make the system more robust and advance the technology's maturity.

Expanding the Use of TRLs

With research and development taking place across many different organizations and programs--from Federal agencies to State DOTs to industry--the use of TRLs can smooth the transition and communication of results from one program to another. However, more people need to become familiar with the process to understand its benefits and adopt TRL assessments as a tool. To this end, FHWA worked with the Volpe Center to develop a webinar providing an introduction to TRLs (slides available at www.trb.org/Calendar/Blurbs/173937.aspx) and to produce the guidebook on conducting TRL assessments (available at www.fhwa.dot.gov/publications/research/ear/17047/index.cfm).

New technology development in the highway transportation system--from mobile sensor data to increased vehicle automation--is increasing the interdependence of research and programmatic areas from infrastructure to safety to traffic operations. Hari Kalla, Associate Administrator with FHWA's Office of Research, Development, and Technology, says, “Early consideration of the deployment environment and common understanding of technical maturity will grow in importance as new technologies come into the highway system.”


Nate Deshmukh Towery is a technology policy analyst at the Volpe Center. He has a B.A. in history and science from Harvard University and a Ph.D. from the Massachusetts Institute of Technology.

David Kuehn is the program manager for the FHWA EAR Program and has a master's degree in public administration from the University of Southern California.

Kirsten Van Fossen is an engineer at the Volpe Center, researching innovations with potential sustainability benefits. She has a B.S. in environmental engineering from Harvard University and a Ph.D. from the University of Cambridge.

For more information, see FHWA's Exploratory Advanced Research Program website at https://highways.dot.gov/research/exploratory-advanced-research or contact David Kuehn at 202–493–3414 or david.kuehn@dot.gov.