Value of Science 107: The VALUABLES Impact Assessment Framework, Step 1: Identifying Improved Information and Its Alternatives

Identifying the decisionmaker, the decision context, and alternative sources of information is a crucial first step in an effective impact assessment.

Date

Aug. 26, 2021

Authors

Yusuke Kuwayama and Sarah Aldy

Publication

Explainer

Reading time

5 minutes

As discussed in the Value of Science 106 explainer, Earth scientists can quantify and communicate the socioeconomic contributions of improved data from satellites using the three-step VALUABLES impact assessment framework (Figure 1). At its core, the framework compares two states of the world: one in which improved information is available for use in a decisionmaking process, and one in which the information is not available. A natural starting point for this type of assessment—represented in the first row of the framework—is to identify the improved information to be evaluated (the blue column) along with the information that would be used in its absence to make the decision (the red column).

Figure 1. The VALUABLES Impact Assessment Framework


Identifying the improved information and its alternatives is a crucial first step in every impact assessment. It is impossible to conceptualize the value of a piece of information without knowing the alternative information that the decisionmaker would use in the absence of the improved information.

To illustrate this point, let’s take an example from everyday life. Imagine you are considering using GPS navigation with live traffic information to get to your destination, and you want a general idea of what the GPS information might be worth to you. Turning to the VALUABLES framework, you would write “GPS navigation with live traffic information” in the first row of the blue column.

Suppose that, in the absence of GPS navigation with live traffic information, your best alternative is GPS with no live traffic information. You would write “GPS with no live traffic information” in the first row of the red column. But what if, instead, your best alternative is no GPS navigation at all, and your decision about which route to take would be based on past experience driving around town? In this case, you would write “Past experience” in the first row of the red column.

This example underscores how the value of the information to be evaluated can change, depending on its alternative. The value of GPS with live traffic information is presumably much greater if your alternative source of information is past experience, rather than if your alternative information is GPS with no live traffic information—and even more so if you are driving to an unfamiliar destination.
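The GPS example boils down to a simple comparison: the value of a piece of information is the improvement in the outcome relative to what the best alternative would have delivered. A minimal sketch of that arithmetic follows; all the travel times below are hypothetical numbers invented for illustration, not figures from this explainer.

```python
# Hypothetical expected travel times (in minutes) under each information
# source. These numbers are illustrative assumptions only.
expected_travel_time = {
    "gps_with_live_traffic": 25,
    "gps_without_live_traffic": 30,
    "past_experience": 40,
}

def value_of_information(improved, alternative, outcomes=expected_travel_time):
    """Value of the improved information = outcome under the alternative
    minus outcome under the improved information (minutes saved)."""
    return outcomes[alternative] - outcomes[improved]

# The same improved information has a different value depending on
# which alternative the decisionmaker would otherwise rely on.
print(value_of_information("gps_with_live_traffic", "gps_without_live_traffic"))  # 5
print(value_of_information("gps_with_live_traffic", "past_experience"))           # 15
```

Note that nothing about the improved information itself changed between the two calls; only the baseline did, which is why the framework insists on pinning down the alternative before any valuation begins.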

So, identifying the information to be evaluated and its alternatives is a crucial first step, and it requires researchers to answer two questions: Who is the decisionmaker, and what is the decision context? Because scientists who produce new information don’t always know who uses it or how it is used, this first step often requires some investigation and outreach.

Identifying the Decisionmaker and the Decision Context

We can break down the first of these questions depending on the nature of the study: In an ex post (retrospective) study, which decisionmaker used the information that’s being evaluated? In an ex ante (forward-looking) study, which decisionmaker will be using the information that’s being evaluated?

This step may be straightforward, or it could require surveying or interviewing the users of the information to determine who has used or will be using it to make a decision. Recall our water resource forecasting example from the Value of Science 105 explainer. We had noted that the statement, “An improved forecast of x can help inform water management,” is too vague for the purpose of assessing the value of information. On the other hand, “An improved harmful algal bloom forecast will help managers of recreational areas along Utah Lake issue more timely warnings to visitors, so they are not exposed to toxins in the algal blooms” is better. It tells us very specifically that the decisionmakers are the managers of recreational areas along Utah Lake.

Once the researcher has identified the decisionmaker, she can begin a conversation to better understand the decision context. Usually, a decision involves choosing one of several possible actions. Once again, the more specific we can be about these possible actions, the more straightforward the impact assessment will be to conduct. Returning to the Utah Lake example, we know from the more detailed statement that the decision context is one in which managers are deciding when to issue advisories that warn visitors of harmful algal blooms.

Identifying Alternative Sources of Information

Once the decisionmaker and decision context are clearly identified, we can start to define the types of information that the decisionmakers would use if the improved data weren’t available. Depending on the decision context, this step may take a little or a lot of investigation—and is usually best done through conversations with decisionmakers. Sometimes, the information that is used to make a decision is documented in written rules or guidelines governing the decision; these documents also can serve as evidence of what alternative information might have been available.

For ex ante (forward-looking) impact assessments, in which the improved information does not exist or is not yet used in decisions, the researcher might be able to determine alternative sources of information by asking decisionmakers what information they are currently using. For ex post (retrospective) impact assessments, in which the improved information is already being used, the researcher might ask the decisionmaker what their second-best source of information is, or what source of information they were using before the improved information became available. Once this alternative information is clearly defined, the researcher should be able to fill in the first row of the framework.

Putting It All Together

In the case of harmful algal blooms around Utah Lake, Stroming et al. (2020) were conducting an ex post (retrospective) study to examine the effect of improved satellite data on recreational advisory management. The decisionmakers—the lake managers—reported that they used the satellite information on harmful algal blooms together with monthly field tests, lab tests, and visitor reports. All these sources of information together comprise the improved information. Before incorporating the satellite data, the managers used just the latter three data sources: field tests, lab tests, and visitor reports. These latter three data sources comprise the alternative information. Figure 2 shows how the study authors filled in the first row of the VALUABLES impact assessment framework.

Figure 2. VALUABLES Impact Assessment Framework: An Example for Satellite Data on Harmful Algal Blooms versus Testing and User Reports


Recall from the Value of Science 105 explainer that, to measure how improved information leads to better decisions and societal outcomes, we need to identify some “deltas”—the changes in decisions and associated changes in outcomes that are enabled by the improved information. By completing the first row of the framework, we’ve taken the first step toward identifying these “deltas.” In essence, we are defining the change in information experienced by the decisionmaker—a change from the alternative information to the improved information—that triggers changes in decisions made by the decisionmaker and changes in societal outcomes tied to the decision context. In the next several explainers, we will learn how to use the framework to understand and quantify these changes in decisions and outcomes.
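The chain of "deltas" described above can be sketched in a few lines. The structure (change in information → change in decisions → change in outcomes) comes from the framework; every number below is a hypothetical assumption made up for illustration, not a result from Stroming et al. (2020).

```python
# A toy sketch of the framework's "deltas" for an advisory decision like
# the Utah Lake example. All values are illustrative assumptions.

# Change in information: the assumed lag (in days) between bloom onset and
# an advisory, under the alternative vs. the improved information.
alternative_info = {"days_to_advisory": 14}  # assumed: monthly tests and reports only
improved_info = {"days_to_advisory": 3}      # assumed: satellite data added

# Delta in decisions: the advisory is issued this many days sooner.
delta_decision = (alternative_info["days_to_advisory"]
                  - improved_info["days_to_advisory"])

# Delta in outcomes: a crude stand-in for the societal outcome, here
# "avoided visitor exposure-days," assuming a hypothetical 500 visitors
# per day during the unwarned period.
visitors_per_day = 500
delta_outcome = delta_decision * visitors_per_day

print(delta_decision)  # 11 (days earlier)
print(delta_outcome)   # 5500 (avoided exposure-days)
```

Later steps of the framework are about measuring these two deltas carefully rather than assuming them, which is the subject of the next explainers.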
