This was developed by the OMB Evidence Team for informational purposes and does not represent official OMB policy.
HOW CAN PERFORMANCE MEASUREMENT AND EVALUATION WORK TOGETHER?
While often undertaken separately, collaboration between performance measurement and evaluation teams can
lead to stronger evidence-building. Ways the two can work hand in hand include:
• Performance measurement can help identify priority questions to be addressed by evaluations, informing decisions about allocating evaluation resources.
• Evaluation findings can clarify which indicators are predictive of an activity's success and should be tracked in performance measurement.
• Evaluation can provide context and potential explanations for variation over time, or across sites, revealed by performance measurement.
• When performance measures suggest that many participants in a program experience a certain outcome, evaluation can confirm (or refute) whether that outcome is directly attributable to the program by comparing it to outcomes in a control or comparison group, when possible.
• Performance measurement can show evaluators which indicators matter most to program operators and might therefore be worth considering when selecting evaluation measures.
CASE STUDY #1
A government agency that administers a large formula grant program to states reviewed its performance data and saw that it was falling short of its enrollment targets: a significant portion of individuals who were eligible for the services funded by the grants were not receiving them. This finding, drawn from the performance data, motivated the agency to implement a behavioral science-informed intervention aimed at “nudging” participants to take advantage of these services. The program ran a randomized controlled trial evaluation of this intervention to determine whether it did in fact increase uptake of services as intended. The main outcome of interest in that study was the same performance metric: the number of individuals who participated after receiving the behavioral “nudge” compared with the number who participated without having received it. In short, performance measurement processes inspired an evaluation aimed at improving a performance metric that was important to the program. Simultaneously, an ongoing impact evaluation of the overall program is examining whether individuals who received these services experienced better outcomes than a control group of individuals who did not.
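The RCT described above comes down to comparing service uptake between a randomly assigned "nudge" group and a control group. The sketch below illustrates that comparison with a standard two-proportion z-test; all counts are invented, since the source does not identify the agency, program, or data.

```python
# Hypothetical illustration of the comparison an RCT of a "nudge"
# intervention might make: uptake in a randomly assigned treatment
# group vs. uptake in a control group. All numbers are invented.
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p1 - p2, z, p_value

# Invented data: 5,000 eligible individuals per arm;
# 1,450 took up services with the nudge, 1,200 without it.
diff, z, p = two_proportion_ztest(1_450, 5_000, 1_200, 5_000)
print(f"uptake difference: {diff:.1%}, z = {z:.2f}, p = {p:.2g}")
```

A real evaluation would of course also address randomization integrity, attrition, and subgroup effects; this only shows the headline comparison.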
CASE STUDY #2
A multi-site national program had been tracking performance for over a decade, collecting data on various measures and comparing the results to goals for each measure. The performance information was used for a range of purposes, including rewarding sites, paying incentive bonuses to staff, and deciding whether to renew existing site contracts. When the program underwent a large-scale random assignment evaluation, researchers saw an opportunity to compare the performance data with the impact evaluation data by analyzing whether participants at sites that consistently met performance targets were more likely to experience better outcomes than a carefully selected control group that did not participate in the program. This independent study revealed only a weak connection between how sites were doing on the performance measures and the extent to which their participants were faring better than the control group. Sites that appeared to be top performers based on their performance data did not always have the biggest impacts on participants, and sites that had reported lower performance did not necessarily have smaller impacts on participants’ outcomes. The researchers were also able to use the data to identify some possible explanations for this disconnect, such as the fact that the higher-performing sites were, on average, serving higher-ability participants from the outset. This case demonstrates how evaluation can serve as a crucial complement to performance data.