
Evaluating place-based programs

Author: Austin Nichols

Posted: July 3rd, 2013


Random assignment studies are the gold standard for judging whether an intervention works, but that doesn’t mean they’re always the best tool for the job. When it comes to evaluating place-based programs—those that aim for comprehensive community-wide changes—random assignment is typically the wrong approach to take.

In random assignment studies, people are randomly divided into two groups. One group receives the intervention being tested, whether a new drug or a job training program, and the other does not. Researchers then compare how the two groups' outcomes differ. In both clinical studies and social science research, an experiment that randomly assigns treatment is the preferred approach because, given a few assumptions, it is unbiased: it gets the answer right on average. Without random assignment, giving the drug to the healthiest patients or the job training to those with the lowest current earnings can bias us toward finding large effects where, in truth, the effects are small or nonexistent. But those assumptions are not always justified.
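The selection problem can be illustrated with a small simulation (all numbers are made up for illustration): a job training program with no true effect looks strongly harmful when it is targeted at the lowest earners, while random assignment recovers an estimate near the true effect of zero.

```python
import random

random.seed(0)
n = 10_000

# Baseline earnings vary across people; the training has NO true effect,
# so post-program earnings are just baseline plus noise.
baseline = [random.gauss(30_000, 8_000) for _ in range(n)]
outcome = [b + random.gauss(0, 2_000) for b in baseline]

# Targeted assignment: train the half with the lowest baseline earnings.
ranked = sorted(range(n), key=lambda i: baseline[i])
targeted = set(ranked[: n // 2])
naive = (sum(outcome[i] for i in targeted) / (n // 2)
         - sum(outcome[i] for i in range(n) if i not in targeted) / (n - n // 2))

# Random assignment: a coin flip decides who is trained.
coin = [random.random() < 0.5 for _ in range(n)]
treated = [outcome[i] for i in range(n) if coin[i]]
control = [outcome[i] for i in range(n) if not coin[i]]
randomized = sum(treated) / len(treated) - sum(control) / len(control)

print(f"targeted-assignment estimate: {naive:,.0f}")      # large negative bias
print(f"random-assignment estimate:   {randomized:,.0f}")  # near zero
```

The targeted comparison attributes the trainees' low baseline earnings to the program itself; randomization breaks that link.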

Place-based programs, such as Promise Neighborhoods from the Department of Education and Choice Neighborhoods from the Department of Housing and Urban Development, aim to produce change by affecting the whole community, not just the individuals touched by a funded program. Part of that approach is to saturate the target, providing services to a large portion of the population so that even those not directly affected probably know someone who is, and social networks transmit the effect across the whole community. These spillovers mean that the statistical framework underlying random assignment does not apply.

We could randomly assign communities to get a program or not, but place-based programs are not a simple prescription formulated the same way everywhere. They grow organically in the communities where they are implemented, drawing different interventions from a broad menu of services, with each intervention tailored to conditions on the ground. They are also continually improved using data in an ongoing development effort, as described by Sue Popkin, and treatments may adapt to individual circumstances with constant feedback from outcome data. Another common element is a form of case management in which services are coordinated across domains, so that individuals do not fall through the cracks.

How should we evaluate place-based programs?

Spillover effects on people not receiving services, plus continual improvement of services and place-specific designs, make a simple random assignment design the wrong choice. But there are methods that can credibly evaluate place-based interventions. The crucial part is defining exactly what intervention is being examined, and then using data from other communities to estimate the counterfactual outcome: What would have happened without that intervention?
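One common way to use data from other communities to estimate the counterfactual (not named in this post, but consistent with its logic) is a difference-in-differences comparison: assume the treated community would have followed the comparison community's trend, and take the deviation from that trend as the program effect. A minimal sketch with hypothetical outcome rates:

```python
# Hypothetical mean outcomes (e.g., employment rate) for a treated community
# and a comparison community, measured before and after the program begins.
treated_pre, treated_post = 0.55, 0.63
comparison_pre, comparison_post = 0.54, 0.58

# Counterfactual: the treated community follows the comparison trend.
counterfactual_post = treated_pre + (comparison_post - comparison_pre)

# Program effect is the treated community's deviation from that trend.
effect = treated_post - counterfactual_post
print(f"estimated program effect: {effect:.2f}")
```

The credibility of such an estimate rests on the parallel-trends assumption and, as the text stresses, on good data about what interventions the comparison communities adopted on their own.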

It is hard to define what treatments might happen in the absence of an intervention. A neighborhood that does not receive federal dollars to implement a specific place-based program can choose to enact its own intervention. Is the right alternative no intervention at all, or whatever intervention grows in the absence of the specific treatment? There are no sugar pills given out in social experiments to prevent individuals or communities from designing their own treatment regimen. The absence of a placebo is even trickier in the absence of random assignment, but that just means we need to collect very good data on what is being done everywhere we look.

Houses illustration from Shutterstock.

Filed under: Economic development, Economic Growth and Productivity, Education and Training, Infrastructure, Job Market and Labor Force, Job opportunities, Local, Metro, Mobility and transportation, Neighborhood indicators, Neighborhoods and community-building, Neighborhoods, Cities, and Metros, Performance Management and Measurement