
Economic Growth and Productivity Archive

A flourishing performance management landscape

Author: Mary Winkler | Posted: July 23rd, 2014

In summer 2013, John Bridgeland and Peter Orszag’s “Can Government Play Moneyball?” challenged the government and nonprofit sectors to make greater investments in understanding what works and to pursue a more rigorous approach to evidence and impact.

Today, the spirit of the “moneyball” movement is blossoming. At its core, it’s about organizations improving their performance by continuously tracking whether their programs and services are leading to desired results. Performance measurement and management is hardly new to the public and nonprofit sectors, but it is arguably more widely used now than ever before, driven by demands for greater accountability and growing expectations that organizations do more with less.

Demand for performance management techniques is high

In 2012, the Urban Institute, in partnership with Child Trends and Social Solutions, launched a new tool to help nonprofits measure and manage performance: PerformWell. This resource was designed to fill an information void, help nonprofits identify outcomes and performance indicators, and provide surveys and assessment tools to assist with tracking and reporting.

Since its launch in March 2012, the PerformWell site has had nearly 300,000 visitors, and more than 12,500 people have signed up for webinars, signaling a genuine need and appetite to engage in this work.

The performance management landscape

PerformWell is only one recent addition to an increasingly vibrant landscape of performance measurement and evaluation resources. In October 2012, the Bill and Melinda Gates Foundation, Hewlett Foundation, and Liquidnet launched Markets for Good, a forum for sharing innovative ideas, best practices, and diverse points of view for helping the social sector make better decisions and support a “dynamic culture of continuous learning and development.”

America Achieves, through the Results for America Initiative, developed an agenda that calls for a federal evidence and evaluation framework, an increase in the use of evidence in all federal formula and competitive programs, the creation of a federal “what works” clearinghouse, and more accessible, user-friendly, publicly available data.

In December 2013, Leap of Reason and PerformWell partnered to host After the Leap, the first-ever national conference on performance management. Themes from that conference have been repeated in many circles, including the recent Social Impact Exchange (SIE) conference, which included a panel on how funders can support the capacity of nonprofits. Although the SIE’s annual conference is generally geared toward nonprofits and funders interested in scaling social impact, many participants acknowledged that performance and evaluation strategies and practice evolve along a continuum – a message echoed by Nancy Roob at the After the Leap conference and, more recently, in her blog post in the Stanford Social Innovation Review series on the “Value of Strategic Planning and Evaluation.”

And not to be left out, foundations are now likely to face increasing scrutiny of their investment choices, thanks to a recent entrant to the field – Philamplify – designed by the National Committee for Responsive Philanthropy. Launched in May, the new site has been described by the Washington Post as “Yelp for the philanthropy sector.” What distinguishes Philamplify from other efforts to hold foundations accountable is that its reviews are conducted independently, with or without the consent of the foundations. The goal is to expand from the initial 3 assessments to reviews of 100 of the largest foundations in the United States.

A bright future for performance management

As my colleague Jeremy Koulish pointed out last year, “getting to measures that can be applied uniformly across the whole sector is a challenging endeavor.” Yet these resources and initiatives reflect growing attention and a sense of urgency around measurement and evaluation in the nonprofit, government, and philanthropic sectors – neither of which is likely to diminish anytime soon.

Photo: AP Photo/J. Scott Applewhite. 

Follow Mary Winkler on Twitter @MaryKWinkler.

A version of this piece was originally published in the PerformWell newsletter (July 2014).

Filed under: Center on Nonprofits and Philanthropy, Cross-Center Initiatives, National Center for Charitable Statistics, Nonprofit data and statistics, Nonprofits and government policy, Nonprofits and Philanthropy, Performance measurement and management, PerformWell, Public and private investment, Tracking the economy

Is student debt hindering homeownership?

Author: Maia Woluchem and Taz George | Posted: July 17th, 2014

Since 2004, student loan debt has tripled to $1.1 trillion, surpassing both outstanding auto and credit card debt. Many have sought to connect the dots between the rise in student debt and the five percent decline in homeownership, but research presented this week at the Urban Institute raises questions about the evidence.

[Chart 1]

While there is some indication of a possible link, it’s not nearly strong enough to fuel a narrative casting debt-ridden graduates as a significant economic burden, permanently lowering the homeownership rate.

Even with the evidence unsettled, we still need to monitor student debt’s effects to the best of our ability, given its outsized role in household balance sheets, said Meta Brown of the Federal Reserve Bank of New York. In previous years, young households with student loans had better credit profiles and higher homeownership rates than their debt-free counterparts. Now the relationship is less clear: those with student debt are slightly less likely to hold home-secured debt (a proxy for homeownership). Holders of student loan debt also have worse credit scores, which could make it more difficult for them to qualify for a loan, find housing, and obtain a credit card.

[Chart 2: share of 30-year-olds with home-secured debt, by student loan debt status between ages 27 and 30]

If student debt really is hurting homeownership, the panelists agreed, certain types of students are bearing the bulk of the damage. Research presented by Jeffrey Thompson of the Federal Reserve Board found that the connection between student debt and lower homeownership is almost entirely attributable to students who took out loans but did not attain a degree.

Other factors besides student debt are almost certainly at play in driving down the homeownership rate, such as changing interest in homeownership and the broader issue of restricted credit availability. Notably, the homeownership rate has declined steeply for 27- to 30-year-olds both with and without student debt. Many of these young potential homeowners have been locked out of the housing market at an opportune time to buy because they cannot meet the debt-to-income ratios required in this tight lending environment. Others have been stymied by low credit scores, which hamper their ability to secure a home loan. Others still may have decided that homeownership is simply not the best financial decision at this point.

Sandy Baum of Urban’s Income and Benefits Policy Center also critiqued the notion that student loan debt is weighing down the homeownership rate. She notes that many student debt measures are flawed because of weak data and questionable assumptions about borrowing patterns. And from the perspective of students planning to take out loans to continue their education, what is the alternative? Forgoing college is rarely the best long-term financial decision, as demonstrated by the well-established earnings gap between those with and without a college degree. Moreover, a large proportion of borrowers with more than $40,000 in student loan debt have borrowed for graduate or professional school, raising their long-run earning power even higher.

The mainstream discourse often ignores the nuances surrounding growing student debt and its link to financial hardship and broader economic woes, suggested Beth Akers of the Brookings Institution. As student loans comprise an increasingly large share of total household debt, expect the conversation to continue.

Correction: The original version of this post mislabeled the series in the second chart. The chart plots only 30-year-olds with home-secured debt, broken out by those who did and did not have student loan debt at any time between ages 27 and 30. We originally implied that it showed homeowners between ages 27 and 30. Our apologies.

Filed under: Credit availability, Economic Growth and Productivity, Education and Training, Employment and education, Higher education, Homeownership, Housing and Housing Finance, Housing Finance Policy Center, Labor force, Tracking the economy

Is residual income the key to the superior performance of VA loans?

Author: Laurie Goodman, Ellen Seidman, and Jun Zhu | Posted: July 16th, 2014

Default rates on loans guaranteed by the Department of Veterans Affairs (VA) are consistently lower than on loans insured by the Federal Housing Administration (FHA). For loans originated in 2007, the worst origination year, 36 percent of FHA loans have experienced at least one delinquency of 90 days or more, compared with only 16 percent of VA loans, as shown in the figure. These differences persist: for 2012 originations, the FHA default rate was 2.3 percent, compared with 1.3 percent for the VA.

[Figure: share of loans with at least one 90-day delinquency, by origination year, FHA vs. VA]

While FHA and VA borrowers spend roughly the same percentage of their income on their mortgage payments, FHA borrowers have lower incomes and lower credit scores. Controlling for income and credit score, VA borrowers still have considerably lower default rates: for 2008 loans, for example, the default rate was 26.1 percent for FHA loans, compared with just 11.6 percent for VA loans. And even if we apply VA borrower characteristics to FHA borrowers, the FHA default rate for 2008 loans would still have been 20.1 percent.
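To make the “controlling for” step concrete, here is a minimal sketch of that kind of composition adjustment: reweight FHA default rates, computed within credit-and-income cells, by the VA borrower mix. The cells, rates, and shares below are hypothetical placeholders, not the authors’ data.

```python
# Hypothetical composition adjustment: what would the FHA default rate be
# if FHA borrowers looked like VA borrowers on observed characteristics?
# All numbers are illustrative, not the authors' data.

fha_default_rate_by_cell = {
    ("fico<660", "inc<50k"): 0.32,
    ("fico<660", "inc>=50k"): 0.24,
    ("fico>=660", "inc<50k"): 0.18,
    ("fico>=660", "inc>=50k"): 0.11,
}

va_borrower_share_by_cell = {
    ("fico<660", "inc<50k"): 0.10,
    ("fico<660", "inc>=50k"): 0.15,
    ("fico>=660", "inc<50k"): 0.30,
    ("fico>=660", "inc>=50k"): 0.45,
}

# Weight each cell's FHA default rate by the VA share of borrowers in it.
counterfactual = sum(
    fha_default_rate_by_cell[cell] * share
    for cell, share in va_borrower_share_by_cell.items()
)
print(f"Composition-adjusted FHA default rate: {counterfactual:.1%}")
```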

Why does the difference persist over time? In a commentary posted today, we looked at some possible explanations:

  • Military culture – Could military culture, or special incentives not to default such as the potential loss of a security clearance, explain the difference? The evidence for this theory is weak: in 2013, only 17 percent of VA borrowers were on active duty when they took out their loans.
  • Direct contact – The VA has a statutory requirement to service its borrowers and contacts them directly; FHA does not engage in direct contact, leaving it to the servicer to contact the borrower. As a result, the VA intervenes at an earlier point and in a more uniform manner. While this might improve the likelihood that a delinquent loan reperforms, often referred to as the cure rate (it actually doesn't seem to), it is unlikely to explain the substantially higher rate at which FHA loans go 90 days delinquent.
  • Skin in the game – Unlike the FHA’s 100 percent insurance, VA lenders remain on the hook for losses after the VA’s limited guaranty is exhausted. As a result, VA loans tend to be concentrated in lenders who are familiar with the VA’s special underwriting and servicing systems. We hope to explore FHA and VA default rates for lenders who originate both types of loans.
  • Residual income test – While the VA uses both a residual income test and debt-to-income (DTI) guidelines to assess a borrower’s ability to pay, the FHA and conventional lenders rely exclusively on DTI. The residual income test measures whether a borrower will have enough money left after paying the mortgage and related expenses each month to meet unanticipated expenses (a minimal sketch of such a screen follows this list). Although the expense side of the VA’s test has not been updated for years, and therefore probably understates the residual income a family actually needs, it works: for 2008 originations by borrowers with incomes under $50,000, the VA default rate was about 60 percent of the FHA default rate.
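Here is a minimal sketch of such a residual income screen, assuming simplified inputs. The expense categories and the dollar floor are hypothetical; the VA’s actual tables vary by region, family size, and loan amount.

```python
# A hypothetical residual income screen, not the VA's actual test.
RESIDUAL_INCOME_FLOOR = 1_000  # assumed monthly floor for a family of four

def passes_residual_income_test(monthly_income, taxes, mortgage_payment,
                                other_debt_payments, living_expenses,
                                floor=RESIDUAL_INCOME_FLOOR):
    """Return True if income left after monthly obligations clears the floor."""
    residual = (monthly_income - taxes - mortgage_payment
                - other_debt_payments - living_expenses)
    return residual >= floor

# A borrower can satisfy a DTI guideline yet fail this screen once
# estimated family living costs are counted.
print(passes_residual_income_test(5_000, 900, 1_400, 650, 1_300))  # False: residual is 750
```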

While adding a residual income test may cause some families to rethink or delay a home purchase, or to purchase a less expensive house, it also appears to be an effective way to reduce default rates and ensure borrowers take out mortgages they can afford. FHA and conventional programs should consider adding residual income to their underwriting. Moreover, lenders making higher-cost Qualified Mortgages may want to consider using a residual income screen to provide more certainty that their borrowers can truly repay the loan.

Filed under: Agency securitization, Credit availability, Economic Growth and Productivity, Federal programs and policies, GSE reform, Homeownership, Housing and Housing Finance, Housing and the economy, Housing finance, Housing Finance Policy Center, Tracking the economy

Why government data sites are so hard to use

Author: Jon Schwabish | Posted: July 14th, 2014

A couple of weeks ago over at FlowingData, Nathan Yau wrote a post about how to improve government data sites. The post was mostly a constructive critique of the difficulties users face in extracting and using data provided by the federal government. (Surely state and local governments create similarly poor interfaces.) It’s not that I disagree with Nathan, but I think it’s worth digging a little deeper into why government web sites and data sets aren’t particularly user-friendly.

Having worked at a government agency for nearly a decade and spoken to countless agencies about data visualization, presentation techniques, and technology challenges over the past few years, I thought I might add my own perspective.

In his post, Nathan suggests three reasons why government data sites are inexcusably poor:

Maybe the people in charge of these sites just don't know what's going on. Or maybe they're so overwhelmed by suck that they don't know where to start. Or they're unknowingly infected by the that-is-how-we've-always-done-it bug.

In my experience, government web sites aren’t difficult to use or extract data from because government workers don’t “know what’s going on” or are “overwhelmed by suck.” The real answer is probably closer to the “that-is-how-we’ve-always-done-it bug”—but even that simplifies a more complicated story.

Let’s say for the moment that you work at a large government agency and your job is to process a large household survey and make it available to the public (think, say, the Census Bureau). Until the past couple of years or so, your target audience was other government workers, academics, and researchers in similar fields, and most of those analysts use tools similar to the ones you’re using: Stata, SAS, SPSS, MATLAB, maybe a little Fortran or C++. So what do you do? You create a data file that they can download, unpack, and analyze using those programming languages. Your primary audience is not journalists (data-driven journalism had not yet taken off), bloggers (in-depth data blogging was just beginning), or data scientists (the term didn’t even exist).

Now, however, with the Open Data movement, interest in and demand for Big Data, expanded open source programming languages and tools, and the general explosion of DATA EVERYWHERE, everyone is clamoring for more of your government data. So the mandate has changed. And you, as the government worker who has for so long processed this survey the same way, now are being asked to provide that data in a variety of formats. You’re not familiar with those different file formats or tools, so you ask about training or maybe even hiring some additional staff. Unfortunately, that’s probably not going to happen. Demand for more (or better) data has not translated into more funds to train existing staff or hire new staff. For example, between fiscal years 2011 and 2013, the overall budget appropriation for the Census Bureau fell from $1.2 billion to $859.3 million, a decline of over 25 percent. (It’s hard to tell, but that may actually be an overstatement of the decrease, if there were still some extra funds in the 2011 appropriation to process the 2010 decennial census.) At the Bureau of Economic Analysis, the producer of the National Income and Product Accounts, total appropriations fell by a smaller amount: from $93 million in 2011 to $89.8 million in 2013.
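To see what serving those new audiences involves, here is a minimal sketch of republishing a single fixed-width extract in several formats; the file name and column layout are hypothetical, not an actual agency specification.

```python
# Republishing one fixed-width microdata extract for several audiences.
# The byte positions and column names here are hypothetical.
import pandas as pd

colspecs = [(0, 9), (9, 11), (11, 12), (12, 20)]  # hypothetical field positions
names = ["serial", "age", "sex", "income"]

df = pd.read_fwf("extract.dat", colspecs=colspecs, names=names)

df.to_csv("extract.csv", index=False)         # spreadsheets and journalists
df.to_json("extract.json", orient="records")  # web developers
df.to_stata("extract.dta")                    # the traditional research audience
```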

I don’t believe that government agencies can’t or don’t want to make their data more accessible or are so overwhelmed by the technology that they’re unable to come up with solutions. Instead, I think many agencies have yet to adjust to a world that demands data, and demands that it be easily accessible at all times. It’s going to take time, money, and training for the government to catch up.

Filed under: Economic Growth and Productivity, Income and Benefits Policy Center, Monetary policy and the Federal Reserve, Tracking the economy

The Federal Reserve is not ending its stimulus

Author: Donald Marron | Posted: July 10th, 2014

Yesterday, the Federal Reserve confirmed that it would end new purchases of Treasury bonds and mortgage-backed securities (MBS)—what’s known as quantitative easing—in October. In response, the media are heralding the end of the Fed’s stimulus:

  • “Fed Stimulus is Really Going to End and Nobody Cares,” says the Wall Street Journal.
  • “Federal Reserve Plans to End Stimulus in October,” reports the BBC.

This is utterly wrong.

What the Fed is about to do is stop increasing the amount of stimulus it provides. For the mathematically inclined, it’s the first derivative of stimulus that is going to zero, not stimulus itself. For the analogy-inclined, it’s as though the Fed had announced (in more normal times) that it would stop cutting interest rates. New stimulus is ending, not the stimulus that’s already in place.
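For the notation-inclined, here is the same point as a minimal sketch in symbols, writing S(t) for the Fed’s stock of long-term assets at time t, so that QE purchases are the flow S′(t):

```latex
% Stock versus flow: S(t) is the Fed's holdings of long-term Treasuries
% and MBS at time t; S'(t) is the pace of new purchases (QE).
\[
  S'(t) \;\to\; 0 \quad \text{as QE ends,}
  \qquad \text{while} \qquad
  S(t) \;\approx\; \$4~\text{trillion remains on the balance sheet.}
\]
% Under the "stock" view, stimulus depends on S(t), so ending new
% purchases leaves the existing stimulus in place.
```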

The Federal Reserve has piled up more than $4 trillion in long-term Treasuries and MBS, thus forcing investors to move into other assets. There’s great debate about how much stimulus that provides. But whatever it is, it will persist after the Fed stops adding to its holdings.

(P.S. I have just espoused what is known as the “stock” view of quantitative easing, i.e., that it’s the stock of assets owned by the Fed that matters. A competing “flow” view holds that it’s the pace of purchases that matters. If there’s any good evidence for the “flow” view, I’d love to see it. It may be that both matter. In that case, my point still stands: the Fed will still be providing stimulus through the stock effect.)

Filed under: Economic Growth and Productivity, Monetary policy and the Federal Reserve, Public and private investment, Taxes and Budget, Tracking the economy

What you need to know about the new workforce development bill

Author: Lauren Eyster | Posted: July 9th, 2014

After more than a decade of continuing resolutions, a bipartisan bill to reauthorize the Workforce Investment Act of 1998 (WIA) has passed Congress and should go this week to the White House for the president’s signature.

While in no way perfect, the new Workforce Innovation and Opportunity Act (WIOA) is a clear improvement over its predecessor. It builds on 16 years of learning and knowledge and will provide better opportunities for workers who need new skills for the new economy.

Our paper last year on the innovations and future directions of workforce development highlights some of the key ideas that are embedded in WIOA.

Encouraging innovation. WIOA encourages local workforce boards to use promising strategies such as career pathways and sector strategies to better serve workers and employers. The advantage of these approaches is that they connect employer demand for skills, and worker characteristics and abilities, with the design of education and training programs. WIOA also restores the provision allowing governors to reserve a full 15 percent of WIOA funds for statewide activities, letting them support greater innovation in their states.

Attaining industry-recognized credentials. One of the new core performance indicators under WIOA measures a student or trainee’s progress toward recognized postsecondary credentials. Again, this is designed to link employer and worker needs, as employers can be more confident that graduates have the right skills.

Improving data for measuring performance. While the original Workforce Investment Act introduced common performance measures, WIOA strengthens performance reporting by enhancing and aligning a set of performance indicators across adult (including adult education) and youth programs. The legislation also supports efforts to link participant data to earnings data across all WIOA-funded programs and to coordinate state and federal evaluation efforts (a minimal sketch of such a linkage follows).
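As a rough illustration of what linking participant data to earnings data involves, here is a minimal sketch; the files, column names, and outcome window are hypothetical, not a WIOA specification.

```python
# Hypothetical linkage of program exit records to quarterly UI wage records.
import pandas as pd

participants = pd.read_csv("participants.csv")  # one row per program exit
earnings = pd.read_csv("ui_wage_records.csv")   # one row per person-quarter

# Join on a shared (hashed) identifier, then look at earnings two quarters
# after exit, a common employment outcome window. Quarters are assumed to
# be encoded as consecutive integers.
linked = participants.merge(earnings, on="hashed_id", how="left")
outcome = linked[linked["quarter"] == linked["exit_quarter"] + 2]

print(outcome.groupby("program")["earnings"].median())
```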

Refocusing on disadvantaged populations. The Workforce Investment Act dismantled most requirements around serving disadvantaged populations. WIOA does not reinstate these provisions, but it does require boards and One-Stop operators to develop practices that encourage providing services to individuals with barriers to employment. This could be challenging, given that states must meet performance level requirements, but it could help ensure that more disadvantaged individuals receive the often longer-term services they need.

What WIOA does not do is immediately return overall workforce development funding to pre-sequestration levels. Funding would increase annually until 2020, but states and local areas will continue to be asked to do more with less.

For more of the legislative details, see the National Skills Coalition’s great side-by-side analysis of WIA and WIOA provisions.

Photo from Shutterstock.

Filed under: Economic Growth and Productivity, Education and Training, Income and Benefits Policy Center, Income and Wealth, Job Market and Labor Force, Job training and apprenticeships, Labor force, Low-wage workers, Tracking the economy, Unemployment, Wages and nonwage compensation, Work support strategies, Workforce development, training, and opportunity

Maps need context

Author: Jon Schwabish and Bryan Connor | Posted: July 2nd, 2014

Maps may be the most data-dense visualizations. Consider your basic road map: it includes road types (highways, toll roads), directions (one-way, two-way), geography (rivers, lakes), cities, types of cities (capitals), points of interest (schools, parks), and distance. Maps that encode statistical data, such as bubble plots or choropleth maps, are also data-dense, replacing some of these geographic characteristics with different types of data encodings. But lately we’ve been wondering whether most maps fail to convey enough context.

[Map: poverty rates by district in India]

As an example, consider this map of poverty rates by district in India. It’s a fairly simple choropleth map, and you can immediately discern different patterns: high poverty rates are concentrated in the districts in the northernmost part of the country, on part of the southeast border, and in a stretch across the middle of the country. Another set of high-poverty areas can be found in the landmass in the northeast part of the map. But here’s the thing: most of us don’t know much about India’s geography. Without some context—plotting cities or population centers—we can only guess what this map is telling us.

Many readers will be more familiar with the geography of the United States. So when maps like this one from the Census Bureau show up, we are better equipped to understand them because we’re familiar with areas such as the high-poverty South and the Texas-Mexico border. But then again, what about readers familiar with basic U.S. geography but not with patterns of poverty? How useful is this map for them?

To more completely understand data encoded in maps, context is important. Where are the city centers? What are the patterns of population, income, or other metrics that may matter?

For example, a team (including Bryan Connor) from this year’s civic hack day in Baltimore built something to directly address this problem. Their mapping tool lets you build and compare every map that can be generated from the city data collected and published by the Baltimore Neighborhood Indicators Alliance. Poverty can be placed next to population, racial diversity next to education data, and so on.

[Screenshot: Baltimore Neighborhood Indicators Alliance mapping tool]

The data visualization studio Interactive Things provides us with another good example with their Daily Swiss Maps project for NZZ. Over several months, they worked with an editorial team to build maps that reveal new insights about Switzerland. Some feature small multiples, others highlight population distribution, and all of them link to a corresponding editorial explanation on nzz.ch.

Maps are among the most popular visualization types—and not just those that map favorite beer types or accents, but also those that provide dense levels of data in a familiar format. It’s just that sometimes those maps leave out a level of context that would help us better understand the information being shown.

Are we suggesting that every single map needs several other maps to give context? Well, maybe. Or perhaps single maps need more and better annotation in order to highlight regions and patterns of relevance.
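As one sketch of what better annotation could look like, here is how city centers might be overlaid on a choropleth with geopandas; the file and column names are hypothetical placeholders, not a published dataset.

```python
# Overlay major city centers on a poverty-rate choropleth for context.
# File and column names are hypothetical.
import geopandas as gpd
import matplotlib.pyplot as plt

districts = gpd.read_file("districts.shp")  # polygons with a poverty_rate column
cities = gpd.read_file("cities.shp")        # points for major population centers

ax = districts.plot(column="poverty_rate", cmap="OrRd", legend=True,
                    edgecolor="grey", linewidth=0.2, figsize=(8, 8))

# The contextual layer: without it, readers unfamiliar with the geography
# can only guess what the shading means.
cities.plot(ax=ax, color="black", markersize=10)
for _, row in cities.iterrows():
    ax.annotate(row["name"], xy=(row.geometry.x, row.geometry.y),
                xytext=(3, 3), textcoords="offset points", fontsize=8)

ax.set_axis_off()
plt.savefig("poverty_map_with_cities.png", dpi=150)
```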

Filed under: Baltimore, Economic Growth and Productivity, Finance, Geographies, Income and Wealth, International

Small investors spurred spike in cash sales. So what happens next?

Author: Taz George and Maia Woluchem

| Posted: June 30th, 2014

More homebuyers are paying in cash upfront

In the aftermath of the recession, a record share of homebuyers are forgoing mortgages altogether, paying the full sales price upfront and in cash.

Between 2006 and 2012, cash sales jumped from 21 to 40 percent of home sales before declining slightly in 2013. But what’s driving these elevated cash sales and what does it mean for housing markets and communities?

[Chart: cash sales as a share of home sales, 2006–2013]

Investors entered when lower-credit borrowers were locked out.

As a result of the financial crisis, nearly seven million borrowers have lost their homes through foreclosures and short sales over the past six years. Moreover, the slow economy and a tightening credit box have made qualifying for a mortgage increasingly difficult for creditworthy low- and middle-income households.

To absorb the large supply of distressed sales and replace the low- and no-credit buyers, investors stepped in, buying up homes to resell or convert to rental units, often at heavily discounted prices. Cash upfront became a competitive advantage when vying for potentially lucrative properties. As a result, the cash share of home purchases has climbed steeply, while the homeownership rate has fallen to 65 percent, down from 69 percent in 2004.

But most of these investors aren’t the big guys.

While media attention has focused on institutional investors managing huge stocks of homes as rentals, those investors are only a small part of the story. The majority of these cash purchasers are individuals who buy just one to ten homes in their own communities, according to the Urban Institute’s Laurie Goodman, head of the Housing Finance Policy Center, who shared her analysis last Wednesday night at a forum co-hosted by the Urban Institute and CoreLogic: Cash Sales, Institutional Investors and Single-Family Rentals: Performance, Pricing, and Policy.

Here’s how Goodman interprets the census data: assuming an average sale price of $100,000, the approximately $25 billion in institutional money raised for home purchases equates to roughly 250,000 homes. Compare that with the nearly 4.5 million homes converted from owner-occupied to rental units between 2004 and 2013, and it’s clear that the big institutional players account for just over one twentieth of the shift.
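That back-of-the-envelope arithmetic is easy to check; a quick sketch using the figures above:

```python
# Verifying Goodman's back-of-the-envelope estimate.
institutional_capital = 25e9  # ~$25 billion raised for home purchases
avg_sale_price = 100_000      # assumed average sale price
conversions = 4.5e6           # owner-to-rental conversions, 2004-2013

institutional_homes = institutional_capital / avg_sale_price
share = institutional_homes / conversions

print(f"{institutional_homes:,.0f} homes")  # 250,000 homes
print(f"{share:.1%} of all conversions")    # 5.6%, just over one twentieth
```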

The majority of investors are buying a smaller number of properties, hoping to make a profit off of rental income and rebounding prices, “a common theme throughout history” according to Mark Fleming, CoreLogic’s chief economist, another forum participant. “While institutional investors have been active in this space, they remain a relatively small share of the whole,” says Fleming.

What happens when investor activity ebbs?

Home prices would have fallen much further had investors not absorbed the excess supply from distressed sales during the crisis. But the troubling part of this trend may be what happens next. We just don’t know the long-term consequences of neighborhoods transitioning from predominantly owner- to renter-occupied.

“Property management at this scale has never been done,” added Sara Edelman of the Center for American Progress. “We don’t have a whole idea of what’s going on” in neighborhoods with high concentrations of institutional investor units, whether the homes are adequately maintained, or whether prospective renters are fairly evaluated.

And with credit still far tighter than in the years before the boom, we don’t know what will happen to the housing market if overall investor activity continues to taper off. Oliver Chang, cofounder and director of Sylvan Road Capital, put it this way: “Many of the people who got in this to grab the appreciation didn’t understand how to manage scatter-shot single family rentals. Good progress has been made in getting up to speed and bringing these operations in house. But it’s hard and there is no roadmap.”

Sign up here to receive the Housing Finance Policy Center’s monthly newsletter, Housing Finance At a Glance, which contains the monthly chartbook as well as our blog posts, commentaries, events, and other news.

Filed under: Agency securitization, Credit availability, Economic Growth and Productivity, Federal programs and policies, GSE reform, Homeownership, Housing and Housing Finance, Housing and the economy, Housing finance, Housing Finance Policy Center, Housing markets and choice, National (US), Tracking the economy

Why don’t governments implement evidence-based best practices?

Author: John Roman

| Posted: June 19th, 2014

In 1975, Robert Martinson famously wrote that “nothing works” in treating convicted criminals and thus he concluded that rehabilitation in any form was not cost-effective. As discouraging as the criminal justice research was at the time, it was no more pessimistic than education, or public health, or child welfare research.

Today, however, we have mountains of evidence about effective programs. There is rigorous, transparent, objective evidence about what works in schools, what works to prevent violence, how we can help high-risk adolescents get on a better life path, and how to curb criminal offending in general. There are resources that can help governments find best practices in almost any sector, from child welfare to health care.

We don’t have a solution to every problem, but there are evidence-based solutions to many problems. But these programs focus on very narrow populations. If a government is looking to reduce drug use within the criminal justice system, we can provide precise estimates of the likelihood that community-based treatment will reduce new offending.

[Figure: estimated effect of community-based treatment on reoffending]

But what if government wants to reduce all types of criminal reoffending? Drug treatment alone is not enough to accomplish that objective. You also have to improve work skills and address mental illness, chronic homelessness, anger management, and more. There is no silver bullet for recidivism that can be summarized in a nice figure like the one above.

A few months ago, I was asked by a local government to help them think through what kind of social services they might finance through a new financing mechanism called social impact bonds (SIBs). SIBs are used to fund evidence-based programs, shifting the risk of new investment from the government to the private or philanthropic sector. The result could be an infusion of new money to fund programs demonstrated to work.

The local government’s particular question: How do we increase high school completion in our city by 20 percent?

Great question.

The problem is that what we do in the social sciences is ask very particular questions about the effectiveness of very narrow programs. If the goal is to reduce new offenses from highly at-risk youth, research provides an answer. If the goal is to increase high school completion among all youth, that is much more difficult to answer.

Florida Redirection is a cost-effective intervention for the highest-risk adolescents, a group that is a tiny (but expensive) fraction of high school dropouts. Reducing the high-risk adolescent dropout rate helps overall high school completion, but only somewhat. If you want to reduce overall high school dropout rates, you have to do lots of things. You have to have effective mentoring. You have to improve math literacy. You have to do out-of-the-box things, like reduce asthma, which is a leading cause of truancy.

At the moment, we have no idea how to do these things in combination. Just as with drug therapy for an illness, two really good medications could complement each other—or have adverse consequences. But that’s not how we study social service programs today.

In the United Kingdom, though, they are taking this issue head on. There, governments are testing 10 SIBs to improve the life chances of NEET youth (NEET being Not in Education, not Employed, not in a Training program), who often become huge drains on social resources as they age.

Each SIB tries a different combination of remedies for the NEET youth. And ultimately, that’s the way we will figure out the solution to these big policy problems.

The point is this: We don’t have solutions that cross programs and sectors yet. But we can if we use SIBs and other social innovations as a way to experiment, to test ideas about how evidence-based programs can work together.

Until then, research will have the wrong answers for policymakers’ questions, and policymakers will not pursue the evidence-based answers researchers can provide.

Filed under: Crime and Justice, Economic Growth and Productivity, National (US), Tracking the economy