
Economic Growth and Productivity Archive

Detroit's retirees aren't the only ones taking a haircut

Author: Richard Johnson | Posted: July 28th, 2014


Detroit’s municipal employees and retirees made headlines last week when they accepted a pension cut to help reduce the city’s debt. Is this agreement a game-changer, as some contend? Now that some retirees have agreed to forgo already earned pension benefits—once considered sacrosanct—will unions across the country follow suit? Don’t count on it.

Many state and local employees and retirees have already shouldered significant pension cuts, and it’s unrealistic to expect them to absorb any more. Rather than relying on givebacks from public servants, policymakers must commit themselves to paying the benefits they promised.

Like most public-sector employees, city workers in Detroit who spent their entire careers on the city’s payroll receive generous pensions. Benefits equal a fraction of employees’ final average salaries multiplied by years of service. Before the city’s finances collapsed, that fraction rose over employees’ careers, reaching 2.2 percent of salary after 25 years on the job. Employees with 30 years of service could retire as early as age 55. Once retired, their benefits automatically rose 2.25 percent a year.

Over a lifetime, these benefits add up. Consider a Detroit municipal employee making $50,000 a year. Under the old rules, after 30 years on the job, he could retire at the age of 55 with an initial pension of $27,500, worth more than half a million dollars over his lifetime (assuming a 2 percent real interest rate and 3 percent inflation).
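That lifetime figure is easy to sanity-check. Here is a minimal back-of-the-envelope sketch (ours, not the author's model) that discounts the growing benefit stream; the initial benefit, 2.25 percent escalator, 2 percent real rate, and 3 percent inflation come from the post, while the 25-year payout horizon is our assumption:

```python
# Sanity check on the pension's lifetime value. The initial benefit,
# 2.25% escalator, 2% real rate, and 3% inflation come from the post;
# the 25-year payout horizon is an illustrative assumption.
initial_pension = 27_500       # first-year benefit on a $50,000 salary
escalator = 0.0225             # automatic annual benefit increase
discount = 1.02 * 1.03 - 1     # ~5.1% nominal: 2% real rate, 3% inflation
years = 25                     # assumed payout horizon from age 55

pv = sum(initial_pension * (1 + escalator) ** t / (1 + discount) ** t
         for t in range(years))
print(f"Lifetime value at retirement: ${pv:,.0f}")  # ~$506,000
```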

As the city works its way out of bankruptcy, however, that pension has been slashed. First, the city cut the benefit-formula multiplier for years worked after 2011 to just 1.5 percent. That change trimmed lifetime benefits for newly hired city workers by about a fifth. Last week’s agreement further reduces annual benefits by 4.5 percent and eliminates the automatic benefit escalator in retirement, reducing lifetime benefits for new retirees by another quarter. New hires who eventually retire at age 55 after 30 years of service will thus receive pensions worth 40 percent less, in inflation-adjusted dollars, than their counterparts who retired in 2011 (the two cuts compound: 0.8 × 0.75 = 0.6 of the original benefit).

New Jersey employees have already taken on the burden of benefit cuts

These cuts may help get Detroit back on its feet, but don’t expect similar retiree givebacks elsewhere to solve the nation’s public pension problem. Many state and local retirement plans have already substantially cut benefits. Most of the benefit formula changes apply only to new hires, but current workers and retirees have not been spared.

As the Center for Retirement Research points out, between 2010 and 2013, 12 states reduced or eliminated cost-of-living adjustments for current retirees as well as current employees and new hires. (Another five states reduced COLAs but shielded current retirees). Many jurisdictions have also raised the amount employees must contribute to their plans.

New Jersey, with some of the worst-funded plans in the nation, is a good example. In 2011, lawmakers eliminated COLAs for state retirees, whose benefits had increased each year by 60 percent of the change in the consumer price index. That cut shaved 25 percent off the lifetime pension of a newly retired state employee with 35 years of service. The state also increased mandatory employee contributions from 5.5 to 7.5 percent of pay, reducing what retirees get from the state by another eighth. Additionally, the 2011 reforms boosted the retirement age and reduced the plan multiplier.
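To see how eliminating even a partial COLA compounds over a retirement, here is a rough illustration (our own, not the authors' actuarial model). The 35-year payout horizon, 3 percent inflation, and 2 percent real discount rate are assumptions:

```python
# How much does dropping a COLA cut a pension's lifetime value?
# Assumptions (ours): 3% inflation, so the old NJ COLA was 0.6 * 3% = 1.8%
# per year; a 35-year payout; a 2% real discount rate.
inflation, real_rate, years = 0.03, 0.02, 35
cola = 0.6 * inflation                        # 60% of CPI, per the old rules
discount = (1 + real_rate) * (1 + inflation) - 1

def lifetime_factor(growth):
    """Present value of a $1 initial benefit growing at `growth` per year."""
    return sum((1 + growth) ** t / (1 + discount) ** t for t in range(years))

cut = 1 - lifetime_factor(0.0) / lifetime_factor(cola)
print(f"Lifetime cut from eliminating the COLA: {cut:.0%}")  # ~21%
# The authors' 25 percent figure rests on their own actuarial assumptions
# about mortality and discounting; the mechanism is the same.
```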

All told, New Jersey state retirees will receive pensions only two-fifths as large as what they would have been paid under the rules in effect before 2008. Yet, New Jersey’s pension plan is in worse financial shape today than it was in 2007. Because of these benefit cuts, state employees hired at age 25 must now work 28 years before their future payments are worth more than the value of their required plan contributions. Those who leave state employment earlier end up financing their entire pensions themselves, without any state contributions. In fact, they would be better off if they could opt out of the state retirement plan and invest their required contributions elsewhere.

States should fully fund their retirement promises

Most troubled public pensions are financially distressed because states and localities have not contributed as much as their actuaries say they must in order to pay the future benefits they’ve promised, not because their benefits are too generous. Our recent comprehensive analysis of state plans graded many plans poorly because they failed to provide employees with adequate retirement security, especially those who spent less than a full career in public service. If policymakers want to fix the public pension mess, they must dedicate themselves to fully funding the retirement promises they’ve already made.

Shirley Lightsey, president of the Detroit Retired City Employees Association, stands in front of part of Diego Rivera's Detroit Industry mural after a news conference at the Detroit Institute of Arts in Detroit, Monday, June 9, 2014. (AP Photo/Paul Sancya)

Filed under: Aging, Detroit, Economic well-being, Geographies, Income and Benefits Policy Center, Income and Wealth, Labor force, Metro, NJ, Retirement, Retirement/pensions, Social Security

The Washington DC area needs more affordable rental housing

Author: Leah Hendey | Posted: July 24th, 2014

Last Tuesday saw the release of Housing Security in the Washington Region, a study I wrote with my colleagues at the Urban Institute with assistance from the Metropolitan Washington Council of Governments.

The study is unique for its breadth, spanning Washington, DC and 11 surrounding jurisdictions in Maryland and Virginia. It’s also thematically expansive, examining the full continuum of housing needs, from emergency shelter to affordable homeownership, highlighting how supply, demand, funding streams, and policies impact homeowners, renters, and the unhoused at every income level.

In a study of this size, it’s easy to lose sight of what all these numbers mean for real people. Let’s unpack a small portion of the study.

Rental housing affordability is a big problem in the area. Nearly half of all renters (regardless of income level) in the Washington region were cost burdened in 2009-11. That means almost 315,000 households were paying more than 30 percent of their monthly income on rent and utilities. To give you a sense of the magnitude of the problem, there were about 300,000 households total in Prince George’s County, Maryland.

Of course, households at the bottom of the income scale were most likely to be cost burdened. In fact, 86 percent of extremely low income households—those earning less than $32,000 annually—were cost burdened in 2009-11. Keep in mind that many of the services that get you through your week are performed by workers who fall into the “extremely low income” category: maids, dry-cleaning workers, pharmacy aides, fast-food cooks, coffee shop cashiers, and nursing aides and orderlies, to name a few.

[Chart: rental affordability for low-wage workers across Washington-area jurisdictions]

Take, for example, a nursing aide (who may just be caring for a loved one right now). On average, an aide in the DC metro area earns $28,700. Imagine that the nursing aide has two children and needs a two-bedroom apartment. If we think that she could afford to pay 30 percent of her income in rent, then she could afford a utilities-included apartment that rents for $720. (By the way, that only leaves her with $20,000 to pay for food for a family of three; clothing, including her scrubs for work; transportation to get to work; health insurance; emergencies; etc.)

As the chart above shows, it’s no surprise that our nursing aide might be cost burdened: there is not one jurisdiction in the area where she could afford to live if she paid the DC metro area’s median rent of $1,320. At that rent, she would have to work the equivalent of 1.83 full-time jobs to avoid being cost burdened. In Virginia’s Arlington and Fairfax Counties, she would have to work more than two full-time jobs.
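The arithmetic behind these numbers is simple to verify. Here is a quick sketch of the standard 30 percent affordability rule; all inputs are from the post, and small differences from the published figures reflect rounding:

```python
# The standard rule: housing is affordable if rent and utilities take no
# more than 30% of gross income. Inputs are from the post; the published
# figures differ slightly because they start from unrounded salaries.
income = 28_700        # average salary of a DC-area nursing aide
median_rent = 1_320    # DC metro area median monthly rent

affordable_rent = income * 0.30 / 12
print(f"Affordable monthly rent: ${affordable_rent:,.0f}")
# $718; the post rounds to $720

income_needed = median_rent * 12 / 0.30
print(f"Full-time jobs needed at the median rent: "
      f"{income_needed / income:.2f}")
# 1.84; the post's 1.83 reflects unrounded inputs
```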

Our study concludes that every jurisdiction in the Washington region needs more units to meet the needs of renters like our nursing aide—94,200 units in total. Policymakers, local agency staff, and philanthropists can use this data on gaps in the housing supply to inform their work and make strategic investments to aid those struggling with high rents.

This study was commissioned by The Community Foundation for the National Capital Region, with generous support from The Morris and Gwendolyn Cafritz Foundation.

Filed under: Affordability, Affordable housing, Economic Growth and Productivity, Geographies, Housing and Housing Finance, Housing and the economy, Income and Wealth, Job Market and Labor Force, Labor force, Low-wage workers, Metro, Metropolitan Housing and Communities Policy Center, Multifamily housing, Neighborhoods, Cities, and Metros, Wages and nonwage compensation, Washington, DC

A flourishing performance management landscape

Author: Mary Winkler | Posted: July 23rd, 2014


In summer 2013, John Bridgeland and Peter Orszag’s “Can Government Play Moneyball?” challenged the government and nonprofit sectors to make greater investments in understanding what works and to pursue a more rigorous approach to evidence and impact.

Today, the spirit of the “moneyball” movement is blossoming. At its core, it’s about organizations improving their performance by continuously tracking whether their programs and services are leading to desired results. Although performance measurement and management is hardly new to the public and nonprofit sectors, it is arguably more widely used now than ever before, driven by demands for greater accountability and growing expectations that organizations do more with less.

Demand for performance management techniques is high

In 2012, the Urban Institute, in partnership with Child Trends and Social Solutions, launched a new tool to help nonprofits measure and manage performance: PerformWell. This resource was designed to fill an information void, help nonprofits identify outcomes and performance indicators, and provide surveys and assessment tools to assist with tracking and reporting.

Since its launch in March 2012, the PerformWell site has had nearly 300,000 visitors, and more than 12,500 people have signed up for webinars, signaling a genuine need and appetite for this work.

The performance management landscape

PerformWell is only one recent addition to an increasingly vibrant landscape of performance measurement and evaluation resources. In October 2012, the Bill and Melinda Gates Foundation, Hewlett Foundation, and Liquidnet launched Markets for Good, a forum for sharing innovative ideas, best practices, and diverse points of view for helping the social sector make better decisions and support a “dynamic culture of continuous learning and development.”

America Achieves, through the Results for America Initiative, developed an agenda that calls for a federal evidence and evaluation framework, an increase in the use of evidence in all federal formula and competitive programs, the creation of a federal “what works” clearinghouse, and more accessible, user-friendly, publicly available data.

In December 2013, Leap of Reason and PerformWell partnered to host After the Leap, the first-ever national conference on performance management. Themes from that conference have echoed in many circles, including the recent Social Impact Exchange (SIE) conference, which featured a panel on how funders can support nonprofits’ capacity. Although the SIE’s annual conference is generally geared toward nonprofits and funders interested in scaling social impact, many participants acknowledge that performance and evaluation strategies evolve along a continuum of practice, a message Nancy Roob delivered at After the Leap and, more recently, in her blog post in the Stanford Social Innovation Review series on the “Value of Strategic Planning and Evaluation.”

And not to be left out, foundations are now likely to face increasing scrutiny of their investment choices, thanks to a recent entrant to the field: Philamplify, designed by the National Committee for Responsive Philanthropy. Just launched in May, the new site is described by the Washington Post as “Yelp for the philanthropy sector.” What distinguishes Philamplify from other efforts to hold foundations accountable is that its reviews are conducted independently, with or without the consent of the foundations. The goal is to grow the number of assessments from the initial 3 to 100 of the largest foundations in the United States.

A bright future for performance management

As my colleague Jeremy Koulish pointed out last year, “getting to measures that can be applied uniformly across the whole sector is a challenging endeavor.” Yet these resources and initiatives reflect growing attention and a sense of urgency around measurement and evaluation in the nonprofit, government, and philanthropic sectors, neither of which is likely to diminish anytime soon.

Photo: AP Photo/J. Scott Applewhite. 

Follow Mary Winkler on Twitter @MaryKWinkler.

A version of this piece was originally published in the PerformWell newsletter (July 2014).

Filed under: Center on Nonprofits and Philanthropy, Cross-Center Initiatives, National Center for Charitable Statistics, Nonprofit data and statistics, Nonprofits and government policy, Nonprofits and Philanthropy, Performance measurement and management, PerformWell, Public and private investment, Tracking the economy

Is student debt hindering homeownership?

Author: Maia Woluchem and Taz George | Posted: July 17th, 2014

Since 2004, student loan debt has tripled to $1.1 trillion, surpassing both outstanding auto and credit card debt. Many have sought to connect the dots between the rise in student debt and the five percent decline in homeownership, but research presented this week at the Urban Institute raises questions about the evidence.

[Chart: outstanding student loan debt compared with auto and credit card debt]

While there is some indication of a possible link, it’s not nearly strong enough to fuel a narrative casting debt-ridden graduates as a significant economic burden, permanently lowering the homeownership rate.

Regardless of the evidence, we still need to monitor student debt’s effects to the best of our ability, given its outsized role in household balance sheets, said Meta Brown of the Federal Reserve Bank of New York. In previous years, young households with student loans had better credit profiles and higher homeownership rates than their debt-free counterparts. Now the relationship is less clear, and those with student debt are slightly less likely to hold home-secured debt (a proxy for homeownership). Holders of student loan debt also have worse credit scores, which could make it more difficult for them to qualify for a loan, find housing, and obtain a credit card.

[Chart: share of 30-year-olds with home-secured debt, by student loan debt status (see correction below)]

If student debt really is hurting homeownership, the panelists agreed that certain types of students are bearing the bulk of the damage. Research presented by Jeffrey Thompson of the Federal Reserve Board found a connection between student debt and lower homeownership almost entirely attributable to students who took out loans but did not complete a degree.

Other factors besides student debt are more clearly at play in driving down the homeownership rate, such as waning interest in homeownership and the broader problem of restricted credit availability. Notably, the homeownership rate has declined steeply for 27- to 30-year-olds both with and without student debt. Many of these young potential homeowners have been locked out of the housing market, even at an opportune time to buy, because they cannot meet the debt-to-income ratios required in this tight lending environment. Others have been stymied by low credit scores, which hamper their ability to secure a home loan. Others still may have decided that homeownership is simply not the best financial decision at this point.
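To make the debt-to-income mechanism concrete, consider a back-of-the-envelope sketch. All of the dollar figures, and the 43 percent ceiling (the common qualified-mortgage benchmark), are illustrative assumptions rather than numbers from the research presented:

```python
# Back-of-the-envelope DTI: total monthly debt payments divided by gross
# monthly income. All figures are illustrative assumptions; 43% is the
# common qualified-mortgage benchmark, not a number from the panel.
monthly_income = 4_000
mortgage_payment = 1_400        # principal, interest, taxes, insurance
student_loan_payment = 350

def dti(payments, income):
    return sum(payments) / income

without_loan = dti([mortgage_payment], monthly_income)
with_loan = dti([mortgage_payment, student_loan_payment], monthly_income)
print(f"DTI without student debt: {without_loan:.0%}")  # 35%, under 43%
print(f"DTI with student debt:    {with_loan:.0%}")     # 44%, over 43%
```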

Sandy Baum of Urban’s Income and Benefits Policy Center also critiqued the notion that student loan debt is weighing down the homeownership rate. She notes that many student debt measures are flawed because of weak data and questionable assumptions about borrowing patterns. And from the perspective of students planning to take out loans to continue their education, what is the alternative? Forgoing college is rarely the best long-term financial decision, as the well-established earnings gap between those with and without a college degree demonstrates. Moreover, a large proportion of the borrowers with more than $40,000 in student loan debt borrowed for graduate or professional school, raising long-run earning power even higher.

The mainstream discourse often ignores the nuances surrounding growing student debt and its links to financial hardship and broader economic woes, suggested Beth Akers of the Brookings Institution. As student loans comprise an increasingly large share of total household debt, expect the conversation to continue.

Correction: The original version of this post mislabeled the series in the second chart. The chart plots only 30-year-olds with home-secured debt, broken out by those who did and did not have student loan debt at any time between ages 27 and 30. We originally implied that it showed homeowners between ages 27 and 30. Our apologies.

Filed under: Credit availability, Economic Growth and Productivity, Education and Training, Employment and education, Higher education, Homeownership, Housing and Housing Finance, Housing Finance Policy Center, Labor force, Tracking the economy

Is residual income the key to the superior performance of VA loans?

Author: Laurie Goodman, Ellen Seidman, and Jun Zhu | Posted: July 16th, 2014

Default rates on loans guaranteed by the Department of Veterans Affairs (VA) are consistently lower than on loans insured by the Federal Housing Administration (FHA). For loans originated in 2007, the worst origination year, 36 percent of FHA loans have experienced at least one delinquency of 90 days or more, compared with only 16 percent of VA loans, as shown in the figure. These differences persist: for loans originated in 2012, the FHA default rate of 2.3 percent was far higher than the VA’s 1.3 percent.

[Chart: share of loans 90+ days delinquent, FHA vs. VA, by origination year]

While FHA and VA borrowers spend roughly the same percentage of their income on their mortgage payments, FHA borrowers have lower incomes and lower credit scores. When controlling for income and credit score, VA borrowers still have considerably lower default rates. For 2008 loans, for example, the default rate for FHA loans was 26.1 percent compared with just 11.6 percent for VA loans. But even if we apply VA borrower characteristics to FHA borrowers, the FHA default rate for 2008 loans would still have been 20.1 percent.

Why does the difference persist over time? In a commentary posted today, we looked at some possible explanations:

  • Military culture – Could military culture or special incentives not to default, such as the potential loss of a security clearance, cause a significant difference? The evidence for this theory is weak; in 2013, only 17 percent of VA borrowers were on active duty when they took out their loans.
  • Direct contact – The VA has a statutory requirement to service its borrowers and contact them directly; the FHA leaves contact to the servicer. As a result, the VA intervenes earlier and in a more uniform manner. While this might improve the likelihood that a delinquent loan reperforms, often referred to as the cure rate (it actually doesn't seem to), it is unlikely to explain why FHA loans go 90 days delinquent at a substantially higher rate in the first place.
  • Skin in the game – Unlike the FHA’s 100 percent insurance, VA lenders remain on the hook for losses after the VA’s limited guaranty is exhausted. As a result, VA loans tend to be concentrated among lenders who are familiar with the VA’s special underwriting and servicing systems. We hope to explore FHA and VA default rates for lenders who originate both types of loans.
  • Residual income test – While the VA uses both a residual income test and debt-to-income (DTI) guidelines to assess a borrower’s ability to pay, the FHA and conventional lenders rely exclusively on DTI. The residual income test measures whether a borrower will have enough money left after paying the mortgage and related expenses each month to meet unanticipated expenses (a rough sketch of such a test follows this list). Although the expense side of the VA’s test has not been updated for years, and therefore probably understates the residual income a family actually needs, it works. For 2008 originations by borrowers with incomes under $50,000, the VA default rate was about 60 percent of the FHA default rate.
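The mechanics of a residual income test are easy to sketch. Here is a minimal illustration; the $1,000 threshold is a made-up stand-in, since the VA's actual tables vary by family size, region, and loan amount:

```python
# Sketch of a residual income test, following the description above.
# The $1,000 threshold is an illustrative stand-in; the VA's actual
# thresholds vary by family size, region, and loan amount.
def residual_income(gross_monthly_income, mortgage_payment,
                    other_housing_costs, other_debt_payments, taxes):
    """Money left each month after the mortgage and related expenses."""
    return (gross_monthly_income - mortgage_payment
            - other_housing_costs - other_debt_payments - taxes)

def passes_test(residual, threshold=1_000):
    return residual >= threshold

resid = residual_income(gross_monthly_income=4_000, mortgage_payment=1_400,
                        other_housing_costs=250, other_debt_payments=400,
                        taxes=800)
print(f"Residual income: ${resid}; passes: {passes_test(resid)}")
# Residual income: $1150; passes: True
```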

While adding a residual income test may cause some families to rethink or delay a home purchase or purchase a less expensive house, it also appears to be an effective way to reduce default rates and ensure borrowers take out mortgages they can afford. FHA and conventional programs should consider adding residual income to their underwriting. Moreover, lenders making higher cost Qualified Mortgages may want to consider using a residual income screen to provide more certainty that their borrowers can truly repay the loan.

Filed under: Agency securitization, Credit availability, Economic Growth and Productivity, Federal programs and policies, GSE reform, Homeownership, Housing and Housing Finance, Housing and the economy, Housing finance, Housing Finance Policy Center, Tracking the economy

Why government data sites are so hard to use

Author: Jon Schwabish | Posted: July 14th, 2014


A couple of weeks ago over at FlowingData, Nathan Yau wrote a post about how to improve government data sites. The post was mostly a constructive critique of the difficulties users have extracting and using data provided by the federal government. (Surely state and local governments create similarly poor interfaces). It’s not that I disagree with Nathan, but I think it’s worth digging a little deeper into why government web sites and data sets aren’t particularly user-friendly.

Having worked at a government agency for nearly a decade and spoken to countless agencies about data visualization, presentation techniques, and technology challenges over the past few years, I thought I might add my own perspective.

In his post, Nathan suggests three reasons why government data sites are inexcusably poor:

Maybe the people in charge of these sites just don't know what's going on. Or maybe they're so overwhelmed by suck that they don't know where to start. Or they're unknowingly infected by the that-is-how-we've-always-done-it bug.

In my experience, government web sites aren’t difficult to use or extract data from because government workers don’t “know what’s going on” or are “overwhelmed by suck.” The real answer is probably closer to the “that-is-how-we’ve-always-done-it bug”—but even that simplifies a more complicated story.

Let’s say for the moment that you work at a large government agency and your job is to process a large household survey and make it available to the public (think, say, the Census Bureau). Up until the past couple of years or so, your target audience was other government workers, academics, and researchers in similar fields. And most of those analysts use tools similar to the ones you’re using: Stata, SAS, SPSS, MATLAB, maybe a little Fortran or C++. So what do you do? You create a data file so that they can download it, unpack it, and analyze it using those programming languages. Your primary audience is not journalists (data-driven journalism had not yet taken off) or bloggers (in-depth data blogging was just beginning) or data scientists (the term didn’t even exist).

Now, however, with the Open Data movement, interest in and demand for Big Data, expanded open source programming languages and tools, and the general explosion of DATA EVERYWHERE, everyone is clamoring for more of your government data. So the mandate has changed. And you, as the government worker who has for so long processed this survey the same way, now are being asked to provide that data in a variety of formats. You’re not familiar with those different file formats or tools, so you ask about training or maybe even hiring some additional staff. Unfortunately, that’s probably not going to happen. Demand for more (or better) data has not translated into more funds to train existing staff or hire new staff. For example, between fiscal years 2011 and 2013, the overall budget appropriation for the Census Bureau fell from $1.2 billion to $859.3 million, a decline of over 25 percent. (It’s hard to tell, but that may actually be an overstatement of the decrease, if there were still some extra funds in the 2011 appropriation to process the 2010 decennial census.) At the Bureau of Economic Analysis, the producer of the National Income and Product Accounts, total appropriations fell by a smaller amount: from $93 million in 2011 to $89.8 million in 2013.
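To make the shift concrete, here is a toy version of the kind of conversion now being asked of agency staff: repackaging a fixed-width survey extract (the sort of file Stata and SAS users expect) into web-friendly formats. The file name, column names, and widths are all invented for illustration:

```python
import pandas as pd

# Toy conversion of a fixed-width survey extract into CSV and JSON.
# The file name, column names, and widths are invented for illustration.
colspecs = [(0, 9), (9, 11), (11, 18)]
names = ["household_id", "state", "income"]
df = pd.read_fwf("survey_extract.dat", colspecs=colspecs, names=names)

df.to_csv("survey_extract.csv", index=False)
df.to_json("survey_extract.json", orient="records")
```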

I don’t believe that government agencies can’t or don’t want to make their data more accessible or are so overwhelmed by the technology that they’re unable to come up with solutions. Instead, I think many agencies have yet to adjust to a world that demands data, and demands that it be easily accessible at all times. It’s going to take time, money, and training for the government to catch up.

Filed under: Economic Growth and Productivity, Income and Benefits Policy Center, Monetary policy and the Federal Reserve, Tracking the economy

The Federal Reserve is not ending its stimulus

Author: Donald Marron | Posted: July 10th, 2014


Yesterday, the Federal Reserve confirmed that it would end new purchases of Treasury bonds and mortgage-backed securities (MBS)—what’s known as quantitative easing—in October. In response, the media are heralding the end of the Fed’s stimulus:

  • “Fed Stimulus is Really Going to End and Nobody Cares,” says the Wall Street Journal.
  • “Federal Reserve Plans to End Stimulus in October,” reports the BBC.

This is utterly wrong.

What the Fed is about to do is stop increasing the amount of stimulus it provides. For the mathematically inclined, it’s the first derivative of stimulus that is going to zero, not stimulus itself. For the analogy-inclined, it’s as though the Fed had announced (in more normal times) that it would stop cutting interest rates. New stimulus is ending, not the stimulus that’s already in place.
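A toy timeline makes the distinction concrete (the figures are illustrative, not actual Fed data):

```python
# Stock vs. flow: monthly purchases (the flow) taper to zero, but the
# Fed's holdings (the stock) stay put. Figures are illustrative.
holdings = 4_200                    # $ billions already on the balance sheet
purchases = [35, 25, 15, 0, 0, 0]   # $ billions per month, tapering to zero

for month, flow in enumerate(purchases, start=1):
    holdings += flow
    print(f"Month {month}: flow = ${flow}B, stock = ${holdings:,}B")
# The flow hits zero, yet the stock (and, on the "stock" view, the
# stimulus) remains above $4 trillion.
```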

The Federal Reserve has piled up more than $4 trillion in long-term Treasuries and MBS, thus forcing investors to move into other assets. There’s great debate about how much stimulus that provides. But whatever it is, it will persist after the Fed stops adding to its holdings.

(P.S. I have just espoused what is known as the “stock” view of quantitative easing, i.e., that it’s the stock of assets owned by the Fed that matters. A competing “flow” view holds that it’s the pace of purchases that matters. If there’s any good evidence for the “flow” view, I’d love to see it. It may be that both matter. In that case, my point still stands: the Fed will still be providing stimulus through the stock effect.)

Filed under: Economic Growth and Productivity, Monetary policy and the Federal Reserve, Public and private investment, Taxes and Budget, Tracking the economy

What you need to know about the new workforce development bill

Author: Lauren Eyster | Posted: July 9th, 2014


After more than a decade of continuing resolutions, a bipartisan bill to reauthorize the Workforce Investment Act of 1998 (WIA) has passed Congress and should go this week to the White House for the president’s signature.

While in no way perfect, the new Workforce Innovation and Opportunity Act (WIOA) is a clear improvement over its predecessor. It builds on 16 years of learning and knowledge and will provide better opportunities for workers who need new skills for the new economy.

Our paper last year on the innovations and future directions of workforce development highlights some of the key ideas that are embedded in WIOA.

Encouraging innovation. WIOA encourages local workforce boards to use promising strategies such as career pathways and sector strategies to better serve workers and employers. The advantage of these approaches is that they connect employer demand for skills and worker characteristics and abilities with the design of education and training programs. WIOA also restores the provision allowing governors to reserve a full 15 percent of WIOA funds for statewide activities, letting them support greater innovation in their states.

Attaining industry-recognized credentials. One of the new core performance indicators under WIOA measures a student or trainee’s progress toward recognized postsecondary credentials. Again, this is designed to link employer and worker needs, as employers can be more confident that graduates have the right skills.

Improving data for measuring performance. While the original Workforce Investment Act introduced common measures of performance, WIOA strengthens performance reporting by enhancing and aligning a set of performance indicators across adult (including adult education) and youth programs. The legislation also supports efforts to link participant data to earnings data across all WIOA-funded programs and to coordinate state and federal evaluation efforts.

Refocusing on disadvantaged populations. The Workforce Investment Act dismantled most requirements around serving disadvantaged populations. WIOA does not reinstate those provisions, but it does require boards and One-Stop operators to develop practices that encourage serving individuals with barriers to employment. This could be challenging, given that states must still meet performance-level requirements, but it could help ensure that more disadvantaged individuals receive the often longer-term services they need.

What WIOA does not do is immediately return overall workforce development funding to pre-sequestration levels. Funding will increase annually through 2020, but states and local areas will continue to be asked to do more with less.

For more of the legislative details, see the National Skills Coalition’s great side-by-side analysis of WIA and WIOA provisions.

Photo from Shutterstock.

Filed under: Economic Growth and Productivity, Education and Training, Income and Benefits Policy Center, Income and Wealth, Job Market and Labor Force, Job training and apprenticeships, Labor force, Low-wage workers, Tracking the economy, Unemployment, Wages and nonwage compensation, Work support strategies, Workforce development, training, and opportunity

Maps need context

Author: Jon Schwabish and Bryan Connor | Posted: July 2nd, 2014

Maps might be the most data-dense of all visualizations. Consider your basic roadmap: it includes road types (highways, toll roads), directions (one-way, two-way), geography (rivers, lakes), cities, types of cities (capitals), points of interest (schools, parks), and distance. Maps that encode statistical data, such as bubble plots or choropleth maps, are also data-dense, replacing some of these geographic features with statistical encodings. But lately we’ve been wondering whether most maps fail to convey enough context.

[Map: poverty rates by district in India]

As an example, consider this map of poverty rates by district in India. It’s a fairly simple choropleth map, and you can immediately discern patterns: high poverty rates are concentrated in the districts in the northernmost part of the country, on part of the southeast border, and in a stretch across the middle of the country. Another set of high-poverty areas appears in the northeast part of the map. But here’s the thing: we don’t know much about India’s geography. Without some context, such as cities or population centers, we can only guess what this map is telling us.

Many readers will be more familiar with the geography of the United States. So when maps like this one from the Census Bureau show up, we are better equipped to understand it because we’re familiar with areas such as the high-poverty South and around the Texas-Mexico border. But then again, what about readers familiar with basic U.S. geography, but not familiar with patterns of poverty? How useful is this map for them?

To more completely understand data encoded to maps, context is important. Where are the city centers? What are the patterns of population or income or other metrics that may be important?
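One way to supply that context is in the mapping code itself. Here is a minimal sketch with geopandas that overlays city reference points on a choropleth; the shapefile path, the poverty_rate column, and the choice of cities are all assumptions for illustration:

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical inputs: a district-level shapefile with a 'poverty_rate'
# column. The path and column name are assumptions for illustration.
districts = gpd.read_file("india_districts.shp")

fig, ax = plt.subplots(figsize=(8, 10))
districts.plot(column="poverty_rate", cmap="OrRd", legend=True, ax=ax)

# Overlay a few major cities so readers unfamiliar with the geography
# have reference points (longitude/latitude pairs).
cities = {"Delhi": (77.21, 28.61), "Mumbai": (72.88, 19.08),
          "Kolkata": (88.36, 22.57), "Chennai": (80.27, 13.08)}
for name, (lon, lat) in cities.items():
    ax.scatter(lon, lat, color="black", s=15, zorder=3)
    ax.annotate(name, (lon, lat), xytext=(4, 4),
                textcoords="offset points", fontsize=8)
ax.set_axis_off()
plt.show()
```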

For example, a team (including Bryan Connor) from this year’s civic hack day in Baltimore built something to address this problem directly. The mapping tool lets you build and compare every map that can be generated from the city’s data (as collected and published by the Baltimore Neighborhood Indicators Alliance). Poverty can be placed next to population, racial diversity next to education data, and so on.

[Screenshot: Baltimore Neighborhood Indicators Alliance mapping tool]

The data visualization studio Interactive Things provides us with another good example with their Daily Swiss Maps project for NZZ. Over several months, they worked with an editorial team to build maps that reveal new insights about Switzerland. Some feature small multiples, others highlight population distribution, and all of them link to a corresponding editorial explanation on nzz.ch.

Maps are among the most popular visualization types, and not just maps of favorite beer types or accents: maps can pack dense levels of data into a familiar format. It’s just that sometimes those maps leave out the context that would help us better understand the information being shown.

Are we suggesting that every single map needs several other maps to give context? Well, maybe. Or perhaps single maps need more and better annotation in order to highlight regions and patterns of relevance.

Filed under: Baltimore, Economic Growth and Productivity, Finance, Geographies, Income and Wealth, International