What happened to the good jobs? This is the question posed by fast-food workers who walked out in New York and Chicago in recent weeks. It is the question posed by activists in those corners of the economy—including restaurants and domestic work and guest work—where the light of state and federal labor standards barely penetrates. And it is the question posed (albeit from a different set of expectations) by recent college graduates for whom low wages and dim prospects are the dreary norm.
There is no shortage of suspects for this sorry state of affairs. The stark decline of organized labor, now reaching less than 7 percent of private-sector workers, has dramatically undermined the bargaining power and real wages of workers. The erosion of the minimum wage, with meager increases overmatched by inflationary losses, has left the labor market without a stable floor. And an increasingly expansive financial sector has displaced real wages and salaries with speculative rent-seeking.
New work by John Schmitt and Janelle Jones at the Center for Economic and Policy Research recasts this question, posing it not as a causal riddle but as a political challenge: what would it take to get good jobs back?
Schmitt and Jones start with a basic distinction between good jobs (those that pay $19 an hour or better and offer both job-based health coverage and some retirement coverage) and bad jobs (those that meet none of these criteria). Each of these categories accounts for about a quarter of the workforce (the rest fall somewhere in between), with the share of good jobs slipping since 1979 and the share of bad jobs creeping up. The goal, by simulating the impact of different policy interventions, is to increase the share of good jobs and to eliminate—as much as possible—the bad jobs entirely.
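To make the distinction concrete, here is a minimal sketch of the classification rule as described above (this is an illustration, not Schmitt and Jones's actual code; the function and field names are hypothetical):

```python
# A minimal sketch of the good-job/bad-job classification described above.
# The $19/hour threshold and the two benefit criteria follow the text;
# the function name and arguments are hypothetical.

def classify_job(hourly_wage, has_health_coverage, has_retirement_plan):
    """Return 'good', 'bad', or 'in between' under the Schmitt-Jones criteria."""
    if hourly_wage >= 19.0 and has_health_coverage and has_retirement_plan:
        return "good"
    if hourly_wage < 19.0 and not has_health_coverage and not has_retirement_plan:
        return "bad"
    return "in between"

print(classify_job(21.50, True, True))    # good
print(classify_job(7.25, False, False))   # bad
print(classify_job(10.00, False, False))  # bad: a higher wage alone does not make a good job
```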
Some policies—however salutary—would have little impact on this “good job-bad job” distribution. Raising the minimum wage, for example, would boost the earnings of 30 million workers, but it would do so by transforming bad jobs into not-quite-so-bad jobs. A worker earning $10 an hour without benefits, after all, is still pretty far removed from a good job.
The graphic below summarizes the findings of Schmitt and Jones, for men and women, for five policy changes. Gender pay equity, not surprisingly, would yield some small gains for women—a slightly higher percentage of good jobs, and a slightly lower percentage of bad jobs. A 25 percent increase in college attainment yields only a modest improvement, a finding consistent with other research suggesting that wages are falling despite increasing educational attainment and not because there is some “skills” mismatch between available workers and available jobs.
There is a stronger payoff for collective bargaining, which Schmitt and Jones simulate with an increase in union density sufficient to capture the same number of workers as the increase in college attainment (in the first scenario, 8.7 percent of the workforce are given college diplomas; in the second, 8.7 percent of the workforce are given union cards). This yields not only a union wage premium but higher rates of job-based health and pension coverage. But the payoff is not as big as one might expect, probably because labor’s ability to deliver such benefits to its members has fallen as its share of the workforce has gone down. Simply bumping up the union density rate, in other words, is not the same thing as reclaiming the labor movement of past generations.
The strongest payoff comes with socializing and universalizing health and retirement coverage. Adopting either would erase the bad jobs entirely. Adopting both would push the share of good jobs to nearly half (50 percent for men, 39 percent for women). This resonates with our understanding of the perverse logic of job-based social policy—which tends to widen inequalities (good jobs, after all, are the ones with good benefits) rather than close them. It resonates with our understanding of the broader benefits of universal social policy—which wipes away not only the waste and stigma associated with risk-rating and means-testing, but the crushing insecurity of going uncovered or uninsured. And it resonates with our political and economic realities, in which incremental progress on social policy (maybe just in the states) seems more likely than a surge in labor organization and more resourceful than deep personal investments in education.
Organized labor in the United States has always been an urban institution. Early organizing depended heavily on the natural solidarities of workplace and neighborhood, and on the ability of urban labor unions to support one another—through boycotts and secondary pickets, and through political bodies like City Feds or Central Labor Councils. And while the CIO shifted labor’s attention from metropolitan to sectoral organizing, its sustained successes—autoworkers in Detroit, packinghouse workers in Chicago, dockworkers in San Francisco—were still rooted in urban settings.
The relationship between urban density and union density remains important. Cities facilitate organizing. This month’s fast food strikes in New York City, for example, would have been much harder to pull off—and much less visible—in New Rochelle or New Paltz. It is easier to sustain organizational victories in urban settings, where other workers, customers, and the broader public are attentive to working conditions and employer tactics. And, once organized, metro unions can make it harder for marginal employers to bid down wages; they can block the low road and pave the high road of local economic development.
But new estimates of union density in American metropolitan settings show that relationship unraveling. The first graphic below ranks 284 metropolitan areas by their 2012 union density (private, public, or all workers), and—in the second panel—shows the sectoral breakdown for each of those metro areas. The second graphic maps private sector employment and union density across all (continental) metro areas.
In some respects, the results are unsurprising. The densest union settings remain clustered in the Northeast, the Rustbelt, and California. Reflecting the importance of public sector unionism, some of the strongest overall union settings are state capitals: Lansing, Olympia, Sacramento, and so on. And the weakest union presence—especially evident on the map—can be found in the “right-to-work” settings of the Deep South and Mountain West.
But the metro advantage is also surprisingly weak. More than half (155 of 284) of the nation’s metros have a private sector union density below the national rate of 6.2 percent. Indeed, union strength—in terms of overall numbers and density—is largely vested in ten or twelve northern and western cities (use the slider on the map to narrow the range of cities by the number of union members). Unions are not thriving in cities, but they are hanging on where they claim some historic organizational base.
There are a number of things going on here. For starters, the city of 2012 is clearly different from the city of 1935 or 1955. It is thinner, less urban in form, and—as a consequence—less able to sustain solidarity. Metropolitan St. Louis, to cite one example, had a population of just under two million in 1960, spread over four counties and about 2,000 square miles. By 2010, the metro population was still under three million, but the metro area now sprawled across seventeen counties and over 9,000 square miles—for a population density (about 325 persons/square mile) a third of what it had been a half-century earlier.
In turn, the occupational base of those cities has changed dramatically. Deindustrialization has devastated the Rustbelt: in just the last twenty-five years, cities like Detroit, Cleveland, and Milwaukee have lost anywhere from a third to three-quarters of their unionized manufacturing jobs. In some settings (as in much of the Northeast) such losses have been accompanied by an equally dramatic surge in (largely non-union) service employment. And many of the nation’s newer metro areas, especially in the South and the Sunbelt, have been postindustrial settings all along.
For all of these reasons, as Enrico Moretti and others have suggested, the generic economic advantages enjoyed by cities (and their workers) have given way to stark differences among them. While some may be able to thrive as hubs of innovation and high-wage employment, others are likely to slip further and further behind. And as the gaps between cities widen, so too do the gaps within cities: economic segregation—the physical, political, and social isolation of the rich and poor—is starker now than it has ever been.
In this world, a little bit of new organizing—hinted at by recent action in big-box retail and fast food—could make a big difference. And it could make a big difference not just for the workers directly involved, but for the economic vitality and equity of the cities in which they live.
Almost four years into the “recovery,” the employment picture is still grim. It’s not just the unemployment rate’s agonizingly slow descent. We still face persistently high rates of underemployment (including those who would like to work but have given up looking, and those working part-time because they cannot find full-time work). And there is little sign of recovery from an unprecedented collapse in labor-force participation, as many bail out of the workforce entirely.
But perhaps the starkest trend is the spike in long-term unemployment. Prolonged joblessness has always risen during recessions, but has never shot up the way it did after 2007 or stuck around so long into the “recovery.” The mean (average) duration of unemployment (see graphic below) jumped to over forty weeks in December of 2011—nearly double its previous peak of twenty-one weeks in late 1982—and still sits at nearly thirty-seven weeks. The median duration (half of the jobless spend less time unemployed, half spend more) rose as high as twenty-five weeks—more than double its previous (1982) peak. Indeed, because these rates have been so high for so long, the Bureau of Labor Statistics and the Census Bureau had to update their surveys in 2010 to allow respondents to report unemployment durations of up to five years (the previous upper bound was two years).
While there is no official definition of long-term unemployment, the commonly-used threshold is six months or twenty-seven weeks. As of February, about 40 percent of the jobless fit this description, a share that has not budged in a year. This is also the point (twenty-six weeks) at which unemployment insurance coverage would lapse in most states. Federal extensions push this out to between forty and seventy weeks (depending on state law and state unemployment rates), but workers are still exhausting their benefits (the red line on the graph) at starkly higher rates than ever before.
The cruel punch line here is that some states (most recently North Carolina) are meeting unprecedented need by slashing benefits. And because of the way in which the costs of unemployment insurance are shared, state action pares access to both state and federal benefits. Consider Georgia, which has chopped its eligibility for state benefits from twenty-six to eighteen weeks. This, in turn, reduces the number of weeks of extended (federal) benefits available to the state’s unemployed, for a net loss of nineteen weeks of eligibility. At Georgia’s average weekly benefit (about $275), the net loss in 2013 to each unemployed worker is over $5,000.
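The arithmetic behind the Georgia figure is simple to reproduce. The sketch below takes the nineteen-week net loss and the roughly $275 average weekly benefit from the text as given; the split between state and federal weeks is illustrative:

```python
# Back-of-the-envelope arithmetic for the Georgia example above.
# The nineteen-week net loss and the ~$275 average weekly benefit come from the text;
# the split between state and federal weeks is illustrative.
state_weeks_lost = 26 - 18                    # eligibility cut from 26 to 18 weeks
federal_weeks_lost = 19 - state_weeks_lost    # remainder of the 19-week net loss
avg_weekly_benefit = 275                      # approximate Georgia average

net_loss = (state_weeks_lost + federal_weeks_lost) * avg_weekly_benefit
print(net_loss)                               # 5225 -> "over $5,000" per worker
```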
And then, of course, the sequester kicks in. While state unemployment insurance funds are not affected by the across-the-board budget cuts, federal benefits will take a big hit. And because the 5 percent fiscal year cut has to be absorbed in just six or seven months (the fiscal year ends on September 30), the reduction in federal benefits is likely to be on the order of 10 or 12 percent.
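A rough proration shows why a 5 percent annual cut translates into a much larger cut in the months that remain (the six-month figure below is an assumption, not an official estimate):

```python
# Prorating a 5 percent annual cut over the months left in the fiscal year.
# Six remaining months is an assumption; the exact effective cut depends on timing.
annual_cut = 0.05
months_remaining = 6

effective_cut = annual_cut * 12 / months_remaining
print(round(effective_cut, 3))   # 0.1 -> a cut on the order of 10 percent
```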
All of this promises to do lasting damage. The burden of long-term unemployment, as John Schmitt and Janelle Jones have shown, falls disproportionately on those already disadvantaged in the labor market—African Americans, Latinos, less-educated workers, and young workers. Long stretches of joblessness bring with them not only economic insecurity but (as Dean Baker and the Pew Fiscal Analysis Initiative underscore) stark personal and social costs—including real barriers to re-entering the workforce, physical and psychological costs to workers and their families, and general productivity losses.
Political scientists and party activists continue to sift through the demographic tea leaves left behind by the last election—the Democrats’ white South problem, the Republicans’ larger race problem, the growing generational divide. But the most striking gap between the parties is not the race or age or even income of voters. It’s how close they live to each other.
The map below follows up on the insights of Dave Troy, who finds a decisive break between Democratic and Republican support at a population density of about 800 persons per square mile. In less-densely populated counties, about two-thirds voted for Romney; in more-densely populated counties, about two-thirds voted for Obama. The distinction here is not between suburban conservatism and skyscraper liberalism: even most small cities and suburbs have thousands of persons per square mile (Cedar Rapids, Iowa, for example, has a population density of about 2,400 per square mile).
The 2012 results by county appear below (using the conventional red-blue color ramp). I’ve sorted the counties by density and divided them into five groups (quintiles), each encompassing about one-fifth of the total population. The first quintile, for example, starts from the least-densely populated county and moves up the scale until we have enough counties to capture a fifth of the population. The number of counties in each group varies widely: the least-densely settled fifth live scattered across 2,288 counties; the most-densely settled fifth live in a mere 58 counties. The population and electoral shares, for whatever counties are selected, show in the boxes at right. (You’ll need to select the “all” button below if you want to see every county at once.)
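For readers who want to reproduce the grouping, here is a sketch of the procedure described above, with hypothetical field names and no particular data source assumed:

```python
# A sketch of the grouping procedure described above: sort counties by population
# density, then split them into five population-weighted groups (quintiles).
# Field names and the data source are hypothetical.

def density_quintiles(counties):
    """counties: list of dicts with 'name', 'population', and 'density' keys."""
    ordered = sorted(counties, key=lambda c: c["density"])
    total_pop = sum(c["population"] for c in ordered)
    target = total_pop / 5

    groups, current, running = [], [], 0
    for county in ordered:
        current.append(county)
        running += county["population"]
        if running >= target and len(groups) < 4:
            groups.append(current)
            current, running = [], 0
    groups.append(current)   # the remaining, most densely settled counties
    return groups
```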
The core pattern is confirmed by toggling between the first and fifth quintiles (the least- and most-densely populated counties). The first quintile encompasses almost three-quarters of the nation’s counties and over 80 percent of its land area (all totals here are for the continental U.S. only). Romney’s margin in this group (60:40) is decisive, but this is only a fifth of the electorate. The fifth quintile, by contrast, is virtually invisible on the map (58 scattered urban counties), but, with a few exceptions (Orange County, California; Cobb County, Georgia), these counties all went for Obama.
There is, of course, a lot going on here, including a long history of regional and metropolitan patterns in partisan alignment. But a hopeful reading of the map would go something like this: People who live close to one another are more likely to know someone of a different color, a different income group, or a different sexual orientation. They therefore rely upon and appreciate the provision of public goods and public services (transit, parks, garbage collection), even as they consume fewer public dollars than their less-densely populated counterparts.
If pressed to reduce the last century of economic history into one graphic, I would go with something like this. The blue line traces the rise and decline of organized labor since the end of the First World War. The red line, in an uncanny reflection, traces the income share of the richest 10 percent of Americans. The drop-down menus, offering other union density and income-share metrics, serve up variations on the same theme: as union power has declined, so too has the share of national income going to wages and salaries, and to the bottom and middle of the income spectrum.
There are, of course, a lot of other things going on here. The heyday of shared prosperity (the middle years of the last century, where the two lines converge) depended on a more elaborate policy framework. While federal support for collective bargaining rights sustained a surge in labor organization, other political innovations of the New Deal (including social security and the minimum wage) secured a floor for working-class incomes. Postwar social movements girded that floor by closing off avenues for discrimination. The tax system and regulatory obstacles to speculative finance erected something of a ceiling for higher incomes. And substantial public investments (in things like the GI Bill, mortgage subsidies for veterans, housing projects, and the interstate highway system) kept the rest of the structure in pretty good repair. Since then, to put it bluntly, we’ve pretty much torn the whole house down.
On this score, union decline is—for a number of reasons—a pretty good marker for the broader dismantling of the New Deal. First, the policies driving and shaping inequality across the last generation—steep cuts in social spending, the political abandonment of organized labor, deregulation and privatization, tax cuts, and punitive cycles of unemployment—shared a common goal: to redistribute income upward by eroding the hard-fought bargaining power of ordinary Americans. Union losses account for a large chunk of rising inequality, especially for men and especially in the 1970s and 1980s.
Second, union losses have also shaped the political environment. The “right-to-work” push of the 1940s, the business offensive of the 1970s (captured in Lewis Powell’s infamous 1971 memo to the Chamber of Commerce), and the attack on public sector unions in recent years all shared the conviction that union power needed to be checked at the bargaining table and at the ballot box. Representing a third of the private workforce, mid-century unions fought and won battles over trade, workplace safety, social policy, and civil rights. With union membership at 6.6 percent of the private labor force in 2012 and falling, those battles are no longer even taking place.
And third, union decline has fed broader inequality because, in the American context, so much is at stake at the bargaining table. In settings where workers (and employers) can count on a decent minimum wage, universal health care, and expansive public retirement accounts, the stakes of private employment (and collective bargaining) are not that high. In the United States, economic security remains shackled to private job-based benefits that are increasingly elusive (or expensive), and public policies are crafted and calibrated as supplements—or as reluctant and lean alternatives.
In his State of the Union address, President Obama heartened many progressives with a call for raising the minimum wage to $9.00 (from its current $7.25), and then pegging its value to increases in the cost of living. This would be a bold move, and it raises an important question: What should the minimum wage be? What is the appropriate floor for the labor market?
This question was first raised a century ago in a scattering of state-level efforts. At the time, many reformers saw wage regulation as a means of pressing some workers—women, children, blacks, and immigrants—out of the labor market entirely. The goal was not so much a floor to lift wages as a door to shut out low-wage “chiselers.” In an era when most other industrial democracies were forging ahead with broader minimum-wage laws, American efforts were constrained to a few states, aimed at a few workers, and routinely disdained by the courts—which saw them as violations of the freedom to make contracts.
The economic crisis of the 1930s recast this debate. Early on, New Dealers saw the minimum wage as part of a broader push for “fair competition,” sustaining responsible employers by penalizing their low-wage competitors. The idea of calibrating the minimum to the wages of other workers was embedded in “prevailing wage” benchmarks for all government contracts. Some keyed labor standards to living standards, targeting—as FDR put it in his second inaugural address—the “one-third of a nation ill-housed, ill-clad, ill-nourished.” Some saw the minimum wage as an essential complement to the protection of labor’s rights; it was a way to “underpin the whole wage structure….[to] a point from which collective bargaining could take over.” And, most broadly (in an argument that resonates today), the minimum wage was pursued as a recovery strategy, using increased purchasing power to achieve renewed prosperity.
The end result, the Fair Labor Standards Act (FLSA) of 1938, banned most child labor, established a maximum workweek of 44 hours (rolled back to 40 hours in 1940), and set a minimum hourly wage of 25 cents (about $3.60/hour in inflation-adjusted, 2012 dollars). This was a modest starting point, an unhappy compromise between labor and New Deal interests looking to raise the floor and business and southern interests looking to keep it low (and full of holes). In the decades that followed, the scope and level of the minimum wage remained a political struggle. The federal minimum has been raised twenty-three times since 1938. Most of these amounted to a bump of a dime or a quarter—and never more than 70 cents in one shot.
Every legislative battle over the minimum wage has been marked by grave concerns about interfering with markets or freedom of contract, and by dire predictions (debunked here and here and here) that each increase would drive businesses into bankruptcy and workers into the soup lines. In the absence of a professional wage commission (common in other countries, including the United Kingdom), every increase had to run the gauntlet of Congress. This not only pared any increase back to what was politically feasible in any given session, but also ensured—given the long reign of “Jim Crow” southerners in Congress—that many sectors and occupations (agriculture, domestics) would be exempted from its coverage altogether.
As a result, the American minimum wage hits a lower target, and covers a smaller share of its workforce, than those in most of its peer countries. Of the subset of thirteen rich OECD democracies with comparable data, all but two (Spain and Portugal) have higher minimum-wage rates (calculated at either the exchange rate or the purchasing power parity of the U.S. dollar) than the United States. On the ratio of minimum-wage rates to the median earnings of full-time workers in each country, the United States ranks dead last.
But let’s set the politics aside for the moment and imagine that the minimum wage could be linked—now and across its history—to some clear and objective criteria. We chart these in the graphic below. A first gambit, following the president’s logic, would be to use the poverty level. To keep a family of three above the 2013 federal poverty line, a full-time worker would need a minimum wage of $9.55 (the lowest of the reference lines on the graph below).
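The conversion from an annual poverty threshold to an hourly wage is straightforward. The sketch below assumes a 2,080-hour work year (40 hours for 52 weeks); the threshold is a hypothetical figure chosen to reproduce the $9.55 result, not an official Census or HHS number:

```python
# Converting an annual poverty threshold into an hourly wage floor.
# The threshold below is a hypothetical figure chosen to reproduce the $9.55 result;
# the 2,080-hour work year (40 hours x 52 weeks) is the usual full-time assumption.
annual_threshold_family_of_three = 19_864     # hypothetical, not an official figure
full_time_hours = 40 * 52                     # 2,080 hours

hourly_floor = annual_threshold_family_of_three / full_time_hours
print(round(hourly_floor, 2))                 # 9.55
```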
It is widely recognized, however, that the poverty level is an outdated and insufficient measure of family needs and expenses: it is based on spending patterns of the 1960s, it underestimates the costs of transportation and housing and health care, and it omits other expenses—most notably child care—altogether. For these reasons, the Census Bureau has begun to experiment with an alternative supplemental poverty measure. This index, as Shawn Fremstad points out, puts minimum-wage workers and their families even further behind. The supplemental poverty wage for a family of four would be about $12.87/hour. An even better yardstick would be a “living wage” that accounts for both the actual cost of living and any taxes or transfers. Such a threshold varies across and within states, but even where the cost of living is modest, it would suggest a minimum wage two or three times the current level (in Harlingen, Texas, the living wage for a family of three is $20.97; in Des Moines, Iowa, it is $25.38; in New York City, $32.30).
The second option, suggested by the second half of the president’s pitch, is that once the wage is pulled to a more respectable level, its future value be indexed to inflation. In this way the maintenance of its real value would not be dependent on the whims of Congress—which has twice in the last generation allowed a decade to pass between increases (1981-1991 and 1997-2007).
The inflation-adjusted minimum wage is shown in the above graph as the “real minimum wage” (the statutory minimum expressed in 2012 dollars). This yields the commonly cited benchmark of about $10.50 (adjusting for inflation using the basic consumer price index [CPI-U]), the value of the minimum at its peak in 1968. If we use the preferred CPI-U-RS index of inflation (which applies our current method for measuring price increases to earlier periods, yielding a slightly lower rate of inflation), the 2012 minimum wage would be $9.25.
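The adjustment itself is a simple ratio of price indexes. The sketch below uses approximate CPI-U values for illustration; it is not the exact calculation behind the figures above:

```python
# Inflation-adjusting the 1968 minimum wage into 2012 dollars.
# The CPI-U values below are approximate annual averages, used only for illustration.
nominal_min_1968 = 1.60          # statutory federal minimum in 1968
cpi_u_1968 = 34.8                # approximate CPI-U, 1968 annual average
cpi_u_2012 = 229.6               # approximate CPI-U, 2012 annual average

real_min_2012 = nominal_min_1968 * cpi_u_2012 / cpi_u_1968
print(round(real_min_2012, 2))   # ~10.56, the commonly cited ~$10.50 benchmark

# Repeating the calculation with the CPI-U-RS, which measures somewhat lower
# inflation for earlier decades, yields the ~$9.25 figure cited above.
```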
The danger here, of course, is that a built-in cost-of-living adjustment would likely displace or at least delay future legislated increases. As a result, the level and impact of a higher minimum would depend greatly on the starting point. Starting north of $9.00/hour would put us about where we were in 1968. Starting any lower might condemn us to a meager minimum wage for the foreseeable future.
A better option would be to calibrate the minimum wage to productivity, or to the growth of the economy. There is a certain logic and justice to the assumption that the rewards of economic growth be shared equally. In fact, the minimum wage did keep pace with productivity from the end of the Second World War into the late 1960s, helping (alongside strong unions and other political commitments) to ensure shared prosperity during that era.
There are a few ways of imagining this link between economic growth and the minimum wage. If the minimum wage had tracked productivity growth (using the output per hour of all persons in the nonfarm business sector) since 1947, it would be at $14.73/hour today. If it had tracked productivity growth just since 1968 (a higher starting point, when the minimum’s real dollar value peaked at $9.25), the minimum wage would be $20.85 today. And even if the minimum reflected only half of the productivity gains since 1968, it would still be $15.05/hour.
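The productivity-indexing logic can be made explicit. In the sketch below, the growth factor is backed out of the figures quoted above rather than taken from BLS productivity data, so treat it as an illustration of the arithmetic rather than an independent estimate:

```python
# Backing the productivity-indexing arithmetic out of the figures quoted above:
# a $9.25 real minimum in 1968, rising to $20.85 if fully indexed to productivity.
real_min_1968 = 9.25
fully_indexed_2012 = 20.85

productivity_growth = fully_indexed_2012 / real_min_1968 - 1   # ~1.25, i.e. ~125 percent

# A minimum wage capturing only half of those productivity gains:
half_indexed = real_min_1968 * (1 + 0.5 * productivity_growth)
print(round(half_indexed, 2))   # ~15.05, matching the figure in the text
```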
The growing gap between productivity and the minimum wage (and between productivity and earnings more generally) is pretty stark. This is partly explained by the changing character of that productivity: since the 1970s, a growing share of investment has been siphoned off for depreciation (much of it to pay for the rapid turnover of computer hardware and software). As Dean Baker and Will Kimball have argued, this reduces the “usable productivity,” or the share of economic growth that might reasonably be expected to show up in paychecks. But even if we account for this, the gap still yawns: even if it were pegged to this more conservative measure of growth since 1968, the minimum wage would be $16.54/hour today.
Finally, we could tie the minimum to the wages of other workers. This, in effect, retreats from the assumption that all wages (and the minimum) should rise with productivity, suggesting instead that the minimum wage should be calibrated to the earnings of the typical worker. Unfortunately, the Bureau of Labor Statistics does not have consistent, reliable data on the wages earned by the median worker (the worker who earns more than half of all workers and less than the other half). Instead, we use the average hourly wage of production and nonsupervisory workers, who constitute around the bottom 80 percent of the workforce.
At its inception in 1938, the minimum wage was a little more than a third of the average production wage. The 1950 increase (to $.75/hour) pulled the minimum wage to over half the average production wage—a ratio it maintained into the late 1960s. But, as noted, the value of the minimum has dropped since then. Even in an era of general wage stagnation, the minimum wage has dropped back to barely a third of the average production wage. If it had stayed at half the average production wage, it would be $9.54 today, more than $2.25/hour higher than the actual federal minimum wage.
The takeaway from all of this is simple: even the low benchmarks suggested here (one half the average production wage, the poverty level for a family of three, simply recapturing the minimum’s 1968 value) come in at more than $9.00. The benchmarks that actually sustain the value of the minimum or tie it to economic growth over time come in at close to twice that. And those that tie the minimum wage to the actual cost of living for working families run three times that or more. However bold the president’s pitch seems in the current political climate, a minimum wage of $9.00/hour is still a modest threshold by any sensible measure.
Last month the Institute of Medicine (IOM) released an exhaustive survey of U.S. Health in International Perspective, measuring the United States against sixteen peer countries (other high-income democracies) on a wide range of health outcomes. The results—summed up in the report’s subtitle, “Shorter Lives, Poorer Health”—are not pretty, but they aren’t surprising. The IOM report is accompanied by an interactive graphic, ranking the United States on specific causes of death (from drowning to diabetes to dengue fever), on which the country persistently falls near the bottom of the pack.
The bigger tragedy, of course, is that while underperforming all of our peers we manage to spend more—indeed, a lot more—than any of them. The graphic below plots the IOM’s basic health metrics (deaths, deaths from communicable diseases, deaths from noncommunicable diseases, life expectancy at birth) against the most recent data from the OECD on health spending. The United States is in red, its sixteen peers are in blue (hover over the dots to identify individual countries), and the dotted black lines plot the basic trends. On each measure, the United States is a stark outlier—spending more and getting less in return than any of its peers.
The relationship between spending and outcomes is complicated. Some of the spending differences reflect background differences in wealth: the United States spends more per capita on health because it is a rich country (it spends more per capita on cars and breakfast cereal too). And some of the outcome gaps (such as the high American rates of traffic accidents or gun violence) reflect factors other than the reach or effectiveness of the health care system. But even accounting for this, the gap between what we spend and what we get is jarring.
The sources of that gap are familiar. As a rule, we pay more than our peers for the same health care goods and services (especially drugs). Much “health spending” is wasted on administrative overhead, on marketing, and on the important business of figuring out who is insured and who isn’t. And that spending is starkly uneven, lavishing services on those with good insurance coverage and bypassing those without.
Most of these problems, unfortunately, will remain even when the Affordable Care Act (ACA) is fully implemented. In papering over some of the gaps in private coverage, the ACA’s mandates and subsidies are unlikely to do much to rein in costs. The recent IRS ruling, holding that the “affordability” test for job-based coverage would be based on individual rather than family coverage, is likely to leave many uninsured or underinsured. Expanded coverage, in turn, seems likely to be accompanied by a simultaneous decline in quality.
When we look at these same measures three or five years from now, it is unlikely that the United States will have moved any closer to the pack—on what we spend (or squander) and what we get (or don’t get) in return.
A few things stand out. First, we see no break in the unrelenting decline in union power and presence. The national economy has shed about 3.3 million union jobs since 1983, more than a third of those disappearing in the last recession (since 2007), and just under 400,000 between 2011 and 2012 alone. Private sector union membership, which reached over a third of all workers in the early postwar era, has shriveled to 6.6 percent. Globalization, technological change, and recession have played a part in this, but the losses are much starker in the United States than in other settings experiencing the same economic pressures. What sets the United States apart, as Kris Warner, John Schmitt, and others have pointed out, is a regime of state and federal labor law that makes it hard to form or sustain unions, and easier to get rid of them.
Second, we see wide variations across states. Union numbers are shaped by both patterns of economic growth and the density of union membership or coverage. Not surprisingly, private sector losses are starkest in rust-belt states (Pennsylvania, Illinois, Missouri, Indiana) marked by both deindustrialization and slow population growth. And scattered gains can be found in states adding population (even if the rate of union membership is flat or falling).
Third, we are beginning to see the combined effects of recent attacks on public sector unions and austerity budgets. Nationally, we lost over 230,000 public sector union members between 2011 and 2012. The losses are notably stark in those states (Wisconsin, down 48,000; Ohio, down 37,000) in which Republican governors have taken the offensive against public workers and their bargaining rights.
In recent posts I’ve suggested various ways of looking at the national job numbers. In Unemployment Numbers: The Long View, I used a simple “back to pre-recession jobs” threshold to compare the 2007 recession and recovery to the trajectory of all other postwar recessions. In Back to Full Employment (posted at the CEPR Blog), I added three other thresholds or targets: the December 2007 unemployment rate, the December 2007 unemployment and labor force participation rates, and the “full employment” unemployment and labor force participation rates of the late 1990s. In The Good Jobs Deficit, I used the BLS occupational projections for 2010-20 to drive home the importance of considering the quality of whatever jobs we might add.
One more run at these numbers is, I think, instructive. At the suggestion of Chris Brenner, I’ve tweaked the graphic comparing all postwar recessions to isolate the recovery phase of each business cycle. The first metric, “since the start of recession,” tracks nonfarm jobs by month from the onset of each downturn, until the job numbers return to their pre-recession level. The second metric, “since start of recovery,” tracks nonfarm jobs from the trough of each recession for four years (or until the start of the next business cycle). This throws into sharp relief the peculiar character of our more recent recoveries. From the 1940s through the 1980s, recovery was accompanied by significant job growth, on the order of 10 to 20 percent after four years. In our last three recessions, by contrast, we actually continued to lose jobs through the first months of “recovery,” and then added jobs at a glacial pace.
In 2003-4 and again over the last three years, this combination is often passed off as a curiosity: a “jobless recovery” in which the economy gets better but the labor market doesn’t. But that’s not really what’s happening. Job growth is slow because the recovery is slow. From the 1940s through the 1980s, recoveries were relatively short and robust—usually adding about 10 percent to Gross Domestic Product in the first two years after the trough of the business cycle. In the 1991-3 recovery, GDP grew only 6 percent. In 2001-3, GDP grew only 5.9 percent. In the first two years of our current recovery (through July 2011), GDP grew only 4.4 percent. That’s not a jobless recovery. It’s no recovery at all.
By the conventional “peak to trough” measure, the recession that began in December 2007 ended 18 months later, in June 2009. But you’d be hard-pressed to find much evidence of “recovery” in the labor market. Job creation is barely keeping up with population growth. The marginal decline in the unemployment rate (from about 10 percent at its worst to just under 8 percent at the end of 2012) has been driven mostly by people dropping out of the labor force. And long-term unemployment remains stubbornly high.
All of this raises a bigger question: What would real recovery look like?
The first and simplest measure (see graphic below) is simply to chart our progress towards regaining the jobs lost during the downturn. This yields a flat threshold at the December 2007 employment levels, and a jobs deficit that pushed past 8 million in late 2009 and now sits at about 3 million.
This “struggling back to the surface” measure has some utility, especially in comparing the recovery trajectories of different recessions. But it becomes less useful the longer the downturn lasts, as population growth creates a new baseline for the labor force. Getting back to December 2007 employment has little meaning after five years of immigration, retirements, and high school and college graduations.
The second measure, the pre-recession unemployment rate of 4.7 percent, yields a more ragged threshold. This measure holds the unemployment rate constant, while allowing for both growth in the labor force and changes in rates of labor force participation. This is a starkly artificial measure, since rates of labor force participation and unemployment are closely related: People drop out of the labor force (stop looking for work, go back to school) when employment prospects are dim. But this, in effect, is the way job numbers are often reported, as if the unemployment rate were driven solely by the loss or gain of jobs.
The third measure, which holds both the unemployment and labor force participation rates at their December 2007 levels, offers a better (but more dismal) appraisal of where we are. It assumes not only that we want to return to the pre-recession levels of unemployment—but also that we want to return to pre-recession levels of labor force participation.
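A sketch of that third measure, with placeholder inputs rather than actual BLS series, might look like this:

```python
# A sketch of the "third measure": hold the unemployment rate and the labor force
# participation rate at (demographically adjusted) December 2007 levels and ask how
# many jobs that implies. All inputs below are approximate placeholders, not BLS data.

def jobs_deficit(civilian_population, lfpr_target, ue_target, actual_employment):
    """Jobs needed (in millions) to hit the target unemployment rate at the target participation rate."""
    labor_force = civilian_population * lfpr_target
    target_employment = labor_force * (1 - ue_target)
    return target_employment - actual_employment

deficit = jobs_deficit(
    civilian_population=244.8,   # civilian noninstitutional population, millions (approximate)
    lfpr_target=0.652,           # December 2007 participation, adjusted for demographic trends (illustrative)
    ue_target=0.047,             # December 2007 unemployment rate, per the text
    actual_employment=143.5,     # current household-survey employment, millions (approximate)
)
print(round(deficit, 1))         # on the order of 8-9 million jobs
```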
Labor force participation grew steadily across the latter third of the twentieth century (driven largely by the entry of women into the labor force), peaking at just over 67 percent in the late 1990s (see Table 1). Some of the decline since then reflects long-term demographic trends, including the rising share of older workers (whose participation rates are lower) and higher rates of postsecondary enrollment. But most of that decline (about two-thirds by one estimate) is cyclical. Indeed, since the onset of the recession, the participation rate has declined more sharply than in any preceding five-year period. The third measure allows for the natural (demographic) decline, and assumes a labor force participation rate unaffected by economic conditions.
Table 1: unemployment and labor force participation, 1979-2012
Source: BLS (CPS) Series LNS14000000 and LNS11300000
The result is a jobs deficit (just over 8 million) that is not much less than it was at the recession’s trough, and a jobs threshold that we are unlikely to meet in the next decade.
But even this understates the damage. The December 2007 rates of employment and labor force participation were themselves hangovers from the previous (2001) recession. A better threshold would be that of effectively full employment, akin to the conditions we enjoyed in the late 1990s. As Jared Bernstein and Dean Baker have argued tirelessly, full employment—especially in a setting marked by weak labor standards and a tattered safety net—is the best defense against insecurity and inequality.
The fourth measure, then, uses the unemployment and labor force participation rates (the latter again adjusted to reflect only the cyclical decline) of the late 1990s as our benchmark. This raises the threshold even more. We start the business cycle a couple of million jobs behind, and the gap widens quickly, reaching—and sticking at—a jobs deficit on the order of 11 million.