Before Social Security, almost 80 percent of American seniors lived in poverty. As Social Security contributions and payments became established early in the postwar era, poverty among seniors began to fall—and continued to do so under the Great Society, abetted by the passage of Medicare in 1965. Since the program’s growth slowed in the 1980s and 1990s, poverty among seniors has leveled out at about 10 percent.
Cuts to social programs have only widened the gap between poor Americans and everyone else—especially the retreat from AFDC since 1996. By any measure, the TANF program is a weak substitute for the program it displaced. By the mid to late 1970s, AFDC reached about a third of all poor families, and over 80 percent of poor families with children. With the implementation of TANF, this coverage shrank almost immediately (1996-1997) to about half of all poor families and about two-thirds of those with children. By 2010-2011, only 20 percent of all poor families, and just over 27 percent of those with children, were receiving TANF assistance. Between 1992 and 2010 alone, one million more American children slipped below the poverty line. The share of Americans living in severe poverty (below 50 percent of the poverty line) has almost doubled since 1972.
As a rule, social insurance programs (like Social Security pensions) are generous but poorly targeted [see graphic below], while means-tested programs are well-targeted but meager. As a result, American social policy closes most of the poverty gap for elderly families and individuals, for whom Social Security benefits flow to rich and poor alike. But it accomplishes progressively less for single-parent, two-parent, and childless families—for whom means-tested benefits are both less generous and less universal. The gap is especially acute for non-elderly childless families who—regardless of their income—rarely meet the eligibility threshold for public assistance.
Across the OECD, the net redistributive impact (cash benefits received minus direct taxes paid) of social policy for low-income households stands at about 40 percent of market income. But it is barely half that in the United States, making our rate of redistribution one of the lowest in the industrialized world. Much the same pattern holds for other kinds of social spending, including such in-kind benefits as education or public health care. Indeed, in each category of social spending, the United States spends significantly less than its peers [see graphic below]—the only exception being health care, a reflection of unusually high U.S. health costs rather than more generous coverage.
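The net-redistribution measure described above is simple arithmetic: benefits received, minus direct taxes paid, expressed as a share of market income. A minimal sketch, using invented household figures (not actual OECD data) chosen only to land on the roughly 40 percent figure cited above:

```python
# Sketch of the net-redistribution measure: (cash benefits - direct taxes)
# as a share of market income. All numbers here are hypothetical.

def net_redistribution_share(market_income, cash_benefits, direct_taxes):
    """Net redistributive impact as a share of market income."""
    return (cash_benefits - direct_taxes) / market_income

# Hypothetical low-income household: $20,000 in market income,
# $9,000 in cash benefits, $1,000 in direct taxes.
share = net_redistribution_share(20_000, 9_000, 1_000)
print(f"{share:.0%}")  # prints "40%"
```

A household in a less redistributive system—smaller benefits, or heavier direct taxes at the bottom—would show a correspondingly smaller share, which is the gap the OECD comparison captures.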
The latest numbers from the OECD—which compare inequality, incomes, and poverty rates across its member countries, before and after the impact of taxes and transfers—present yet another reminder of the United States’ dismal ranking among its peers. They also make a remarkable case for the power of social policy to combat inequality. At the pre-transfer or market rate of poverty, the U.S. poverty rate is pretty close to those in other settings [see graphic below]. But after taxes and transfers—that is, after social policies and the mechanisms for paying for them have kicked in—the U.S. poverty rate jumps well above those of its peers.
The graphic below traces almost twenty years (January 1995 to February 2014) of gains and losses in US manufacturing, finance, and public employment. Job growth (or loss) is indexed, with three choices for a base point: the start of the series (January 1995), the end of the boom of the late 1990s (January 2000), and the onset of the last recession (December 2007). On each graph, the red line represents the national numbers, and the remaining lines trace job trajectories in the states (mouse over the graph, or filter the state list, to identify particular states).
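The indexing described above amounts to rebasing each employment series at a chosen month, so that every value reads as the cumulative percent gain or loss since that point. A minimal sketch, with invented figures (not actual BLS data) and a hypothetical `index_to_base` helper:

```python
# Rebase a monthly employment series: express each value as the percent
# change from a chosen base month. The job counts below are invented
# for illustration only.

def index_to_base(series, base_key):
    """Re-express each value as percent change from the base month's value."""
    base = series[base_key]
    return {month: 100.0 * (value - base) / base for month, value in series.items()}

jobs = {"1995-01": 17_000, "2000-01": 17_300, "2007-12": 13_800}
indexed = index_to_base(jobs, "2000-01")
# indexed["2000-01"] is 0.0 by construction; the other months show
# cumulative percent gain or loss relative to January 2000.
```

Choosing a different base month (January 1995 or December 2007) shifts the zero line without changing the shape of the series, which is why the graphic offers all three.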
The basic trends are stark and familiar. Manufacturing sheds about 20 percent of its job base during the recession of the early 2000s, and then nearly another 20 percent during the 2007-9 downturn. These losses reflect cyclical shocks, but also the relentless pressure of trade and currency manipulation. Indeed, losses began mounting well before the 2001 recession, when the “high dollar” response to the Asian financial crisis put American manufacturing at a stark and lasting disadvantage.
Financial employment, by contrast, shows fairly steady gains across this era—rising about 20 percent through 2007 before suffering some losses early in the recession. Employment alone understates the rise of finance, which has—over the same span—captured even greater relative shares of value-added and corporate profits. And the larger pattern (finance up, manufacturing down) is hardly accidental: predatory, high-dollar, boom-and-bust finance guts real productive employment by design.
Public sector employment rose slowly and steadily from 1995 to 2007, but then a couple of curious (and debilitating) things happened. First, the impact of recession-era stimulus spending was virtually invisible, as increased federal spending was matched by widespread cuts in state and local spending. After mid-2009, public sector employment leveled off and then began to fall—as state and local losses mounted, and as austerity was generalized by the Budget Control Act of 2011. Since the recovery began (June 2009), the public sector has shed 725,000 jobs—the vast majority (628,000) at the state and local level. This austerity, unprecedented in our recent history, is a drag on recovery and a direct contributor to slow job growth (or continued losses) across the economy.
The graphic below plots national minimum wage rates across the OECD, both as a share of each country’s median and average wage and in real U.S. dollars. As a share of median or average wages, the U.S. minimum trails the pack—well behind our richer peers, and most of the poorer cousins in the OECD as well. At the U.S. exchange rate, the U.S. minimum trails all the other rich countries; and at U.S. purchasing power (a more stable measure) it trails all but Japan in that group.

From “The Bare Minimum: Labor Standards and American Inequality,” Dissent (March 2014)