Yes, Virginia, There are Unions in this State

In 2005, the state of Virginia had 165,000 labor union members, plus an additional 46,000 workers covered by union contracts without being members themselves. That’s some 6.2% of the work force represented by organized labor.

Of course, if some people had their way, that number would be zero–Virginia’s “right to work” laws are an article of faith among the state’s political elite, including Gov. Tim Kaine. According to the Bureau of Labor Statistics, only six states nationwide have a lower proportion of workers represented by a union: Arkansas, Georgia, Idaho, the Carolinas, and Utah.

The average (unweighted) median income in those six states in 2004 was $41,437. Average median income in the eight states (excluding Hawaii and Alaska) with the highest proportion of workers covered by unions was $48,308. Those states include New York, Washington, New Jersey, Michigan, California, and Illinois.

Virginia, like Utah, stands out as a state with a low unionization rate but a median income well above average. This is in some respects a misleading picture, however: Virginia’s high median income ($53,275) is largely a byproduct of the very high levels of prosperity in Northern Virginia, where the average (unweighted) median income across NoVa’s six counties is some $76,101. That prosperity is in turn a direct result of proximity to the federal government.

All this is by way of noting that the AFL-CIO’s James Leaman is on firm ground when he claims, in an editorial in Saturday’s RTD, that there is a general correspondence between unionization and prosperity. Leaman also does a nice job laying out a standard liberal-labor reform agenda; it’s a welcome sight to see organized labor get a chance to speak for itself in the RTD.

A key point Leaman makes concerns the hostility and harassment faced, in the current climate, by workers who wish to organize unions. Indeed, a study released this week by the Center for Economic and Policy Research suggests that pro-union organizers and activists run nearly a 20% risk of being illegally fired over the course of an organizing campaign.

Published on January 6, 2007 at 5:01 pm

Why Inequality Matters

Finally, the third part of our response to Barton Hinkle’s critique of economic populism.

Hinkle in effect poses this question: why care about inequality at all, as opposed to simply caring about poverty?

After all, he reasons, if real living standards are improving for everyone, why worry that some are getting much more than others?

That question invites five kinds of responses.

The first is simply to observe that for the bottom quintile, life has not gotten a whole lot better as measured by income standards in the last quarter century. In 1979 the bottom quintile had an average post-tax income of $13,500 (in 2003 dollars). In 2003, they had $14,100. Over that same time period, the proportion of families in poverty has actually risen, from 9.7% in 1975 to 10.2% today. All this has taken place over the same time period that the real (post-tax) average income of the top 1% of the income pile has more than doubled, from $305,000 to over $700,000.
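
To put the divergence in relative terms, here is a quick back-of-the-envelope check using only the figures just quoted (the top-1% endpoint is the quoted floor of $700,000):

```python
# Growth in real post-tax income, 1979-2003, in 2003 dollars.

bottom_quintile = (13_500, 14_100)    # (1979, 2003)
top_one_percent = (305_000, 700_000)  # (1979, 2003)

for label, (start, end) in [("bottom quintile", bottom_quintile),
                            ("top 1%", top_one_percent)]:
    print(f"{label}: ${start:,} -> ${end:,} ({end / start - 1:+.0%})")
```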

Nor, as Hinkle suggests, does the advance of technology make up for the stagnant prospects of the poor. Yes, some of the poor have access to cable TV and computers and cell phones and Playstations—more sophisticated forms of entertainment. But do they really derive dramatically greater utility, satisfaction, and happiness from those items than they did 30 years ago from black-and-white network TV and old-fashioned pinball machines? That’s questionable. What’s not questionable is that an American child or family that does not have access to most of those items is going to feel left out, socially excluded.

That observation points us to our second response: there is good reason why the “goalposts” (Hinkle’s term) of living standards should change over time as a society develops. The necessaries of life are to a substantial degree socially determined. In some societies historically, it was not a big deal not to have a pair of shoes. But in contemporary societies, to go shoeless would be unthinkable, and a sure sign of utter exclusion from mainstream society.

In short, what people need is not simply calories and shelter and medicine, but also the goods which make it possible to be a fully functioning, fully-respected, and indeed self-respecting member of society. The content of those goods changes over time, and as societies get richer, people need access to more and/or better goods in order to perceive themselves and be perceived by others as full members of the society.

Third, consider again the issue of class mobility across generations. Many conservatives cogently insist that we should be concerned not just with inequality but also with social mobility. But few recognize or acknowledge that there is an internal connection between increases in inequality and rates of mobility. Simply put, the wider the gap between classes, and in particular between the very top and everyone else, the more difficult it will be for those at the bottom to climb all the way to the top (and the harder it will be for those at the top to slide very far down the ladder).

Fourth, apologists for growing inequality often write as if workers are simply getting their just deserts in the marketplace. But there is strong evidence that since the mid-1970s, the American worker has simply not been getting a fair share of the economic growth his or her efforts have helped produce. Average productivity per hour jumped 76% between 1973 and 2004; but median compensation increased by only 18.5% over that same time period. If compensation had increased at the same rate as productivity over that entire period, median compensation in 2004 would have been $25.76 per hour, not the $17.36 it actually was.

That huge gap can be explained by two primary factors. First, average compensation only went up 46.4% for workers as a whole over that time period, compared to productivity growth of 76%. In short, workers’ compensation as a whole grew just over 60% as fast as the increase in their own productivity. Second, the best-off workers captured the lion’s share of the increase in compensation which did take place. When the top end gets huge gains and the majority not very much, you end up with what the data show–a big difference between the average increase and the median increase in compensation.
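
Those figures hang together arithmetically; here is a minimal Python check using only the numbers above (the small discrepancy with the quoted $25.76 reflects rounding in the published growth rates):

```python
# Checking the productivity/compensation gap, 1973-2004 (EPI figures).

productivity_growth = 0.76   # output per hour
median_comp_growth = 0.185   # median hourly compensation
avg_comp_growth = 0.464      # average hourly compensation
median_comp_2004 = 17.36     # dollars per hour

# Implied 1973 median compensation (in 2004 dollars)
median_comp_1973 = median_comp_2004 / (1 + median_comp_growth)

# What the 2004 median would have been had it tracked productivity
counterfactual_2004 = median_comp_1973 * (1 + productivity_growth)

print(f"Counterfactual 2004 median: ${counterfactual_2004:.2f}/hr")  # ~$25.78
print(f"Average compensation grew {avg_comp_growth / productivity_growth:.0%} "
      f"as fast as productivity")                                    # ~61%
```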

Finally, and perhaps most fundamentally, large-scale inequalities call into serious question the meaning and relevance of two fundamental American ideals: equal opportunity and democracy. Hinkle (and others) seems all too willing to accept as “normal” the fact that some persons within this society have dramatically less promising life prospects than others. But would conservatives who say inequality is no big deal be willing to take their chances and trade places with someone in the bottom quintile of the income distribution? Would they be willing to send their kids to a randomly selected public school within the city of Richmond?

The idea that anyone can make anything they want of themselves is fundamental to Americans’ conception of this country and what it stands for. The fact that, increasingly, it just ain’t so points to a troubling and growing contradiction between what America claims (or aspires) to be and what it actually is.

The other threatened value is democracy. Democracy is not simply about the right to cast a ballot; it’s about the right and ability to exercise meaningful self-governance over the conditions that shape one’s life. In short, it requires that people have a genuine say in the decisions and policies that affect them, and that their ideas and viewpoints be taken seriously by others.

Democracy in this sense requires a fairly substantial degree of equality if it is to be real. Incomes and wealth do not need to be literally equal, but the opportunities, skills, and resources needed to participate in politics must be broadly distributed across the population. Moreover, no group should be so wealthy or so powerful that it can exercise disproportionate influence over the political process and claim privileged access to decision-makers.

That’s a test that American democracy simply can’t meet right now, and growing inequality is both cause and symptom of that failure.

So what would a fairer distribution of income look like? That’s a difficult question to answer with a high degree of specificity, but we can begin to gauge the gap between where we are and where we might and should be by considering this hypothetical scenario: what if, between 1979 and 2003, the bottom three quintiles of the income distribution had seen their incomes grow at the average rate for society as a whole over that period?

Well, people in those groups would be dramatically better off. After taxes, the average family in the poorest quintile would now earn $17,900, not $14,100—equivalent to a raise of over 25%. Families in the second-poorest quintile would have post-tax income of $36,200, not $30,800. And families in the middle quintile would earn $51,600, not $44,800. (All figures in 2003 dollars.)
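
Here is a minimal sketch tabulating those gaps as effective raises, using only the figures just quoted:

```python
# Actual vs. counterfactual post-tax family income in 2003, by quintile,
# where the counterfactual grows each quintile at the economy-wide average.

quintiles = {
    "bottom": (14_100, 17_900),   # (actual, counterfactual)
    "second": (30_800, 36_200),
    "middle": (44_800, 51_600),
}

for name, (actual, counterfactual) in quintiles.items():
    raise_pct = counterfactual / actual - 1
    print(f"{name} quintile: ${actual:,} -> ${counterfactual:,} "
          f"(a {raise_pct:.0%} raise)")
```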

That would have been an economy in which economic growth led to broadly shared prosperity. It also would have been an economy that lifted millions of people out of poverty and made life better and easier for the bulk of workers and middle class folk who form the backbone of American society.

But that’s not the economy we have, and it’s not the economy we are going to have in the absence of some substantial shifts in public policy aimed at bolstering workers’ bargaining power and distributing the benefits of economic growth much more equitably.

That’s where economic populism comes in.

We can’t go back and undo the enormous increase in inequality of the past 25 years. But we can take steps to assure that the next quarter century (and beyond) produces something quite a bit better for ordinary people.

All data derived from charts compiled by the Economic Policy Institute.

Defending Economic Populism, Part One and Part Two

Published on December 7, 2006 at 4:44 am

Income Mobility and the Social Structure

Okay, let’s get down to brass tacks in looking at Barton Hinkle’s critique of Jim Webb’s populism. The topic here is one of the most fundamental social questions we can possibly ask—whether or not the American social order is a just one—so it’s worth sinking our teeth into it a bit.

In tackling that question, we need to distinguish between two related yet distinct concerns. The first is whether the basic structure of American society is just or fair; the second is whether the long-term trend in the United States has been towards more or less fairness and equality. This is an important distinction:  if, for instance, long-term trends are static, but the basic structure of society is unjust, then we should be less than heartened to learn that an unjust society is not getting any more just.  

Keeping that in mind, let’s look at the data.

 

The specific data in question here are snapshot analyses of income distribution, divided by quintile, i.e. how much income is the top 20% getting compared to the bottom 20%, and each sector in between? Hinkle, like many others, correctly notes that this sort of snapshot, taken in itself, provides only limited information about the fairness of the overall structure of society.

Why isn’t the snapshot data enough? Because the snapshots don’t provide us with information about mobility between the quintiles over time. Consider the child of an affluent family who goes to a selective private college. As a young adult that person might well be in the middle or bottom quintiles of income as he or she finds his or her feet in the labor market or struggles through graduate school. But eventually that person has a very good chance of making it to one of the higher quintiles—at least until he or she retires (or gets laid off), when income will likely decline.

The quintile snapshot essentially abstracts from all this churning and provides a static shot of how the income distribution looks at a given time. So it’s a limited tool, if we think that we should take not just absolute levels of inequality but also social mobility into account in evaluating the justice of the social structure.

If we compare several quintile snapshots over a long period of time, however, we can garner useful information about the long term trend in the distribution of income, towards more or less inequality. Indeed, looking at how these snapshots have changed over the past three decades produces some striking results:

In 1974, the bottom (poorest) quintile of American families captured 5.7% of aggregate family income; in 2004 that same group captured just 4.0% of such income. In 1974, the top (richest) quintile of American families captured 40.6% of aggregate family income; in 2004 that same group captured 47.9% of such income. Perhaps most strikingly, in 1974 the top 1% of American families captured 14.8% of aggregate family income; in 2004 those same fortunate few claimed 20.9% of such income. (This data comes from the Economic Policy Institute.)
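
Expressed as relative changes in each group’s share, the shifts are even more striking; a quick sketch using the figures just quoted:

```python
# Shares of aggregate family income, 1974 vs. 2004 (EPI figures above).

shares = {                        # (1974, 2004), in percent
    "bottom quintile": (5.7, 4.0),
    "top quintile": (40.6, 47.9),
    "top 1%": (14.8, 20.9),
}

for group, (then, now) in shares.items():
    print(f"{group}: {then}% -> {now}% "
          f"({(now - then) / then:+.0%} relative change)")
```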

This is exceptionally strong evidence that the distribution of income in the United States has gotten (just as Jim Webb claims) substantially more unequal over time. In fact, the trend is so strong that it simply is not in dispute among economists and other social scientists who study inequality—in those circles the live debate is not about whether inequality has grown sharply, but what the causes of that growth have been.

Even so, we might not be so disturbed by this growing inequality if it were offset by an increase in social mobility. But how best to measure social mobility?

To answer this question we must introduce another distinction: between the movement of individuals up and down the quintiles due to variations in income over the course of the life cycle, and genuine social mobility, in which an individual sees a permanent increase (or decline) in his or her relative position. Conservatives are correct to point out that the income quintiles are not static over time, with individuals moving in and out of each group all the time, but many (including Hinkle in this case) make the mistake of confusing variation in income over the life cycle–the fact that you’re likely to make more money in your 40s and 50s than in your 20s or 70s–with genuine mobility.

The best way to measure mobility is not via snapshots of the whole population, but by tracking a set of individuals over the course of their lives and seeing how they do compared to how their parents did. Economists who undertake such studies have found that, at a minimum, genuine social mobility has not increased over the past generation, and in fact may have actually slowed.

This is important because if mobility has been static, but the distribution of family income has gotten sharply more unequal, then we can only conclude that the American social system as a whole has in fact become more unequal and less fair to the folks at the bottom over the past generation.

But the mobility data can also give us needed insight into the justice of the social structure itself. If you are born into the bottom 10% of families, income-wise, what are your chances of making it out of that bottom 10%? What are your chances of making it into the top 10%?

The best recent data on that question comes from Tom Hertz’s study “Rags, Riches, and Race,” which examines mobility among black and white families using data from the Panel Study of Income Dynamics. (The paper is reprinted in the book Unequal Chances: Family Background and Economic Success, the best collection of recent academic work on this set of questions.) 

After adjusting for changes in household size, Hertz finds that if you are born into a family in the bottom decile (poorest 10%) of the income distribution, you have a 36.6% chance of remaining there as an adult, and a 57.1% chance of staying in the bottom quintile. You have just a 2.3% chance of making it into the top quintile, and a mere 0.5% chance (1 in 200) of making it into the top decile.

Conversely, if you are born into a family in the top decile, you have a 26.7% chance of staying there as an adult, a 43.2% chance of being in the top quintile, and a 77.7% chance of being somewhere in the top half of the income distribution. You have just a 5% chance of falling into the bottom quintile, and only a 1.4% chance of falling into the bottom decile.

In short, if you are born on the poorest rung (decile) of American society, you are over 26 times more likely to remain on that bottom rung as an adult than someone born on the top rung is to fall there. And if you’re born on the top rung, you’re over 53 times more likely to stay there as an adult than someone born on the lowest rung is to climb there.
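
Both ratios fall straight out of Hertz’s transition probabilities as quoted above; a minimal check:

```python
# Relative odds of landing in a decile as an adult, by birth decile,
# using the probabilities from Hertz's "Rags, Riches, and Race".

p_bottom_given_bottom = 0.366  # born bottom decile, end bottom decile
p_bottom_given_top = 0.014     # born top decile, end bottom decile
p_top_given_top = 0.267        # born top decile, end top decile
p_top_given_bottom = 0.005     # born bottom decile, end top decile

print(f"End on the bottom rung: "
      f"{p_bottom_given_bottom / p_bottom_given_top:.1f}x "
      f"more likely if born there")  # ~26x
print(f"End on the top rung: "
      f"{p_top_given_top / p_top_given_bottom:.1f}x "
      f"more likely if born there")  # ~53x
```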

Is that fair? Not if you take seriously the notion that America should be characterized by substantive equality of opportunity. (And by the way, from the point of view of African-Americans, the actual picture is even worse than these figures suggest, as Hertz found that upward mobility among African-Americans from the bottom to top quartile was less than half the rates observed among whites.)

Confronting the actual data about intergenerational mobility in the United States forces one to confront some hard truths about the basic structure of this society. Where you start has a huge impact on where you end up, and there is no evidence that it’s getting easier for people to move up. And, as we have also seen, the consequences of ending up near the bottom as opposed to near the top have become more severe, as income inequality has grown over time.

None of those conclusions are controversial among academics who study these questions, and in fact some of those scholars have been trying to sound the alarm on this issue for a number of years. Jim Webb just happened to be the Virginia politician who answered the bell.

Next installment: Do the rich pay too much in taxes? 

Published on November 22, 2006 at 4:09 pm

Sense and Nonsense on Distributive Justice

It must be nice being a nationally syndicated conservative political columnist. As far as I can tell, the main requirement is being willing to spin the same basic arguments and stories over and over again, month after month, year after year, regardless of shifting circumstances.

Now of course I’m being a little unfair–there are important exceptions. I’m usually interested in what William F. Buckley has to say (anyone who was a skiing buddy of John Kenneth Galbraith couldn’t be all bad), and David Brooks of The New York Times makes a good faith effort to engage facts and social science. And while we’ve been a little hard on the RTD’s Barton Hinkle in this space, in reading his stuff one detects someone who is thinking for himself.

It’s difficult to be so generous towards long-time syndicated columnist Walter Williams, especially judging from his latest offering. (The article ran opposite a column by Paul Krugman concerning the causes of America’s stagnant wages.)

Williams addresses the topic of distributive justice. His basic claim, drawing on the metaphor of a poker game, is that no matter how unequal the distribution of income we witness in the economy, no one has any basis for complaint so long as the rules of the game generating that distribution have been fair.

That is a coherent argument, echoing Robert Nozick’s classic libertarian case in his 1974 book Anarchy, State, and Utopia. Unfortunately, Williams conflates that process-based conception of distributive justice with an entirely different distributive maxim: the notion that individuals ought to be paid according to how much they “serve their fellow man.”

But as Nozick recognized, rewards in the market often don’t correspond to objective merit or objective contributions to social welfare. For Williams, if someone is willing to pay me $1,000,000 and my brother only $50,000, it must be the case that what I am producing is in fact twenty times more valuable to society than what my brother is producing.

But what if my brother is a public schoolteacher teaching history to 120 pupils a year, whereas I am a private tutor who gives history lessons to one child, the son of a wealthy industrialist? (Let’s assume the industrialist has identified me as the best possible tutor for his child and insists on the best rather than a lower-paid alternative; I, in turn, find the work of helping his spoiled kid so distasteful that I won’t do it for anything less than $1 million.) The extremely rich man is willing to pay me $1 million for my services. But that doesn’t imply I’m any more productive than my brother, much less that I’ve made a larger contribution to my fellow man than he has.

Obviously, this example is extreme, but it makes an important point: what the market values and what objectively contributes the most to the common good are two different things. To take another of Williams’s examples, I suppose the invention of Google has helped my life and that of others in some small way, but Google doesn’t contribute nearly as much to my welfare as the sanitation workers who pick up my garbage and recycling each week.

The price of one’s labor depends not on one’s contribution to the public good, or even on one’s productivity, but on how easily the buyer of labor can find a suitable replacement. Sanitation workers are more easily replaced than skilled celebrity divorce lawyers, hence they get paid less–even though most people would probably admit that sanitation workers collectively make a greater contribution to society than celebrity divorce lawyers do.

Indeed, sometimes the market rewards those whose activities damage social welfare–for instance, to take an example close to home, cigarette manufacturers.

Considerations such as these led Nozick to recognize that if you are going to make the fairness of the underlying process the standard of distributive justice, you shouldn’t rely on any other moral maxim, or expect that the results produced by your favored process will overlap with notions of desert, social contribution, utility maximization, and so on. Maybe they do, maybe they don’t, but that doesn’t matter as long as the process is fair.

This of course leads us to consider the Nozickean view proper, as opposed to the fallacious Williams version of it. There are multiple problems with Nozick’s full-blown argument that I won’t belabor here, and even greater problems with the assumption that the actual American economy corresponds to a fair game in which rules are fairly enforced for all. But I’ll content myself with pointing out two fundamental problems with the poker game metaphor.

First, as deployed by Williams, the metaphor totally ignores the question of what happens when poker players bring unequal initial resources to the table. A poker player with $10 million in chips has all kinds of advantages over a player with just $20 in chips, and other things being equal is going to last a lot longer in the game, barring spectacular stupidity or spectacularly bad luck. The well-endowed player can be more patient and wait for a sure winning hand. Or he can risk some of his resources on a bluff that frightens the less well-endowed players into folding on hands they might have won, so as not to risk losing everything.

The truth is, in the American economy people enter the “game” with vastly unequal resources; those initial inequalities then translate into unequal opportunities to develop one’s personal capacities, as well as into unequal bargaining power. The result? Ever larger and more entrenched inequalities, which in turn carry over into unequal starting points for the next generation. (And if you doubt that Americans have unequal life opportunities at the start of life, ask yourself how many parents of children in Henrico or Chesterfield County schools would be happy to have their children randomly assigned to a public school in the city of Richmond.)

The second point to make about the poker game metaphor is that, obviously, the possible outcomes the game generates vary greatly according to the ground rules of the game one agrees upon in advance.

If life really were a poker game, we might all agree to a winner-take-all policy in which one person gets everything and everyone else gets the shaft. But since these are people’s lives we are talking about, it seems more likely that we would want a set of rules that made sure that the winnings of the game were broadly shared, and that no one completely got the shaft (especially since we ourselves might turn out to be the one who goes broke). There are multiple ways one could go about doing that–devising “insurance policies” against bankruptcies, putting a “tax” on winning hands above a certain size, placing absolute limits on the size of bets, making sure everyone starts the game with an equal or almost equal stash of chips, and so on.

This observation is not at all original; it is derived from the landmark work of 20th century American political philosophy, John Rawls’s A Theory of Justice (1971). The fundamental claim of that book is that we should structure the rules of the game in a way that preserves liberty, upholds the levels of social equality required to maintain a democratic state in which everyone’s citizenship and civic voice is valued equally, and improves the lot of the least well off.

One can quibble (as many have) with Rawls’s ideas on the best way to do all that. But examining the depressing statistics about rising income and wealth inequality, stagnant and perhaps declining social mobility, and wages which have remained largely stalled for years even as productivity has increased reminds us that Rawls was absolutely right about one big thing:

We ought to be able to design a better poker game.

Published on October 12, 2006 at 1:33 am

A Living Wage for Richmond

The RTD weighed in today with a shockingly ill-informed attack on the efforts of Richmonders Involved to Strengthen Our Communities (RISC) to promote a local living wage ordinance. The RTD calls on all the old standbys–that working adults won’t really benefit, that it will cause unemployment, that RISC doesn’t trust the free market.

Most of these arguments have already been discussed in this space, so I won’t rehearse them now. Instead I’m going to be working in the next couple of days on an op-ed submission to the paper on this topic.

Here’s a quick preview of a point that I haven’t made already: even if living and minimum wage laws caused a small increase in unemployment (and even the few studies that do find negative effects, find small negative effects), that doesn’t mean a mandated wage increase is bad policy.

Almost all public policies produce both winners and losers; a public policy that produces a net gain but hurts a few people can and should be rounded out by supplemental policies designed to help the “losers”. These might take the form of, for instance, more generous unemployment benefits, expanded training and educational opportunities for the unemployed, and more effective job networking assistance. This is a conventional argument among proponents of “free trade” like Thomas Friedman of The New York Times, who argue that aggregate benefits to consumers outweigh job losses due to international trade and globalization, but that there need to be trade adjustment policies to assist firms and workers who are directly affected by trade openness.

This observation allows us to look at the living/minimum wage debate in a new light. The academic argument at this point is not whether the costs to low-wage workers outweigh the benefits of a wage increase; it’s about whether there are any costs at all!

Another point I’ll try to work in is how the anti-wage-increase argument depends on an orthodox, textbook view of the labor market that ignores lots of evidence that employers have considerable power and discretion in setting wages.

Coincidentally, wage data released by the Bureau of Labor Statistics this week shows that real wages have fallen 2% nationally since 2003, even as productivity continues to rise.

Published on August 29, 2006 at 2:29 am

President McGovern (!), Wal-Mart II, Drug Policy

It might seem ungracious to say anything too critical on a day the RTD saw fit to publish most of the letter on Iraq I submitted ten days ago. (Good for them!) So we’ll limit this post to three relatively brief comments.

First, the main news section carried an interesting “big ideas”-type piece on retrospective voting, based on a recent Ohio U. poll: If they could do it all over, who would Americans have voted for in Presidential elections dating back to 1960, knowing what they do today? The dominant trend is that Americans tend to retrospectively identify with winners, irrespective of party. Kennedy, LBJ, Reagan, and Clinton all “win” by much larger margins in the present-day polls than they did in real life, as do Carter against Ford in ’76 and Bush (41) against Dukakis in ’88. The exceptions to this rule? Richard Nixon and George W. Bush. The poll shows George McGovern actually beating Nixon in ’72, and Al Gore and John Kerry each rather comfortably defeating W in the past two elections. That can’t be comforting news for the president. (Interestingly, despite a well-known liberal backlash against Ralph Nader after the events of 2000, the Green candidate actually pulls much higher support in the recent “votes” than he did at the polls in 2000 and 2004.)

Second, over in the business section Bob Rayner serves up a very simplistic defense of Wal-Mart. The core argument goes like this: because people shop and work at Wal-Mart in great numbers, it must be good for society. Well, maybe. People produce and consume tobacco in large quantities too, but I’m not sure that’s so good for society.

The question of whether Wal-Mart’s prices justify its other social costs deserves its own treatment, which I’m sure we’ll have occasion to take up in this space before too long. For now it’s enough to note that just because an outlet has low prices doesn’t mean it’s providing a service to society; to take an extreme example, no one (I hope) would say that an outlet store that specialized in selling stolen goods and/or goods produced by slaves at low, low prices is doing society a favor.

Today let’s look at the labor argument Rayner offers. He writes, “The long lines of would-be workers whenever a new Wal-Mart prepares to open suggest that many Americans believe it’s a good employer.” Well, not necessarily–those lines are a better indication of how desperate many Americans are for any employment than they are a comment on Wal-Mart.

A better indicator of Wal-Mart’s fitness as an employer is its turnover rate. It’s estimated that 70% of Wal-Mart employees leave within the first year, and that overall annual turnover in the company is around 50%. In addition, Wal-Mart is the target of the nation’s largest-ever class-action sex discrimination lawsuit and has been charged with violating child labor laws in multiple states, forcing employees to work off the clock, engaging in illegal anti-union activities, and committing myriad other violations of labor law. No wonder so many employees seem anxious to leave.

Rayner also writes that Wal-Mart appears to have good pay and benefits compared to other retailers. Three points here: first, he could only possibly mean compared to other discount retail chains–but this is probably not the right comparison to make. Instead, we should compare wages at Wal-Mart with wages at the independently owned hardware stores and the like that it displaces. In fact, a 2005 study found that opening a Wal-Mart in an urban or suburban area tends to reduce wages in that area’s overall retail sector; a new Wal-Mart in a rural area has no net effect on local wages (since fewer higher-paying jobs are being displaced).

Second, as is well known, the retail chain Costco pays much higher wages than Wal-Mart. It’s not surprising that Costco also has a turnover rate about half that of Wal-Mart, but it might be a surprise that as a consequence of its policies, Costco actually has lower labor costs as a percentage of sales than Wal-Mart.

Third, according to a 2005 UC Berkeley study, Wal-Mart’s low wages require its employees to rely on various forms of public assistance, estimated at $86 million a year in California alone. That’s not the “free market” at work–it’s a public subsidy to a low-wage employer.

Finally, whatever else you might think about the RTD, be glad that it carries Neal Peirce, one of the best-informed writers on state and local issues out there. Peirce does what a columnist should do: he engages with the best empirical evidence and most creative thinking and practice on a given topic, and focuses on constructive steps as much as on criticism. Today he weighs in on the failures of America’s war on drugs and possible alternative strategies; I’ll add a link to it in this space as soon as it’s available.

On the Minimum Wage

The featured op-ed in the RTD Tuesday is a relatively lengthy piece on the economics of the minimum wage by David Henderson of the Hoover Institution. Not surprisingly, Henderson concludes that the minimum wage has perverse effects, and certainly should not be increased.

This piece is actually more sophisticated than many standard conservative statements about the minimum wage, which simply ignore the mass of empirical evidence indicating that the employment effects of minimum wage increases are negligible, especially among adult workers. Henderson at least takes the trouble to cite influential research by David Card and Alan Krueger on the (limited to non-existent) impact of minimum wage increases on fast food employment.

Along the way, however, Henderson makes some baffling claims. For instance, Henderson charges the pro-labor, pro-minimum-wage Economic Policy Institute with “admitting” that a higher minimum wage may lead not to reduced employment but to reduced training and increased productivity. Henderson concludes that this must mean cutbacks in the training of workers and a faster work pace. Wrong. Productivity might increase and training costs decline if turnover among low-wage workers decreases in response to a higher minimum wage. When wages go up, the cost to workers of losing their jobs goes up as well, so they may become inclined to work harder to hold onto them even in the absence of greater workplace discipline. Likewise, if there is less turnover in a firm, there will be less need for firms to spend money training new workers.

Second, Henderson argues that the minimum wage doesn’t really benefit the least well off, since “only” 9% of those affected by the increase (some 1.4 million workers) are single parents with children. (Never mind that this is a higher proportion than the percentage of single parents with children in the workforce as a whole–here is the data.) So we shouldn’t care about poor households in which both parents are working, or individuals who happen to be poor? In any case, Henderson draws a seriously misleading picture: some 80% of affected wage earners are adults, 54% are full-time workers, and 26% are parents.

Third, in his conclusion, Henderson makes the bizarre claim that unions’ support for the minimum wage is motivated by “greed” and is akin to protectionism. Well, it’s no secret that unions support increases in the price of labor–if that is “greed,” then fair enough. But Henderson implies that unions’ interest is in reducing the jobs available at the bottom of the labor market. This is just silly: higher unemployment rates weaken, not strengthen, unions’ bargaining power and their ability to organize new workers. No one has a more vested interest in a full employment economy than labor unions.

The empirical evidence for large employment effects resulting from an increased minimum wage is suspect at best. Interested readers might check out Card and Krueger’s 2000 article, this summary of the issue from a “neutral” government economist that explores some of the theoretical reasons why minimum wages don’t damage employment, or a recent journalistic piece on economists’ shifting views on the minimum wage.

A few orienting observations might help put the discussion in perspective: the debate about raising the minimum wage can be more accurately characterized as a debate about keeping the minimum wage from falling further. With each passing day, Americans earning the minimum wage effectively receive a wage cut.
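
To see how fast that effective cut compounds, here is an illustrative sketch; the $5.15/hr figure is the current federal minimum (unchanged since 1997), while the 3% annual inflation rate is an assumption for illustration only:

```python
# Real value of a minimum wage frozen at $5.15/hr, assuming 3% annual
# inflation (an illustrative assumption, not an official forecast).

nominal = 5.15
inflation = 0.03

for years in range(0, 11, 2):
    real = nominal / (1 + inflation) ** years
    print(f"after {years:2d} years: ${real:.2f}/hr in base-year dollars")
```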

Indeed, the real value of the minimum wage is now about one-third lower than in the 1960s. The value of the minimum wage rose over the course of the 1960s from around $6/hr (in today’s dollars) to nearly $8/hr by 1969. Yet unemployment fell during that same decade, from 5.5% in 1961-1965 to just 3.9% during 1966-1970.

Three decades later, history repeated itself. Unemployment during the first Clinton term (1993-1996) averaged 6.0%; although the minimum wage was increased in 1997, unemployment during the second term (1997-2000) fell to just 4.4%.

In short, even if the increased minimum wage had some slight negative impact on employment during these periods, that effect was simply overwhelmed by larger macroeconomic factors.

One more example: In 1999, the United Kingdom implemented a minimum wage for the first time; according to this study (and others), the sky has hardly fallen.

Given this set of experiences, it’s not reasonable to assert that a modest increase in the minimum wage will have serious perverse effects on employment, especially when combined with sensible macroeconomic policies. And once we reject that assertion, two fundamental moral reasons for maintaining and increasing the minimum wage carry the day.

The first is that we as a society have an interest in forbidding certain kinds of deeply exploitative relationships. A fundamental tenet of American labor law (and that of every other advanced nation) is that there is an asymmetry in power between employers and employees. The employment relationship is not a simple transaction like buying a banana, but represents an “incomplete contract”: employers and employees agree on what wage will be paid, but not on how much work will be performed. That depends on how much labor is extracted during the labor process. Employers use the threat of unemployment as well as systems of authority to extract as much labor as possible from their workers. The role of the minimum wage is simply to provide a counterweight to the power employers wield, by establishing a minimum threshold of compensation and prohibiting purely exploitative relationships between employers and workers.

Contrary to conventional economic theory, the wages paid by employers are not determined by the marginal contribution made by a given employee; rather they are based on how easily the employee in question can be replaced by a functional equivalent.

Current fast food technology, for instance, might allow a worker to produce $10 of “value” an hour. In a full employment economy in which it is not so easy to find a replacement worker, the employer may feel it necessary to pay $9/hour to a fast food worker. But in an economy with 10% unemployment, in which the employer receives dozens of job applications a day, the employer might be able to hire an equivalent employee who will produce the same value of goods for as little as $4/hour–or much less, in the absence of a minimum wage law. While a minimum wage law does not limit what some might term the “rate of exploitation” in a given employment relationship, it does put a floor under how little a worker can receive. Indeed, taken to its logical conclusion, the argument against the minimum wage morphs into what is in effect an argument for legalizing voluntary slavery.
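
Here is a sketch using the illustrative numbers from this paragraph; the $2/hr no-floor wage is my own hypothetical for the “much less” case:

```python
# A fast-food worker producing $10/hr of value, paid according to how
# easily the employer can find a replacement. Numbers are illustrative.

value_per_hour = 10.00
scenarios = {
    "full employment": 9.00,
    "10% unemployment": 4.00,
    "no wage floor (hypothetical)": 2.00,
}

for label, wage in scenarios.items():
    surplus = value_per_hour - wage
    print(f"{label}: wage ${wage:.2f}/hr, employer keeps ${surplus:.2f}/hr "
          f"({surplus / value_per_hour:.0%} of value produced)")
```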

The second point is more straightforward: Our aim as a society should be to reach the point where if you work, you will not be poor. A higher (and eventually, inflation-indexed) minimum wage is one tool for reaching that goal; the Earned Income Tax Credit is another.

In theory, the EITC could do all the work–some economists claim this is a more efficient approach. But this is at best a politically unrealistic proposition (since it would depend on greater taxation of the better off), and flat-out cynical when proposed by conservatives who know that a Republican Congress will never approve large increases in the EITC. Moreover, not all eligible workers claim the EITC to which they are entitled, meaning its efficiency as an anti-poverty measure is often overrated.

Independent of that concern about the EITC, there is also a strong case to be made for the notion that a higher minimum wage will contribute to the social respect conferred upon low-wage workers. Americans tend to regard income received from employers–whether it’s our own income or someone else’s–as “earned,” and a greater source of moral worth and pride than income channelled through the government.

Seen in this light, higher minimum wages have a key role to play in securing what should be seen as a central goal of social policy: to ensure that all citizens and especially all workers are treated with basic respect.

Published on August 22, 2006 at 4:19 pm

“Boxed Out”

Today the Times-Dispatch editorial writers decided to provide advice to the Chicago city council. At issue is a recent council ordinance that will require large retailers within the city to meet a living wage standard of $10/hr + $3/hr in benefits, to be phased in over several years.

The essence of the RTD argument seems to be that the ordinance may jeopardize Wal-Mart and Target’s expansion plans in the Windy City. Well, that’s precisely the point: the city leaders don’t want Chicago’s vast retail market captured by a handful of big-box enterprises that compete for market share and try to drive out competitors on the basis of low wages.

Not all “economic development” is healthy for a locality. When an employer says it’s going to come in and create x number of jobs, that claim needs to be taken with a grain of salt. Even if the promised jobs do materialize (not always the case)–if a Wal-Mart really does come in and hire 2,000 new people, for instance–this does not mean that the local economy has in fact gained 2,000 net new jobs.

From that figure of 2,000 we need to subtract the following, for starters: jobs lost at competitors of the new business, some of which may be driven out of business entirely; the proportion of the Wal-Mart jobs that will go to outsiders rather than current Chicago residents; and jobs not created by firms that might have invested in Chicago but will stay away rather than go head to head with a Wal-Mart.
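
As a sketch of that accounting, with the subtractions purely hypothetical (only the 2,000 gross figure comes from the example above):

```python
# Net local job gain from a new big-box store: gross hires minus
# displacement effects. All subtractions are assumed for illustration.

gross_new_jobs = 2000
jobs_lost_at_competitors = 900      # assumption
jobs_filled_by_non_residents = 400  # assumption
jobs_deterred_elsewhere = 200       # assumption: firms that stay away

net_local_jobs = (gross_new_jobs
                  - jobs_lost_at_competitors
                  - jobs_filled_by_non_residents
                  - jobs_deterred_elsewhere)

print(f"Net new local jobs: {net_local_jobs}")  # 500, not 2,000
```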

Even if Wal-Mart’s arrival in the city led, on a full accounting, to a net gain in jobs, there still would be sound reasons to oppose this form of development if the quality of those jobs turns out to be significantly worse than the jobs displaced, or if the employer’s presence helps undercut labor standards throughout the city. It’s perfectly possible that a Wal-Mart coming to town could increase total employment but reduce the total wages and benefits paid to workers. That’s not economic “progress” in any meaningful sense–that’s shifting from a high-road to a low-road model of economic development.

It is often claimed that Wal-Mart has made the American economy more “efficient.” But squeezing workers and suppliers is not efficiency at all–it’s redistribution from one sector of the economy to another. If Wal-Mart were truly efficient, in the sense of being technically better than its competitors at organizing retail, it should be able to thrive without needing to squeeze workers below locally prevalent wage standards.

Finally, some analysts of the Chicago ordinance believe that the Chicago market for retail is so strong that having to pay workers a higher wage will not deter Wal-Mart or others from seeking access to the city. These analysts note that living wage laws in Santa Fe, NM and San Francisco have not deterred employment growth in the retail sector in those cities.

High-road economic development strategies are, of course, largely alien to the historic practices of the Southern states, whose approach to economic development has largely consisted of suppressing labor and writing large subsidy checks to mobile corporations. But it’s not quite that way everywhere in the country. A good journalistic approach on this issue would be to investigate the range of living wage ordinances that have been implemented around the country and attempt to sort out their effects on wages and employment–or at least to survey existing studies.

Here are two good examples: The Los Angeles Living Wage Study, chaired by a UC-Riverside economics professor, and a study of the Santa Fe living wage by economists at the University of New Mexico. Both studies found that the laws produced minimal negative effects on employment. A 2006 literature review of existing studies by the Economic Policy Institute provides further corroboration of that conclusion.

It would not take too long for a smart editorial writer to familiarize himself or herself with such studies. Until then, to paraphrase Bob Dylan, the RTD should refrain from criticizing what it doesn’t understand.

Published on August 21, 2006 at 6:55 pm