"Misapplying the theory I mislearned in college."
By James Kwak
“An unwavering focus on the customer.” Those words grace the cover of Wells Fargo’s 2014 annual report:
The 2015 annual report features this:
One can only wonder what the bank’s public relations whizzes will think of for the 2016 version.
We know now that, over the past five years, more than five thousand Wells Fargo employees illegally opened more than 1 million bank accounts and applied for hundreds of thousands of credit cards on behalf of existing customers—all to meet aggressive “cross-selling” targets set by bank executives. At the moment, we’re not exactly sure who knew what and when they knew it. But as with the rest of the once-shocking-but-now-mundane banking scandals of the past decade—the London Whale, fixing LIBOR, manipulating foreign exchange markets, money laundering, and so on—either the bank’s top executives were unaware of what was going on, which is recklessly incompetent, or they were aware of it, which is worse.
The mantra of “the customer” is standard corporate PR fare, of course, but Wells Fargo takes it to absurd and now apparently Orwellian extremes. The bank’s “strategy” highlights the following sentence:
We start with what the customer needs — not with what we want to sell them.
This is exactly the opposite of how the bank actually behaved.
Even when caught ripping its customers off, the bank responded by saying:
Wells Fargo is committed to putting our customers’ interests first 100 percent of the time.
Now, there actually are businesses that do put their customers’ interests first, at least almost all of the time. I used to work at one. At Guidewire Software, our primary goal in everything we did was to make our customers successful using our software. If we did that, we believed that everything else would take care of itself.
This was a sensible strategy, for a few reasons. We were selling big, expensive software systems to large property and casualty insurers—a relatively small world in which everybody knows everybody else. One failed project and our reputation would be seriously harmed, perhaps fatally. The industry had seen a long succession of large and costly software project failures, in which companies spent tens of millions of dollars and had little or nothing to show for it. Promising that we would move heaven and earth to make our customers successful—and actually doing it—was a way of differentiating ourselves from the competition. Finally, we knew that each customer relationship would last for years and years, through upgrades and additional sales of new products. For us, “the customer” was, in each case, a group of real people with whom we had real relationships. And when you know people personally, you genuinely want them to succeed.
Wells Fargo is not that kind of business.
For Wells Fargo, as for any megabank, “the customer” is not a person—it’s a dataset, with means, medians, correlations, and metrics, like “cost of acquisition” and “churn rate” and “marginal profitability of product X.” Yes, customers matter, but in the generic sense that applies to all businesses: customers are where the money comes from.
There is another, more specific reason why customers matter to Wells Fargo: they play an important role in the bank’s pitch to investors. It’s cheaper to market to your existing customers than to people who aren’t your customers. You can put ads in their credit card bills, on your online banking web pages, on signs in your branches, and in the scripts that your tellers and call center operators repeat no matter what the question is. You can (supposedly) mine your customers’ purchasing data to figure out when they are going to need a mortgage or a home equity loan, and you can pounce before they have a chance to look at their other options. You can bait them with a loss-leader free checking account and reel in a mortgage with big up-front fees and a steady stream of servicing revenue, or an Individual Retirement Account invested in a crappy, high-fee mutual fund.
This is what those magical words cross-selling mean to bank executives. And the idea that you can continuously increase profits by pushing more and more overpriced junk onto the same set of suckers is music to the ears of investors. But if your story is based on cross-selling, you need to provide numbers to back it up—like the number of products per customer, the number of customers with more than one product, the percentage of deposit account customers with a credit card, and so on. To get those numbers, you set targets for tens of thousands of frontline employees . . . and we know how that story ends.
The irony of the cross-selling story is that it isn’t actually good for customers. There are business models that are good for a company and its customers. In fact, that’s the usual state of affairs: you pay $5 for a taco that your local taco truck made at a marginal cost of $2 and is so delicious that you would have paid $8 for it. Everyone wins.
But if you buy all your financial products from the same bank, the only winner is your bank. Wells Fargo probably has perfectly good checking accounts, with big ATM networks, online systems, and fancy smartphone apps. But you should get your credit card from whoever charges the lowest interest rate or offers the best rewards; you should get your mortgage from whoever charges the lowest rate, without hidden fees and penalties; and you should invest your money with Vanguard (or some other fund management company that has low-cost index funds). Banks like to talk about the advantages of “one-stop shopping,” but in the age of technology there really aren’t any. You can set up automatic payments from one institution to another, and that’s that. Cross-selling is one of those strategies that generates profits not by providing better tacos to taco lovers, but by taking advantage of existing customers’ limited attention to sell them mediocre products that they wouldn’t have chosen otherwise.
So, yes, Wells Fargo is focused on its customers—but not in the sense that it cares about the people who use its products. The Customer is a story to tell Wall Street in order to prop up the stock price for as long as possible. People who have Wells Fargo accounts are the ore that has to be mined for golden nuggets of data to embellish that story. That’s the only sense in which Wells Fargo puts its customers first.
By James Kwak
There’s been a fair amount of triumphalism about the Census Bureau’s recent report on income and poverty, which showed a 5.2% increase in median real household income from 2014 to 2015. For example:
I usually try to be restrained, but this is unambiguously the best Income, Poverty & Health Insurance report ever. https://t.co/YdN4HgtIvR
— Jason Furman (@CEAChair) September 13, 2016
But, and I don’t think Jason Furman would disagree, this is not particularly strong evidence that everything is rosy, or that “America is already great,” as some would have it. As many people have pointed out, median household income in 2015 was only back to its 1998 level. Actually, when you take into account a methodological change in 2013, it’s still 5% below its 1999 peak.
Also, if you’re going to celebrate the good years, you should acknowledge the bad years. Here is the annual change in median income for every year since 1985, ranked from best to worst:
The years in red are the years of the current recovery. As you can see, this is the first time annual growth has exceeded 0.3%, despite the fact that the economy has been growing every year.
Now, it’s possible that 2015 will be the first of several good years. Maybe unemployment has finally reached the point where companies have to offer higher wages to workers, instead of telling them to apply for government benefits. On the other hand, going simply by how long recoveries usually last, we are due for a recession—which would mean another economic cycle in which ordinary households end up worse off.
There’s no way to know for sure, of course, because this is macroeconomics. But on its own, one data point does not make a trend. And so far, this century has not turned out so well for the median American family.
By James Kwak
In the Times a couple of days ago, Gregory Mankiw made a half-hearted case for eliminating the estate tax that was so weak I’m not even sure he convinced himself. The core of his argument is that the estate tax violates the principle of horizontal equity, according to which “similar people should face similar burdens.” The problem, on his view, is that between two rich couples that each amass $20 million, the Profligates who consume their wealth before death end up paying lower taxes than the Frugals who maintain a modest lifestyle. “To me, this does not seem right,” Mankiw concludes.
First of all, it’s not even clear why this example violates horizontal equity. The Profligates and the Frugals are not “similar people”—Mankiw specifically constructed the example that way. They may have each earned the same amount of money, but they have vastly different consumption habits.
Second, it’s not clear that the Frugals are paying more tax than the Profligates. Their estate will pay higher taxes, but by then they are dead; the estate tax does not directly limit their personal consumption in the slightest. In fact, the ones whose estate will pay the tax are the ones who apparently are not interested in consumption in the first place. Now, the defense of Mankiw is that the Frugals do care about how much money they can pass on to their children, so the estate tax does affect their utility. But that brings up the third, and most important point . . .
Only an economist, and an economist of a certain type, could evaluate the fairness of the estate tax by comparing two wealthy families. Mankiw’s point is that the estate tax is unfair to the Frugals—as compared to whom, exactly? Remember, Mr. and Mrs. Profligate spent most of their money before they died; their children get next to nothing. The Frugals’ kids end up with about $16 million ($20 million less the 40% federal estate tax on the amount above the exemption), but they’re still the richest people in the story. The Profligates’ kids get the remaining crumbs of the parents’ once-impressive fortune—yet we’re supposed to feel sorry for the Frugals.
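The arithmetic behind that "$16 million" figure is easy to check. A quick sketch, where the roughly $10.9 million combined exemption for a married couple in 2016 is my assumption (the 40% rate is the one cited in the column):

```python
# Rough arithmetic behind the "$16 million" figure. The combined
# exemption for a married couple (about $10.9 million in 2016) is an
# assumption; the 40% rate is the one cited in the column.
estate = 20_000_000
exemption = 10_900_000   # assumed 2016 combined exemption
rate = 0.40              # federal estate tax rate above the exemption

estate_tax = rate * max(estate - exemption, 0)
to_heirs = estate - estate_tax
print(f"estate tax: ${estate_tax:,.0f}")
print(f"left to the Frugals' kids: ${to_heirs:,.0f}")
```

That lands within rounding distance of "about $16 million."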
But the people we should really be thinking about are everyone else’s children. It’s a little peculiar to profess to care about equal treatment and then proceed to talk solely about rich people. What about Mr. and Mrs. Poor and their children, who far outnumber the Profligates and the Frugals? The Poors’ children inherit nothing because their parents died with nothing; the Frugals’ kids inherit $16 million although their parents died with $20 million. There’s no reason to think the second generation of Frugals is any more deserving than the second generation of Poors—yet they are born into comfort and security, while the Poors face hardship and anxiety. From this perspective, the only fair estate tax would be one with a rate of 100%. And even then, the Frugals’ kids would be better off than those of the Poors, since they would have the most productivity-enhancing childhood that money can buy.
The bottom line is this: You can’t argue against the estate tax on fairness grounds, unless your powers of abstraction are so awesome that you can somehow overlook the fact that most people wish their parents had to pay the estate tax.
As I said, I’m not sure that Mankiw bought his own argument, because he then concedes that “not all economists share my judgments about the estate tax.” So he pivots into a different point that he thinks “everyone can agree on”: that “we need more stability in the tax code than we have had in the past.” And we can only have stability, he claims, if opposing sides can compromise on outcomes that both find tolerable.
This argument about tax code stability is just a special case of the more general claim you often hear that businesses need regulatory “predictability” so that they can make long-term plans. And in both cases, this position is either naive or disingenuous. The United States is still a democracy. And in our democracy, a party that controls both the White House and Congress can do more or less what it wants, at least when it comes to economic policy. For an example, look no further than the George W. Bush administration, which eliminated the estate tax (for only one year, because it had 51 but not 60 votes) back in 2001. Even if some nonexistent master statesmen were able to come to some sort of grand bargain on the estate tax, nothing would prevent Paul Ryan, Mitch McConnell, and (say) President Ted Cruz from eliminating it in 2021. Complaining about “uncertainty” is just a sophisticated way of complaining about the fact that your side might lose.
The big misdirection in Mankiw’s column, however, goes unstated: talking about the estate tax in isolation from the rest of the tax code and, for that matter, the economy. At the end of the day, the estate tax is about the inter-generational transmission of wealth. From that standpoint, the rest of the tax code effectively imposes a negative estate tax.
The main reason is step-up of basis at death. Ordinarily, when you sell an asset, you have to pay capital gains tax on your profits (sale price minus purchase price). But when you die, your heirs get to increase their cost basis to the value of the asset at the time of your death—so no one ever pays tax on the appreciation during your lifetime. This is very clearly a negative estate tax, since it makes assets worth more to your heirs than they were to you. In addition, you don’t pay capital gains tax until you sell an asset—so the longer you hold onto an asset, the lower your effective tax rate. This obviously benefits the wealthiest families the most, since they have the least need to sell assets. And if you do ever sell an asset, you get to pay capital gains tax at a preferential rate.
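A minimal sketch of the step-up effect, with invented numbers; the 23.8 percent rate (the then-top 20 percent long-term capital gains rate plus the 3.8 percent net investment income surtax) is my assumption, not a figure from the column:

```python
# Invented numbers; the 23.8% rate (20% top long-term capital gains
# rate plus the 3.8% net investment income surtax) is an assumption.
purchase_price = 1_000_000
value_at_death = 5_000_000
cap_gains_rate = 0.238

# If the owner sold just before death, the full appreciation is taxed.
tax_if_sold = cap_gains_rate * (value_at_death - purchase_price)

# If the heirs sell just after death, the basis "steps up" to market
# value, so the lifetime appreciation is never taxed at all.
stepped_up_basis = value_at_death
tax_if_inherited = cap_gains_rate * (value_at_death - stepped_up_basis)

print(f"tax if sold before death:        ${tax_if_sold:,.0f}")
print(f"tax if heirs sell after step-up: ${tax_if_inherited:,.0f}")
```

Dying with the asset makes nearly a million dollars of tax liability simply disappear, which is why step-up works like a negative estate tax.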
In short, most of the tax code benefits families with more wealth than they can consume in one lifetime. In this context, the estate tax is really just an imperfect, partial, insufficient way to slightly mitigate the inter-generational transmission of wealth and the development of an aristocracy of hedge fund managers and their children. The big question isn’t whether we should have an estate tax or not. It’s whether we should take much more aggressive measures to give all children a fair shot at a comfortable and prosperous future. Eliminating the estate tax, without addressing the sources of inter-generational inequality, would only accelerate the transformation of America into a new feudal society.
By James Kwak
Imagine that while George W. Bush was governor of Texas and president of the United States, various people and companies decided to write him checks for hundreds of thousands of dollars, just because they thought he was a great guy. Those people and companies, just coincidentally, happened to have interests that were affected by the policies of Texas and the United States. But when he thanked them for their money, Bush never promised to do anything in particular for them. You would be suspicious, right?
Now, that’s roughly what has been happening with the Clinton Foundation. Various people and companies have been writing checks for millions of dollars to the Foundation during the same time that Hillary Clinton was secretary of state and, following that, the most likely next president of the United States—a title she has held since the day Barack Obama’s second term began. (The Clintons finally decided to scale back the Foundation earlier this week.)
There are two main defenses for the Clintons’ actions. Both are distressingly naive.
One, made by Kevin Drum among others, is that Clinton didn’t actually do any favors for her Foundation donors. So even if people were trying to buy access and influence, they didn’t get any, and there’s nothing to see here.
First of all, there is evidence, some compiled by Jeff Stein, that Foundation donors were more likely to gain access to the secretary of state. On an individual basis, I’m sure that each of these meetings could be justified. But the same thing is true whenever a lobbyist arranges a meeting for a client with a member of Congress. The question is whether giving money increases your chances of getting through the door. We’ll probably never have the data you would need to answer that question.
More generally, what matters is the impact that a donation has on the donee. Anyone who will want to raise money in the future naturally finds it difficult to take actions contrary to the interests of the people who are most likely to give that money. Donating to the Clinton Foundation is a great way to signal that you might donate more money in the future. And that means that, somewhere in the corner of her massive brain, Hillary knows that making a certain decision will reduce the Foundation’s future revenues. That’s why we worry about campaign contributions, remember? If you want to argue that Hillary Clinton is so incorruptible that the standards we apply to other politicians shouldn’t apply to her—well, be my guest.
The second defense is: The Clinton Foundation is a charity, for God’s sake! It helps people! This is even more naive, for a simple reason that I don’t think has been emphasized enough.
The Clintons are vastly wealthy. Since 2007, they have earned more than $150 million. They have far more money than any family can reasonably consume in a lifetime. Bill and Hillary are getting on in years, they only have one child, and she is married to a hedge fund manager. When you have that much money, a dollar in your foundation is as good as a dollar in your bank account.
Once you have all your consumption needs covered, what do you need money for? If you’re a Clinton, you want to have an impact in the world, reward your friends, and burnish your legacy. A foundation is an excellent vehicle for all of those purposes, for obvious reasons. (That’s why it’s hardly a sacrifice for Mark Zuckerberg to donate the vast majority of his Facebook stock to a private company that he controls.) It is also an excellent way to transfer money to your daughter free of estate tax, since she can control it after you die. The fact that it may or may not do good things for the world is irrelevant. A $1 million donation to the foundation might as well be a $1 million donation to you, because, at the end of the day, your marginal $1 million is going to your foundation either way.
So the real question is this: Do you think it would be appropriate for people and companies affected by U.S. policy to be writing $1 million checks directly to the Clintons? If the answer is yes, then you should be against any campaign finance rules whatsoever. If the answer is no, you should be worried about the Clinton Foundation.
By James Kwak
Last week, the Washington Post summarized a draft paper by Jonathan Rothwell of Gallup on the demographic correlates of support for Donald Trump. As various people have noted, the headline was a bit over-the-top:
The “widespread theory,” of course, is the idea that Trump supporters are, at least in part, motivated by economic anxiety—an idea that sophisticated columnists like Matt Yglesias like to make fun of, as I discussed recently.
The article itself, as many people have noted, is considerably more circumspect than its headline. (Note to those who don’t know: Headlines are written by editors, not the people on the byline.) This is the summary near the top of the article:
According to this new analysis, those who view Trump favorably have not been disproportionately affected by foreign trade or immigration, compared with people with unfavorable views of the Republican presidential nominee. The results suggest that his supporters, on average, do not have lower incomes than other Americans, nor are they more likely to be unemployed. [Actually, according to the paper, they are more likely to be unemployed, but that’s not particularly important.]
Yet while Trump’s supporters might be comparatively well off themselves, they come from places where their neighbors endure other forms of hardship. In their communities, white residents are dying younger, and it is harder for young people who grow up poor to get ahead.
The paper itself is more circumspect still. Here’s an excerpt:
Higher household income predicts a greater likelihood of Trump support overall and among whites, though not among white non-Hispanic Republicans. In other words, compared to all non-supporters or even other whites, Trump supporters earn more than non-supporters, conditional on these factors, but this is partly because Republicans, in general, earn higher incomes, and the difference is no longer significant when restricted to this group. …
On the other hand, workers in blue collar occupations (defined as production, construction, installation, maintenance, and repair, or transportation) are far more likely to support Trump, as are those with less education. … Since blue collar and less educated workers have faced greater economic distress in recent years, this provides some evidence that economic hardship and lower socio-economic status boost Trump’s popularity.
Before we go further, let’s make sure we understand exactly what this paper does and does not show. For the most part, it’s based on a probit regression of the likelihood a person will support Trump (that’s the dependent, or left-side variable) on a long list of variables for that person (employment status, religion, etc.) and a long list of variables measured for the area in which that person lives (share with BA degree, share of manufacturing jobs, etc.). For each variable, there is a regression coefficient that shows the impact of that variable on the likelihood of supporting Trump, and then an indication of whether that variable is statistically significant. For example, in model 1, looking at all people, being unemployed increases the chances that someone will support Trump by about 5 percentage points, which is significant at the 99% level.
There are two reasons why this paper says less than readers might think. The first is that many of the right-side (explanatory) variables are highly correlated. When you have highly correlated explanatory variables, you can get wildly inaccurate results. Let’s say you are trying to figure out what factors determine the number of words in a child’s vocabulary. In your model, you include age, since kids learn more words as they get older. You also include grade in school, since they learn more words the longer they spend in school. Do you see the problem? Age and grade are almost perfectly correlated; you’re basically using two variables when there is only one in real life—so the actual results of your model will be highly volatile. You might find that age is significant but not grade; or vice-versa; or that both are significant. If both are significant, you might conclude that both have a positive impact on vocabulary: that is, fourth graders know more words than third graders, but within any grade, older kids know more words than younger kids. That sounds plausible—but it would be a mistake. When explanatory variables are highly correlated, results are extremely sensitive to outliers. If you have one older kid in fourth grade who knows lots of words, you could get a positive coefficient on age; but if that one kid doesn’t know very many words, you could get a negative coefficient.
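The age-versus-grade instability is easy to reproduce. Here is a toy simulation (all numbers invented) in which vocabulary truly depends only on age, yet the individual coefficients on age and grade bounce around wildly while their sum stays pinned down:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(n=200):
    # Age in years; grade tracks age almost perfectly -- the collinearity.
    age = rng.uniform(8, 12, n)
    grade = age - 5 + rng.normal(0, 0.1, n)
    # True model: vocabulary depends ONLY on age (500 words per year).
    vocab = 500 * age + rng.normal(0, 300, n)
    # Ordinary least squares with an intercept, age, and grade.
    X = np.column_stack([np.ones(n), age, grade])
    beta, *_ = np.linalg.lstsq(X, vocab, rcond=None)
    return beta[1], beta[2]  # coefficients on age and grade

coefs = np.array([fit() for _ in range(200)])
print("sd of age coefficient:  ", round(coefs[:, 0].std(), 1))
print("sd of grade coefficient:", round(coefs[:, 1].std(), 1))
print("sd of their sum:        ", round(coefs.sum(axis=1).std(), 1))
```

The sum of the two coefficients (the total effect of one more year) is estimated precisely, but how that effect gets split between age and grade is essentially arbitrary from sample to sample. That is multicollinearity in miniature.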
How does this apply to this paper? The individual explanatory variables include, among other things: employment status (e.g., self-employed); religion; “works for government”; sex; marital status; “works in blue collar occupation”; union member, non-government; race and ethnicity; highest degree; and household income. The regional explanatory variables include: share of college graduates; share of manufacturing jobs; median income; share of white people; and white mortality rate. All of those variables are obviously correlated with income, particularly highest degree. So we have the same problem described above—too many variables for the amount of variance in our sample—which produces arbitrary results. (One way to think about this is that you could use a bunch of those variables to predict household income pretty accurately, at which point the household income variable itself becomes unnecessary.)
The Washington Post writeup remained blissfully unaware of this problem:
After statistically controlling for factors such as education, age and gender, Rothwell was able to determine which traits distinguished those who favored Trump from those who did not, even among people who appeared to be similar in other respects.
This is the argument that the statistical significance of the income coefficient means that, among people who are otherwise identical, higher income does have an effect (pro-Trump, in this case). But as explained above, that’s a fallacy. Multicollinearity, as this statistical problem is called, means that individual coefficients are unreliable. The model as a whole may predict support for Trump pretty well, but you have no way of knowing which variables are doing the predicting.
That’s the first problem with this paper: we can’t trust the coefficients. The second problem is one of interpretation. Even if we accept for a moment the coefficients on the explanatory variables, the paper says nothing about why people actually support Trump; it’s just a long list of correlations.
So imagine this simple world. There are 100 people. 50 are poor and 50 are rich. In each group, one half (25 people) vote based on their feelings, such as economic anxiety. The other half vote based on their interests. So the electorate looks something like this:

            Poor   Rich
Feelings     25     25
Interests    25     25
Of the people who vote their feelings, let’s say economic anxiety does increase support for Trump. So Trump gets 15 of the people in the Feelings/Poor box but only 10 people in the Feelings/Rich box. For people who vote their interests, however, income is positively correlated with Trump support, since he has promised to cut their taxes. So Trump gets 20 of the people in the Interests/Rich box but only 5 people in the Interests/Poor box (because the other 20 realize that Hillary Clinton’s policies will be better for them).
Now our exit poll looks like this:

            Poor        Rich
Trump       20 (40%)    30 (60%)
Clinton     30 (60%)    20 (40%)
Trump gets only 40% of the poor voters, but 60% of the rich voters.
That’s what the Gallup paper shows, and that’s what the Washington Post editors used as their headline: rich people prefer Trump, so economic anxiety is a myth. But I constructed this outcome using a model that explicitly incorporated economic anxiety as a factor (in the Feelings row, Trump does better with poor people). In other words, the economic anxiety story is consistent with a study showing that, on average, rich people prefer Trump.
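The toy electorate fits in a few lines of code; the cell counts are the hypothetical ones from the example above, not data:

```python
# Hypothetical electorate from the example: 100 voters, 25 per cell.
trump_votes = {
    ("feelings", "poor"): 15,   # economic anxiety boosts Trump here
    ("feelings", "rich"): 10,
    ("interests", "poor"): 5,   # Clinton's policies serve their interests
    ("interests", "rich"): 20,  # Trump's promised tax cuts serve theirs
}
cell_size = 25

for income in ("poor", "rich"):
    total = sum(trump_votes[(m, income)] for m in ("feelings", "interests"))
    print(f"{income} voters for Trump: {total / (2 * cell_size):.0%}")
```

Anxiety gives Trump a clear edge in the Feelings row, and the aggregate exit poll still shows him doing better with the rich, because the Interests row swamps it.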
The lesson is very simple, and it’s one that everyone knows before becoming a poll-reading pundit: People make decisions for different reasons. Something can be an important factor—here, it gives Trump a 20-point advantage among half the population—but get outweighed by some other important factor. Or, to put it in sophisticated language, you can’t use income as an instrument for economic anxiety, because income affects Trump support through other channels (in this example, because some rich people realize that Trump’s tax cuts will be good for them). This is really just the same mistake that Matt Yglesias made yesterday with race and age.
As should be obvious, I think that economic anxiety is a reason why some people support Donald Trump. I can’t prove it from poll breakouts, or from the Gallup paper, because this type of hypothesis can’t be proven or disproven with that type of data. That’s the one thing you should remember the next time someone argues that some demographic statistics show why some politician is popular.
By James Kwak
For weeks now, Vox columnist Matt Yglesias has been mocking the idea that “economic anxiety” is a substantial factor in the Rise of Trump. Here’s one of dozens of examples:
It’s strange how even $12 million in illicit Ukrainian money wasn’t enough to slake Paul Manafort’s economic anxiety.
— Matthew Yglesias (@mattyglesias) August 15, 2016
It’s easy to see where this particularly highbrow putdown (also used by other twitterers) came from. Belittling the economic anxiety explanation has two understandable if not entirely pure motivations. One is the idea that chalking up Trump’s success to economic factors minimizes the central role of racism in his campaign; pointing out other reasons people might have for voting Trump distracts from the main issue or can even be seen (in an illogical sort of way) as an apology for Trump’s racism. The second motivation is that, since Hillary Clinton decided to run on the poorly worded “America is already great” theme, talking about economic insecurity only plays into the hands of the enemy; instead, we should just pretend everything is hunky-dory. (Yglesias does not share this second motivation.) But to many people, including me, it seems bizarre to insist that economic anxiety has nothing to do with Trump’s success, and much simpler to acknowledge that some of his voters are racists, some are worried about their economic prospects, and some are both.
Today, instead of letting the by-now-stale joke simply fade away, Yglesias decided to double down with a column arguing that Trump is all about “white grievance politics,” not economic anxiety.
Yglesias’s first point is this:
not only is white racial resentment clearly a statistical correlate of support for Donald Trump, it’s a perfectly good reason to support Donald Trump.
(He uses “good” to mean reasonable given your perceptions of the world, not morally good.) That’s completely true.
Then he goes on to claim that “adding an economic anxiety factor to your account doesn’t actually help to explain anything.” But here his arguments don’t make any sense. Here’s the first one:
Trump’s supporters, for example, are considerably whiter and considerably older than the American population at large. If the economic problems of the past decade had been unusually hard on the white and the old, then an economics-focused explanation could be valuable. In reality, things have been rougher on nonwhites and rougher on younger cohorts.
To see how silly this argument is, consider the racial dimension. The fact that Trump has less support among nonwhites is explained by the fact that he is a Republican and a racist. Let’s say there is such a thing as economic anxiety, and it makes you more likely to be a Trump supporter. African-Americans are somewhat more likely to have economic anxiety, so more of them should vote Trump, all other things being equal. That’s Yglesias’s point. But other things aren’t equal; being African-American makes you much, much less likely to be a Trump supporter for other reasons (party, racism). Add those factors together, and voilà! Trump has better numbers among whites than among African-Americans. This is entirely consistent with the economic anxiety interpretation. (Conceptually, Yglesias is using race as an instrument for economic anxiety when the dependent variable is Trump support. This only works if race has no effect on Trump support other than via economic anxiety.)
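That composition argument can be checked with a toy simulation: give every voter the same boost from economic anxiety, but give one group a large negative baseline for other reasons, and the aggregate gap still runs the "wrong" way. All magnitudes here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Group membership and anxiety; anxiety is slightly MORE common in the
# group that supports Trump less, which is the supposed contradiction.
black = rng.random(n) < 0.13
anxious = rng.random(n) < np.where(black, 0.5, 0.4)

# Support probability: anxiety adds the SAME +15 points for everyone,
# but a large negative baseline (party, racism) applies to black voters.
p = 0.35 + 0.15 * anxious - 0.30 * black
support = rng.random(n) < p

white_rate = support[~black].mean()
black_rate = support[black].mean()
print(f"white support: {white_rate:.0%}, black support: {black_rate:.0%}")
```

Anxiety raises support by the same 15 points inside both groups, yet whites support Trump far more in the aggregate, which is exactly the pattern the post describes as entirely consistent with the economic anxiety interpretation.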
The age dimension behaves the same way, just less obviously. Young people skew liberal and non-racist compared to old people. (For the record, I’m middle-aged.) So they will support Trump at lower rates than old people, even though they are poorer.
Besides, while it is true that Trump runs better among whites than blacks, the question should be: relative to what? I don’t place a lot of faith in poll breakouts (low sample sizes), but it’s not clear he’s doing better among whites (or old people) than Mitt Romney did in 2012, and he may be doing considerably worse. That comparison is complicated by the fact that Barack Obama is himself African-American. But if anything, the poll data (which, again, I am not convinced by) tend to undermine the idea that this is an election about white privilege.
Wait—I just reread the column, and that was the only actual argument against the economic anxiety explanation. Most of the rest is Yglesias acknowledging that people do have real economic grievances.
Here’s a half-argument, near the end:
But when Trump voters say they’re upset about needing to press one for English, mad that Black Lives Matter protesters are slandering police officers, and worried that Muslim and/or Mexican immigrants are going to murder their children, it’s perverse to interpret them as secretly hankering for a refundable child care tax credit.
First of all, do we know what proportion of Trump voters are worried that brown people are going to murder their children? This doesn’t rebut the idea that different people vote Trump for different reasons. It’s possible that there are people supporting Trump because they are worried about the cost of child care.
Second, there are many reasons to think that insecurity, economic or otherwise, makes people more receptive to racial appeals. See, for example, the relative support for Hitler among small businesspeople and industrial workers. (I believe that Godwin’s Law has been suspended until November 8, and perhaps—though hopefully not—beyond.)
There is a hint of another argument here:
If Clinton becomes president and has the opportunity to enact her agenda of higher minimum wages, expanded Social Security benefits, expanded Medicaid eligibility, subsidized child care and college tuition, and $275 billion in new infrastructure spending, a huge share of the benefits will flow to economically struggling white people — and rightly so.
The argument would be that since Hillary Clinton’s policies are more likely to actually help poor people than Trump’s, it doesn’t make sense that economically anxious people would support Trump. But this argument is so silly that I don’t think the very smart Matt Yglesias is making it, because it assumes that people know and vote their economic interests. Ronald Reagan disproved that, and I’m sure he wasn’t the first one.
The simple economic anxiety argument goes like this: Many Americans face real economic insecurity—stagnant real wages, higher health care costs, lower homeownership rate, “gig economy,” low workforce participation rate, etc. They think “the system”—whatever they mean by that—isn’t working for them. Hillary Clinton represents “the system” much more than Donald Trump, particularly since she’s claiming most of the legacy of Barack Obama. So they vote Trump. And to repeat: The reason white people support Trump at much higher rates than black people, even though white people are richer than black people, is that Trump is a racist. Is that so hard to understand?
It was a clever joke. But it’s time to move on.
… goes to Chain of Title, by David Dayen (with apologies to Jennifer Taub, Alyssa Katz, Michael Lewis, and many others, including my co-author, Simon Johnson).
Chain of Title isn’t primarily about the grand narrative of the financial crisis: subprime lending, mortgage-backed securities, collateralized debt obligations, credit default swaps, synthetic CDOs, the collapse of the global financial system in 2008, and the frenzied bailout that followed. Instead, it’s about foreclosure fraud: how mortgage servicers, banks, and the law firms they hired systematically broke the law to force people out of their homes. At the same time, it’s about securitization fraud: the fact that an untold number of securitizations were not properly executed, meaning that they violated the terms of their underlying agreements, meaning that their investors should have been able to force rescission of the entire deal.
The substance of the argument has been well known for years, so I’ll try to pack it into one sentence: The banks creating mortgage-backed securities failed to properly transfer notes (the documents proving a borrower’s obligation) to the trusts that issued the MBS, so not only was the securitization itself faulty, but the trust did not have legal standing to foreclose on homeowners—so the banks paid third-party companies to forge the required paper trail, and lawyers knowingly submitted fraudulent evidence to courts, who usually accepted it.
This has been common knowledge on the Internet since 2009 or 2010. But Dayen does what good writers do: he tells the story of a few real human beings figuring out the workings of this vast fraudulent system on their own, fighting against it … and ultimately, for the most part, losing. The book makes you feel the anger, disbelief, hope, and disappointment of those days over again. Even though I knew how the story ended—in a whimper of liability-eliminating settlements and self-congratulatory back-patting by politicians—it was still painful to read.
As I said earlier, Chain of Title isn’t about the grand narrative of the collapse of the financial system. Because even if the banks had been pushing the paperwork properly, the crisis still would have happened: Washington Mutual still would have paid mortgage brokers to push Option ARMs onto homebuyers who could have qualified for prime loans, AIG still would have sold all those credit default swaps on senior tranches of CDOs, John Paulson and Fabrice Tourre still would have concocted ABACUS, and small towns in Norway still would have bought those MBSs and CDOs. The missing transfers weren’t a cause of the financial crisis.
But Chain of Title is about something bigger and more important: the corruption of our legal system and the political system behind it. The banks and their enablers—their lawyers (including the big-city law firms they hired for the big cases and negotiations) and the document production companies that churned out fraudulent paper—didn’t just forget to sign some documents and mail them to the right place; they covered up those mistakes by going into courtrooms all across the country, submitting documents that in many cases were obviously fake, and lying about where those documents came from. If you are, say, a defendant in a drug possession case, I strongly advise you not to try this. But apparently if you are a bank, it works just fine.
The banks clearly knew what was going on. They were ordering replacement documents from companies like DocX that sold them off a price list (p. 218). In the rare cases that lawyers were called out for submitting obviously fraudulent evidence—for example, a notary stamp used to notarize a signature dated before that stamp even existed—they simply withdrew the evidence and replaced it with a newly forged copy.
Why did they get away with it? Because, as many people have noted, we have two legal systems in this country: one for the wealthy and well connected, and one for the rest of us. You don’t get much better connected than the major banks. Arthur Levitt wasn’t much of an SEC chair, but he did say this back in 2000 (p. 59): “It won’t come out for ten years, and the banks know it. By then they’re already on to the next scam.” Or, as one bank executive said to a lawyer contesting false documents (p. 129), “You’re going to destroy the country. And if you don’t stop, we’ll just go to Congress and get the laws changed.”
Which is more or less what happened, at both the state and federal levels. In Florida, many judges simply refused to entertain defense lawyers’ claims, even though they went to the very core of the foreclosing banks’ case: whether or not they had standing to sue in the first place. Staff members in the attorney general’s office investigating foreclosure fraud were tossed out of their jobs.
In Washington, things were handled much more … professionally, you might say. Barack Obama talked about the importance of helping homeowners, while quietly protecting the banks’ backs. The White House told the IRS not to investigate wholesale violation of tax laws (because the trusts did not hold the mortgages they said they did) (p. 262); the Department of Justice hindered an investigation by a U.S. attorney’s office in Florida (p. 263); and the Department of Housing and Urban Development tried to get state attorneys general to fall in line behind a toothless settlement (p. 266). In the end, the banks paid a few tens of billions of dollars in penalties—most of it fake, as Yves Smith showed long ago—and promised to obey laws they were already supposed to obey, and which they then found new ways to break.
So we have two legal systems: one for banks and law firms, and one for ordinary homeowners. And the reason we have two legal systems is because our political system is, well, rigged—how else would you put it? As Dayen’s story shows, there was widespread evidence of systematic lawbreaking by banks that, by rights, should have cost them hundreds of billions of dollars and should have sent hundreds of people to jail (for knowingly forging evidence or knowingly submitting forged evidence). Yet the political establishment, from Republican attorneys general to a Democratic administration in Washington, closed ranks behind the big banks—at best because they thought it was necessary to keep the economy going, at worst because they were bought and paid for by campaign contributions and promises of private sector jobs.
This is the defining issue of our day. The financial crisis itself was produced by reckless bankers, aided and abetted by credulous or self-interested politicians and regulators. But the bigger scandal is that, in the wake of that colossal example of economic devastation, the powers that be chose to protect the big banks at the expense of ordinary families. They did so because that was the path of least resistance. It was easier to bail out a handful of large banks—overlooking both securitization fraud and foreclosure fraud constituted a far bigger bailout than TARP—than to uphold the law, and it didn’t hurt that bank campaign contributions and lobbying expenses keep Washington afloat. That’s what happens when you have an electoral system dominated by large donors and big corporations and when you have a political class so jaded by the system that it treats rampant lawbreaking as an inconvenient problem to be swept under the rug as a favor to a constituent. That’s the world we live in. And that is the ultimate lesson we should all learn from the financial crisis.
By James Kwak
These days, some papers get more attention when they are in draft form than when they are published, in part because of the length of the review and publication cycle. Recall the Romer and Romer paper on the impact of tax changes, or the Philippon and Reshef paper on the financial sector, both of which made huge splashes years before they were finally published. My best-known paper also falls in that category. “The Value of Connections in Turbulent Times” began knocking around the Internet in 2013, and is only now being published by the Journal of Financial Economics—nine years after we began working on it, and at a time when the world seems to have completely moved on from its subject. (Note: that link will allow you to download the published version of the paper for free, but only until September 4, 2016. Thanks Elsevier, I guess.)
The paper, as you may have heard years back, shows that financial institutions with connections to Tim Geithner experienced abnormal positive market returns when his nomination to be treasury secretary was leaked and then announced in November 2008, and suffered abnormal negative returns when the news of his tax issues threatened to undermine his confirmation in January 2009. The interesting thing is that this is not ordinarily supposed to happen in the United States. Having connections to important government officials is not supposed to provide financial benefits to a company, and therefore nominations of those officials do not usually produce stock market bumps. The evidence is not completely one-sided, but in one representative example, researchers found that companies with connections to Dick Cheney did not experience abnormal returns in response to unexpected news about Cheney. This is in contrast to developing countries, where numerous studies have found that connections to important politicians are reflected in stock market valuations.
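For readers unfamiliar with the method, the kind of event-study calculation behind results like these can be sketched as follows. Every number here is made up, and the paper's actual specification is far more involved (a sample of connected firms, statistical tests, etc.); this just shows the basic market-model logic of an "abnormal return."

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up daily returns: 250 trading days of estimation window,
# then a 3-day event window around a hypothetical announcement.
market = rng.normal(0.0, 0.01, 253)
stock = 0.0002 + 1.2 * market + rng.normal(0, 0.01, 253)
stock[250:] += 0.02  # pretend the event adds 2% per day

# Market model: fit r_stock = alpha + beta * r_market on the
# estimation window (np.polyfit returns highest degree first).
beta, alpha = np.polyfit(market[:250], stock[:250], 1)

# Abnormal return = actual return minus the market model's prediction.
ar = stock[250:] - (alpha + beta * market[250:])
car = ar.sum()  # cumulative abnormal return over the event window
print(f"CAR over event window: {car:+.1%}")
```

A positive cumulative abnormal return around the nomination leak, for connected firms but not comparable unconnected ones, is the signature the paper documents.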
But it’s less clear why the markets (which, remember, are made up of at least some supposedly rational investors) thought that having connections to Geithner would pay off. Our main argument—after testing and discarding a bunch of other possibilities, such as the effect being driven by Citigroup, or by very large banks—is that, in the confusion of the time, it seemed likely that the treasury secretary would be given a large amount of discretion; and the more discretion that is available to an official, the more valuable it is simply to be able to get a meeting with him, or get him to return your phone call. You don’t have to think that Tim Geithner would consciously help out someone he served on a board with, or someone he had spent time with as president of the New York Fed; you just have to think that people are influenced by the people they spend time with, and so access matters.
This isn’t how we think our government is supposed to operate, but of course it’s how we all realize that it does operate. That’s one reason why individuals and corporations are willing to donate huge amounts of money to super PACs—so they can get access when they need it. What was unusual about the financial crisis was that, with the financial system and economy apparently falling apart, the value of those connections was much higher than usual. It also showed how, when push came to shove, the United States’ political institutions behaved more like those of a developing country than we would care to believe—the central point of Simon’s famous Atlantic article.
By James Kwak
Apparently, both parties have platform planks calling for the reinstatement of the Glass-Steagall Act of 1933, the law that separated investment banking from commercial banking until it was finally repealed in 1999 (after being watered down by the Federal Reserve beginning in the late 1980s). Bringing back Glass-Steagall in some form would force megabanks like JPMorgan Chase, Citigroup, and Bank of America to split up; it would also force Goldman Sachs to get rid of the retail banking operations it started in a bid to get access to cheap deposits.
In his article discussing this possibility, Andrew Ross Sorkin of the Times slips in this:
“Whether reinstating the law is a good idea or not, the short-term implications are decidedly negative: It would most likely mean a loss of jobs as part of a slowdown in lending from the biggest banks.”
I looked down to the next paragraph for the explanation, but he had already moved on to another unsubstantiated claim (that the U.S. banking industry would be at a competitive disadvantage). So, I thought, maybe it’s so obvious that Glass-Steagall would reduce lending that Sorkin didn’t think it was worth explaining. I thought about that for a while. I couldn’t see it.
In fact, basic intuitions about finance indicate that Glass-Steagall should have no effect on lending whatsoever. Banks should loan money to borrowers who are good risks: that is, those who pay an interest rate that more than compensates for the risk of default. (I’m simplifying a bit, but the details aren’t relevant.) Common sense tells you that whether the bank doing the lending is affiliated with an investment bank shouldn’t make a difference.
To dig a little deeper, banks should be making loans whose expected returns exceed the appropriate cost of capital. So, maybe Sorkin thinks that grafting an investment bank onto a commercial bank will lower its cost of capital. I can’t think of any obvious reason why this should be the case. Even if it does, however, we do NOT want the commercial bank to now start making more loans than it did before it was affiliated with the investment bank. Capital markets are supposed to direct funds to households and companies that can put them to their best use. Whether X (a house, a shopping mall, a factory, whatever) is a good use of capital does not depend on whether some bank merged with some other bank. If a lower cost of capital causes banks to start making more loans, those are bad loans, not good ones.
Let’s look at this from another angle. Assume Commercial Bank has a cost of capital of 10% and Investment Bank has a cost of capital of 8%. (In practice it’s usually the other way around, but then the argument for a combination is even weaker.) Say they merge, and new Universal Bank has an overall cost of capital of 9%. This does not mean that the appropriate cost of capital for Commercial Bank (a subsidiary of Universal Bank) is now 9%. It’s still 10%. That’s because the cost of capital is based on the risk profile of a company’s business—and, once again, that business hasn’t changed. And, indeed, even after the merger, Commercial Bank and Investment Bank will continue to be run as two separate entities, with a few specific touchpoints (e.g., Commercial Bank will sell its loans to Investment Bank to be securitized, and Investment Bank will try to sell wealth management services to Commercial Bank’s customers). And in the executive suite, the CFO and treasurer will charge an internal cost of capital to each business, based on its intrinsic attributes.
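The arithmetic in the paragraph above can be made explicit. The 10%, 8%, and 9% figures come from the text; the equal weighting that makes the blend come out to 9% is my assumption, as is the 9.5% example loan.

```python
# Standalone costs of capital, from the text; the 50/50 weighting
# that produces the 9% blended figure is an assumption.
cc_commercial = 0.10
cc_investment = 0.08
w = 0.5

cc_universal = w * cc_commercial + (1 - w) * cc_investment
print(f"Universal Bank blended cost of capital: {cc_universal:.0%}")  # 9%

# A loan is worth making only if its expected return clears the hurdle
# rate for the business bearing the risk, and the merger didn't change
# Commercial Bank's risk profile: its hurdle is still 10%.
def loan_is_good(expected_return, hurdle=cc_commercial):
    return expected_return > hurdle

print(loan_is_good(0.095))                       # False: fails the true 10% hurdle
print(loan_is_good(0.095, hurdle=cc_universal))  # True: the mistaken 9% test
```

Any loan that looks good at 9% but not at 10% is precisely the marginal lending the merger should not be allowed to create.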
Now, maybe Commercial Bank will want to issue more loans because Investment Bank wants to securitize them. (Does this story sound familiar?) But first, this shouldn’t happen. If demand from Investment Bank is causing Commercial Bank to increase its lending, then that should happen whether or not they happen to have the same parent (Universal Bank); Commercial Bank can already sell its loans to Investment Bank (or any of its competitors) without a merger. Second, even if it does happen—because, say, the CEO of Universal Bank orders Commercial Bank to increase its lending—those are loans we don’t want to exist. There is such a thing as too much credit, as we all should remember.
In sum, the idea that separating commercial and investment banking will result in fewer loans, and hence higher unemployment, seems like another of those industry talking points that, repeated often enough, become conventional wisdom. It’s one of those threats bankers like to make when politicians try to shrink their empires: Come after my bank, and look what happens to your economy. But in this case, it’s an empty threat.
By James Kwak
In an article about political correctness in contemporary politics, Amanda Hess of the Times writes:
“Politically correct” was born as a lefty in-joke, an insidery nod to the smugness of holier-than-thou liberals. As Gloria Steinem put it: “ ‘Politically correct’ was invented by people in social-justice movements to make fun of ourselves.”
As far as I can tell from publicly available sources, Amanda Hess went to college during the George W. Bush administration, so I take it she is working from sources (like Gloria Steinem) here. But she’s not far off the mark.
I went to college in the late 1980s, which is when the concept of political correctness was spreading. My first recollection of political correctness is of a friend saying, “That’s so PC,” talking about someone else who was always sure to participate in the left-wing cause of the day. “Politically correct” absolutely was a phrase that lefties came up with to make fun of themselves. And it did not have the connotation of criticizing other (politically incorrect) people that it has today. If you were PC, that just meant that you were against the Nicaraguan contras, in favor of divesting from companies that invested in South Africa, against discrimination against people with AIDS, in favor of a nuclear freeze, and so on. Those were the issues—not the vocabulary used by rich white frat boys.
In other words, being politically correct meant adopting the appropriately subversive position on every issue. It was a faintly derogatory term because it implied that you didn’t think about issues independently; you just lined up on whatever side the left was supposed to line up on. “Politically correct” was a way to describe the herding behavior of left-wing people—not a way to criticize right-wing people.
Today, political correctness has become one of the favored bogeymen of the Trump campaign and of conservatives in general. People of my generation could genuinely be either baffled or aghast: It was a JOKE! Don’t you get it? But etymology is not destiny, of course. Conservatives have changed political correctness into something it wasn’t back in the old days, and that’s just the way it is.
But in its original meaning—the idea that you have to toe the party line, to be the hardest of the hard core—it is among conservatives that political correctness reigns supreme. On virtually every issue—taxes, Obamacare, abortion, Medicaid block grants, Dodd-Frank, guns, climate change, even the theological status of Barack Obama and Hillary Clinton—every Republican falls in line for fear of offending the omnipotent Base. Do you really think that every Republican member of the House and Senate honestly believes that human activity has not had an impact on the climate? Do they honestly believe that allowing anyone to carry a gun makes the world a safer place? But they have to pretend that they are as stupid as they sound for fear of offending Exxon Mobil, the NRA, and the conservative activists who really do believe that climate change is a fantasy concocted by intellectuals and that the best solution to crime is more guns.
So yes, political correctness is a problem. It’s a problem among Republicans. As for Democrats, who can’t even figure out if we are for or against the TPP, we can’t even get our act together enough for political correctness to be an issue.
By James Kwak
You may know that SSRN, the shared web server for social science and law papers, was recently bought by Elsevier, a publishing company that charges what many people think are outrageous amounts for subscriptions to its journals or access to individual papers. Recently, Elsevier appears to have started taking down papers from SSRN without notifying the authors, even when the authors in some cases had valid permission to publish those papers on SSRN.
Elsevier’s defense is that this was a simple employee mistake (maybe like forgetting to rewrite direct quotes from someone else’s speeches?): “A couple of processing emails were sent incorrectly and in the wrong order.” I’m not buying it, though. Even if the wrong email was sent, they were still taking down papers unilaterally without bothering to ask if the author had the appropriate rights. If they’re not doing it in response to a DMCA notice, and they have people doing it manually, they could at least send the email first before deleting the paper.
If you’re interested in the issue, there is some detailed analysis in the comment section of PrawfsBlawg. In any case, it was enough for me to stop using SSRN. In my view, SSRN is really just ugly, clunky PDF hosting anyway. The main way I use it is as follows:
- Find out about paper through some better filtering mechanism (email, blog, Twitter, or, most often, Google).
- Google the title of the paper.
- See link to paper on SSRN.
- Follow link and download paper.
As you can see, nothing about that process relies on SSRN; if the paper were hosted anywhere else within reach of Google’s robots, it would work just as well. In theory, SSRN could be a place for people to actually discover relevant work, but for the most part it fails miserably at that because (a) it’s not as comprehensive as Google, so you can’t rely on a search there, and (b) its usability is stuck in the mid-1990s.
So anyway, I uploaded my papers to a new page on my personal website, which allows you to download PDFs just as well as SSRN does. It’s hosted by WordPress.com, which means that you could do the same with about ten minutes of setup effort and another minute or so per paper, all for free. Or I imagine you could use bepress or SocArXiv. It really doesn’t matter. As long as your paper is somewhere on the Internet that is visible to Google, it will work just as well.
Now: How can I completely eliminate my papers from SSRN (not just take down the PDFs) so they don’t appear at all? It’s not at all apparent from their horrible user interface.
Update: Thanks to anon for pointing out the MODIFY button. SSRN’s support page discusses a REMOVE button that doesn’t actually exist. Now my papers are all inactive on SSRN.
By James Kwak
“This is a Hillary Clinton, Elizabeth Warren, Bernie Sanders party. Our party has moved right, their party has moved really left.”
That’s Paul Ryan on the Democratic Party. In Vox, Matt Yglesias points out that Ryan is being disingenuous, but only “in part.” Yglesias goes on to say this:
“In a fundamental way, Ryan is correct — in 2016, the center of gravity in the Democratic Party is much closer to Bernie Sanders than it was in 2006 or 1996.”
Except, that just isn’t true.
You can look at this question in a couple of ways. You can look at the actual accomplishments and priorities of actual Democratic politicians over the past decade. You would see the adoption of Romneycare, the relatively moderate Dodd-Frank Act, the extension of most of the Bush tax cuts, a decline in domestic discretionary spending, the failure to do anything about the criminal justice system, the failure to do very much about climate change, and now the push to ratify the TPP. I don’t see a party shifting to the left.
But, you might say, that’s because Obama has been blocked by the GOP at every turn. So let’s look at the data:
Those are the ideological positions of the two parties’ Congressional delegations since 1995, from the absolutely indispensable Vital Statistics on Congress project, led by Norman Ornstein and Thomas Mann. (The years on the X-axis are the years of Congresses.) And, of course, they confirm what everyone knows: The Republicans have been getting more extreme, while the Democrats have stayed roughly the same. Even in the House, which should be more sensitive to ideological shifts, the Democrats remain the party of Bill Clinton, Barack Obama, and Hillary Clinton—none of whom is to the left of, well, anyone significant in recent party history.
Why does Yglesias, who is usually very sharp, make this mistake? His evidence is a campaign brochure created by Nancy Pelosi and Rahm Emanuel for the 2006 elections, which is relatively moderate; he then asserts, “Whatever you make of Hillary Clinton’s current policy agenda, there’s no denying that it’s far more left-wing across the board even as the status quo in many of these areas has shifted to the left.”
But that’s mistaking tactics for substance. In 2006, the Democrats were running against George W. Bush, a man widely seen at the time as a corrupt, incompetent warmonger; they only had to be as inoffensive as possible in order to win the elections. By contrast, Hillary Clinton is just emerging from what was, in some ways, a pretty standard primary campaign in which the establishment centrist tacked left to siphon votes away from the left-wing challenger. Furthermore, Democrats have controlled the White House for the past eight years, and although Barack Obama is personally popular, Americans in general feel insecure about their economic prospects and unhappy about the political system. Clinton has to run on something different, because few people think Obama’s centrist economic policies have worked. (Whether they have worked is an entirely different question.)
Or maybe Yglesias means to focus on tactics rather than substance. His concluding point is that his 2006 version of the Democratic Party was better at winning elections than the ideological version he sees today:
“Positioning themselves as a kind of big tent catchall alternative to [the post-Reagan, ideologically rigid Republicans] worked very well for Democrats across the 2006 and 2008 election cycles. Their ongoing reinvention as a more ideological party has coincided — not entirely coincidentally — with a period of weakness in down-ballot races, especially in midterm elections where turnout by young people is pathetically low.”
But again, I think this is just wrong. The Democrats won in 2006 because Bush was unpopular and they won in 2008 because the world was collapsing. They have not reinvented themselves in a more ideological form—see the chart above—and they have done poorly beginning in 2010 because of the rise of the Tea Party and ideologically extreme big money, particularly on the state level. Generic Democrats remain more popular than generic Republicans. Democrats get fewer House seats than their popular vote totals would warrant because of state-level gerrymandering; and that gerrymandering exists because right-wing Republicans, backed by extremist billionaires, have taken over state legislatures. If Republicans had managed to nominate anyone remotely plausible as president, they would be on the verge of a complete sweep in November (legislative, executive, and, thanks to playing hardball with Merrick Garland, judicial). In short, the real story of the Democratic Party is that it has more or less stayed the same, but it has been overwhelmed by ideological rigidity backed by lots and lots of money.
Unfortunately, Yglesias’s advice to Democrats is to continue pitching that big tent, chasing moderates, and backing away from any positions that would actually excite young people or attract ideologically minded donors. The irony is that we have a blueprint for political success staring us in the face: become more ideologically rigid, shift the Overton window as far as you can (dragging the other side with you), prevent your opponents from accomplishing anything, gradually take over all the branches of government, and use those branches to consolidate your power.
Democrats may not be able to completely follow that blueprint, because our positions tend to be less attractive to billionaires (which is why electoral reform is, at the end of the day, the only thing that matters). But the big tent strategy only works when the Republicans shoot themselves in the foot (see Bush, George W.), and even then it just gives us a filibuster-prone majority that changes little in the long term and only lasts for two years (see the 1993 and 2009 Congresses). We need more ideology, not less. Because what we’re doing isn’t working.
By James Kwak
I am, on paper, a corporate law professor, because—well, I guess because I used to work for a corporation (two, actually), and the books I write sometimes have corporations in them, and I teach business organizations as part of my day job. (Secret for those looking for a job as a law professor: UConn was looking for someone to teach corporate law, and I wanted the job, so that’s what I said I could do.) But I’ve made it this far writing exactly one corporate law paper (my summary here), and that was actually about corporate political activity—namely, whether and how shareholders can challenge political contributions that they think are not in the corporation’s interests.
It is well known by now that, in Citizens United, Justice Kennedy committed one of the true howlers of recent Supreme Court history:
With the advent of the Internet, prompt disclosure of expenditures can provide shareholders and citizens with the information needed to hold corporations and elected officials accountable for their positions and supporters. Shareholders can determine whether their corporation’s political speech advances the corporation’s interest in making profits, and citizens can see whether elected officials are “‘in the pocket’ of so-called moneyed interests.”
The obvious problem is that there is no disclosure of corporate contributions to 501(c)(4) social welfare organizations and 501(c)(6) associations (such as the Chamber of Commerce), and even contributions to 527 Super PACs can be easily laundered through intermediary entities whose owners are secret. The second, slightly less obvious problem is that, under existing standards, there is precious little that shareholders can do to “hold corporations accountable” for political donations. Given the traditional deference that courts show to decisions made by corporate directors and officers, the latter have pretty much free rein to do what they want with their shareholders’ money.
My paper argued that existing law could and should be interpreted to impose a higher standard on corporate political activity, making it easier for shareholders to challenge contributions motivated by the CEO’s personal interests rather than the interests of the corporation. Luckily, other people in the field do not have as short an attention span as I do. In an earlier paper (my quick summary here), Joseph Leahy argued that corporate political contributions can be challenged as acts in bad faith. (Note: “bad faith” is a term of art in corporate law, and no one is really sure what it means.) Now Leahy has a new paper (to be published next year), “Intermediate Scrutiny for Corporate Political Contributions,” which makes a more detailed case that corporations should have to specifically justify such contributions.
“Intermediate scrutiny,” in this context, is also a term of art known only to corporate law professors (and law students for those few hours before a final exam or before the bar exam). In this context, Leahy boils it down to this:
a court evaluating a corporate political contribution should ask whether (1) management had reasonable grounds to believe that the contribution would directly or indirectly advance specific corporate interests, rather than some general political viewpoint; and (2) whether the contribution was reasonable, both as a method of addressing the specific corporate interest and in its amount.
That’s not so much to ask, is it? Ordinarily we don’t force CEOs to answer these questions about every business decision because we want them to make those decisions without fear of second-guessing by litigious shareholders (or plaintiff’s attorneys). But we’re not talking about launching products or entering markets; we’re talking about political donations, which are especially susceptible, as Leahy discusses, to being made for pretextual reasons. And if political expenditures really are an important part of your business strategy—say you’re part of a regulated oligopoly, like a telecom carrier—then lobbying for or against specific pieces of legislation would be trivially easy to justify.
The key thing about a higher standard of review isn’t whether a corporation’s board will be able to meet it in some specific case. It’s that by increasing the threat of litigation from zero to even some small, positive number, it will deter CEOs from treating the shareholders’ money as their own. Today, as Leahy says, “If management can use the corporate treasury to fund its favored political candidates, and get away with it, why use its own money?” Introducing just a little bit of litigation risk should be enough to induce executives to be much more careful to spend money on politics only when they can make a plausible case that it is a good investment—just like they do when it comes to ordinary business decisions.
This isn’t a silver bullet in the fight for a fairer political system; I think we need campaign contribution vouchers, or a massive multiple-match system for small donations, and nonpartisan redistricting, and federal standards for access to the polls, and many other things. But restricting the ability of CEOs to spend other people’s money on their pet political causes is a step in the right direction.
By James Kwak
Have you heard this story before?
The first assets deemed safe were coins made of precious metals. As a technology, coins had many problems: they could be clipped or debased by the sovereign. They had to be assayed and weighed to determine their value in the best of times; whole currencies would collapse in the worst, when the “fraudulent arts” gained the upper hand. Coins were bulky, too, and vulnerable to theft. But they worked: they were always liquid; their edges could be milled to prevent clipping; and, for long periods of time, coins served as fairly reliable stores of value.
As trade expanded, problems with coins gradually led to the creation of paper money – privately-produced circulating debt in all its early forms: moneys of account; bank notes and bills; goldsmith notes; and merchants’ bills of exchange, all of them convertible on short notice into coins.
That’s David Warsh, paraphrasing Gary Gorton, who’s really just recounting conventional wisdom, handed down from economist to economist since time immemorial.
Except it leaves out the most interesting part of the story.
I’ve been reading Christine Desan’s book Making Money, on the history of money in late medieval and early modern Europe. It’s a fascinating story, full of both meticulous historical detail and compelling conceptual arguments about the relationship between forms of currency, political authority, and the creation of the modern state.
Let’s look at the usual creation story a little more closely. The central assumption of that story is that coins were simply a package in which precious metal traveled. Hence “they had to be assayed and weighed to determine their value in the best of times.” But even that is too optimistic, if the question is whether coins serve as safe assets. Coins did have a metal value, since they could theoretically be converted into bullion, which had its own price, albeit at some cost. But they also had a coin value, which was simply the value dictated by the sovereign, since coins could be used to pay taxes.
The metal value and the coin value were related, but they were related in the sense that the value of a currency today is related to the economic fundamentals of the country that issues it. That is, the relationship between metal value and coin value was managed by the government using a variety of policy instruments. One of those was setting the number of coins that would be minted from a given quantity of metal (and the number of those coins that would be skimmed off the top for the sovereign).
A central principle of late medieval English law, enshrined in the early 17th-century Case of Mixed Money, was that the sovereign had the absolute right to dictate the value of money (p. 272):
the king by his prerogative may make money of what matter and form he pleaseth, and establish the standard of it, so may he change his money in substance and impression, and enhance or debase the value of it, or entirely decry and annul it . . .
If Queen Elizabeth said that worn, clipped coins had the same value as brand-new coins from the mint, even if the former had only half the silver content of the latter, then they had the same value. She could say that because the value of pieces of metal depends on what you can use them for, and so long as you (or someone else) can use them to pay debts and taxes, they have value. Yes, this introduced complications: you would prefer to spend your old pennies and save your new ones, which you might either melt down to be re-minted or sell as bullion overseas. But the overarching point is that money was never simply precious metal in another form, but an instrument of commerce artificially created by kings.
Even in the heyday of coins, they were hardly the only form of money. For one thing, most everyday transactions were conducted using debt—what we would call trade credit, although it was used by consumers as well as businesses—because the smallest coin was simply too big to pay a day’s wages, let alone buy a beer, at least in England. For another, as early as the 14th century, carved sticks of wood known as tallies were circulating as money. Tallies began as records of taxes collected, then became receipts the crown gave to tax collectors for advances of coin (the idea being that, at tax time, the collector could show the tally and say, “I already paid”), and finally evolved into tokens that the government used to pay its suppliers (who could then cash them with tax collectors, who would use them at tax time). In most of the 15th century, a majority of tax receipts came in the form of tallies rather than cash (p. 177). Again, if the government is willing to take something in payment of taxes, it becomes money.
Similarly, it is true that “problems with coins” led to the development of other forms of money—beginning with trade credit and tallies—but for the most part they were not the transactional problems faced by households and firms, but fiscal and military problems faced by governments. The Bank of England, which issued the first recognizably modern paper currency, was created because William III needed money to fight wars on the Continent, but there simply wasn’t enough coin in the country to both pay the required taxes and keep the economy functioning. Bank notes were able to function as money because the government was willing to accept them in payment of taxes—which was not true of the notes issued by purely private goldsmith-bankers. In other words, what made Bank notes money, rather than simply paper records of debt, was a political decision necessitated by a fiscal crisis.
Yet the Bank of England’s formation also coincided with the reconceptualization of money as simply precious metal in another form—a fable told most prominently by John Locke. In earlier centuries, everyone accepted that kings could reduce the metal content of coins and, indeed, there were good economic reasons to do so. Devaluing coins (raising the nominal price of silver) increased the money supply, a constant concern in the medieval and early modern periods, while revaluing coins (keeping the nominal price of silver but calling in all old coins to be reminted) imposed deflation on the economy. But Locke was the most prominent spokesperson for hard money—maintaining the metal content of coins inviolate. The theory was that money was simply metal by another name, since each could be converted into the other at a constant rate. The practice, however, was that the vast majority of money—Bank of England notes, bills of exchange issued by London banks, and bank notes issued by country banks—could only function as fiat money. This had to be the case because the very policy of a constant mint price had the effect of driving silver out of coin form, vacuuming up the coin supply. If people actually wanted to convert their paper money into silver or gold, a financial crisis could be prevented only through a debt-financed expansion of the money supply by the Bank of England—or by simply suspending convertibility, as England did in the 1790s.
To paraphrase Desan, at the same time that the English political system invented the modern monetary system, liberal theorists like Locke obscured it behind a simplistic fetishization of gold. The fable that money was simply transmuted gold went hand in hand with the fable that the economy was simply a neutral market populated by households and firms seeking material gain. This primacy of the economic over the political—the idea that government policy should simply set the conditions for the operation of private interests—is, of course, one of the central pillars of the capitalist ethos. Among other things, it justified the practice of allowing private banks to make profits by selling liquidity to individuals (that’s what happens when you deposit money at a low or zero interest rate)—a privilege that once belonged to sovereign governments.
Making Money is the most fascinating book about anything, let alone money, I’ve read in a while—thought-provoking like David Graeber’s Debt, but firmly grounded in the minutiae of English history. In these times when everyone from gold bugs (like Ted Cruz, let’s not forget) to Bitcoin enthusiasts is calling for a redefinition of money, it reminds us what a complicated and politically determined thing money always has been.
By James Kwak
Nine months ago I endorsed Larry Lessig for president because, as I wrote at the time, “If we want real change in the long term, we have to fix the system. That means real equality of political participation, not just the formal equality of one person one vote.” There is no more fundamental issue we face than a political system that is distorted by money from top to bottom. (If you think Donald Trump somehow disproves this idea, consider the fact that, right now, the campaign topic getting the most attention is the Trump campaign’s financial situation, and the strongest evidence that Clinton is likely to win is her financial superiority.)
Larry Lessig’s campaign, unfortunately, never got off the ground, in part because the Democratic establishment bent its own rules to keep him out of the debates. That’s one reason why I’m not giving money to Hillary Clinton or the DSCC or the DCCC—that and, frankly, none of them have prioritized political reform. Sure, I want Clinton to win, but I can’t afford to donate to everyone I’d like to see win. In the long run, what we need are candidates who will put political reform first—not second, or third, or fifteenth.
So here are two. One is Zephyr Teachout, a law professor better known for embarrassing Andrew Cuomo by winning a third of the vote in the 2014 New York gubernatorial primary despite being outspent by seventy gazillion to one. She’s also an expert on corruption in the political system, having written a serious history of corruption in America. Teachout is running for Congress in New York’s 19th district (which has a primary on Tuesday). She’s already famous, so enough said. (There’s also a documentary about her run against Cuomo that’s raising money on Kickstarter, and could use donations.)
The other is Sean Barney, a classmate of mine at the Yale Law School who is running to be Delaware’s congressional representative. Sean has made political reform his top priority, and he supports a six-for-one public match for small contributions, a new Voting Rights Act, and non-partisan redistricting commissions to end gerrymandering of congressional districts. He’s also been endorsed by Larry Lessig. (And he’s a Marine who was almost killed by a sniper in Fallujah before going to law school.)
Running for Congress is hard. Running on a platform of undermining the current system is even harder. But if we have a Congress that is wholly dependent on big money, we’re never going to roll back the influence of big money. At the end of the day, whether your big issue is climate change, or workers’ rights, or financial reform, that’s the only thing that matters.
I’m sure there are other candidates out there who are also dedicated to political reform. If you care about the political system, with the June 30 reporting deadline coming up—ironic as it may sound—these are the kinds of people you should consider donating to. So that one day, whether or not you can afford the donation will no longer matter.
By James Kwak
Now that Hillary Clinton has wrapped up the nomination, I have no problem with Clinton supporters saying that Sanders supporters should back her in the general election. I’m certainly voting for Clinton (not that my vote matters, since I live in Massachusetts), and every liberal Democrat I know who likes Sanders is going to do the same. (Yes, there are probably some Sanders voters who will vote for Trump or stay home, but they are largely anti-establishment independents who were always unlikely to vote for Clinton.)
Apparently that’s not enough for many in the Clinton camp, however, who insist that I should be happy that Hillary Clinton is the Democratic nominee, and that this is actually a good thing for progressives—defined loosely as people who want higher taxes on the rich, less inequality, stronger social insurance programs (including true universal health care), and better protections for workers. The argument is basically that Clinton is (a) more pragmatic, (b) more skilled at getting things done, and (c) more likely to be able to work with Republicans to achieve incremental good things, while Sanders would have simply flamed out in futility.
To which my first answer, which I’m sure I share with many other liberals, is: Yes, I know how the Constitution works already. I know we have three branches of government, and that the Republicans control Congress.
And that’s exactly the point. We’ve had centrist Democratic presidents for sixteen out of the past twenty-four years. It turns out that having a pragmatic Democrat in the White House is good for some things, like maintaining four “liberals” on the Supreme Court, preserving the right to an abortion, and slowing down Republican plans to cut taxes on the rich. (Since 1992, the top tax rate on capital gains has only fallen from 28% to 23.8% and the top tax rate on dividends has only fallen from 31% to 23.8%.)
Having a moderate Democratic president, not surprisingly, also produces some major pieces of moderate legislation, ranging from the center-right (welfare reform) to the center-center (Dodd-Frank) to the center-left (2009 stimulus, Obamacare). The stimulus, for those who might think this is unfair, came in at $580 billion over its first two fiscal years—not even twice as much per year as the 2008 stimulus signed by George W. Bush, at a time when the economic situation was much less bleak. And Obamacare, lest we forget, was originally a Heritage Foundation proposal and then Mitt Romney’s health care plan as governor of Massachusetts. (If you want to know what I really think about Obamacare, look here.) The big progressive win of recent years, marriage equality, happened despite the opposition of Bill Clinton, and of both Barack Obama and Hillary Clinton during the 2008 campaign. Obama, who has flipped twice on the issue, may very well have secretly supported same-sex marriage for all these years, but the important point is that he didn’t come out in favor of it until after the writing had been engraved into the wall.
But when it comes to the structural factors that govern the changing tides of history, it turns out that having a Democrat, any Democrat, in the White House doesn’t count for much. This is what has been going on in Congress since Bill Clinton was first elected (data thanks to the Vital Statistics on Congress project):
(I estimated the impact of the 2014 elections, assuming that the average ideological position of each party remained the same and only the party split changed.) It turns out that the only thing that can shift Congress to the left is a spectacularly catastrophic Republican president mired in an unpopular war and then a catastrophic economic crisis. The popularity of both Clinton and Obama late in their terms has had little effect on Congressional elections.
So what accounts for the rightward drift of American politics? Having Democratic presidents who actively try to position themselves in between the two parties—Clinton beginning in 1995, Obama occasionally, such as in 2011—certainly hasn’t helped. More important, though, have been those structural factors. One is that Republicans have just been crushing Democrats at the state level. This chart comes from Philip Bump at the Washington Post:
Note the increases during both the Clinton and Obama administrations.
This is both an effect and a cause. It’s an effect of the fact that conservatives have better fundraising and training networks, more motivated local activists (e.g., people running for school board so they can stamp out evolution), and just more money. It’s a cause of the first picture, because Republicans have translated control of state governments into Congressional gerrymandering. In 2012, for example, Democratic House candidates received more votes than their Republican opponents, yet the Republicans ended up with a majority by more than thirty seats. The entire political system has been tilted more in the Republicans’ favor, to the point where the presidency is the only prize that Democrats can fight for on equal terms—because all we need is one charismatic (Obama) or well-connected (Hillary Clinton) candidate who can raise tons and tons of money.
Think about the situation that puts us in. Republicans are apoplectic at the idea that Hillary Clinton could appoint the deciding justice to the Supreme Court, but the smart ones realize that she will be able to accomplish little else; even if by some miracle Democrats retake the House, Republican unity will suffice to block anything in the Senate. Democrats, by contrast, are terrified because a Republican president means that the Republicans will get virtually everything, unless the Senate Democratic caucus somehow develops a backbone (which it certainly didn’t have under George W. Bush): not just the Supreme Court, but a flat tax, new abortion restrictions, Medicaid block grants, repeal of Dodd-Frank, repeal of Obamacare, Medicare vouchers, and who knows what else.
What’s the lesson here? It isn’t that Bernie Sanders could accomplish more than Hillary Clinton in four years against dug-in Republican opposition. He couldn’t. It’s that having a president isn’t enough. We need a movement. That’s what the conservatives have had for decades: embryonic in the 1950s, quixotic in the 1960s, on the rise in the 1970s, ascendant in the 1980s, and increasingly institutionalized, entrenched, and ideologically extreme ever since. We need to stop thinking that winning the presidency more often than not is a long-term strategy. What we’re doing isn’t working. It needs to change.
I wouldn’t call Hillary Clinton the lesser evil. She isn’t evil. I think she will be a decent president (except when it comes to foreign military intervention, where she frightens me, but a good deal less than Trump does) and she will more or less hold the line against conservative extremists for at least four years. And, of course, it will be nice to join the ranks of civilized countries that have chosen women as their leaders. But she’s the candidate of the Democratic status quo, and the Democratic status quo isn’t working.
We need to do something different. We can have a debate about what that is. I think we need two things: comprehensive electoral reform (which is why I supported Larry Lessig in this election) and a wave of unabashedly ideological candidates who push the overall debate to the left. But Hillary Clinton amounts to doing the same thing again and hoping for different results.
Update: I inadvertently (really) typed “Hillary Trump” when I meant “Hillary Clinton.” That’s been fixed.