"Misapplying the theory I mislearned in college."

Baseline Scenario

What happened to the global economy and what we can do about it

More Banking Mystifications

Tue, 07/26/2016 - 15:45

By James Kwak

Apparently, both parties have platform planks calling for the reinstatement of the Glass-Steagall Act of 1933, the law that separated investment banking from commercial banking until it was finally repealed in 1999 (after being watered down by the Federal Reserve beginning in the late 1980s). Bringing back Glass-Steagall in some form would force megabanks like JPMorgan Chase, Citigroup, and Bank of America to split up; it would also force Goldman Sachs to get rid of the retail banking operations it started in a bid to get access to cheap deposits.

In his article discussing this possibility, Andrew Ross Sorkin of the Times slips in this:

“Whether reinstating the law is a good idea or not, the short-term implications are decidedly negative: It would most likely mean a loss of jobs as part of a slowdown in lending from the biggest banks.”

I looked down to the next paragraph for the explanation, but he had already moved on to another unsubstantiated claim (that the U.S. banking industry would be at a competitive disadvantage). So, I thought, maybe it’s so obvious that Glass-Steagall would reduce lending that Sorkin didn’t think it was worth explaining. I thought about that for a while. I couldn’t see it.

In fact, basic intuitions about finance indicate that Glass-Steagall should have no effect on lending whatsoever. Banks should loan money to borrowers who are good risks: that is, those who pay an interest rate that more than compensates for the risk of default. (I’m simplifying a bit, but the details aren’t relevant.) Common sense tells you that whether the bank doing the lending is affiliated with an investment bank shouldn’t make a difference.

To dig a little deeper, banks should be making loans whose expected returns exceed the appropriate cost of capital. So, maybe Sorkin thinks that grafting an investment bank onto a commercial bank will lower its cost of capital. I can’t think of any obvious reason why this should be the case. Even if it does, however, we do NOT want the commercial bank to now start making more loans than it did before it was affiliated with the investment bank. Capital markets are supposed to direct funds to households and companies that can put them to their best use. Whether X (a house, a shopping mall, a factory, whatever) is a good use of capital does not depend on whether some bank merged with some other bank. If a lower cost of capital causes banks to start making more loans, those are bad loans, not good ones.
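
To make that decision rule concrete, here is a minimal sketch in Python. The numbers are invented (the 10% hurdle rate, the loan terms, and the 40% recovery assumption are all hypothetical); the point is simply that nothing in the rule refers to whether the lender happens to be affiliated with an investment bank.

    # Textbook lending rule: make the loan if its expected return clears the
    # cost of capital appropriate to the riskiness of the lending business.
    def expected_return(rate, default_prob, recovery=0.40):
        # expected one-period return, net of expected losses on default
        return (1 - default_prob) * rate + default_prob * (recovery - 1.0)

    COST_OF_CAPITAL = 0.10  # hypothetical hurdle for the lending business

    for rate, p_default in [(0.06, 0.01), (0.14, 0.02)]:
        r = expected_return(rate, p_default)
        decision = "make the loan" if r > COST_OF_CAPITAL else "pass"
        print(f"rate {rate:.0%}, default prob {p_default:.0%}: "
              f"expected return {r:.1%} -> {decision}")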

Let’s look at this from another angle. Assume Commercial Bank has a cost of capital of 10% and Investment Bank has a cost of capital of 8%. (In practice it’s usually the other way around, but then the argument for a combination is even weaker.) Say they merge, and new Universal Bank has an overall cost of capital of 9%. This does not mean that the appropriate cost of capital for Commercial Bank (a subsidiary of Universal Bank) is now 9%. It’s still 10%. That’s because the cost of capital is based on the risk profile of a company’s business—and, once again, that business hasn’t changed. And, indeed, even after the merger, Commercial Bank and Investment Bank will continue to be run as two separate entities, with a few specific touchpoints (e.g., Commercial Bank will sell its loans to Investment Bank to be securitized, and Investment Bank will try to sell wealth management services to Commercial Bank’s customers). And in the executive suite, the CFO and treasurer will charge an internal cost of capital to each business, based on its intrinsic attributes.
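
Here is a minimal Python sketch of that arithmetic, using the post’s illustrative 10% and 8% figures (the equal capital weights behind the 9% blend are my assumption). The blended number is just a weighted average; it is not the right hurdle for the commercial-bank subsidiary, whose business risk the merger leaves unchanged.

    # Blended cost of capital for the merged "Universal Bank"
    commercial_capital, commercial_coc = 100.0, 0.10   # Commercial Bank
    investment_capital, investment_coc = 100.0, 0.08   # Investment Bank

    blended = ((commercial_capital * commercial_coc
                + investment_capital * investment_coc)
               / (commercial_capital + investment_capital))
    print(f"blended cost of capital: {blended:.1%}")   # 9.0% with equal weights

    # A loan earning 9.5% clears the blended 9% but not the 10% that matches
    # the commercial bank's own risk, so making it destroys value.
    loan_return = 0.095
    print(loan_return > blended, loan_return > commercial_coc)   # True, False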

Now, maybe Commercial Bank will want to issue more loans because Investment Bank wants to securitize them. (Does this story sound familiar?) But first, this shouldn’t happen. If demand from Investment Bank is causing Commercial Bank to increase its lending, then that should happen whether or not they happen to have the same parent (Universal Bank); Commercial Bank can already sell its loans to Investment Bank (or any of its competitors) without a merger. Second, even if it does happen—because, say, the CEO of Universal Bank orders Commercial Bank to increase its lending—those are loans we don’t want to exist. There is such a thing as too much credit, as we all should remember.

In sum, the idea that separating commercial and investment banking will result in fewer loans, and hence higher unemployment, seems like another of those industry talking points that, repeated often enough, become conventional wisdom. It’s one of those threats bankers like to make when politicians try to shrink their empires: Come after my bank, and look what happens to your economy. But in this case, it’s an empty threat.

That’s So PC

Tue, 07/26/2016 - 08:00

By James Kwak

In an article about political correctness in contemporary politics, Amanda Hess of the Times writes:

“Politically correct” was born as a lefty in-joke, an insidery nod to the smugness of holier-than-thou liberals. As Gloria Steinem put it: “ ‘Politically correct’ was invented by people in social-justice movements to make fun of ourselves.”

As far as I can tell from publicly available sources, Amanda Hess went to college during the George W. Bush administration, so I take it she is working from sources (like Gloria Steinem) here. But she’s not far off the mark.

I went to college in the late 1980s, which is when the concept of political correctness was spreading. My first recollection of political correctness is of a friend saying, “That’s so PC,” talking about someone else who was always sure to participate in the left-wing cause of the day. “Politically correct” absolutely was a phrase that lefties came up with to make fun of themselves. And it did not have the connotation of criticizing other (politically incorrect) people that it has today. If you were PC, that just meant that you were against the Nicaraguan contras, in favor of divesting from companies that invested in South Africa, against discrimination against people with AIDS, in favor of a nuclear freeze, and so on. Those were the issues–not the vocabulary used by rich white frat boys.

In other words, being politically correct meant adopting the appropriately subversive position on every issue. It was a faintly derogatory term because it implied that you didn’t think about issues independently; you just lined up on whatever side the left was supposed to line up on. “Politically correct” was a way to describe the herding behavior of left-wing people–not a way to criticize right-wing people.

Today, political correctness has become one of the favored bogeymen of the Trump campaign and of conservatives in general. People of my generation could genuinely be either baffled or aghast: It was a JOKE! Don’t you get it? But etymology is not destiny, of course. Conservatives have changed political correctness into something it wasn’t back in the old days, and that’s just the way it is.

But in its original meaning–the idea that you have to toe the party line, to be the hardest of the hard core–it is among conservatives that political correctness reigns supreme. On virtually every issue–taxes, Obamacare, abortion, Medicaid block grants, Dodd-Frank, guns, climate change, even the theological status of Barack Obama and Hillary Clinton–every Republican falls in line for fear of offending the omnipotent Base. Do you really think that every Republican member of the House and Senate honestly believes that human activity has not had an impact on the climate? Do they honestly believe that allowing anyone to carry a gun makes the world a safer place? But they have to pretend that they are as stupid as they sound for fear of offending Exxon Mobil, the NRA, and the conservative activists who really do believe that climate change is a fantasy concocted by intellectuals and that the best solution to crime is more guns.

So yes, political correctness is a problem. It’s a problem among Republicans. As for Democrats, who can’t even figure out if we are for or against the TPP, we can’t even get our act together enough for political correctness to be an issue.

Good-Bye, SSRN

Thu, 07/21/2016 - 11:52

By James Kwak

You may know that SSRN, the shared web server for social science and law papers, was recently bought by Elsevier, a publishing company that charges what many people think are outrageous amounts for subscriptions to its journals or access to individual papers. Recently, Elsevier appears to have started taking down papers from SSRN without notifying the authors, even when the authors in some cases had valid permission to publish those papers on SSRN.

Elsevier’s defense is that this was a simple employee mistake (maybe like forgetting to rewrite direct quotes from someone else’s speeches?): “A couple of processing emails were sent incorrectly and in the wrong order.” I’m not buying it, though. Even if the wrong email was sent, they were still taking down papers unilaterally without bothering to ask if the author had the appropriate rights. If they’re not doing it in response to a DMCA notice, and they have people doing it manually, they could at least send the email first before deleting the paper.

If you’re interested in the issue, there is some detailed analysis in the comment section of PrawfsBlawg. In any case, it was enough for me to stop using SSRN. In my view, SSRN is really just ugly, clunky PDF hosting anyway. The main way I use it is as follows:

  1. Find out about paper through some better filtering mechanism (email, blog, Twitter, or, most often, Google).
  2. Google the title of the paper.
  3. See link to paper on SSRN.
  4. Follow link and download paper.

As you can see, nothing about that process relies on SSRN; if the paper were hosted anywhere else within reach of Google’s robots, it would work just as well. In theory, SSRN could be a place for people to actually discover relevant work, but for the most part it fails miserably at that because (a) it’s not as comprehensive as Google, so you can’t rely on a search there and (b) its usability is stuck in the mid-1990s.

So anyway, I uploaded my papers to a new page on my personal website, which allows you to download PDFs just as well as SSRN does. It’s hosted by WordPress.com, which means that you could do the same with about ten minutes of setup effort and another minute or so per paper, all for free. Or I imagine you could use bepress or SocArXiv. It really doesn’t matter. As long as your paper is somewhere on the Internet that is visible to Google, it will work just as well.

Now: How can I completely eliminate my papers from SSRN (not just take down the PDFs) so they don’t appear at all? It’s not at all apparent from their horrible user interface.

Update: Thanks to anon for pointing out the MODIFY button. SSRN’s support page discusses a REMOVE button that doesn’t actually exist. Now my papers are all inactive on SSRN.

Big Tents

Fri, 07/15/2016 - 10:02

By James Kwak

“This is a Hillary Clinton, Elizabeth Warren, Bernie Sanders party. Our party has moved right, their party has moved really left.”

That’s Paul Ryan on the Democratic Party. In Vox, Matt Yglesias points out that Ryan is being disingenuous, but only “in part.” Yglesias goes on to say this:

“In a fundamental way, Ryan is correct — in 2016, the center of gravity in the Democratic Party is much closer to Bernie Sanders than it was in 2006 or 1996.”

Except, that just isn’t true.

You can look at this question in a couple of ways. You can look at the actual accomplishments and priorities of actual Democratic politicians over the past decade. You would see the adoption of Romneycare, the relatively moderate Dodd-Frank Act, the extension of most of the Bush tax cuts, a decline in domestic discretionary spending, the failure to do anything about the criminal justice system, the failure to do very much about climate change, and now the push to ratify the TPP. I don’t see a party shifting to the left.

But, you might say, that’s because Obama has been blocked by the GOP at every turn. So let’s look at the data:

Those are the ideological positions of the two parties’ Congressional delegations since 1995, from the absolutely indispensable Vital Statistics on Congress project, led by Norman Ornstein and Thomas Mann. (The years on the X-axis are the years of Congresses.) And, of course, they confirm what everyone knows: The Republicans have been getting more extreme, while the Democrats have stayed roughly the same. Even in the House, which should be more sensitive to ideological shifts, the Democrats remain the party of Bill Clinton, Barack Obama, and Hillary Clinton—none of whom is to the left of, well, anyone significant in recent party history.

Why does Yglesias, who is usually very sharp, make this mistake? His evidence is a campaign brochure created by Nancy Pelosi and Rahm Emanuel for the 2006 elections, which is relatively moderate; he then asserts, “Whatever you make of Hillary Clinton’s current policy agenda, there’s no denying that it’s far more left-wing across the board even as the status quo in many of these areas has shifted to the left.”

But that’s mistaking tactics for substance. In 2006, the Democrats were running against George W. Bush, a man widely seen at the time as a corrupt, incompetent warmonger; they only had to be as inoffensive as possible in order to win the elections. By contrast, Hillary Clinton is just emerging from what was, in some ways, a pretty standard primary campaign in which the establishment centrist tacked left to siphon votes away from the left-wing challenger. Furthermore, Democrats have controlled the White House for the past eight years, and although Barack Obama is personally popular, Americans in general feel insecure about their economic prospects and unhappy about the political system. Clinton has to run on something different, because few people think Obama’s centrist economic policies have worked. (Whether they have worked is an entirely different question.)

Or maybe Yglesias means to focus on tactics rather than substance. His concluding point is that his 2006 version of the Democratic Party was better at winning elections than the ideological version he sees today:

“Positioning themselves as a kind of big tent catchall alternative to [the post-Reagan, ideologically rigid Republicans] worked very well for Democrats across the 2006 and 2008 election cycles. Their ongoing reinvention as a more ideological party has coincided — not entirely coincidentally — with a period of weakness in down-ballot races, especially in midterm elections where turnout by young people is pathetically low.”

But again, I think this is just wrong. The Democrats won in 2006 because Bush was unpopular and they won in 2008 because the world was collapsing. They have not reinvented themselves in a more ideological form—see the chart above—and they have done poorly beginning in 2010 because of the rise of the Tea Party and ideologically extreme big money, particularly on the state level. Generic Democrats remain more popular than generic Republicans. Democrats get fewer House seats than their popular vote totals would warrant because of state-level gerrymandering; and that gerrymandering exists because right-wing Republicans, backed by extremist billionaires, have taken over state legislatures. If Republicans had managed to nominate anyone remotely plausible as president, they would be on the verge of a complete sweep in November (legislative, executive, and, thanks to playing hardball with Merrick Garland, judicial). In short, the real story of the Democratic Party is that it has more or less stayed the same, but it has been overwhelmed by ideological rigidity backed by lots and lots of money.

Unfortunately, Yglesias’s advice to Democrats is to continue pitching that big tent, chasing moderates, and backing away from any positions that would actually excite young people or attract ideologically minded donors. The irony is that we have a blueprint for political success staring us in the face: become more ideologically rigid, shift the Overton window as far as you can (dragging the other side with you), prevent your opponents from accomplishing anything, gradually take over all the branches of government, and use those branches to consolidate your power.

Democrats may not be able to completely follow that blueprint, because our positions tend to be less attractive to billionaires (which is why electoral reform is, at the end of the day, the only thing that matters). But the big tent strategy only works when the Republicans shoot themselves in the foot (see Bush, George W.), and even then it just gives us a filibuster-prone majority that changes little in the long term and only lasts for two years (see the 1993 and 2009 Congresses). We need more ideology, not less. Because what we’re doing isn’t working.

CEOs, Politics, and Other People’s Money

Tue, 07/05/2016 - 10:57

By James Kwak

I am, on paper, a corporate law professor, because—well, I guess because I used to work for a corporation (two, actually), and the books I write sometimes have corporations in them, and I teach business organizations as part of my day job. (Secret for those looking for a job as a law professor: UConn was looking for someone to teach corporate law, and I wanted the job, so that’s what I said I could do.) But I’ve made it this far writing exactly one corporate law paper (my summary here), and that was actually about corporate political activity—namely, whether and how shareholders can challenge political contributions that they think are not in the corporation’s interests.

It is well known by now that, in Citizens United, Justice Kennedy committed one of the true howlers of recent Supreme Court history:

With the advent of the Internet, prompt disclosure of expenditures can provide shareholders and citizens with the information needed to hold corporations and elected officials accountable for their positions and supporters. Shareholders can determine whether their corporation’s political speech advances the corporation’s interest in making profits, and citizens can see whether elected officials are “‘in the pocket’ of so-called moneyed interests.”

The obvious problem is that there is no disclosure of corporate contributions to 501(c)(4) social welfare organizations and 501(c)(6) associations (such as the Chamber of Commerce), and even contributions to 527 Super PACs can be easily laundered through intermediary entities whose owners are secret. The second, slightly less obvious problem is that, under existing standards, there is precious little that shareholders can do to “hold corporations accountable” for political donations. Given the traditional deference that courts show to decisions made by corporate directors and officers, the latter have pretty much free rein to do what they want with their shareholders’ money.

My paper argued that existing law could and should be interpreted to impose a higher standard on corporate political activity, making it easier for shareholders to challenge contributions motivated by the CEO’s personal interests rather than the interests of the corporation. Luckily, other people in the field do not have as short an attention span as I do. In an earlier paper (my quick summary here), Joseph Leahy argued that corporate political contributions can be challenged as acts in bad faith. (Note: “bad faith” is a term of art in corporate law, and no one is really sure what it means.) Now Leahy has a new paper (to be published next year), “Intermediate Scrutiny for Corporate Political Contributions,” which makes a more detailed case that corporations should have to specifically justify such contributions.

“Intermediate scrutiny,” in this context, is also a term of art known only by corporate law professors (and law students for those few hours before a final exam or before the bar exam). In this context, Leahy boils it down to this:

a court evaluating a corporate political contribution should ask whether (1) management had reasonable grounds to believe that the contribution would directly or indirectly advance specific corporate interests, rather than some general political viewpoint; and (2) whether the contribution was reasonable, both as a method of addressing the specific corporate interest and in its amount.

That’s not so much to ask, is it? Ordinarily we don’t force CEOs to answer these questions about every business decision because we want them to make those decisions without fear of second-guessing by litigious shareholders (or plaintiff’s attorneys). But we’re not talking about launching products or entering markets; we’re talking about political donations, which are especially susceptible, as Leahy discusses, to being made for pretextual reasons. And if political expenditures really are an important part of your business strategy—say you’re part of a regulated oligopoly, like a telecom carrier—then lobbying for or against specific pieces of legislation would be trivially easy to justify.

The key thing about a higher standard of review isn’t whether a corporation’s board will be able to meet it in some specific case. It’s that by increasing the threat of litigation from zero to even some small, positive number, it will deter CEOs from treating the shareholders’ money as their own. Today, as Leahy says, “If management can use the corporate treasury to fund its favored political candidates, and get away with it, why use its own money?” Introducing just a little bit of litigation risk should be enough to induce executives to be much more careful to spend money on politics only when they can make a plausible case that it is a good investment—just like they do when it comes to ordinary business decisions.

This isn’t a silver bullet in the fight for a more fair political system; I think we need campaign contribution vouchers, or a massive multiple-match system for small donations, and nonpartisan redistricting, and federal standards for access to the polls, and many other things. But restricting the ability of CEOs to spend other people’s money on their pet political causes is a step in the right direction.

Mysteries of Money

Thu, 06/30/2016 - 07:58

By James Kwak

Have you heard this story before?

The first assets deemed safe were coins made of precious metals. As a technology, coins had many problems: they could be clipped or debased by the sovereign. They had to be assayed and weighed to determine their value in the best of times; whole currencies would collapse in the worst, when the “fraudulent arts” gained the upper hand. Coins were bulky, too, and vulnerable to theft. But they worked: they were always liquid, their edges could be milled to prevent clipping; and, for long periods of time, coins served as fairly reliable stores of value.

As trade expanded, problems with coins gradually led to the creation of paper money – privately-produced circulating debt in all its early forms: moneys of account; bank notes and bills; goldsmith notes; and merchants’ bills of exchange, all of them convertible on short notice into coins.

That’s David Warsh, paraphrasing Gary Gorton, who’s really just recounting conventional wisdom, handed down from economist to economist since time immemorial.

Except it leaves out the most interesting part of the story.

I’ve been reading Christine Desan’s book Making Money, on the history of money in late medieval and early modern Europe. It’s a fascinating story, full of both meticulous historical detail and compelling conceptual arguments about the relationship between forms of currency, political authority, and the creation of the modern state.

Let’s look at the usual creation story a little more closely. The central assumption of that story is that coins were simply a package in which precious metal traveled. Hence “they had to be assayed and weighed to determine their value in the best of times.” But even that is too optimistic, if the question is whether coins serve as safe assets. Coins did have a metal value, since they could theoretically be converted into bullion, which had its own price, albeit at some cost. But they also had a coin value, which was simply the value dictated by the sovereign, since coins could be used to pay taxes.

The metal value and the coin value were related, but they were related in the sense that the value of a currency today is related to the economic fundamentals of the country that issues it. That is, the relationship between metal value and coin value was managed by the government using a variety of policy instruments. One of those was setting the number of coins that would be minted from a given quantity of metal (and the number of those coins that would be skimmed off the top for the sovereign).

A central principle of late medieval English law, enshrined in the early 17th-century Case of Mixed Money, was that the sovereign had the absolute right to dictate the value of money (p. 272):

the king by his prerogative may make money of what matter and form he pleaseth, and establish the standard of it, so may he change his money in substance and impression, and enhance or debase the value of it, or entirely decry and annul it . . .

If Queen Elizabeth said that worn, clipped coins had the same value as brand-new coins from the mint, even if the former had only half the silver content of the latter, then they had the same value. She could say that because the value of pieces of metal depends on what you can use them for, and so long as you (or someone else) can use them to pay debts and taxes, they have value. Yes, this introduced complications: you would prefer to spend your old pennies and save your new ones, which you might either melt down to be re-minted or sell as bullion overseas. But the overarching point is that money was never simply precious metal in another form, but an instrument of commerce artificially created by kings.

Even in the heyday of coins, they were hardly the only form of money. For one thing, most everyday transactions were conducted using debt—what we would call trade credit, although it was used by consumers as well as businesses—because the smallest coin was simply too big to pay a day’s wages, let alone buy a beer, at least in England. For another, as early as the 14th century, carved sticks of wood known as tallies were circulating as money. Tallies began as records of taxes collected, then became receipts the crown gave to tax collectors for advances of coin (the idea being that, at tax time, the collector could show the tally and say, “I already paid”), and finally evolved into tokens that the government used to pay its suppliers (who could then cash them with tax collectors, who would use them at tax time). For most of the 15th century, a majority of tax receipts came in the form of tallies rather than cash (p. 177). Again, if the government is willing to take something in payment of taxes, it becomes money.

Similarly, it is true that “problems with coins” led to the development of other forms of money—beginning with trade credit and tallies—but for the most part they were not the transactional problems faced by households and firms, but fiscal and military problems faced by governments. The Bank of England, which issued the first recognizably modern paper currency, was created because William III needed money to fight wars on the Continent, but there simply wasn’t enough coin in the country to both pay the required taxes and keep the economy functioning. Bank notes were able to function as money because the government was willing to accept them in payment of taxes—which was not true of the notes issued by purely private goldsmith-bankers. In other words, what made Bank notes money, rather than simply paper records of debt, was a political decision necessitated by a fiscal crisis.

Yet the Bank of England’s formation also coincided with the reconceptualization of money as simply precious metal in another form—a fable told most prominently by John Locke. In earlier centuries, everyone accepted that kings could reduce the metal content of coins and, indeed, there were good economic reasons to do so. Devaluing coins (raising the nominal price of silver) increased the money supply, a constant concern in the medieval and early modern periods, while revaluing coins (keeping the nominal price of silver but calling in all old coins to be reminted) imposed deflation on the economy. But Locke was the most prominent spokesperson for hard money—maintaining the metal content of coins inviolate. The theory was that money was simply metal by another name, since each could be converted into the other at a constant rate. The practice, however, was that the vast majority of money—Bank of England notes, bills of exchange issued by London banks, and bank notes issued by country banks—could only function as fiat money. This had to be the case because the very policy of a constant mint price had the effect of driving silver out of coin form, vacuuming up the coin supply. If people actually wanted to convert their paper money into silver or gold, a financial crisis could be prevented only through a debt-financed expansion of the money supply by the Bank of England—or by simply suspending convertibility, as England did in the 1790s.

To paraphrase Desan, at the same time that the English political system invented the modern monetary system, liberal theorists like Locke obscured it behind a simplistic fetishization of gold. The fable that money was simply transmutated gold went hand in hand with the fable that the economy was simply a neutral market populated by households and firms seeking material gain. This primacy of the economic over the political—the idea that government policy should simply set the conditions for the operation of private interests—is, of course, one of the central pillars of the capitalist ethos. Among other things, it justified the practice of allowing private banks to make profits by selling liquidity to individuals (that’s what happens when you deposit money at a low or zero interest rate)—a privilege that once belonged to sovereign governments.

Making Money is the most fascinating book about anything, let alone money, I’ve read in a while—thought-provoking like David Graeber’s Debt, but firmly grounded in the minutiae of English history. In these times when everyone from gold bugs (like Ted Cruz, let’s not forget) to Bitcoin enthusiasts is calling for a redefinition of money, it reminds us what a complicated and politically determined thing money always has been.

Candidates Who Matter

Fri, 06/24/2016 - 08:50

By James Kwak

Nine months ago I endorsed Larry Lessig for president because, as I wrote at the time, “If we want real change in the long term, we have to fix the system. That means real equality of political participation, not just the formal equality of one person one vote.” There is no more fundamental issue we face than a political system that is distorted by money from top to bottom. (If you think Donald Trump somehow disproves this idea, consider the fact that, right now, the campaign topic getting the most attention is the Trump campaign’s financial situation, and the strongest evidence that Clinton is likely to win is her financial superiority.)

Larry Lessig’s campaign, unfortunately, never got off the ground, in part because the Democratic establishment bent its own rules to keep him out of the debates. That’s one reason why I’m not giving money to Hillary Clinton or the DSCC or the DCCC—that and, frankly, none of them have prioritized political reform. Sure, I want Clinton to win, but I can’t afford to donate to everyone I’d like to see win. In the long run, what we need are candidates who will put political reform first—not second, or third, or fifteenth.

So here are two. One is Zephyr Teachout, a law professor better known for embarrassing Andrew Cuomo by winning a third of the vote in the 2014 New York gubernatorial primary despite being outspent by seventy gazillion to one. She’s also an expert on corruption in the political system, having written a serious history of corruption in America. Teachout is running for Congress in New York’s 19th district (which has a primary on Tuesday). She’s already famous, so enough said. (There’s also a documentary about her run against Cuomo that’s raising money on Kickstarter, and could use donations.)

The other is Sean Barney, a classmate of mine at the Yale Law School who is running to be Delaware’s congressional representative. Sean has made political reform his top priority, and he supports a six-for-one public match for small contributions, a new Voting Rights Act, and non-partisan redistricting commissions to end gerrymandering of congressional districts. He’s also been endorsed by Larry Lessig. (And he’s a marine who was almost killed by a sniper in Fallujah before going to law school.)

Running for Congress is hard. Running on a platform of undermining the current system is even harder. But if we have a Congress that is wholly dependent on big money, we’re never going to roll back the influence of big money. At the end of the day, whether your big issue is climate change, or workers’ rights, or financial reform, that’s the only thing that matters.

I’m sure there are other candidates out there who are also dedicated to political reform. If you care about the political system, with the June 30 reporting deadline coming up—ironic as it may sound—these are the kinds of people you should consider donating to. So that one day, whether or not you can afford the donation will no longer matter.

Yes, I’ll Vote for HRC. No, I’m Not Happy About It.

Mon, 06/13/2016 - 07:30

By James Kwak

Now that Hillary Clinton has wrapped up the nomination, I have no problem with Clinton supporters saying that Sanders supporters should back her in the general election. I’m certainly voting for Clinton (not that my vote matters, since I live in Massachusetts), and every liberal Democrat I know who likes Sanders is going to do the same. (Yes, there are probably some Sanders voters who will vote for Trump or stay home, but they are largely anti-establishment independents who were always unlikely to vote for Clinton.)

Apparently that’s not enough for many in the Clinton camp, however, who insist that I should be happy that Hillary Clinton is the Democratic nominee, and that this is actually a good thing for progressives—defined loosely as people who want higher taxes on the rich, less inequality, stronger social insurance programs (including true universal health care), and better protections for workers. The argument is basically that Clinton is (a) more pragmatic, (b) more skilled at getting things done, and (c) more likely to be able to work with Republicans to achieve incremental good things, while Sanders would have simply flamed out in futility.

To which my first answer, which I’m sure I share with many other liberals, is: Yes, I know how the Constitution works already. I know we have three branches of government, and that the Republicans control Congress.


And that’s exactly the point. We’ve had centrist Democratic presidents for sixteen out of the past twenty-four years. It turns out that having a pragmatic Democrat in the White House is good for some things, like maintaining four “liberals” on the Supreme Court, preserving the right to an abortion, and slowing down Republican plans to cut taxes on the rich. (Since 1992, the top tax rate on capital gains has only fallen from 28% to 23.8% and the top tax rate on dividends has only fallen from 31% to 23.8%.)

Having a moderate Democratic president, not surprisingly, also produces some major pieces of moderate legislation, ranging from the center-right (welfare reform) to the center-center (Dodd-Frank) to the center-left (2009 stimulus, Obamacare). The stimulus, for those who might think this is unfair, came in at $580 billion over its first two fiscal years—not even twice as much per year as the 2008 stimulus signed by George W. Bush, at a time when the economic situation was much less bleak. And Obamacare, lest we forget, was originally a Heritage Foundation proposal and then Mitt Romney’s health care plan as governor of Massachusetts. (If you want to know what I really think about Obamacare, look here.) The big progressive win of recent years, marriage equality, happened despite the opposition of Bill Clinton, and of both Barack Obama and Hillary Clinton during the 2008 campaign. Obama, who has flipped twice on the issue, may very well have secretly supported same-sex marriage for all these years, but the important point is that he didn’t come out in favor of it until after the writing had been engraved into the wall.

But when it comes to the structural factors that govern the changing tides of history, it turns out that having a Democrat, any Democrat, in the White House doesn’t count for much. This is what has been going on in Congress since Bill Clinton was first elected (data thanks to the Vital Statistics on Congress project):

(I estimated the impact of the 2014 elections, assuming that the average ideological position of each party remained the same and only the party split changed.) It turns out that the only thing that can shift Congress to the left is a spectacularly catastrophic Republican president mired in an unpopular war and then a catastrophic economic crisis. The popularity of both Clinton and Obama late in their terms has had little effect on Congressional elections.

So what accounts for the rightward drift of American politics? Having Democratic presidents who actively try to position themselves in between the two parties—Clinton beginning in 1995, Obama occasionally, such as in 2011—certainly hasn’t helped. More important, though, have been those structural factors. One is that Republicans have just been crushing Democrats at the state level. This chart comes from Philip Bump at the Washington Post:

Note the increases during both the Clinton and Obama administrations.

This is both an effect and a cause. It’s an effect of the fact that conservatives have better fundraising and training networks, more motivated local activists (e.g., people running for school board so they can stamp out evolution), and just more money. It’s a cause of the first picture, because Republicans have translated control of state governments into Congressional gerrymandering. In 2012, for example, Democratic House candidates received more votes than their Republican opponents, yet the Republicans ended up with a majority by more than thirty seats. The entire political system has been tilted more in the Republicans’ favor, to the point where the presidency is the only prize that Democrats can fight for on equal terms—because all we need is one charismatic (Obama) or well-connected (Hillary Clinton) candidate who can raise tons and tons of money.

Think about the situation that puts us in. Republicans are apoplectic at the idea that Hillary Clinton could appoint the deciding justice to the Supreme Court, but the smart ones realize that she will be able to accomplish little else; even if by some miracle Democrats retake the House, Republican unity will suffice to block anything in the Senate. Democrats, by contrast, are terrified because a Republican president means that Republicans will get virtually everything, unless the Senate Democratic caucus somehow develops a backbone (which it certainly didn’t have under George W. Bush): not just the Supreme Court, but a flat tax, new abortion restrictions, Medicaid block grants, repeal of Dodd-Frank, repeal of Obamacare, Medicare vouchers, and who knows what else.

What’s the lesson here? It isn’t that Bernie Sanders could accomplish more than Hillary Clinton in four years against dug-in Republican opposition. He couldn’t. It’s that having a president isn’t enough. We need a movement. That’s what the conservatives have had for decades: embryonic in the 1950s, quixotic in the 1960s, on the rise in the 1970s, ascendant in the 1980s, and increasingly institutionalized, entrenched, and ideologically extreme ever since. We need to stop thinking that winning the presidency more often than not is a long-term strategy. What we’re doing isn’t working. It needs to change.

I wouldn’t call Hillary Clinton the lesser evil. She isn’t evil. I think she will be a decent president (except when it comes to foreign military intervention, where she frightens me, but a good deal less than Trump does) and she will more or less hold the line against conservative extremists for at least four years. And, of course, it will be nice to join the ranks of civilized countries that have chosen women as their leaders. But she’s the candidate of the Democratic status quo, and the Democratic status quo isn’t working.

We need to do something different. We can have a debate about what that is. I think we need two things: comprehensive electoral reform (which is why I supported Larry Lessig in this election) and a wave of unabashedly ideological candidates who push the overall debate to the left. But Hillary Clinton amounts to doing the same thing again and hoping for different results.

Update: I inadvertently (really) typed “Hillary Trump” when I meant “Hillary Clinton.” That’s been fixed.

The Value of the Humanities

Wed, 06/01/2016 - 12:18

By James Kwak

In the Washington Post, Harvard Medical School professor David Silbersweig argues for the continuing value of a liberal arts education in today’s world. The “liberal arts”—usually meaning anything other than math, science, engineering, and maybe business—do seem to be under attack from all quarters, and not only from know-nothings like Marco Rubio. Just this week, the president of Queen’s University in Belfast said this (explaining why students will no longer be able to concentrate in sociology or anthropology):

Society doesn’t need a 21-year-old who is a sixth century historian. It needs a 21-year-old who really understands how to analyse things, understands the tenets of leadership and contributing to society, who is a thinker and someone who has the potential to help society drive forward.

That’s the new conventional wisdom: we need “leaders” who can “help society drive forward,” whatever that means.

Silbersweig himself majored in philosophy before becoming a doctor and a medical researcher. He makes a number of points, but this is the one you usually see in articles like this:

If you can get through a one-sentence paragraph of Kant, holding all of its ideas and clauses in juxtaposition in your mind, you can think through most anything. If you can extract, and abstract, underlying assumptions or superordinate principles, or reason through to the implications of arguments, you can identify and address issues in a myriad of fields.

I certainly agree. And I also agree that society needs people with a broad range of intellectual perspectives. This is the kind of thing you would expect me to agree with. I majored in social studies and got a Ph.D. in French intellectual history, of all things (and one of my fields for my orals was philosophy). But there’s an important caveat, which I’ll get to.

Unlike with, say, learning Java, it isn’t easy to specify exactly what you learn in the humanities that turns out to be useful later. You do a lot of reading and writing, but of course those are things you knew how to do before going to college. You may learn how to check out boxes of documents at the archives, but that turns out not to be so useful unless you stay in academic research.

One thing I think I learned was dealing with ambiguity. In fields like social studies and history, you rarely find explanations of the world that are unequivocally correct. You don’t even have the pretense, which many economists labor under, that there is an unequivocally correct explanation out there, and you are just trying to find it. As a result, one thing I became pretty good at was using words to fill gaps—manufacturing connections and relationships between different phenomena. This, it turns out, is a very useful skill in the business world where, to tell an old consulting joke, two data points are a trend and three data points are proof. The ability to come up with a story that is convincing—and that very well may be true—based on limited information can be worth a lot in the business world.

Another thing that you can develop in the humanities is the ability to convince people. Unlike math or physics, often there is no definitive way to prove anything, so powers of argument matter. As I’ve often told students and advice-seekers, the single most important skill in business is the ability to pick up the phone, call someone (no, email doesn’t work) who doesn’t owe you anything, and convince her to do something for you. The most convincing person I’ve ever met is also the most effective businessperson I’ve ever known, and he has a B.A. and D.Phil. in philosophy.

And, of course, you learn a lot more about the real world—meaning how people behave, both individually and in groups—in the humanities and social sciences than you do in most scientific fields. So, for example, you might realize that human beings are prone to herd behavior when it comes to, say, investing in real estate, and that bubbles are prone to collapse in messy ways.

The caveat, though, is this: David Silbersweig went to Dartmouth and Cornell Medical School. I went to Harvard and UC-Berkeley (and, much later, the Yale Law School). If you go to a school like that, there are prestigious companies that will take a chance on you even if you majored in classics or medieval history. Even so, there aren’t that many: three consulting firms, a handful of investment banks, Google, Facebook, Microsoft, and probably not that many others. Or you can get fancy summer internships even as you spend your semesters reading Sartre and Heidegger, or whatever people read today. Or, as they say, you can always go to law school.

The problem is that while we need lots and lots of people with humanities and social science backgrounds, in today’s increasingly anti-intellectual climate, majoring in philosophy is becoming a risk that fewer and fewer people can afford to take. It’s also becoming an option that fewer and fewer people have to begin with, as schools from Queen’s University to CUNY make it harder and harder to study in fields that can’t attract their own corporate donors. This is what happens when you have a poor job market for new graduates, a social safety net in tatters, crumbling financial support for public higher education, an arms race in corporate fundraising by elite private schools, and a general takeover of the intellectual culture by corporate CEOs. Studying French literature will become one more luxury good reserved for the elite.

Why Justice Is So Rare

Mon, 05/23/2016 - 15:18

By James Kwak

Today was a victory for justice. In Foster v. Chatman—a case brought by the Southern Center for Human Rights and argued by death penalty super-lawyer Stephen Bright—the Supreme Court overturned the death sentence imposed on Timothy Foster by an all-white jury in 1987. In that case, the prosecution made sure it had an all-white jury by eliminating  (striking) all black candidates from the jury pool. In Batson v. Kentucky (1986), the Supreme Court ruled that it is unconstitutional to strike potential jurors on the basis of race, but the prosecutors’ own notes made clear that they knew what they were doing. Here are just a few examples, from the appendix. They pretty much speak for themselves.

It’s hard to read, but next to the green blotch in the picture above are the words “represents Blacks.”

In order to “avoid Batson claims,” the prosecutors came up with a long list of “race-neutral” reasons for striking black jurors, several of which contradicted each other. But the trial judge bought them, and Foster was sentenced to death. Only twenty-seven years later did the Supreme Court overturn that judgment, with Chief Justice Roberts not only concluding that at least two jurors were rejected because of race, but also calling out the prosecution for “the shifting explanations, the misrepresentations of the record, and the persistent focus on race in the prosecution’s file.”

But even if today is a victory for justice, the story of Tim Foster also explains why those victories are so rare. As Steve Bright said after the verdict was announced, “The practice of discriminating in striking juries continues in courtrooms across the country. Usually courts ignore patterns of race discrimination and accept false reasons for the strikes.” There are many reasons why this successful outcome is the exception, not the rule:

  1. Tim Foster was sentenced to death. People convicted by all-white juries in non-capital cases—and sentenced only to life in prison—are less likely to find good lawyers or have their cases heard by the Supreme Court.
  2. Because this was a post-conviction appeal, Tim Foster had no constitutional right to a lawyer. But he got not only a lawyer, but the best: the Southern Center for Human Rights and Steve Bright, who has argued and won multiple death penalty cases in the Supreme Court. (I am on the board of the Southern Center, which is a truly fantastic organization.)
  3. Foster’s attorneys got the prosecution’s notes, which is where they found what Bright called “an arsenal of smoking guns.” As he said today, in a classic understatement, “Usually that does not happen.”
  4. Foster’s trial was in 1987, only one year after Batson. Since then, prosecutors have gotten much better at coming up with plausible race-neutral reasons for striking jurors, which is why relatively few cases are overturned for Batson violations.
  5. The prosecutors were pretty ham-handed, both in their handling of the juror selection process and in their attempted rationalizations for their strikes. As Justice Kagan said in oral argument, “Isn’t this as clear a Batson violation as a court is ever going to see?” More sophisticated prosecutors, and Foster loses his case.
  6. The Supreme Court agreed to hear Foster’s case. You may think that the evidence of racial discrimination is obvious. I do. So does John Roberts. But a Georgia trial court rejected Foster’s appeal, even after his attorneys presented the evidence from the prosecutor’s notes. And the Georgia Supreme Court refused to hear the appeal. That’s two courts, staffed by eminent judges, who looked at the evidence and said, “Whatever.”
  7. The Supreme Court agreed to decide the case on the merits. Just three days before the oral argument, the court asked both sides to address a complicated procedural question involving which ruling Foster was appealing—the trial court’s or the state supreme court’s—and whether the case dealt with state law or federal law issues. In his dissent, Justice Thomas argued that Foster had lost in the state courts because his claims were procedurally barred. In that case, there is no federal issue and nothing for the Supreme Court to review, so he loses—despite the overwhelming evidence of racial discrimination.

If any of those seven things didn’t happen, Tim Foster would still be on death row. The stars aligned for Tim Foster. They don’t for most people. For many people in America, justice is the exception, not the rule. That’s not right.

 

Moral Worldviews and Empirical Beliefs

Mon, 05/16/2016 - 09:55

By James Kwak

Funny thing, Twitter. My most-viewed tweet ever is the following:

“A survey of 131 economists found that their answers to moral questions predicted their answers to empirical ones.” https://t.co/sPmCZx9S4A

— James Kwak (@JamesYKwak) May 14, 2016

That’s a retweet of this, from Neel Kashkari:

When All Economics Is Political https://t.co/rL5hTo9syF

— Neel Kashkari (@neelkashkari) May 14, 2016

The quotation about the survey is from the WSJ article about Russ Roberts that Kashkari originally tweeted.

Most of the comments on my tweet were some version of “duh.” But then there were a bunch who said some version of “correlation doesn’t imply causality” (which is an excuse to link to my favorite XKCD cartoon).

The thing is, it’s quite plausible that there is causality from empirical beliefs (how the world works) to policy preferences (what one should do). But I don’t see why there would be causality from empirical beliefs to moral beliefs. For example, let’s say you think that same-sex marriage is morally wrong, perhaps because the Bible says so (in your interpretation, at least). You should be able to concede any number of empirical points—that gay couples are perfectly good at raising children, that the existence of gay married couples does not weaken straight marriages around them, that the incidence of gay married couples does not cause an increase in crime, etc.—while still holding to your belief that same-sex marriage is morally wrong. That’s the thing about morality. Conceding these points might decrease your resistance to gay marriage as a public policy—that’s causality from empirics to policy—but it shouldn’t change your moral beliefs.

So what is this “survey of 131 economists” really about? The preliminary findings are in “The Moral Narratives of Economists” by Anthony Randazzo and Jonathan Haidt. They surveyed a bunch of economists and asked them two sets of questions: one economic (e.g., is austerity good or bad for economic growth) and one moral (e.g., which is more fair, equal outcomes or outcomes that are proportional to contributions).  They found, to take a couple of examples:

  • “Economists that tended to favor fiscal austerity during a recession defined fairness in proportional terms” and more generally “tended to show a moral judgment profile similar to what you would find among political conservatives.”
  •  Economists who opposed austerity during a recession “tended to have moral worldviews similar to political progressives, such as defining fairness in terms of equality.”

The obvious (“duh”) reading is that moral beliefs (conservative, liberal) shape people’s opinions about what should be empirical questions (effects of austerity).

The reverse-causality argument is the following: Imagine some macroeconomist who starts off with an egalitarian moral worldview. She studies fiscal policy in recessions and concludes that fiscal austerity tends to increase economic growth. Because of this research finding, she adopts conservative moral views—that is, she starts thinking that fairness should be thought of in terms of just deserts rather than equal outcomes. To me, that just doesn’t make sense. I don’t see the mechanism that leads from a judgment about policy effectiveness to a belief about what is fair.

But here’s the broader question that Randazzo and Haidt bring up. Take two big questions about the role of government in the economy: (a) whether there should be a robust welfare state to protect people from the risks of capitalism; and (b) whether there should be a robust regulatory state to ensure that corporate profit-seeking is channeled toward socially beneficial ends. These are two very different issues, and the empirical questions on which they depend are also very different. The first depends a lot on whether you think that the incentive costs of welfare programs are balanced by the benefits they provide to recipients; the second depends a lot on whether you think that the cost of regulation exceeds the social costs generated by corporate externalities in the first place. It should be possible for a careful economist to conclude that the welfare state is good and the regulatory state is bad, or vice versa.

So why is it, as Randazzo and Haidt observe, that “there seem to be no U.S. economists who take diverging views on the welfare state and the regulatory state?” Their hypothesis is that people, including economists, have morally coherent, narrative beliefs about the world—stories—and their beliefs about specific, empirical questions have to be consistent with those stories. To anyone who is at all self-aware, this seems obvious. Duh.

Economics 101, Good or Bad?

Fri, 05/13/2016 - 08:25

By James Kwak

Over at the Washington Post, Michael Strain of the American Enterprise Institute is upset that people are picking on Economics 101. He singles out Paul Krugman and Noah Smith in particular for claiming that “the pages of economics 101 textbooks are filled with errors, trivia and ‘useless fables.'” Instead, Strain insists, “an economics 101 textbook is a treasure.” He continues by discussing some of the key insights that you can gain from the basic models presented in an introductory economics class.

Except, for the most part, Strain is rebutting an argument that no one is making. He is right to say that Economics 101 provides many valuable lessons—the competitive market model, opportunity cost, diminishing marginal returns, comparative advantage, the labor-leisure tradeoff, etc. But no one denies the analytical power of those abstract concepts.

Krugman’s argument was that for some policy issues, the lessons of Economics 101 are just not that important: correct sign but small magnitude, you might say. One of his examples was international trade; his point was that even if free trade is better than protectionism, the welfare gains from reductions in tariffs are relatively small, especially in a world where trade is already mostly free, and especially when compared to the gains from other factors such as technological innovation.

Smith did say that most of what’s in an introductory textbook is “probably wrong,” but if you read his article it’s clear that he meant “probably wrong [if you expect it to accurately describe the real world].” Smith’s example is the minimum wage; his point is that while the supply-and-demand model predicts that a higher minimum wage will increase unemployment, empirical research shows that real labor markets often do not behave that way. The Economics 101 model is wrong as a description of reality; that doesn’t mean that it isn’t an important source of insight. Here’s how Smith puts it:

That doesn’t mean the theory is wrong, of course. It probably only describes a small piece of what is really going on in the labor market. In reality, employment probably depends on a lot more than just today’s wage level — it depends on predictions of future wages, on long-standing employment relationships and on a host of other things too complicated to fit into the tidy little world of Econ 101.

I think “small” might be an overstatement—statutory wage levels are probably a big factor in the low-skilled labor market—but otherwise Smith’s point should be uncontroversial. A friend and labor economist said to me that when thinking about the impact of a minimum wage, the natural starting point is the supply-and-demand diagram, because it’s so powerful—but you don’t stop there. The model is incomplete, like all models, and if you don’t realize that you will make mistakes.

Professional economists know all this, and hence many think that models need to be balanced by empirical research, even in first-year classes. Strain doesn’t buy this because “economists’ empirical studies don’t agree on many important policy issues.” I don’t understand this argument. The minimum wage may or may not increase unemployment, depending on a host of other factors. The fact that economists don’t agree reflects the messiness of the world. That’s a feature, not a bug.

People like Krugman and Smith (and me) aren’t saying that Economics 101 is useless; we all think that it teaches some incredibly useful analytical tools. The problem is that many people believe (or act as if they believe) that those models are a complete description of reality from which you can draw policy conclusions. As Smith says, “If economics majors leave their classes thinking that the theories they learned are mostly correct, they will make bad decisions in both business and politics.” In the first (1948) edition of his famous textbook, Paul Samuelson lamented that the simple model of the competitive market—you know, the one that says that markets maximize social welfare—is “all that some of our leading citizens remember, 30 years later, of their college course in economics.”

In the past forty years, simplistic applications of Economics 101 concepts, stripped of nuance or empirical verification, have swept the policy field in areas from labor markets to taxes to health care. They now constitute virtually the whole of the establishment Republican Party’s economic policy, as represented by Paul Ryan (who talks exactly like someone with an exaggerated faith in a handful of Economics 101 snippets). The problem isn’t Economics 101—it’s the transformation of Economics 101 into an ideology that, like most ideologies, claims the status of objective truth.

Hillary Clinton, Barack Obama, and Our Intoxicated Horse

Wed, 05/11/2016 - 09:18

By James Kwak

Remember just eight years ago, when we had an epic primary battle between Hillary Clinton and Barack Obama? There weren’t many significant policy differences between them; Obama was never as liberal as many people assumed he was. But there was one major difference. This is what Obama said:

Washington has allowed Wall Street to use lobbyists and campaign contributions to rig the system and get its way, no matter what it costs ordinary Americans. . . .

Unless we’re willing to challenge the broken system in Washington, and stop letting lobbyists use their clout to get their way, nothing is going to change.

The reason I’m running for President is to challenge that system.

The quotations are from the new edition of Republic, Lost by Larry Lessig (pp. 167–68). My handful of loyal readers will recall that Lessig was my choice for the Democratic presidential nomination until he was shut out of the debates by the Democratic Party. (Note to the party and its affiliated Super PACs: no, I’m not giving you money.)

I’m reading the new edition of the book, and I came across this brilliant description of Hillary Clinton’s 2008 run (p. 168):

She saw the job of the president to be to take a political system and do as much with it as you can. It may be a lame horse. It may be an intoxicated horse. It may be a horse that can only run backward. But the job is not to fix the horse. The job is to run the horse as fast as you can.

Regardless of what you think about Clinton on policy—she’s a little too far to the right for my tastes, but not terribly so—I think this is a fair summary of her approach, both in 2008 and in 2016. She has positioned herself as the pragmatic choice, the person who knows how to work within the system to make incremental gains, the candidate of modest but supposedly achievable ambitions. Last time she lost; this time she’s winning. She’s nothing if not consistent.

This means, of course, that the broken, rigged system—those are President Barack Obama’s words, everyone, not just those of some socialist from Vermont—orchestrated by lobbyists and dominated by concentrated special interests, will be around for the foreseeable future.

For someone who only tunes in during presidential election campaigns, this may raise the question: What happened? Wasn’t Obama going to fix the system? Well, as Lessig and many others have pointed out, he didn’t even try. Whether Obama gave up because he thought he could grind out legislative victories the old-fashioned way, or whether he never really believed in the cause, I guess only he knows. But Obama the candidate was right: unless we fix the system, nothing else is going to change. And except for Zephyr Teachout and a few other down-ballot candidates who are committed to electoral reform, this year is going to be another lost opportunity.

The Committee to Save the World

Tue, 05/10/2016 - 07:30

By James Kwak

You know that famous Time cover featuring Rubin, Greenspan, and Summers, calling them “The Committee to Save the World”? I was reading the accompanying article, which I had never read before, and it’s an absolutely precious example of the nonsense people said at the time. Like this:

Rubin, Greenspan and Summers have outgrown ideology. Their faith is in the markets and in their own ability to analyze them. … This pragmatism is a faith that recalls nothing so much as the objectivist philosophy of the novelist and social critic Ayn Rand (The Fountainhead, Atlas Shrugged), which Greenspan has studied intently. During long nights at Rand’s apartment and through her articles and letters, Greenspan found in objectivism a sense that markets are an expression of the deepest truths about human nature and that, as a result, they will ultimately be correct. … They all agree that trying to defy global market forces is in the end futile. That imposes a limit on how much they will permit ideology to intrude on their actions.

I realize this is written by a journalist, not by one of the three men themselves. But could you come up with a better example of an ideology?

The Problem with Obamacare

Mon, 05/09/2016 - 10:39

By James Kwak

When it comes to Obamacare, I’m firmly in the “significantly better than nothing” camp. Obamacare has increased coverage—although not as much as one might have hoped. The percentage of people uninsured has fallen from around 17% in 2013, when only a few coverage-related provisions of the ACA were in effect, to around 11% in early 2015, after the major changes kicked in in 2014. That’s six percentage points, or millions of people—but it’s still much less than half of the pre-ACA uninsured.

There has also been a lot of controversy over the impact of Obamacare on health insurance prices. According to the Kaiser Family Foundation, the weighted average pre-subsidy price of a silver plan on the exchanges only increased by 3.6% from 2015 to 2016, which certainly seems good. But one way the ACA keeps premiums reasonable is by pushing people into plans with high levels of cost sharing. The average silver plan has a combined annual deductible (including prescriptions) of more than $3,000; the deductible for an average bronze plan is close to $6,000. In other words, one reason that insurance premiums are affordable is that those premiums don’t buy you what they used to, as insurers shift more and more health care costs onto their customers.

This is exactly what we should have expected. Obamacare is an example of “managed competition,” something that Bill Clinton talked about on the campaign trail twenty-four years ago. The basic principle is that competitive markets will generally produce good outcomes—low costs, efficient allocation of resources to meet consumer needs, etc.—but need to be managed around the edges. Moderate Democrats (what we used to call moderate Republicans) have fallen in love with this idea, because they can talk about the wonders of markets while blaming anything they don’t like on “market failures.”

The classic example of correcting for a market failure, of course, is the individual mandate. By now, every liberal interested in policy has learned what adverse selection is and, more specifically, can explain why community rating will produce an adverse selection death spiral unless you have mandated universal participation. This is the image that Obamacare’s most ardent supporters want you to take away: cleverly designed regulation preventing a market failure and ensuring universal coverage, while enabling markets to reduce costs, encourage innovation, blah blah blah. What could be better?

The dirty not-so-secret of Obamacare, however, is that sometimes the things we don’t like about market outcomes aren’t market failures—they are exactly what markets are supposed to do.

The problem with adverse selection, remember, is that people know more about their health status than insurers do, so they only buy policies that are profitable for them on an expected basis (that is, sick people are more likely to buy insurance than healthy people). That means insurers lose money and raise premiums, which pushes the relatively healthy out of the pool and forces premiums up yet again. But imagine if insurers had the same information as insureds, so they could calculate the actuarially fair price for every policy. No more adverse selection! But would that be a good outcome? Sick people and poor people would be unable to afford insurance at all. That’s what markets do: they distribute goods and services based on people’s willingness to pay, which is a function of their budget constraints. And that’s not something that we as a society are willing to accept.
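
To make the spiral concrete, here is a minimal simulation sketch. Everything in it is assumed for illustration—the skewed distribution of expected costs, the $500 “peace of mind” amount people will pay above their expected costs, and an insurer that simply re-prices to last year’s pool—none of it comes from the ACA or any real market:

```python
# A minimal sketch of an adverse selection death spiral. All numbers are
# invented for illustration; nothing here comes from the ACA or a real market.
import numpy as np

rng = np.random.default_rng(0)
expected_costs = rng.lognormal(mean=7.5, sigma=1.0, size=10_000)  # skewed annual costs
risk_premium = 500                     # extra people will pay for peace of mind
premium = expected_costs.mean()        # year 1: priced for the whole population

for year in range(1, 11):
    buyers = expected_costs[expected_costs + risk_premium >= premium]
    if buyers.size == 0:
        print(f"Year {year}: no one buys; the market has unraveled")
        break
    share = buyers.size / expected_costs.size
    print(f"Year {year}: premium ${premium:,.0f}, {share:.0%} of people insured")
    premium = buyers.mean()            # re-price to the sicker pool that remains
```

Each round, the healthiest buyers drop out, the average cost of those who remain rises, and the premium chases it upward.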

So Obamacare says: No medical underwriting!—which means, basically, that the healthy and the sick pay the same up-front premiums. At this point, with a universal coverage mandate and no medical underwriting, you might think we should just have a single payer system. But … but … markets!

So, in order to give private insurers something to do, Obamacare allows them to offer different flavors of health plans, within the rules set up by the ACA. But what is it that insurance companies do? They try to sell policies for more (in premiums) than they cost (in benefits). We know sick people will cost more than healthy people, but now insurers aren’t allowed to price discriminate on the front end. So, instead, they offer plans with loads of cost sharing—high deductibles, high out-of-pocket maximums, and high levels of coinsurance. Cost sharing has two purposes. One is to deter people from actually using health care—this is the reality of “consumer-driven health care.” The other is to make the sick pay more than the healthy. Remember, that’s how markets are supposed to work. Insurers are supposed to identify the sick people and charge them more for insurance; Obamacare says they can’t do that, so instead they switch to policies that force sick people to pay more for care at the point of service.
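
To see how cost sharing shifts the bill onto the sick, here is a toy calculation with made-up plan parameters (a $6,000 deductible, 30% coinsurance, and a roughly bronze-level out-of-pocket maximum—illustrative numbers, not any actual policy):

```python
# Toy point-of-service cost under a high-cost-sharing plan. Plan parameters
# are invented for illustration and are not taken from any real policy.

def out_of_pocket(annual_spend, deductible=6_000, coinsurance=0.30, oop_max=7_150):
    """What the patient pays out of pocket on `annual_spend` of covered care."""
    if annual_spend <= deductible:
        return annual_spend                              # everything before the deductible
    cost = deductible + coinsurance * (annual_spend - deductible)
    return min(cost, oop_max)                            # capped at the out-of-pocket max

for spend in (500, 3_000, 10_000, 60_000):
    print(f"${spend:>6,} of care -> patient pays ${out_of_pocket(spend):,.0f}")
```

A healthy person who uses $500 of care pays $500; a sick person who uses $60,000 hits the out-of-pocket maximum—the same premium up front, very different bills at the point of service.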

None of this is at all nefarious. If you’re going to have private health insurance companies, you have to let them try to make money—otherwise, what’s the point? Indeed, if you like markets, you have to recognize that markets only do what they do because companies are trying to make money.

But you run up against this fundamental problem: Markets work by making people pay for what they get; the more health care you “consume,” the more you pay, either in insurance premiums or at the hospital. But the vast majority of Americans are not comfortable with the idea that rich people get good health care, middle-class people get passable health care (until they get seriously ill, in which case they go bankrupt), and poor people get no health care to begin with.

Obamacare is a heroic attempt to make the best out of this basic conundrum: we are trying to use markets to distribute something that, at the end of the day, we don’t want distributed according to market forces. That’s why we have not only the individual mandate and the prohibition on medical underwriting, but also the expansion of Medicaid, the subsidies, the Cadillac tax (because we don’t like the market when it produces gold-plated insurance plans) and, most telling of all, risk adjustment.

What is risk adjustment? Well, consider what a profit-seeking insurer would do if it has to charge the same price to everyone. In that case, you want to sell insurance to healthy people, not to sick people. Since you’re not allowed to turn people away, you design marketing programs so that only healthy people find out about your product. Again, nothing nefarious going on. But that’s bad for the system, because then other insurers will get stuck with the sick people, lose money, and pull out of the market.

So Obamacare’s risk adjustment provisions transfer money from plans with healthy people to plans with sick people. Insurance companies aren’t allowed to compete by trying to attract lower-risk customers. The only way they are allowed to compete is by paying less to health care providers for the same services (since Obamacare requires standard minimum benefit packages for all plans). But the thing is, we already know how to lower payments to providers. The key is to be a really, really big insurance plan, covering lots of people, so that you have bargaining power when it comes time to negotiate rates with hospitals and physician offices. There’s no “innovation” to stimulate here; it’s pure market power. No one has more of it than Medicare—and nothing can have as much market power as a single payer plan.
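
A stylized version of the transfer looks something like the sketch below. The real HHS formula is far more elaborate (it works off diagnosis-based risk scores, metal-level adjustments, and statewide average premiums); the plan names, enrollments, and risk scores here are invented:

```python
# Stylized risk-adjustment transfer: plans with healthier-than-average members
# pay in, plans with sicker-than-average members are paid out, and the
# transfers net to zero. The actual ACA formula is much more complicated.

def risk_adjustment_transfers(plans, statewide_avg_premium):
    """plans maps plan name -> (enrollment, average member risk score)."""
    total = sum(n for n, _ in plans.values())
    market_risk = sum(n * r for n, r in plans.values()) / total
    return {
        name: n * (risk / market_risk - 1) * statewide_avg_premium  # + receives, - pays
        for name, (n, risk) in plans.items()
    }

print(risk_adjustment_transfers(
    {"HealthyCo": (40_000, 0.85), "ChronicCare": (10_000, 1.60)},
    statewide_avg_premium=4_800,
))
# HealthyCo pays in about $28.8 million; ChronicCare receives the same amount.
```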

So at the end of the day, Obamacare is based on the idea that competition is good, but tries to prevent insurers from competing on all significant dimensions except the one that the government is better at anyway. We shouldn’t be surprised when insurance policies get worse (in terms of the benefits they actually provide) and health care costs continue to rise.

If we take as our starting premise that everyone should be able to afford decent health care—something that literally everyone agrees with—then the most obvious solution is single payer or one of its close cousins, such as we see in every other advanced economy in the world. But … markets! Not just Republicans, but also most Democrats are convinced that markets must be better, because of something they learned in Economics 101. Health care is one of the best examples of economism—the outsized influence that the competitive market model has had on public policy, even in areas where its lessons patently don’t apply.

You could say that the Obama administration made the best of the lousy hand it was dealt by decades of market propaganda and a weak majority that hinged on Democrats In Name Only. Obamacare certainly improves on what preceded it (nothing, that is, as far as the individual market is concerned). But ultimately it is a flawed attempt to force markets to produce outcomes that markets don’t want to produce.

The Long Game

Mon, 04/25/2016 - 07:30

By James Kwak

Charles Koch recently made headlines by saying that it is “possible another Clinton could be better than another Republican” in this year’s presidential race. Some people find this surprising: how could the Koch brothers sit by and let another Democrat be elected to the White House? But that’s a reflection less of the Kochs’ political acumen than of our collective quadrennial fixation on the presidential election.

I find it unlikely that the Kochs would actually support Hillary Clinton—it’s more likely Charles was taking the occasion to signal his displeasure with both Donald Trump and Ted Cruz—but it’s quite possible that they will simply sit out the battle for the White House. Unlike, oh, just about everyone in the Democratic Party, the Kochs have never panicked at the thought of losing any particular election. Instead, as Jacob Hacker and Paul Pierson put it:

When conservative business leaders such as Charles and David Koch invested in Cato, Heritage, the American Enterprise Institute, and all the other intellectual weapons of the right, they were playing the long game. When Republican political leaders like Newt Gingrich and Mitch McConnell developed new strategies for tearing down American government to build up GOP power, they were playing the long game.

That’s from the conclusion of their new book, American Amnesia (p. 369).

Since the 1980s, if not earlier, the story of the Democratic Party has been a reasonably successful attempt to take or maintain control over the presidency at any cost—combined with a complete failure to articulate a compelling, long-term vision, or to build lasting networks and institutions that provide the infrastructure for political change. We bet everything on the political skills of Bill Clinton or Barack Obama, and then we act surprised when they end up following moderate Republican policies—in part because they are blocked in by Republicans in Congress, in state houses, and in the federal judiciary. (And for those who think this is hyperbole, it was Bill Clinton himself who said, “I hope you’re all aware we’re all Eisenhower Republicans. . . . We stand for lower deficits and free trade and the bond market. Isn’t that great?” (Hacker and Pierson, p. 163).)

The story of the conservative movement, on the other hand, is the opposite: serial failure to come up with a compelling presidential candidate—since 1988, no Republican nominee has won a plurality of the popular vote, except W. when running as an incumbent after “winning” a war—combined with a consistent vision, a massive advantage in fundraising not dependent on a unique individual (like Obama or Bernie Sanders), repeated victories in state legislative and gubernatorial elections, successful gerrymandering in multiple states, a structural lock on the House of Representatives, and consolidation of the small-state bias in the Senate. Sure, things haven’t been all rosy for libertarian conservatives like the Kochs—there was the huge expansion of government under W., and now Obamacare. But they’ve reduced the chances of higher taxes to nil, they’ve blocked any action on climate change, they have Barack Obama reduced to trying to pass a “free trade” agreement (because he can’t pass anything else), and they’re just one presidential election—now, or 2020, or 2024—from a massive restructuring of the tax code and all social insurance programs.

American Amnesia is a great read, for several reasons. It’s a passionate and fact-based argument for what Hacker and Pierson call the “mixed economy” (what David Kotz calls “regulated capitalism” in The Rise and Fall of Neoliberal Capitalism), one in which government plays an active role in structuring and supplementing markets. It’s long on American history going back to the Founding Fathers. Not only does it emphasize the importance of everyone’s current favorite founder, Alexander Hamilton, who favored a strong (for his day) federal government; it also shows James Madison to be a proponent of a strong central government, and even describes Thomas Jefferson’s reconciliation to Hamiltonian economic policies. And it’s deep on details of everything that’s wrong with American politics and economic policy today.

I think the best part of the book, however, is its emphasis on the interest groups and ideologues who brought about the current state of affairs. The title of the book refers to the fact that Americans no longer remember that it was the mixed economy that made the United States the most powerful country in the world. But the word “amnesia” is slightly misleading, because it connotes passive forgetting. It’s more accurate to say that the idea of the mixed economy was actively pushed aside by the ideology of all-powerful and universally benevolent competitive markets—what I call economism because of its exaggerated dependence on Economics 101 models, and Hacker and Pierson call Randianism after Ayn Rand.

Like Kotz, and like Cohen and DeLong in Concrete Economics, Hacker and Pierson discuss how something went wrong beginning in the 1970s and especially from the 1980s. (That’s also the inflection point in 13 Bankers, but we were focusing just on the financial sector.) The second part of American Amnesia focuses on several of the actors behind that historical shift. There is future Supreme Court Justice Lewis Powell encouraging American business to become a force for deregulation and small government. There is Newt Gingrich, who hit on the brilliant idea of destroying the reputation of the very institution he inhabited. And, of course, there are the Koch brothers, with that same Charles Koch saying (p. 232):

 The most important strategic consideration to keep in mind is that any program adopted should be highly leveraged so that we reach those whose influence on others produces a multiplier effect. That is why educational programs are superior to political action, and support of talented free-market scholars is preferable to mass advertising.

The Friedrich Hayek of “The Intellectuals and Socialism” could not have wished for a better student.

Through it all, the key is that these men were playing the long game—not the short game of trying to win the next election. And they are still playing it. In the long run, what matter are ideas, institutions, and money—not the millions of $25 donations that can make the difference in a presidential primary, but the million-dollar checks that build up think tanks, academic institutes, Astroturf organizations, 501(c)(4)s and super PACs, training courses for activists and local candidates, and all the other infrastructure necessary to build a long-term political movement.

They have it. We don’t. Hacker and Pierson write, “Those like us who believe we can and must build a mixed economy for the twenty-first century—they need to play the long game, too” (p. 369). The more I have studied the development of modern American conservatism, the more I agree. Unfortunately, that’s not how Democrats roll—at least not so far.

Cultural Capture in Black and White

Fri, 04/22/2016 - 07:31

By James Kwak

A few years back I wrote a paper on “cultural capture” in financial regulation. The basic idea is that the industry can achieve the practical result of regulatory capture—industry-friendly policies—not just by bribing regulators (legally or illegally) and not just by providing useful “information” to agencies, but by cultivating other types of influence, such as social relationships and status advantages. The response was decidedly mixed. Some people said, “Yes, that’s exactly right!” while others said, “Nice idea but how can you prove that it actually happens?”

I completely concede the identification problem. Regulatory decisions are always overdetermined, and it’s hard to find data on, say, how many regulators’ kids go to school with lobbyists’ kids. But sometimes it just hits you between the eyes.

Yesterday the invaluable Jesse Eisinger published the backstory of the SEC’s ABACUS investigation (which will always have a special place in my heart, since the complaint was filed shortly after the publication of 13 Bankers, getting Simon and me a full-hour interview on Bill Moyers and boosting the book up the charts). Eisinger’s story is based on information provided by James Kidney, a veteran SEC lawyer who thought the agency should pursue Goldman senior executives on a broader theory of liability—but was opposed by other insiders and, ultimately, Enforcement Director Robert Khuzami.

And here’s the smoking gun, in an email by Reid Muoio, then head of the team investigating complex mortgage securities (that is, most of the financial crisis):

[The email appeared here as an embedded image, which is not reproduced in this text version.]

Let’s leave aside the illogical conclusion: of all the people at Goldman, Tourre most closely fits the description, “good people who have done one bad thing.” His higher-ups, the ones who designed and supervised the whole operation, are the serial wrongdoers. And forget for the moment the obvious contrast with how the justice system treats minor drug offenders.

Muoio’s charging decision is based on sympathy with the entire category of securities law defendants. They’re good people. They’re like us, but for one little mistake. So let’s go easy on them.

Kidney knew what was going on. This is what he wrote in one email: “We must be on guard against any risk that we adopt the thinking of those sponsoring these structures and join the Wall Street Elders, if you will.” The problem is that his colleagues seem to have wanted to be part of the Wall Street Elders—not that they necessarily wanted jobs on Wall Street, but that they wanted to feel like part of the sophisticated club, the people who designed the most complicated financial products ever.

There’s an argument to be made that the SEC couldn’t have won the broader case that Kidney wanted to bring; I can’t judge that from what I can see. But the key thing is, that wasn’t the only factor behind the SEC’s decision to go easy on Goldman.

Kidney explains why he came forward on his blog here. It’s a great read, full of passages like this:

Yessir, according to the Obama administration, Goldman Sachs, JP Morgan, Bank of America, Citibank and other institutions made their contributions to tearing down the economy, but no one was responsible.  They are ghost companies.

Enjoy.

Profits in Finance

Wed, 04/20/2016 - 11:20

By James Kwak

Noah Smith on one reason why financial sector profits have remained high as the industry has grown:

Why haven’t asset-management charges gone down amid competition? In a recent post, I suggested one answer: people might just be ignoring them. Percentage fees sound tiny — 1 percent or 2 percent a year. But because that slice is taken off every year, it adds up to truly astronomical amounts. … If many investors pay no attention to what they’re being charged, more competition can’t push down those fees.

I think that’s basically right, but there’s a smidgeon more to it. Expense ratios on actively managed mutual funds have remained stubbornly high. Even though more people switch into index funds every year, their overall market share is still low—about $2 trillion out of a total of $18 trillion in U.S. mutual funds and ETFs. Actively managed stock mutual funds still have a weighted-average expense ratio of 86 basis points.

Why do people pay 86 basis points for a product that is likely to trail the market, when they could pay 5 basis points for one that will track the market (with a $10,000 minimum investment)? It’s probably because they think the more expensive fund is better. This is a natural thing to believe. In most sectors of the economy, price does correlate with quality, albeit imperfectly. It’s also natural to believe that some people are just better than others at picking stocks, just like Stephen Curry is better than other people at playing basketball. Finance and economics professors can talk all they like about nearly-efficient markets, the difficulty of identifying the people who can generate positive alpha, and the fact that you have to pay through the nose to invest your money with those people (like James Simons), but ordinary investors just don’t buy it. And this is one area where I think marketing does have a major impact, both in the form of ordinary advertising and in the form of the propaganda you get with your 401(k) plan.
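
And the 86-versus-5-basis-point gap is not small once it compounds. Here is a minimal sketch with assumed round numbers (a one-time $10,000 investment, a 6% gross annual return, a 30-year horizon—none of these figures come from the post):

```python
# Fee drag over time: the same gross return, net of two different expense ratios.
# Inputs are assumed round numbers for illustration, not figures from the post.

def terminal_value(initial, gross_return, expense_ratio, years):
    net = gross_return - expense_ratio          # simple approximation of fee drag
    return initial * (1 + net) ** years

active = terminal_value(10_000, 0.06, 0.0086, 30)   # 86 basis points
index  = terminal_value(10_000, 0.06, 0.0005, 30)   # 5 basis points

print(f"Active fund after 30 years: ${active:,.0f}")   # roughly $45,000
print(f"Index fund after 30 years:  ${index:,.0f}")    # roughly $57,000
print(f"Cost of the fee gap: ${index - active:,.0f}")  # about a fifth of the index outcome
```

Assuming identical gross returns, the higher-fee fund ends up roughly a fifth smaller after three decades, before even accounting for the likelihood that the active fund trails the market.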

So while some people are no doubt ignoring the fees, others are probably saying, “Sure the expense ratio is 100 basis points, but look at the past performance!” (Anyone who makes decisions based on past performance—that is, most people—is taking fees into account to some extent, since published mutual fund returns are almost always net of fees.) So the persistence of high fees is partly due to the difficulty of convincing people that markets are nearly efficient and that most benchmark-beating returns are some combination of (a) taking on more risk than the benchmark, (b) survivorship bias, and (c) dumb luck.

The Root of All Our Problems?

Mon, 04/18/2016 - 09:04

By James Kwak

A friend pointed me toward an op-ed in The Guardian by George Monbiot titled “Neoliberalism—The Ideology at the Root of All Our Problems.” The basic argument is that there is an ideology that has had a pervasive influence on the shaping of the contemporary world. Its policy program includes “massive tax cuts for the rich, the crushing of trade unions, deregulation, privatisation, outsourcing and competition in public services.” Monbiot calls this cocktail “neoliberalism” (more on the name later).

There’s a lot in the article that I agree with. The political and economic takeover of the Western world by the super-wealthy was not accomplished by force, nor by rich people simply demanding a larger slice of the proverbial pie. Instead, it happened because many people—particularly in the media, the think tank intelligentsia, and the political community—internalized the idea that competitive markets provided the solution to all problems. (The idea that unfettered capital markets and financial innovation would be good for everyone, which helped produce the financial crisis, is only a special case of this larger phenomenon.)

I like Monbiot’s framing of how this works:

So pervasive has neoliberalism become that we seldom even recognise it as an ideology. We appear to accept the proposition that this utopian, millenarian faith describes a neutral force; a kind of biological law, like Darwin’s theory of evolution.

Sometimes it does seem like evolutionary biology and the simple model of supply and demand are the two most common models that people turn to when trying to explain things they don’t really understand.

The effect is to naturalize and even celebrate the inequality that results from a blind reliance on supposedly competitive markets: “Inequality is recast as virtuous: a reward for utility and a generator of wealth, which trickles down to enrich everyone.”

Monbiot also points out that “neoliberals” have the field largely to themselves, since the “left” has not come up with an alternative since Keynesianism. I think this is one reason why the policy center of gravity has shifted steadily rightward since the 1970s, even though public opinion on basic questions such as the role of government has scarcely budged. Conservatives have an economic program; liberals (what we now call “progressives” in the United States) have a bunch of complaints about that economic program.

One thing I’m not entirely on board with is the particular bundle of policies that Monbiot ties together, or the label “neoliberalism” that he applies. This is a complicated conceptual space, portions of which have been labeled liberalism (that’s the word Hayek used in The Road to Serfdom), neoliberalism, libertarianism, Randianism (used by Hacker and Pierson in American Amnesia), or simply conservatism. After all, there isn’t much in the ideology that Monbiot identifies that every Republican presidential nominee since Reagan wouldn’t agree with. “Neoliberalism” also suffers from the problem that it means very different things to Anglo-Americans and to Latin Americans, many of whom see it through the lens of the Pinochet regime and the U.S./IMF’s imposition of the Washington Consensus. Various people, particularly Frank Pasquale, have pointed out to me that there are thoughtful, coherent conceptions of neoliberalism—see Grewal, Purdy and others here, or Mirowski here. But I worry that popular usage of the term has run away from any clean definition. This is particularly so because no one actually claims to be a neoliberal anymore, so the term is mainly used by its purported opponents.

There is a central strain in Monbiot’s conceptual cocktail that I think is coherent, and that is overreliance on the competitive market model taught in Economics 101. Although this model is not the same thing as Economics 101 itself, as various people have pointed out, it tends to be the single most important lesson of introductory economics. As Paul Samuelson wrote in the first edition of his 1948 textbook, this model—and the result that competitive markets maximize social welfare—is “all that some of our leading citizens remember, 30 years later, of their college course in economics.” The assumption that the competitive market model accurately describes the real world is, I think, one of the major reasons why conservative economic policies have been so persuasive—and why, for example, our pre-Bernie Sanders health care debate was divided between leaving markets alone and fixing markets to make them more competitive (Obamacare). The belief that public policy could be based directly on theoretical principles is also a reason for the turn away from practical economic management discussed by Cohen and DeLong in Concrete Economics.

This pervasive assumption also does not have an accepted name, but I call it “economism,” since it constitutes a worldview (not quite an ideology) based on economic theory.* (Noah Smith calls it, or something similar, “101ism.”) I don’t claim that economism is a better category than neoliberalism, or historically  more important, but I do think that it is more easily isolated in both history and public policy. And it certainly has done its share of damage in justifying deregulatory policies and rationalizing the rise in inequality that followed.

* To be precise, this is one meaning of economism, which already has several—none of which is particularly well known except in certain obscure academic circles.

Hamilton Everywhere, All the Time

Wed, 04/13/2016 - 13:35

By James Kwak

Alexander Hamilton is a big deal these days. Apparently there’s a musical about him—something I only found about when I saw Lin-Manuel Miranda’s Rose Garden video on Twitter (which tells you something about my relationship to popular culture). In their new book (on which more later), Jacob Hacker and Paul Pierson invoke Hamilton to defend their vision of an interventionist government and a mixed economy.  And Stephen Cohen and Brad DeLong have titled their new book Concrete Economics: The Hamilton Approach to Economic Growth and Policy.

I like to think that Simon and I were on the leading edge of this mini-trend when we featured Hamilton in our 2011 Vanity Fair article and in White House Burning. But it’s no surprise that people turn to Hamilton today, when what used to be known as the Tea Party (now simply the Republican Party) dreams of recreating a libertarian founding moment that never existed. Hamilton stood for a (relatively) strong central government, deficit spending, and what would now be called industrial policy, all with the intent of fostering economic growth.

Concrete Economics is less about Hamilton’s particular approach to economic policy than about an overall attitude for which Hamilton is cited as an exemplar: in short, a pragmatic rather than ideological approach to policymaking, one which used government resources and power to pursue specific goals. The best contrast is between the Republican Party c. 1955—which used state power to suburbanize the country, build up the military, and spin off the technologies that turbocharged productivity growth—and the Republican Party of the past 35 years, which (with a considerable amount of Democratic abetting) slashed government spending and deregulated the financial sector for ideological reasons (free flow of capital, free markets, blah, blah, blah).

If Cohen and DeLong are right about the broad course of American economic history, then the big question is why we had this major transformation in overall attitudes toward policymaking. There are two major ways to think about this historical shift. One is to look at the ideas involved: maybe what happened is that people suddenly started believing “neoclassical” economic theories about the benefits of free markets (particularly for capital) and small government, and then acted on those beliefs. The other is to look at who benefited: financial institutions, financial professionals, corporate executives, and rich people generally all stood to gain from lower taxes, smaller government, and financial deregulation. Superstructure, base.

The truth, I think, is that both stories are true. We have had a major ideological shift since the 1950s, from the idea that the government can play a positive role in influencing economic development, to the idea that government is evil and incompetent and that free markets can solve all problems. As Dick Armey (remember him?) said, “The market is rational and the government is dumb.” This naive belief that the simple results people remember from Economics 101—supply and demand interact to maximize social welfare—actually apply in the real world has produced decades of bad policies, and continues to be spouted by the Paul Ryan wing of the Republican Party. But the rise of that ideology did not happen by itself; it was organized and funded by wealthy businessmen, large corporations, and conservative foundations, as amply documented by historians such as Kim Phillips-Fein, Gareth Stedman Jones, Elizabeth Fones-Wolf, and many others.

To turn the tide, it won’t be enough simply to tell people that they should be more practical and less ideological. Powerful interest groups have to decide that they would be better served by different policies based on different ideas. There are glimmers of hope that that might happen in particular areas; for example, the corporate sector may be very slowly coming around to the idea that destroying the planet may not be so good for future profits. But on the whole, the very wealthy—who basically control the American political system—seem to be happy with the way things are. Which is one indication that things are unlikely to change anytime soon.