Friday, January 2, 2015

Piketty Declines Legion of Honor

Thomas Piketty, author of Capital in the Twenty-First Century, has declined a nomination to the French Legion of Honor. The Legion of Honor is a big deal. It's described as an "order," and was founded in 1802 by Napoleon. The structure of the order is rather militaristic (no surprise). It has ranks, medals, and elaborate rules. This from Wikipedia:
Wearing the decoration of the Légion d'honneur without having the right to do so is an offence. Wearing the ribbon or rosette of a foreign order is prohibited if that ribbon is mainly red, like the ribbon of the Légion. French military members in uniform must salute other military members in uniform wearing the medal, whatever the Légion d'honneur rank and the military rank of the bearer. This is not mandatory with the ribbon.
Here's the list of recipients. I scanned it, and it's quite a cross-section of humanity, including Desmond Tutu, Kristin Scott-Thomas, Ravi Shankar, Louis Pasteur, Sharon Stone, Alexis de Tocqueville, Paul McCartney and Jerry Lewis (a bastion of French culture, in case you didn't know). Who would not want to keep company with Ravi Shankar, Desmond Tutu, and Kristin Scott-Thomas? Piketty, apparently. And it's not like he didn't have good contemporary company. His fellow nominees were Jean Tirole (Economics Nobel Prize 2014) and Patrick Modiano (Nobel Prize for Literature 2014). So, what's Piketty's problem? He's quoted (in translation) as follows:
I just found out that I had been nominated for the Legion of Honor. I refuse this nomination because I do not think it is the role of a government to decide who is honorable. It would do better to devote itself to reviving growth in France and in Europe.
Now I'm really confused. I got the idea - perhaps mistaken - from Capital in the Twenty-First Century, that Piketty viewed unfettered market outcomes as seriously deficient. Society could be destroyed as the result of capitalism run wild, or some such. Is he now telling us that there is a market for honor? If you want a reward for your good deeds, you should write your own 700-page tome, and see if it meets the market test, perhaps? Maybe he has found religion, and thinks that honor is found in heaven. Inquiring minds want to know.

My suggestion is that we run an experiment. Anyone with any power to decide such things should nominate Piketty for their award, and we'll see which ones he takes, if any. I would be happy to put him up for the Hillcrest Neighborhood Holiday Season Light Display Award. I'll have to convince the neighborhood association to waive the usual rules, but that could fly.

Thursday, January 1, 2015

Historical Fiction

Paul Krugman has an interesting perspective on the history of economic thought. According to him, the Volcker disinflation played out exactly as Keynesians thought it would. Cutting to the chase:
So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.
That's not the way I remember it. But maybe my memory is bad, so I thought I would do some research to check out these claims. We'll set the wayback machine for 1978, when Arthur Okun wrote "Efficient Disinflationary Policies." Okun set out to obtain a good measure of the costs of disinflation, and consulted what he thought were some of the best minds in the profession at the time:
You should recognize at least some of the names of these estimators-of-Phillips-curves. Some of them are still big shots. What was the conclusion from that work?
So, consider only the 1981-82 recession which, according to the NBER, runs from a peak in 1981Q3 to a trough in 1982Q4. Over that period, the drop in the inflation rate (measured by the quarterly PCE deflator) was about 3.6 percentage points, or an average of 2.9 points per year. By Okun's estimate, this should have resulted in a drop in GDP of 29% per year. What actually occurred was a drop in real GDP during the recession of about 2.5%, or an average of about 2% per year. If Okun had been feeling pretty smug about all this by the mid-1980s [not actually feasible, as he had died by then (see the first comment below), but imagine he were still alive], I'm not sure why, given that he was off by an order of magnitude.
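The arithmetic behind that order-of-magnitude claim is worth making explicit. A minimal sketch, using only the numbers quoted above (the 10%-of-GDP-per-point cost is the figure Okun's estimate implies, given that 2.9 points per year of disinflation maps to a predicted 29% annual drop in GDP):

```python
# Back-of-the-envelope check of Okun's implied disinflation cost,
# using the numbers quoted in the text (not official estimates).
okun_cost_per_point = 10.0      # % of a year's GDP per point of disinflation
disinflation_per_year = 2.9     # average annual drop in PCE inflation, 1981Q3-1982Q4
actual_gdp_drop_per_year = 2.0  # average annual real GDP decline in the recession

predicted_gdp_drop = okun_cost_per_point * disinflation_per_year  # about 29% per year
ratio = predicted_gdp_drop / actual_gdp_drop_per_year

# The prediction overshoots the actual outcome by roughly an order of magnitude.
print(round(predicted_gdp_drop, 1), round(ratio, 1))
```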

There's more to Okun's "Efficient Disinflationary Policies." After coming to the conclusion that using conventional policy to address the inflation problem would be too costly, he suggested how disinflation could be achieved more efficiently, by using "the direct influence of public policy on costs." What he appeared to have in mind were subsidies to firms - subsidies lower costs, which lower prices, so there would be less inflation, he reasoned. Obviously Okun's idea was not highly influential. Around the same time (this is from 1977), you would find James Tobin in roughly the same ballpark. Like Okun, Tobin ponders the costs of disinflation, and judges that they are just too high if conventional means are used. He says, "the way out, the only way out, is incomes policy," and concludes:
For the uninitiated, "incomes policy" means wage and price controls. Tobin is saying that the cost of using wage and price controls to control inflation is small, with the welfare distortions measured by "Harberger triangles," but the cost of using conventional policy to control inflation is a potentially very large output gap. As with Okun's policy recommendation, Tobin's went by the wayside.

So, now we know how some of the established macroeconomic big shots of 1978 were thinking. What about the upstarts? In early 1981 (the dates on the working papers are May 1981) Tom Sargent wrote a couple of papers which address directly the costs of disinflation, and how we should think about the policy problem. These are "The Ends of Four Big Inflations," and "Stopping Moderate Inflations: The Methods of Poincaré and Thatcher." In "Four Big Inflations," he tells us about the consensus view of disinflation at the time - i.e. the view of Okun and Tobin as outlined above:
Further, he supports that in a footnote:
Then, Sargent describes an alternative view:
While much of Okun's and Tobin's papers referenced above read like a foreign language now, I think most people will find that paragraph of Sargent's very familiar. Today, I don't think many people would find it objectionable, but at the time this sort of thinking was getting serious pushback from some big shots. Sargent quotes one of them in a footnote:
In contrast to Okun, who is quite pleased to supply an estimate of the cost of disinflation, in terms of lost GDP, Sargent doesn't think he has a serious model that he can take off the shelf to address the quantitative implications of disinflation. But, his two papers provide nice historical examples of disinflations (in some cases the ends of hyperinflations) that are quick, and that seem to conform to his ideas.

So, who were the winners and losers from this episode? Probably that's the wrong question to be asking. When science progresses, we all win. Science does not progress when individual scientists see it as a loss if their ideas are superseded, and try to prevent that loss from happening by denigrating new ideas. Careful re-examination of the Volcker disinflation might lead to the conclusion that Volcker should have done something different, but I think the consensus view among economists is that the Volcker disinflation was necessary, and whatever output was lost in the process was outweighed by the benefits of low inflation that we have enjoyed for the last 30 years. The idea that we should control inflation through tax/subsidy policy or wage/price controls gets no traction in the 21st century. It's now widely accepted that inflation control is the province of central banks, though we owe that principally to the Old Monetarists rather than the 1970s macro revolutionaries. But those revolutionaries gave us much of the framework we now use for addressing monetary policy problems: (i) we want to think in terms of policy regimes rather than policy actions; (ii) commitment is important; (iii) well-understood policy rules are important; (iv) people are forward-looking. All of those ideas are now common currency in central banking circles. People who succeeded as macroeconomists in the post-1980 world absorbed the ideas of Sargent and his contemporaries and worked with them - you can certainly see that in Woodford's work, for example.

Are these long-gone controversies which have no relevance for current policy issues? Certainly not. In the messy world of monetary policy-making, we can still find people who want to think about monetary policy in terms of actions rather than state-contingent policy rules, or who want to base policy decisions on things akin to an estimate of the slope of the Phillips curve. People with the resolve of Paul Volcker are unusual, and institutional commitment can be difficult when there are many people with disparate ideas weighing in on a policy decision.

One way to think about the history of macroeconomic thought is as a series of battles. These people thought x, those people thought y. There was a fight between x and y, and y won. Then z came along, tried to beat up y, but y kicked z's butt, etc. That may sound very exciting - people find stories about personal animosity intriguing. But the reality is perhaps not so exciting. We have a set of ideas that we currently find useful, some of which we can trace directly to x, y, and z. Other ideas have morphed in various ways through the work of many people, and through public discussion, so that it's hard to give credit to some unique originator. And we're all "rational expectationists" now. Perhaps it's best to remember Samuelson for the positive things he gave us - the Foundations of Economic Analysis, the overlapping generations model, for example - rather than his carping about "rational expectationists." As well, if you want to know what people really think, look at what they do, not what they say. For example, when Paul Krugman wants a
... core insight that changed ...[his] ... mind about monetary policy in a liquidity trap (and is useful for fiscal policy too),
what does he do? He takes a Lucas cash-in-advance model off the shelf - a model with forward-looking optimizing economic agents having rational expectations - and uses it to learn something. He's a rational expectationist too!

Sunday, December 28, 2014

Where's the Multiplier?

Robert Waldmann thinks there are "non-Keynesians" who are excessively dismissive of the Keynesian multiplier:
Various non-Keynesians have argued that the pattern of public spending and GDP in the USA during the current recovery (that is since June 2009) undermines the Keynesian hypothesis that the Government spending multiplier is positive. In particular John Cochrane and Tyler Cowen argue that, if Keynesians were right, sequestration should have caused at least a decline in GDP growth rates.
Waldmann shows us a chart, calculates a correlation coefficient (complete with standard error and t-statistic), showing that, for the last 19 quarters, quarterly growth in real government spending and growth in real GDP are positively correlated, and concludes:
...19 data points can’t prove anything, but the few data support the Keynesian hypothesis about as strongly as could be imagined. I am impressed by the unreliability of casual empiricism conducted by ideologues. Some people look at this period and see the opposite of what I see. Even now, I am shocked that economists didn’t bother to look up the data on FRED before making nonsensical claims of fact.
First, I think Christian Zimmermann will be very pleased to learn that FRED has become so user-friendly that failure to consult it has become proof that one is a dim-wit. In fact, my cat was using FRED the other day to sort out some empirical facts. Apparently she's been getting tips from FRED Blog. Second, I'll go Waldmann one better, and include two more observations, so that I can include the whole post-recession time series. Then, the scatter plot of output growth vs. growth in government spending looks like this:
From Waldmann's standpoint, this is even better - a correlation coefficient of 0.50 to his 0.34.

I can see why Waldmann is worried that 19 observations might be too skimpy, though. If you include all the data from 2008Q1 to 2014Q3, you get this:
That doesn't look so great. In that scatter plot, the correlation coefficient is -0.12 which, by Waldmann's criteria, would be a win for the non-Keynesians. But, my cat is looking over my shoulder and telling me "not so fast." My cat, in addition to knowing FRED, also took an intro-to-macro course, and knows all about the Keynesian Cross. She's a big Paul Krugman fan too.

So, as I learned from Dick Lipsey in 1975, and my cat learned last fall, Keynesian Cross is

(1) C = A + cY,

(2) Y = C + I + G,

where C is consumption, Y is output, I is investment, and G is government expenditures, with 0 < c < 1 and A > 0. Y and C are endogenous, and A, I, and G are exogenous. We can solve for C and Y as follows (my cat checked my algebra):

(3) C = [A + c(I + G)]/(1-c)

(4) Y = (A + I + G)/(1-c)
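For the skeptical (or the feline), here's a minimal numerical check of (1)-(4); the parameter values are arbitrary, chosen only for illustration:

```python
# Keynesian Cross: evaluate the closed forms (3)-(4) and verify that
# they satisfy the structural equations (1)-(2).
def keynesian_cross(A, c, I, G):
    """Return (C, Y) from the closed-form solutions (3) and (4)."""
    Y = (A + I + G) / (1.0 - c)         # equation (4)
    C = (A + c * (I + G)) / (1.0 - c)   # equation (3)
    return C, Y

A, c, I, G = 1.0, 0.5, 1.0, 1.0  # arbitrary illustrative values
C, Y = keynesian_cross(A, c, I, G)

assert abs(C - (A + c * Y)) < 1e-12   # (1) holds
assert abs(Y - (C + I + G)) < 1e-12   # (2) holds

multiplier = 1.0 / (1.0 - c)  # dY/dG
print(Y, multiplier)  # 6.0 2.0
```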

Here, 1/(1-c) is the multiplier. Krugman, with whose views Waldmann appears to be quite sympathetic, has told us that IS/LM is truth, truth is IS/LM - that is all we know on earth, and all we need to know. Better than that, since late 2008, when we entered the liquidity trap and the LM curve became flat, Keynesian Cross has become our even simpler truth. My cat agrees that (3) and (4) are rich with implications and policy conclusions. She has also pointed out that it's not quite fair to be drawing conclusions from the last chart above. Suppose, says my cat, that A and I are random variables, that the government sees the realizations of A and I before choosing G, and that the government has a target level of output, Y*. Then, Y = Y* and Y and G would be uncorrelated. Alternatively, suppose that the government is constrained, so that it can only close a fraction of the output gap, under any circumstances. Then, we could observe something like the last chart. There was a big demand shock in 2008, the government didn't do enough, and so we see government spending going up when output is going down during 2008.
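My cat's point about endogenous policy can be illustrated with a small simulation. This is a sketch under made-up assumptions: the shock distribution, the output target, and the fraction phi of the gap the government can close are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
c, Ystar, phi = 0.5, 10.0, 0.4  # MPC, output target, fraction of the gap closed

# Demand shocks: A + I is random, observed by the government before it chooses G.
demand = rng.normal(2.0, 0.5, size=200)

# Full stabilization would require G = (1-c)*Y* - (A + I); the constrained
# government only moves G a fraction phi of the way there.
G = phi * ((1.0 - c) * Ystar - demand)
Y = (demand + G) / (1.0 - c)  # equation (4)

corr = np.corrcoef(G, Y)[0, 1]
print(corr)  # strongly negative: countercyclical G rises when output falls
```

So even with a positive multiplier operating, endogenous (but constrained) policy generates a negative correlation between G and Y, as in the 2008 portion of the scatter plot.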

So, my cat reasons, maybe Waldmann has the right idea. After the end of the recession in mid-2009, the demand shock is long gone, and the government has been behaving in a random fashion. Then, if we see a positive correlation between output growth and growth in government spending post-recession, that's consistent with Keynesian Cross. My cat has an inquiring mind though, and she's thinking about equation (3). She understands that the Keynesian multiplier works through consumption, and that (3) implies that we should see a positive correlation between consumption growth and government expenditure growth over the period Waldmann looks at, if Keynesian Cross actually describes the data. No such luck though:
I showed my cat how to use Matlab to compute the correlation coefficient for the data in the chart, and she gets -0.07. Now she's pissed, and is hiding under the sofa.
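It's easy to verify what sign the model predicts. With A and I held fixed, equation (3) implies that changes in C are proportional to changes in G, so the correlation between the two should be essentially perfect. A quick sketch, with arbitrary parameter values and a made-up random path for G:

```python
import numpy as np

rng = np.random.default_rng(1)
A, c, I = 1.0, 0.5, 1.0  # arbitrary illustrative values

# Government spending behaving "randomly", as in the post-recession story above.
G = 2.0 + rng.normal(0.0, 0.2, size=100).cumsum()

# Equation (3): C = [A + c(I + G)]/(1 - c)
C = (A + c * (I + G)) / (1.0 - c)

# Changes in C are c/(1-c) times changes in G, so this is essentially 1.0 -
# a far cry from the -0.07 in the actual data.
corr = np.corrcoef(np.diff(C), np.diff(G))[0, 1]
print(corr)
```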

Obviously, my cat has a lot to learn. I'll have her read Hal Cole's post on the aggregate effects of government spending, for starters. It's also important that my cat understand that there are few economic policy problems that can be solved in a blog post. We're very lucky if looking at raw correlations helps us to discriminate among economic models, or to draw policy conclusions. Indeed, normal economics tells us that there are various mechanisms by which increases in government spending can cause aggregate output to increase. Government spending could be totally unproductive, but lead to an increase in output because of a wealth effect on labor supply. Government spending could be complementary to private consumption, and if the complementarities are large enough, there could be large multipliers. There could be multiple equilibria. But, when we start to think in terms of normal economics, it becomes clear that the effects of government spending depend on what the government spends on, how the spending is financed, etc. And we start asking more questions - interesting ones. Indeed, I think my cat would quickly go back to watching squirrels, if (1)-(4) were all the economics she had to think about.

Sunday, December 21, 2014

Inflation at the Zero Lower Bound

I'm going to try to clear up some issues in the blog discussion among Ambrose Evans-Pritchard, Paul Krugman, and Simon Wren-Lewis, among others, about zero-lower-bound monetary policy. Rather than parse the thoughts of others, I'll start from scratch, and hopefully you'll be less confused.

I'll focus narrowly on the issue of what determines inflation at the zero lower bound or, as Evans-Pritchard states:
The dispute is over whether central banks can generate inflation even when interest rates are zero.
As it turns out, David Andolfatto and I have a paper (shameless advertising) in which we construct a model that can address the question. And that model is actually a close cousin of the Lucas cash-in-advance framework that Krugman uses to think about the problem. There is a continuum of households, and each one maximizes
We'll simplify things by assuming that there are only two assets, money and one-period government bonds, and no unsecured credit. We can be more explicit about how assets are used in transactions, but to make a long story short, think like Lucas and Stokey. There are two kinds of consumption goods. The first can be purchased only with money, and the second can be purchased with money or government bonds. We can think of this as standing in for intermediated transactions. That is, people don't literally make transactions with government bonds, but with the liabilities of financial intermediaries that hold government bonds as assets. We can also extend this to more elaborate economies in which government debt serves as collateral, to support credit and intermediation, but allowing government bonds to be used directly in transactions gets at the general idea.

So, suppose a deterministic world in which the economy is stationary, and look for a stationary equilibrium in which real quantities are constant forever. Further, restrict attention to an equilibrium in which the nominal interest rate is zero. Let m and b denote, respectively, the quantities of money and government bonds, in real terms. We'll assume that the government has access to lump sum taxes and transfers. Starting the economy up at the first date, the first-period consolidated government budget constraint is
where T is the real transfer to the private sector at the first date, i.e. the government (the consolidated government - at this stage we won't differentiate the tasks of the central bank and the fiscal authority) issues liabilities and then rebates the proceeds, lump sum, to the private sector. Then, in each succeeding period, since the nominal interest rate is zero, the consolidated government budget constraint is
where T* is the real transfer at each succeeding date, and i is the inflation rate, which is constant for all time.

A zero nominal interest rate will imply that consumption of the two goods is the same, so per-household consumption is c = y, where y is output. First, suppose that government bonds are not scarce. What this means is that, at the margin, government bonds are used as a store of wealth, so the usual Euler equation applies:
Therefore, i = β - 1, i.e. there is deflation at the rate of time preference. Further, output y solves
This is basically a Friedman rule equilibrium, and we could use results in Ricardo Lagos's work, for example, to show that there exists a wide array of paths for the consolidated government debt that support the Friedman rule equilibrium. An extra condition we require here is that this economy be sufficiently monetized, i.e.
so that the central bank's balance sheet is sufficiently large, and there is enough money to finance consumption of the good that must be purchased with money.

The Friedman rule equilibrium is Ricardian. At the margin, government debt is irrelevant. As well, there is a liquidity trap - open market operations, i.e. swaps of money for government bonds by the central bank, are irrelevant. Thus, the central bank cannot create more inflation. But neither can the fiscal authority. What about helicopter drops? Surely the fiscal authority can issue nominal bonds at a higher rate, and the central bank could purchase them all? But, as long as government bonds are not scarce, equation (3) must hold at the zero lower bound, which determines the rate of inflation. Basically, this is the curse of Irving Fisher. Under these conditions, it is impossible to have higher inflation at the zero lower bound. Helicopter drops may indeed raise the rate of inflation, but this must necessarily imply a departure from the zero lower bound.
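The "curse of Irving Fisher" is just the Fisher relation evaluated at a zero nominal rate: the stationary Euler equation pins the real rate at 1/β - 1, so R = 0 forces gross inflation to equal β. A numerical illustration (the value of β is an assumption, chosen for the example):

```python
# Fisher relation at the zero lower bound: R = 0 pins inflation at beta - 1.
beta = 0.96  # assumed discount factor

# Stationary Euler equation: 1 + r = 1/beta  =>  real interest rate r
r = 1.0 / beta - 1.0

# Fisher relation: (1 + R) = (1 + r)(1 + i); at the ZLB, R = 0
i = 1.0 / (1.0 + r) - 1.0  # equals beta - 1

print(round(i, 4))  # -0.04: steady deflation at the rate of time preference
```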

Note that, in the Friedman rule equilibrium in which government debt is not scarce, there is sustained deflation at the zero lower bound, which doesn't seem to fit any observed zero-lower-bound experience. Average inflation in Japan in the last 20 years has been about zero, and inflation has varied mostly between 1% and 3% in the U.S. for the last 6 years. But if government debt is scarce in equilibrium, we need not have deflation at the zero lower bound in our model. What scarce government debt means is that the entire stock of government bonds is used in transactions, which implies, in general, that the nominal interest rate is determined by
where R(t) is the nominal interest rate. So now there is a liquidity premium on government debt, which is determined by an inefficiency wedge in the market for goods that trade for money and government bonds. Then, in a zero-lower-bound equilibrium, the inflation rate is determined by
Note as well, that in this equilibrium, y = m + b, so the total quantity of consolidated government debt constrains output. Clearly, this equilibrium is non-Ricardian - government debt matters in an obvious way. But, there's still a liquidity trap. If the central bank swaps money for bonds, this is irrelevant. The central bank can't change the rate of inflation through asset swaps.

But, when government debt is scarce, fiscal policy can determine the inflation rate, as the fiscal authority can vary the rate of growth of total consolidated government liabilities (which determines the inflation rate), and this in turn will affect the real quantity of consolidated government liabilities held in the private sector, and the liquidity premium on government debt. To explore this in more detail, suppose that the utility function is constant relative risk aversion, with CRRA = a > 0. Then, equation (7) gives us a relationship between output and the inflation rate:
Then, since y = m + b, we can substitute in the consolidated government's budget constraints to obtain
In (9) and (10), s = 1/(1+i), 1 - s is the effective tax rate on consolidated government debt, and T* is the revenue from the inflation tax, where the inflation tax applies to the entire outstanding nominal consolidated government debt.

So, if the fiscal authority chooses an inflation rate i > β - 1, then it expands the government debt at the rate i per period, the central bank buys enough of that debt each period that the nominal interest rate is zero forever, and the government collects enough revenue from inflation every year to fund a real transfer T* which, as a function of s, is shown in the next chart.
Note that this is essentially a Laffer curve. Infinite inflation (s = 0) implies zero revenue from the inflation tax, as does zero inflation (s = 1), and transfers are negative when the inflation rate is negative (s > 1). The higher the rate of inflation, the lower is the real quantity of consolidated government debt, output and consumption - more inflation reduces welfare. The central bank cannot control inflation, but the fiscal authority can.
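The shape of that Laffer curve can be sketched with a stand-in revenue function. To be clear, this is not equation (10) from the paper: T*(s) = (1 - s)D(s), with D(s) = s^θ, is a hypothetical form invented to stand in for the real demand for consolidated government debt, chosen only to reproduce the stated properties (zero revenue at s = 0 and s = 1, negative transfers for s > 1):

```python
import numpy as np

def inflation_tax_revenue(s, theta=2.0):
    """Hypothetical inflation-tax revenue T*(s) = (1 - s) * D(s), with
    D(s) = s**theta standing in for real debt demand. Illustrative only."""
    return (1.0 - s) * s**theta

s = np.linspace(0.0, 1.2, 121)  # s = 1/(1+i); s > 1 means deflation
T = inflation_tax_revenue(s)

assert abs(T[0]) < 1e-12    # infinite inflation (s = 0): no revenue
assert abs(T[100]) < 1e-12  # zero inflation (s = 1): no revenue
assert T[-1] < 0.0          # deflation (s > 1): negative transfers

# Revenue peaks at an interior s, as on any Laffer curve.
print(s[T.argmax()])
```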

Therefore, in this model, it is indeed correct to state that, at the zero lower bound, the central bank has no control over the inflation rate. The fiscal authority may be able to control inflation at the zero lower bound, but only by tightening liquidity constraints and increasing the liquidity premium on government debt. Of course, in this model the government debt all matures in one period. What about quantitative easing? QE may indeed matter, particularly when government debt is scarce. In a couple of papers (this one and this one) I explore how QE might matter in the context of binding collateral constraints. First, if long-maturity government debt is worse collateral than is short-maturity debt, then central bank purchases of long-maturity government debt matter. As well, if the central bank purchases private assets, this can circumvent suboptimal fiscal policy that is excessively restricting the supply of government debt. But in both cases this works in perhaps unexpected ways. In both cases, unconventional asset purchases by the central bank act to reduce inflation.

Wednesday, December 3, 2014

Economics: The View From Sociology

Some people (Noah Smith, Paul Krugman) have recently written about The Superiority of Economists by Fourcade et al. This is a paper written by two sociologists and an economist, who give us a sociological perspective on the economics profession. To get my bearings I found the most fitting definition:
Sociology: the scientific analysis of a social institution as a functioning whole and as it relates to the rest of society.
So, that seems very promising. Some scientists, who specialize in the analysis of institutions and the role those institutions play in society, are going to figure out the economics profession.

Here's what I'm thinking. I've never taken a sociology course, but being a social scientist maybe I can guess how a sociologist might think about the institution of economics. What is the social role of the economics profession? First, human beings have a need for pure scientific knowledge - we just want to know what is going on. How do economic systems work? Why are some countries and individuals so poor, and why are some so rich? Why do prices of goods, services, and assets move around over time? Second, human beings have a need for applied science. How do we take what we know about economics and use that knowledge to make human beings collectively better off? Third, we might be interested in where economics came from. Who were the first economists, and how did they put together the seeds of economic knowledge? How is the economics profession organized? How is economics taught? Fourth, what makes economics different from other disciplines? If there are large differences in economics, are the human beings who do economics somehow different, for example do they self-select as economists due to particular skills they possess? Is economics different by chance, or is there something about the nature of things that economists study that makes the field different? Finally, how does the organizational structure of the economics profession help it to perform its key social role? Are there ways we could improve on this organizational structure? This could be pretty interesting, and I'm pleased, in principle, that there are scientists who care about these things, and are willing to help out.

First, I'll tell you some of what I know about the economics profession. Economics is clearly successful - in economic terms. Economics is a high enrollment major in most universities, one can make a decent living selling economics textbooks to undergraduates (as I can attest), an economics undergrad major pays off handsomely, and PhD economists are very well-paid - as academics, in the financial sector, and in government. Economists are also influential. They are called on to run key international institutions like the IMF and World Bank, they more often than not serve as the chief officers in central banks, and they hold important positions in government. Further - and this must be unique among scientific pursuits - you can become extremely rich as a specialist in bad-mouthing your fellow economists.

Economics is very different from other academic pursuits, as any economist who has had to educate a Dean (of Liberal Arts, Social Sciences, Business, whatever) can tell you. In most academic fields, jobs are scarce, and mobility is low. Not so in economics. It is typically hard work to convince a Dean that one needs to make 8 job offers to fresh PhDs in the hope of getting one or two acceptances, that senior job candidates may be even harder to get, and that departures from your economics department need not mean that good people are fleeing a bad department. Salaries are always an issue. Basically, you need to know some economics (though not much) to understand why the economists are paid much more than the philosophers. Economists have a well-organized fresh-PhD job market that operates under clear rules, and performs the function of matching young economists with employers. Economists are social and love to argue. If you are uninitiated and happen to walk into an economics seminar, you might think you should call the police. Don't worry, it's OK.

Fourcade et al. get some of the facts right, but I came away puzzled. Some data is marshalled, but I wouldn't call this paper science, and it's unclear what we are supposed to learn. The first argument the authors want to make is that economics is "insular." By this they mean that economists don't pay much attention to the other social sciences. The evidence for this is citations - apparently the flow of citations is smaller from economics to the rest of the social sciences than the other way around. Whether this is a good way to measure interaction is not clear. There is a very active area of research in economics - behavioral economics - that uses developments in psychology extensively. There is extensive interaction between economists and political scientists - especially those interested in game theory. But I don't think I have ever encountered a sociologist in an economics seminar, or at a conference. However, suppose that economists totally ignored the other social sciences. Could we then conclude that this is suboptimal? Of course not. Maybe what is going on in the rest of the social sciences is actually of no use to economists. Maybe it is of some use to us. Certainly Fourcade et al. don't give us any specific examples of things we're ignoring that might help us out.

And economists are far from insular, especially if we look beyond the social sciences. Economics is a big tent. To gain admission to an economics PhD program requires some background in mathematics and statistics typically, but we don't necessarily require an undergraduate economics degree. People come into economics from history, engineering, math, psychology, and many other fields. As well, an undergraduate degree in economics is an excellent stepping stone to other things - professional degrees in business and law, or graduate degrees in other social sciences. Economic science did not come out of nowhere. Indeed, it often went by "Political Economy" in the early days, and sometimes still does. Most of our technical tools came from mathematicians and statisticians, though econometricians have developed sophisticated statistical tools designed specifically to deal with inference problems specific to economics, and macroeconomists took the dynamic optimization methods invented by mathematicians and engineers and adapted them to general equilibrium economic problems.

The authors of "The Superiority of Economists" see us as hierarchical, with a power elite that controls the profession. PhD programs are indistinguishable, and publication and recruiting are regimented. Seems more like the army than an institution that is supposed to foster economic science. Well, baloney. People of course recognize a quality ranking in academic institutions, journals, and individual economists, but I don't think that's much different from what you see in other fields. Powerful people can dominate particular subfields, but good ideas win out ultimately, I think. In the 1970s, there was a revolution in macroeconomics. That did not happen because the research of the people involved was supported by the Ivy League - far from it. But modern macro research found supporters in lesser-known places like Carnegie Mellon University, the University of Minnesota, and the University of Rochester. People like Bob Lucas and Ed Prescott got their papers published in good places - eventually - and then got their share of Nobel Prizes in Economics. The economics profession, though it could do better in attracting women, is very heterogeneous. I have no hard evidence for this, but my impression is that the fraction of foreigners teaching economics in American universities is among the highest across academic disciplines. I don't think you would see that in a rigid profession.

Ultimately, Fourcade et al. think that our biggest problem is our self-regard. Of course, people with high self-regard are very visible, by definition, so outsiders are bound to get a distorted picture. We're not all Larry Summers clones. But if we do, on average, have a high level of self-regard, maybe that's just defensive. Economists typically get little sympathy from any direction. In universities, people in the humanities hate us, the other social scientists (like Fourcade et al.) think we're assholes, and if we have to live in business schools we're thought to be impractical. Natural scientists seem to think we're pretending to be physicists. In the St. Louis Fed, where I currently reside, I think the non-economists just think we're weird. Oh well. It's a dirty job. Someone has to do it.

Monday, November 24, 2014


So you've all forgotten who Thomas Piketty is, right? Recall that he is the author of the 685-page tome, Capital in the Twenty-First Century, a bestseller of the summer of 2014, but perhaps also the least-read bestseller of the summer of 2014. I was determined, however, not to be like the mass of lazy readers who bought Capital. I have slogged on, through boredom, puzzlement, and occasional outrage, and am proud to say I have reached the end. Free at last! Hopefully you have indeed forgotten Piketty, and are not so sick of him you could scream. Perhaps you're even ready for a Piketty revival.

Capital is about the distribution of income and wealth. For the most part, this is a distillation of Piketty's published academic work, which includes the collection and analysis of a large quantity of historical data on income and wealth distribution in a number of countries of the world. Of course, data cannot speak for itself - we need theory to organize how we think about the data, and Piketty indeed has a theory, and uses that theory and the data to arrive at predictions about the future. He also comes to some policy conclusions.

Here's the theory. Piketty starts with the First Fundamental Law of Capitalism, otherwise known as the definition of capital's share in national income, or

(1) a = r(K/Y),

where a is the capital share, r is the real rate of return on capital, K is the capital stock, and Y is national income. Note that, when we calculate national income we deduct depreciation of capital from GDP. That will prove to be important. The Second Fundamental Law of Capitalism states what has to be true in a steady state in which K/Y is constant:

(2) K/Y = s/g,

where s is the savings rate, and g is the growth rate of Y. So where did that come from? If k is the time derivative of K, and y is the time derivative of Y, then in a steady state in which K/Y is constant,

(3) k/K = y/Y.

Then, equation (3) gives

(4) K/Y = k/y = (k/Y)/(y/Y) = s/g,

or equation (2). It's important to note that, since Y is national income (i.e. output net of depreciation), the savings rate is also defined as net of depreciation.
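The convergence of K/Y to s/g is easy to check numerically. Here is a minimal simulation of the accumulation process; the saving rate, growth rate, and initial conditions are illustrative choices of mine, not Piketty's numbers:

```python
# Numerical check of the Second Fundamental Law: K/Y converges to s/g.
# Here s is the NET saving rate and Y is NET national income, so net
# investment each period is s*Y. Parameter values are illustrative.
s = 0.10   # net saving rate
g = 0.02   # growth rate of national income
K = 2.0    # initial capital stock (arbitrary)
Y = 1.0    # initial national income (arbitrary)

for _ in range(2000):
    K += s * Y        # capital accumulates out of net saving
    Y *= 1 + g        # income grows at rate g

print(round(K / Y, 4))  # -> 5.0, which equals s/g = 0.10/0.02
```

Whatever the initial capital stock, the ratio K/Y is squeezed toward s/g at rate 1/(1+g) per period, which is why the steady state is the natural object to study here.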

So, thus far, we don't have a theory, only two equations, (1) and (2). The first is a definition, and the second has to hold if the capital/output ratio is constant over time. Typically, in the types of growth models we write down, there are good reasons to look at the characteristics of steady states. That is, we feel a need to justify focusing on the steady state by arguing that the steady state is something the model will converge to in the long run. Of course, Piketty is shooting for a broad audience here, so he doesn't want to supply the details, for fear of scaring people away.

Proceeding, (1) and (2) imply

(5) a = r(s/g)

in the steady state. If we assume that the net savings rate s is constant, then if r/g rises, a must rise as well. This then constitutes a theory. Something is assumed constant, which implies that, if this happens, then that must happen. But what does this have to do with the distribution of income and wealth? Piketty argues as follows:

(i) Historically, r > g typically holds in the data.
(ii) There are good reasons to think that, in the 21st century, g will fall, and r/g will rise.
(iii) Capital income is more concentrated among high-income earners than is labor income.

Conclusion: Given (5) and (i)-(iii), we should expect a to rise in the 21st century, which will lead to an increasing concentration of income at the high end. But why should we care? Piketty argues that this will ultimately lead to social unrest and instability, as the poor become increasingly offended by the filthy rich, to the point where they just won't take it any more. Thus, like Marx, Piketty thinks that capitalism is inherently unstable. But, while Marx thought that capitalism would destroy itself, as a necessary step on the path to communist nirvana, Piketty thinks we should do something to save capitalism before it is too late. Rather than allow the capitalist Beast to destroy itself, we should just tax it into submission. Piketty favors marginal tax rates at the high end in excess of 80%, and a global tax on wealth.

Capital is certainly provocative, and the r > g logic has intuitive appeal, but how do we square Piketty's ideas with the rest of our economic knowledge? One puzzling aspect of Piketty's analysis is his use of net savings rates, and national income instead of GDP. In the typical growth models economists are accustomed to working with, we work with gross quantities and rates - before depreciation. Per Krusell and Tony Smith do a nice job of straightening this out. A key issue is what happens in equation (2) as g goes to zero in the limit. Basically, given what we know about consumption/savings behavior, Piketty's argument that this leads to a large increase in a is questionable.
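The gross-versus-net issue can be seen in one line of algebra. This restatement of the Krusell-Smith argument is my own sketch: write the steady state in gross terms, with a gross saving rate (call it s-tilde, out of GDP) and depreciation rate delta.

```latex
% Steady state in gross terms: \dot{K} = \tilde{s} Y_{gross} - \delta K,
% and a constant K/Y_{gross} requires \dot{K}/K = g, hence
\[
  \frac{K}{Y_{gross}} \;=\; \frac{\tilde{s}}{g+\delta}
  \;\longrightarrow\; \frac{\tilde{s}}{\delta}
  \quad\text{as } g \to 0 .
\]
```

So with a constant gross saving rate, the capital/output ratio stays bounded as growth slows. It is holding the net rate s fixed in K/Y = s/g that produces the explosion, and Krusell and Smith argue that a constant net saving rate as g goes to zero implies implausible behavior - the gross saving rate would have to approach 100 percent of output.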

Further, there is nothing unusual about r > g, in standard economic growth models that have no implications at all for the distribution of income and wealth. For example, take a standard representative-agent neoclassical growth model with inelastic labor supply and a constant relative risk aversion utility function. Then, in a steady state,

(6) r = q + bg,

where q is the subjective discount rate and b is the coefficient of relative risk aversion. So, (6) implies that r > g unless b < 1 and g > q/(1-b) - in particular, if g is small, we must have r > g. But, of course, the type of model we are dealing with is a representative-agent construct. This could be a model with many identical agents, but markets are complete, and income and wealth would be uniformly distributed across the population in equilibrium. So, if we want to write down a model that can give us predictions about the income and wealth distribution, we are going to need heterogeneity. Further, we know that some types of heterogeneity won't work. For example, with idiosyncratic risk, under some conditions the model will essentially be identical to the representative agent model, given complete insurance markets. Thus, it's generally understood that, for standard dynamic growth models to have any hope of replicating the distribution of income and wealth that we observe, these models need to include sufficient heterogeneity and sufficient financial market frictions.
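For the record, here is the standard calculation behind (6), in the continuous-time version of the model with CRRA utility:

```latex
% CRRA utility u(c) = c^{1-b}/(1-b), subjective discount rate q.
% The continuous-time consumption Euler equation is
\[
  \frac{\dot{c}}{c} \;=\; \frac{r - q}{b},
\]
% and on a balanced growth path consumption grows at rate g, so
\[
  g \;=\; \frac{r - q}{b}
  \quad\Longrightarrow\quad
  r \;=\; q + b g ,
\]
% which is equation (6).
```

Impatience (q) and the desire to smooth consumption along a growth path (bg) both push the equilibrium real interest rate above the growth rate, which is why r > g is unremarkable in these models.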

Convenient summaries of incomplete markets models with heterogeneous agents are in this book chapter by Krusell and Smith, and this paper by Heathcote et al. In some configurations, these models can have difficulty in accounting for the very rich and very poor. This may have something to do with financial market participation. In practice, the very poor do not hold stocks, bonds, and mutual fund shares, or even have transactions accounts with banks in some circumstances. As well, access to high-variance, high-expected-return projects, for example entrepreneurial projects, is limited to very high-income individuals. So, to understand the dynamics of the wealth and income distributions, we need to understand the complexities of financial markets, and market participation. That's not what Piketty is up to in Capital.

How might this matter? Well suppose, as Piketty suggests, that g declines during the coming century. Given our understanding of how economic growth works, this would have to come about due to a decline in the rate of technological innovation. But it appears that technological innovation is what produces extremely large incomes and extremely large pots of wealth. To see this, look at who the richest people in America are. For example, the top 20 includes the people who got rich on Microsoft, Facebook, Amazon, and Google. As Piketty points out, the top 1% is also well-represented by high-priced CEOs. If Piketty is right, these people are compensated in a way that is absurdly out of line with their marginal productivities. But, in a competitive world, companies that throw resources away on executive compensation would surely go out of business. Conclusion: The world is not perfectly competitive. Indeed, we have theories where technological innovation produces temporary monopoly profits, and we might imagine that CEOs are in good positions to skim off some of the rents. For these and other reasons, we might imagine that a lower rate of growth, and a lower level of innovation, might lead to less concentration in wealth at the upper end, not more.

Capital is certainly not a completely dispassionate work of science. Piketty seems quite willing to embrace ideas about what is "just" and what is not, and he can be dismissive of his fellow economists. He says:
...the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences.
Not only are economists ignoring the important problems of the world, the American ones are in league with the top 1%:
Among the members of these upper income groups are US academic economists, many of whom believe that the economy of the United States is working fairly well and, in particular, that it rewards talent and merit accurately and precisely. This is a very comprehensible human reaction.
Sales of Capital have now put Piketty himself in the "upper income group." Economists are certainly easy targets, and it didn't hurt Piketty's sales to distance himself from these egghead ivory-tower types. This is a very comprehensible human reaction.

To think about the distribution of income and wealth, to address problems of misallocation and poverty, we need good economic models - ones that capture how people make choices about occupations, interhousehold allocation and bequests, labor supply, and innovation. Economists have certainly constructed good models that incorporate these things, but our knowledge is far from perfect - we need to know more. We need to carefully analyze the important incentive effects of taxation that Piketty either dismisses or sweeps under the rug. Indeed, Piketty would not be the first person who thought of the top 1% as possessing a pot of resources that could be freely redistributed with little or no long-term consequences. It would perhaps be preferable if economists concerned with income distribution were to focus more on poverty than the outrageous incomes and wealth of the top 1%. It is unlikely that pure transfers from rich to poor through the tax system will solve - or efficiently solve - problems of poverty, in the United States or elsewhere. My best guess is that our time would be well spent on thinking about human capital accumulation and education, and how public policy could be reoriented to promoting both in ways that have the highest payoff.

Thursday, November 13, 2014

Neo-Fisherians: Unite and Throw off MV=PY and Your Phillips Curves!

I've noticed a flurry of blog activity on "Neo-Fisherianism," and thought I would contribute my two cents' worth. Noah Smith drew my attention to the fact that Paul Krugman had something to say on the matter, so I looked at his post to see what that's about. The usual misrepresentations and unsubstantiated claims, apparently. Here is the last bit:
And at the highest level we have the neo-Fisherite claim that everything we thought we knew about monetary policy is backwards, that low interest rates actually lead to lower inflation, not higher. At least this stuff is being presented in an even-tempered way.

But it’s still very strange. Nick Rowe has been working very hard to untangle the logic of these arguments, basically trying to figure out how the rabbit got stuffed into the hat; the meta-point here is that all of the papers making such claims involve some odd assumptions that are snuck by readers in a non-transparent way.

And the question is, why? What motivation would you have for inventing complicated models to reject conventional wisdom about monetary policy? The right answer would be, if there is a major empirical puzzle. But you know, there isn’t. The neo-Fisherites are flailing about, trying to find some reason why the inflation they predicted hasn’t come to pass — but the only reason they find this predictive failure so puzzling is because they refuse to accept the simple answer that the Keynesians had it right all along.
Well, at least Krugman gives Neo-Fisherites credit for being even-tempered.

Let's start with the theory. Krugman's claim is that "all of the papers making such claims involve odd assumptions that are snuck by readers in a non-transparent way." Those sneaky guys, throwing up a smoke screen with their odd assumptions and such. Actually, I think Cochrane's blog post on this was pretty clear and helpful, for the uninitiated. I've written about this as well, for example in this piece from last year, and other posts you can find in my archive. More importantly, I have a sequence of published and unpublished papers on this issue, in particular this published paper, this working paper, and this other working paper. That's not all directed at the specific issue at hand - "everything we thought we knew about monetary policy is backwards" - but covers a broader range of issues relating to the financial crisis, conventional monetary policy, and unconventional monetary policy. If this is "flailing about," I'm not sure what we are supposed to be doing. I've taken the trouble to formalize some ideas with mathematics, and have laid out models with explicit assumptions that people can work through at their leisure. These papers have been presented on repeated occasions in seminars and conferences, and are being subjected to the refereeing and editorial process at academic journals, just as is the case for any type of research that we hope will be taken seriously. The work is certainly not out of the blue - it's part of an established research program in monetary and financial economics, which many people have contributed to over the last 40 years or so. Nothing particularly odd or sneaky going on, as far as I know. Indeed, some people who work in that program would be happy to be called Keynesians - the only Good Guys, in Krugman's book.

So, let me tell you about a new paper, with David Andolfatto, which I'm supposed to present at a Carnegie-Rochester-NYU conference later this week (for the short version, see the slides). This paper had two goals. First, we wanted to make some ideas more accessible to people, in a language they might better understand. Some of my work is exposited in terms of Lagos-Wright type models. From my point of view, these are very convenient vehicles. The goal is to be explicit about monetary and financial arrangements, so we can make precise statements about how the economy works, and what monetary policy might be able to do to enhance economic performance. It turns out that Lagos-Wright is a nice laboratory for doing that - it retains some desirable features of the older money/search models, while permitting banking and credit arrangements in convenient ways, and allowing us to say a lot more about policy.

Lagos-Wright models are simple, and once you're accustomed to them, as straightforward to understand as any basic macro model. Remember what it was like when you first saw a neoclassical growth model, or Woodford's cashless model. Pretty strange, right? But people certainly became quickly accustomed to those structures. Same here. You may think it's weird, but for a core group of monetary theorists, it's like brushing your teeth. But important ideas are not model-bound. We should be able to do our thinking in alternative structures. So, one goal of this paper is to explore the ideas in a cash-in-advancey world. This buys us some things, and we lose some other things, but the basic ideas are robust.

The model is structured so that it can produce a safe asset shortage, which I think is important for explaining some features of our recent zero-lower-bound experience in the United States. To do that, we have to take a broad view of how assets are used in the financial system. Part of what makes new monetarism different from old monetarism is its attention to the whole spectrum of assets, rather than some subset of "monetary" assets vs. non-monetary assets. We're interested in the role of assets in financial exchange, and as collateral in credit arrangements, for example. For safe assets to be in short supply, we have to specify some role for those safe assets in the financial system, other than as pure stores of wealth. In the model, that's done in a very simple way. There are some transactions that require currency, and some other transactions that can be executed with government bonds and credit. We abstract from banking arrangements, but the basic idea is to think of the bonds/credit transactions as being intermediated by banks.

We think of this model economy as operating in two possible regimes - constrained or unconstrained. The constrained regime features a shortage of safe assets, as the entire stock of government bonds is used in exchange, and households are borrowing up to their credit limits. To be in such a regime requires that the fiscal authority behave suboptimally - basically it's not issuing enough debt. If that is the case, then the regime will be constrained for sufficiently low nominal interest rates. This is because sufficient open market sales of government debt by the central bank will relax financial constraints. In a constrained regime, there is a liquidity premium on government debt, so the real interest rate is low. In an unconstrained regime the model behaves like a Lucas-Stokey cash-in-advance economy.

What's interesting is how the model behaves in a constrained regime. Lowering the nominal interest rate will result in lower consumption, lower output, and lower welfare, at least close to the zero lower bound. Why? Because an open market purchase of government bonds involves a tradeoff. There are two kinds of liquidity in this economy - currency and interest-bearing government debt. An open market purchase increases currency, but lowers the quantity of government debt in circulation. Close to the zero lower bound, this will lower welfare, on net. This implies that a financial shock which tightens financial constraints and lowers the real interest rate does not imply that the central bank should go to the zero lower bound. That's very different from what happens in New Keynesian (NK) models, where a similar shock implies that a zero lower bound policy is optimal.

As we learned from developments in macroeconomics in the 1970s, to evaluate policy properly, we need to understand the operating characteristics of the economy under particular fiscal and monetary policy rules. We shouldn't think in terms of actions - e.g. what happens if the nominal interest rate were to go up today - as today's economic behavior depends on the whole path of future policy under all contingencies. Our analysis is focused on monetary policy, but that doesn't mean that fiscal policy is not important for the analysis. Indeed, what we assume about the fiscal policy rule will be critical to the results. People who understand this issue well, I think, are those who worked on the fiscal theory of the price level, including Chris Sims, Eric Leeper, John Cochrane, and Mike Woodford. What we assume - in part because this fits conveniently into our analysis, and the issues we want to address - is that the fiscal authority acts to target the real value of the consolidated government debt (i.e. the value of the liabilities of the central bank and fiscal authority). Otherwise, it reacts passively to actions by the monetary authority. Thus, the fiscal authority determines the real value of the consolidated government debt, and the central bank determines the composition of that debt.

Like Woodford, we want to think about monetary policy with the nominal interest rate as the instrument. We can think about exogenous nominal interest rates, random nominal interest rates, or nominal interest rates defined by feedback rules from the state of the economy. In the model, though, how a particular path for the nominal interest rate is achieved depends on the tools available to the central bank, and on how the fiscal authority responds to monetary policy. In our model, the tool is open market operations - swaps of money for short-term government debt. To see how this works in conjunction with fiscal policy, consider what happens in a constrained equilibrium at the zero lower bound. In such an equilibrium, c = V+K, where c is consumption, V is the real value of the consolidated government debt, and K is a credit limit. The equilibrium allocation is inefficient, and there would be a welfare gain if the fiscal authority increased V, but we assume it doesn't. Further, the inflation rate is i = B[u'(V+K)/A] - 1, where B is the discount factor, u'(V+K) is the marginal utility of consumption, and A is the constant marginal disutility of supplying labor. Then, u'(V+K)/A is an inefficiency wedge, which is equal to 1 when the equilibrium is unconstrained at the zero lower bound. The real interest rate is A/[Bu'(V+K)] - 1. Thus, note that there need not be deflation at the zero lower bound - the lower is the quantity of safe assets (effectively, the quantity V+K), the higher is the inflation rate, and the lower is the real interest rate. This feature of the model can explain why, in the Japanese experience and in recent U.S. history, an economy can be at the zero lower bound for a long time without necessarily experiencing outright deflation.
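A quick numerical illustration of that last point. The functional form (log utility) and parameter values here are my assumptions for the sketch, not calibrations from the paper:

```python
# At the zero lower bound in the model, consumption is c = V + K, and
#   inflation:  i = B*u'(V+K)/A - 1
#   real rate:  r = A/(B*u'(V+K)) - 1
# Illustrative assumptions: log utility, so u'(c) = 1/c, with B = 0.96, A = 1.
B = 0.96   # discount factor
A = 1.0    # constant marginal disutility of supplying labor

def u_prime(c):
    return 1.0 / c  # log utility (an assumption for this sketch)

def inflation(safe_assets):        # safe_assets stands in for V + K
    return B * u_prime(safe_assets) / A - 1

def real_rate(safe_assets):
    return A / (B * u_prime(safe_assets)) - 1

# A smaller stock of safe assets (lower V+K) means higher inflation and
# a lower real rate at the zero lower bound:
for vk in (1.00, 0.95, 0.90):
    print(vk, round(inflation(vk), 4), round(real_rate(vk), 4))
```

Note that the Fisher relation holds by construction at the zero lower bound: (1 + i)(1 + r) = 1 for every value of V + K, so a scarcer stock of safe assets shows up as more inflation and a lower real rate, not as deflation.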

Further, in this zero lower bound liquidity trap, inflation is supported by fiscal policy actions. The zero nominal interest rate, targeted by the central bank, is achieved in equilibrium by the fiscal authority increasing the total stock of government debt at the rate i, with the central bank performing the appropriate open market operations to get to the zero lower bound. There is nothing odd about this, in terms of reality, or relative to any monetary model we are accustomed to thinking about. No central bank can actually "create money out of thin air" to create inflation. Governments issue debt denominated in nominal terms, and central banks purchase that debt with newly-issued money. In order to generate a sustained inflation, the central bank must have a cooperative government that issues nominal debt at a sufficiently high rate, so that the central bank can issue money at a high rate. In some standard monetary models we like to think about, money growth and inflation are produced through transfers to the private sector. That's plainly fiscal policy, driven by monetary policy.

In this model, we work out what optimal monetary policy is, but we were curious to see how this model economy performs under conventional Taylor rules. We know something about the "Perils of Taylor Rules," from a paper by Benhabib et al., and we wanted to have something to say about this in our context. Think of a central banker that follows a rule

R = ai + (1-a)i* + x,

where R is the nominal interest rate, i is the inflation rate, a > 0 is a parameter, i* is the central banker's inflation target, and x is an adjustment that appears in the rule to account for the real interest rate. In many models, the real interest rate is a constant in the long run, so if we set x equal to that constant, then the long-run Fisher relation, R = i + x, implies there is a long-run equilibrium in which i=i*. The Taylor rule peril that Benhabib et al. point out, is that, if a > 1 (the Taylor principle), then the zero lower bound is another long run equilibrium, and there are many dynamic equilibria that converge to it. Basically, the zero lower bound is a trap. It's not a "deflationary trap," in an Old Keynesian sense, but a policy trap. At the zero lower bound, the central banker wants to aggressively fight inflation by lowering the nominal interest rate, but of course can't do it. He or she is stuck. In our model, there's potentially another peril, which is that the long-run real interest rate is endogenous if there is a safe asset shortage. If x fails to account for this, the central banker will err.
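The Benhabib et al. peril is easy to see in a deterministic simulation. Under perfect foresight with a constant real rate x, the Fisher relation gives next period's inflation as R - x, with R set by the Taylor rule and truncated at zero. The parameter values below are illustrative choices of mine:

```python
# Perfect-foresight dynamics under a Taylor rule with a zero lower bound.
# Fisher relation: R = x + i', so next period's inflation is i' = R - x.
# Taylor rule:     R = max(a*i + (1-a)*i_star + x, 0)
# Parameter values are illustrative.
a      = 1.5    # a > 1: the Taylor principle
i_star = 0.02   # inflation target
x      = 0.02   # long-run real interest rate (assumed constant)

def step(i):
    R = max(a * i + (1 - a) * i_star + x, 0.0)  # policy rule, ZLB imposed
    return R - x                                # Fisher relation

# The target is a steady state: step(i_star) returns i_star (up to rounding).
print(round(step(i_star), 6))   # -> 0.02

# But start slightly below target, and the Taylor principle pushes inflation
# away from i_star, down into the zero-lower-bound steady state i = -x:
i = i_star - 0.001
for _ in range(30):
    i = step(i)
print(round(i, 4))   # -> -0.02: deflation at the real rate; the policy trap
```

With a > 1, any deviation of inflation below target is amplified each period until the zero lower bound binds, after which inflation sits at -x forever: the trap.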

In the unconstrained - i.e. conventional - regime in the model, we get the flavor of the results of Benhabib et al. If a < 1 (a non-aggressive Taylor rule), then there can be multiple dynamic equilibria, but they all converge in the limit to the unique steady state with i = i*: the central banker achieves the inflation target in the long run. However, if a > 1, there are two steady states - the intended one, and the zero lower bound. Further, there can be multiple dynamic equilibria that converge to the zero lower bound (in which i < i* and there is deflation) in finite time. In a constrained regime, if the central banker fails to account for endogeneity in the real interest rate, the Taylor rule is particularly ill-behaved - the central banker will essentially never achieve his or her inflation target. But, if the central banker properly accounts for endogeneity in the real interest rate, the properties of the equilibria are similar to the unconstrained case, except that inflation is higher in the zero-lower-bound steady state. How can the central banker avoid getting stuck at the zero lower bound? He or she has to change his or her policy rule. For example, if the nominal interest rate is currently zero and higher inflation is desired, there is no alternative: the central banker has to raise the nominal interest rate. But how does that raise inflation? Simple. This induces the fiscal authority to raise the rate of growth in total nominal consolidated government liabilities. But what if the fiscal authority refused to do that? Then higher inflation can't happen, and the higher nominal interest rate is not feasible.

In the paper, we get a set of results for a model which does not have a short-term liquidity effect. Presumably a liquidity effect is the motivation behind a typical Taylor rule: it associates downward shocks to the nominal interest rate with increases in the inflation rate, so if the Taylor rule is about making short-run corrections to achieve an inflation target, then maybe increasing the nominal interest rate when inflation is above target will work. So, we modify the model to include a segmented-markets liquidity effect. Typical segmented-markets models - for example, this one by Alvarez and Atkeson - are based on the redistributive effects of cash injections. In our model, we allow a fraction of the population - traders - to participate in financial markets, in that they can use credit and carry out exchange using government bonds (again, think of this exchange as being intermediated by financial intermediaries). The rest of the population are non-traders, who live in a cash-only world.

In this model, if a central banker carries out random policy experiments - moving the nominal interest rate around in a random fashion - he or she will discover the liquidity effect. That is, when the nominal interest rate goes up, inflation goes down. But if this central banker wants to increase the inflation rate permanently, the way to accomplish that is by increasing the nominal interest rate permanently. Perhaps surprisingly, the response of inflation to a one-time, perfectly anticipated jump in the nominal interest rate looks like the figure in John Cochrane's post that he labels "pure neo-Fisherian view." It's surprising because the model is not purely neo-Fisherian - it's got a liquidity effect. Indeed, the liquidity effect is what gives the slow adjustment of the inflation rate.

The segmented markets model we analyze has the same Taylor rule perils as our baseline model; for example, the Taylor principle produces a zero-lower-bound steady state which is the terminal point for a continuum of dynamic equilibria. An interesting feature of this model is that the downward adjustment of inflation along one of these dynamic paths continues after the nominal interest rate reaches zero (because of the liquidity effect). This gives us another force which can potentially give us positive inflation in a liquidity trap.

We think it is important that central bankers understand these forces. The important takeaways are: (i) The zero lower bound is a policy trap for a Taylor rule central banker. If the central banker thinks that fighting low inflation aggressively means staying at the zero lower bound, that's incorrect. Staying at the zero lower bound dooms the central banker to permanently undershooting his or her inflation target. (ii) If the nominal interest rate is zero, and inflation is low, the only way to increase inflation permanently is to increase the nominal interest rate permanently.

Finally, let's go back to the quote from Krugman's post that I started with. I'll repeat the last paragraph from the quote so you don't have to scroll back:
And the question is, why? What motivation would you have for inventing complicated models to reject conventional wisdom about monetary policy? The right answer would be, if there is a major empirical puzzle. But you know, there isn’t. The neo-Fisherites are flailing about, trying to find some reason why the inflation they predicted hasn’t come to pass — but the only reason they find this predictive failure so puzzling is because they refuse to accept the simple answer that the Keynesians had it right all along.
Why? Well, why not? What's the puzzle? Central banks around the world, with their "conventional wisdom," seem to have a hard time making inflation go up. Seems they might be doing something wrong. So, it might be useful to give them some advice about what that is, instead of sitting in a corner telling them the conventional wisdom is right.