Aggressive comments attached to my posts are a sure sign that links elsewhere in the blogosphere are giving me readers who don’t normally come here. That was certainly the case with this piece, which Mark Thoma, and in turn Paul Krugman, paid some attention to. This gives me a useful opportunity to address issues related to real business cycle theory and its place in modern macroeconomics.
Milton Friedman and the Old Monetarists seemed to be short-run Keynesians. When pressed to write down his vision of the sources of short-run nonneutralities of money, Friedman’s framework was essentially standard IS/LM. In contrast to mainstream Keynesian views at the time, however, Friedman was firmly anti-interventionist. According to Friedman, attempts at stabilization policy, whether monetary or fiscal, would inevitably make things worse. For Friedman, policy could mess things up because of policy-making lags, imperfect information about the structure and state of the macroeconomy, and the “long and variable lags” associated with the effects of monetary policy (and fiscal policy too).
Then, along came the Phelps volume and Lucas’s pathbreaking 1972 Journal of Economic Theory paper, and macroeconomists began to think about the world in an entirely different way. In the Keynesian world, fluctuations in aggregate economic activity are inefficient, and the logic appeared to be consistent with what we observe. We find ourselves in the middle of a recession. In terms of its basic fundamentals, the economy looks more or less the same as it did before the recession happened. There is roughly the same set of people, with the same skills. The same buildings and machines are in existence, and we know just as much about how to produce stuff as we did before the recession happened. However, we are producing less and more people are out of work. Surely something has gone wrong, and the government can do something about it, by spending more and relaxing monetary policy to put people back to work.
However, the Phelps volume writers and Lucas got us thinking about the following. An unemployed person is someone who answers the labor force survey in a particular way. This person is engaged in a particular activity – search – and we can analyze this process just as we would analyze anything else in economics, as involving choice and incentives. Because of mismatch between the workers that firms want and the jobs that workers would like to have, separations arising from various factors, and people moving in and out of the labor force, there will always be unemployment. Further, fluctuations in these determinants of unemployment will make the unemployment rate fluctuate. Indeed, we might imagine fluctuations in unemployment that are purely efficient – there may be nothing the government should do about this. Also, according to Lucas’s 1972 model, monetary policy could be causing inefficient fluctuations. Indeed, there could be states of the world where GDP is inefficiently high. Thus, the government could be actively screwing things up, in line with Friedman’s thinking.
Next, along come Kydland and Prescott in 1982, with what later became known as real business cycle analysis. In terms of the economics, the Kydland-Prescott framework was not very radical, being an elaboration of received growth theory, stemming from the work of Solow, Cass-Koopmans and Brock-Mirman. However, in a lot of ways Kydland and Prescott were thinking outside the box, and they were very much in the faces of mainstream macroeconomists, in a much more aggressive way than were Lucas, Sargent, and Wallace in the previous decade. Kydland and Prescott offended econometricians, by calibrating rather than estimating, and by using unconventional time series filters (i.e. the Hodrick-Prescott filter) to separate the business cycle components of time series from the trends. They also gave economists license to contemplate the possibility that business cycles could be bad events that we should do nothing about – government intervention could serve only to make the problem worse, analogous to how Lucas's 1972 model works, but for different reasons.
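For readers unfamiliar with the Hodrick-Prescott filter: it picks a trend that trades off fit against smoothness, and it has a simple closed-form solution. Here is a minimal numpy sketch of that solution (the function name, the smoothing parameter choice, and the toy series are my own illustration, not anything from Kydland and Prescott's code):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: split a series into trend and cycle.

    The trend tau minimizes
        sum_t (y_t - tau_t)^2 + lam * sum_t ((tau_{t+1}-tau_t) - (tau_t-tau_{t-1}))^2
    which has the closed-form solution tau = (I + lam * K'K)^{-1} y,
    where K is the (T-2) x T second-difference matrix.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Build the second-difference matrix K.
    K = np.zeros((T - 2, T))
    for i in range(T - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lam * K.T @ K, y)
    cycle = y - trend  # the "business cycle" component, by construction
    return trend, cycle

# Toy example: a smooth growth trend plus a short "business cycle" wiggle.
t = np.arange(100)
series = 0.05 * t + 0.5 * np.sin(2 * np.pi * t / 8)
trend, cycle = hp_filter(series, lam=1600.0)
```

The smoothing parameter lam = 1600 is the conventional choice for quarterly data; larger values force the trend closer to a straight line, pushing more of the variation into the cycle component.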
For some macroeconomists who were brought up on the Keynesian paradigm, and were highly invested in it, this was heresy. Some older prominent economists, including Tobin and Solow, resisted, and some prominent young economists, including Larry Summers, did as well. For young researchers who received their education during the 1970s and 1980s (this includes me of course), the new paradigms were exciting. The economics of Kydland, Prescott, Lucas, Sargent, and Wallace looked more firmly grounded in the solid general equilibrium theory developed by Arrow and Debreu, and these people had good arguments which appeared to match well with empirical observations. Relative to this, mainstream Keynesian economics just looked mushy. Who would want to tie their caboose to that train?
Now, though Kydland and Prescott presented an extreme view of business cycles, which could be interpreted as telling us that the government is irrelevant, the general spirit of the approach is something entirely different from that. The key lesson is that we need to impose the same discipline on the evaluation of macroeconomic government policies as we would in any other field of economics. Skepticism about the role of government is healthy, and every government program and intervention should be justified in terms of correcting some externality or market failure. In the language of Pat Kehoe (or maybe this comes from Lucas – I’m not sure), we don’t want to justify government intervention based on a “chicken model.” In a chicken model, we assume that chickens are good, that the private sector cannot produce chickens, and that the government can, and therefore should, produce chickens. A Principles-of-Economics Keynesian-Cross model is basically a chicken model. In it, we assume that more GDP is a good thing, and the government can give us more GDP – essentially for free – by increasing government spending on goods and services. The logical conclusion from such a model is that we could make ourselves infinitely well off with a government of infinite size. Keynesian Cross macroeconomics is basically the macroeconomics of Paul Krugman – what he is peddling to the general public.
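To see the chicken-model logic in the algebra, recall the standard textbook Keynesian Cross (this is the generic Principles-level version, not anything specific to Krugman's writing): output is demand, and consumption is a fixed fraction of income,

```latex
Y = C + I + G, \qquad C = a + bY, \quad 0 < b < 1
\;\Longrightarrow\;
Y = \frac{a + I + G}{1 - b},
\qquad \frac{\partial Y}{\partial G} = \frac{1}{1 - b} > 1 .
```

Since nothing in the model prices the resources that G absorbs or bounds the government's budget, every extra dollar of spending appears to deliver 1/(1-b) dollars of output for free, and welfare-maximizing G is unbounded. That is the chicken-model objection in one line.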
What about Prescott’s remarks last Wednesday about the causes of the current recession? What should we make of that? To me, Prescott’s narrative did not make any sense, in terms of what I know about the facts. To be fair, we should wait to see how he spells this out in terms of a rigorous argument with an explicit model he can use to confront the data. However, to make myself clear (to people like Krugman), what I was quarreling about had nothing to do with the competitive paradigm or basic neoclassical growth theory. My only problem was with the shocks that Prescott was invoking to explain events. It could well be that we ultimately come to an understanding of the financial crisis and the recent recession as an economically efficient macroeconomic response to events. In this context, we might come to think of the policy responses to the crisis – the extreme actions of central banks and the US stimulus bill – as being wrongheaded. In spite of the fact that important factors that gave rise to the recent recession were the result of errors in the design and regulation of the US financial system, it could be that the temporary government intervention in response to the recession was wrong. In my view, some of the US monetary policy intervention was the right thing to do, and maybe some elements of the stimulus bill were appropriate, but I could be wrong.
Now, serious New Keynesian economists do not peddle chicken models in the same sense as the Old Keynesians, like Krugman. These people, including Jordi Gali, Mark Gertler, and Mike Woodford, deal with models where inefficiencies arise for reasons that any economist could understand. The basic sticky price frictions in their models yield relative price distortions that happen to be correctible (subject to the zero lower bound on the nominal interest rate) by monetary policy. These people use conventional theory, developed by Arrow, Debreu, Solow, Cass, Koopmans, Brock, Mirman, and Prescott, to derive their conclusions. That has been the great victory of Lucas, Prescott, Sargent, Wallace, and others. Their theoretical methods are now widely accepted, and in terms of empirical work New Keynesians and others now use a wide array of calibration and estimation techniques. No one would shy away from New Keynesian economics because it is too mushy.
My problem with New Keynesian economics (in addition to its treatment of monetary and financial frictions) lies with the basic sticky price assumption, which leads to what I referred to here as “incoherence.” That was a little too harsh a word, but what the heck. One problem with the New Keynesian approach is that there is a chicken model lurking in there. Price flexibility is good, the private sector cannot produce it, but appropriate monetary intervention can produce the equivalent of price flexibility. Another way to look at this is that we do not understand pricing decisions at the level of production, distribution, and retail sales, or the relationship of these pricing decisions to other production decisions (employment and investment), well enough to say whether frictions or externalities arising from those decisions matter in a serious way for macroeconomics. No one has a convincing theory (or indeed any theory) to tell us why pricing should matter for macroeconomic activity and policy in the way Old and New Keynesians want it to. That, I think, is the challenge for Keynesian economics. Until it is resolved, I am content to seek explanations for aggregate phenomena and roles for economic policy in financial and monetary frictions.