Thursday, October 20, 2011

Econometrics, Calibration, and Fights

Paul Krugman is off on another rant about dead issues in macroeconomics, in this post and this one, including the usual discussion of "freshwater guys," who currently exist only in Krugman's mind.

Several points here:

1. In current macroeconomic thought, calibration methods and econometrics are both widely accepted as useful approaches to answering quantitative questions, and these approaches are often mixed in particular projects. Kydland and Prescott pioneered calibration methods in macroeconomics, and applied them to a particular class of models, but the methods themselves are more general than that, and various Old Keynesians and New Keynesians have found calibration useful.

2. Part of what the calibration people were reacting to was a view among econometricians that quantitative work was about "testing" theories. The problem is that any macroeconomic model is going to be wrong on some dimensions. To be useful, a model must be simple, and simplification makes it wrong in some sense. Subjected to standard econometric tests, it will be rejected. Models that are rejected by the data can nevertheless be extremely useful. I think that point is now widely recognized, and you won't find strong objections to it, as you might have in 1982.

3. There was indeed a fight at Minnesota over econometrics. I can't remember exactly when it happened. Maybe I was working at the Minneapolis Fed at the time. Maybe I wasn't. In any case, I think I know most of the details of the fight. The people involved were all fine human beings, and if you talked to them at the time about their sides of the argument, they would all make perfect sense. Fights happen among people who work together, and it's best to let these things rest. I'm sure Krugman could tell us a lot about fights in his own department. This is basically gossip, and is best left for talk in the bar.

4. Krugman tells us:
I’d just add that correspondents tell me that the anti-Keynesians pretty much blockade any hiring of new Keynesians in their departments.
People in academic departments disagree about who to hire. This is news? When an academic hire is made, we think we are making a very long-term commitment, particularly when it's a tenured appointment. We're making risky bets on people. Are they intelligent and creative types? Will they be able to adapt when their current research program goes out of fashion? Will they get along well with their colleagues and make everyone more productive? Sometimes things turn out badly. While James Tobin was alive, he wouldn't tolerate having macroeconomists who were up on contemporary macro in his department. As a result, he set his department back, and they have only recently caught up with where macro has gone. Sometimes things turn out well. In 1991, the University of Minnesota hired Nobu Kiyotaki. Remember that Blanchard and Kiyotaki (1987) was a key innovation in Keynesian economics. The Minnesotans were not doctrinaire about it. They saw a smart and productive guy, hired him, and he went on to do more great things. He now works in the Princeton econ department, with Krugman.


  1. Don't disagree with your post but your point #3 (leave gossip for the bar) and #4 (gossip about hiring) seem to conflict.

  2. Tobin's dead, so it's hard to hurt his feelings, and it's always nice to give Nobu a compliment, don't you think?

  3. "To be useful, a model must be simple, and simplification makes it wrong in some sense. Subjected to standard econometric tests, it will be rejected. Models that are rejected by the data can nevertheless be extremely useful."

    The issue is how far off and why.

    Your rejection can have statistical significance but not economic significance. For example, suppose you test whether men and women are paid the same, controlling for various factors. You can statistically reject that their pay is perfectly equal, but when you look at the confidence interval, you see it's tight around $50/year, which is not an economically significant amount.

    From what I've seen, though, I bet real business cycles is off by a very, very economically significant amount. At least some of what it crucially, not trivially, depends on is far from reality, based on very strong evidence.

    Like I've said, though, it's the interpretation. A model is only as good as its interpretation, and a good interpretation is often far from literal.
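    To make the statistical-vs-economic-significance point concrete, here is a minimal simulation sketch. All the numbers are invented for illustration (a true pay gap of $50/year on salaries around $50,000); they are not from any actual wage study.

```python
# Hypothetical illustration of statistical vs. economic significance.
# With a large enough sample, a tiny true pay gap ($50/year) is
# statistically significant, yet the confidence interval shows it is
# economically trivial relative to ~$50,000 salaries.
import math
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
men = rng.normal(50_050, 5_000, n)    # mean salary $50,050, sd $5,000
women = rng.normal(50_000, 5_000, n)  # mean salary $50,000, sd $5,000

diff = men.mean() - women.mean()
se = math.sqrt(men.var(ddof=1) / n + women.var(ddof=1) / n)
t = diff / se
p = math.erfc(abs(t) / math.sqrt(2))  # two-sided p, normal approximation

print(f"estimated gap: ${diff:.0f}/year, t = {t:.1f}, p = {p:.2g}")
print(f"95% CI: ${diff - 1.96 * se:.0f} to ${diff + 1.96 * se:.0f}")
```

    The test decisively rejects equal pay, but the interval sits tightly around a $50/year difference: statistically significant, economically irrelevant.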

  4. An example I like is the CAPM. CAPM says everyone holds a combination of the market portfolio and the risk-free asset (assumed to be US T-bills). That's easy to reject. I don't hold that, so instant rejection. But if you examine the model you get some good understanding for smart investment policy, and it helps you roughly estimate certain things easily and well. Interpreted well, the model's very useful and gives valuable intuition, but the best interpretation is not literal.
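    The "rough estimates" use of the CAPM can be sketched in a few lines. The security market line is E[R_i] = r_f + beta_i * (E[R_m] - r_f); the 2% risk-free rate and 6% market premium below are illustrative assumptions, not estimates.

```python
# Back-of-envelope CAPM sketch. Inputs (2% risk-free rate, 6% market
# premium) are illustrative assumptions only.
# Security market line: E[R_i] = r_f + beta_i * (E[R_m] - r_f)

def capm_expected_return(beta, risk_free, market_premium):
    """Expected return the CAPM implies for an asset with the given beta."""
    return risk_free + beta * market_premium

rf, premium = 0.02, 0.06
for beta in (0.0, 0.5, 1.0, 1.5):
    er = capm_expected_return(beta, rf, premium)
    print(f"beta = {beta:.1f}: expected return = {er:.1%}")
```

    Taken literally the model is rejected, as the comment says; taken as a rough tool, it turns one estimated number (beta) into a usable required-return figure.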

  5. Richard,

    Your CAPM logic applies to RBC too. It gives you valuable intuition, you don't take it literally, and there are some things you would not use it for. Obviously, it's not going to help you think about monetary policy.

    Deirdre McCloskey likes to talk about the infatuation of some empirical researchers with t statistics. Some regression coefficient could be associated with a large t statistic, but the economic effect in question could be irrelevant. Prescott has a very sharp empirical mind. He has a good feel for models, and can cut through theoretical arguments quickly to get at the quantitative significance of a particular phenomenon.

  6. Peter Orszag has an article on how wage share of national income is declining.

    Do you see this as problematic for the Cobb-Douglas approach of modern macro? Is it something that can be addressed by plugging in a bargaining model of the labor market instead of a spot competitive labor market?

  7. Some people have thought about the fluctuating labor share of income and how you might explain it. Here's an old paper on this, but I'm sure there has been other work since:

  8. great, thank you for the reference.

  9. can i ask what happened in 1982?

  10. Presumably it was a reference to this paper:

    Kydland/Prescott 82 is one of the most important papers (arguably the most important?) in modern DSGE macroeconomics. It utilized calibration instead of estimation.

  11. The fact that there would even be a discussion about whether to teach econometrics shows how far off the path things are. The fact that economics is like different religious camps shows how bad things are. Models that are rejected by the data are religion, not science, and eventually end up in the dustbin of history. The triumph of science over religion was empirical data. Sure, it's hard to test economic models. You can't run experiments in astrophysics either, but just as in physics, you should relish opportunities to test (NOT "calibrate"!) the assumptions of different models, and look for those opportunities. That can only happen if you teach econometrics. Moreover, you can't be against things like more QE that have "no effect." It's a great opportunity to have your model be proven right. Or not.

  12. 1. "models that are rejected by the data are religion not science"

    Then all macroeconomics is religion, and it's certainly not in the dustbin.

    2. I agree that we should require econometrics in the PhD core.

    3. "its a great opportunity to have your model be proven right. or not."

    We could run a whole array of monetary policy experiments. You could imagine all kinds of wild things. Why not try it all and see what happens? Wouldn't that be great.

  13. "Prescott has a very sharp empirical mind. He has a good feel for models, and can cut through theoretical arguments quickly to get at the quantitative significance of a particular phenomenon."

    Very sharp... like when he assumed a labor supply elasticity of 100 to make the point that Obama-fear was causing the Great Recession. Edward Prescott is a deranged lunatic and an embarrassment to the economics profession.

  14. "Ed's key points were: 1. Monetary policy does not matter. 2. Financial factors are the symptoms, not the causes, of the recent downturn. 3. The recession was due to an Obama shock, i.e. labor supply fell because US workers anticipate higher future taxes.* Bob Hall suggested that this would require a Frisch labor supply elasticity of about 27, which seems ridiculous. However, Ed stuck to his guns and thus seemed - well, ridiculous."


    The guy may be sharp and accurate when it supports his libertarian ideology, but deliberately inaccurate when it doesn't.

  15. Actually, when I heard him say that in public, I thought he was talking off the top of his head. That actually was not the case, as it turned out when I talked to him about it later. He has actually thought carefully about it. You can certainly argue about it, i.e. the importance of anticipated future policy for what we are seeing now and have seen, but Ed is certainly clear about what he is thinking and why. There's a model there, and science at work, and you can have the argument with him on scientific terms. He certainly has political views, but the economics is science.

  16. If you build a model designed to show why the great recession happened, and someone points out to you that your model implies a labor supply elasticity of 27, what do you do, assuming you're a scientist? It's hard to imagine a piece of evidence that would cast more doubt on your model than this, is it not?

  17. Hall thought that Prescott was thinking about a mechanism working by way of that particular labor supply elasticity. That's not what Prescott was thinking about. I can't argue in Prescott's place here, as I'm not him and I don't know, or can't remember, all the details.

  18. dwb, all models are wrong. And some models are useful. Do you know how we use econometrics to test "usefulness"?

  19. "Hall thought that Prescott was thinking about a mechanism working by way of that particular labor supply elasticity. That's not what Prescott was thinking about."

    Is this a case of incommensurable paradigms where you can't translate the results or implications from one paradigm or model (Prescott's) into another (the one Hall was thinking in terms of)? Put somewhat differently, is it meaningless to say that Prescott's result implies a labor supply elasticity of 27 whether or not Prescott was thinking in these terms?

  20. This is way above my pay grade as a community college lecturer, but: don't labor supply elasticities matter much less for explaining labor input in the search and matching framework of Diamond et al. compared to the competitive spot market world inhabited by Prescott?
    Steve, please feel free to enlighten me. Now let me get back to prepping my 4 sections of principles.

  21. "incommensurable paradigms"

    I had to look up "incommensurable" before replying. That's not the issue here. If Prescott had written down his argument, then we could have the discussion. One element of progress in macroeconomics in the last 40 or so years is that, in fact, the paradigms are commensurable. New Keynesian economics, for example, uses the same language as does non-Keynesian economics, which in turn encompasses many things. We can argue in terms of the basic building blocks of models - preferences, endowments, technology, equilibrium concept - and then quantitative questions can boil down to something like, as a hypothetical example: such and such a result depends on having a labor supply elasticity of a particular magnitude, but this is way outside the ballpark of what we know about labor supply elasticities.

    Here's an actual example: Mehra and Prescott (1985). That's a calibration exercise, basically. Take a Lucas asset-pricing model. Plug in the observed variability in aggregate consumption and the observed equity premium. Question: How risk averse does the representative agent have to be to explain the observed equity premium? Answer: You need a level of risk aversion that is way outside the ballpark of what we know about risk aversion. Simple exercise, and it tells you exactly what the problem is with the model.
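    The flavor of that calibration exercise can be captured in a standard back-of-envelope approximation (this is a simplification, not the paper's actual computation): under CRRA utility with lognormal consumption growth, treating equity as a claim perfectly correlated with consumption, the equity premium is roughly gamma times the variance of consumption growth.

```python
# Back-of-envelope version of the Mehra-Prescott (1985) equity premium
# calibration. Approximation: premium ~ gamma * sigma_c**2, where gamma is
# relative risk aversion. Inputs are round historical US numbers, close to
# those used in the paper.

premium = 0.06   # average equity premium over T-bills, ~6%
sigma_c = 0.036  # std dev of aggregate consumption growth, ~3.6%

implied_gamma = premium / sigma_c ** 2
print(f"implied relative risk aversion: {implied_gamma:.0f}")  # ~46
```

    A risk aversion coefficient near 46 is far outside the single-digit range usually considered plausible, which is exactly the "outside the ballpark" diagnosis described above.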

  22. 5:55 anonymous,

    In a search and matching model, you'll have the same basic issues if you are trying to explain, for example, fluctuations in aggregate employment and hours worked. A search and matching model is equipped to think about questions pertaining to fluctuations in vacancies and unemployment, for example, which of course you cannot address in models where there is no activity that matches up with what we observe as unemployment in practice.

  23. Here are some additional points which to some extent support Prescott's views regarding policy uncertainty and the slow recovery from the Great Recession.

  24. Clarification:

    At Minnesota in the 1970s and 1980s, there was not agreement as to methodology, but there was mutual respect. Neil Wallace said no issue in monetary economics will be resolved by a number. My view was that most, not all, aggregate issues must be resolved by a number. I have the greatest respect for Neil, Tom, and Chris. I think the respect is mutual. Incidentally, Chris Sims attacked Tom Sargent's methods for estimating macro models. I know they have the greatest respect for each other as economic scientists.

    There was a market in ideas at Minnesota. No required courses, so this was not a mechanism to teach the past. If no student showed up to your course, it was evidence that you were out of it. This rule led to people keeping current and looking ahead. Competition fosters the development of the sciences. Politics retards their development.

    Edward C. Prescott

  25. Since Professor Prescott has joined the conversation, perhaps he can answer a couple of questions asked above: 1) does your explanation [model] of the great recession implicitly assume an implausibly high labor supply elasticity? and 2) if so, should this count as a significant shortcoming of the model?

  26. Or, if Prof. Prescott does not return to the conversation, he might ask us to consider the
    FR Minneapolis Staff Report on labor supply at

  27. 1. Ed's point about the market for ideas is a useful one. In economics the common core in PhD programs is very useful for us. It gives all economists a common language, which of course helps us communicate with each other and make progress. There is some disagreement about what is in the common core. Most typically it's macro, micro, and econometrics. In most PhD programs, there are preliminary exams in only micro and macro. Sometimes an econometrics prelim is included. Sometimes macro is downplayed. There are programs where macro courses are not taught until the second year of the program. In some cases macro has been under attack - some people want to take it out of the core. Minnesota is unusual in not requiring econometrics, specifically, as part of the core, but it appears students are required to have some kind of quantitative training. Beyond the core, the market for ideas is very important. Field requirements (field exams, for example) seem to be simply attempts to establish restrictions that circumvent this competition, and act to hamper students' progress.

    2. Ed's comments bring back some memories. You shouldn't think that Minnesota was a place rife with bickering in the period we are discussing. The environment was very collegial, and fun. At the Fed, Ed sometimes kidded the econometric types about being dinosaurs. At some point, someone bought a five-foot-high blowup rubber dinosaur, and sat it in someone's office chair (don't remember who got it first). After that, the dinosaur circulated - one morning it would be in Christiano's chair, the next in Prescott's, etc.