Wednesday, April 9, 2014

The FRB/US Model and Inflation

The Board of Governors has posted details on the structure of the FRB/US model, the data used in estimating the model, published work using the model, etc. If you have access to EViews, it appears you can also run simulations. There is even a long disclaimer, presumably to cover cases where someone takes the model too seriously, uses it for retirement planning or some such, and then wants to sue the Fed. On this, I have a proposal, which is a blanket disclaimer to cover everything - public speaking by Fed employees, casual chit-chat in the coffee shop, whatever:
Please don't ever pay close attention to what we say, or take any action based on such utterances. We're only joking most of the time anyway. If you really think we're saying something important, you're not as smart as you look.
That should do it, I think. The Fed can state this once, and then never say it again.

The FRB/US model, used by the Board for forecasting and policy analysis, is the culmination of perhaps 45 years of work. Various generations of management at the Board have directed some smart people to work on this thing, and you can feel the weight of the large quantity of quality-adjusted hours of work that went into putting it together. But is it any good? Could the Board do just as well or better at forecasting with a much simpler tool? Could a well-educated and well-informed economist do a respectable job of central banking without ever looking at the output of the FRB/US model?

Long ago in a galaxy far far away, large-scale macroeconometric models were taken very seriously. This 1959 paper by Adelman and Adelman was published in Econometrica. They simulated the Klein-Goldberger model on an IBM 650, which was the first mass-produced computer. Here's what that looked like (from Wikipedia):
The Klein-Goldberger model was small relative to the FRB/US model - 15 equations. It was first estimated in 1955 using electro-mechanical desk calculators. In those days running a regression was a big job - you will understand the magnitude of the task if you have ever had to invert a matrix by hand. By the late 1960s, the Klein-Goldberger model had evolved into the large-scale FRB/MIT/Penn model, which is a distant ancestor of the FRB/US model.
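For perspective, the core computation in a least-squares regression is solving the normal equations, which means inverting a matrix. A minimal sketch with toy made-up data - trivial in NumPy today, a major undertaking on a desk calculator then:

```python
# Ordinary least squares via the normal equations: beta = (X'X)^{-1} X'y.
# Toy data for illustration only; the point is the matrix inversion,
# which Klein and Goldberger's generation did by hand.
import numpy as np

X = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [1.0, 5.0],
              [1.0, 7.0]])   # column of ones plus one regressor
y = np.array([1.1, 1.9, 3.2, 4.1])

beta = np.linalg.inv(X.T @ X) @ X.T @ y
print(beta)                  # intercept and slope
```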

But shortly after large-scale models had been warmly embraced by policymakers in central banks and governments, they were trashed by the upstarts of the macroeconomics profession. I think the modern notion of the Lucas critique is that it calls into question "reduced-form" economics, "implicit theorizing," and such. But Lucas's paper actually had a narrower focus: he was criticizing large-scale macroeconometric models. This is the key quote from his paper:
The thesis of this essay is that it is the econometric tradition, or more precisely, the "theory of economic policy" based on this tradition, which is in need of major revision. More particularly, I shall argue that the features which lead to success in short-term forecasting are unrelated to quantitative policy evaluation, that the major econometric models are (well) designed to perform the former task only, and that simulations using these models can, in principle, provide no useful information as to the actual consequences of alternative economic policies. These contentions will be based not on deviations between estimated and "true" structure prior to a policy change but on the deviations between the prior "true" structure and the "true" structure prevailing afterwards.
Clearly, this was not taken to heart at the Board, as the FRB/US model - in spite of some claims to the contrary - does not look so different from early macroeconometric models. Indeed, if Klein and Goldberger were alive, I don't think they would find the FRB/US model an unfamiliar object, though the documentation is dressed up in the language of modern macroeconomics as practiced in most central banks. So, was Lucas wrong, or what?

There's a lot going on in the FRB/US model, but suppose we focus on something the Fed cares about, and is instructed to care about: the inflation rate. Inflation determination appears to be in a New Keynesian spirit. So, one thing that has changed from 1970s-era large-scale macroeconometric models is that the money demand function has disappeared, and monetary quantities appear to be nonexistent. Inflation is determined - roughly - by an output gap and trend inflation, which is a survey measure of expected inflation over the next ten years. For example, if a shock occurs in the model which causes a positive output gap, then inflation rises, and over time inflation reverts to the long-run trend, which is exogenous.
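As a caricature of that inflation block (my reading, with made-up coefficients - not the Board's actual equation), the mechanics look something like this:

```python
# Stylized Phillips-curve inflation equation in the spirit of the FRB/US
# description above: inflation responds to its own lag, the survey-based
# long-run trend (the series the documentation calls PTR, I believe), and
# the output gap. Coefficients are made up for illustration.
def inflation(pi_lag, pi_trend, output_gap, kappa=0.1, gamma=0.7):
    return gamma * pi_lag + (1 - gamma) * pi_trend + kappa * output_gap

# A positive output gap pushes inflation up; once the gap closes,
# inflation reverts toward the exogenous trend pi_trend.
print(inflation(pi_lag=1.5, pi_trend=2.0, output_gap=1.0))  # gap pushes inflation up
print(inflation(pi_lag=1.5, pi_trend=2.0, output_gap=0.0))  # reverting toward trend
```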

So, this is purely Phillips-curve inflation determination, which certainly does not square with how I think about the inflation process. Neither does it square with the data, which is notoriously at odds with the view that output gaps are important in explaining or forecasting inflation, or with the view that Phillips curves are stable. The Phillips curve has done particularly badly over the past couple of years or so. Here is the output gap measure used in the FRB/US model:
So, given that, the FRB/US model tells us that inflation should have been rising recently. But headline PCE inflation and core PCE inflation have been falling:
If you look at Flint Brayton's memo on "A New FRB/US Price-Wage Sector," in the FRB/US documentation, you can get some idea of how the Board staff think about this. Staff members of course monitor how the model predicts out of sample, and the model had not been doing well:
Inflation in the recent recession and its aftermath did not decline nearly as much as the 1985-2007 estimates would have predicted, a result that is common to many models of inflation.
So, apparently they found that the Phillips curve they had estimated was not stable - during the recession it was making inflation forecasting errors on the low side. The solution was to re-estimate:
ML estimation over a longer sample period that ends in 2012 reduces the sector’s unemployment slope coefficient by more than half.
ML is maximum likelihood. So, if the sample is extended to 2012, the Phillips curve starts to go away - the slope coefficient gets much smaller. But the Board staff doesn't seem to like this:
An alternative and more cautious re-assessment of the unemployment slope is obtained with Bayesian methods. Using the 1985-2007 ML parameter estimates and their standard errors as a prior, Bayesian estimation over the longer sample reduces the slope coefficient by one-third.
So, apparently being "cautious" is when you ignore the data and go with your intuition.
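Mechanically, what the memo describes is standard normal-prior shrinkage: the posterior mean is a precision-weighted average of the prior mean (the 1985-2007 ML estimate) and the longer-sample estimate. A minimal sketch, with made-up numbers rather than the staff's actual estimates:

```python
# Normal-prior Bayesian shrinkage of a Phillips-curve slope, as a sketch.
# The 1985-2007 ML estimate and its standard error serve as the prior;
# the longer-sample ML estimate summarizes the likelihood. All numbers
# here are hypothetical, not the Board staff's estimates.
prior_mean, prior_se = -0.30, 0.10   # hypothetical 1985-2007 ML slope and SE
data_mean, data_se = -0.12, 0.12     # hypothetical slope from the sample through 2012

w_prior = 1 / prior_se**2            # precision of the prior
w_data = 1 / data_se**2              # precision of the likelihood
post_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
print(post_mean)  # about -0.23: pulled back toward the prior
# The tighter the prior (smaller prior_se), the less the recent data can
# flatten the slope. That is what "cautious" amounts to here.
```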

This is important, as it tells us something about how the Governors and the FOMC chair think about monetary policy decisions. The FRB/US model seems to be producing the Board's forecast and some policy scenarios that are used as input at FOMC meetings. And we know that Janet Yellen takes FRB/US quite seriously. The FOMC is predicting that the inflation rate will rise over time to 2%, and Janet Yellen has told us that she thinks that, under that scenario, the Fed's policy interest rate should start rising in spring 2015. Some people want to interpret that as a hawkish statement, but I don't think so, because I think the forecast - and FRB/US - is wrong. The FOMC has also stated that the policy rate will stay low if inflation continues to be low or falls, and I think that is the likely outcome, for reasons I have discussed before. So, given what the FOMC has told us about its policy rule, my prediction is that the policy rate will stay where it is for considerably longer than Janet Yellen thinks it will.

14 comments:

  1. "So, apparently being "cautious" is when you ignore the data and go with your intuition."

    I can't comment on the specific methods employed here, but as a characterization of statistical regularization, the quote is pretty off base. The prior is based on the MLE and its SE from the earlier data set, and it is combined with a likelihood for the more recent data. What does this have to do with data versus intuition? Bayesian methods simply provide a natural way to combine different sources of information in a coherent way. MLE alone is only more data-based if they're pulling the prior out of thin air. They're not.

    Replies
    1. Bayesian estimation effectively takes a prior distribution, uses information in the data, and produces a posterior distribution. Where does the prior come from? Well, you can say that it's "using data from different sources," but it looks to me like these people just didn't like the maximum likelihood estimate that the data sample delivered, and changed it by manipulating the prior. That's the role that Bayesian estimation plays in New Keynesian estimation. In some cases the likelihood function is flat, effectively, and the prior is choosing the parameter estimate. That's the modeler's intuition, if you ask me.
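      To see the point with made-up numbers: when the likelihood is nearly flat in the slope, its implied standard error is huge, and the posterior just reproduces the prior.

      ```python
      # Flat likelihood: the data's implied SE on the slope is enormous,
      # so the precision-weighted posterior mean collapses onto the prior.
      # All numbers are made up for illustration.
      prior_mean, prior_se = -0.30, 0.10   # prior from an earlier sample
      flat_mean, flat_se = -0.02, 5.0      # "flat" likelihood: huge SE
      w_p, w_d = 1 / prior_se**2, 1 / flat_se**2
      post = (w_p * prior_mean + w_d * flat_mean) / (w_p + w_d)
      print(post)  # about -0.2999: essentially the prior
      ```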

    2. A more charitable interpretation is that the modelers believe that the recent inflation data are more relevant to understanding the behavior of inflation for the foreseeable future, owing to the historically unusual circumstances that have recently obtained, but recognize the potential bias and variability inherent in small samples, and so see fit to shrink the estimates towards estimates based on less recent data. If we think A tells us more than B about how things work in general, but B tells us more than A about how things work for the time being, then we should put more weight on B, but also utilize A, in order to obtain estimates that are constrained by the full set of information available, but put more emphasis on recent trends. If you think weighting different parts of the data differently based on external knowledge is scientifically suspect, I wonder how you think anyone ever models anything at all. Priors are no more subjective than likelihoods--both reflect the modeler's pre-existing knowledge about the system under study, or lack thereof. Why is it good science to say that y is normally distributed with mean X*beta, and hence we can use OLS to estimate beta, but bad science to say beta is normally distributed with mean mu, estimated from a separate data set?

    3. Sure, I agree with most of that. I'm quite willing to consider the output that comes out of a calibrated model, in which the modeler somehow marshaled all the evidence he or she had on how to set the parameters and then did some quantitative experiments. You could say that Bayesian estimation is just an efficient way to combine the information in the data set with what comes from outside it. But there are two things that make me suspicious in this instance:

      1. I know that Phillips curve estimates are going to be highly sensitive to the sample period, and the prior. These relationships simply are not stable.
      2. The slope of the Phillips curve is a key parameter in this model. It's going to matter a lot for the policy conclusions and the forecast. And the prior is moving the parameter estimate in a direction consistent with the thinking of the people who are choosing the policy. Hmmm.

    4. I'll defer to you on the usual purposes of employing Bayesian methods in the New Keynesian literature. I just think it's important not to conflate Bayesian statistics with intuition. Properly understood, Bayesian statistics simply provides a way of regularizing inferences by combining prior information with the data under investigation. Prior information, in turn, need not have anything to do with intuition, unless intuition includes information acquired from earlier data analyses. Moreover, classical statistics is no less susceptible to manipulation by intuition, as the likelihood is just as much a product of the mind as the prior. Bad statistics is bad statistics, but there is no reason for Bayesian inferences to be more suspect than classical inferences, other things being equal.

  2. Again, Stephen, what you've shown is that the output gap alone doesn't determine inflation. Sure, high oil prices in 2012, which fell later in 2012 and 2013, also had an impact. It doesn't mean the output gap doesn't matter. You're going to run into some brick walls if you limit yourself to a monocausal explanation of everything.

    Question for Stephen: take the last 30-40 major recessions in large economies not associated with oil price shocks. How many are associated with drops in inflation? How many are associated with increasing inflation? If your hypothesis is correct, we should see 15-20 cases where inflation went up in a recession. If I'm correct, and I am, we should see very few or no sharp corrections in GDP that were not also associated with declines in inflation, like we saw in the US from 2007-2009.

    Replies
    1. So, what was the current supply shock?

    2. "You're going to run into some brick walls if you limit yourself to a monocausal explanation of everything."

      Who is limiting him or herself to a monocausal explanation of what? I'm just saying that using some output gap - which you can't measure, and which we have good reason to think would not be useful even if we could measure it - to forecast inflation or explain its causes is a very poor approach.

    3. Thanks for deleting my post. It shows that you cannot stand it when somebody points out that your previous inflation predictions (4-5%) have been way off. So much for your "scientific" economics. :D

    4. Steve should delete all of your posts, John.

  3. "Given that you past inflation predictions have been way off nobody really cares another of your predictions which is again based on bad theory."

    In modern macro-theory there is only bad theory. I found it interesting that Yellen, in the link provided, attributes low inflation since 2005, despite a rise in commodity prices, to low inflationary expectations due to credibility earned during the Volcker monetary policy of the 1980s. Rubbish. It scares me to hear that central bank governors believe this stuff. Driving these high commodity prices was rising demand in China. When it reined back its demand through direct loan-based monetary policy, its demand stalled, and so too did the rise in commodity prices. Comparing the role of oil in the world economy and its effect on inflation now with the 1970s is also nonsense.

  4. Thanks Stephen for this post; as always, sharp and insightful =)

    It seems that the uncertainty of predictions could be communicated more clearly. It is always possible for the inflation rate to rise to 2%. The question is with what probability. What is the range of inflation rates with the highest probability, based on the information available today?

    As for estimating the inflation equation, I think in recent years there has been quite a lot of research on modeling regime switching or change points in coefficients. These econometric models are likely to improve out-of-sample forecasting performance, if that is the target. But I guess there might be other considerations for central banks in deciding the model specifications they use, e.g. ease of communicating with the business world, connecting with the DSGE model so as to have a more coherent description of the economy, etc. As always, models are built to serve specific purposes. Depending on how the Fed intends to use the inflation equation, maybe it is not too bad for that purpose.... don't know...
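    As an illustration of the change-point idea (a minimal sketch on synthetic data, not any specification the Fed actually uses): let the Phillips-curve slope differ before and after a break date, and pick the break that minimizes the sum of squared residuals.

    ```python
    # Change-point Phillips curve: the slope differs before/after a break.
    # Synthetic data for illustration; the true slope flattens at t = 80.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 120
    gap = rng.normal(0.0, 1.0, T)                    # output gap (synthetic)
    slope = np.where(np.arange(T) < 80, -0.5, -0.1)  # slope flattens after t = 80
    infl = 2.0 + slope * gap + rng.normal(0.0, 0.3, T)

    def ssr_with_break(k):
        """Sum of squared residuals from separate OLS fits on [0,k) and [k,T)."""
        ssr = 0.0
        for g, p in ((gap[:k], infl[:k]), (gap[k:], infl[k:])):
            X = np.column_stack([np.ones_like(g), g])
            beta, *_ = np.linalg.lstsq(X, p, rcond=None)
            ssr += np.sum((p - X @ beta) ** 2)
        return ssr

    best = min(range(20, T - 20), key=ssr_with_break)
    print("estimated break at t =", best)  # should land near the true break
    ```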

  5. Hello Stephen,
    I just watched your debate with Mark Thoma. In my book, you won. I write for a left-of-center blog, Angry Bear, and I research effective demand.
    To me, the key issue in the debate was the causality of the Fisher equation. You said that expected inflation will follow the nominal rate in the medium and long run. This implies that the real rate is independent of monetary policy in the long run. You made a case that real rates should be low. I think they should be rising, while monetary policy is trying to push them lower. You also made a case that the short-run effects of monetary policy have worn off. And since we are now in the long run of monetary policy, inflation is low because of the low Fed rate.
    I agree with you.
    I have been calling for a rise in the Fed rate and tighter monetary policy, and I have taken some heat for it. Yet, in my research on effective demand, the output gap is much smaller than the CBO says. My research says we are actually reaching the end of the business cycle. The spare capacity is almost all used up. Some $100 billion more in real GDP and it is all gone. This leads me to want tighter monetary policy.
    Yet, I see you taking a different and complementary approach to the issue... and I like your thinking. If you were to think that the output gap was very small, you would have even more reason to want tighter policy.

    Now Thoma says that demand should come first to give support for raising nominal rates. But your distinction between the short-run and longer-run effects wins out. As I understand you, since the real rate is independent of monetary policy in the long run, a rise in the nominal rate will create more expected inflation. Like you say, there will be a reaction in the short run, but eventually expected inflation will rise to meet the nominal rate.
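    To make that logic concrete, here is a minimal sketch with made-up numbers (my reading of the argument, not Stephen's model): peg the nominal rate i, hold the long-run real rate r fixed, and let expected inflation adjust gradually toward i - r.

    ```python
    # Fisher-relation logic: i = r + expected inflation in the long run.
    # Peg i, fix r, and let expected inflation partially adjust each period.
    # All numbers are made up for illustration.
    r = 0.02        # long-run real rate, assumed independent of policy
    i = 0.03        # pegged nominal policy rate
    pi_e = 0.005    # initial expected inflation
    for t in range(12):
        pi_e += 0.3 * ((i - r) - pi_e)   # partial adjustment toward i - r
        print(f"period {t + 1:2d}: expected inflation = {pi_e:.4f}")
    # Expected inflation converges to i - r = 1%: a higher pegged nominal
    # rate means higher expected inflation in the long run.
    ```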

    I agree,
    Appreciate your work and look forward to more posts from you.

    Replies
    1. I haven't watched the video, as I really can't bear to watch myself. I spent a couple of days at KSU, and enjoyed my visit there. The "debate" wasn't quite a debate format, and I thought of it more as an opportunity to talk to the students and teach them about monetary policy. More on low inflation, policy traps, and the Fisher relation later. Mark and I had a good time together, actually. On everything except economics, we agree I think.
