Thursday, June 28, 2012

Economic Nonsense

In 1991, Paul Krugman published "Increasing Returns and Economic Geography," which the Nobel Prize committee cited in 2008 in support of Krugman's Nobel Prize in Economics. The paper is basically an exercise in normal economics. There are optimizing consumers and firms; there is a Dixit-Stiglitz monopolistic competition structure; there is an equilibrium in which prices adjust to equate quantities supplied and demanded in each market. If Krugman had submitted the paper for the 1991 SED meetings, I'm sure he would have been on the program, in spite of the absence of dynamics in the model. Indeed, no one would have had a problem calling Krugman's work macroeconomics - it fits well within the set of what modern macroeconomists like to think about.

Fast forward to 2012, and we all know what Krugman stands for, and he's been even more vociferous about these things over the last week or so than usual. One of his blog posts was this one, which is his response to some policy discussion in Britain about the state of macroeconomics. Part of what is in that piece, and other recent ones, is the argument - which we have heard from Krugman many times before, and will likely hear ad infinitum - that the Hicksian IS-LM model is "spectacularly successful." I don't know whether to laugh or cry when I read things like that. How could anyone think of that crude tool as a success in this context? What could Krugman be thinking? Does the IS-LM model tell us what a financial crisis is, and what the policy response to such an event should be? Does it tell us what quantitative easing does? Does it tell us the extent of inefficiency in the US economy that monetary or fiscal policy might correct? Does it help us understand sovereign debt problems? Does it help us understand how to regulate the financial system? Of course not! Get a life, Krugman!

Krugman asks:
So why the sense that macroeconomics is a mess?
The answer to that question is, of course, that Krugman is doing a great job of propagating the fiction that it's a mess. But it's not. When I went from session to session at the recent SED meetings, I got the sense of a group of productive, young (for the most part), and thoughtful macroeconomists, working on, and making progress in understanding, the key economic problems of our time. Here's an example. In the Friday, 4-6 PM session on Macro/Labor, there were a couple of papers - one by Violante and coauthors, and the other by Alvarez/Shimer. The Violante paper makes some progress in defining what "mismatch" is, and measuring the extent of it. The Alvarez/Shimer paper is a study of how human capital accumulation matters for unemployment in the context of sectoral reallocation. Both papers are highly topical and teach us something about what is going on currently in the US economy.

The first key point is that macroeconomists are not in disarray. They are pursuing their research programs in a sound and coherent fashion, debating the issues, and making progress.

In case you were concerned about Krugman's focus on modern macroeconomists, he doesn't have much use for theorists either:
The other thing I’d like to say is that the notion that microeconomics is in much better shape is questionable, to say the least. I mean, it’s not as if the assumptions underlying standard micro theory are, you know, true – utility maximization? Really? Micro is consistent in a way macro is not, but for the most part it’s best viewed as a metaphor that’s helpful as long as you don’t take it too seriously.
The truthiness of utility maximization? I know I have to explain that to an Econ 101 student, but not to Paul Krugman. The first thing Krugman does in laying out the model, on page 488 of this paper, is to specify a utility function, which is the objective function for all the individuals in Krugman's fictitious environment. Apparently he didn't feel the need to footnote that, and make excuses for, you know, the lack of truthiness.

The second key point is that modern "macroeconomists" do not actually think of themselves as different from "microeconomists." It's all economics, using the same set of tools - the tools that the 1991 Krugman could use to great effect.

Now, after insulting most of the economics profession, Krugman would like us all to sign on to his Manifesto for Economic Sense. For the most part, I think monetary policymakers do a good job of absorbing economic research and applying the ideas. For example, the FOMC is not perfect, but there are some good minds in the room when the committee meets. Fiscal policy leaves a lot to be desired. But injecting a dose of Krugmanite IS-LM into the mix is not going to help. I don't know about you, but I'm not signing up.

Monday, June 25, 2012

SED Meetings

A short report on the Society for Economic Dynamics meetings in Cyprus:

Plenary Talks: This was perhaps unusual for the SED, in that the three plenary talks were pretty light on theory. Andy Atkeson's Friday evening talk was purely a measurement exercise. The idea is to measure a firm's financial soundness by "distance to insolvency." An increase in leverage or in risk will decrease distance to insolvency. Andy and his coauthors (Andrea Eisfeldt and Pierre-Olivier Weill) isolate three "solvency crises," which occurred in 1932-33, 1937, and 2008. During these crises, you can see what happens to the whole distribution of firms (ranked according to distance to insolvency). Essentially the distribution collapses - all firms get a lot closer to insolvency. Andy makes a big deal of the fact that what we see happening to financial firms is similar to what happens to nonfinancial firms during crises. He wants to question the view that financial firms are especially vulnerable during a financial crisis. I'm not sure. The financial intermediation sector can be fragile, and transmit this fragility everywhere, so that insolvency is observed in both financial and nonfinancial firms. Bailing out the financial firms because they are viewed as "systemically" important may be wrong, but I don't think the work of Andy and his coauthors necessarily suggests that.
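To get a feel for the comparative statics, here is a textbook Merton-style "distance to default," which counts how many standard deviations of asset-value risk separate a firm from its default point. The actual Atkeson/Eisfeldt/Weill statistic is constructed differently (from equity data), and all the numbers below are made up; the sketch is only meant to show that higher leverage or higher volatility shrinks the distance.

```python
import math

def distance_to_default(assets, debt, mu, sigma, horizon=1.0):
    """Merton-style distance to default: standard deviations by which
    log asset value exceeds the default point over the horizon.
    Rises with asset value; falls with leverage and with volatility."""
    drift = (mu - 0.5 * sigma ** 2) * horizon
    return (math.log(assets / debt) + drift) / (sigma * math.sqrt(horizon))

# Hypothetical firm in normal times vs. a crisis with more leverage
# and more asset risk: the distance to default collapses.
normal = distance_to_default(assets=100.0, debt=60.0, mu=0.05, sigma=0.15)
crisis = distance_to_default(assets=100.0, debt=80.0, mu=0.05, sigma=0.35)
assert crisis < normal
```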

The SED has become a very large meeting, with 12 parallel sessions running. That's about 450 papers presented over three days. In spite of its size, though, the conference has retained its midwestern sensibility. The papers are mainly in modern macro and structural applied micro. However, one of the plenary talks each year is typically devoted to a presentation by someone outside the tribe, and this year's was by Christopher Udry, from Yale. Udry talked about work on field experiments in development that related to some of the financial frictions mechanisms that macroeconomists like to think about. Udry characterized the results as mixed - in some experiments it appears that credit frictions seem to matter, and in other cases not. Here, I think the development experimenters and the macroeconomists could benefit more from talking to each other. In some venues, I think this is happening already. For example, Rob Townsend at MIT has made attempts to get the two groups together. One benefit from cross-fertilization would be the integration of more serious theory into the design of field experiments and the interpretation of the evidence. In particular, it wasn't clear that some of the field experiments Udry discussed could actually tell us much about the role of credit and financial arrangements in the economy.

Finally, Monika Piazzesi presented some interesting preliminary work, joint with Juliane Begenau and Martin Schneider, on measuring bank risk. They focus in particular on making inferences about derivative positions, which are in principle difficult to measure. Some of the results indicated that the derivative positions of large banks were increasing risk rather than reducing it. Not sure how we think about this in a systemic context.

RBC is not dead. We would have to go back years to find anything that would resemble a baseline real business cycle (RBC) model in a paper presented at the SED meetings. This year's crop includes plenty of models with heterogeneous firms, heterogeneous consumers, banks and other financial intermediaries, search frictions, etc. A common view of the recent recession is that the standard representative agent RBC model does not fit the facts, particularly with regard to the behavior of labor productivity. However, this paper by McGrattan and Prescott makes the case that we can solve the "productivity puzzle" by thinking about measurement error in the national income accounts. McGrattan and Prescott argue that the key mismeasurement involves intangible investment. What's that?
Intangible capital is accumulated know-how from investing in research and development, brands, and organizations, which is for the most part expensed by companies rather than capitalized. Because it is expensed, it is not included in measures of business value added and thus is not included in GDP.
The argument is that intangible investment is a significant fraction of correctly-measured GDP, and that it is volatile and procyclical, just like tangible investment. McGrattan/Prescott claim that intangible investment helps us understand both the 1990s boom and the 2008-2009 bust as TFP-driven. You may think that factors other than TFP are important for business cycles, or that TFP is some kind of reduced form for those other factors, but people with alternative ideas need to be as serious about the data as McGrattan and Prescott are. Popular discussions about the role of Keynesian phenomena in the recent recession are, in this respect, particularly loose.

Thursday, June 21, 2012

FOMC Statement

I'm a bit behind on this, as I've been traveling. As anticipated, the FOMC voted to do something in its meeting earlier this week. From their point of view, this is the least dramatic policy action they could have taken: an extension of the "Operation Twist" program, which began in September 2011 and was scheduled to end this month. Just to refresh your memory, Operation Twist involves doing this:
Specifically, the Committee intends to purchase Treasury securities with remaining maturities of 6 years to 30 years at the current pace and to sell or redeem an equal amount of Treasury securities with remaining maturities of approximately 3 years or less.
As I have argued, most recently here, given the large stock of excess reserves outstanding, asset swaps by the Fed - whether they are swaps of reserves for government debt or swaps of short-maturity government debt for long-maturity government debt - are essentially irrelevant. What matters under these circumstances is the interest rate on reserves.

A program such as QE2, which involved swaps of reserves for long-maturity Treasuries, increases the size of the Fed's balance sheet, and potentially has some effect on asset prices as it could change beliefs about the future policy rate. However, I don't see that happening as a result of an Operation Twist program. Thus, the Fed's policy action won't accomplish anything, but it's for the most part harmless. The only harm comes if the Fed expects too much from the policy, and projects those expectations to the public. However, the Fed's problem with the public now has more to do with the people who are screaming for more.

One interesting issue: Why did Lacker bother to cast a dissenting vote?

Tuesday, June 12, 2012

Times Are Hard For Nobel-Winners Too

Here's the news all of you Nobel hopefuls were dreading. The Nobel endowment has taken a hit, and will be reducing the cash prizes from $1.4 million to $1.1 million. You might think, as the Nobel prize in economics is actually a faux-Nobel, funded by the Swedish central bank, that economists would be immune. Our Nobel-winners (just like those of us who are on central bank payrolls) basically live off seignorage. Tough luck, though. The NYT article informs us that:
The Nobel Memorial Prize in Economic Science — technically not one of the original prizes — is also being scaled back.
Should we worry about the incentive effects? Dale Mortensen says there's no problem:
At the time you’re doing your work you’re not exactly striving for the Nobel Prize, but to make a contribution, make your name, maybe make tenure. The Nobel is a great honor that comes much later in life. The money itself is a windfall. It probably wouldn’t matter what the amount is, although it’s nice to have.
Actually, Peter Diamond tells us that the cash prize is the tip of the iceberg:
One of the things that comes with the prize, besides the prestige and the money, is the opportunities to make more money.
Maybe Diamond should tell Mortensen about this.

Sunday, June 10, 2012

The Swedish Financial Crisis and the Labor Market

This may not be news to you, but it was to me. In this speech, Narayana Kocherlakota shows us how the labor market behavior in Sweden, following the early-1990s financial crisis that occurred there, looks much like what has been happening recently in the United States. This is consistent with this post, where I looked at some Canadian labor market data. Canada sailed through the financial crisis with essentially no problems in its banking sector, and the recent behavior of the labor market (or rather, labour market) in Canada looks quite different from the US. Sweden had a financial crisis in the early 1990s, and its labor market behaved subsequently like the US labor market is behaving now.

Like Reinhart and Rogoff's book, this raises more questions than it answers. How do we define a financial crisis anyway? How do we differentiate between bad macroeconomic events that are caused by problems in the financial sector and problems in the financial sector that are caused by bad macroeconomic events? What is it about financial sector problems that could make the labor market behave in unusual ways?

Monetary Policy: The Naive View

Christina Romer is certainly consistent. In spite of her many years of experience in academia and policy circles, she consistently surprises me with her un-nuanced views on economic theory and empirical evidence and how they inform policymaking. Case in point: this NYT article.

Romer wants to make the case that the state of the world dictates more accommodation by the Fed. First,
By law, the Fed is supposed to aim for maximum employment and stable prices. But the unemployment rate is 8.2 percent — a good two percentage points above what even the most pessimistic members say is its sustainable level.
...right now, the inflation measure that the Fed watches is a bit below its target of 2 percent...
the Fed’s dual mandate doesn’t say it should care about unemployment only so long as inflation is at or below the target. It’s supposed to care about both equally. If inflation is at the target and unemployment is way above, it’s sensible to risk a little inflation to bring down unemployment.
The "dual mandate" specified in the Full Employment and Balanced Growth Act of 1978, or Humphrey-Hawkins Act, is in fact quite vague. Under the Act, the Fed is supposed to be promoting maximum employment and price stability. But any creative central banker would find it easy to make an argument that his or her favorite policy fits well within the Act's guidelines. One could argue (I'm not saying this argument would necessarily be correct), for example, that "price stability" means constant prices (0% inflation) and that employment is lower than it was five years ago due to factors which the Fed cannot correct for. That would then imply that policy should be less accommodating. The set of policies consistent with the Act is so large that the dual mandate argument is not going to help in making Romer's case. She's going to have to make the case by appealing to sound economics.

So what about that? The inflation rate, as measured by the twelve-month percentage increase in the PCE deflator, is indeed just below 2%, the Fed's inflation target. The unemployment rate is indeed historically high. Given the Fed's past behavior, one might think more accommodation would be appropriate, assuming of course that the Fed's past behavior was optimal. But what is the Fed supposed to do to be more accommodative? Romer says the alternatives are:

1. Nominal GDP targeting. Romer incorrectly refers to NGDP targeting as an "operating procedure." The operating procedure is actually a description of how the FOMC directs the open market desk at the New York Fed to act. The desk can only do one thing: conduct open market operations. The current fiction (in line with what is written in FOMC statements) is that the operating procedure is the same as before the financial crisis, i.e. the desk conducts open market operations to target the fed funds rate at a level specified by the FOMC. In fact, the fed funds rate is currently determined by the interest rate on reserves, as set by the Board of Governors (for the details, see this post). The desk currently just executes whatever the current quantitative easing (QE) program of the Fed is, without regard to the quantity of reserves that are held in the system overnight.

It would not have made any sense even in pre-financial crisis times for the FOMC to direct the desk by just telling it to hit a NGDP target. Nominal GDP is measured on a quarterly basis, and the National Income Accounts numbers are published with a lag. Currently, for example, it is June 10, and the last NGDP number we have is for the first quarter of the year. The FOMC meets about every six weeks. If we were practicing NGDP targeting, how exactly would the FOMC translate the difference between actual NGDP and target NGDP in the first quarter into a directive to the desk at the next meeting? If, as in pre-financial crisis times, excess reserves in the system were essentially zero overnight, then presumably the directive to the desk would have to be in terms of a fed funds target. Under current conditions, and given how the Fed thinks about the monetary policy problem, the directive to the desk would have to be in terms of quantitative goals for the Fed's portfolio. Thus, the operating procedure under a NGDP target would necessarily have to be identical to what it is now.

NGDP targeting does not do anything other than specify the Fed's ultimate goals. As such, there are two problems with it. The first is the absence of a sound theory to justify NGDP targeting. It is unclear why an economy in which NGDP grows at a constant rate is an economy in which the central bank is doing what is optimal. Second, one can imagine circumstances under which particular NGDP targets will not be feasible. In fact, I do not think it would be feasible for the Fed to achieve 5% annual nominal GDP growth by the end of this year, for these reasons.

2. More QE. Romer and the FOMC are on the same page on this one, but I don't think QE does anything at all (again, see the last link above). At best, QE can signal future intentions of the Fed with regard to the policy rate (and thus move asset prices), but the Fed can do the same thing with "forward guidance," i.e. announcements about the future path for the policy rate. Like other people, including Miles Kimball, Romer seems to think that QE isn't doing much because the Fed hasn't done it right:
The previous rounds of quantitative easing may have done little to improve expectations because their size and duration were limited in advance. If the Fed does another round, it should leave the overall size and end date unspecified. Or, better yet, the ultimate scale and timing could be tied to the goals the Fed wants to achieve.
First, I'm not sure how you announce a policy without saying what it is. Second, the last sentence in the above quote is interesting. The Fed claims that, for example, purchases of long-maturity Treasuries will lower long bond yields. If they were confident about that, the FOMC would announce targets for long bond yields rather than quantitative goals. They don't announce the targets, therefore they must not be confident that QE does what they claim.

3. Forward guidance. As with QE, Romer likes it, but she does not like how the Fed does it:
Instead, the policy-making committee could adopt the proposal of Charles Evans, the president of the Federal Reserve Bank of Chicago, that the Fed pledge to keep rates near zero until unemployment is down to 7 percent or inflation has risen to 3 percent. Such conditional guidance assures people that the Fed will keep at the job until unemployment is down or the toll on inflation becomes unacceptable.
If you look through the FOMC minutes (a prize to the person who can find this), you'll find the FOMC's rationale for ditching Evans's suggestion, and I think that rationale was good. From what I remember, the reasoning was: (i) There are too many contingencies to worry about. You can't write them all into the FOMC statement. (ii) Making policy explicitly contingent on, for example, the unemployment rate, would be silly. The unemployment rate is determined by many factors, most of which have nothing at all to do with monetary policy. (iii) The relationship between monetary policy and real economic activity is imperfectly understood.

The important fight that is going on is not one involving the weapons of monetary policy against the poorly-performing US economy. The key struggle is in getting policy people, and those who write about policy, to use the best available economic tools and reasoning to address our current problems.

Monday, June 4, 2012

More on Unconventional Open Market Operations

Miles Kimball may have been the first New Monetarist (actually, if you read the paper, he may have been the first New-Keynesian; he's basically outlining the basic NK model in 1995). Fortunately for us, Miles wants to blog, which is guaranteed to increase the average quality of discourse in the medium. From a dismal low, you might say, but progress is progress.

Miles has taken the trouble to write at length in reply to my comments on one of his posts, so I'm encouraged to carry on the discussion.

For some background on liquidity traps, quantitative easing (QE) and what it can and cannot do, you can browse my archive, or read these particular pieces:

1. QE irrelevance.
2. An example.
3. Liquidity traps.

To understand QE, we need to know the difference between a channel system and a floor system. A good example of a channel system is the central banking framework within which the Bank of Canada works. The Bank sets a deposit rate (the interest rate on reserve accounts) and a higher rate at which it lends to financial institutions, and targets an overnight rate that lies between those two rates (i.e. the target rate lies in the "channel"). In a channel system, as long as the overnight rate lies within the channel, open market operations matter - purchases or sales of assets by the central bank will move the overnight rate.

Prior to the financial crisis, the Fed worked within what was essentially a channel system. The interest rate on reserves (IROR) was zero, excess reserves were essentially zero overnight, and the overnight fed funds rate fell between zero and the discount rate (there are some complications involving what the "fed funds rate" is, and how lending at the discount window takes place, but ignore that for now). Pre-financial crisis, if the Fed bought or sold assets (T-bills, long Treasuries, whatever), that would move the fed funds rate.

A floor system is different. Under such a system, the central bank sets an interest rate on reserves and a central bank lending rate, and plans to have a positive supply of reserves in the system overnight. As a result, the overnight rate must be equal to the IROR, by arbitrage. An open market operation in short-term government debt in a floor system will have no effect, at the margin, as the central bank is simply swapping one interest-bearing short-term asset for another. The instrument of monetary policy in a floor system is the IROR, which determines short-term nominal interest rates.

Currently, the Fed operates under a floor system. The supply of excess reserves is enormous, and the IROR determines short-term interest rates. There are some weird features of the system, such as the fact that the GSEs receive no interest on their reserve accounts, and there is some lack of arbitrage which results in a fed funds rate less than the IROR, but I think those weird features are irrelevant to how monetary policy works.

Under a floor system, we are effectively in a perpetual liquidity trap. Conventional open market operations in short-term government debt do not matter, whether the IROR is 5%, 0.25%, 10%, or zero. But, not to worry, the central bank can always change the IROR except, of course, when it hits the zero lower bound (neglecting the possibility of taxation of reserve balances, which is another issue altogether).

My irrelevance argument goes one step further. First, we have to understand why open market operations move the overnight rate in a channel system. An open market purchase of T-bills under a channel system essentially involves the transformation by the central bank of T-bills into currency. Remember that, in a channel system, overnight reserve balances are zero. The increase in outside money has to show up somewhere, so currency outstanding has to increase. Private financial intermediaries cannot convert T-bills into currency, as they are not permitted (either explicitly or implicitly in the US) to issue currency. That's why monetary policy matters in a channel system - central bank intervention works by varying the quantity of liquidity transformation.

But what happens if, for example, the central bank purchases long Treasury bonds under a floor system? The Fed issues reserves, and purchases Treasury bonds, thus transforming T-bonds into overnight reserves. Does that do anything? Why should it? A private financial institution can create a special purpose vehicle (SPV) whose only function is to hold T-bonds as assets, and finance that portfolio by rolling over overnight repos. The SPV performs exactly the same asset transformation as the central bank is performing when it purchases T-bonds. Imagine, for example, that there is an SPV that sells T-bonds to the Fed, and suppose that the overnight repos of the SPV were held by a financial institution with a reserve account. Suppose further that this financial institution increases its reserve account balance by exactly the amount of the reduction in its repo holdings after the Fed purchases the T-bonds. Clearly, the financial institution does not care whether the T-bonds that back its overnight assets are held by the Fed or the SPV. Nothing changes.
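The balance-sheet accounting in that example can be sketched directly. The institutions, names, and dollar amounts here are all hypothetical; the point is only that the bank's overnight position is identical whether the T-bonds back Fed reserves or SPV repo.

```python
def bank_overnight_assets(tbond_holder):
    """Overnight claims of a financial institution, all ultimately backed
    by $100 of T-bonds. The holder is either the Fed (the claims are
    reserves) or a private SPV (the claims are overnight repo).
    Amounts are hypothetical."""
    if tbond_holder == "Fed":
        return {"reserves": 100, "overnight_repo_on_SPV": 0}
    if tbond_holder == "SPV":
        return {"reserves": 0, "overnight_repo_on_SPV": 100}
    raise ValueError("unknown holder")

before = bank_overnight_assets("SPV")
after = bank_overnight_assets("Fed")  # the Fed buys the T-bonds from the SPV
# The bank's total overnight position is unchanged by the Fed's purchase.
assert sum(before.values()) == sum(after.values()) == 100
```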

This is just a more elaborate liquidity trap. Under a floor system, the only instrument the central bank has is the IROR. Asset purchases - of whatever - are irrelevant. Under current circumstances, this means that, as long as the Fed does not change the IROR, the inflation rate is passive. Changes in the price level are determined by the demand and supply of liquid assets - the whole array of that stuff, i.e. currency, reserves, interest-bearing government debt of all maturities, liquid asset-backed securities. The Fed can only change the composition of the total stock of liquid assets under a floor system, and so it can't change the price level without changing the IROR. What will move the price level, as long as the IROR is fixed? First, if the private sector creates more liquid assets that compete with reserves, that will increase the price level, and we will get more inflation. That is what I was worried about a while back. Not right now. If it looks unlikely that the private sector will be creating more liquid assets, and if the demand for US-dollar-denominated liquid assets rises, the price level falls and we get less inflation. That's our problem now. Of course there's nothing the Fed can do about that, as the IROR does not have far to go to reach zero.

Some specific comments in reply to Miles's post:

1. Miles argues that, even if we have some doubts about QE, why not try it? At worst it's irrelevant, so no big deal. The problem is that the Fed wants to believe it works, otherwise it looks silly, given the massive QE operations that it has engaged in. Various economists in the Fed system have been falling all over themselves to justify the actions of their superiors, and smart people like Miles are buying the arguments. The Fed has now convinced itself that QE works. In particular, the Fed thinks it works both ways. Thus, if we ultimately see what we think is too much inflation (a more remote possibility at the moment, obviously), the Fed will think it can control it through asset sales. It certainly can, but the sales only start to bite at the point where excess reserves get very close to zero.

2. Miles isn't sure exactly what friction makes QE work, but he seems confident that the friction is small, and so QE must be carried out on a very large scale in order to do much. In fact, as he says:
Balance sheet monetary policy can powerfully stimulate the economy if the Fed does enough.
That seems inconsistent with the rest of the argument. Suddenly we go from statements about how little we know to confident predictions about what we can accomplish if only we do "enough." You have to be more specific about the "enough" to give that content.

Sunday, June 3, 2012

The Beveridge Curve and the Long-Term Unemployed

It's well known that the Beveridge curve relationship (the negative correlation between the unemployment rate and the vacancy rate) shifted out as the unemployment rate began to come down from its peak of 10% in late 2009. The first chart shows the most recent update.
The data in the chart run up to March 2012, the latest date for which we have JOLTS data. The unemployment/vacancies data from December 2000 to December 2007 (the beginning of the last recession) trace out what is apparently a stable Beveridge curve. As well, the data from December 2007 through October 2009 are consistent with that stable Beveridge curve, but the post-October 2009 observations make it appear that the relationship has shifted. If we had thought that the 2000-2007 Beveridge curve was structural, we would have predicted an unemployment rate of 5.25-5.5%, conditional on the observed vacancy rate in March 2012. But the actual unemployment rate in March 2012 was 8.2%.
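The prediction exercise amounts to fitting a curve to the pre-crisis unemployment/vacancy scatter and reading off the implied unemployment rate at the current vacancy rate. Here is a sketch, with made-up (u, v) pairs standing in for the 2000-2007 JOLTS scatter; the real series differ.

```python
import math

# Hypothetical (unemployment %, vacancy %) pairs standing in for the
# pre-crisis scatter; the actual JOLTS/CPS numbers differ.
pre_crisis = [(4.0, 3.4), (4.7, 3.0), (5.5, 2.6), (6.0, 2.4)]

# Fit log u = a + b * log v by ordinary least squares.
xs = [math.log(v) for u, v in pre_crisis]
ys = [math.log(u) for u, v in pre_crisis]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def predicted_unemployment(vacancy_rate):
    """Unemployment rate implied by the fitted pre-crisis curve."""
    return math.exp(a + b * math.log(vacancy_rate))

# If the old curve were structural, a vacancy rate like that of early
# 2012 would imply unemployment well below the observed 8.2%.
assert predicted_unemployment(2.6) < 8.2
```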

Suppose that we disaggregate, and look for Beveridge relationships in terms of duration of unemployment. The second chart is a scatter plot of the vacancy rate vs. the unemployment rate for those unemployed less than 5 weeks (those unemployed less than five weeks divided by the total labor force).
In this chart, you don't see any correlation at all. But in the next two charts, you get nice Beveridge curve correlations, for those unemployed 5 to 14 weeks, and 15 to 26 weeks respectively.

So, now you know what group has to be explaining the shift in the Beveridge curve in the first chart. The next chart is for those unemployed 27 weeks or more.
In this chart I have connected the dots and supplied some dates. The shifts you see in the Beveridge curve in the first chart seem to be entirely due to what is going on with the long-term unemployed. Further, in this last chart you can see an earlier shift, which occurred after the 2001 recession.

Finally, let's take a look at the number of those unemployed 27 weeks or more, as a fraction of total unemployed, since 1948.
Long-term unemployed as a fraction of total unemployed increases in all recessions, for obvious reasons, but the behavior of the time series after the last 3 recessions is striking. After the last 3 "jobless recovery" recessions, the fraction of unemployed who are long-term unemployed has remained persistently high. The most recent recession is just an exaggerated version of the previous two - the long-term unemployed account for an extremely high fraction of total unemployed, and that fraction is persistent, just as in the previous two recessions.
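The series in that last chart is just a ratio of the duration counts. A minimal sketch of the computation, with hypothetical counts rather than the actual CPS figures:

```python
# Hypothetical duration counts (thousands), in the spirit of the CPS
# breakdown; the real BLS numbers differ.
unemployed_by_duration = {
    "under_5_weeks": 2500,
    "5_to_14_weeks": 2800,
    "15_to_26_weeks": 2200,
    "27_weeks_plus": 5300,
}

total = sum(unemployed_by_duration.values())
long_term_share = unemployed_by_duration["27_weeks_plus"] / total
print(f"long-term share of unemployment: {long_term_share:.1%}")
# Persistently above 40% in this made-up post-recession snapshot.
assert long_term_share > 0.4
```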

So, where do these observations lead us? The unemployment rate is currently unusually high, and high in a way that does not appear to be consistent with posted vacancies. But if we disaggregate, it seems that the behavior of the time series has a lot to do with the fact that there are many more long-term unemployed now than is typically the case. But why has that happened? Especially since the phenomenon appears not to be new (going back to 1990 at least), it's hard to avoid thinking about mismatch. But in order to evaluate that story, we need more information about the long-term unemployed. How many of these are former construction workers? How many are David Autor's middle-skill people? Did the financial crisis merely increase the rate of structural change that was already occurring in labor markets? Are there other features of the long-term unemployed we need to be thinking about? The long-term unemployed may have depreciated skills; they may have been picked over as a group and be perceived as being of low average quality. Their search effort may be low. All of these things matter for policy, particularly for unemployment insurance programs.

What is clear is that conventional models typically have insufficient heterogeneity to explain these facts. In Mortensen-Pissarides models, for example, labor is homogeneous, and mismatch is embedded in reduced-form matching functions. I'm interested in learning about work you know about that either already captures this stuff, or could potentially do so.

Friday, June 1, 2012

Quantitative Easing: The Conventional View

I ran across two pieces by well-known macroeconomists that support - wholeheartedly - the Fed's view of why it conducts quantitative easing (QE) exercises, and why QE is supposed to work. The first is by Miles Kimball, in a blog piece intended for a wide audience. The second is this paper by Roger Farmer, which is a quasi-formal approach to the question.

Kimball's narrative would make any Old Keynesian or New Keynesian comfortable. Here's what he says:

1. There is a Phillips curve:
The “natural level of output” is the level of output at which core inflation will be steady. Above the natural level of output, core inflation rises. Below the natural level of output, core inflation falls.
Here's core inflation for the last 5 years, as measured using the core CPI and core PCE deflator.
Core inflation has been rising (mostly) by either measure since late 2010. Does Kimball think the US economy is above the "natural level of output"? I doubt it. If not, he should re-think his definition. Maybe he could tell us how to measure the natural level of output while he is at it.

2. What should a central bank do?
According to Kimball, this prescription need not be modified given the current state of affairs. He says it does not matter if short-term nominal interest rates are zero, or close to it, because:
Whenever the Fed buys any asset, its price goes up.
Kimball makes this seem like a simple application of what we learned in Econ 101, but he thinks we might need a little Finance too:
One of the most useful facts in all of Finance comes into play. For assets, a higher price is basically the same thing as a lower interest rate.
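That much is just bond-pricing mechanics. For a zero-coupon bond, price and yield move inversely, as in this minimal sketch with hypothetical prices:

```python
# Price/yield mechanics for a zero-coupon bond: a higher price is the
# same thing as a lower interest rate. Prices are hypothetical.
def ytm(price, face=100.0, years=10):
    """Annualized yield to maturity of a zero-coupon bond."""
    return (face / price) ** (1.0 / years) - 1.0

# Bidding the price up from 75 to 82 lowers the yield
assert ytm(82.0) < ytm(75.0)
print(f"price 75 -> yield {ytm(75.0):.2%}")
print(f"price 82 -> yield {ytm(82.0):.2%}")
```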
Another useful piece of finance is the Modigliani-Miller theorem. Kimball might want to explain why, if the Fed issues reserves (overnight liabilities) and buys some other assets, and private financial intermediaries are perfectly capable of issuing overnight liabilities and buying the same assets, the Fed's QE is not undone.
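The Modigliani-Miller point can be put as a stylized bookkeeping exercise, under the assumption stated above that private intermediaries can issue overnight liabilities against the same assets the Fed buys; the quantities are illustrative:

```python
# A stylized bookkeeping sketch of the Modigliani-Miller point: if private
# intermediaries can issue overnight liabilities against the same assets,
# a QE swap leaves the consolidated private portfolio unchanged.
# Quantities are illustrative.
private = {"long_bonds": 100, "overnight_claims": 50}

def qe_swap(portfolio, size):
    """Fed buys long bonds from the private sector, paying with reserves."""
    p = dict(portfolio)
    p["long_bonds"] -= size
    p["overnight_claims"] += size
    return p

def shadow_bank_offset(portfolio, size):
    """An intermediary issues overnight liabilities and holds long bonds,
    reversing the transformation the Fed just performed."""
    p = dict(portfolio)
    p["long_bonds"] += size
    p["overnight_claims"] -= size
    return p

after = shadow_bank_offset(qe_swap(private, 20), 20)
assert after == private  # the swap is undone: no effect on portfolios
```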

3. Lowering interest rates increases aggregate demand, and there are no practical limits on the stimulative effects the Fed can have:
What if all the assets in the world got down to a zero nominal interest rate and the economy still didn’t have enough stimulus? Then, and only then, we would be in deep, deep trouble on the aggregate demand front from which there would be no escape through monetary policy. But we are far, far away from that situation. Simple economic models studied by economists often have this happen because they have so few types of assets in them, but the real world has a huge number of different types of assets, some with nominal interest rates that are still very far from zero.
What's wrong with that paragraph? What's right about it? A central bank is a financial intermediary. Its power to alter the allocation of resources and economic welfare derives from its monopoly over the issue of some special kinds of liabilities (currency and reserves) which are used in retail transactions and large-value financial transactions. As Kimball notes, all but a small quantity of the reserves currently outstanding are currently "asleep," i.e. they sit during the day and overnight, and are not so different from T-bills (except that more economic agents can hold T-bills than have reserve accounts). If the Fed issues reserves and buys long-term Treasury bonds under these conditions, that can have no effect, as that's a process of intermediating Treasury bonds that is no different from what can be done by a shadow bank. If the Fed issues reserves and buys mortgage-backed securities issued by Fannie Mae or Freddie Mac, that amounts to the same thing - no effect. However, if the Fed were to, for example, buy mortgages directly, that would be an entirely different game. How good is the Fed at screening mortgage borrowers? Is the Fed going to target particular segments of the mortgage market? Maybe Congress wants some say in how the Fed does that? Maybe some people will be lobbying Congress in a serious way to make sure that their segment of the credit market gets the intervention? Sounds like opening a can of worms, doesn't it?

So, Kimball needs to think harder about QE. What about Farmer? The empirical part of the paper we have mostly seen before. These are the kind of event-study pictures that are sometimes used to "prove" that QE works. Any self-respecting economist will take more convincing than that; a serious model and some solid structural work would be nice. The theory part of Farmer's paper is supposed to be model-free. It's basic intertemporal asset pricing, and the idea is the following. Suppose there exists nominal government debt of different maturities. In Woodford fashion, we can think of monetary policy as a contingent rule for setting the one-period nominal interest rate. But what if the rule is not feasible, i.e. the one-period nominal interest rate cannot be nonnegative in all states of the world under the rule we would like to impose? Then it must be optimal for the one-period nominal rate to be zero in some states, but we then need to choose the short nominal rates in the other states. Given term structure relationships, the thought experiment we conduct in determining the optimal policy rule effectively involves changing some long-term nominal rates. Apparently that's how Farmer thinks about QE. Problem: to say how QE works, we have to have the relevant asset swaps in the model. There are no asset quantities in sight in the framework that Farmer lays out. That isn't much help.
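The term-structure link in that thought experiment can be illustrated under the expectations hypothesis: a rule for short rates across states is implicitly a rule for long rates. A minimal sketch with hypothetical rates:

```python
# Under the expectations hypothesis, the two-period rate is pinned down by
# the current and expected future one-period rates, so choosing short rates
# across states implicitly chooses long rates. Rates are hypothetical.
def two_period_rate(r1_now, r1_expected):
    """Two-period yield implied by the expectations hypothesis."""
    return ((1.0 + r1_now) * (1.0 + r1_expected)) ** 0.5 - 1.0

# At the zero lower bound today, committing to a lower future short rate
# lowers today's long rate:
assert two_period_rate(0.0, 0.01) < two_period_rate(0.0, 0.02)
```

Note that this is all about prices; Farmer's framework, as described, has no asset quantities in it, which is exactly the complaint above.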