Wednesday, December 30, 2015

Neo Fisherian Musings?

Brad DeLong has a comment on this blog post by Larry Summers, which I also wrote about. Brad agrees with Summers, and sums up with:
...the Fed does not engage in reflective equilibrium. It rejects the conclusions of what I regard as the standard Patinkin-style existing model of Krugman (1999). But it does not propose an alternative model. There seems to me to be no theoretical ground, no model even considered as a filing system, underpinning the "orthodox" modes of thought that the Fed believes. And it does not seem to feel this absence as a problem. I find that somewhat disturbing.
It's unfortunate that Brad's post distracted me, as I was busily engaged in reflective equilibrium, but what the heck. I thought I would check out Krugman's 1999 article, to refresh my memory. As I thought, the meat of the thing isn't actually a "Patinkin" model, it's a simple Lucas cash-in-advance model. As far as I remember, this thing has not been rejected where I work, though some of my colleagues might object to the CIA constraint as being insufficiently grounded in theory. Further, out here in flyover country (or the pond, as it's come to be known in recent days), we take offense when people call our models "orthodox," and if someone suggested we think of our theories as "filing systems," we would likely scream and run away.

Oh, by the way, here's something interesting. Let's take Brad seriously and look at the implications of Krugman's (1999) model. There's a representative, infinitely lived agent, with discount factor B and utility function u(c), where c is current consumption. There is a fixed endowment of perishable consumption goods each period, which must be purchased with money, which is supplied by the central bank. We can flesh out the details of transactions, for example there can be many households which sell their endowment and purchase output each period, or output could be grown on "Lucas fruit trees," with consumers owning shares in the trees. But those are details that are irrelevant for the equilibrium. As well, money could be injected by way of lump-sum transfers, or through open market operations. Again, the details don't matter for what we'll do here. Let R(t) denote the nominal interest rate, i(t) the inflation rate, and M(t) the aggregate money stock.

Consider deterministic equilibria in which the central bank sets policy as a sequence of nominal interest rates R(0), R(1), R(2),... . Then, intertemporal optimization implies

(1) i(t+1) = B - 1 + BR(t),

for t = 0, 1, 2, ... . The central bank then supports the sequence of nominal interest rates in a rational expectations equilibrium through the appropriate series of money growth rates m(1), m(2), m(3), ..., where

(2) m(t) = M(t)/M(t-1) - 1,

and the sequence of money growth rates required to support the interest rate policy is

(3) m(t+1) = B - 1 + BR(t).

Basically, Krugman's model is a purely Fisherian model of inflation - there's no liquidity effect, only a Fisher effect, so the nominal interest rate always reflects only anticipated inflation. So if the central bank thinks inflation is too low and wants it to go up, what should it do? Equation (1) says it must raise the nominal interest rate.
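Equations (1) and (3) make the neo-Fisherian logic easy to check directly. Here's a minimal Python sketch; the discount factor B = 0.99 and the rate path are illustrative assumptions of mine, not numbers from Krugman's paper:

```python
# Equilibrium paths in the cash-in-advance model sketched above.
# B = 0.99 and the rate path are illustrative assumptions.

def inflation_path(R, B=0.99):
    """Equation (1): i(t+1) = B - 1 + B*R(t), for a nominal rate sequence R."""
    return [B - 1 + B * r for r in R]

def money_growth_path(R, B=0.99):
    """Equation (3): the money growth rates m(t+1) that support the rate path.
    In this endowment economy they coincide with inflation."""
    return inflation_path(R, B)

# A central bank that raises the nominal rate from 0.25% to 2%:
R = [0.0025] * 4 + [0.02] * 4
print([round(i, 4) for i in inflation_path(R)])
```

Raising R(t) raises next period's inflation one-for-one (scaled by B): the Fisher effect, with no liquidity effect anywhere in sight.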

So, I'm pleased to report that Brad is a neo-Fisherian. Seemingly, so is Paul.

What If?

Tim Taylor has written a blog post about a talk by Larry Summers on secular stagnation. Taylor doesn't seem to quibble with Summers's interpretation of what is going on in the data, but he thinks that we should view secular stagnation as a long-run problem, with potential long-run solutions. This appears to have put Paul Krugman in a bad mood, which in turn ruined Taylor's day.

What seems to have irked Krugman most is Taylor's take on short-term stimulus:
...seems to me that we are in a situation where monetary and fiscal stimulus that has been extremely high by historical standards since about 2008 has had a much smaller effect on output and inflation than would have been expected before the Great Recession.
Krugman's first problem with this is that fiscal stimulus has not in fact been "extremely high" by historical standards. That's certainly correct, if we use government spending on final goods and services as our measure of fiscal stimulus. Here's government spending since the end of the recession:
And government spending as a percentage of GDP from the beginning of 2007:
So, government spending has been falling for most of the period since the end of the recession, and government spending accounts for about two percentage points less, as a fraction of GDP, than before the recession. That hardly looks like a stimulus program.

Krugman's next point is this:
...there is overwhelming evidence that fiscal policy has strong effects — that the multiplier is well above one. I’ve seen nothing suggesting that fiscal policy has lost traction...
Finally, he states:
...if you have a persistent problem of inadequate demand — which is the secular stagnation argument — then find things that will boost demand.

So, Krugman is making the case that we have a problem of "inadequate demand" that has persisted since the beginning of the Great Recession, and that fiscal multipliers are large - "well above one." First, suppose we look at the behavior of real GDP and of real GDP minus government expenditures - the latter reflecting what Krugman would consider private demand, I think.
We could also look at the time series in terms of year-over-year growth rates:
The recovery from the last recession has indeed been sluggish, with average real GDP growth of 2.2% since the end of the recession. But private demand growth is certainly less sluggish, with average growth of 3.0%. That's not as strong as growth coming out of the 1981-82 recession, for example. But, if fiscal multipliers are actually large, then given the large decline in government spending since the recession, we might have expected that the performance of private demand would be far worse. And I don't think we would expect to see unemployment in the 5 percent range, as it currently is.

Another way to look at this is by way of a counterfactual, though I'll warn you that this won't be anything fancy. Suppose that the government spending multiplier is 2, which seems in the "well above one" ballpark, and that government spending had grown at 3% per year since the end of the recession. Basically, think of a Keynesian cross with investment and net exports exogenous and held constant (I warned you this would be crude, but I'm trying to do some kind of Krugman calculation), and we're going to change the path for government spending from what it actually was and look at the effect, which gives:
So, that gives us a big effect. Given these assumptions, real GDP would have grown at an average rate of about 4.1% instead of 2.2%, with real GDP 13.3% higher than it actually was in the third quarter of 2015.
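The mechanics of that counterfactual are simple enough to write down. The sketch below uses invented placeholder series, not the actual NIPA data; the only economics in it is that the change in output is the multiplier times the change in government spending:

```python
# Crude Keynesian-cross counterfactual, as described above: hold investment
# and net exports fixed, replace the actual path of government spending with
# one growing 3% per year, and add multiplier * (G_cf - G_actual) to GDP.
# The series below are illustrative placeholders, NOT the actual NIPA data.

def counterfactual_gdp(gdp_actual, g_actual, g_growth=0.03, multiplier=2.0):
    # Counterfactual G: starts at the actual level, grows at g_growth per year
    g_cf = [g_actual[0]]
    for _ in range(1, len(g_actual)):
        g_cf.append(g_cf[-1] * (1 + g_growth) ** 0.25)  # quarterly compounding
    return [y + multiplier * (gc - ga)
            for y, gc, ga in zip(gdp_actual, g_cf, g_actual)]

# Placeholder quarterly series (billions of dollars):
gdp = [15000, 15075, 15150, 15230]
g = [3200, 3180, 3160, 3140]  # declining G, as in the actual post-recession data
print([round(y, 1) for y in counterfactual_gdp(gdp, g)])
```

With a multiplier of 2 and actual government spending falling, the gap between counterfactual and actual GDP compounds quickly, which is why the cumulative effect in the exercise above is so large.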

But how would we have produced all of that extra output? Suppose that output per worker is the same on the counterfactual path as the actual one, which should put a lower bound on the number of workers required to produce the extra GDP, given that output per worker should fall with higher government spending. Then employment would have to be 13.3% higher now than it actually is. In the CPS, total employment is about 150 million, so it would require 20 million workers to produce the extra output needed to support the higher level of GDP. Where would those workers come from? Well, with the unemployment rate currently at 5%, 1.5 million workers transiting from unemployment to employment would give an unemployment rate of about 4%, which matches the lowest unemployment rate we have seen in the last 40 years, without putting much of a dent in the worker shortfall. So, the extra workers would have to come from those who are currently not in the labor force. The employment/population ratio (epop) would have to go to 67.2%. Here's the actual time series:
So 67.2% is much higher than 59.3%, which is the current epop ratio, and is also higher than epop at the previous peak (63.4% in December 2006) or the all-time high of 64.7% in April 2000. Recall that epop has fallen since before the recession in line with a decline in the labor force participation rate. A substantial part of that decline was due to an aging in the population, but also to a decline in labor force participation in the young and prime-age groups. So, given our current demographics, and assuming no difference in average labor productivity, to produce the quantity of output consistent with a multiplier of 2, and what I think Krugman would consider mild stimulus (3% growth per year in government spending), each age group would have to be working much harder than they are now, and harder than any of their predecessors did.
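The employment arithmetic in the last two paragraphs is easy to check directly. The 150 million, 5%, and 59.3% figures are the rounded numbers used above; constant output per worker is the maintained assumption:

```python
# Back-of-the-envelope employment arithmetic for the counterfactual above.
# Assumes constant output per worker, so employment scales with output.

employment = 150e6          # CPS total employment, roughly
unemployment_rate = 0.05
output_gap = 0.133          # counterfactual GDP is 13.3% higher
epop = 0.593                # current employment/population ratio

extra_workers = employment * output_gap          # ~20 million
labor_force = employment / (1 - unemployment_rate)
unemployed = labor_force - employment

# Moving 1.5 million people from unemployment to employment:
new_u_rate = (unemployed - 1.5e6) / labor_force  # ~4%, a 40-year low

# Required employment/population ratio on the counterfactual path:
epop_required = epop * (1 + output_gap)          # ~67.2%

print(round(extra_workers / 1e6, 1), round(new_u_rate, 4),
      round(epop_required * 100, 1))
```

Even emptying the unemployment pool down to a 40-year-low rate covers less than a tenth of the required workers; the rest would have to come from outside the labor force.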

So, it seems hard to make Krugman's argument stick - quantitatively. Further, there are plenty of other reasons to find fault with what he is saying:

1. When Keynesians say "inadequate demand" they really mean that prices and wages are screwed up. How does wage and price stickiness persist for more than seven years after the beginning of the Great Recession? How does a Summers secular stagnation sticky-wage-and-price phenomenon persist indefinitely? How do we square these ideas with what we know about the frequency and size of price and wage changes?

2. As I pointed out in this post and this one, the labor market in the U.S. is actually quite tight. There's no "slack" there, in Krugman's sense.

3. Summers's secular stagnation argument is that real interest rates are low because of a dearth of investment opportunities. But, according to this piece by Gomme, Ravikumar, and Rupert, the measured real rate of return on capital has not declined. Here's their Figure 2:
Note, in particular, the "business after-tax" number, which is higher than at any point in the time series. As well, investment rates are not low. Here's total investment (including housing) as a percentage of GDP:
And here's non-residential investment:
So, if anything, investment rates are high. To address why real interest rates are low, it's likely more productive to think about how asset markets work - for example to explore issues related to safe asset shortages.

What to conclude? Taylor should not take Krugman's jabs too seriously. We should be thinking about long-run issues.

Wednesday, December 23, 2015

Summers in Winter

Larry Summers has a post in the Financial Times about - you guessed it - secular stagnation, and what he doesn't like about current monetary policy. In case you are looking for clarification on what Summers's theory of secular stagnation is all about, you won't find it here. He observes, as is well-known, that real rates of interest are low and expected (based on current asset prices) to remain so for some time. Why are real interest rates low? According to Summers, because of secular stagnation. And...
...secular stagnation is the hypothesis that the IS curve has shifted back and down so that the real interest rate consistent with full employment has declined.
Which does not tell me much. Maybe I can find the answer in this transcript of a Summers conference talk, which he references. Here's the relevant passage:
...broad technological features of the economy have changed that should not have been expected to be constant. That there are a variety of trends that are evolving that have led to lower real rates with the right presumption being that that will continue to be the case for quite some time to come.
What exactly are these "broad technological features" that changed? Apparently we should have expected them to change, whatever they are. What is the "variety of trends" and how are they evolving? Suffice to say that the broad technological features have changed in a persistent way, and along with the variety of evolving trends will cause low real rates of interest to continue to be low indefinitely. If you're enlightened by this, you'll have to explain it to me.

Now, on to Summers's policy critique. He has four complaints.
First, the Fed assigns a much greater chance that we will reach 2 per cent core inflation than is suggested by most available data. Inflation swaps suggest inflation on the Fed’s preferred PCE deflator measure will average only 1 per cent over the next 3 years, 1.2 per cent over the next 5 years and 1.5 per cent over the next 10 years. Survey measures of expected inflation are falling not rising. Moreover, if account is taken of quality change inflation measures would have to be further reduced.
There are many ways to measure anticipated inflation. We can use inflation swaps data, as Summers does; we can look at the breakeven rates implied by the yields on nominal Treasury securities and TIPS; and we can look at survey measures. What do people who do forecasting for a living, and who have access to all of that data, say? The Philly Fed's most recent Survey of Professional Forecasters has predictions of PCE headline inflation for 2016 and 2017, respectively, of 1.8% and 1.9%, which is pretty close to the 2% PCE inflation target. A Wall Street Journal survey shows a CPI inflation forecast that seems roughly consistent with the December FOMC projections for PCE inflation. So, it seems that "most available data," filtered through the minds and models of professional forecasters, suggests no less optimism than the FOMC is expressing in its projections, about achieving 2% inflation in the future.

Second, the Fed seems to mistakenly regard 2 per cent inflation as a ceiling not a target. One can reasonably argue that after years of below target inflation, it is appropriate to have a period of above target inflation. This is implied by arguments for price level targeting. Alternatively, it seems reasonable to simply suggest that the Fed should run equal risks of over and under shooting its inflation target. I would actually argue given the observed costs of deflation that the costs of under shooting the target exceed the costs of overshooting it.
In its updated "Statement of Longer-Run Goals and Policy Strategy," the FOMC tells us, with respect to its inflation target,
The Committee reaffirms its judgment that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve’s statutory mandate.
So, the goal is clearly 2 percent inflation, as measured using the headline PCE deflator. Is this a "ceiling," as Summers states? Later in the document, the FOMC states:
In setting monetary policy, the Committee seeks to mitigate deviations of inflation from its longer-run goal and deviations of employment from the Committee’s assessments of its maximum level.
So, deviations of inflation from 2%, on both the high and low sides, are considered by the FOMC to be undesirable. Why does it look to Summers as if 2% is a ceiling? He doesn't give us much to go on.

Third, the Fed seems to be in the thrall of notions that might be right but do not to my knowledge have analytic support premised on the idea that the rate of change of interest rates as distinct from their level influences aggregate demand. It is suggested that by raising rates the Fed gives itself room to lower them. This is tautologically true but I know of no model in which demand will be stronger in say 2018 if rates rise and then fall than if they are kept constant at zero. Nor conditional on their reaching say 3 per cent at the end of 2017 do I know of a reason why recession is more likely if the changes are backloaded. I would say the argument that the Fed should raise rates so as to have room to lower them is in the category with the argument that I should starve myself in order to have the pleasure of relieving my hunger pangs.
Unlike Summers, I do know of a model in which "...demand will be stronger in say 2018 if rates rise and then fall than if they are kept constant at zero." It's a model that Summers should like - a plain vanilla reduced-form New Keynesian model. This comes from Jim Bullard's "Permazero" speech, with reference to work by John Cochrane. Here, I'll extract Jim's Figure 2 from his talk:
Here, i is the nominal interest rate, Greek pi is the inflation rate, and x is the output gap. The figure shows the results of a reduction in the nominal interest rate to zero, followed, after a period of time, by a gradual increase in the nominal interest rate so as to achieve the inflation target. You can see that the sharp drop to zero in the nominal rate also gives a sharp - but temporary - increase in output. With a gradual increase in nominal interest rates, the corresponding drop in output is smaller, but persists for a longer time. So, if the nominal interest rate stayed at zero instead of increasing, then the output gap would stay at zero. But, suppose that we do Summers's "2018" experiment, i.e. return to zero somewhere in the right-hand side of the figure. Then the output gap will indeed be positive for a period of time. So, Summers is incorrect. Indeed, I think it would be difficult to find a model with a non-neutrality of money where this is not the case. Typically, a change in the nominal interest rate produces a short-run non-neutrality. That is, there are temporary real effects, and if the nominal interest rate persists at this different level, the real effects disappear. Fundamentally, it is true that the central bank can't move rates down unless it first moves them up. Whatever the benefits of stabilization by way of monetary policy are, the central bank cannot exploit them unless it sometimes chooses to have market nominal interest rates go up.
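For readers who want to see the mechanism rather than take the figure on faith, here is a minimal perfect-foresight sketch of the textbook three-equation New Keynesian block. This is not a replication of Bullard's Figure 2 or Cochrane's model; the parameter values are assumptions of mine, chosen only to illustrate the qualitative point:

```python
# Minimal perfect-foresight sketch of the textbook New Keynesian block:
#   IS curve:       x(t)  = x(t+1) - sigma * (i(t) - pi(t+1) - r)
#   Phillips curve: pi(t) = beta * pi(t+1) + kappa * x(t)
# Solved backward from a terminal steady state. Parameters are illustrative
# assumptions; this is NOT a replication of Bullard's Figure 2.

def solve_path(i_path, beta=0.99, sigma=1.0, kappa=0.1, pi_terminal=0.0):
    r = 1.0 / beta - 1.0              # steady-state real interest rate
    T = len(i_path)
    x = [0.0] * (T + 1)               # output gap
    pi = [0.0] * (T + 1)              # inflation
    pi[T] = pi_terminal               # end at the steady state
    for t in range(T - 1, -1, -1):
        x[t] = x[t + 1] - sigma * (i_path[t] - pi[t + 1] - r)
        pi[t] = beta * pi[t + 1] + kappa * x[t]
    return x[:T], pi[:T]

r = 1.0 / 0.99 - 1.0
# Rate pegged at zero for 8 periods, then raised gradually to neutral:
i_path = [0.0] * 8 + [r * k / 4 for k in range(1, 5)] + [r] * 8
x, pi = solve_path(i_path)
# Output is temporarily above trend while the rate sits below neutral, and
# the real effects die out once the rate settles at its neutral level.
print(round(x[0], 4), round(x[-1], 4))
```

The key property is the short-run non-neutrality discussed above: a rate held away from neutral moves the output gap temporarily, and the effect vanishes once the rate settles down.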

Fourth, the Fed is likely underestimating secular stagnation. It is failing to recognise its transmission from the rest of the world and it is overestimating the degree of monetary accommodation now present and likely to be present in the future by overestimating the neutral rate. I suspect that if nominal interest rates were 3 per cent and inflation were far below target there would be much less pressure to raise them than there has been of late. The desire to raise rates reflects less some rigorous Philips curve analysis than a sense that zero rates are a sign of pathology and an economy creating 200,000 jobs a month is not diseased. The complexity is that zero rates may be less abnormal than is supposed because of fundamental shifts in the saving investment balance.
For this, it helps to focus on three key sentences in this paragraph:

1. "...the Fed is likely underestimating secular stagnation." Since Summers has not defined the concept clearly, it's hard to see how anyone could pay too little attention to it. But economists - both inside the Fed and outside - certainly are not ignoring the phenomenon of low real interest rates and the implications. Far from it.
2. "I suspect that if nominal interest rates were 3 per cent and inflation were far below target there would be much less pressure to raise them than there has been of late." Here's the time series for the fed funds rate and PCE inflation since 1960:
If we take "inflation far below target" to mean zero or lower, say, then a 3% fed funds rate combined with inflation far below target has never occurred in the time series since 1960. If it did occur, then I think people would be scratching their heads about it. Would the Fed be talking about raising rates? Who knows? Seems like asking what you would do if you came around a bend in the road and encountered a two-headed goat.
3. "The desire to raise rates reflects less some rigorous Philips curve analysis than a sense that zero rates are a sign of pathology and an economy creating 200,000 jobs a month is not diseased." First, I'm not a big fan of using "rigorous" and "Phillips curve" in the same sentence. Second, it is true that a standard argument for liftoff is that the current level of the policy rate is not consistent with: (i) past Fed behavior and (ii) the proximity of inflation and unemployment to the Fed's goals. So, to justify continued zero interest rate policy, one would have to make a case that there is something wrong with the way policy was conducted in the past, or something different about current circumstances. Summers might say that the different circumstance is secular stagnation. If so, he needs to be more explicit about what this is about.

Tuesday, December 22, 2015

On the Road to Normal

The FOMC's "Policy Normalization Principles and Plans" entered the execution stage last week, as of course you know. The first stage in normalization was liftoff - the departure of the fed funds target range from 0-0.25%, where it had been for the last seven years. As I outlined in this St. Louis Fed Review piece, actual monetary policy implementation over the foreseeable future will be anything but normal, due to some quirks in the U.S. financial system and our large central bank balance sheet.

A typical central bank formulates policy in the very short run by choosing a target for some short-term (usually overnight) nominal interest rate. The procedure for hitting that target depends on the nature of financial markets, and the structure of the banking system, among other things. One approach is to operate a channel or corridor system, under which the central bank lends to financial institutions at x%, offers financial institutions deposits (reserves) at the central bank at y%, with a target overnight interest rate of z%. The channel is structured with x >= z >= y, and arbitrage prevents the overnight rate from escaping the channel. A good example of a channel system in operation is Canada, where x - z = z - y = 25 basis points. In the Canadian banking system there are no reserve requirements, and the financial system operates essentially with zero reserves (with a little slippage) overnight while the corridor system is in operation. The U.S. system before the financial crisis was a type of channel system with the interest rate on reserves at zero, the Fed's discount rate determining the upper bound on the overnight rate, and the fed funds rate serving as the target interest rate.
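To make the arbitrage logic concrete, here's a small sketch of a Canadian-style corridor. The function names and the frictionless "shadow rate" are my own illustrative constructions, not official terminology:

```python
# Sketch of a channel/corridor system: the central bank lends at x, pays y on
# deposits, and targets z, with x >= z >= y. Arbitrage keeps the overnight
# rate in [y, x]: no bank borrows in the market above the lending rate x, and
# none lends below the deposit rate y. Spreads below are Canada's 25 bp.

def corridor(target_z, half_width=0.0025):
    """Return (lending rate x, deposit rate y) symmetric around target z."""
    return target_z + half_width, target_z - half_width

def market_rate(shadow_rate, x, y):
    """Overnight rate implied by arbitrage: the frictionless 'shadow' rate,
    clamped to the corridor [y, x]."""
    return min(max(shadow_rate, y), x)

x, y = corridor(0.0050)           # 0.5% target: lend at 0.75%, deposit at 0.25%
print(market_rate(0.0100, x, y))  # excess demand for funds: capped at x
print(market_rate(0.0010, x, y))  # excess supply of funds: floored at y
print(market_rate(0.0052, x, y))  # interior shadow rate passes through
```

When reserves are abundant, the shadow rate is pushed down to y and the corridor collapses to a floor system, which is the case discussed next.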

If there are sufficient reserves outstanding (and sufficient need not be much - see my Review article on what happened in Canada, 2010-2011), then a channel system becomes a floor system. That is, in the example, arbitrage should dictate that the overnight rate is y%. But, in the U.S. financial system, arbitrage is imperfect - there are frictions. First, government sponsored enterprises (GSEs) cannot receive interest on their reserve accounts with the Fed, by law. Second, the financial institutions that do receive interest on reserves - depository institutions - face regulatory costs to holding reserves, which is where the friction comes in. So, the Fed has another instrument - overnight reverse repurchase agreements (or ON-RRPs) - which helps it to hit its target for the overnight fed funds rate. An ON-RRP is an overnight loan to the Fed, so think of this as just another interest-bearing Fed liability, like reserves. There is an expanded list of counterparties who can engage in ON-RRPs with the Fed, and that list includes GSEs and money market mutual funds - the first set of institutions cannot receive interest on reserves, and the second do not have reserve accounts. So, the Fed's ON-RRP facility extends the reach of interest-bearing Fed liabilities.

With the ON-RRP facility in place, U.S. monetary policy implementation is unique in the world, as far as I know. What, under ideal conditions, would be a floor system (given the large quantity of reserves outstanding) is actually a floor with a sub-floor. The Fed's discount rate (or "primary credit rate") is currently set at 1.0% which, if there were zero excess reserve outstanding, would normally determine an upper bound (roughly) on the fed funds rate. The floor is the interest rate on excess reserves, or IOER, which is currently set at 0.5%. The sub-floor is the ON-RRP rate, currently set by the Fed at 0.25%. The December 16 FOMC statement states that:
...the Committee decided to raise the target range for the federal funds rate to 1/4 to 1/2 percent.
Information on the details of the directive that the Open Market Desk at the New York Fed received is in this implementation note. Basically, the idea is that the only operations that are necessary for the New York Fed for the immediate future are daily ON-RRP operations. These will be conducted at a rate of 0.25%, with no constraints other than available collateral - the securities on the Fed's balance sheet that are not otherwise accounted for - and per-counterparty limits of $30 billion.

Actual ON-RRP transactions are reported on the New York Fed's website. After the change in policy, the takeup on the ON-RRP facility on Thursday, Friday, and Monday, was, respectively, $105 billion, $143 billion, and $160 billion. So, the quantity of outstanding ON-RRPs has been increasing, but is very modest relative to reserves (at $2.5 trillion) or the quantity of available collateral, which is about $2 trillion. But did the New York Fed actually succeed in controlling the fed funds rate within the specified range of 0.25-0.50%? You can see the results here. On the first three days after liftoff, the effective fed funds rate (an average of rates in individual trades on the market) was 0.37%, 0.37%, and 0.36%, respectively, which is, perhaps surprisingly, in the middle of the range. The dispersion in fed funds rates is small (the standard deviation is 0.05), but there are trades on the fed funds market as high as 0.59% (above the IOER), and as low as 0.25% (the ON-RRP rate).

There can be dispersion in fed funds rates because of idiosyncratic counterparty risk in the market (fed funds is unsecured credit) - this was an important factor during the financial crisis for example. But, under current conditions, any dispersion in rates is in part due to the heterogeneous nature of trading. In particular, there are basically two kinds of borrowers in the market. The first is a depository institution which receives interest on reserves, and is borrowing from some financial institution - typically a GSE - that does not want to be caught holding reserves overnight that earn zero interest. These trades typically occur at an interest rate less than the IOER. The second is a depository institution which, much as in pre-financial crisis times, finds itself short of reserves at the end of the day, and borrows on the fed funds market to replenish reserves. Due to the huge stock of reserves currently in the system, there are very few of these depository institutions at the end of each day, and therefore very few of these transactions. Seemingly, these are the trades that occur above IOER, because the lenders face different opportunity costs than in cases where the lenders cannot receive interest on reserves.

The second stage of normalization will be continued increases in the range for the fed funds rate, followed by a reduction in the size of the Fed's balance sheet. Currently, the size of the Fed's asset portfolio is held constant, in nominal terms, through reinvestment in mortgage backed securities and Treasury securities as these assets mature. The Policy Normalization Principles and Plans state that
The Committee expects to cease or commence phasing out reinvestments after it begins increasing the target range for the federal funds rate; the timing will depend on how economic and financial conditions and the economic outlook evolve.
As well,
The Committee currently does not anticipate selling agency mortgage-backed securities as part of the normalization process, although limited sales might be warranted in the longer run to reduce or eliminate residual holdings. The timing and pace of any sales would be communicated to the public in advance.
So, balance sheet reduction, when it happens, will occur as the Fed's assets mature and are not replaced. No outright sales, other than for fine-tuning purposes, are anticipated. A normal balance sheet will have: (i) a small quantity of reserves, on the order of what was outstanding prior to the financial crisis; (ii) a portfolio consisting of only Treasury securities (no mortgage backed securities, for example); (iii) an asset portfolio with a shorter average maturity than currently, again comparable to what existed before the financial crisis. According to this paper by Carpenter et al., normalization in the size of the balance sheet might take 6 years or more from the time reinvestment ends, and normalization in terms of average maturity of the asset portfolio will take even longer.

As normalization proceeds, an issue will arise as to how balance sheet reduction relates to increases in the fed funds rate range. Clearly, the large-scale asset purchases that occurred were viewed by some policymakers as being equivalent to reductions in the fed funds rate target. One view is that we can translate a quantity of nominal asset purchases by the Fed into a given basis point decrease in the fed funds rate, were such a decrease feasible. A second view is that the size of the balance sheet is immaterial - what matters is the composition of the Fed's asset holdings, for example as measured by average maturity. A third view is that neither the size of the balance sheet nor its composition makes any difference - quantitative easing is neutral, and only the short-term nominal interest rate matters. Which view one takes clearly matters a great deal for how normalization should proceed.

I wanted to think about some of these issues, and understand more about how the Fed's ON-RRP facility works, so I wrote this paper. This is a model with two banking sectors - basically regulated and unregulated. In the regulated sector, banks can hold interest-bearing reserves, and they have a capital requirement. Unregulated banks cannot hold reserves, but they also don't have capital requirements. There are four assets: currency, reserves, government debt (one period nominal bonds), and private assets. Regulated and unregulated banks serve different clienteles. The regulated banks offer deposit contracts that provide for currency withdrawal on demand and transactions services (looks like the opportunity to do debit card transactions), and the unregulated banks provide intermediation services that can be interpreted as involving repurchase agreements (repos) using government debt as collateral. In the model, currency and government debt have special roles: currency is the only asset accepted in some transactions, and government debt is the only collateral acceptable in some types of credit transactions. There is an interbank credit market on which regulated and unregulated banks can trade.

In the model, if the balance sheet of the central bank gets large enough, then there can be a positive margin between the interest rate on reserves and the interbank interest rate. As well, there can be a positive margin between the interbank rate and the interest rate on government debt, provided government debt is in sufficiently low supply, in a well-defined sense. This corresponds to what we have been observing recently. In particular, in the last few days short-term T-bills have been trading not only below the interest rate on reserves and the fed funds rate, but below the ON-RRP rate. Basically, the interest rate on government debt in the model can be lower than the interbank rate due to a higher liquidity premium on government debt.

I introduce an ON-RRP facility in the model (basically the unregulated banks can hold interest-bearing central bank liabilities). This does what it is supposed to do - it puts a floor under the interbank rate. As well, more ON-RRPs are always welfare improving. Further, a large central bank balance sheet is a bad thing. Reserves in the model are assets that sit on the balance sheets of regulated banks, and are costly to hold because of the capital requirement. That is, because of the capital requirement, a swap of reserves for government debt tightens collateral constraints and can make everyone worse off. As well, the expanded balance sheet converts assets that are useful in particular transactions as collateral (government debt) into reserves, which are not useful in that sense.

In terms of "tightening" by way of an increase in the interest rate on reserves vs. a reduction in the balance sheet, these two policy changes have similar effects on market interest rates. However, the effects on quantities and on welfare are very different.

I think these results are interesting. Let me know what you think.

Thursday, November 12, 2015

"Permazero," by Jim Bullard

Jim Bullard has given a talk on "Permazero." Jim frames the idea as follows:
We have, after all, been at the zero lower bound in the U.S. for seven years.  In addition, the FOMC has repeatedly stressed that any policy rate increase in coming quarters and years will likely be more gradual than either the 1994 cycle or the 2004‐2006 cycle.  In short, the FOMC is already committed to a very low nominal interest rate environment over the forecast horizon of two to three years.  Perhaps short‐term nominal rates will simply be low during this period, or perhaps the economy will encounter a negative shock that will propel policy back toward the zero lower bound.
So, liftoff (an increase in the Fed's policy rate) may or may not occur soon, but even if it does, it's quite possible that we could face a world of "permazero," i.e. low nominal interest rates for a very long time. Well, so what?
The thrust of this talk is to suppose, for the sake of argument, that the zero interest rate policy (ZIRP) or near‐ZIRP remains a persistent feature of the U.S. economy.  How should we think about monetary stabilization policy in such an environment?  What sorts of considerations should be paramount? Should we expect slow growth?  Will we continue to have low inflation, or will inflation rise?  Would we be at more risk of financial asset price volatility?  What types of concrete policy decisions could be made to cope with such an environment?  Would it require a rethinking of U.S. monetary policy?
I'll leave you to read the paper, which introduces some important policy ideas, I think.

Tuesday, October 13, 2015

What Do We Know About Long and Variable Lags?

Purveyors of standard monetary policy lore argue that the effects of monetary policy are subject to "long and variable" lags. The idea appears to originate with Milton Friedman. Quoting from "A Program for Monetary Stability:"
There is much evidence that monetary changes have their effect only after a considerable lag and over a long period and that the lag is rather variable. In the National Bureau study on which I have been collaborating with Mrs. Schwartz, we have found that, on the average of 18 cycles, peaks in the rate of change in the stock of money tend to precede peaks in general business by about 16 months and troughs in the rate of change in the stock of money to precede troughs in general business by about 12 months ... . For individual cycles, the recorded lead has varied between 6 and 29 months at peaks and between 4 and 22 months at troughs.
The "National Bureau study" he mentions, which was not yet published when he wrote "A Program for Monetary Stability," is Friedman and Schwartz's "A Monetary History of the United States, 1867-1960." The Monetary History is the key empirical work backing up Friedman's monetarist ideas. Roughly, this empirical work consisted of a compilation (and construction where necessary) of monetary measurements for the United States over a long period of time, followed by the use of relatively crude statistical methods (crude in the sense that Chris Sims wouldn't get excited by the methods) to uncover regularities in the relationship between money and real economic activity.

As you can see from the quote, turning points in time series were important for Friedman. In part, he wanted to infer causality from the time series - if turning points in the money supply tended to precede turning points in aggregate economic activity, then he thought this permitted him to argue that fluctuations in money were causing fluctuations in output. But, Friedman could not find any regularity in the timing of the effects of money on output, other than that these effects took a long time to manifest themselves. Thus, the notion that monetary policy lags were long and variable.

The Monetary History formed a foundation for Friedman's monetary policy prescriptions. According to Friedman, central banks had two choices. They could either take the car and drive by looking in the rear-view mirror, or take the train. That is, the central bank could exercise discretion, put itself at the mercy of long and variable lags, and perhaps make the economy less stable in the process, or it could simply adhere to a fixed policy rule. From Friedman's point of view, the best policy rule was one which caused some monetary aggregate to grow at a fixed rate forever. If the primary source of instability in real GDP is instability in the money supply, then surely removing that instability would be beneficial, according to Friedman.

The modern version of the Monetary History approach is VAR (vector autoregression) analysis. This preliminary version of Valerie Ramey's chapter for the second Handbook of Monetary Economics is a nice survey of how VAR practitioners do their work. The VAR approach has been used for a long time to study the role of monetary factors in economic activity. If we take the VAR people at their word, the approach can be used to identify a monetary policy shock and trace its dynamic effects on macroeconomic variables - letting the data speak for itself, as it were. Ramey's paper describes a range of results, but the gist of it is that the full effects of a monetary policy shock are not manifested until about 16 to 24 months have passed. This is certainly in the ballpark of Friedman's estimates, though the typical lag (depending on the VAR) is somewhat longer than what Friedman thought. Thus, modern time series analysis does not appear to be out of line with the work of Friedman and Schwartz from more than half a century ago.
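If you want to see the mechanics, here's a toy version of the VAR exercise in Python. Everything below - the bivariate setup, the coefficients, the recursive identification - is invented for illustration; this is not Ramey's specification or anyone's estimated model. We simulate a two-variable VAR(1) with a known "policy shock," re-estimate it by OLS, and trace out impulse responses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: a bivariate VAR(1) in (output, policy rate) with a
# known recursive structure. We simulate it, re-estimate by OLS, and
# trace the response to the "monetary policy shock" (second equation).
A = np.array([[0.9, -0.2],   # true lag matrix (made up)
              [0.1,  0.8]])
B0 = np.array([[1.0, 0.0],   # impact matrix: policy shock hits output with a lag
               [0.3, 1.0]])

T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + B0 @ rng.standard_normal(2)

# OLS estimation of the VAR(1): regress y(t) on y(t-1)
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T                    # reduced-form residuals
Sigma = U.T @ U / len(U)
B0_hat = np.linalg.cholesky(Sigma)     # recursive (Cholesky) identification

# Impulse response of output to a one-s.d. policy shock
h = 24
irf = np.zeros((h, 2))
irf[0] = B0_hat[:, 1]
for j in range(1, h):
    irf[j] = A_hat @ irf[j - 1]
print(np.round(irf[:4, 0], 3))
```

With actual data, of course, the true lag matrix and impact matrix are unknown, which is exactly where the contentious identifying assumptions come in.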

But, should we buy it? First, there are plenty of things to make us feel uncomfortable about VAR results with regard to monetary policy shocks. As is clear from Ramey's paper, and to anyone who has read the VAR literature closely, results (both qualitative and quantitative) are sensitive to what variables we include in the VAR, and to various other aspects of the setup. Basically, it's not clear we can believe the identifying assumptions. Second, even if we take VAR results at face value, the results will only capture the effect of an innovation in monetary policy. But, modern macroeconomics teaches us that this is not what we should actually be interested in. Instead, we should care about the operating characteristics of the economy under alternative well-specified policy rules. These are rules specifying the actions the central bank takes under all possible circumstances. For the Fed, actions would involve setting administered interest rates - principally the interest rate on reserves and the discount rate - and purchasing assets of particular types and maturities.

Once we think generally in terms of policy rules, the notion of long and variable lags goes out the window. In principle, the current state of the economy determines the likelihood of all potential future states of the economy. Then, if we know the central bank's policy rule, we know the likelihood of all future policy actions. But some of those future economic states of the world may not arise for many years, if ever. For example, if the policy rule is well-specified, it tells us what the central bank will do in the event of another financial crisis. Under what circumstances will the Fed lend to large and troubled financial institutions? How bad does it have to get before the central bank pushes overnight nominal interest rates to zero or lower? To what extent should the central bank engage in quantitative easing? And so on. This is basically what "forward guidance" is about. In a world with forward looking people, promises about future actions matter for economic activity today - monetary policy actions need not precede effects. All of this raises doubts about what we can learn about monetary policy effects from a purely statistical analysis. Unfortunately the data is not very good at speaking for itself.
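To see what "well-specified" means here: a policy rule has to map every state of the economy - including states that may never be realized - into actions. Here's a deliberately crude sketch; the functional form, coefficients, and state variables are hypothetical, not a description of actual Fed behavior:

```python
# A toy, hypothetical policy rule. The point is only that a complete rule
# specifies an action in every state, including crisis states that may
# never occur. All names and numbers here are illustrative.
def policy_rule(inflation, output_gap, crisis):
    rate = 2.0 + 1.5 * (inflation - 2.0) + 0.5 * output_gap  # Taylor-type rule
    rate = max(rate, 0.0)                                    # zero lower bound
    qe = crisis or rate == 0.0        # asset purchases only in bad states
    lend = crisis                     # lender-of-last-resort action
    return {"rate": rate, "qe": qe, "lend": lend}

print(policy_rule(2.0, 0.0, False))   # normal times
print(policy_rule(0.0, -4.0, True))   # crisis state: ZLB, QE, emergency lending
```

Forward-looking people who know this rule respond today to what it promises in states that haven't happened yet - which is why actions need not precede effects.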

But, we have models. In those models, we can think about any policy experiments we want (within the bounds of what the model can handle of course), and we can rig those experiments in ways that allow us to think about long and variable lags in a coherent fashion. Basic frictionless models commonly used in macro have essentially no internal propagation. For example, the standard representative agent neoclassical growth model with technology shocks (i.e. RBC) exhibits some propagation through the capital stock - a positive technology shock implies higher investment today, higher capital stock tomorrow, and higher output tomorrow. But that effect is very small, and the basic RBC model fits the persistence in output by applying persistence in the technology shock. Indeed, in that model the properties of the time series of aggregate output are determined primarily by the time series properties of the exogenous technology shock, so that's not much of a theory of propagation. Add monetary elements to basic RBC without other frictions and not much is going to happen. For example, in Cooley and Hansen's cash-in-advance model, monetary impulses don't matter much, and certainly don't produce Friedman's long and variable lags.
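The weak-propagation point is easy to illustrate. In the sketch below (all coefficients invented for illustration - this is not a calibrated RBC model), output inherits essentially all of its persistence from the exogenous AR(1) technology shock, with only a small feedback through the capital stock:

```python
import numpy as np

# Illustrative sketch: in an RBC-style reduced form, output's persistence
# comes almost entirely from the exogenous AR(1) technology shock z; the
# capital-stock channel adds only a small amount. Coefficients are made up.
rng = np.random.default_rng(1)
rho = 0.95        # persistence of the technology shock (exogenous)
phi = 0.05        # weak internal propagation through capital
T = 100_000

z = np.zeros(T); k = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    z[t] = rho * z[t - 1] + rng.standard_normal()
    k[t] = 0.9 * k[t - 1] + 0.1 * z[t - 1]   # capital accumulates slowly
    y[t] = z[t] + phi * k[t]                 # output: mostly the shock itself

def ac1(x):  # first-order autocorrelation
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

print(round(ac1(z), 3), round(ac1(y), 3))
```

The first-order autocorrelation of output comes out essentially equal to that of the shock - the model itself adds almost nothing, which is the sense in which the persistence is assumed rather than explained.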

Of course, we have frictions. Sticky prices will certainly act to produce nonneutralities of money that will persist. But it's well-known that the quantitative effects are highly sensitive to assumptions about pricing. With Calvo pricing, monetary shocks are a big deal, but with state-dependent pricing, the effects are small. Other work by Francesco Lippi and Fernando Alvarez shows that small changes in the pricing protocol - for example setting two prices (a sale price and a regular price) from which to choose - can dramatically reduce the effect of a money shock. Another propagation mechanism with some claim to support from serious theory is labor search. The fact that successful matches in the labor market take time will act to propagate any shocks in general equilibrium, including monetary shocks. However, there seems to be some debate about how quantitatively important this is.
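For a sense of the magnitudes in the Calvo protocol: if a firm gets to re-set its price each quarter with probability 1 - theta, price spells are geometric, with expected length 1/(1 - theta). A couple of lines of arithmetic, with illustrative values of theta:

```python
# Back-of-the-envelope Calvo arithmetic (values of theta are illustrative).
# With reset probability 1 - theta per quarter, the expected price spell
# is 1/(1 - theta) quarters, so small changes in theta imply big changes
# in stickiness - and hence in the real effects of monetary shocks.
for theta in (0.5, 0.66, 0.75, 0.9):
    expected_quarters = 1.0 / (1.0 - theta)
    print(f"theta = {theta:4.2f} -> expected price spell ~ "
          f"{expected_quarters:4.1f} quarters")
```

This is part of why the quantitative results are so sensitive to the pricing protocol: the friction's strength is governed by a parameter the theory itself does not pin down.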

Probably the best known attempt to quantify the dynamic effects of monetary policy, in an expanded New Keynesian model, is Christiano/Eichenbaum/Evans (CEE). CEE start by filtering the data through VAR analysis, and then treat the impulse responses from the VAR as data that the model should explain. Thus, a key assumption in the analysis is that the monetary policy shock has been correctly identified in the preliminary VAR step. Given that heroic assumption, what CEE set out to explain are lags in monetary policy that, if not variable, are certainly long:
Output, consumption, and investment [in the VAR impulse responses] respond in a hump-shaped fashion, peaking after about one and a half years and returning to preshock levels after about three years.
So, the responses of real activity to monetary policy shocks estimated by CEE exhibit two key features. First, the effects take a long time to peak and to dissipate, in a manner that seems consistent with the Monetary History and other VAR evidence. Second, the response exhibits delay - that's what the "hump shapes" are about.

So, how do CEE go about fitting this data? Getting the persistence and delay in the effects of monetary policy will require frictions. These are the frictions in the model:

1. Sticky prices: There is Calvo pricing. Not only that, but if a firm gets to re-set its price, it must do that before knowing the current period's monetary shock.
2. Sticky wages: Households set their wages in a Calvo fashion.
3. Sticky utilization: It is costly to change the utilization rate of capital.
4. Cash-in-advance purchases of labor: This works as in Tim Fuerst's segmented markets model of monetary policy, and gives an added kick to employment from a monetary policy shock.
5. Costs of adjustment associated with investment.

What's going on here? For the most part, the five frictions above are not well-grounded in microeconomic theory, nor are they well-supported with microeconomic evidence. We of course know that firms do not make decisions continuously but at discrete points in time. But why should it be costly to adjust the capital stock, or to change capital utilization? It can be infinitely costly for a firm to change its price if the Calvo fairy does not allow it, but in the CEE model it is costless to index price decisions. Why? Because that helps in fitting the data. Ultimately, then, we have a model which does a good job of fitting VAR impulse responses, but seems to have thrown out a lot of economics along the way.

So, what do we know about long and variable lags associated with monetary policy? Not much, it seems. We don't have good theories of persistence and delay associated with monetary policy actions, and it's hard to trust the empirical evidence that is used to argue for long and variable lags. Further, the theory we have tells us that policy design is about evaluating the operating characteristics of economies under alternative policy rules. And, in that context, thinking in terms of actions and lagged responses is wrongheaded. Let's go with that.

Sunday, October 4, 2015

Some Unpleasant Labor Force Arithmetic

Words such as "grim" and "dismal" were used to describe Friday's employment report, which featured a payroll employment growth estimate for September of 142,000. Indeed, I think it would be typical, among people who watch the employment numbers, to think of performance in the neighborhood of 200,000 added jobs in a month as normal.

But what should we think is normal? As a matter of arithmetic, employment growth has to come from a reduction in the number of unemployed, an increase in the labor force, or some combination of the two. In turn, an increase in the labor force has to come from an increase in the labor force participation rate, an increase in the working-age population, or some combination of the two. So, if we want to think about where employment growth is coming from, labor force participation is an important piece of the puzzle. This chart shows the aggregate labor force participation rate, and participation rates for men and women:
As is well-known, the participation rate has been falling since about 2000, and at a higher rate since the beginning of the Great Recession. Further, participation rates have been falling for both men and women since the beginning of the Great Recession. It's useful to also slice this by age:
Thus, labor force participation has dropped among the young, and among prime-aged workers, but has held steady for those 55 and older. So, there are two effects which have reduced aggregate labor force participation since the beginning of the Great Recession: (i) participation rates have dropped among some age groups, and have not increased for any age group; (ii) the population is aging, and the old have a lower-than-average participation rate.

Next, we'll go back to the 1980s, as that period featured a major recession, but with a very different backdrop of labor force behavior.
The chart shows the population, aged 15-64 (just call this "population"), labor force, and employment (household survey) for the period from the beginning of the 1980 recession to the beginning of the 1990-91 recession, with each time series scaled to 100 at the first observation. This is a period over which the population grew at an average rate of 1.1%, while labor force and employment grew at average rates of 1.6% and 1.7%, respectively. Over this period, employment could grow at a higher rate, on average, than the population, because of an increase in labor force participation, driven primarily by the behavior of prime-age workers. It should be clear that, over the long run, population, labor force, and employment have to grow at the same average rates - again, as a matter of arithmetic. But, over the short run, employment can grow at a higher rate than the labor force if unemployment is falling, and the labor force can grow at a higher rate than the population if the participation rate is rising.
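The arithmetic here is just an identity: employment = population x participation rate x (1 - unemployment rate), so the growth rates add up in logs. A quick check with made-up numbers:

```python
import math

# The labor force arithmetic as an identity (numbers are stylized, not
# data): employment = population * lfpr * (1 - u). In logs, employment
# growth = population growth + lfpr growth + growth in (1 - u), so
# employment can outgrow population only while u falls or lfpr rises.
def employment(pop, lfpr, u):
    return pop * lfpr * (1.0 - u)

e0 = employment(100.0, 0.63, 0.10)   # start of a recovery (stylized)
e1 = employment(101.1, 0.64, 0.07)   # a year later: pop +1.1%, lfpr up, u down
g_emp = math.log(e1 / e0)
g_pop = math.log(101.1 / 100.0)
print(round(100 * g_emp, 2), round(100 * g_pop, 2))
```

Since u and the participation rate are bounded, those two terms can't contribute forever - over the long run, employment and population growth must coincide, as the post says.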

Fast forward to the recent data.
Over the period since the beginning of the Great Recession, population has grown at an average rate of 0.5%, and labor force and employment at 0.3%. As you can see in the chart, employment has essentially caught up with the labor force, reflected of course in a drop in the unemployment rate to close to its pre-recession level. Year-over-year payroll employment growth looks like this:
For more than three years, employment growth has been sustained, at close to or greater than 2%, year-over-year. And, with labor force participation falling, that growth in employment, in excess of the 0.5% growth rate in population, has come from falling unemployment.

What most people seem to view as "normal" payroll employment growth, 200,000 per month, amounts to a 1.7% growth rate per annum, given the current level of employment. To sustain that into the future, given 0.5% population growth, requires further sustained decreases in unemployment and/or an increase in the participation rate. Are there enough unemployed people out there to generate that level of employment growth? In this post, I showed unemployment rates by duration, indicating that unemployment rates for the short and medium-term unemployed have returned to pre-recession levels or lower. What remains elevated is the number of long-term unemployed - those unemployed 27 weeks or more. Here's an interesting chart:
This shows (with the two series scaled differently to highlight the correlation) the time series of long-term unemployed and the monthly flow from unemployment to not-in-the-labor-force. Clearly, the two time series track each other closely. This is related to a phenomenon labor economists call "duration dependence." During a spell of unemployment for a typical unemployed person, the job-finding rate falls. A person unemployed a few weeks is much more likely to find a job than a person unemployed for a year, for example. Thus, as we can see in the chart, it is likely that a long-term unemployed person does not find a job, and exits the labor force.

So, suppose that about 1 million long-term unemployed (roughly the net increase in the number of long-term unemployed from the beginning of the recession until now) leave the labor force. This would imply an unemployment rate of about 4.6%. Can unemployment go much lower than 4.6%? Probably not. This means that there is little employment growth left to be squeezed out of the current unemployment pool. So, if payroll employment growth is to be sustained at 200,000 per month, this will require an increase in the labor force participation rate. Could that happen? This next chart shows the flows into the not-in-the-labor-force (NILF) state:
Here, note that the flows into NILF from both employment and unemployment are elevated relative to pre-recession levels. Further, about 70% of the flow currently comes directly from employment. From the previous chart, it seems clear that the flow from the unemployment state will fall to normal levels as the number of long-term unemployed falls, but that should not stem the reduction in the labor force participation rate, if the high flow continues from employment to NILF. Checking what is going on with respect to flows out of the NILF state:
These flows were high relative to pre-recession levels, but are close to, and moving back to, those levels.

These charts reinforce a view that the fall in labor force participation, post-recession, has been driven by long-run factors, and those factors show no sign of abating. Thus, we should not expect the labor force participation rate to stop falling, let alone reverse course, any time soon.

Conclusion? With the population aged 15-64 growing at 0.5% per year, if we're getting payroll employment growth of more than about 60,000 per month (that's 0.5% growth in payroll employment per year), this has to be coming from the pool of unemployed people, or from those not in the labor force. But further significant flows of workers from unemployment to employment are unlikely, and the net flows from the labor force to NILF are likely to continue. Thus, employment growth of 142,000 may seem grim and dismal, but labor market arithmetic tells us that employment growth is likely to go lower in the immediate future.
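The arithmetic in that conclusion is easy to check, using roughly 142 million as the current level of payroll employment (an approximate figure):

```python
# Checking the payroll arithmetic (142 million is an approximate level of
# payroll employment, used for illustration).
payrolls = 142_000_000
monthly_gain = 200_000
annual_growth = 12 * monthly_gain / payrolls
print(f"{100 * annual_growth:.1f}% per year")        # the "normal" 200K/month

# Jobs per month consistent with 0.5% annual population growth alone:
sustainable = 0.005 * payrolls / 12
print(f"about {sustainable:,.0f} jobs per month")
```

So 200,000 jobs a month is about 1.7% annual employment growth, while population growth alone supports only about 60,000 a month - the gap has to come from falling unemployment or rising participation.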

Tuesday, September 22, 2015

Knut Was a Neo-Fisherian

In the midst of this Paul Krugman post, I found a description of Wicksellian dynamics:
As I’ve been trying to point out – and as others, notably Ben Bernanke, have also tried to point out – such monetary wisdom as we possess starts with Knut Wicksell’s concept of the natural interest rate. Try to keep rates too low, and inflation accelerates; try to keep them too high, and inflation decelerates and heads toward deflation.
So, I was thinking, what happens if we write that down and work it out?

To keep it simple, we'll just deal with a deterministic world. It's more or less New Keynesian, but a little different. To start, we have the standard Euler equation, which prices a one-period nominal bond - after taking logs and linearizing:

(1) R(t) = r* + ag(t+1) + i(t+1),

where R(t) is the nominal interest rate, r* is the subjective discount rate, a is the coefficient of relative risk aversion (assumed constant), g(t+1) is the growth rate in consumption between period t and period t+1, and i(t+1) is the inflation rate, between period t and period t+1. Similarly, the real interest rate is given by

(2) r(t) = r* + ag(t+1).

Assume there is no investment, and all output is consumed.

To capture Krugman's concept of Wicksellian inflation dynamics, first let r* + ag* denote the Wicksellian natural rate of interest, where g* is the economy's long-run growth rate. Krugman says that inflation goes up when the real interest rate is low relative to the natural rate, and inflation goes down when the opposite holds. So, write this as a linear relationship,

(3) i(t+1) - i(t) = -b[r(t) - r* - ag*],

where b > 0. Then, from (2) and (3),

(4) i(t) = ba[g(t+1)-g*] + i(t+1),

which is basically a Phillips curve - given anticipated inflation, inflation is high if the growth rate of output is high.

Then, substitute for g(t+1) in equation (1), using (4), and write

(5) i(t+1) = -[b/(1-b)][R(t) - r* - ag*] + [1/(1-b)]i(t).

So this is easy now, as to determine an equilibrium we just need to solve the difference equation (5) for the sequence of inflation rates, given some path for R(t), or some policy rule for R(t), determined by the central bank.
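If you don't trust my algebra, equation (5) can be checked numerically: draw random parameter values, use (4) to back out g(t+1), plug into (1), and compare the implied i(t+1) with the formula in (5):

```python
import random

# Numerical check that (5) follows from (1) and (4). For random parameter
# draws, invert (4) for g(t+1), compute R(t) from (1), and verify that the
# formula in (5) returns the i(t+1) we started with.
random.seed(0)
for _ in range(1000):
    rstar = random.uniform(-2, 2)
    gstar = random.uniform(-2, 2)
    i0 = random.uniform(-2, 2)       # i(t)
    i1 = random.uniform(-2, 2)       # i(t+1)
    a = random.uniform(0.5, 3)       # risk aversion, a > 0
    b = random.uniform(0.05, 0.95)   # the case 0 < b < 1
    g1 = gstar + (i0 - i1) / (a * b)            # invert equation (4)
    R = rstar + a * g1 + i1                     # equation (1)
    i1_check = -(b / (1 - b)) * (R - rstar - a * gstar) + i0 / (1 - b)
    assert abs(i1 - i1_check) < 1e-8            # equation (5) holds
print("equation (5) is consistent with (1) and (4)")
```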

First, suppose that R(t) = R, a constant. Then, from (5), the unique steady state is

(6) i = R - r* - ag*.

That's just the long-run Fisher relation - the inflation rate is the nominal interest rate minus the natural real rate of interest. But what about other equilibria? If 0 < b < 1, or b > 2, then in fact the steady state given by (6) is the only equilibrium. If 1 < b < 2 then there are many equilibria which all converge to the steady state.

Next, suppose that R(t) = R1, for t = 0, 1, 2, ..., T-1, and R(t) = R2, for t = T, T+1, T+2,..., where R2 > R1. This is an experiment in which the nominal interest rate goes up, once and for all, at time T, and this change in monetary policy is perfectly anticipated. In the case where 0 < b < 1, there is a unique equilibrium that looks like this:

So, inflation increases prior to the nominal interest rate increase, achieving the Fisherian steady state in period T, while the growth rate of output and the real interest rate are low and falling before the increase occurs.

We can look at the other cases, in which b > 1, and the dynamics will be more complicated. Indeed, we get multiple equilibria in the case 1 < b < 2. But, in all of these cases, a higher nominal interest rate implies convergence to the Fisherian steady state with a higher inflation rate. Increasing the nominal interest rate serves to increase the inflation rate. Keeping the nominal interest rate at zero serves only to keep the inflation rate low, in spite of the fact that this model has Wicksellian dynamics and a Phillips curve.
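For the case 0 < b < 1, the equilibrium can be computed by solving (5) forward: i(t) = b * sum over j >= 0 of (1-b)^j * [R(t+j) - r* - ag*]. Here's that calculation, with illustrative parameter values (b = 0.5, a natural rate of 2%, and a perfectly anticipated rate hike from 2% to 3% at T = 10):

```python
# Solving equation (5) forward for 0 < b < 1 (parameter values are
# illustrative). Policy: R = R1 before period T, R = R2 >= R1 from T on.
rstar_plus_agstar = 0.02   # the Wicksellian natural rate r* + ag*
b, R1, R2, T = 0.5, 0.02, 0.03, 10

def R(t):
    return R1 if t < T else R2

def inflation(t, horizon=200):
    # i(t) = b * sum_{j>=0} (1-b)^j * [R(t+j) - r* - ag*], truncated
    return b * sum((1 - b) ** j * (R(t + j) - rstar_plus_agstar)
                   for j in range(horizon))

path = [inflation(t) for t in range(15)]
print([round(100 * x, 3) for x in path])  # inflation, in percent
```

The computed path rises ahead of the anticipated rate increase and hits the new Fisherian steady state, R2 - r* - ag*, exactly at period T, as described above.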

I'm not endorsing this model - just showing you its implications. And those implications certainly don't conform to "try to keep rates too low, and inflation accelerates; try to keep them too high, and inflation decelerates and heads toward deflation," as Krugman says. The Wicksellian process is built into the model, just as Krugman describes it, but the model has neo-Fisherian properties.

Sunday, September 20, 2015

The ZIRP Blues

Here's the time series of the fed funds rate and inflation rate in the United States, from the time Paul Volcker became Fed Chair:
Suppose an alien with a high IQ lands in my back yard. I show her this picture, and explain that the central bank moves the fed funds rate up and down so as to control inflation. Ms. Alien points out that the fed funds rate and inflation were in the neighborhood of 10% in August 1979. Now, 36 years later, the fed funds rate and the inflation rate are close to zero. So, says Ms. Alien, it looks like the central bank spent 36 years fighting the inflation rate down to zero.

Ms. Alien would be surprised to learn that most people are not happy with the current state of affairs. There are always exceptions, of course - in this case, John Cochrane. But popular views on current U.S. monetary policy fall basically into two camps:

1. Phillips curve A: These people think inflation is too low. But eventually the Phillips curve will re-assert itself, and inflation will rise of its own accord. When that happens, we can worry about liftoff - an increase in interest rates to hold inflation down.
2. Phillips curve B: These people also think inflation is too low, that, eventually, the Phillips curve will re-assert itself, and that inflation will rise of its own accord. But a Phillips curve B type thinks that we need to get ahead of the game. Milton Friedman told us that there are "long and variable lags" associated with monetary policy. If we wait too long, then monetary policy will be scrambling to keep up with higher inflation, and interest rates will need to climb at a high rate, at the expense of real economic activity.

The Phillips curve A group includes Summers, Stiglitz, and Krugman, who states that we should "wait until you see the whites of inflation’s eyes." Members of the Phillips curve A and B camps have to somehow come to grips with the Phillips curve we see in the recent data, which looks like this:
The line joins the points in the scatter plot in temporal sequence, roughly from right to left. Krugman's point in his piece is that the natural rate of unemployment (NAIRU) has been receding as we get closer to it. In this view, we're supposed to have faith that the Phillips curve looks like this:

An alternative to Phillips A/B is the neo-Fisherian view. As John Cochrane says:
But if a 0% interest rate peg is stable, then so is a 1% interest rate peg. It follows that raising rates 1% will eventually raise inflation 1%. New Keynesian models echo this consequence of experience. And then the Fed will congratulate itself for foreseeing the inflation that, in fact, it caused.
Cochrane's saying that central bankers have to come to terms with the Fisher effect. If the short-term nominal interest rate is low for a long time, we should not be surprised that the inflation rate is low. And John is quite happy with low inflation. While the Phillips curve A and B camps fight it out over how to get inflation up, and sing the ZIRP (zero interest rate policy) blues, he's hoping they never figure it out.

There's a more subtle idea in the quote from Cochrane above, which is that a neo-Fisherian could find common cause with the Phillips curve B camp. They could all agree to liftoff, the inflation rate could rise due to the Fisher effect, and the central bank "will congratulate itself for foreseeing the inflation that, in fact, it caused."

If you're wondering what central bankers are thinking, a nice summary of conventional views is in a speech by Andy Haldane, Chief Economist at the Bank of England. It's a long speech, by U.S. central banker standards, but certainly thorough. Much of the speech focuses on the "problem" of the zero lower bound (ZLB). In most of the monetary models we write down, and in the traditional thinking of central bankers, zero is a lower bound on the central bank's policy interest rate. The ZLB is thought to be a problem as, once the central bank reaches it, its policy options are limited. If one takes this seriously, there are two responses: (i) stay away from the ZLB; (ii) get more creative about policy options at the ZLB.

How do we stay away from the ZLB? Haldane tells us why we're now seeing ZLB policies:
... by lowering steady-state levels of nominal interest rates, lower inflation targets ... increased the probability of the ZLB constraint binding.
He's saying that low inflation targets, i.e. average rates of inflation that are low, imply lower nominal interest rates. So,
... one option for loosening [the ZLB] constraint would simply be to revise upwards inflation targets. For example, raising inflation targets to 4% from 2% would provide 2 extra percentage points of interest rate wiggle room.
So this is entirely consistent with John Cochrane and the neo-Fisherians. If the central bank's inflation target is higher by two percentage points, then the nominal interest rate must on average be higher by two percentage points, and the chances that monetary policy will take us to the ZLB should be much smaller.

But, Haldane is certainly not a neo-Fisherian. He's more in the Phillips curve A camp, as this is his policy recommendation:
In my view, the balance of risks to UK growth, and to UK inflation at the two-year horizon, is skewed squarely and significantly to the downside.

Against that backdrop, the case for raising UK interest rates in the current environment is, for me, some way from being made. One reason not to do so is that, were the downside risks I have discussed to materialise, there could be a need to loosen rather than tighten the monetary reins as a next step to support UK growth and return inflation to target.
Haldane makes it clear that he thinks the way to "return inflation to target," i.e. 2%, is not to let the central bank's interest rate target go up. And, as I wrote here, it's not as if the UK data will make you a believer in the Phillips curve. Here's the policy problem the Bank of England faces:
The policy interest rate target is currently at 0.5% in the UK but, as in the U.S., the inflation target is at 2% and actual inflation is hovering around 0%.

Haldane discusses ways in which central banks can get creative when confronted with the ZLB. The options that have been discussed (and in some cases implemented by some central banks) are:

1. Quantitative Easing: The idea here is that, at the ZLB, purchases by the central bank of short-term government debt are essentially irrelevant, as there is no fundamental difference between short-term government debt and reserves at the ZLB. But, the central bank could purchase long-maturity government debt or other assets at the ZLB. Perhaps that does something? Post-Great Recession, the Fed of course acquired a large portfolio of long-maturity Treasury securities and mortgage-backed securities, and maintains the nominal value of that portfolio of assets through a reinvestment policy that is still in place. Whatever the effects of U.S. QE programs, it's an inescapable reality that inflation is close to zero. But, even larger asset purchases were carried out by the Swiss National Bank, and the Bank of Japan. Here's what's happened in Switzerland:
In this case, both the policy rate and the inflation rate are well below zero. The Swiss National Bank has a goal of price stability, which it defines as less than 2% inflation. I'm not sure if they are OK with an inflation rate less than -1%.

The Bank of Japan began a program of "qualitative and quantitative monetary easing" in April of 2013. Here's the overnight interest rate and inflation rate time series for Japan:
I've included the whole 20-year period over which Japan's overnight interest rate was below 1%. Japan is, as you know, our stock example of what ZIRP produces. But what of the effects of the Bank of Japan's recent QE experiment? Don't be deceived by that burst of inflation in 2014. In April 2014, the consumption tax in Japan went up from 5% to 8%, and that feeds directly into the CPI - the prices in the index are measured after-tax. If we look at the CPI levels since the beginning of the QE program in April 2013, you can see that more clearly:
So, from April 2013 to July 2015, the CPI increased about 4%. If 3 percentage points of that is simply due to the consumption tax increase, then we're left with less than 1/2% per year in inflation since the QE program began. The Bank of Japan's inflation target is 2%, which it is missing by a wide margin on the low side, in spite of an increase in the monetary base in Japan that looks like this:
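The arithmetic is easy to check. Here's a quick back-of-the-envelope calculation (the 4% cumulative CPI increase, the 3-percentage-point tax effect, and the 27-month window from April 2013 to July 2015 are the figures from the text):

```python
# Back-of-the-envelope check of the Japan CPI arithmetic.
# Inputs from the text: CPI up about 4% from April 2013 to July 2015
# (27 months), with about 3 percentage points of that attributable
# to the April 2014 consumption tax increase.
cumulative_cpi = 1.04
tax_effect = 1.03
years = 27 / 12

net = cumulative_cpi / tax_effect      # cumulative CPI rise net of the tax
annual = net ** (1 / years) - 1        # annualized net inflation rate

print(f"net cumulative increase: {net - 1:.2%}")
print(f"annualized inflation:    {annual:.2%}")   # roughly 0.4% per year
```

Netting out the tax multiplicatively rather than by simple subtraction makes little difference at these magnitudes; either way, underlying inflation comes out well under 1/2% per year.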
You can't blame John Cochrane for stating the following, with respect to the U.S.:
Even the strongest empirical research argues that QE bond buying announcements lowered rates on specific issues a few tenths of a percentage point for a few months. But that's not much effect for your $3 trillion. And it does not verify the much larger reach-for-yield, bubble-inducing, or other effects.

An acid test: If QE is indeed so powerful, why did the Fed not just announce, say, a 1% 10 year rate, and buy whatever it takes to get that price? A likely answer: they feared that they would have been steamrolled with demand. And then, the markets would have found out that the Fed can’t really control 10 year rates. Successful soothsayers stay in the shadows of doubt.

I've written down a model of QE, in which swaps of short-maturity assets for long-maturity assets by the central bank can have real effects. Basically, this increases the stock of effective collateral in the economy, relaxes collateral constraints, and increases the real interest rate. It's a good thing. But, if the nominal interest rate is pegged at zero, this will lower the inflation rate.

2. Lower the lower bound: If the ZLB is a problem, possibly we can make the problem go away by relaxing the bound. In the models we write down, the zero lower bound arises because it is costless to hold currency which, given current technological constraints, cannot bear interest. In those models, if the central bank has excess reserves outstanding in the financial system and attempts to charge financial institutions for the privilege of holding reserves, the institutions simply opt to hold currency instead. But in the real world it is not costless to hold currency. Making interbank transactions using currency is impractical, because millions of dollars in currency take up a lot of space, and because real resources would have to be expended to prevent theft. This implies that market nominal interest rates can be negative and, indeed, some jurisdictions have opted for negative interest rates on reserve balances held at the central bank. One of those, as you can see in the chart above, is Switzerland, where the inflation rate is now below -1%. Another is the Euro area:
European overnight interest rates have not gone as low as in Switzerland, nor is the inflation rate as low, but it's a similar picture - not much inflation.

Relaxing the lower bound meets with a difficulty similar to that for QE - in the long run, this just serves to make inflation lower. To see this, consider a very crude monetary model - cash-in-advance. There's a representative consumer who gets utility u(c) from consumption goods c, and suffers disutility v(n) from supplying n units of labor, which produces n units of consumption goods. Consumption goods must be purchased with cash. There are also one period bonds, which sell at a price q at the beginning of the period, and pay off one unit of cash next period. Cash and bonds are held across periods, and fraction t of cash holdings held between periods is stolen. Suppose for simplicity that thieves steal money and burn it. To make things easy, look at an equilibrium in which the money growth rate is a constant, i. Letting B denote the discount factor, in equilibrium the price of the bond is given by

(1) q = B/(1+i)

That's just the Fisher relation. There are no liquidity effects in this model, and in equilibrium the nominal interest rate is (roughly) given by

(2) R = p + i,

where p = 1/B -1 is the real interest rate. In equilibrium c = n, i.e. all output is consumed, and c is determined by

(3) v'(c) = [B(1-t)u'(c)]/(1+i)

What's the lower bound on the nominal interest rate? It's R* = - t; that is, it's determined by the cost of holding cash. And, if the nominal interest rate is at its lower bound, R*, then the inflation rate is

(4) i* = - p - t,

so lowering the lower bound only serves to decrease the inflation rate. You can add bells and whistles - reasons for the real interest rate to be low, endogenous theft of currency, short run non-neutralities of money, or whatever, and I think the basic idea will go through.
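To see equations (1)-(4) at work, here is a minimal numerical sketch. The functional forms u(c) = log(c) and v(n) = n**2/2, and the discount factor of 0.96, are assumptions for illustration - the text leaves u, v, and B general:

```python
import math

def equilibrium(B, t, i):
    """Equilibrium of the cash-in-advance example: discount factor B,
    fraction t of cash stolen between periods, money growth/inflation i.
    Assumes u(c) = log(c), v(n) = n**2/2, so u'(c) = 1/c, v'(c) = c."""
    c = math.sqrt(B * (1 - t) / (1 + i))   # (3) v'(c) = B(1-t)u'(c)/(1+i)
    q = B / (1 + i)                        # (1) bond price - the Fisher relation
    p = 1 / B - 1                          # real interest rate
    R = p + i                              # (2) approximate nominal rate
    return c, q, R

B = 0.96
p = 1 / B - 1
# "Lowering the lower bound" means raising the cost t of holding cash.
for t in (0.0, 0.02, 0.05):
    R_star = -t        # lower bound on the nominal interest rate
    i_star = -p - t    # (4) inflation at the lower bound
    print(f"t = {t:.2f}: lower bound R* = {R_star:+.3f}, inflation i* = {i_star:+.3f}")
```

As t rises, the lower bound R* falls, but inflation at the bound, i*, falls one-for-one with it - which is the point of the paragraph above.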

Another suggested approach to increasing the inflation rate, given ZIRP, is:

3. Helicopter Drops: The "helicopter drop" was a thought experiment in Milton Friedman's "Optimum Quantity of Money" essay. In the thought experiment, Friedman asks you to consider what would happen if the government sent out helicopters to spew money across the countryside. People would pick up the money, spend it, and prices would go up, etc. Surely, if inflation is perceived to be too low, and we're at a loss as to how to increase it, we should be thinking about this, the argument goes. Can't the government just send people checks and make inflation go up?

Paul Krugman has a suggestion along these lines, for Japan, though what he's suggesting is not Friedman's helicopter transfers (which increase the government budget deficit), but increases in spending on goods and services, financed by printing money:

What’s remarkable about this record of dubious achievement is that there actually is a surefire way to fight deflation: When you print money, don’t use it to buy assets; use it to buy stuff. That is, run budget deficits paid for with the printing press.
Actually, that's exactly what has been going on in Japan. The Japanese government has been running a deficit, the quantity of government debt outstanding is very large (in excess of 200% of GDP) and, as we can see in the chart above, the monetary base is growing at a very high rate. That's what printing money amounts to. But, the central bank can only control the total quantity of outside money in existence, not its composition. How outside money is split between currency and reserves is determined by the banks who hold the reserves and the private firms and consumers who hold the currency. The central bank can do all the money printing it wants, but if the new money sits as reserves, as appears to be happening, it's not going to have the effect that Krugman wants.

Increasing interest rates is hard for central bankers. A decrease in rates rarely produces any flak, but central banks have few supporters when they talk about rate increases. Media pieces like this one in the NYT and this one in the Economist propagate the idea that interest rate increases are fraught with peril. One example people like to use is the tightening by the Swedish Riksbank in 2010-2011. Here's the relevant chart:
The tightening that occurred was an increase of 1.75 percentage points in the Riksbank's target interest rate, in quarter-point steps, from July 2010 to August 2011. In the realm of central bank tightening phases, this isn't a big deal. Compare it to the previous tightening phase in Sweden, or the 4.25 percentage point increase that occurred in the U.S. over the 2004-2006 period. But, the Riksbank caught hell from Lars Svensson as a result. The Riksbank seems to have more or less followed Lars's advice since but, as you can see, it is now keeping company with other central banks, with a negative policy rate and inflation close to zero - two percentage points south of its target.

What are we to conclude? Central banks are not forced to adopt ZIRP, or NIRP (negative interest rate policy). ZIRP and NIRP are choices. And, after 20 years of Japanese experience with ZIRP, and/or familiarity with standard monetary models, we should not be surprised when ZIRP produces low inflation. We should also not be surprised that NIRP produces even lower inflation. Further, experience with QE should make us question whether large scale asset purchases, given ZIRP or NIRP, will produce higher inflation. The world's central bankers may eventually try all other possible options and be left with only two: (i) Embrace ZIRP, but recognize that this means a decrease in the inflation target - zero might be about right; (ii) Come to terms with the possibility that the Phillips curve will never re-assert itself, and there is no way to achieve a 2% inflation target other than having a nominal interest rate target well above zero, on average. To get there from here may require "tightening" in the face of low inflation.

Sunday, September 6, 2015

Bad Ideas?

Paul Krugman concludes that "hiking rates now is still a really bad idea." So, his opinion is clear. What's not so clear is his argument, which is this:
When the Fed funds rate was 5 percent, there was room to cut if a rate hike turned out to be premature — that is, the risks of moving too soon and moving too late were more or less symmetrical. Now they aren’t: if the Fed moves too late, it can always raise rates more, but if it moves too soon, it can push us into a trap that’s hard to escape.
So, suppose we're in the pre-financial crisis era, and the fed funds rate is 5%. As a thought experiment, suppose the FOMC decided at its regular meeting to hike the fed funds rate target to 5.25%. Then, at its next meeting it decided that the previous hike was a mistake, and undid it, reducing the fed funds rate target to 5%. I think Krugman is telling us that, in those circumstances, ex post we would prefer the policy that stayed at 5% to the one that went up a quarter point and then back down. I think he's also telling us that, once we discover the mistake, the best policy would be to reduce the fed funds rate below 5%. That's the basis for the asymmetry argument he's making - there's no problem if you're at 5%, but when you're at zero (essentially), you can't correct the mistake. So, fundamentally, this argument revolves around the assumption that there is an economically significant difference between going up to 5.25% this meeting, then down to 5% at the next meeting, vs. having stayed at 5%.

If that's the crux of it, Krugman needs to do a better job of making the case. In terms of modern macroeconomic theory, we don't think in terms of "too early" and "too late." Policy is state-dependent, i.e. data-dependent. The policymaker takes an action based on what he or she sees, and what that indicates about where the economy is going. The question is: What is Krugman's desired policy rule, and where would that lead us? What exactly is the nature of the "hard to escape" trap that might befall us? As is, Krugman's not giving us much to go on.

Addendum: Here's another thought. Krugman seems to like the "normal" world of 5% fed funds rate better than the zero-lower-bound world - because, as he says, the normal world allows you more latitude to correct "mistakes." So why wouldn't he use that as an argument for liftoff?

Friday, September 4, 2015


Paul Romer is worried that the field of macroeconomics is too tribal - somehow our behavior is impeding scientific progress.

Romer starts his post with two statements:
1. The model in Lucas (1972), "Expectations and the Neutrality of Money", made a path breaking contribution to economic theory. It is comparable in importance to the Solow model and the Dixit-Stiglitz formulation of monopolistic competition.

2. The model in Prescott and Kydland (1982), “Time to Build and Aggregate Fluctuations”, has no scientific validity.
As Romer points out, the first statement concerns a modeling contribution, while the second has to do with empirical usefulness. But Romer thinks that how we - that is, macroeconomists in particular - think about those two statements should be revealing.

Most of us can read those two statements and know how the extended arguments are likely to play out. Of course, it helps to have been around for a while - anyone under 33 was not yet born in 1982, and would see Kydland/Prescott as ancient history. And Lucas (1972), though of course highly influential, does not show up on many PhD reading lists these days. But, even if we know the typical arguments, we would like to know more. Has the author of the statement got anything new to say? How does he or she flesh out the argument? I might think, for example, that the author of the second statement isn't just commenting on how Kydland-Prescott fits the data. Maybe he or she has something to say about the whole methodological approach. In any case, I'm curious. I would like to know. I'm open to persuasion. Indeed, that's what economists do - we try to persuade others, using whatever means possible. And a lot of that persuasion involves words - written and spoken. My ex-colleague Deirdre McCloskey had a lot to say about this. Here's an excerpt:
I like that. Science is human persuasion, not mechanical demonstration. From reading Romer's stuff lately, I think he believes in mechanical demonstration. According to Romer, scientific progress should be obvious to some self-appointed group of elite scientists, and if we could just get rid of some of the clutter, we would be moving on much more quickly toward ultimate Romerian truth.

In spite of my reluctance, I'll play along with Romer. He says:
Think of some macroeconomist X that you know.
Fine. Some people would say I'm a macroeconomist, so I'll volunteer. Mr. X at your service. The next step is the following:
Consider these questions:

A. Would X agree that there is an objective sense in which statements 1 and 2 can be said to be either true or false?

B. Would X agree that a reasonable person could conclude that statements 1 and 2 are both true?

C. Would X be able to examine dispassionately the evidence for and against these two statements and evaluate them independently?
So, note that I'm going Romer one better. He's asking you to put words in someone else's mouth. That seems a little weird.

In answer to A: Stupid question. (i) Give me the rest of the argument, not just a blunt statement. I want you to try to persuade me. This is definitely not about true and false. What's true and false is something we'll never know - we're just scientists in the dark trying to figure things out. (ii) What you should be asking is: Are you persuaded? Maybe, after hearing the whole argument, I'm halfway-persuaded, but I have something I can add to the argument to make it more persuasive. Maybe I've got a clarifying question. Maybe I want the author to expand on the argument.

B: No idea. First I want to see if the authors of 1 and 2 are giving me what I think is a persuasive argument.

C: No. Dispassionate? Remember, we're talking about human persuasion here. Humans are passionate. If macroeconomists were not passionate about their work, working with them would be deathly dull. I would rather paint houses for a living. And why would we be thinking about 1 and 2 independently? Indeed, given the nature of the statements, we should be thinking about these things in the same context. How you argue one could have a lot to do with how you argue the other.

Where is Romer leading us? Well, he seems to want to make the case that we (macroeconomists) are "infected by tribalism." He also argues that physicists are not tribalists.

I've argued elsewhere that macroeconomics in particular is much less factional than some people would like to claim. Emphasis on factionalism sometimes makes an interesting story for undergraduate macro students. In the old days, there was a conflict between Monetarists and Keynesians - Chicago vs. the east coast. In the 1970s there was a conflict between "freshwaters and saltwaters" - CMU/Minnesota/Chicago/Rochester vs. the east coast. But, as the technology has changed, and people and ideas have moved around, it's much harder to identify warring camps, or a war. You'll note that statements 1 and 2 concern very old ideas. Romer didn't give us, say, post-2000 statements along these lines. Why? Because he would have a hard time finding such things, except perhaps on the blogosphere, where people seem to love rehashing old - and long-ago resolved - disputes.

But, researchers in macro - as with researchers in other fields in economics - will split off into groups that are internally relatively homogeneous. That's how we make progress. Persuasion is hard. If we try to work in heterogeneous groups in which we're constantly going back to first principles to justify what we're doing, we're not going to advance much. Sometimes we make the most progress in a group where we can agree on assumptions. I spend some of my time interacting with a group of monetary theorists who share a common view about research methods and direction, and we tend to share an evolving set of models. I've learned a lot from that, and from the continuing relationship with people in the group. And so what if two groups are having a dispute? That's just healthy competition.

So, within economics, is macro unusual? Of course not. Indeed, the whole emphasis of post-1970 macroeconomics is to do it like everyone else. Before 1970, no one would have been discussing macro and Dixit-Stiglitz in the same sentence. Should economics work like physics? Of course not. We're studying very different problems requiring very different methods. Why would you expect economists to behave like physicists?

What's my bottom line? Romer is just leading us through an unproductive conversation - one that's not going to persuade anyone of anything. Here's something that would be more fruitful. Romer's chief beef with the macro profession seems to be that we don't give him enough credit. The two characters who wrote the articles in statements 1 and 2 get plenty of credit. They are well-cited, and they have Nobel prizes. Romer also has plenty of citations, but seems to want something more. I'm not a close follower of research on economic growth, but I see growth papers sometimes, and my familiarity with this stuff is roughly that of your average macroeconomist. Romer made a couple of key contributions to the literature on economic growth early in his career, building on the seminal work of Solow and the optimal growth theorists - Cass and Koopmans for example. Romer's work, and Lucas's for example, was highly influential, and spawned a whole literature - endogenous growth theory.

The hope for this line of research was that we would gain an understanding of the forces behind technological change. This type of research, it was thought, could give us huge rewards. Some countries are extremely poor, while others are extremely rich. If we can figure out how to make the extremely poor extremely rich, this would be a huge payoff for macroeconomic research. My impression - and I could be entirely wrong - is that this line of research has been something of a bust. Most of the insight we have into economic growth and the sources of disparities in standards of living in the world comes mainly through the lens of the Solow growth model, and Solow's paper was published in 1956.

So, I think it is incumbent on Romer, if he wants more credit, and more recognition, to make the case for himself - for his older ideas - and to give us some new ideas. I'm willing to be persuaded, as I'm sure most macroeconomists are. But, arguments about "mathiness," "macro gone wrong," and unsubstantiated charges of dishonesty aren't persuading anyone, as far as I can tell.