“Experience has shown that Mother Nature is eclectic”
– Paul Samuelson
This recent blog on the merits of eclecticism in macroeconomics seems to have resonated. It follows a theme Dani Rodrik has advocated in his excellent book, reviewed here by Diane Coyle. Simon Wren-Lewis responds with a sympathetic assessment; Noah Smith is more sceptical.
In summary, my central thesis is that it is incorrect to view the main macroeconomic frameworks as “general” theories: they are all best understood as contingent – on the structure of the economy being analysed and on the problem under consideration. I make this claim based on a set of examples. Minskyian ‘models’ are extremely useful in analysing the distribution of cyclical outcomes in recessions, or the probability that an economic downturn could coincide with a financial panic. At the same time, they tell you very little – if anything – about the outlook for inflation. Rational expectations models are extremely useful when thinking about counter-cyclical policy constraints in emerging markets, but useless when thinking about whether or not helicopter drops need to be “permanent” (as discussed in this excellent piece by Martin Sandbu).
Simon Wren-Lewis addresses a slightly different question – whether “mainstream academic macroeconomics is eclectic”. I am not usually a fan of categories such as “mainstream”, “heterodox”, or even “academic” when it comes to economics – in part because I think they’re a distraction, but also because they may not be meaningful: Is Fama mainstream? Is Stiglitz heterodox? Was Paul Samuelson’s work on consumption theory mainstream, but his work on Richard Goodwin or Marxian economics, heterodox? Is research done by the BIS “academic”? Or for that matter, Goldman Sachs? In which category do I put one of my favourite economists, Fisher Black? Or someone I never hear anyone discuss, Mordecai Kurz?[1]
That said, I protest too much – and I know what Simon is getting at. There is such a thing as “mainstream macro”: let’s just call it the set of new Keynesian models which we find on a typical post-graduate economics course, and which form the basis of most of the macro models central banks use for their forecasts.
Simon raises some very important points, which I want to come back to. But before doing so, it is worth addressing Noah’s concerns, because he suggests that eclecticism is inherently problematic.
Noah’s problem is very straightforward. He starts with Simon’s summary of my argument – “The big models/schools of thought are not right or wrong, they are just more or less applicable to different situations” – and asks, “For situationalism [eclecticism] to be useful, you need to have some way of telling which model to use ex ante.”
Noah observes that this is a problem in “any field where there are alternative models”. I think the ex post/ex ante problem he describes is a much more general problem, particularly in areas like economics where you cannot assume the “uniformity of nature”, to borrow a phrase from Hume. Rigorous testing in all areas usually requires success out of sample: explaining the world ex post is rarely useful; we want to explain it correctly ex ante. But that is a general problem, not one unique to eclecticism. In economics, it is actually less of a problem for eclecticism. Here’s why.
Hume’s idea of the uniformity of nature could be described as saying that a theory which was true remains true. This is a reasonable assumption in the natural sciences because the nature of the world under study is plausibly constant through time. The physical properties of a vacuum should not depend on whether it is 1970 or 2000.[2] To be clear, this does not mean that better theories are unavailable; it is simply a reflection of the fact that the phenomena in need of explanation have constant properties.
Now this is definitely not true in much of economics, because the structure of the economy is changing. It is highly likely that a model which explained wage behaviour in the 1970s – and had predictive power then – is no longer valid.
In fact, that is precisely why an eclectic approach is more rigorous: it requires us to define the regime in which the theory is applicable (perhaps by requiring valid microfoundations), and we make no claim to universal validity.
Noah implies, mistakenly in my view, that in some sense this renders eclecticism less testable. That is not true. I can say that a rational expectations model of inflation determination is more valid in an economy with a high degree of indexation – for example, Argentina – than in a deregulated economy such as the United States. That generates a testable hypothesis about the effects of a currency devaluation on inflation.
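To make the testability concrete, here is a minimal simulation sketch of that pass-through logic. It is not a model from the literature; the function, the parameter values, and the geometric-decay assumption are all invented purely for illustration.

```python
# Minimal sketch of the pass-through hypothesis. All parameters are
# illustrative assumptions, not estimates for Argentina or the US.

def inflation_path(indexation, devaluation=0.10, import_share=0.3, periods=8):
    """Quarterly inflation impulse after a one-off devaluation.

    indexation: fraction of last period's inflation fed back into
    current wages and prices (high where indexation clauses are common).
    """
    pi = import_share * devaluation  # first-round import-price effect
    path = []
    for _ in range(periods):
        path.append(pi)
        pi = indexation * pi  # indexed contracts carry inflation forward
    return path

# High indexation: the devaluation's inflationary impulse persists.
print("high indexation:", [round(x, 3) for x in inflation_path(0.9)])
# Low indexation: the impulse dies out quickly.
print("low indexation: ", [round(x, 3) for x in inflation_path(0.2)])
```

The regime-conditional prediction is the persistence of the inflation response after the devaluation – and that difference is observable ex ante, not just ex post.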
Similarly, I can say that an economy with higher levels of leverage in its private sector is likely to suffer more severe recessions than a less levered economy – that too is a testable hypothesis, generated by a Minskyian framework, on which a typical new Keynesian model is silent.
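And here is a minimal sketch of how that leverage hypothesis could be taken to data: regress recession depth on pre-recession private-sector leverage and test whether the slope is positive. The country observations below are invented purely for illustration; a real test would use something like the BIS credit-to-GDP series.

```python
# Sketch of the cross-sectional test: does pre-recession private-sector
# leverage predict recession depth? Data are invented for illustration.
import numpy as np

leverage = np.array([80, 120, 160, 200, 240, 100, 180, 220])  # credit/GDP, %
depth = np.array([2.1, 3.0, 4.2, 5.1, 6.3, 2.5, 4.8, 5.9])    # GDP fall, %

# OLS fit: the Minskyian hypothesis predicts a positive slope (beta > 0).
beta, alpha = np.polyfit(leverage, depth, 1)

# Rough t-statistic for the slope against the null hypothesis beta = 0.
resid = depth - (alpha + beta * leverage)
n = len(leverage)
s2 = (resid ** 2).sum() / (n - 2)                              # residual variance
se = np.sqrt(s2 / ((leverage - leverage.mean()) ** 2).sum())   # std. error of slope
print(f"beta = {beta:.4f} per point of leverage, t = {beta / se:.2f}")
```

A slope indistinguishable from zero, or of the wrong sign, especially out of sample, would count against the framework – which is the point: the eclectic claim is conditional but still falsifiable.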
In summary, I see nothing in an eclectic approach to macroeconomics which is inconsistent with valid hypothesis testing.
The problems Simon raises are deeper. As a general observation on the practice of economics in academia and in policy-making, I do think that there was a pre-crisis trend away from eclecticism. For example, when I read a book like Victor Zarnowitz’s classic on business cycles, written in the early 1990s – and presumably “mainstream” (published by the NBER, with a h/t to Brad DeLong in the preface) – it is unashamedly eclectic, with detailed analysis of cycles from Minsky to Bernanke/Gertler to Lucas, and a very clear sense of the circumstances in which each perspective may be helpful. But by the mid-2000s the transformation and narrowing of focus in mainstream academic macro is striking – as this book in the same NBER series, edited by Ben Bernanke, attests. I cannot help but notice a coincident trend towards more formal modelling. Formal difficulty may pose a challenge to eclecticism, because there is simply too much to master.[3]
My reading of Simon is also that he views the focus on “microfoundations” in macroeconomics as an obstacle to eclecticism. He is surely right, but I would phrase the problem differently. As I have argued before, in response to Brad DeLong, the problem is not microfoundations per se, but which microfoundations we choose to use. I am an eclectic on “microfoundations” too!
It is a great shame that the only methodology all economists know is a naive form of Milton Friedman’s. Friedman is famously stereotyped as showing that implausible (i.e. empirically false) assumptions are fine as long as the theory they produce tests positive. If anything, the opposite should hold: if your assumptions are patently false, maintain a healthy scepticism about your theory’s positive test results.[4]
Armed with this pseudo-methodology, economists have tended to care more about simply having “microfoundations” than about having empirically valid microfoundations.
I have no problem analysing, for example, the short end of the yield curve using a standard micro-founded efficient-markets view of the world, but I think that’s bonkers when thinking about the effects of pay on productivity, or the effects of tax cuts on consumer spending.
Eclecticism applies to the relevance of macro models, and to microfoundations too.
[1] As an aside, I don’t know where I’d put the brilliant Stephen Kinsella, but I hope the students in Limerick realise how lucky they are to be getting lecture notes like these!
[2] Before a physicist gets back to me and tells me this is in fact false, let me say this example is used for illustrative purposes only!
[3] Paul Samuelson was an unabashed eclectic – he may also have been the last economist to be able to claim he could converse at the highest level across the field, as Robert Merton suggests in this great interview.
[4] As is almost always the case, his original argument is far subtler than the version reiterated by his disciples and embedded in the subconscious of many economists. This from Dan Hausman is worth reading.