Monday, June 29, 2009

Reproducing business cycle features: what for?

There is a tradition of building models that replicate statistical features of the business cycle; this is what real business cycle theory is about: find a model that mimics the data reasonably well, so that one can then use it to study the impact of various policies. These models have had mixed results, in particular regarding the labor market, but this is somewhat understandable: they are obviously abstractions, and they are not fitted to the data. The point, however, is that one ends up with a structural model that can be put to some useful purpose.

So when I stumbled upon a recent paper by James Morley, Jeremy Piger and Pao-Lin Tien on reproducing business cycle facts, I was expecting something along the lines described above. Not so. This paper is about fitting VAR models and the like, with non-linear effects, to business cycle features. What would this be good for? These statistical models are good for short-term forecasting, where one tries to minimize some criterion like out-of-sample forecast errors. Why have them mimic business cycle features instead? They are not even of any use for policy analysis, as they are reduced-form models. Beats me.

5 comments:

Anonymous said...

Consider this example. You develop a new statistical device that you would like to use to study business cycle features and their changes over time. The statistical framework is atheoretical, in the sense that it has no underlying economic theory. Yet it is useful for making inferences about the stylized facts of business cycles (which you would later use to build your theoretical models, as in the RBC literature).

How would you assess the goodness of such a statistical device? You would run Monte Carlos, right? Your results (and your assessment of the properties of the framework) would then depend on how well you are able to replicate business cycle features in your numerical simulations. If you failed to do so, you would not be able to say anything about those statistics.

To put it differently, what is, for example, the "right" data-generating process for GDP to use in a simulation? Well, an econometric model capable of reproducing the features of GDP would do the job.
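
A minimal sketch of this idea in Python, under stated assumptions: the series `gdp_growth` is synthetic, standing in for an observed quarterly GDP growth series, and all parameter values are illustrative. An AR(1) fitted by OLS plays the role of the econometric DGP, and each Monte Carlo replication draws a fresh artificial dataset from it.

```python
# A sketch of the idea above, not a definitive recipe: fit an AR(1) to
# a growth series and use it as the DGP in a Monte Carlo. The series
# `gdp_growth` is synthetic here, standing in for observed data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for quarterly GDP growth (AR(1) around 0.8%).
gdp_growth = 0.8 + rng.normal(0.0, 0.6, 200)
for t in range(1, 200):
    gdp_growth[t] += 0.35 * (gdp_growth[t - 1] - 0.8)

# Fit an AR(1) by OLS: y_t = c + phi * y_{t-1} + e_t.
y, x = gdp_growth[1:], gdp_growth[:-1]
X = np.column_stack([np.ones_like(x), x])
c_hat, phi_hat = np.linalg.lstsq(X, y, rcond=None)[0]
sigma_hat = (y - X @ np.array([c_hat, phi_hat])).std(ddof=2)

def simulate(T, rng):
    """Draw one artificial growth series from the fitted DGP."""
    sim = np.empty(T)
    sim[0] = c_hat / (1.0 - phi_hat)   # start at the unconditional mean
    for t in range(1, T):
        sim[t] = c_hat + phi_hat * sim[t - 1] + rng.normal(0.0, sigma_hat)
    return sim

# Each Monte Carlo replication gets a fresh artificial dataset whose
# moments mimic those of the original series.
artificial_datasets = [simulate(len(gdp_growth), rng) for _ in range(1000)]
```

A richer DGP (more lags, more variables, non-linearities) would be fitted the same way; what matters is that the simulated series reproduce the business cycle moments one cares about.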

Anonymous said...

Something I forgot to mention: the statistical device should have nice asymptotic properties, but it may also rest on numerical methods backed only by weak asymptotics. When you apply it in a time-series framework, you cannot be sure that the conditions needed for the asymptotics to work are satisfied in practice. That is why you need to run Monte Carlos.
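
To illustrate the point, here is a hedged sketch of such a Monte Carlo check, with purely illustrative numbers: an asymptotic t-test on an AR(1) coefficient is run on short samples, where the normal approximation can be poor when persistence is high.

```python
# Illustrative Monte Carlo size check (all numbers hypothetical): the
# asymptotic t-test on an AR(1) coefficient is applied to short samples,
# where the normal approximation can be poor under high persistence.
import numpy as np

rng = np.random.default_rng(1)
T, phi_true, n_rep = 60, 0.9, 5000
rejections = 0

for _ in range(n_rep):
    # Generate a short AR(1) sample under the null phi = phi_true.
    y = np.empty(T)
    y[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - phi_true**2))
    for t in range(1, T):
        y[t] = phi_true * y[t - 1] + rng.normal()

    # OLS estimate of phi and its conventional standard error.
    x, z = y[:-1], y[1:]
    phi_hat = (x @ z) / (x @ x)
    resid = z - phi_hat * x
    se = np.sqrt((resid @ resid) / (T - 2) / (x @ x))

    # Nominal 5% two-sided t-test of H0: phi = phi_true.
    if abs((phi_hat - phi_true) / se) > 1.96:
        rejections += 1

# With T = 60 and phi = 0.9 the empirical size tends to exceed 0.05,
# i.e. the asymptotic approximation over-rejects in small samples.
print("empirical size:", rejections / n_rep)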

Economic Logician said...

So, if I got it right, the goal is to find a process that generates data with the same properties as the business cycle. Presumably, this is then fed into some other model. But then, why not simply use actual GDP data?

Anonymous said...

To run Monte Carlos you need more than one dataset. You need a DGP that reproduces the features of the observed data as closely as possible. If you manage to find such a DGP, you can then use random shocks to that DGP to generate an artificial dataset at each replication of the Monte Carlo experiment. This is not always an easy task.

For a nice example of what I am talking about, see the Monte Carlo experiments in Doyle and Faust (REStat, 2005). There, they use a VAR(1) to match the moments of the growth rates of GDP, consumption, and investment, and they test their statistical device (a parametric bootstrap), which is able to detect changes in correlations, variances, and covariances.
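
For concreteness, here is a rough sketch in the spirit of that exercise, not Doyle and Faust's actual procedure: a VAR(1) fitted by OLS to (here synthetic) growth rates of GDP, consumption, and investment, with Gaussian parametric-bootstrap replications drawn from the fitted model. The array `growth` and all numbers in it are hypothetical stand-ins for real data.

```python
# A rough sketch in the spirit of that exercise, not Doyle and Faust's
# actual procedure: fit a VAR(1) by OLS to the growth rates of GDP,
# consumption, and investment, then draw Gaussian parametric-bootstrap
# replications from the fitted model. `growth` (T x 3) is synthetic
# here; with real data it would hold the three observed series.
import numpy as np

rng = np.random.default_rng(2)
T = 200
growth = rng.multivariate_normal(
    mean=[0.8, 0.7, 1.0],                 # GDP, consumption, investment
    cov=[[0.9, 0.5, 0.7],
         [0.5, 0.6, 0.5],
         [0.7, 0.5, 2.0]],
    size=T,
)

# Fit the VAR(1) y_t = c + A y_{t-1} + u_t equation by equation.
Y = growth[1:]
X = np.column_stack([np.ones(T - 1), growth[:-1]])
B = np.linalg.lstsq(X, Y, rcond=None)[0]  # row 0: intercepts; rows 1-3: A'
Sigma = np.cov(Y - X @ B, rowvar=False)   # residual covariance matrix

def bootstrap_sample(rng):
    """One parametric-bootstrap dataset from the fitted VAR(1)."""
    sim = np.empty_like(growth)
    sim[0] = growth.mean(axis=0)          # start near the sample mean
    shocks = rng.multivariate_normal(np.zeros(3), Sigma, size=T - 1)
    for t in range(1, T):
        sim[t] = B[0] + sim[t - 1] @ B[1:] + shocks[t - 1]
    return sim

# The bootstrap distribution of any moment of interest (here the
# GDP-consumption correlation) can be compared with its sample value.
boot_corr = [np.corrcoef(s[:, 0], s[:, 1])[0, 1]
             for s in (bootstrap_sample(rng) for _ in range(1000))]
```

Comparing a sample moment with its bootstrap distribution is the spirit of the test; the actual procedure in the paper is more elaborate.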

Unknown said...

This is an interesting defence of DSGE models! Econometric models are far from perfect, but they have the added virtue that they explicitly consider the data, and more and more methods have been and are being developed to make sure this is the case, even in times of structural change like now.

Your structural model may have structure, but almost certainly it's totally the wrong structure, calibrated with values the model builder thought were "about right". Sure, you can learn something about your hypothesised economy, but what about the real world of non-homogeneous, non-rational agents?

If nothing else, research like this allows us to characterise what is happening out there, and hopefully the theorists can then develop models that replicate the patterns the data show. I haven't read the paper, but I'd be loath to describe it as bad just because it's a VAR-based, non-theory-based analysis.