David Clingingsmith suggested on Twitter using the hashtag #FieldFriday to “tweet about a favorite paper published in a field journal”. I think it is a great idea! Field journals are important, and work published in these journals deserves much more attention.
Most of the agent-based computational economics (ACE) literature is published in field journals, so it is a perfect opportunity to shed some light on this methodology[mfn]Don’t worry, I will not exclusively post about agent-based models! I have other obsessions too…[/mfn]. I will try to post consistently every Friday on this hashtag. It is a bit of work, but I believe it is definitely worth it!
But as not everyone is on Twitter, I figured it could be interesting to repost my tweets here.
For my first Twitter thread, I decided to take a shot at an important paper that introduced robust sensitivity analysis in the ACE literature. Here it is! You can also read it directly on Twitter.
My take on #FieldFriday:
- “Schumpeter meeting Keynes: A policy-friendly model of endogenous growth and business cycles”
- Giovanni Dosi, Giorgio Fagiolo and Andrea Roventini
- Journal of Economic Dynamics and Control, 2010 #CompEcon
The paper is an agent-based model (ABM) of growth. The subject itself is quite far from my field, so I won’t discuss it.
The reason I chose this paper is its methodology. It’s an important paper for ABMs in economics.
ABMs are computer simulations in which a number of agents take decentralized actions and decisions over a certain length of time (“steps”). As Leigh Tesfatsion puts it, they are “computational Petri dishes”.
These simulations usually don’t have closed form solutions. Their lack of tractability is due to their (extremely) large number of degrees of freedom.
So why use an ABM? Well, because it’s inexpensive to have heterogeneous agents in those models.
Sidenote: their large number of degrees of freedom is due to their decentralized nature.
Each agent has its own set of rules and parameters, and they can evolve at each period of time. So this, times the number of agents. Plus the interactions between the agents…
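To make this concrete, here is a minimal toy sketch (my own illustration, not the paper’s model): each agent carries its own parameter, acts in a decentralized way at every step, and interacts with randomly chosen partners.

```python
import random

class Agent:
    """A toy agent with its own rule parameter (hypothetical example)."""

    def __init__(self, rng):
        self.wealth = 1.0
        # Agent-specific parameter: each agent transfers wealth differently.
        self.propensity = rng.uniform(0.1, 0.9)

    def interact(self, other):
        # Decentralized decision: hand over a fraction of a small stake.
        transfer = self.propensity * min(self.wealth, 0.1)
        self.wealth -= transfer
        other.wealth += transfer

def run(n_agents=100, n_steps=50, seed=0):
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    for _ in range(n_steps):
        # At each step, every agent meets a random partner.
        for agent in agents:
            partner = rng.choice(agents)
            if partner is not agent:
                agent.interact(partner)
    return [agent.wealth for agent in agents]

wealths = run()
```

Even in this tiny example, the number of degrees of freedom is already `n_agents` parameters plus all the random pairings at every step, which is exactly why such models resist closed-form analysis.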
Back to the paper. Because they have so many degrees of freedom, ABMs produce a huge amount of (simulated) data. Even simple models can produce files weighing dozens (or hundreds) of MBs. Exploring those data to find meaningful patterns is a real issue.
Most importantly, how can we be sure that the researchers didn’t pick the specific set of parameters that produced precisely the results they wanted?
After all, with so much data, it shouldn’t be hard to find something that suits what you want to show…
This paper addresses this issue by running Monte Carlo simulations as a sensitivity analysis en.wikipedia.org/wiki/Monte_Carlo_method.
The idea is that once you find a set of parameters that replicates a time series or a stylised fact, you run thousands of simulations with small differences.
By looking at the standard errors across all your simulations, you get an idea of how sensitive your ABM is to a small change in one of its parameters.
If you compute an average for instance, you want the standard error/average ratio to be as small as possible.
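The steps above can be sketched in a few lines. This is a hedged illustration on a hypothetical toy `simulate` function, not the paper’s Schumpeter-meeting-Keynes model: perturb a baseline parameter slightly on each run, collect the outcomes, and check that the standard error stays small relative to the mean.

```python
import random
import statistics

def simulate(growth_rate, seed):
    """Hypothetical toy model: a stochastic growth process."""
    rng = random.Random(seed)
    y = 1.0
    for _ in range(100):
        y *= 1.0 + growth_rate + rng.gauss(0.0, 0.01)
    return y

def sensitivity(baseline=0.02, perturbation=0.001, n_runs=1000):
    """Monte Carlo sensitivity analysis around a baseline parameter."""
    rng = random.Random(42)
    outcomes = []
    for i in range(n_runs):
        # Small random perturbation of the calibrated parameter.
        g = baseline + rng.uniform(-perturbation, perturbation)
        outcomes.append(simulate(g, seed=i))
    mean = statistics.mean(outcomes)
    std_error = statistics.stdev(outcomes) / len(outcomes) ** 0.5
    return mean, std_error

mean, std_error = sensitivity()
ratio = std_error / mean  # small ratio = results robust to perturbations
```

If `ratio` stays small, the reported results are not an artifact of one lucky parameter draw; if it blows up, the model’s conclusions are fragile.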
This paper was the first to introduce this way of thinking, and personally, I think it was an important step forward for ABMs.
Results produced by ABMs have to be trustworthy, and it helped a lot!
That being said, this methodology is still quite young, and there is so much room for new contributions.
It is probably why I am so fascinated by ABMs – that, and because I love to write code that mimics human behavior.
Even though I started working with ABMs in 2015, I am still discovering the literature. It may not be mainstream (yet!), but it is already a widely studied methodology among economists – and others!
Thanks for reading this #FieldFriday thread!
I hope you enjoyed this #FieldFriday thread! Feel free to follow the blog if you want to get notified when a new post is published.