
Abstract

In mainstream economics, and particularly in New Keynesian macroeconomics, the booms and busts that characterize capitalism arise because of large external shocks. The combination of these shocks and the slow adjustments of wages and prices by rational agents leads to cyclical movements. In this book, Paul De Grauwe argues for a different macroeconomic model, one that works with an internal explanation of the business cycle and factors in agents' limited cognitive abilities. By creating a behavioral model that is not dependent on the prevailing concept of rationality, De Grauwe is better able to explain the fluctuations of economic activity that are an endemic feature of market economies. This new approach illustrates a richer macroeconomic dynamic that provides for a better understanding of fluctuations in output and inflation. De Grauwe shows that the behavioral model is driven by self-fulfilling waves of optimism and pessimism, or animal spirits. Booms and busts in economic activity are therefore natural outcomes of a behavioral model. The author uses this to analyze central issues in monetary policies, such as output stabilization, before extending his investigation into asset markets and more sophisticated forecasting rules. He also examines how well the theoretical predictions of the behavioral model perform when confronted with empirical data.
BEHAVIORAL MACROECONOMICS
Paul De Grauwe
University of Leuven
September 2010
Table of Contents
Introduction
Chapter 1: The New Keynesian Macroeconomic Model
Chapter 2: The Scientific Foundation of the New Keynesian Macroeconomic
Model
Chapter 3: A Behavioral Macroeconomic Model
Chapter 4: The Transmission of Shocks
Chapter 5: Optimal Monetary Policy
Chapter 6: Flexibility, Animal Spirits and Stabilization
Chapter 7: Stock Prices and Monetary Policy
Chapter 8: Extensions of the Basic Model
Chapter 9: Empirical Issues
Conclusion
Introduction
Until the eruption of the financial crisis in 2007 it looked as if macroeconomics
had achieved the pinnacle of scientific success. The industrial world experienced
a time of great macroeconomic stability with low and stable inflation, high and
sustained economic growth, and low volatility of many economic and financial
variables. Economists were debating the causes of this “Great Moderation” and
there was a general consensus that at least part of it was due to the new
scientific insights provided by modern macroeconomic theory. This theory
embodied the rational agent, who continuously optimizes his utility using all
available information. In this world where individual agents make no systematic
mistakes, stability reigns. Sure, there was a recognition that macroeconomic
variables could be subjected to large changes, but these changes always found
their source outside the world of these rational agents. If left alone the latter,
with the help of efficient markets, would produce their wonderful stabilizing
work. The macroeconomy was modeled as a world of rationality and supreme
understanding that unfortunately was regularly hit by outside disturbances.
It is no exaggeration to state that the financial and economic upheavals following
the crash in the US subprime market have undermined this idyllic view of
stability created in a world of fully rational and fully informed agents. These
upheavals have also strengthened the view of those who have argued that
macroeconomics must take into account departures from rationality, in
particular departures from the assumption of rational expectations.
There is a risk, of course, in trying to model departures from rationality and
rational expectations. The proponents of the paradigm of the fully informed
rational agent have told us that there are millions of different ways one can
depart from rationality. There is thus no hope to come to any meaningful
conclusion once we wander into the world of irrationality. This argument has
been very powerful. It has been used to discredit any attempt to depart from the
rational and fully informed agent paradigm. As a result, many academic
researchers have been discouraged from departing from the mainstream
macroeconomic theory.
The problem with the objection that “everything becomes possible when we
move into the territory of irrationality” is that it is based on the view that there is
only one possible formulation of what a rational agent is. This is the formulation
now found in mainstream macroeconomic models. It is my contention that one
can depart from that particular formulation of rationality without having to
wander in the dark world of irrationality.
My intention is to show that once we accept the notion that individuals have
cognitive limitations, and thus are not capable of understanding the full
complexity of the world (as is routinely assumed in the mainstream
macroeconomic models), it is possible to develop models based on a different
notion of rationality. I also intend to show that this leads to a richer
macroeconomic dynamics that comes closer to the observed dynamics of output
and inflation than the one produced by the mainstream macroeconomic models.
I will start by presenting the standard macroeconomic model, i.e. the New
Keynesian model, whose most successful embodiment is the Dynamic Stochastic
General Equilibrium (DSGE) Model (Chapter 1). This will be followed by a critical
analysis of this model (Chapter 2). Having cleared the path, I will then present
the main alternative model, the behavioral macroeconomic model, in the
subsequent chapters (chapters 3 and 4). This will then lead to an analysis of
monetary policies in such a behavioral model (chapters 5 and 6). The next two
chapters will discuss the extensions to the basic model. One extension is to
introduce asset markets in the model (chapter 7); another extension
incorporates a richer menu of forecasting rules than the ones used in the basic
model (chapter 8). Finally in chapter 9, I discuss some empirical issues relating
to the question of how well the theoretical predictions of the behavioral model
perform when confronted with the data.
Clearly this is not a definitive book. As the reader will find out, in much of the
material that will be presented, there are loose ends and unresolved issues. My
intention is to explore new ways of thinking about the macroeconomy; ways of
thinking that depart from mainstream thinking which in my opinion has turned
out to be unhelpful in understanding why output and inflation fluctuate as they
do in the real world.
I developed many of the ideas in this book through debate with colleagues
during seminars and at other occasions. Without implicating them I would like to
thank Yunus Aksoy, Tony Atkinson, William Branch, Carl Chiarella, Domenico
delli Gatti, Stephan Fahr, Daniel Gros, Richard Harrison, Timo Henckel, Cars
Hommes, Romain Houssa, Gerhard Illing, Mordecai Kurz, Pablo Rovira
Kaltwasser, Christian Keuschnigg, Alan Kirman, Giovanni Lombardo, Lars
Ljungqvist, Patrick Minford, John Muellbauer, Ilbas Pelin, Bruce Preston, Frank
Smets, Robert Solow, Leopold von Thadden, David Vines, Mike Wickens, and
Tony Yates.
CHAPTER 1: THE NEW KEYNESIAN MACROECONOMIC MODEL
1. Introduction
In this chapter the standard macroeconomic model as it has evolved in the last
few decades is presented. This model has received different names. It is
sometimes called the New Keynesian macroeconomic model (in contrast with
the "New Classical" macroeconomic model). It is also often called the "Dynamic
Stochastic General Equilibrium model" (DSGE-model). The main features of this
model are the following.
First, it has a micro-foundation, i.e. consumers are assumed to maximize their
utilities and producers their profits in a dynamic (multi-period) context. This
implies that macroeconomic equations should be derived from this optimizing
behavior of consumers and producers.
Second, consumers and producers are assumed to have Rational Expectations
(RE), i.e. they make forecasts using all available information, including the
information embedded in the model. The assumption of RE also implies that
agents know the true statistical distribution of all shocks hitting the economy.
They then use this information in their optimization procedure. Since consumers
and producers all use the same information, we can just take one representative
consumer and producer to model the whole economy. There is no heterogeneity
in the behavior of consumers and producers.
Third, and this is the New Keynesian feature, it is assumed that prices do not
adjust instantaneously. Although firms continuously optimize there are
institutional constraints on the speed with which they can adjust prices to their
optimal level. This feature contrasts with the New Classical model (sometimes
also labeled “Real Business Cycle” model) that assumes perfect price flexibility.
Although we will use the New Keynesian model as our benchmark model we will
sometimes contrast the results of this model with those obtained in the New
Classical model.
This chapter does not go into all the details of the New Keynesian model. For that
there are excellent textbooks (e.g. Gali(2008), Woodford(2003) and
Walsh(2008)). The purpose of the present chapter is to set the stage for a
comparison of the behavioral model that will be developed in subsequent
chapters with the standard New Keynesian model. We will also subject the New
Keynesian model (DSGE-model) to a methodological criticism in the next
chapter.
2. The New Keynesian macroeconomic model
We will use the simplest possible representation of the model here.
The aggregate demand equation is derived from utility maximization of a
representative consumer. How this is done is shown in Gali(2008), Chapter 2.
Here we represent the result:
y_t = E_t y_{t+1} + a_2 (r_t − E_t π_{t+1}) + ε_t    (1.1)

where y_t is the output gap in period t, r_t is the nominal interest rate, π_t is the
rate of inflation, and ε_t is a white noise disturbance term. E_t represents the
expectations (forecasts) made in period t by the representative consumer. These
expectations are assumed to be formed using all available information (rational
expectations). Note that (r_t − E_t π_{t+1}) is the real interest rate and a_2 < 0.
The aggregate demand equation has a very simple interpretation. Utility
maximizing agents will want to spend more on goods and services today when
they expect future income (output gap) to increase and to spend less when the
real interest rate increases.
The aggregate supply equation is derived from profit maximization of individual
producers (see Gali(2008), chapter 3). In addition, it is assumed that producers
cannot adjust their prices instantaneously. Instead, for institutional reasons, they
have to wait to adjust their prices. The most popular specification of this price
adjustment mechanism is the Calvo pricing mechanism (Calvo(1983); for a
criticism see McCallum(2005)). This assumes that in period t, a fraction θ of
prices remains unchanged. Under those conditions the aggregate supply
equation (which is often referred to as the New Keynesian Phillips curve) can be
derived as:

π_t = β E_t π_{t+1} + κ y_t + η_t    (1.2)

1 Other sources are Woodford(2002), or Minford and Ou(2009).
where the parameter κ is a complex expression of the underlying parameters of
the optimizing model (see Gali(2008)); κ is a function of the fraction θ, which can
be interpreted as an index of price rigidity. When θ = 0 (i.e. there is no price
rigidity) κ = ∞, and when θ = 1 (i.e. there is complete price rigidity) κ = 0. The
former case corresponds to a vertical aggregate supply curve (the classical case)
while the latter case corresponds to a horizontal aggregate supply curve (the
Keynesian case).
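For reference, a minimal sketch of how κ depends on θ in the baseline Calvo setting of Gali(2008), with all preference and technology parameters that do not involve θ collected here in a positive constant Θ (the notation Θ is introduced for illustration only; the full expression is derived in Gali(2008)):

κ = [(1 − θ)(1 − βθ) / θ] · Θ,   Θ > 0

so that κ → ∞ as θ → 0 and κ → 0 as θ → 1, which corresponds to the two limiting cases just discussed.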
The previous two equations determine the two endogenous variables, inflation
and output gap, given the nominal interest rate. The model has to be closed by
specifying the way the nominal interest rate is determined. The most popular
way to do this has been to invoke the Taylor rule (see Taylor(1993)). This rule
describes the behavior of the central bank. It is usually written as follows:
r_t = c_1 (π_t − π*) + c_2 y_t + c_3 r_{t−1} + u_t    (1.3)

where π* is the inflation target. Thus the central bank is assumed to raise the
interest rate when the observed inflation rate increases relative to the announced
inflation target. The intensity with which it does this is measured by the
coefficient c1. Similarly when the output gap increases the central bank is
assumed to raise the interest rate. The intensity with which it does this is
measured by c2. The latter parameter then also tells us something about the
ambitions the central bank has to stabilize output. A central bank that does not
care about output stabilization sets c2=0. We say that this central bank applies
strict inflation targeting. Finally note that, as is commonly done, the central bank
is assumed to smooth the interest rate. This smoothing behavior is represented
by the lagged interest rate in equation (1.3).
The parameter c1 is important. It has been shown (see Woodford(2003), chapter
4, or Gali(2008)) that it must exceed 1 for the model to be stable. This is also
sometimes called the “Taylor principle”.
Ideally, the Taylor rule should be formulated using a forward-looking inflation
variable, i.e. central banks set the interest rate on the basis of their forecasts
about the rate of inflation. This is not done here in order to maintain simplicity in
the model (again see Woodford(2003), p. 257)2.
We have added error terms in each of the three equations. These error terms
describe the nature of the different shocks that can hit the economy. There are
demand shocks, ε_t, supply shocks, η_t, and interest rate shocks, u_t. We will
generally assume that these shocks are normally distributed with mean zero and
a constant standard deviation. Agents with rational expectations are assumed to
know the distribution of these shocks. It will turn out that this is quite a crucial
assumption.
The model consisting of equations (1.1) to (1.3) can be written in matrix notation
as follows:
Ω Z_t = Φ E_t Z_{t+1} + v_t    (1.4)

Z_t = Ω^{−1} [Φ E_t Z_{t+1} + v_t]    (1.5)

where Z_t = [π_t, y_t, r_t]′ is the vector of endogenous variables, Ω and Φ are the
matrices of structural coefficients of equations (1.1) to (1.3), and v_t collects the
disturbance terms (η_t, ε_t, u_t) together with the remaining predetermined terms.
This model can be solved under rational expectations. There are several ways
one can do this (see Minford and Peel(1983), Walsh(2003)). Here we will use
numerical methods to solve the system mainly because the behavioral model
proposed in future chapters is highly non-linear (in contrast with the present
model which is linear) necessitating the use of numerical solution techniques.
We use the Binder-Pesaran(1996) procedure. The Matlab code is provided in the
appendix. The numerical values of the parameters are also presented in the
appendix. They are based on values commonly used in these models (see
Gali(2008), p. 52).
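To make the solution procedure concrete, the following sketch solves the system (1.1)-(1.3) with a Binder-Pesaran-style fixed-point iteration and traces the impulse response to an interest rate shock. It is written in Python rather than Matlab, the parameter values are purely illustrative (they are not the appendix values), and the inflation target is normalized to zero; the code only illustrates the mechanics, not the exact setup used for the figures below.

```python
import numpy as np

# Illustrative parameters (not the appendix values); pi* normalized to zero.
beta, kappa, a2 = 0.99, 0.05, -0.2     # supply and demand coefficients, a2 < 0
c1, c2, c3 = 1.5, 0.5, 0.5             # Taylor rule coefficients

# Stack (1.1)-(1.3) as  Omega Z_t = Phi E_t Z_{t+1} + Lam Z_{t-1} + shocks_t,
# with Z_t = (pi_t, y_t, r_t)'.
Omega = np.array([[1.0, -kappa, 0.0],
                  [0.0, 1.0, -a2],
                  [-c1, -c2, 1.0]])
Phi = np.array([[beta, 0.0, 0.0],
                [-a2, 1.0, 0.0],
                [0.0, 0.0, 0.0]])
Lam = np.array([[0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0],
                [0.0, 0.0, c3]])

# Fixed-point iteration on the solution Z_t = A Z_{t-1} + B shocks_t:
# A = (Omega - Phi A)^(-1) Lam  and  B = (Omega - Phi A)^(-1).
A = np.zeros((3, 3))
for _ in range(1000):
    A_new = np.linalg.solve(Omega - Phi @ A, Lam)
    if np.max(np.abs(A_new - A)) < 1e-12:
        A = A_new
        break
    A = A_new
B = np.linalg.inv(Omega - Phi @ A)

# Impulse response to a one-time unit shock to u_t (the interest rate equation).
# Because the model is linear and the benchmark path is zero, the path after the
# shock is itself the impulse response (shocked simulation minus benchmark).
T = 20
Z = np.zeros((T, 3))
Z[0] = B @ np.array([0.0, 0.0, 1.0])   # shock vector ordered as (eta, eps, u)
for t in range(1, T):
    Z[t] = A @ Z[t - 1]
print("output gap response:", np.round(Z[:, 1], 4))
print("inflation response: ", np.round(Z[:, 0], 4))
```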
We use the model to analyze two questions that will also return later when we
develop the behavioral model. The first question relates to the effectiveness of
monetary policy in influencing output and inflation, and the conditions under
2 As is shown in Woodford(2003) forward looking Taylor rules may not lead to a determinate solution
even if the Taylor principle is satisfied.
which it achieves this effectiveness. This question will also lead us to an analysis
of optimal monetary policy. The second question has to do with the capacity of
this model to generate business cycle movements.
3. What monetary policy can do.
How effective is monetary policy in influencing output and inflation? This is the
question we analyze in this section. We do this by computing the impulse
responses to an interest rate shock. We assume this shock to be an unanticipated
increase in the interest rate, i.e. an increase in the stochastic shock ut in equation
(1.3) by 1 percentage point (100 basis points). This shock occurs in period 1.
The impulse response function then traces how output and inflation are affected
over time3. We apply the same shock for different values of the parameter κ in
the New Keynesian Phillips curve which, as was mentioned earlier, measures the
degree of price flexibility.
From Figure 1.1 we draw the following conclusion. First, when price rigidity is
high (low flexibility), an unanticipated increase in the interest rate has a
relatively strong effect on the output gap. The effect on inflation is then relatively
small. Thus with price rigidity, monetary policy can have a strong “real” effect
(i.e. an effect on output). Second, the effect of monetary policy on the output gap
is temporary. After some time the output gap returns to its equilibrium value.
Third, when flexibility increases, the impact of a change in the interest rate on
output declines, while the impact on the rate of inflation increases. In the limit
when flexibility is perfect (κ = ∞) changes in the interest rate have no impact on
the output gap anymore. It is said that monetary policy is neutral: it does not
affect real variables; it only affects the rate of inflation.
The previous results lead to the conclusion that the existence of price rigidities
creates a potential for the central bank to influence output. This is the rationale
3 These impulse response functions describe the path of one of the endogenous variables (output
gap, inflation) following the occurrence of the shock. In order to do so we simulate two series of
these endogenous variables. One is the series without the shock (the benchmark series); the
other is the series with the shock. We then subtract the first from the second one. This yields a
new series, the impulse response, that shows how the endogenous variable that embodies the
shock evolves relative to the benchmark.
provided by the New Keynesian macroeconomic model for output stabilization. Such
a rationale does not exist in the New Classical model that assumes perfect price
flexibility.

Figure 1.1: Impulse responses of output and inflation to interest rate increase
(panels: low flexibility, κ = 0.05; medium flexibility, κ = 0.5; high flexibility, κ = 5)
The fact that the central bank can affect output in the presence of price rigidities
does not necessarily mean that it should do so. In order to analyze the
desirability of such stabilization one has to perform a welfare analysis of
stabilization policies. The way we will do this here is to derive the trade-off
between inflation and output variability faced by the central bank.
4. What monetary policy should do: trade-off between output and inflation
variability.
The tradeoffs are constructed as follows. Figure 1.2 shows how output variability
(panel a) and inflation variability (panel b) change as the output coefficient (c2)
in the Taylor rule increases from 0 to 1. Each line represents the outcome for
different values of the inflation coefficient (c1) in the Taylor rule.
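A rough sketch of how such trade-offs can be traced out numerically is given below: for each combination of c1 and c2 the model of section 2 is solved and simulated with white-noise shocks, and the standard deviations of inflation and the output gap are recorded. The parameter values and shock sizes are illustrative assumptions, not those used for Figure 1.2.

```python
import numpy as np

beta, kappa, a2, c3 = 0.99, 0.5, -0.2, 0.5   # illustrative values, pi* = 0
rng = np.random.default_rng(0)

def solve(c1, c2):
    """MSV solution Z_t = A Z_{t-1} + B shocks_t of the model (1.1)-(1.3)."""
    Omega = np.array([[1, -kappa, 0], [0, 1, -a2], [-c1, -c2, 1]], dtype=float)
    Phi = np.array([[beta, 0, 0], [-a2, 1, 0], [0, 0, 0]], dtype=float)
    Lam = np.array([[0, 0, 0], [0, 0, 0], [0, 0, c3]], dtype=float)
    A = np.zeros((3, 3))
    for _ in range(2000):                    # fixed-point iteration as before
        A = np.linalg.solve(Omega - Phi @ A, Lam)
    return A, np.linalg.inv(Omega - Phi @ A)

def volatilities(c1, c2, T=5000):
    """Standard deviations of inflation and the output gap in a stochastic simulation."""
    A, B = solve(c1, c2)
    Z = np.zeros(3)
    sims = np.empty((T, 3))
    for t in range(T):
        Z = A @ Z + B @ rng.normal(scale=0.5, size=3)   # (eta, eps, u) shocks
        sims[t] = Z
    return sims[:, 0].std(), sims[:, 1].std()

# One trade-off line per value of c1: vary c2 from 0 to 1 and record both volatilities.
for c1 in (1.5, 2.0):
    line = [volatilities(c1, c2) for c2 in np.linspace(0.0, 1.0, 6)]
    print(f"c1 = {c1}:", [(round(sp, 3), round(sy, 3)) for sp, sy in line])
```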
Panel a showing the evolution of output variability exhibits the expected result:
as the output coefficient increases (i.e. the central bank increases its attempt at
stabilizing output) output variability tends to decrease. This decline in output
variability, however, comes at a price. This is shown in panel b. The decline in
output variability resulting from more active stabilization comes at the cost of
more inflation variability. Note also that when the inflation coefficient (c1)
increases the variability of output increases. This is seen from the fact that in
panel a the lines shift up when c1 increases. Thus more attention by the central
bank for inflation control increases output variability. In panel b the opposite
happens: when c1 increases the variability of inflation declines.
The combination of panels a and b into one allows us to derive the trade-off. This
is shown in panel c of Figure 1.2, which has the variability of inflation on the
vertical axis and the variability of output on the horizontal axis. We now obtain
downward sloping lines (trade-offs). These represent the price the central bank
pays in terms of inflation variability when it attempts to reduce output volatility.
Put differently, when the central bank succeeds in reducing output variability it
does this at the cost of higher inflation variability. We obtain a different trade-off
for every value of the inflation parameter c1, and we observe that when c1
increases the trade-off exhibits a downward movement. Thus, stricter inflation
targeting tends to improve the trade-off. There is a limit though. For values of c1
around 2 the downward shifts tend to stop.
We achieve an efficient frontier, i.e. the lowest possible combinations of inflation
and output volatility that can be achieved by the central bank. There is a choice to
be made. This choice will then depend on the preferences of the central bank,
which hopefully represent the preferences of society.

Figure 1.2: Trade-offs between inflation and output variability (panel a, panel b, panel c)

This analysis of the optimal amount of stabilization in the New Keynesian model
is elegant and important. Its power, of course, depends on the existence of price
rigidities. In the absence of such rigidities, the trade-off between inflation and
output volatility disappears. In the limit of complete price flexibility (κ = ∞) the
lines in panel a become horizontal and coincide for all values of c1, i.e. when the
central bank increases its output stabilization attempts this has no effect on the
volatility of output; in panel b these lines are also horizontal but shift down when
c1 increases, i.e. the central bank can reduce inflation volatility by tighter
inflation targeting. All this then results in a vertical trade-off in panel c, i.e. the
central bank can only affect inflation volatility, not output volatility.
5. The business cycle theory of the New Keynesian model
Capitalism is characterized by booms and busts; periods of strong growth in
output followed by periods of declines in economic growth. Every macro-
economic theory should make an attempt at explaining these endemic business
cycle movements. How does the New Keynesian model explain booms and busts
in economic activity?
In order to answer this question it is useful to present some stylized facts about
the cyclical movements of output. In Figure 1.3 we show the movements of the
output gap in the US since 1960. We observe strong cyclical movements. These
cyclical movements imply that there is strong autocorrelation in the output gap
numbers, i.e. the output gap in period t is strongly correlated with the output gap
in period t-1. The intuition is that if there are cyclical movements we will observe
clustering of good and bad times. A positive (negative) output gap is likely to be
followed by a positive (negative) output gap in the next period. This is what we
find for the US output gap over the period 1960-2009: the autocorrelation
coefficient is 0.94. Similar autocorrelation coefficients are found in other
countries.
A second stylized fact found about the movements in the output gap is that these
are not normally distributed. We show the evidence for the US in Figure 1.4. We
find, first, that there is excess kurtosis (kurtosis= 3.62), which means that there
is too much concentration of observations around the mean. Second, we find that
there are fat tails, i.e. there are more large movements in the output gap than is
compatible with the normal distribution. This also means that if we were basing
our forecasts on the normal distribution we would underestimate the probability
that in any one period a large increase or decrease in the output gap can occur.
Finally, the Jarque-Bera test leads to a formal rejection of normality of the
movements in the US output gap series.
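These stylized facts (lag-1 autocorrelation, kurtosis and the Jarque-Bera normality test) can be computed with a few lines of code. The sketch below is generic; the white-noise example at the end is illustrative and is not the actual US output gap series.

```python
import numpy as np
from scipy import stats

def cycle_stats(gap):
    """Lag-1 autocorrelation, kurtosis and Jarque-Bera test of an output gap series."""
    gap = np.asarray(gap, dtype=float)
    autocorr = np.corrcoef(gap[:-1], gap[1:])[0, 1]
    kurt = stats.kurtosis(gap, fisher=False)        # the normal distribution has kurtosis 3
    jb_stat, jb_pvalue = stats.jarque_bera(gap)
    return {"autocorrelation": autocorr, "kurtosis": kurt,
            "jarque_bera": jb_stat, "p_value": jb_pvalue}

# With white noise the autocorrelation is close to 0, the kurtosis close to 3, and the
# Jarque-Bera test cannot reject normality; the US output gap behaves very differently.
print(cycle_stats(np.random.default_rng(0).normal(size=200)))
```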
Figure 1.3: Output Gap US 1960-2009
Source: US Department of Commerce and Congressional Budget Office

Figure 1.4: Frequency distribution of US output gap (1960-2009)
Source: US Department of Commerce and Congressional Budget Office
kurtosis: 3.61; Jarque-Bera: 7.17 with p-value=0.027

Against this empirical background we analyze the movements of the output gap
as predicted by the New Keynesian model. We first show simulations of the
output gap under price rigidity, assuming white noise shocks, in figure 1.5. It is
immediately clear that the New Keynesian model fails to mimic the typical
business cycle movements in the output gap identified earlier. These movements
in Figure 1.5 are essentially white noise. The auto-correlation coefficient
between subsequent output gaps is only 0.06 suggesting that there is no
correlation between subsequent output gaps, and thus very little cyclical
movement.
The lower panel of figure 1.5 shows the frequency distribution of the simulated
output gaps. We observe that these output gaps are normally distributed (The
Jarque-Bera test is unable to reject normality). This contrasts very much with the
non-normality observed in the real data (Figure 1.4).
Thus the simple New Keynesian model fails to capture the typical features of real
life business cycle movements, i.e. the correlation between subsequent
observations of the output gap (autocorrelation) and the occurrence of large
booms and busts (fat tails). In this sense one can say that the New Keynesian
model presented in the previous sections does not have an interesting theory of
the business cycle. In fact it has no theory of the business cycle.
In order to produce a model that comes closer in mimicking real-life business
cycle movements New Keynesians have adjusted the basic model in two ways.
The first one consisted in adding lags in the demand and supply equations so as
to produce more inertia and to better mimic the autocorrelation in the data. The
second one consisted in adding more structure in the error terms in the different
equations of the model. We now discuss these two attempts.
Figure 1.5: Simulated output gap in New Keynesian model
kurtosis: 3.0; Jarque-Bera: 0.01 with p-value=0.5

6. The need for inertia

There are two features of the New Keynesian model discussed in the previous
sections that are unsatisfactory and that could explain the lack of predictive
power of the model. The first one is that it assumes that consumers adjust their
optimal plans instantaneously. This is unrealistic. There is, for example, habit
formation that has the effect of slowing down the adjustment of consumption to
its optimal level after some shock. In order to deal with this, macro-economists
have introduced habit formation explicitly in consumers’ optimization problem
(see Smets and Wouters(2007)). This has the effect of introducing a lag in the
aggregate demand equation. The aggregate demand equation is therefore
rewritten as:
y_t = a_1 E_t y_{t+1} + (1 − a_1) y_{t−1} + a_2 (r_t − E_t π_{t+1}) + ε_t    (1.6)

where 0 < a1 < 1.
We have added a lagged output gap. As a result, there is now a forward-looking
component in the aggregate demand equation (the first term on the right hand
side) and a backward looking component (the second term). a1 tends to decline
with the degree of habit formation.
A second unsatisfactory feature of the New Keynesian model of the previous
section is that it relies too much on the Calvo price formation model. In this
model firms get a lottery ticket that will determine whether in period t they will
be allowed to adjust their price. If they draw the wrong number they have to
wait until a new period when again they get a new lottery ticket. This is certainly
an unattractive feature, mainly because the price adjustment process in the
Calvo pricing model is state independent (McCallum(2005)). In practice the
desire to adjust prices will very much depend on the state of the economy. One
way out of this problem is to assume that firms that have drawn the wrong
lottery number will adjust their prices using an indexing procedure (i.e. index to
previous prices). This extension of the Calvo pricing model is now the standard
procedure in New Keynesian models (DSGE-models). We follow this procedure
here. The new aggregate supply equation (New Keynesian Phillips curve)
becomes:

π_t = b_1 E_t π_{t+1} + (1 − b_1) π_{t−1} + b_2 y_t + η_t    (1.7)
As in the demand equation we obtain a forward looking (first term RHS) and a
backward looking (second term RHS) inflation variable. The relative importance
of the forward and backward looking terms is measured by b1 (0 < b1 < 1).
Equations (1.6) and (1.7) are now the standard aggregate demand and supply
equations implicit in the DSGE-models. We will therefore use them here. We ask
the question how this extension of the basic New Keynesian model improves its
empirical performance.
We show the movements of the simulated output gap (assuming a1 = 0.5 and b1 =
0.5) in Figure 1.6. The upper panel shows the output gap in the time domain and
the lower panel in the frequency domain. We now obtain movements that come
closer to cyclical movements: the autocorrelation in the output gap is 0.77. This
is still significantly lower than in the observed data (for the US we found 0.94).
In addition, these output gap movements are still normally distributed (see
lower panel). We could not reject that the distribution is normal. Thus, although
the extended New Keynesian model comes closer to explaining typically
observed business cycle movements, it is still far removed from a satisfactory
explanation.
The next step in making this model more empirically relevant has consisted in
adding autocorrelation in the error terms. This is now the standard procedure in
DSGE-models (see Smets and Wouters(2003)). We do the same with our version
of the New Keynesian model and assume that the autocorrelation of the error
terms in the three equations (1.6, 1.7 and 1.3) is equal to 0.9. The result of this
assumption is shown in the simulations of the output gap in Figure 1.7. We now
obtain movements of the output gap that resemble real-life movements. The
autocorrelation of the output gap is now 0.98, which is very close to the observed
number of 0.94 in the postwar US output gap. We still cannot reject normality
though (see the Jarque-Bera test). This is a problem that, as we will see later,
DSGE-models have not been able to solve.
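The following sketch reproduces this experiment in simplified form: it solves the extended model (1.6), (1.7) and (1.3) under rational expectations, once with white-noise errors and once with errors that have an autocorrelation of 0.9, and compares the autocorrelation of the simulated output gap in the two cases. All parameter values and shock sizes are illustrative assumptions rather than the values used for Figures 1.6 and 1.7.

```python
import numpy as np

a1, a2, b1, b2 = 0.5, -0.2, 0.5, 0.05   # illustrative values, pi* = 0
c1, c2, c3 = 1.5, 0.5, 0.5
rho = 0.9                               # autocorrelation of the error terms

Omega = np.array([[1, -b2, 0], [0, 1, -a2], [-c1, -c2, 1]], dtype=float)
Phi = np.array([[b1, 0, 0], [-a2, a1, 0], [0, 0, 0]], dtype=float)
Lam = np.array([[1 - b1, 0, 0], [0, 1 - a1, 0], [0, 0, c3]], dtype=float)

# Rational expectations solution Z_t = A Z_{t-1} + F s_t with Z_t = (pi, y, r)' and
# AR(1) errors s_t = rho s_{t-1} + nu_t; with white-noise errors the impact matrix
# is simply B = (Omega - Phi A)^(-1).
A = np.zeros((3, 3))
for _ in range(2000):
    A = np.linalg.solve(Omega - Phi @ A, Lam)
B = np.linalg.inv(Omega - Phi @ A)
F = np.linalg.solve(np.eye(3) - rho * B @ Phi, B)

rng = np.random.default_rng(0)

def output_gap_autocorr(impact, persistence, T=10000):
    Z, s = np.zeros(3), np.zeros(3)
    y = np.empty(T)
    for t in range(T):
        s = persistence * s + rng.normal(scale=0.5, size=3)
        Z = A @ Z + impact @ s
        y[t] = Z[1]
    return np.corrcoef(y[:-1], y[1:])[0, 1]

print("white-noise errors:    ", round(output_gap_autocorr(B, 0.0), 3))
print("autocorrelated errors: ", round(output_gap_autocorr(F, rho), 3))
```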
Figure 1.6: Simulated output gap in extended New Keynesian model
kurtosis: 2.9; Jarque-Bera: 1.03 with p-value=0.5

Figure 1.7: Simulated output gap in extended New Keynesian model and
autocorrelated errors
kurtosis: 3.16; Jarque-Bera: 3.2 with p-value=0.17

Let us sum up what we have found about the capacity of the New Keynesian
model (DSGE-models) to explain cyclical movements in output. First, the simple
version of the model (without lags in the transmission process and with white
noise error terms) does not produce any business cycle dynamics. Second, in
order to obtain business cycle movements (and thus autocorrelation in output)
lags in the aggregate demand and supply equations must be introduced. This is a
sensible thing to do because even rational agents can often not adjust their
optimal plans about consuming and producing instantaneously. This extended
model then produces autocorrelation in output but typically not sufficiently so to
come close to explaining the dynamics of the business cycle. The latter is also
characterized by the existence of fat tails, i.e. the regular occurrence of booms
and busts.
Third, in order to mimic business cycle movements, the New Keynesian (DSGE)
model builders have had recourse to introducing autocorrelation in the error
terms (the shocks that hit the economy). This trick has allowed DSGE-models to
closely fit observed data (see Smets and Wouters(2003)). This success has been
limited to the first and second moments of the movements of output, but not to
the higher moments (kurtosis, fat tails). The latter failure has the implication
that in order to explain a large movement in output (e.g. a deep recession, or a
strong boom) DSGE-models have to rely on large unpredictable shocks.
There are two problems with this theory of the business cycle implicit in the
DSGE-models.
First, business cycles in DSGE-models are not the result of an endogenous
dynamics. They occur as a result of exogenous shocks and slow transmission of
these shocks. Put differently, the DSGE-models picture a world populated by
rational agents who are fully informed. In such a world there would never be
business cycles. The latter arise because of exogenous disturbances and of
constraints on agents’ ability to react instantaneously to these shocks. Thus a
given shock will produce ripple effects in the economy, i.e. cyclical movements.
A second problem is methodological. When the New Keynesian model is tested
empirically the researcher finds that there is a lot of the output dynamics that is
not predicted by the model. This unexplained dynamics is then to be found in the
error term. So far so good. The next step taken by DSGE-modelers is to conclude
that these errors (typically autocorrelated) should be considered to be
exogenous shocks.
The problem with this approach is that it is not scientific. When the DSGE-
modeler finds a dynamics not predicted by the model he decides that the New
Keynesian model must nevertheless be right (because there can be no doubt that
individual agents are rational) and that thus the deviation between the observed
dynamics and the one predicted by the model must come from outside the
model.
Macroeconomic theory must do better than the current standard DSGE model
does. It is the ambition of this book to suggest how this can be done. Before we
do this some additional methodological issues in the current DSGE-models must
be discussed. This is done in the next chapter.
CHAPTER 2: THE SCIENTIFIC FOUNDATION OF
THE NEW KEYNESIAN MACROECONOMIC MODEL
1. Introduction
In the previous chapter we analyzed the main characteristics of the New Keynesian
macroeconomic model (the DSGE-model). We concluded that this model has a
particular, problematic, view of the business cycle. In this chapter we subject this
model to a more intense methodological criticism.
One of the surprising developments in macroeconomics is the systematic
incorporation of the paradigm of the utility maximizing forward looking and fully
informed agent into macroeconomic models. This development started with the
rational expectations revolution of the 1970s, which taught us that macroeconomic
models can be accepted only if agents’ expectations are consistent with the underlying
model structure. The real business cycle theory (RBC) introduced the idea that
macroeconomic models should be “micro-founded”, i.e., should be based on dynamic
utility maximization (Kydland and Prescott(1982)). While the RBC model had no place
for price rigidities and other inertia (that’s why it is sometimes called the New
Classical model), the New Keynesian School systematically introduced rigidities of
different kinds into similar micro-founded models. These developments occurred in
the ivory towers of academia for several decades until in recent years these models
were implemented empirically in such a way that they have now become tools of
analysis in the boardrooms of central banks. The most successful implementation of
these developments are to be found in the Dynamic Stochastic General Equilibrium
models (DSGE-models) that are increasingly used in central banks for policy analysis
(see Smets and Wouters 2003; Christiano et al. 2007; Smets and Wouters 2007;
Adjemian et al. 2007).
These developments are surprising for several reasons. First, while macroeconomic
theory enthusiastically embraced the view that agents fully understand the structure of
the underlying models in which they operate, other sciences like psychology and
neurology increasingly uncovered the cognitive limitations of individuals (see e.g.
Damasio 2003; Kahneman 2002; Camerer et al. 2005). We learn from these sciences
that agents understand only small bits and pieces of the world in which they live, and
instead of maximizing continuously taking all available information into account,
agents use simple rules (heuristics) in guiding their behavior and their forecasts about
the future. This raises the question of whether the micro-founded macro-economic
theory that has become the standard is well grounded scientifically.
A second source of surprise in the development of macroeconomic modeling in
general and the DSGE-models in particular is that other branches of economics, like
game theory and experimental economics have increasingly recognized the need to
incorporate the limitations agents face in understanding the world. This has led to
models that depart from the rational expectations paradigm (see e.g. Thaler 1994).
Standard macroeconomics has been immune to these developments. True, under
the impulse of Sargent (1993) and Evans and Honkapohja (2001) there has been an
attempt to introduce the notion in macroeconomic models that agents should not be
assumed to be cleverer than econometricians and that therefore they should be
modeled as agents who learn about the underlying model as time passes. This has led
to learning in macroeconomics. The incorporation of learning in macroeconomics,
however, has up to now left few traces in standard macroeconomic models and in the
DSGE-models.
2. Plausibility and empirical validity of rational expectations.
The New Keynesian DSGE-models embody the two central tenets of modern
macroeconomics. The first one is that a macroeconomic model should be based
(“micro founded”) on dynamic utility maximization of a representative agent. The
second one is that expectations should be model-consistent which implies that agents
make forecasts based on the information embedded in the model. This idea in turn
implies that agents have a full understanding of the structure of the underlying model.
In this chapter we analyze the scientific validity of these underlying assumptions of
the DSGE-models. In addition we analyze in very general terms the implications of
these assumptions for macroeconomic modeling.
There can be no doubt that this approach to macroeconomics has important
advantages compared to previous macroeconomic models. The main advantage is that
it provides for a coherent and self-contained framework of analysis. This has great
intellectual appeal. There is no need to invoke ad-hoc assumptions about how agents
behave and how they make forecasts. Rational expectations and utility maximization
introduce discipline in modeling the behavior of agents.
The scientific validity of a model should not be based on its logical coherence or
on its intellectual appeal, however. It can be judged only on its capacity of making
empirical predictions that are not rejected by the data. If it fails to do so, coherent and
intellectually appealing models should be discarded. Before turning our attention to
the empirical validation of models based on dynamic utility maximization and rational
expectations, of which the DSGE-models are now the most prominent examples, we
analyze the plausibility of the underlying assumptions about human behavior in these
models.
There is a very large literature documenting deviations from the paradigm of the
utility maximizing agent who understands the nature of the underlying economic
model. For recent surveys, see Kahneman and Thaler (2006) and Della Vigna (2007).
This literature has followed two tracks. One was to question the idea of utility
maximization as a description of agents’ behavior (see Kirchgässner 2008 for an
illuminating analysis of how this idea has influenced social sciences). Many
deviations have been found. A well-known one is the framing effect. Agents are often
influenced by the way a choice is framed in making their decisions (see Tversky and
Kahneman 1981). Another well-known deviation from the standard model is the fact
that agents do not appear to attach the same utility value to gains and losses. This led
Kahneman and Tversky (1973) to formulate prospect theory as an alternative to the
standard utility maximization under uncertainty.
We will not deal with deviations from the standard utility maximization model
here, mainly because many (but not all) of these anomalies can be taken care of by
suitably specifying alternative utility functions. Instead, we will focus on the
plausibility of the rational expectations assumption and its logical implication, i.e.,
that agents understand the nature of the underlying model.
It is no exaggeration to say that there is now overwhelming evidence that
individual agents suffer from deep cognitive problems limiting their capacity to
understand and to process the complexity of the information they receive.
Many anomalies that challenge the rational expectations assumption were
discovered (see Thaler 1994 for spirited discussions of these anomalies; see also
Camerer and Lovallo 1999; Read and van Leeuwen 1998; Della Vigna 2007). We just
mention "anchoring" effects here, whereby agents who do not fully understand the
world in which they live are highly selective in the way they use information and
concentrate on the information they understand or the information that is fresh in their
minds. This anchoring effect explains why agents often extrapolate recent movements
in prices.
In general the cognitive problem which agents face leads them to use simple rules
("heuristics") to guide their behavior (see Gabaix, et al. 2006). They do this not
because they are irrational, but rather because the complexity of the world is
overwhelming. In a way it can be said that using heuristics is a rational response of
agents who are aware of their limited capacity to understand the world. The challenge
when we try to model heuristics will be to introduce discipline in the selection of rules
so as to avoid that “everything becomes possible”.
One important implication of the assumption that agents know the underlying
model’s structure is that all agents are the same. They all use the same information set
including the information embedded in the underlying model. As a result, DSGE-
models routinely restrict the analysis to a representative agent to fully describe how
all agents in the model process information. There is no heterogeneity in the use and
the processing of information in these models. This strips models based on rational
expectations from much of their interest in analyzing short-term and medium-term
macroeconomic problems which is about the dynamics of aggregating heterogeneous
behavior and beliefs (see Solow(2005) and Colander et al.(2009))4. As will become
abundantly clear in this book the dynamics of interacting agents with limited
understanding but often with different strongly held beliefs is what drives the business
cycle. In standard DSGE-models this dynamics is absent creating a very shallow
theory of the business cycle. We will return to these issues in the following chapters.
It is fair to conclude that the accumulated scientific evidence casts doubts about the
plausibility of the main assumption concerning the behavior of individual agents in
DSGE-models, i.e., that they are capable of understanding the economic model in
which they operate and of processing the complex information distilled from this
4 There have been attempts to model heterogeneity of information processing in rational
expectations models. These have been developed mainly in asset market models. Typically, it
is assumed in these models that some agents are fully informed (rational) while others, the
noise traders, are not. See e.g. De Long, et al. (1990).
model. Instead the scientific evidence suggests that individual agents are not capable
of doing so, and that they rely on rules that use only small parts of the available
information.
One could object here and argue that a model should not be judged by the
plausibility of its assumptions but rather by its ability to make powerful empirical
predictions. Thus, despite the apparent implausibility of its informational assumption,
the macroeconomic model based on rational expectations could still be a powerful
one if it makes the right predictions. This argument, which was often stressed by
Milton Friedman, is entirely correct. It leads to the question of the empirical validity
of the rational macromodels in general and the DSGE-models in particular.
We have discussed the failure of the DSGE-models to predict a dynamics that
comes close to the dynamics of the observed output movements, except when the step
is taken to assume that the unexplained dynamics in the error terms is in fact an
exogenous force driving an otherwise correct model. This problem of standard
DSGE-models has also been noted by Chari, et al (2008), who conclude that most of
the dynamics produced by the standard DSGE-model (e.g. Smets and Wouters(2003))
comes from the autoregressive error terms, i.e. from outside the model.
The correct conclusion from such an empirical failure should be to question the
underlying assumptions of the model. But surprisingly, this has not been done by
DSGE-modelers who have kept their faith in the existence of rational and fully
informed agents.
The issue then is how much is left over from the paradigm of the fully informed
rational agent in the existing DSGE-models? This leads to the question of whether it
is not preferable to admit that agents’ behavior is guided by heuristics, and to
incorporate these heuristics into the model from the start, rather than to pretend that
agents are fully rational but to rely in a nontransparent way on statistical tricks to
improve the fit of the model. That is what we plan to do in the next chapter.
3. Top-Down versus Bottom-Up models
In order to understand the nature of different macroeconomic models it is useful to
make a distinction between top-down and bottom-up systems. In its most general
definition a top-down system is one in which one or more agents fully understand the
system. These agents are capable of representing the whole system in a blueprint that
they can store in their mind. Depending on their position in the system they can use
this blueprint to take over the command, or they can use it to optimize their own
private welfare. These are systems in which there is a one to one mapping of the
information embedded in the system and the information contained in the brain of one
(or more) individuals. An example of such a top-down system is a building that can
be represented by a blueprint and is fully understood by the architect.
Bottom-up systems are very different in nature. These are systems in which no
individual understands the whole picture. Each individual understands only a very
small part of the whole. These systems function as a result of the application of
simple rules by the individuals populating the system. Most living systems follow this
bottom-up logic (see the beautiful description of the growth of the embryo by
Dawkins(2009)). The market system is also a bottom-up system. The best description
made of this bottom-up system is still the one made by Hayek(1945). Hayek argued
that no individual exists who is capable of understanding the full complexity of a
market system. Instead individuals only understand small bits of the total information.
The main function of markets consists in aggregating this diverse information. If there
were individuals capable of understanding the whole picture, we would not need
markets. This was in fact Hayek’s criticism of the “socialist” economists who took the
view that the central planner understood the whole picture, and would therefore be
able to compute the whole set of optimal prices, making the market system
superfluous. (For further insightful analysis see Leijonhufvud(1993)).
The previous discussion leads to the following interesting and surprising insight.
Macroeconomic models that use the rational expectations assumption are the
intellectual heirs of these central-planning models. Not in the sense that individuals in
these rational expectations models aim at planning the whole, but in the sense that, as
the central planner, they understand the whole picture. Individuals in these rational
expectations models are assumed to know and understand the complex structure of
the economy and the statistical distribution of all the shocks that will hit the economy.
These individuals then use this superior information to obtain the “optimum
optimorum” for their own private welfare. In this sense they are top-down models.
In the next chapters the rational expectations top-down model will be contrasted with
a bottom-up macroeconomic model. This will be a model in which agents have
cognitive limitations and do not understand the whole picture (the underlying model).
Instead they only understand small bits and pieces of the whole model and use simple
rules to guide their behavior. Rationality will be introduced in the model through a
selection mechanism in which agents evaluate the performance of the rule they are
following and decide to switch or to stick to the rule depending on how well the rule
performs relative to other rules. As will be seen, this leads to a surprisingly rich
dynamics that comes much closer to understanding the dynamics producing short-
term macroeconomic fluctuations than the standard DSGE-models.
CHAPTER 3: A BEHAVIORAL MACROECONOMIC MODEL
3.1 Introduction
There is a need to develop macroeconomic models that do not impose
implausible cognitive abilities on individual agents. In this chapter an attempt is
made at developing such a model. In addition, this chapter aims at highlighting
the implications this model has for our understanding of the workings of the
macroeconomy and the business cycle.
The building blocks of the model consist of the same equations as those used in
the standard macroeconomic model that was discussed in chapter 1. It consists
of an aggregate demand equation, an aggregate supply equation and a Taylor
rule equation. The difference with the standard model will be in the way agents
use information to make forecasts. That is, it will be assumed that agents use
simple rules, heuristics, to forecast the future. In this chapter, the simplest
possible heuristic will be assumed. In a later chapter (chapter 7) other rules are
introduced. This will be done to study how more complexity in the heuristics
affects the results.
Agents who use simple rules of behavior are no fools. They use simple rules only
because the real world is too complex to understand, but they are willing to learn
from their mistakes, i.e. they regularly subject the rules they use to some
criterion of success. There are essentially two ways this can be done. The first
one is called statistical learning. It has been pioneered by Sargent(1993) and
Evans and Honkapohja(2001). It consists in assuming that agents learn like
econometricians do. They estimate a regression equation explaining the variable
to be forecasted by a number of exogenous variables. This equation is then used
to make forecasts. When new data become available the equation is re-
estimated. Thus each time new information becomes available the forecasting
rule is updated. The statistical learning literature leads to important new insights
(see e.g. Bullard and Mitra(2002), Gaspar and Smets(2006), Orphanides and
Williams(2004), Milani(2007a), Branch and Evans(2009)). However, this
approach loads individual agents with a lot of cognitive skills that they may or
may not have5. I will instead use another learning strategy that can be called
“trial and error” learning. It is also often labeled “adaptive learning”. I will use
both labels as synonyms.
Adaptive learning is a procedure whereby agents use simple forecasting rules
and then subject these rules to a “fitness” test, i.e., agents endogenously select
the forecasting rules that have delivered the highest performance (“fitness”) in
the past. Thus, an agent will start using one particular rule. She will regularly
evaluate this rule against the alternative rules. If the former rule performs well,
she keeps it. If not, she switches to another rule. In this sense the rule can be
called a “trial and error” rule.
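As a purely illustrative sketch of this selection mechanism, the code below lets an agent choose each period between two hypothetical heuristics for forecasting the output gap, a "fundamentalist" rule (forecast zero) and an "extrapolative" rule (forecast the last observation), on the basis of a discounted mean squared forecast error. The specific rules, the switching criterion and the stand-in process for the economy are assumptions for illustration only; the book's own specification is developed in the following sections, where the forecasts also feed back into the demand and supply equations.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 200
y = np.zeros(T)                  # output gap; here just an AR(1) stand-in for the economy
mse = np.zeros(2)                # discounted mean squared errors of the two rules
choice = np.zeros(T, dtype=int)  # 0 = fundamentalist rule, 1 = extrapolative rule

for t in range(1, T):
    forecasts = np.array([0.0, y[t - 1]])             # rule 0 forecasts zero, rule 1 extrapolates
    choice[t] = int(np.argmin(mse))                    # use the rule that has performed best so far
    y[t] = 0.9 * y[t - 1] + rng.normal(scale=0.5)      # "true" economy, unknown to the agent
    mse = 0.95 * mse + 0.05 * (y[t] - forecasts) ** 2  # update the fitness measure of both rules

print("share of periods using the extrapolative rule:", choice[1:].mean())
```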
This “trial and error” selection mechanism acts as a disciplining device on the
kind of rules that are acceptable. Not every rule is acceptable. It has to perform
well. What that means will be made clear later. It is important to have such a
disciplining device, otherwise everything becomes possible. The need to
discipline the forecasting rule was also one of the basic justifications underlying
rational expectations. By imposing the condition that forecasts must be
consistent with the underlying model, the model builder severely limits the rules
that agents can use to make forecasts. The adaptive selection mechanism used
here plays a similar disciplining role.
There is another important implication of using “trial and error” rules that
contrasts a great deal with the rational expectations forecasting rule. As will be
remembered, using rational expectations implies that agents understand the
complex structure of the underlying model. Since there is only one underlying
model (there is only one “Truth”), agents understand the same “Truth”. They all
make exactly the same forecast. This allows builders of rational expectations
models to focus on just one “representative agent”. In the adaptive learning
mechanism that will be used here, this will not be possible because agents can
use different forecasting rules. Thus there will be heterogeneity among agents.
This is an important feature of the model because, as will be seen, this
heterogeneity creates interactions between agents. These interactions ensure
5 See the fascinating book of Gigerenzer and Todd(1999) which argues that individual agents
experience great difficulties in using statistical learning techniques. It contains an illuminating analysis of the
use of simple heuristics as compared to statistical (regression) learning.
that agents influence each other, leading to a dynamics that is completely absent
from rational expectations models.
3.2 The model
The model consists of an aggregate demand equation, an aggregate supply
equation and a Taylor rule.
As in chapter 1, the aggregate demand equation is specified in the standard way.
For the sake of convenience the equation is repeated here, i.e.
$$y_t = a_1 \tilde{E}_t y_{t+1} + (1 - a_1)\, y_{t-1} + a_2\,(r_t - \tilde{E}_t \pi_{t+1}) + \varepsilon_t \qquad (3.1)$$

where y_t is the output gap in period t, r_t is the nominal interest rate, π_t is the rate
of inflation, and ε_t is a white noise disturbance term. The difference with the
demand equation of chapter 1 is that rational expectations are not assumed. That
is why I place a tilde above E. Thus $\tilde{E}_t$ is the expectations operator where the
tilde refers to expectations that are not formed rationally. How exactly these
expectations are formed will be specified subsequently.
I follow the procedure introduced in New Keynesian DSGE-models of adding a
lagged output term in the demand equation. This can be justified by invoking inertia in
decision making (see chapter 1). It takes time for agents to adjust to new signals
because there is habit formation or because of institutional constraints. For
example, contracts cannot be renegotiated instantaneously. I keep this
assumption of inertia here. However, I will analyze later how much the results
depend on this inertia.
The aggregate supply equation is derived from profit maximization of individual
producers (see chapter 1). As in DSGE-models, a Calvo pricing rule and some
indexation in the adjustment of prices are assumed. This leads to a lagged
inflation variable in the equation6. The supply curve can also be interpreted as a
New Keynesian Phillips curve:
6 It is now standard in DSGE-models to use a pricing equation in which marginal costs enter on the
right hand side. Such an equation is derived from profit maximisation in a world of imperfect
competition. It can be shown that under certain conditions the aggregate supply equation (3.2) is
equivalent to such a pricing equation (see Gali(2008), Smets and Wouters(2003)).
$$\pi_t = b_1 \tilde{E}_t \pi_{t+1} + (1 - b_1)\, \pi_{t-1} + b_2\, y_t + \eta_t \qquad (3.2)$$
Finally the Taylor rule describes the behavior of the central bank
$$r_t = c_1 (\pi_t - \pi^*) + c_2\, y_t + c_3\, r_{t-1} + u_t \qquad (3.3)$$

where π* is the inflation target, which for the sake of convenience will be set
equal to 0. The central bank is assumed to smooth the interest rate. This
smoothing behavior is represented by the lagged interest rate in equation (3.3).
Introducing heuristics in forecasting output
Agents are assumed to use simple rules (heuristics) to forecast the future output
and inflation. The way I proceed is as follows. I assume two types of forecasting
rules. A first rule can be called a “fundamentalist” one. Agents estimate the
steady state value of the output gap (which is normalized at 0) and use this to
forecast the future output gap. (In a later extension in chapter 7, it will be
assumed that agents do not know the steady state output gap with certainty and
only have biased estimates of it). A second forecasting rule is an “extrapolative”
one. This is a rule that does not presuppose that agents know the steady state
output gap. They are agnostic about it. Instead, they extrapolate the previous
observed output gap into the future.
The two rules are specified as follows
The fundamentalist rule is defined by
$$\tilde{E}^f_t y_{t+1} = 0 \qquad (3.4)$$
The extrapolative rule is defined by
$$\tilde{E}^e_t y_{t+1} = y_{t-1} \qquad (3.5)$$
This kind of simple heuristic has often been used in the behavioral finance
literature where agents are assumed to use fundamentalist and chartist rules
(see Brock and Hommes(1997), Branch and Evans(2006), De Grauwe and
Grimaldi(2006)). It is probably the simplest possible assumption one can make
about how agents who experience cognitive limitations use rules that embody
limited knowledge to guide their behavior. They only require agents to use
information they understand, and do not require them to understand the whole
picture.
Thus the specification of the heuristics in (3.4) and (3.5) should not be
interpreted as a realistic representation of how agents forecast. Rather, it is a
parsimonious representation of a world where agents do not know the “Truth”
(i.e. the underlying model). The use of simple rules does not mean that the
agents are dumb and that they do not want to learn from their errors. I will
specify a learning mechanism later in this section in which these agents
continuously try to correct for their errors by switching from one rule to the
other.
The market forecast is obtained as a weighted average of these two forecasts, i.e.
$$\tilde{E}_t y_{t+1} = \alpha_{f,t}\, \tilde{E}^f_t y_{t+1} + \alpha_{e,t}\, \tilde{E}^e_t y_{t+1} \qquad (3.6)$$
$$\tilde{E}_t y_{t+1} = \alpha_{f,t} \cdot 0 + \alpha_{e,t}\, y_{t-1} \qquad (3.7)$$
and
$$\alpha_{f,t} + \alpha_{e,t} = 1 \qquad (3.8)$$
where α_f,t and α_e,t are the probabilities that agents use a fundamentalist,
respectively, an extrapolative rule.
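To see how little these rules demand of agents, here is a minimal Python sketch of the two heuristics and the market forecast in (3.4)–(3.8). The function names and the example numbers are illustrative choices of this sketch, not part of the model's formal specification.

```python
def fundamentalist_forecast():
    # Rule (3.4): forecast the steady-state output gap, normalized at 0.
    return 0.0

def extrapolative_forecast(y_lag):
    # Rule (3.5): extrapolate the previously observed output gap.
    return y_lag

def market_forecast(alpha_f, y_lag):
    # Rules (3.6)-(3.7): weighted average of the two forecasts,
    # with alpha_f + alpha_e = 1 as in (3.8).
    alpha_e = 1.0 - alpha_f
    return alpha_f * fundamentalist_forecast() + alpha_e * extrapolative_forecast(y_lag)

# Illustrative example: 60% extrapolators and a lagged output gap of 1%.
print(market_forecast(alpha_f=0.4, y_lag=1.0))   # 0.6
```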
A methodological issue arises here. The forecasting rules (heuristics) introduced
here are not derived at the micro level and then aggregated. Instead, they are
imposed ex post, on the demand and supply equations. This has also been the
approach in the learning literature pioneered by Evans and Honkapohja(2001).
Ideally one would like to derive the heuristics from the micro-level in an
environment in which agents experience cognitive problems. Our knowledge
about how to model this behavior at the micro level and how to aggregate it is
too sketchy, however. Psychologists and brain scientists struggle to understand
how our brain processes information. There is as yet no generally accepted
model we could use to model the micro-foundations of information processing in
a world in which agents experience cognitive limitations. I have not tried to do
so7.
Selecting the forecasting rules
As indicated earlier, agents in our model are not fools. They are willing to learn,
i.e. they continuously evaluate their forecast performance. This willingness to
learn and to change one’s behavior is the most fundamental definition of rational
behavior. Thus our agents in the model are rational, not in the sense of having
rational expectations. We have rejected the latter because it is an implausible
assumption to make about the capacity of individuals to understand the world.
Instead our agents are rational in the sense that they learn from their mistakes.
The concept of “bounded rationality” is often used to characterize this behavior.
The first step in the analysis then consists in defining a criterion of success. This
will be the forecast performance of a particular rule. Thus in this first step,
agents compute the forecast performance of the two different forecasting rules
as follows:
$$U_{f,t} = -\sum_{k=0}^{\infty} \omega_k \left[ y_{t-k-1} - \tilde{E}_{f,t-k-2}\, y_{t-k-1} \right]^2 \qquad (3.9)$$
$$U_{e,t} = -\sum_{k=0}^{\infty} \omega_k \left[ y_{t-k-1} - \tilde{E}_{e,t-k-2}\, y_{t-k-1} \right]^2 \qquad (3.10)$$
where U_f,t and U_e,t are the forecast performances (utilities) of the fundamentalist
and extrapolating rules, respectively. These are defined as the negatives of the mean
squared forecasting errors (MSFEs) of the forecasting rules; ω_k are geometrically
declining weights. We make these weights declining because we assume that
agents tend to forget. Put differently, they give a lower weight to errors made far
in the past as compared to errors made recently. The degree of forgetting will
turn out to play a major role in our model.
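A minimal sketch of how such a performance measure can be computed, assuming the geometrically declining weights ω_k = (1 − ρ)ρ^k that are introduced formally later in this chapter; all numbers are illustrative.

```python
import numpy as np

def forecast_utility(outcomes, forecasts, rho=0.5):
    """Negative of the weighted mean squared forecast error, as in (3.9)-(3.10).

    outcomes[k] and forecasts[k] are the realization and the forecast made for it,
    ordered from the most recent (k = 0) backwards. Weights decline geometrically,
    omega_k = (1 - rho) * rho**k, the specification used later in (3.23)-(3.24).
    """
    outcomes = np.asarray(outcomes, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    k = np.arange(len(outcomes))
    omega = (1.0 - rho) * rho**k
    return -np.sum(omega * (outcomes - forecasts) ** 2)

# Illustrative comparison: the rule whose large error lies far in the past
# gets a better (less negative) utility when agents forget (rho < 1).
print(forecast_utility([0.1, 0.2, 2.0], [0.0, 0.0, 0.0], rho=0.5))
print(forecast_utility([2.0, 0.2, 0.1], [0.0, 0.0, 0.0], rho=0.5))
```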
The next step consists in evaluating these forecast performances (utilities). I
apply discrete choice theory (see Anderson, de Palma, and Thisse (1992) and
Brock & Hommes(1997)) in specifying the procedure agents follow in this
evaluation process. If agents were purely rational they would just compare Uf,t
7 There are some attempts to provide micro-foundations of models with agents experiencing cognitive
limitations, though. See e.g. Kirman, (1992), Delli Gatti, et al.(2005).
Uf,t= −
ω
k
k=0
ytk1˜
E f,tk2ytk1
[ ]2
Ue,t= −
ω
k
k=0
ytk1˜
E
e,tk2ytk1
[ ]2
40
and Ue,t in (3.9) and (3.10) and choose the rule that produces the highest value.
Thus under pure rationality, agents would choose the fundamentalist rule if Uf,t >
Ue,t, and vice versa. However, things are not so simple. Psychologists have found
out that when we have to choose among alternatives we are also influenced by
our state of mind. The latter is to a large extent unpredictable. It can be
influenced by many things, the weather, recent emotional experiences, etc. One
way to formalize this is to assume that the utilities of the two alternatives have a
deterministic component (these are U_f,t and U_e,t in (3.9) and (3.10)) and a random
component ε_f,t and ε_e,t. The probability of choosing the fundamentalist rule is then
given by
$$\alpha_{f,t} = P\left[ U_{f,t} + \varepsilon_{f,t} > U_{e,t} + \varepsilon_{e,t} \right] \qquad (3.11)$$
In words, this means that the probability of selecting the fundamentalist rule is
equal to the probability that the stochastic utility associated with using the
fundamentalist rule exceeds the stochastic utility of using an extrapolative rule.
In order to derive a more precise expression one has to specify the distribution
of the random variables ε_f,t and ε_e,t. It is customary in the discrete choice
literature to assume that these random variables are logistically distributed (see
Anderson, Palma, and Thisse(1992), p. 35). One then obtains the following
expressions for the probability of choosing the fundamentalist rule:
$$\alpha_{f,t} = \frac{\exp(\gamma U_{f,t})}{\exp(\gamma U_{f,t}) + \exp(\gamma U_{e,t})} \qquad (3.12)$$
Similarly the probability that an agent will use the extrapolative forecasting rule
is given by:
$$\alpha_{e,t} = \frac{\exp(\gamma U_{e,t})}{\exp(\gamma U_{f,t}) + \exp(\gamma U_{e,t})} \qquad (3.13)$$
Equation (3.12) says that as the past forecast performance of the fundamentalist
rule improves relative to that of the extrapolative rule, agents are more likely to
select the fundamentalist rule for their forecasts of the output gap. Equation
(3.13) has a similar interpretation. The parameter γ measures the “intensity of
choice”. It is related to the variance of the random components ε_f,t and ε_e,t. If the
variance is very high, γ approaches 0. In that case agents decide to be
fundamentalist or extrapolator by tossing a coin and the probability to be
fundamentalist (or extrapolator) is exactly 0.5. When γ = ∞ the variance of the
random components is zero (utility is then fully deterministic) and the
probability of using a fundamentalist rule is either 1 or 0. The parameter γ can
also be interpreted as expressing a willingness to learn from past performance.
When γ = 0 this willingness is zero; it increases with the size of γ.
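A minimal sketch of the switching rule (3.12)–(3.13), illustrating the two limiting cases just described: γ = 0 amounts to a coin toss, while a very large γ pushes the probability towards 0 or 1. The utility values are made up.

```python
import numpy as np

def switching_probabilities(u_f, u_e, gamma):
    """Probabilities of using the fundamentalist and extrapolative rules, (3.12)-(3.13)."""
    # Subtracting the maximum utility leaves the probabilities unchanged
    # but avoids numerical overflow for large gamma.
    u_max = max(u_f, u_e)
    w_f = np.exp(gamma * (u_f - u_max))
    w_e = np.exp(gamma * (u_e - u_max))
    alpha_f = w_f / (w_f + w_e)
    return alpha_f, 1.0 - alpha_f

# The fundamentalist rule performed slightly better than the extrapolative one.
u_f, u_e = -0.4, -0.5
print(switching_probabilities(u_f, u_e, gamma=0.0))    # (0.5, 0.5): pure coin toss
print(switching_probabilities(u_f, u_e, gamma=2.0))    # mild tilt towards the fundamentalist rule
print(switching_probabilities(u_f, u_e, gamma=500.0))  # close to (1.0, 0.0)
```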
Note that this selection mechanism is the disciplining device introduced in this
model on the kind of rules of behavior that are acceptable. Only those rules that
pass the fitness test remain in place. The others are weeded out. In contrast with
the disciplining device implicit in rational expectations models, which implies
that agents have superior cognitive capacities, we do not have to make such an
assumption here.
As argued earlier, the selection mechanism used should be interpreted as a
learning mechanism based on “trial and error”. When observing that the rule
they use performs less well than the alternative rule, agents are willing to switch
to the better performing rule. Put differently, agents avoid making systematic
mistakes by constantly being willing to learn from past mistakes and to change
their behavior. This also ensures that the market forecasts are unbiased.
The mechanism driving the selection of the rules introduces a self-organizing
dynamics in the model. It is a dynamics that is beyond the capacity of any one
individual in the model to understand. In this sense it is a bottom-up system. It
contrasts with the mainstream macroeconomic models in which it is assumed
that some or all agents can take a bird’s eye view and understand the whole
picture. These agents not only understand the whole picture but also use this
whole picture to decide about their optimal behavior.
Heuristics and selection mechanism in forecasting inflation
Agents also have to forecast inflation. Similar simple heuristics are used as in the
case of output gap forecasting, with one rule that could be called a
fundamentalist rule and the other an extrapolative rule. (See Brazier et al. (2006)
for a similar setup). We assume an institutional set-up in which the central bank
announces an explicit inflation target. The fundamentalist rule then is based on
this announced inflation target, i.e. agents using this rule have confidence in the
credibility of this rule and use it to forecast inflation. Agents who do not trust
the announced inflation target use the extrapolative rule, which consists in
extrapolating inflation from the past into the future.
The fundamentalist rule will be called an “inflation targeting” rule. It consists in
using the central bank’s inflation target to forecast future inflation, i.e.
$$\tilde{E}^{tar}_t \pi_{t+1} = \pi^* \qquad (3.14)$$
where the inflation target π* is normalized to be equal to 0.
The “extrapolators” are defined by
$$\tilde{E}^{ext}_t \pi_{t+1} = \pi_{t-1} \qquad (3.15)$$
The market forecast is a weighted average of these two forecasts, i.e.
$$\tilde{E}_t \pi_{t+1} = \beta_{tar,t}\, \tilde{E}^{tar}_t \pi_{t+1} + \beta_{ext,t}\, \tilde{E}^{ext}_t \pi_{t+1} \qquad (3.16)$$
or
$$\tilde{E}_t \pi_{t+1} = \beta_{tar,t}\, \pi^* + \beta_{ext,t}\, \pi_{t-1} \qquad (3.17)$$
and
$$\beta_{tar,t} + \beta_{ext,t} = 1 \qquad (3.18)$$
The same selection mechanism is used as in the case of output forecasting to
determine the probabilities of agents trusting the inflation target and those who
do not trust it and revert to extrapolation of past inflation, i.e.
$$\beta_{tar,t} = \frac{\exp(\gamma U_{tar,t})}{\exp(\gamma U_{tar,t}) + \exp(\gamma U_{ext,t})} \qquad (3.19)$$
$$\beta_{ext,t} = \frac{\exp(\gamma U_{ext,t})}{\exp(\gamma U_{tar,t}) + \exp(\gamma U_{ext,t})} \qquad (3.20)$$
where Utar,t and Uext,t are the forecast performances (utilities) associated with the
use of the fundamentalist and extrapolative rules. These are defined in the same
way as in (3.9) and (3.10), i.e. they are the negatives of the weighted averages of
past squared forecast errors of using fundamentalist (inflation targeting) and
extrapolative rules, respectively.
These inflation forecasting heuristics can be interpreted as a procedure agents use
to find out how credible the central bank’s inflation targeting is. If this is very
credible, using the announced inflation target will produce good forecasts and as
a result, the probability that agents will rely on the inflation target will be high. If
on the other hand the inflation target does not produce good forecasts
(compared to a simple extrapolation rule) the probability that agents will use it
will be small.
Solving the model
The solution of the model is found by first substituting (3.3) into (3.1) and
rewriting in matrix notation. This yields:
$$\begin{bmatrix} 1 & -b_2 \\ -a_2 c_1 & 1 - a_2 c_2 \end{bmatrix}\begin{bmatrix} \pi_t \\ y_t \end{bmatrix} = \begin{bmatrix} b_1 & 0 \\ -a_2 & a_1 \end{bmatrix}\begin{bmatrix} \tilde{E}_t \pi_{t+1} \\ \tilde{E}_t y_{t+1} \end{bmatrix} + \begin{bmatrix} 1-b_1 & 0 \\ 0 & 1-a_1 \end{bmatrix}\begin{bmatrix} \pi_{t-1} \\ y_{t-1} \end{bmatrix} + \begin{bmatrix} 0 \\ a_2 c_3 \end{bmatrix} r_{t-1} + \begin{bmatrix} \eta_t \\ a_2 u_t + \varepsilon_t \end{bmatrix}$$

or

$$\mathbf{A}\, \mathbf{Z}_t = \mathbf{B}\, \tilde{\mathbf{E}}_t \mathbf{Z}_{t+1} + \mathbf{C}\, \mathbf{Z}_{t-1} + \mathbf{b}\, r_{t-1} + \mathbf{v}_t \qquad (3.21)$$

where bold characters refer to matrices and vectors, with $\mathbf{Z}_t = [\pi_t \;\; y_t]'$. The solution for $\mathbf{Z}_t$ is given by

$$\mathbf{Z}_t = \mathbf{A}^{-1}\left[ \mathbf{B}\, \tilde{\mathbf{E}}_t \mathbf{Z}_{t+1} + \mathbf{C}\, \mathbf{Z}_{t-1} + \mathbf{b}\, r_{t-1} + \mathbf{v}_t \right] \qquad (3.22)$$
The solution exists if the matrix A is non-singular, i.e. if (1-a2c2)-a2b2c1 ≠ 0. The
system (3.22) describes the solution for y_t and π_t given the forecasts of y_t and π_t.
The latter have been specified in equations (3.4) to (3.12) and can be substituted
into (3.22). Finally, the solution for r_t is found by substituting y_t and π_t obtained
from (3.22) into (3.3).
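Computationally, (3.22) is just the inversion of a 2×2 matrix each period. A minimal sketch under the matrix layout reconstructed above; the parameter values below are illustrative placeholders, not the calibration of appendix A.

```python
import numpy as np

# Illustrative parameter values (not the book's calibration).
a1, a2 = 0.5, -0.2          # demand equation
b1, b2 = 0.5, 0.05          # supply equation
c1, c2, c3 = 1.5, 0.5, 0.5  # Taylor rule

# Matrices of the system A Z_t = B E~Z_{t+1} + C Z_{t-1} + b r_{t-1} + v_t, with Z_t = [pi_t, y_t]'.
A = np.array([[1.0, -b2],
              [-a2 * c1, 1.0 - a2 * c2]])
B = np.array([[b1, 0.0],
              [-a2, a1]])
C = np.array([[1.0 - b1, 0.0],
              [0.0, 1.0 - a1]])
b = np.array([0.0, a2 * c3])

# Non-singularity condition: (1 - a2*c2) - a2*b2*c1 must differ from zero.
assert abs((1.0 - a2 * c2) - a2 * b2 * c1) > 1e-12

def solve_period(E_pi, E_y, Z_lag, r_lag, eta, eps, u):
    """One application of (3.22): inflation and output gap given forecasts, lags and shocks."""
    v = np.array([eta, a2 * u + eps])
    rhs = B @ np.array([E_pi, E_y]) + C @ Z_lag + b * r_lag + v
    return np.linalg.solve(A, rhs)

pi_t, y_t = solve_period(E_pi=0.0, E_y=0.2, Z_lag=np.array([0.1, 0.3]),
                         r_lag=0.5, eta=0.0, eps=0.0, u=0.0)
print(pi_t, y_t)
```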
The model has non-linear features making it difficult to arrive at analytical
solutions. That is why we will use numerical methods to analyze its dynamics. In
order to do so, we have to calibrate the model, i.e. to select numerical values for
the parameters of the model. In appendix A the parameters used in the
calibration exercise are presented. The model was calibrated in such a way that
the time units can be considered to be months. A sensitivity analysis of the main
results to changes in some of the parameters of the model will be presented.
The three shocks (demand shocks, supply shocks and interest rate shocks) are
independently and identically distributed (i.i.d.) with standard deviations of
0.5%.
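For readers who want to experiment, the following self-contained sketch strings the pieces together: heuristic forecasts, the per-period solution of the 2×2 system, the Taylor rule, recursive performance updates and discrete-choice switching. The parameter values, the timing shortcuts and the animal spirits proxy are simplifications of this sketch rather than the book's exact implementation, so it reproduces the mechanics rather than the figures that follow.

```python
import numpy as np

def simulate(T=2000, gamma=2.0, rho=0.5, seed=0,
             a1=0.5, a2=-0.2, b1=0.5, b2=0.05, c1=1.5, c2=0.5, c3=0.5, sd=0.5):
    """Simulate the behavioral model sketched in this chapter.

    Returns the output gap, inflation and a simple 'animal spirits' proxy.
    All default parameter values are illustrative, not the book's calibration.
    """
    rng = np.random.default_rng(seed)
    A = np.array([[1.0, -b2], [-a2 * c1, 1.0 - a2 * c2]])
    B = np.array([[b1, 0.0], [-a2, a1]])
    C = np.array([[1.0 - b1, 0.0], [0.0, 1.0 - a1]])
    bvec = np.array([0.0, a2 * c3])

    y, pi, r = np.zeros(T), np.zeros(T), np.zeros(T)
    spirits = np.full(T, 0.5)
    U_yf = U_ye = U_pt = U_pe = 0.0      # rule performances (utilities)
    alpha_f = beta_tar = 0.5             # initial fractions of rule users

    def logit(u_a, u_b):
        # Discrete-choice probabilities as in (3.12)-(3.13); shift by the max for stability.
        m = max(u_a, u_b)
        wa, wb = np.exp(gamma * (u_a - m)), np.exp(gamma * (u_b - m))
        return wa / (wa + wb)

    for t in range(2, T):
        # Market forecasts (3.7) and (3.17): fundamentalists/targeters expect 0,
        # extrapolators expect last period's value.
        Ey = (1.0 - alpha_f) * y[t - 1]
        Epi = (1.0 - beta_tar) * pi[t - 1]

        # Solve the 2x2 system (3.22) for inflation and the output gap, then the Taylor rule.
        eps, eta, u = rng.normal(0.0, sd, size=3)
        rhs = (B @ np.array([Epi, Ey]) + C @ np.array([pi[t - 1], y[t - 1]])
               + bvec * r[t - 1] + np.array([eta, a2 * u + eps]))
        pi[t], y[t] = np.linalg.solve(A, rhs)
        r[t] = c1 * pi[t] + c2 * y[t] + c3 * r[t - 1] + u

        # Recursive performance updates in the spirit of (3.23)-(3.24)
        # (timing conventions simplified slightly).
        U_yf = rho * U_yf - (1 - rho) * y[t] ** 2
        U_ye = rho * U_ye - (1 - rho) * (y[t] - y[t - 2]) ** 2
        U_pt = rho * U_pt - (1 - rho) * pi[t] ** 2
        U_pe = rho * U_pe - (1 - rho) * (pi[t] - pi[t - 2]) ** 2
        alpha_f, beta_tar = logit(U_yf, U_ye), logit(U_pt, U_pe)

        # Animal spirits proxy (an assumption of this sketch, not the book's exact definition):
        # extrapolators count as optimists when the extrapolated gap is positive,
        # fundamentalists as neutral (0.5).
        frac_e = 1.0 - alpha_f
        spirits[t] = (frac_e if y[t - 1] > 0 else 0.0) + 0.5 * alpha_f

    return y, pi, spirits

y, pi, spirits = simulate()
print("autocorrelation of output gap:", np.corrcoef(y[3:], y[2:-1])[0, 1])
print("corr(animal spirits, output gap):", np.corrcoef(spirits[2:], y[2:])[0, 1])
```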
3.3 Animal spirits, learning and forgetfulness
In this section simulations of the behavioral model in the time domain are
presented and interpreted. The upper panel of Figure 3.1 shows the time pattern
of output produced by the behavioral model given a particular realization of the
stochastic i.i.d. shocks. A strong cyclical movement in the output gap can be
observed. The autocorrelation coefficient of the output gap is 0.95 (which is very
close to 0.94, i.e. the autocorrelation of the output gap in the US during 1960-
2009). The lower panel of Figure 3.1 shows a variable called “animal spirits”. It
represents the evolution of the probabilities that agents extrapolate a positive
output gap. These probabilities can also be interpreted as the fraction of agents
using a positive extrapolation rule. Thus, when the probability that agents
extrapolate a positive output gap is 1, we will say that the fraction of agents
using this rule is 1. When in Figure 3.1 the curve reaches 1 all agents are
extrapolating a positive output gap; when the curve reaches 0 no agents are
extrapolating a positive output gap. In that case they all extrapolate a negative
output gap. Thus the curve can also be interpreted as showing the degree of
optimism and pessimism of agents who make forecasts of the output gap.
The concept of “animal spirits” has been introduced by Keynes(1936). Keynes
defined these as waves of optimism and pessimism of investors that have a self-
fulfilling property and that drive the movements of investment and output8. As a
result of the rational expectations revolution, the notion that business cycle
movements can be driven by independent waves of optimism and pessimism
was discarded from mainstream macroeconomic thinking. Recently it has been
given a renewed academic respectability by Akerlof and Shiller(2009)9. Our
8 See Mario Nuti (2009) on the different interpretations of “Animal Spirits”. See also Farmer(2006).
9 There is an older literature trying to introduce the notion of animal spirits in macroeconomic models
that will be discussed in section 3.8.
model gives a precise definition of these “animal spirits”. We now show how
important these animal spirits are in shaping movements in the business cycle.
Combining the information of the two panels in figure 3.1 it can be seen that the
model generates endogenous waves of optimism and pessimism (“animal
spirits”). During some periods optimists (i.e. agents who extrapolate positive
output gaps) dominate and this translates into above average output growth.
These optimistic periods are followed by pessimistic ones when pessimists (i.e.
agents who extrapolate negative output gaps) dominate and the growth rate of
output is below average. These waves of optimism and pessimism are essentially
unpredictable. Other realizations of the shocks (the stochastic terms in equations
(3.1) – (3.3)) produce different cycles with the same general characteristics.
These endogenously generated cycles in output are made possible by a self-
fulfilling mechanism that can be described as follows. A series of random shocks
creates the possibility that one of the two forecasting rules, say the extrapolating
one, has a higher performance (utility), i.e. a lower mean squared forecast error
(MSFE). This attracts agents that were using the fundamentalist rule. If the
successful extrapolation happens to be a positive extrapolation, more agents will
start extrapolating the positive output gap. The “contagion-effect” leads to an
increasing use of the optimistic extrapolation of the output-gap, which in turn
stimulates aggregate demand. Optimism is therefore self-fulfilling. A boom is
created.
How does a turnaround arise? There are two mechanisms at work. First, there
are negative stochastic shocks that may trigger the turnaround. Second, there is
the application of the Taylor rule by the central bank. During a boom, the output
gap becomes positive and inflation overshoots its target. This leads the central
bank to raise the interest rate, thereby setting in motion a reverse movement in
output gap and inflation. This dynamics tends to make a dent in the performance
of the optimistic extrapolative forecasts. Fundamentalist forecasts may become
attractive again, but it is equally possible that pessimistic extrapolation becomes
attractive and therefore fashionable again. The economy turns around.
These waves of optimism and pessimism can be understood to be searching
(learning) mechanisms of agents who do not fully understand the underlying
model but are continuously searching for the truth. An essential characteristic of
this searching mechanism is that it leads to systematic correlation in beliefs (e.g.
optimistic extrapolations or pessimistic extrapolations). This systematic
correlation is at the core of the booms and busts created in the model. Note,
however, that when computed over a significantly large period of time the
average error in the forecasting goes to zero. In this sense, the forecast bias
tends to disappear asymptotically.

Figure 3.1: Output gap in behavioral model

The results concerning the time path of inflation are shown in figure 3.2. First
concentrate on the lower panel of figure 3.2. This shows the fraction of agents
using the extrapolator heuristics, i.e. the agents who do not trust the inflation
target of the central bank. One can identify two regimes. There is a regime in
which the fraction of extrapolators fluctuates around 50%, which also implies
that the fraction of forecasters using the inflation target as their guide (the
“inflation targeters”) is around 50%. This is sufficient to maintain the rate of
inflation within a narrow band of approximately +1% and -1% around the central
bank’s inflation target. There is a second regime, though, which occurs when the
extrapolators are dominant. During this regime the rate of inflation fluctuates
significantly more. Thus the inflation targeting of the central bank is fragile. It
can be undermined when forecasters decide that relying on past inflation
movements produces better forecast performances than relying on the central
bank’s inflation target. This can occur quite unpredictably as a result of
stochastic shocks in supply and/or demand. We will return to the question of
how the central bank can reduce this loss of credibility in chapter 5 on optimal
monetary policy.

Figure 3.2: Inflation in behavioral model
Conditions for animal spirits to arise
The simulations reported in the previous section assumed a given set of
numerical values of the parameters of the model (see appendix). It was found
that for this set of parameter values animal spirits (measured by the movements
in the fraction of optimistic extrapolators) emerge and affect the fluctuations of
the output gap. The correlation coefficient between the fraction of optimists and
the output gap in the simulation reported in figure 3.1 is 0.86. One would like to
know how this correlation evolves when one changes the parameter values of
the model. I concentrate on two parameter values here, the intensity of choice
parameter, γ, and the memory agents have when calculating the performance of
their forecasting rules. This sensitivity analysis will allow us to detect under what
conditions “animal spirits” can arise.
A willingness to learn
We first concentrate on the intensity of choice parameter, γ. As will be
remembered this is the parameter that determines the intensity with which
agents switch from one rule to the other when the performances of these rules
change. This parameter is in turn related to the importance of the stochastic
component in the utility function of agents. When γ is zero the switching
mechanism is purely stochastic. In that case, agents decide about which rule to
apply by tossing a coin. They learn nothing from past mistakes. As γ increases
they are increasingly sensitive to past performance of the rule they use and are
therefore increasingly willing to learn from past errors.
To check the importance of this parameter γ in creating animal spirits we
simulated the model for consecutive values of γ starting from zero. For each
value of γ we computed the correlation between the animal spirits and the
output gap. We show the results of this exercise in figure 3.3. On the horizontal
axis the consecutive values of γ (intensity of choice) are presented. On the
vertical axis the correlation coefficient between output gap and animal spirits is
shown. We obtain a very interesting result. It can be seen that when γ is zero (i.e.
the switching mechanism is purely stochastic), this correlation is zero. The
interpretation is that in an environment in which agents decide purely randomly,
i.e. they do not react to the performance of their forecasting rule, there are no
systematic waves of optimism and pessimism (animal spirits) that can influence
the business cycle. When γ increases, the correlation increases sharply. Thus in
an environment in which agents learn from their mistakes, animal spirits arise.
In other words, one needs a minimum level of rationality (in the sense of a
willingness to learn) for animal spirits to emerge and to influence the business
cycle. It appears from figure 3.3 that this is achieved with relatively low levels of
γ. Thus, surprisingly, animal spirits arise not because agents are irrational. On the
contrary, animal spirits can only emerge if agents are sufficiently rational.
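The sensitivity exercise behind figure 3.3 can be reproduced in outline by looping over γ and recording the correlation between the animal spirits proxy and the output gap. The snippet below assumes the simulate() helper from the simulation sketch at the end of section 3.2 is in scope; because that sketch uses illustrative parameters and its own proxy, the numbers will differ from figure 3.3, but the pattern of interest is how the correlation changes with γ.

```python
import numpy as np
# Assumes simulate() from the earlier simulation sketch has been defined.

gammas = np.linspace(0.0, 5.0, 11)
for gamma in gammas:
    # Average over a few seeds to smooth out simulation noise.
    corrs = []
    for seed in range(5):
        y, pi, spirits = simulate(T=2000, gamma=gamma, seed=seed)
        corrs.append(np.corrcoef(spirits[2:], y[2:])[0, 1])
    print(f"gamma = {gamma:4.1f}   corr(animal spirits, output gap) = {np.mean(corrs):5.2f}")
```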
Figure 3.3 Animal spirits and learning
A capacity to forget
When agents test the performance of the forecasting rules they compute past
forecasting errors. In doing so, they apply weights to these past forecast errors.
These weights are represented by the parameter ω_k in equations (3.9)-(3.10).
We assume that these weights decline as the past recedes. In addition, we
assume that these weights decline exponentially. Let us define ω_k = (1 - ρ)ρ^k
(and 0 ≤ ρ ≤ 1). We can then rewrite equations (3.9) and (3.10) as follows (if you
do not see this, try the reverse, i.e. start from (3.23) and (3.24), do repeated
substitutions of U_f,t-1, U_f,t-2, etc., and you then find (3.9) and (3.10)):
$$U_{f,t} = \rho\, U_{f,t-1} - (1-\rho)\left[ y_{t-1} - \tilde{E}_{f,t-2}\, y_{t-1} \right]^2 \qquad (3.23)$$
$$U_{e,t} = \rho\, U_{e,t-1} - (1-\rho)\left[ y_{t-1} - \tilde{E}_{e,t-2}\, y_{t-1} \right]^2 \qquad (3.24)$$
We can now interpret ρ as a measure of the memory of agents. When ρ = 0
there is no memory; i.e. only last period’s performance matters in evaluating a
forecasting rule; when ρ = 1 there is infinite memory, i.e. all past errors,
however far in the past, obtain the same weight. Since in this case there are
infinitely many periods to remember, each period receives the same weight,
which is effectively zero. Values of ρ between 0 and 1 reflect some but imperfect
memory. Take as an example ρ = 0.6. This number implies that agents give a
weight of 0.4 to the last observed error (in period t-1) and a weight of 0.6 to all
the errors made in periods beyond the last period.
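The equivalence between the recursive form (3.23)–(3.24) and the weighted sum (3.9)–(3.10) with ω_k = (1 − ρ)ρ^k is easy to verify numerically. A small sketch with a made-up sequence of forecast errors:

```python
import numpy as np

rho = 0.6
rng = np.random.default_rng(1)
errors = rng.normal(size=200)          # made-up forecast errors, most recent first

# Recursive form (3.23): U_t = rho * U_{t-1} - (1 - rho) * (latest squared error)
U = 0.0
for e in errors[::-1]:                 # feed the errors from the oldest to the most recent
    U = rho * U - (1.0 - rho) * e**2

# Direct weighted-sum form (3.9) with omega_k = (1 - rho) * rho**k
k = np.arange(len(errors))
U_direct = -np.sum((1.0 - rho) * rho**k * errors**2)

print(U, U_direct)                     # the two numbers coincide up to truncation error
```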
We performed the same exercise as in the previous section and computed the
correlation between animal spirits and the output gap for consecutive values of
ρ. The results are shown in figure 3.4. It can be seen that when ρ = 1 the
correlation is zero. This is the case where agents attach the same weight to all
past observations, however far in the past they occur. Put differently, when
agents have infinite memory, they forget nothing. In that case animal spirits do
not occur. Thus one needs some forgetfulness (which is a cognitive limitation) to
produce animal spirits. Note that the degree of forgetfulness does not have to be
large. For values of ρ below 0.98 the correlations between output and animal
spirits are quite high.
This and the previous results lead to an interesting insight. Animal spirits emerge
when agents behave rationally (in the sense of a willingness to learn from
mistakes) and when they experience cognitive limitations. They do not emerge
in a world of either super-rationality or irrationality.
Figure 3.4 Animal Spirits and forgetting
3.4. The world is non-normal
The existence of waves of optimism and pessimism that have a self-fulfilling
nature has important implications for the nature of the uncertainty in
macroeconomic models. Mainstream macroeconomic and finance models almost
universally produce movements in output and asset prices that are normally
distributed. This was also made clear in chapter 1. This is due to the fact that in
these mainstream models the shocks hitting the economy are assumed to be
normally distributed. Since these models are linear these normally distributed
shocks are translated into movements of output and prices that are also
normally distributed10.
We showed in chapter 1 that in the real world the movements in the output gap
are not normally distributed. (In chapter 9, we will provide additional evidence).
Our behavioral macroeconomic model also produces movements of output that
are not normally distributed. We show this by presenting the histogram of the
output gaps obtained from a typical simulation like the one produced in figure
3.1. The result is presented in figure 3.5. The frequency distribution of the output
gap deviates significantly from a normal distribution. There is excess kurtosis
(kurtosis = 4.4), i.e. there is too much concentration of observations around the
10 We discussed the procedure of introducing autocorrelated disturbances in DSGE-models in chapter
1. These models typically also produce normally distributed movements in output.
mean for the distribution to be normal. In addition there are fat tails. This means
that there are too many observations that are extremely small or extremely large
to be compatible with a normal distribution. For example, we count 5
observations that exceed five standard deviations. In a normal distribution
observations that deviate from the mean by more than 5 standard deviations
have a chance of occurrence of one in 1.7 million observations. The frequency
distribution in figure 3.5 has 2000 observations. Yet we find 5 such observations.
One can conclude that the frequency distribution of the output gap is not a
normal distribution. We also applied a more formal test of normality, the Jarque-
Bera test, which rejected normality. Note that the non-normality of the
distribution of the output gap is produced endogenously by the model, as we
feed the model with normally distributed shocks.

Figure 3.5: Frequency distribution of simulated output gap
Kurtosis = 5.9, Jarque-Bera = 178.4 (p-value = 0.001)

This result is not without implications. It implies that when we use the
assumption of normality in macroeconomic models we vastly underestimate the
probability of large changes. In this particular case, the normality assumption
tends to underestimate the probability that intense recessions or booms occur.
The same is true in finance models that assume normality. These models
seriously underestimate the probability of extremely large asset price changes.
In other words they underestimate the probability of large bubbles and crashes.
To use the metaphor introduced by Nassim Taleb, there are many more Black
Swans than theoretical models based on the normality assumption predict.

It is fine to observe this phenomenon. It is even better to have an explanation for
it. Our model provides such an explanation. It is based on the particular
dynamics of “animal spirits”. We illustrate this in figure 3.6. This shows the
frequency distribution of the animal spirits index (defined earlier) which is
associated with the frequency distribution of the output gap obtained in figure
3.5. From Figure 3.6 we observe that there is a concentration of the animal
spirits at the extreme values of 0 and 1 and also in the middle of the distribution.
This feature provides the key explanation of the non-normality of the
movements of the output gap.

When the animal spirits index clusters in the middle of the distribution we have
tranquil periods. There is no particular optimism or pessimism, and agents use a
fundamentalist rule to forecast the output gap. At irregular intervals, however,
the economy is gripped by either a wave of optimism or of pessimism. The
nature of these waves is that beliefs get correlated. Optimism breeds optimism;
pessimism breeds pessimism. This can lead to situations where everybody has
become either optimist or pessimist. These periods are characterized by extreme
positive or negative movements in the output gap (booms and busts).

Figure 3.6: Frequency distribution simulated animal spirits
As mentioned earlier, the shocks in demand and supply in our behavioral model
are normally distributed. These normally distributed shocks, however, are
transformed into non-normally distributed movements in the output gap. Thus
our model explains non-normality; it does not assume it.
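This prediction can be checked with standard tools: the kurtosis and the Jarque-Bera statistic. A minimal sketch; the fat-tailed Student-t series below is only a stand-in for a simulated output-gap series (for instance one produced by the simulate() sketch earlier in this chapter), so the printed statistics are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for a simulated output-gap series; replace with e.g. simulate()[0].
series = rng.standard_t(df=5, size=2000)

print("kurtosis:", stats.kurtosis(series, fisher=False))    # 3 for a normal distribution
jb_stat, p_value = stats.jarque_bera(series)
print("Jarque-Bera:", jb_stat, "p-value:", p_value)          # a small p-value rejects normality
```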
From the previous discussion it follows that our behavioral macroeconomic
model has a strong prediction about how the movements of the output gap are
distributed. These movements should be non-normal. We discussed the evidence
of non-normality of the distribution of the US output gap in the postwar period in
chapter 1, and we concluded there that indeed the real-life distribution is
characterized by non-normality. We will spend chapter 9 discussing further
empirical evidence for the behavioral macroeconomic model.
Thus we can conclude that models that are based on normal distributions will
tend to underestimate the probability that intense booms and busts occur. Put
differently, our behavioral model correctly predicts that large swings in the output
gap are a regular feature of reality. This contrasts with mainstream linear
rational expectations models like the DSGE-models discussed in chapter 1.
3.5 Uncertainty and risk
Frank Knight, a famous professor of economics at the University of Chicago
before the Second World War, introduced the distinction between risk and
uncertainty in his book “Risk, Uncertainty and Profit”, published in 1921. Risk
according to Knight is quantifiable. It has to do with events that have a
probability of occurrence that can be represented by a statistical distribution. As
a result, we can compute the probability that these events occur with great
precision. The reason we can do this is that there is some regularity in the
occurrence of these events and lots of data to detect this regularity. Uncertainty
in contrast does not allow for such quantification because of a lack of regularity
and/or an insufficiency of data to detect these regularities.
The mainstream macroeconomic models based on rational expectations
(including the DSGE-models) only allow for risk. In these models agents are
capable of making probabilistic statements about all future shocks based on
quantifiable statistical distributions obtained from the past. Thus in the DSGE-
models agents know, for example, that in any period there is a probability of say
10% that a negative supply shock of -5% will occur. In fact they can tabulate the
probability of all possible supply shocks, and all possible demand shocks. This is
certainly an extraordinary assumption.
The frequency distribution of the output gap presented in Figure 3.5 suggests
that although the distribution is non-normal, there is enough regularity in the
distribution for individual agents to use in order to make probabilistic
predictions. This regularity, however, appears only because of the large number of
periods (2000) in the simulation exercise. Assuming that one period
corresponds to one month, we can see that the frequency distribution is obtained
using about 170 years of observations. In most developed countries today the
maximum number of years of output gap data is about 40 to 50 years, roughly a
quarter of the number of observations used to construct the frequency
distribution in Figure 3.5.
The question that arises then is how reliable are frequency distributions of the
output gap obtained from much shorter periods. In order to answer this question
we ran simulations of the behavioral model over short periods (400 periods,
corresponding to approximately 40 years). For each 400-period simulation we
computed the frequency distribution of the output gap. The result is presented in
Figure 3.7. We observe that the frequency distributions of the output gap
obtained in different 400-period simulations look very different. All exhibit
excess kurtosis but the degree of excess kurtosis varies a great deal. In all cases
there is evidence of fat tails, but the exact shape varies a lot. In some 400-period
simulations there are only positive fat tails, in others only negative fat tails. In
still other simulations fat tails appear on both sides of the distributions.
This suggests that, if our model of animal spirits is the right representation of the
real world, observations over periods of approximately 40 years are by far
insufficient to detect regularities in the statistical distributions of important
variables such as the output gap that can be used to make probabilistic statements
about this variable. Thus, our behavioral model comes close to representing a
world in which uncertainty rather than risk prevails at the macroeconomic level.
This contrasts with the standard rational expectations macroeconomic models in
which there is only risk and no uncertainty.
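The sampling problem described above is easy to illustrate: split a long fat-tailed series into 400-observation windows and compute the kurtosis of each. Again a stand-in series is used rather than the model itself, so only the qualitative point carries over: the window-by-window estimates vary widely.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
series = rng.standard_t(df=5, size=4000)   # stand-in for a long simulated output-gap series

window = 400                               # roughly 40 years of monthly observations
for i in range(0, len(series), window):
    chunk = series[i:i + window]
    print(f"window {i // window + 1}: kurtosis = {stats.kurtosis(chunk, fisher=False):.2f}")
```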
Figure 3.7: Frequency distribution of output gap in 400-period simulations
3.6 Credibility of inflation targeting and animal spirits
In the previous sections we identified the conditions in which animal spirits, i.e.
self-fulfilling waves of optimism and pessimism can arise. We argued that when
animal spirits prevail, uncertainty in the sense of Frank Knight is created. Our
implicit assumption was that the inflation target announced by the central bank
is not 100% credible. This imperfect credibility leads agents to be skeptical and
to continuously test the resolve of the central bank. We showed that in such an
environment animal spirits can arise.
In this section we ask the following question. Suppose the inflation target can be
made 100% credible. What does such a regime imply for the emergence of
animal spirits? We ask this question not because we believe that such a perfect
credibility can be achieved, but rather to analyze the conditions under which
animal spirits can arise.
We analyze this question in the following way. Equations (3.14) and (3.15)
define the forecasting rules agents use in an environment of imperfect
credibility. In such an environment, agents will occasionally be skeptical about
the announced inflation target. In that case they cease to use the inflation target
to forecast inflation and revert to an extrapolative rule. In a perfectly credible
inflation targeting regime agents have no reason to be skeptical and will
therefore always use the announced target as the basis for their forecast. Thus in
a perfectly credible regime agents only use rule (3.14) and there is no switching.
The market forecast of inflation (equation (3.17)) now simplifies to
$$\tilde{E}_t \pi_{t+1} = \pi^*$$
and the switching equations (3.19) and (3.20) disappear. The rest of the model is
unchanged.
We simulated this version of the model using the same techniques as in the
previous sections. We show some of the results in Figure 3.8 and compare them
with the results obtained in the regime of imperfect credibility of inflation
targeting analyzed in the previous section.
The contrast in the results is quite striking. When inflation targeting is perfectly
credible animal spirits are weak. This can be seen from the fact that the animal
spirits index does not show a concentration of observations at the extreme
values of 1 (extreme optimism) and 0 (extreme pessimism). This contrasts very
much with the imperfect credibility case. This difference in occurrence of animal
spirits has the effect of eliminating the fat tails in the frequency distribution of
the output gap and of inflation. In fact both distributions are now normal with a
kurtosis around 3. The Jarque-Bera test cannot reject the hypothesis that the
distributions of output gap and inflation are normal in the perfect credibility
case. The contrast with the distributions obtained in the imperfect credibility
case is striking: these exhibit fat tails and excess kurtosis.
Figure 3.8: Frequency distribution output gap and inflation, and animal
spirits
Perfect credibility Imperfect credibility
Thus when inflation targeting is perfectly credible, periods of intense booms and
busts produced by the existence of animal spirits do not occur. In addition,
Knightian uncertainty is absent. The normal distribution of output gap and
inflation allows agents to make reliable probabilistic statements about these
variables. Where does this result come from? The answer is that when inflation
targeting is perfectly credible, the central bank does not have to care about
inflation because inflation remains close to the target most of the time. As a
result, the interest rate instrument can be used to stabilize output most of the
time. Thus when animal spirits are optimistic and tend to create a boom, the
central bank can kill the boom by raising the interest rate. It can do the opposite
when animal spirits are pessimistic. Put differently, in the case of perfect
credibility the central bank is not put into a position where it has to choose
between inflation and output stabilization. Inflation stability is achieved
automatically. As a result, it can concentrate its attention on stabilizing output.
This then “kills” the animal spirits.
A fully credible inflation-targeting regime produces wonderfully stabilizing
results on output and inflation movements. How can a central bank achieve such
a regime of full credibility of its announced inflation target? A spontaneous
answer is that this could be achieved more easily by a central bank that only
focuses on stabilizing the rate of inflation and stops worrying about stabilizing
output. Thus by following a strict inflation targeting regime a central bank is, so
one may think, more likely to reach full credibility. We checked whether this
conclusion is correct in the following way. We simulated the model assuming
that the central bank sets the output coefficient in the Taylor rule equal to zero.
Thus this central bank does not care at all about output stabilization and only
focuses on the inflation target. Will such a central bank, applying strict inflation
targeting, come close to full credibility? We show the result of simulating the
model under strict inflation targeting in Figure 3.9. The answer is immediately
evident. The frequency distribution of output gap shows extreme deviations
from the normal distribution with very fat tails, suggesting large booms and
busts. Even more remarkably, we find the same feature in the frequency
distribution of the rate of inflation, which now shows large deviations from the
target (normalized at 0). Thus strict inflation targeting dramatically fails to
bring us closer to full inflation credibility. The reason why this is so, is that the
power of the animal spirits is enhanced. This can be seen by the middle graph in
Figure 3.9. We now see that most of the time the economy is gripped by either
extreme optimism or extreme pessimism. This tends to destabilize not only the
output gap but also the rate of inflation. Thus, strict inflation targeting instead of
bringing us closer to the nirvana of perfect credibility moves us away from it. We
will come back to this issue in chapter 5 where we analyze optimal monetary
policies in a behavioral macroeconomic model.
Figure 3.9 Frequency distribution output gap, animal spirits and inflation
with strict inflation targeting
3.7. Two different business cycle theories
The behavioral and rational expectations macroeconomic models lead to very
different views on the nature of the business cycle. Business cycle movements in the
rational expectations (D