# TAYLOR RULES AND INTEREST RATE SMOOTHING IN THE EURO AREA

**ABSTRACT** Conventional wisdom suggests that central banks implement monetary policy in a gradual fashion. Some researchers claim that this gradualism is due to 'optimal cautiousness'; in contrast, Rudebusch (Journal of Monetary Economics, Vol. 49 (2002), pp. 1161-1187) states that the observed policy rate sluggishness is mainly due to serially correlated exogenous shocks. In this paper we use models in first differences to assess the 'endogenous' versus 'exogenous' gradualism hypothesis for the Euro area. Our results suggest that the joint formalization of the two hypotheses is likely to offer the best simple approximation of the Euro area monetary policy conduct. Copyright © 2007 The Author; Journal compilation © 2007 Blackwell Publishing Ltd and The University of Manchester.


Taylor Rules and Interest Rate Smoothing

in the Euro Area

Efrem Castelnuovo∗

University of Padua

April 2005

Abstract

Conventional wisdom suggests that Central Banks implement monetary policy in a gradual fashion. Some researchers claim that this gradualism is due to 'optimal cautiousness'; by contrast, Rudebusch (2002) states that the observed policy-rate sluggishness is mainly due to serially correlated exogenous shocks. In this paper we employ models in first differences to assess the 'endogenous' vs. 'exogenous' gradualism hypothesis for the Euro Area. Our results offer support to the former, highlighting the importance of modelling the systematic policy inertia when tracking the Euro Area monetary policy conduct.

JEL classification: E4, E5.

Keywords: Taylor rules, interest rate smoothing, Euro Area, serial correlation, omitted variables.

∗ First draft: December 2002. We are grateful to Keith Blackburn (the Editor) and two anonymous referees for helpful comments and suggestions. We also thank Marie-Luce Bianne, Antonello D'Agostino, Carlo Favero, Marzio Galeotti, Dieter Gerdesmeier, Petra Gerlach-Kristen, Victor Lopez, Roberto Motto, Sergio Nicoletti-Altimari, Jorge Rodrigues, Barbara Roffia, Massimo Rostagno, Frank Smets, Paul Söderlind, Paolo Surico, Astrid Van Landschoot, and the participants at the ECB/Monetary Policy Strategy Division Seminar for insightful discussions. All remaining errors are ours. The hospitality of the European Central Bank, where much of this work was developed, is gratefully acknowledged. Author's details: Efrem Castelnuovo, Department of Economics, University of Padua, Via del Santo 33, I-35123 Padua (PD). Phone: +39 049 827 4257, Fax: +39 049 827 4211. E-mail: efrem.castelnuovo@unipd.it.

1 Introduction

The Taylor (1993) rule has captured the attention of researchers involved in monetary policy analysis for more than a decade now. One of the reasons for its success is that, in spite of its simplicity, this rule (which links the inflation rate and a measure of the output gap to the monetary policy rate) provides a good ex-post description of the monetary policy implemented by various Central Banks all over the world. Interestingly, when estimating Taylor-type rules econometricians typically find that the fit of such rules improves remarkably when the lagged policy rate is included among the regressors. The significance and high magnitude of the lagged interest rate have stimulated several scholars in this field to investigate the rationale behind this apparent gradualism in the conduct of monetary policy, gradualism often labelled 'interest rate smoothing' or 'monetary policy inertia'.1

Such policy inertia may be rationalized in different ways. Mishkin (1999) argues that monetary authorities are very averse to reversing the policy-rate course too frequently because of credibility problems, i.e. sudden, large reversals might lead agents in the economy to reduce their confidence in the Central Bank (CB henceforth)'s competence. Goodfriend (1991) discusses how an excessively volatile policy rate might induce financial instability because of the likely over-reaction of the markets (e.g. drastic portfolio reallocations, sharp modifications in the cost of loans for firms), which could lead to disastrous economic feedbacks. Amato and Laubach (1999) and Woodford (1999, 2001) demonstrate that a smooth policy-rate path can be seen as an optimal choice when the CB is not endowed with any commitment technology. In fact, an inertial rate, perceived as such by forward-looking private agents, may contribute to the reduction of the inflation bias arising under discretion. Following the intuition provided by Brainard (1967), Söderström (1999) and Sack (2000) show that parameter uncertainty may be another element suggesting gradualism to a monetary authority whose knowledge of the monetary transmission dynamics is limited. Positive exercises conducted by Favero and Milani (2001) and Castelnuovo and Surico (2004) suggest that model uncertainty is likely to have been a very important issue for the Fed. Finally, Orphanides (2003) argues that monetary authorities respond moderately to perceived shocks because they are mindful not to respond to noise in the data.2

1 Clarida, Galí, and Gertler (2000) estimate such a partial adjustment degree with various specifications of the Taylor rule with US data, finding a magnitude ≈ 0.8. Approximately the same magnitude is found by Kozicki (1999), Amato and Laubach (1999), and Doménech, Ledo, and Taguas (2002). Estimates for some other industrialized countries are presented in Henderson and McKibbin (1993) and Clarida, Galí, and Gertler (1998), while for the Euro Area there exist contributions by e.g. Peersman and Smets (1999), Taylor (1999), Gerlach and Schnabel (2000), Doménech, Ledo, and Taguas (2002), Surico (2003), Sauer and Sturm (2003), Hayo and Hofmann (2003), and Gerlach-Kristen (2003).

Interestingly enough, Rudebusch (2002) goes against the conventional wisdom and claims that interest rate smoothing at quarterly frequencies is just an illusion. In a nutshell, his reasoning is the following. If the partial adjustment strategy had such a high importance in the policy-rate setting, then rational agents should be capable of predicting future values of the quarterly rate with a high degree of precision. On the contrary, standard term-structure regressions show how unpredictable the policy rate is over one quarter. Rudebusch takes this evidence as convincing grounds to claim that quarterly interest rate smoothing is just negligible, and that the persistence of the observed policy rate is probably due to serially correlated deviations from the Taylor rate, due e.g. to commodity price scares, credit crunches, and financial crises.3

Indeed, the issue of dynamics is important from a policy perspective. In fact, over the last two decades we have observed an improvement in the inflation-output gap trade-off in many industrialized countries. Part of this improvement is surely attributable to better monetary-policy management, as remarked by Cecchetti, Flores-Lagunes, and Krause (2001) and Favero and Rovelli (2003).4 In general, it is necessary to understand the determinants of this successful management, in order to possibly replicate this success in the presence of similar macroeconomic conditions. Is the observed gradualism, then, endogenous, i.e. stemming from the systematic component of the monetary policy under analysis, or exogenous, i.e. due to serially correlated policy shocks?

2 Discussions concerning the interest rate smoothing issue may be found in Lowe and Ellis (1998), Goodhart (1999), Sack and Wieland (2000), Cecchetti (2000), and Srour (2001).

3 Moreover, Rudebusch (2002) claims that there might be an omitted-variable problem in standard Taylor rules; a similar opinion is expressed by Söderlind, Söderström, and Vredin (2004). Indeed, if the Taylor model is misspecified and missing an important serially correlated regressor, then the importance of the lagged interest rate might be just spurious. We deal with this relevant issue later in the paper.

4 Both Cecchetti et al (2001) and Favero and Rovelli (2003) acknowledge that the improved inflation-output gap trade-off has probably not been uniquely caused by better

While the discussion has been quite lively as far as the U.S. case is concerned [Rudebusch (2002), English, Nelson, and Sack (2003, ENS hereafter), Castelnuovo (2003a)], to our knowledge the literature is still silent as regards the Euro Area case. In fact, although several contributions about the 'counterfactual' as well as the true European Central Bank have already focussed their attention on Taylor-type rules,5 none of them has deepened the important issue of dynamics discussed above.

In this paper we employ models in first differences to assess the importance of the interest rate smoothing argument vs. that of serially correlated policy shocks in the Euro Area context. This strategy is followed to overcome Rudebusch (2002)'s criticism of the mis-specification of the tests performed with models in levels. In doing so, we take into account several definitions of the Taylor rate, in order to control for possible omitted-variable problems, as done by Clarida, Galí, and Gertler (1998, CGG henceforth), Gerlach and Schnabel (2000), Surico (2003), and Gerdesmeier and Roffia (2004a). Our results indicate that European data also support the partial adjustment mechanism hypothesis.

In performing this exercise, we have to keep in mind some important caveats. First, this is an ex-post analysis mostly referring to a counterfactual monetary policy conduct. In fact, the European Central Bank began to manage the Euro Area monetary policy in 1999, while the sample we employ starts much earlier, i.e. at the beginning of 1980. Second, we deal with a dataset created by computing weighted averages of the relevant data for different, potentially 'heterogeneous' countries such as those belonging to the Euro Area. Third, we deal with revised data, while a CB operates in real time.6

monetary policy management. In fact, there is evidence of a change in monetary policy preferences, and of more favourable sequences of supply shocks. Still, better monetary policy management seems to have been quite significant over the last two decades. For a contribution focussing on the measurement of policy-makers' preferences over inflation and output gap volatilities, see Cecchetti et al (2002).

5 A nice survey of such contributions is offered by Sauer and Sturm (2003).

Nevertheless, although some breaks may be clearly identified in this pattern (e.g. the ERM crisis in 1992), a common effort to bring inflation down to more sustainable levels has been implemented by several European countries since the early '80s, and continued under the monetary policy management of the European Central Bank [Gerdesmeier and Roffia (2004a)]. Moreover, the use of synthetic European data is fairly widespread among researchers [e.g. Peersman and Smets (1999), Taylor (1999), Gerlach and Schnabel (2000), Doménech et al (2002), Gerdesmeier and Roffia (2004a,b), Surico (2003), Sauer and Sturm (2003)]. Therefore, we think that our exercise can be considered a fairly good first approximation of the track followed by the 'average' monetary policy conduct in the Euro Area during the last two decades. Last but not least, given what is written above we obviously refrain from attaching any normative evaluation to our estimated simple Taylor rules.7

The structure of the paper is as follows. Section 2 explains the advantage of employing models in first differences when dealing with the identification issue affecting models in levels. In Section 3 we present and discuss our findings. Section 4 concludes. A description of the dataset employed in this paper follows, together with the References.

2 A direct test for partial adjustment versus serial correlation

6 Gerdesmeier and Roffia (2004b) show that Orphanides (2001)'s intuition on the importance of dealing with the real-time data issue applies to the Euro Area as well. By contrast, Sauer and Sturm (2003) demonstrate that the use of real-time industrial production data does not seem to play a very significant role for the point estimates of the Taylor rules they focus on. We leave the assessment of the impact of the data-revision issue on the results presented in this study to future research.

7 For a critical assessment of Taylor-type rules in such a context, see European Central Bank (2001).

Thinking of models in levels, an econometrician can easily build up two frameworks representing the partial adjustment (PA) vs. the serial correlation (SC) hypotheses. In particular, the former may be captured by the following model:

i_t = (1 − ρ) ĩ_t + ρ i_{t−1} + η_t    (1)

where i_t is the short-term policy rate managed by the CB in order to influence the inflation rate and the business cycle, ĩ_t is the target rate (i.e. the Taylor rate), ρ measures the importance of the interest rate smoothing motive, and η_t is a white noise policy shock. Alternatively, a process relating the serially correlated policy shock to the policy rate with no endogenous persistence may be shaped as follows:

i_t = ĩ_t + ε_t,    ε_t = ρ_ε ε_{t−1} + η_t    (2)

where ε_t is an AR(1) process with root ρ_ε.8

Having defined the PA model (1) and the SC model (2), a structure nesting the two reads as follows:

i_t = (1 − ρ) ĩ_t + ρ i_{t−1} + ε_t,    ε_t = ρ_ε ε_{t−1} + η_t    (3)

Unfortunately, an identification problem arises with such a model [Rudebusch (2002), Castelnuovo (2003b)]. In fact, both (1) and (2) tend to produce a persistent policy rate; it is therefore not very wise to rely upon models in levels like (3) to scrutinize the two hypotheses under investigation. By contrast, it is much more useful to manipulate the models in levels so as to work with their first-difference counterparts [ENS (2003)]. Once this is done, we are left with the following equations for the PA vs. SC hypotheses:

Δi_t = (1 − ρ) Δĩ_t + (1 − ρ)(ĩ_{t−1} − i_{t−1}) + η_t    (4)

vs.

Δi_t = Δĩ_t + (1 − ρ_ε)(ĩ_{t−1} − i_{t−1}) + η_t    (5)

8 We performed some econometric exercises in order to measure the order of serial correlation featuring in the residuals of simple backward- and forward-looking Taylor rules without smoothing. Our findings suggest that an AR(1) process is a good approximation of the errors. These findings, not included in the paper for the sake of brevity, are available upon request.
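The identification problem affecting the levels specification (3) is easy to see in simulation: the PA model (1) and the SC model (2) generate policy rates with nearly indistinguishable persistence. The following numpy sketch uses purely illustrative parameter values (ρ = ρ_ε = 0.8, an AR(0.9) target rate), not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000

# Common exogenous Taylor-rate path: persistent, as inflation and the
# output gap are in the data.
tilde_i = np.zeros(T)
for t in range(1, T):
    tilde_i[t] = 0.9 * tilde_i[t - 1] + rng.normal()

def autocorr(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

# PA model (1): i_t = (1 - rho) * tilde_i_t + rho * i_{t-1} + eta_t
rho = 0.8
i_pa = np.zeros(T)
for t in range(1, T):
    i_pa[t] = (1 - rho) * tilde_i[t] + rho * i_pa[t - 1] + rng.normal(scale=0.25)

# SC model (2): i_t = tilde_i_t + eps_t, with eps_t an AR(1) shock
rho_eps = 0.8
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = rho_eps * eps[t - 1] + rng.normal(scale=0.25)
i_sc = tilde_i + eps

# Both level series are strongly persistent: the autocorrelation of the
# policy rate alone cannot discriminate between the two hypotheses.
print(f"AC(1), PA model: {autocorr(i_pa):.2f}")
print(f"AC(1), SC model: {autocorr(i_sc):.2f}")
```

Both autocorrelations come out close to 0.9, which is the motivation for moving to the first-difference representations (4) and (5).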


The latter equation sheds some light on the implications of the SC engine. Here, variations of the Taylor rate cause an immediate and full reaction of the policy-rate change; in fact, there is no inertial adjustment, which is by contrast present in equation (4) via the coefficient (1 − ρ). It is then possible to build up a direct test of the PA vs. SC hypotheses by exploiting the empirical model

Δi_t = γ_2 Δĩ_t + γ_3 (ĩ_{t−1} − i_{t−1}) + η_t    (6)

and testing the null hypothesis

H0_SC: γ_2 = 1    (7)

Under the null (7), the SC specification holds true. By contrast, a rejection of the null hypothesis has clear implications for the dynamics of the policy rate, which must at that point be influenced also by its lag, if not necessarily just by its lag, the latter case corresponding to the pure PA model. Actually, there is no reason to believe that only one of the two hypotheses holds. However, the rejection of the null (7) would imply that the lagged policy rate enters the Taylor-type rule in its own right, and would support the endogenous gradualism hypothesis typically discussed in the literature.
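As an illustration of how the test based on (6)-(7) behaves, the sketch below simulates data from the PA model (1) with illustrative parameters (ρ = 0.8, so the true γ_2 = 1 − ρ = 0.2) and runs the first-difference regression by OLS; the t-statistic for H0: γ_2 = 1 is then strongly negative, i.e. the null is sharply rejected:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000

# Persistent target (Taylor) rate, then a PA policy rate as in equation (1).
tilde_i = np.zeros(T)
for t in range(1, T):
    tilde_i[t] = 0.9 * tilde_i[t - 1] + rng.normal()

rho = 0.8
i = np.zeros(T)
for t in range(1, T):
    i[t] = (1 - rho) * tilde_i[t] + rho * i[t - 1] + rng.normal(scale=0.25)

# Empirical model (6): di_t = g2*d(tilde_i_t) + g3*(tilde_i_{t-1} - i_{t-1}) + eta_t
di = np.diff(i)                 # element k is Delta i at time k+1
dtarget = np.diff(tilde_i)
gap = (tilde_i - i)[:-1]        # element k is the gap at time k, i.e. lagged

X = np.column_stack([dtarget, gap])
beta, *_ = np.linalg.lstsq(X, di, rcond=None)
g2, g3 = beta

# Conventional OLS standard errors and the t-statistic for H0: g2 = 1.
resid = di - X @ beta
sigma2 = resid @ resid / (len(di) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_null = (g2 - 1.0) / np.sqrt(cov[0, 0])

print(f"g2 = {g2:.2f} (true 1 - rho = 0.20), t-stat for g2 = 1: {t_null:.1f}")
```

Had the data been generated by the SC model (2) instead, g2 would land near 1 and the null would not be rejected.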

Taylor-rate definitions employed in our study

As far as the Taylor rate ĩ_t is concerned, it is natural to concentrate on some popular definitions of it. Our benchmark is the original Taylor (1993) rate, which reads as follows:

ĩ_t = c + b_π π^HICP_t + b_y y_t    (8)

where c is a constant, π^HICP_t is the year-on-year HICP inflation rate, and y_t is the output gap.9,10 A different specification of the Taylor rate has been popularized by CGG (1998, 2000). These authors have underlined the importance for the CB of adjusting the policy rate with respect to future, forecast movements of both inflation and the output gap. Their idea finds its rationale in the lags affecting monetary policy transmission. Their definition of the Taylor rate can be captured by the following modelization:11

ĩ_t = c + b_π E_{t−1}[π^HICP_{t+4}] + b_y E_{t−1}[y_t]    (9)

9 For a description of the dataset employed in our study, as well as of the construction of the variables involved in our regressions, see the Data description at the end of the paper. The dataset we used is available upon request.

10 In Taylor (1993), the policy rule reads as follows: i_t = π_t + 0.5 y_t + 0.5(π_t − π*) + r*, with π* = r* = 2%. The constant c in the various Taylor rates is then a linear convolution of the inflation target π* and the equilibrium real interest rate r*, i.e. c = r* − (b_π − 1)π*.
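Spelling out footnote 10's mapping: rearranging Taylor's original rule into the form of equation (8) gives

```latex
i_t = \pi_t + 0.5\,y_t + 0.5(\pi_t - \pi^*) + r^*
    = \underbrace{(r^* - 0.5\,\pi^*)}_{c}
    + \underbrace{1.5}_{b_\pi}\,\pi_t
    + \underbrace{0.5}_{b_y}\,y_t ,
```

so with π* = r* = 2 per cent, c = 2 − 1 = 1 and, in general, c = r* − (b_π − 1)π*.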

However, as already mentioned above, Rudebusch (2002) points to omitted serially correlated variables as a potential cause of the estimated high degree of PA. To check for this as well, we enrich the original specification (8) by adding a third regressor, as follows:

ĩ_t = c + b_π π^HICP_t + b_y y_t + b_z z_t    (10)

In our exercise, the regressor z_t plays different roles. One variable that we want to control for is a quadratic transformation of the output gap level, i.e. z_t = y_t^2. In doing so we are inspired by recent work on CBs' asymmetric preferences, which implies a non-quadratic representation of their loss function.12 Many normative analyses conducted so far have relied on a quadratic formalization of the CB's penalty function. Yet, apart from analytical tractability, there does not seem to be an obvious reason why a CB should symmetrically target the output gap measure [Blinder (1997), Goodhart (1999), Mayer (2002)]. With our simple modelling strategy we try to capture possibly asymmetric reactions by the CB to business-cycle movements.

Moreover, we also aim at investigating the CB's possible responses to movements in variables such as money (M3) growth and the nominal effective exchange rate, along the lines of contributions such as CGG (1998), Gerlach and Schnabel (2000), and Gerdesmeier and Roffia (2004a). The first element was important for the Bundesbank (CGG, 1998) and still has a prominent status within the ECB's monetary policy strategy [European Central Bank (2003)]; by contrast, the latter component is meant to capture possible 'external pressures' affecting Euroland. Given that a CB reacts to deviations of the relevant aggregates from their long-run equilibrium or reference values, we estimate our policy rules by taking the nominal effective exchange rate in deviation from its sample mean, while the M3 growth rate is considered in deviation from its reference value, i.e. 4.5%.

10 (continued) Neither in Rudebusch (2002)'s study nor in ours is the focus on assessing these elements; for investigations concentrating on these components, see Judd and Rudebusch (1998) and Doménech, Ledo, and Taguas (2002).

11 Sauer and Sturm (2003) underline the importance of considering a forward-looking Taylor rule when describing the monetary policy implemented in the Euro Area.

12 Researchers such as Gerlach (2003), Surico (2002, 2003), and Cukierman and Muscatelli (2003) have performed empirical endeavours along this avenue. See also the references quoted in those papers.

The time-span we consider in our analysis is 1980Q1-2003Q4. We adopt a Nonlinear Least Squares estimator for models without expectations (i.e. when either (8) or (10) is considered), while we employ a 2-Stage Nonlinear Least Squares procedure when (9) is taken into account.13
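The paper does not report its estimation code, but the NLS step can be sketched with concentrated (profile) least squares: conditional on ρ, the PA rule i_t = (1 − ρ)(c + b_π π_t + b_y y_t) + ρ i_{t−1} + η_t is linear in (c, b_π, b_y), so an inner OLS plus a grid over ρ solves the problem. All series and parameter values below are simulated stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 4000

# Simulated "macro" series: persistent inflation and output gap (illustrative).
pi = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    pi[t] = 0.9 * pi[t - 1] + rng.normal(scale=0.5)
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

# True rule: i_t = (1 - rho)*(c + b_pi*pi_t + b_y*y_t) + rho*i_{t-1} + eta_t
c, b_pi, b_y, rho = 1.0, 1.5, 0.5, 0.8
i = np.zeros(T)
for t in range(1, T):
    target = c + b_pi * pi[t] + b_y * y[t]
    i[t] = (1 - rho) * target + rho * i[t - 1] + rng.normal(scale=0.1)

def ssr(rho_try):
    """NLS sum of squared residuals with rho fixed; inner problem solved by OLS."""
    lhs = (i[1:] - rho_try * i[:-1]) / (1 - rho_try)   # implied target + scaled noise
    X = np.column_stack([np.ones(T - 1), pi[1:], y[1:]])
    beta, *_ = np.linalg.lstsq(X, lhs, rcond=None)
    resid = (1 - rho_try) * (lhs - X @ beta)           # back to eta-scale residuals
    return resid @ resid, beta

grid = np.arange(0.01, 0.99, 0.01)
best_rho, (_, best_beta) = min(((r, ssr(r)) for r in grid), key=lambda p: p[1][0])
print(f"rho = {best_rho:.2f}, c = {best_beta[0]:.2f}, "
      f"b_pi = {best_beta[1]:.2f}, b_y = {best_beta[2]:.2f}")
```

The grid minimum lands at the true ρ (up to grid resolution), with the target parameters recovered by the inner OLS.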

In the next Section we present and comment on our findings.

3 Findings

Our estimates for the Euro area confirm the importance of the PA mechanism in explaining the dynamics of the short-term policy rate. This can be seen in Table 1, which displays the results stemming from the implementation of the ENS test. Notably, the null (7) is strongly rejected under all the different specifications of the Taylor rate considered. Interestingly enough, almost all the point estimates of the inflation coefficient b_π suggest that a fairly tight monetary policy was implemented in Europe in the '80s and '90s.14 Indeed, all these simple feedback rules find the output gap measure to be a significant regressor, confirming for the Euro Area too the goodness of Taylor (1993)'s descriptive scheme.15

13 The initial conditions for the NLS/2SNLS are provided by LS/2SLS regressions. The instruments used for our 2SLS regressions are a constant and 5 lags of the HICP inflation rate, of the output gap, and of the short-term nominal interest rate. This number of lags was selected by running an unrestricted VAR(n) with HICP inflation and the output gap as endogenous variables and the policy rate (lags from 1 to n) as an exogenous variable, and by checking the indications stemming from standard lag-length criteria. Likelihood-ratio, final prediction error, Akaike, and Hannan-Quinn criteria all suggested 5 lags.

14 According to the standard New-Keynesian model à la Clarida et al (1999), the necessary and sufficient condition for a unique and stable equilibrium in an economic system populated by rational agents is (approximately) b_π > 1. Interestingly, with the estimates at hand we can never statistically reject the null hypothesis of a unique and stable equilibrium.

[insert Table 1 about here]

Notably, none of the additional regressors considered here shows statistical significance at the standard confidence levels. This result is in line with those found by Peersman and Smets (1999) and Gerlach and Schnabel (2000).16

Stability analysis

Table 2 shows the outcome of a stability analysis performed by exploiting two very popular stability tests: the Chow breakpoint test and the Chow forecast test.17 As the break date we chose 1999Q1, i.e. the beginning of Stage Three of EMU, a very important date for the Euro Area from an economic perspective. Overall, our estimates turn out to be fairly stable. In particular, the only doubts regarding the stability of our coefficients are cast on the forward-looking model and on the M3 growth rate gap model. Nevertheless, the rejection of the stability of the estimated coefficients is not clearcut, given the different and conflicting indications coming from the two tests we employed.

15 The figures displayed in Table 1 refer to Taylor rules estimated with an output gap measured as the log-deviation of real GDP from a linear trend (with constant), i.e. our benchmark case. Our results in terms of the statistical importance of the output gap in such regressions turn out to be robust when an HP-filter measure of potential output is employed, as well as when the output gap provided in the Area-Wide Model database is taken into account. The figures for the latter two cases are available upon request.

16 Instead, these findings seem to be at odds with those in CGG (1998), whose investigated sample spans from 1979 up to 1993. One reason for these different findings may lie in the different data at hand: national in CGG (1998)'s case, aggregate in ours. Moreover, a plausible explanation for this contrasting result may be the fact that the Maastricht Treaty, signed in 1992, forced all the signatory countries to implement tight monetary and fiscal policies in order to quickly converge toward the Maastricht criteria. Although important, external pressures might then have been replaced by domestic concerns fully captured by our sample choice, while only partially by CGG's.

17 The idea of the Chow breakpoint test is to fit the equation at hand separately on each subsample to see whether there are significant differences in the estimated equations. A significant difference indicates a structural change in the relationship. The Chow forecast test, instead, first estimates the model on a subsample comprised of the first observations, then exploits the estimated model to predict the values of the dependent variable in the remaining data points. A large difference between the actual and predicted values casts doubt on the stability of the estimated relation over the two subsamples. The distributions of these two tests are presented in Table 2. For more information about these tests, see Greene (1997), chapter 7.
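The breakpoint statistic described in footnote 17 can be sketched as follows, using the standard F form on simulated data with a deliberate slope break (series length, break date, and coefficients are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 96, 2                      # 96 "quarters"; k = constant + one regressor
x = rng.normal(size=n)
break_at = 76                     # hypothetical break date within the sample

# Slope shifts from 1.0 to 2.0 at the break, so the test should reject stability.
y = 1.0 * x + rng.normal(scale=0.5, size=n)
y[break_at:] = 2.0 * x[break_at:] + rng.normal(scale=0.5, size=n - break_at)

def rss(xs, ys):
    """Residual sum of squares from an OLS fit with a constant."""
    X = np.column_stack([np.ones_like(xs), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    r = ys - X @ beta
    return r @ r

rss_pooled = rss(x, y)
rss_split = rss(x[:break_at], y[:break_at]) + rss(x[break_at:], y[break_at:])

# Chow breakpoint statistic, distributed F(k, n - 2k) under the no-break null.
F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
print(f"Chow F({k}, {n - 2 * k}) = {F:.1f}")
```

With the break present, F far exceeds conventional critical values; rerunning with no slope shift would leave it near 1.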

4 Conclusions

In this paper we focussed our attention on the interest rate smoothing argument in Taylor-type schemes. In a recent contribution, Rudebusch (2002) challenges the conventional wisdom and states that interest rate smoothing at quarterly frequencies is just an illusion. As indirect proof, he claims that if this were not the case, then rational agents should be capable of predicting future movements of the policy rate; this does not seem to be true in the real world.

Following English, Nelson, and Sack (2003), we implemented a direct test of the interest rate smoothing hypothesis for the Euro Area. Our estimates turn out to be in favor of modelling an 'endogenous persistence' mechanism when fitting Taylor rules for the Euro Area. This finding, robust across different specifications of the Taylor rate, underlines the importance of undertaking research oriented towards a better understanding of the endogenous gradualism of monetary policy.

5 Data description

For our study we employed the 'Area-Wide Model' dataset. This dataset collects seasonally adjusted series (except for the HICP index) built up with the 'index aggregation method', i.e. the log-level index for any series is constructed as a weighted average of the log-level country-specific indexes of the 12 countries belonging to the Euro Area. A standard adjustment for seasonality, i.e. the 'ratio to moving average (multiplicative)' method, was applied to the HICP index, and it revealed no need to perform any seasonal adjustment for that series. The series employed in this study are the following (labels as in Fagan et al, 2005): 'HICP' = harmonized index of consumer prices, 'YER' = real GDP, 'STN' = short-term nominal interest rate, 'EEN' = effective nominal exchange rate, 'YGA' = output gap. The year-on-year HICP inflation rate was computed as π^HICP_t = 100 log(HICP_t / HICP_{t−4}). Our benchmark measure of the output gap was computed by applying a linear trend (with constant) to the log of real GDP. As alternatives, we computed potential output (i) by employing a standard HP filter with smoothing parameter 1600, and (ii) by employing the output gap measured by Fagan et al (2005), i.e. by considering a potential output constructed with the production-function approach. The nominal exchange rate employed in our regressions was demeaned using the in-sample mean for 1980Q1-2003Q4. For a fuller description of the above-mentioned time series, see Fagan et al (2005).
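The two benchmark transformations just described can be sketched as follows, using synthetic stand-in series rather than the AWM data:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 96  # 1980Q1-2003Q4 in quarters

# Illustrative quarterly series standing in for the AWM data.
hicp = 100 * np.exp(np.cumsum(rng.normal(0.005, 0.002, T)))   # price index
log_gdp = 4.0 + 0.006 * np.arange(T) + 0.02 * rng.normal(size=T)

# Year-on-year HICP inflation: pi_t = 100 * log(HICP_t / HICP_{t-4}),
# undefined for the first four quarters.
pi = np.full(T, np.nan)
pi[4:] = 100 * np.log(hicp[4:] / hicp[:-4])

# Benchmark output gap: log real GDP minus a fitted linear trend (with constant).
t = np.arange(T)
X = np.column_stack([np.ones(T), t])
beta, *_ = np.linalg.lstsq(X, log_gdp, rcond=None)
gap = 100 * (log_gdp - X @ beta)   # percentage deviation from trend

print(f"mean inflation: {np.nanmean(pi):.2f}, gap mean: {gap.mean():.2f}")
```

By construction, the linear-trend gap averages to zero over the sample, since OLS residuals with a constant are mean-zero.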

The annual growth rate of the money stock 'M3' was downloaded from the European Central Bank's web-site (http://www.ecb.int, under 'Key indicators'), and refers to a seasonally adjusted measure of M3. We performed a monthly-to-quarterly frequency transformation by taking simple averages of monthly observations. The M3 growth rate employed in our regressions was considered in deviation from the 4.5% reference value.


References

Amato, J., and T. Laubach, 1999, The Value of Interest Rate Smoothing: How the Private Sector Helps the Federal Reserve, Economic Review, Federal Reserve Bank of Kansas City, 47-64.

Blinder, A.S., 1997, What Central Bankers Could Learn from Academics - and Vice Versa, Distinguished Lecture on Economics in Government, Journal of Economic Perspectives, 11(2), 3-19.

Brainard, W., 1967, Uncertainty and the Effectiveness of Policy, American Economic Review Papers and Proceedings, 57(2), 411-425.

Castelnuovo, E., 2003a, Taylor Rules, Omitted Variables, and Interest Rate Smoothing in the US, Economics Letters, 81(1), 55-59, October.

Castelnuovo, E., 2003b, Describing the Fed's Conduct with Taylor Rules: Is Interest Rate Smoothing Important?, ECB Working Paper No. 232, May.

Castelnuovo, E., and P. Surico, 2004, Model Uncertainty, Optimal Monetary Policy and the Preferences of the Fed, Scottish Journal of Political Economy, 51(1), 105-126, February.

Cecchetti, S.G., 2000, Making Monetary Policy: Objectives and Rules, Oxford Review of Economic Policy, 16(4), 43-59.

Cecchetti, S.G., A. Flores Lagunes, and S. Krause, 2001, Has Monetary Policy Become More Efficient? A Cross Country Analysis, mimeo.

Cecchetti, S.G., M.M. McConnell, and G. Perez-Quiros, 2002, Policymakers' Revealed Preferences and the Output-Inflation Variability Trade-off: Implications for the European System of Central Banks, The Manchester School, 70(4), 596-618.

Clarida, R., J. Gali, and M. Gertler, 1998, Monetary Policy Rules in Practice: Some International Evidence, European Economic Review, 42, 1033-1067.

Clarida, R., J. Gali, and M. Gertler, 1999, The Science of Monetary Policy: A New Keynesian Perspective, Journal of Economic Literature, XXXVII, December, 1661-1707.

Cukierman, A., and V. Anton Muscatelli, 2003, Do Central Banks have Precautionary Demands for Expansions and for Price Stability? Theory and Evidence, mimeo.

Doménech, R., M. Ledo, and D. Taguas, 2002, Some New Results on Interest Rate Rules in EMU and in the US, Journal of Economics and Business, 54, 431-446.

English, W.B., W.R. Nelson, and B. Sack, 2003, Interpreting the Significance of the Lagged Interest Rate in Estimated Monetary Policy Rules, Contributions to Macroeconomics, 3(1), Article 5.

European Central Bank, 2001, Issues Related to Monetary Policy Rules, Monthly Bulletin, October.

European Central Bank, 2003, The Outcome of the ECB's Evaluation of Its Monetary Policy Strategy, Monthly Bulletin, June.

Fagan, G., J. Henry, and R. Mestre, 2005, An Area-Wide Model for the Euro Area, Economic Modelling, 22(1), 39-59, January.

Favero, C.A., and F. Milani, 2001, Parameter Instability, Model Uncertainty and Optimal Monetary Policy, IGIER Working Paper No. 196.

Favero, C.A., and R. Rovelli, 2003, Macroeconomic Stability and the Preferences of the Fed. A Formal Analysis, Journal of Money, Credit and Banking, 35(4), 545-556.

Gerdesmeier, D., and B. Roffia, 2004a, Empirical Estimates of Reaction Functions for the Euro Area, Swiss Journal of Economics and Statistics, 140(1), 37-66, March.

Gerdesmeier, D., and B. Roffia, 2004b, The Relevance of Real-Time Data in Estimating Reaction Functions for the Euro Area, Deutsche Bundesbank Discussion Paper No. 37/2004.

Gerlach, S., 2000, Asymmetric Policy Reactions and Inflation, Bank for International Settlements, mimeo.

Gerlach, S., 2003, Recession Aversion, Output and the Kydland-Prescott Barro-Gordon Model, Economics Letters, 81(3), 389-394, December.

Gerlach, S., and G. Schnabel, 2000, The Taylor Rule and Interest Rates in the EMU Area, Economics Letters, 67, 165-171.

Gerlach-Kristen, P., 2003, Interest Rate Reaction Functions and the Taylor Rule in the Euro Area, ECB Working Paper No. 258.

Gerlach-Kristen, P., 2004, Interest-Rate Smoothing: Monetary Policy Inertia or Unobserved Variables?, Contributions to Macroeconomics, 4(1), Article 3.

Goodfriend, M., 1991, Interest Rate Smoothing and the Conduct of Monetary Policy, Carnegie-Rochester Conference Series on Public Policy, 34, 7-30.

Goodhart, C., 1999, Central Banks and Uncertainty, Bank of England Quarterly Bulletin, February, 102-121.

Greene, W.H., 1997, Econometric Analysis, 3rd edition, Prentice-Hall International, Inc.

Hayo, B., and B. Hofmann, 2003, Monetary Policy Reaction Functions: ECB versus Bundesbank, ZEI Working Paper No. B03-24.

Henderson, D., and W.J. McKibbin, 1993, A Comparison of Some Basic Monetary Policy Regimes for Open Economies: Implications of Different Degrees of Instrument Adjustment and Wage Persistence, Carnegie-Rochester Conference Series on Public Policy, 39, 221-318.

Judd, J.P., and G.D. Rudebusch, 1998, Taylor's Rule and the Fed: 1970-1997, Economic Review, Federal Reserve Bank of San Francisco, 3, 3-16.

Kozicki, S., 1999, How Useful Are Taylor Rules for Monetary Policy?, Economic Review, Federal Reserve Bank of Kansas City, 5-33.

Lowe, P., and L. Ellis, 1998, The Smoothing of Official Interest Rates, in P. Lowe (ed.): Monetary Policy and Inflation Targeting, Proceedings of a Conference, Sydney: Reserve Bank of Australia.

Mayer, T., 2002, The Macroeconomic Loss Function: A Critical Note, CESifo Working Paper No. 771, September.

Orphanides, A., 2001, Monetary Policy Rules Based on Real-Time Data, The American Economic Review, 91(4), 964-985.

Orphanides, A., 2003, Monetary Policy Evaluation with Noisy Information, Journal of Monetary Economics, 50(3), 605-631, April.

Peersman, G., and F. Smets, 1999, The Taylor Rule: A Useful Monetary Policy Benchmark for the Euro Area?, International Finance, 2(1), 85-116.

Rudebusch, G.D., 2002, Term Structure Evidence on Interest Rate Smoothing and Monetary Policy Inertia, Journal of Monetary Economics, 49, 1161-1187.

Sack, B., 1998, Uncertainty, Learning, and Gradual Monetary Policy, Finance and Economics Discussion Series Working Paper No. 1998-34, Board of Governors of the Federal Reserve System.
