
INFLATION AND RELATIVE PRICES IN AN OPEN ECONOMY

... Weak empirical support for the exogeneity assumption made here can be found in Ashley (1981), who found direct Granger causality from inflation to relative-price variability for U.S. monthly data, but no direct feedback from relative-price variability to inflation. Similar studies for Sweden [see Assarsson (1984), Ch. 6] point in the same direction. ...
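The one-way causality test described in this excerpt can be illustrated with a standard Granger-causality check. The sketch below is a generic illustration under assumed data (hypothetical file and column names, arbitrary lag length), not Ashley's original procedure:

```python
# Sketch of a Granger-causality check between inflation and
# relative-price variability, in the spirit of Ashley (1981).
# File name, column names, and lag length are hypothetical.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("monthly_prices.csv")            # hypothetical monthly data
df["inflation"] = df["cpi"].pct_change()          # aggregate inflation rate
# Relative-price variability: cross-sectional variance of item inflation rates.
item_infl = df.filter(like="item_").pct_change()
df["rp_var"] = item_infl.var(axis=1)

# Null hypothesis: the second column does NOT Granger-cause the first.
# Here we test whether lagged inflation helps predict relative-price variability;
# swapping the columns tests the reverse direction.
data = df[["rp_var", "inflation"]].dropna()
grangercausalitytests(data, maxlag=6)
```

Rejecting the null for inflation to variability, but not for the reverse ordering, would match the one-way causality Ashley reports.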
... ⁴To estimate (6) a stochastic structure has to be imposed. A stochastic disturbance term u_it is added to (5), with E(u_it) = 0 and E(u_it)² = … ³In Assarsson (1984) two other measures of Vp_t were used, both being weighted variances. In one case a highly aggregated measure with only six commodity groups was used and in another case a measure with 44 constant-quality commodities was used. ...
... "In Assarsson (1984) results are also given for the alternative tests with the equations based on (5). The results in general were consistent with the results reported here. ...
Article
The relationship between inflation and relative-price variability is analyzed empirically in a multi-market, partial-information equilibrium model that incorporates raw materials on the supply side and open-economy characteristics, and allows for different supply responses across markets. The hypothesis that the expected, as well as the unexpected, rate of inflation affects relative-price variability is put forward and tested. The empirical results are consistent with the view that inflation is non-neutral, in the sense that it affects relative prices, and they also show that raw-material prices and foreign demand are important determinants of relative-price variability in the Swedish economy.
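A minimal sketch of the kind of test the abstract describes: regress relative-price variability on expected and unexpected inflation alongside the open-economy variables it mentions. The AR(1) expectations proxy, the simulated data, and all variable names are assumptions, not the paper's actual specification:

```python
# Sketch: do expected and unexpected inflation both move
# relative-price variability? Simulated data for illustration.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
T = 200
inflation = 0.02 + 0.01 * rng.standard_normal(T)   # simulated inflation rate
raw_materials = rng.standard_normal(T)             # raw-material price growth
foreign_demand = rng.standard_normal(T)            # foreign demand proxy

# Proxy expected inflation with AR(1) fitted values; the residual is the surprise.
ar = AutoReg(inflation, lags=1).fit()
expected = ar.fittedvalues
unexpected = ar.resid

# Simulated relative-price variability that responds to both components.
rpv = (0.3 * np.abs(expected) + 0.5 * np.abs(unexpected)
       + 0.05 * np.abs(rng.standard_normal(T - 1)))

X = sm.add_constant(np.column_stack(
    [expected, unexpected, raw_materials[1:], foreign_demand[1:]]))
print(sm.OLS(rpv, X).fit().summary())
```

Significant coefficients on both the expected and unexpected components would correspond to the non-neutrality result the abstract reports.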
... Alternative theoretical explanations for some of these features, particularly the relations between the mean rate of inflation and the variance and skewness of price relatives, are surveyed in Fischer (1981, 1982), Cukierman (1983), and Assarsson (1984). Fischer distinguishes between three broad categories of explanations. ...
Article
This paper analyzes the relationship between banks’ divergent strategies toward specialization and diversification of financial activities and their ability to withstand a banking sector crash. We first generate market-based measures of banks’ systemic risk exposures using extreme value analysis. Systemic banking risk is measured as the tail beta, which equals the probability of a sharp decline in a bank’s stock price conditional on a crash in a banking index. Subsequently, the impact of (the correlation between) interest income and the components of non-interest income on this risk measure is assessed. The heterogeneity in extreme bank risk is attributed to differences in the scope of non-traditional banking activities: non-interest generating activities increase banks’ tail beta. In addition, smaller banks and better-capitalized banks are better able to withstand extremely adverse conditions. These relationships are stronger during turbulent times compared to normal economic conditions. Overall, diversifying financial activities under one umbrella institution does not improve banking system stability, which may explain why financial conglomerates trade at a discount.
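The tail-beta measure lends itself to a simple empirical illustration. The sketch below estimates it by counting joint crash days on simulated returns; the paper itself uses extreme value analysis rather than this naive counting estimator, and all numbers here are made up:

```python
# Sketch of an empirical "tail beta": the probability that a bank's
# stock return falls below its q-quantile given that the banking-index
# return falls below its own q-quantile. Simulated returns only.
import numpy as np

rng = np.random.default_rng(1)
n = 2500                                   # roughly ten years of daily returns
common = rng.standard_normal(n)            # shared banking-sector factor
index_ret = common + 0.5 * rng.standard_normal(n)
bank_ret = 0.8 * common + 0.6 * rng.standard_normal(n)

q = 0.05                                   # "crash" = worst 5% of days
bank_crash = bank_ret <= np.quantile(bank_ret, q)
index_crash = index_ret <= np.quantile(index_ret, q)

tail_beta = (bank_crash & index_crash).sum() / index_crash.sum()
print(f"tail beta ≈ {tail_beta:.2f} (vs. {q:.2f} under independence)")
```

A tail beta well above q indicates that the bank tends to crash together with the banking sector, which is the systemic exposure the paper attributes to non-interest income.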
Article
The article reviews the first two stages of the Patinkin Controversy; these stages deal with the question of whether the Classical system is logically sound. Two distinct meanings of the term inconsistency are established: inconsistency in the sense that, for a given system of equations, the equations cannot all be simultaneously satisfied by at least one set of values of the variables; and inconsistency in the sense that, for a given set of assumptions, a contradiction is present. It is shown that much of the controversy's confusion has arisen from a failure to distinguish between these two meanings. The conclusion reached is that despite the attacks on Patinkin's position, the logical problem which he discovered still remains.
Chapter
The assumption of ‘steady’ inflation has proven to be a common starting point for most welfare analysis of inflation. As defined, steady inflation has three basic properties: (1) it is perfectly foreseen, so that the expected inflation rate, which alone influences behaviour, is in fact the inflation rate actually realised; (2) market institutions adjust sufficiently to accommodate the inflation; one example of such an adjustment would be the use of price-indexed contracts; other examples will be given below; (3) the effects of inflation are uniform over all commodity groups; that is, inflation causes no changes in relative prices.
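Property (3) amounts to the observation that a uniform inflation factor cancels out of every price ratio. A one-assert sketch with arbitrary illustrative numbers:

```python
# Uniform inflation scales every price by the same factor (1 + pi),
# so every relative price p_i / p_j is left unchanged.
import numpy as np

prices = np.array([10.0, 25.0, 4.0])       # hypothetical commodity prices
pi = 0.08                                   # 8% steady, uniform inflation
inflated = prices * (1 + pi)

rel_before = prices[:, None] / prices[None, :]
rel_after = inflated[:, None] / inflated[None, :]
assert np.allclose(rel_before, rel_after)   # relative prices are unchanged
```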
Article
There is widespread agreement that it is necessary to introduce into economics both dynamical relations and general interdependence. This is not a counsel of perfection or a manifestation of the desire for theoretical elegance and completeness. The most concrete, practical analysis is often vitiated by restrictive assumptions which ignore or ‘eliminate’ the elements of change and of interconnectedness. When two sectors of an economy are interdependent in some way (coupled, we may say), and when we have a solution other than the null or trivial one of no change, then it is quite inadmissible to discuss the one sector assuming the other unchanged. For, if the one does not remain constant, the other cannot either, by virtue of the coupling. Therefore to allow one sector to vary and keep the other constant would be to hold contradictory assumptions.
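The coupling argument can be made concrete with a two-sector linear system: if the law of motion of one sector depends on the other, holding the other fixed contradicts the dynamics. The coefficients below are arbitrary illustrations:

```python
# Two coupled sectors: each sector's rate of change depends on the
# other's level, so neither can stay constant while the other moves.
import numpy as np

A = np.array([[0.0, -0.5],     # dx/dt = -0.5 * y
              [0.5,  0.0]])    # dy/dt =  0.5 * x
state = np.array([1.0, 0.0])   # start: x displaced, y at rest
dt = 0.01
for _ in range(100):
    state = state + dt * (A @ state)   # simple Euler step

print(state)   # y has been dragged away from zero by the coupling
```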
Article
Many statistical models, and in particular autoregressive-moving average time series models, can be regarded as means of transforming the data to white noise, that is, to an uncorrelated sequence of errors. If the parameters are known exactly, this random sequence can be computed directly from the observations; when this calculation is made with estimates substituted for the true parameter values, the resulting sequence is referred to as the "residuals," which can be regarded as estimates of the errors. If the appropriate model has been chosen, there will be zero autocorrelation in the errors. In checking adequacy of fit it is therefore logical to study the sample autocorrelation function of the residuals. For large samples the residuals from a correctly fitted model resemble very closely the true errors of the process; however, care is needed in interpreting the serial correlations of the residuals. It is shown here that the residual autocorrelations are to a close approximation representable as a singular linear transformation of the autocorrelations of the errors so that they possess a singular normal distribution. Failing to allow for this results in a tendency to overlook evidence of lack of fit. Tests of fit and diagnostic checks are devised which take these facts into account.
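The diagnostic check motivated here is, in practice, a portmanteau test on the residual autocorrelations. The sketch below applies the Ljung-Box refinement of this statistic to a simulated AR(1) series, with the degrees of freedom corrected for the one fitted parameter, as the article's distributional argument requires; the data and model choice are illustrative assumptions:

```python
# Fit an ARMA-type model, then jointly test the residual
# autocorrelations with a Ljung-Box portmanteau statistic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(2)
e = rng.standard_normal(300)
y = np.empty(300)
y[0] = e[0]
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + e[t]         # true AR(1) process

resid = ARIMA(y, order=(1, 0, 0)).fit().resid
# model_df=1 adjusts the degrees of freedom for the one fitted AR parameter,
# the correction the article shows is needed to avoid overlooking lack of fit.
print(acorr_ljungbox(resid, lags=[10], model_df=1))
```

A small p-value here would indicate remaining autocorrelation in the residuals, i.e. that the fitted model has not reduced the series to white noise.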