ABSTRACT: A general approach to Bayesian isotonic changepoint problems is developed. Such isotonic changepoint analysis includes trend and other constrained problems, and it captures linear, non-smooth, as well as abrupt changes. The desired marginal posterior densities are obtained using a Markov chain Monte Carlo method. The methodology is exemplified with one simulated and two real data examples, where it is shown that our proposed Bayesian approach captures the qualitative conclusion about the shape of the …
Annals of the Institute of Statistical Mathematics 01/2009; 61(2):355-370.
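The abstract above describes sampling marginal posteriors under a monotonicity (isotonic) constraint via MCMC. The sketch below is a toy single-site Metropolis sampler for nondecreasing normal means with a flat prior on the ordered set; the model, step size, and starting point are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def isotonic_metropolis(y, n_iter=5000, step=0.3, seed=0):
    """Toy Metropolis sampler for isotonic means: y[j] ~ Normal(theta[j], 1)
    with theta nondecreasing and a flat prior on the constrained set.
    Proposals that break the ordering are rejected outright, so every
    retained draw satisfies the isotonic constraint."""
    rng = np.random.default_rng(seed)
    theta = np.sort(y.astype(float))          # any nondecreasing start is feasible
    draws = []
    for _ in range(n_iter):
        for j in range(len(theta)):
            prop = theta[j] + step * rng.normal()
            lo = theta[j - 1] if j > 0 else -np.inf
            hi = theta[j + 1] if j < len(theta) - 1 else np.inf
            if not (lo <= prop <= hi):
                continue                       # monotonicity violated: reject
            # log acceptance ratio for the Normal(theta[j], 1) likelihood
            log_r = -0.5 * ((y[j] - prop) ** 2 - (y[j] - theta[j]) ** 2)
            if np.log(rng.uniform()) < log_r:
                theta[j] = prop
        draws.append(theta.copy())
    return np.array(draws)                     # shape (n_iter, len(y))
```

Averaging the retained draws gives marginal posterior summaries that respect the isotonic shape by construction.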
ABSTRACT: Labor market surveys usually partition individuals into three states: employed, unemployed, and out of the labor force. In particular, the Argentine "Encuesta Permanente de Hogares" (EPH) follows a rotating scheme so that each selected household is interviewed four times within two years. Each time, the current labor state of individuals is recorded, together with extensive demographic information. We model those labor paths as consecutive observations from independent Markov chains, where transition matrices are related to covariates through a multivariate logistic link. Because the EPH is severely affected by attrition, a significant fraction of the surveyed paths contain just a single point. Instead of discarding those observations, we opt to base estimation on the full data by (i) assuming the Markov chains are stationary and (ii) incorporating the chronological time of the first interview as an additional covariate for each individual. This novel treatment represents a convenient approximation, which we illustrate with data from Argentina in the period 1995-2002 via maximum likelihood estimation. Several interesting labor market indexes, which are functionally related to the transition matrices, are also presented in the last portion of the paper and illustrated with real data.
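The multivariate logistic link described above maps an individual's covariates to a row-stochastic transition matrix over the three labor states. A minimal sketch of one common parameterization (row-wise multinomial logit with the diagonal as reference category) is given below; the shapes and the choice of baseline are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def transition_matrix(x, beta):
    """Build a 3x3 labor-state transition matrix from a covariate vector
    via a row-wise multinomial (softmax) logistic link.

    x    : covariates of length p (e.g. demographics plus the time of the
           first interview, as in the abstract) -- illustrative contents
    beta : array of shape (3, 3, p); beta[i, j] holds the coefficients of
           the transition i -> j, with the diagonal fixed as baseline
    """
    logits = beta @ x                      # (3, 3, p) @ (p,) -> (3, 3)
    for i in range(3):
        logits[i, i] = 0.0                 # staying put is the reference category
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)  # each row sums to 1
```

Under this link, each row of the matrix is a valid probability distribution for any covariate value, which is what makes maximum likelihood over the observed paths well defined.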
ABSTRACT: Consider a process that jumps back and forth between two states, with random times spent in between. Suppose the durations of successive on and off states are i.i.d. and that the process started far in the past, so it has reached stationarity. We estimate the sojourn distributions by maximum likelihood when the data consist of several realizations observed over windows of fixed length. For discrete- and continuous-time Markov chains, we also examine whether there is any loss of efficiency when the stationarity structure is ignored in the estimation.
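To make the window-observation likelihood concrete, here is a simplified sketch assuming exponential on/off sojourns: complete sojourns inside the window contribute the density, while the sojourn cut off at the window's edge contributes the survival function. The handling of the stationary (length-biased) first sojourn is deliberately omitted, so this is an approximation to the setting in the abstract, not the paper's estimator.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(log_rates, complete_on, complete_off, cens_on, cens_off):
    """Negative log-likelihood for i.i.d. exponential on/off sojourns seen
    through a fixed window.  Complete sojourns contribute log-density
    terms; right-censored ones (cut by the window edge) contribute the
    log-survival term -rate * t."""
    lam_on, lam_off = np.exp(log_rates)   # work on the log scale to keep rates positive
    ll  = len(complete_on)  * np.log(lam_on)  - lam_on  * np.sum(complete_on)
    ll += len(complete_off) * np.log(lam_off) - lam_off * np.sum(complete_off)
    ll += -lam_on * np.sum(cens_on) - lam_off * np.sum(cens_off)
    return -ll

# fit on simulated data (true on-rate 2.0, off-rate 0.5)
rng = np.random.default_rng(1)
on  = rng.exponential(1 / 2.0, 200)
off = rng.exponential(1 / 0.5, 200)
cens_on, cens_off = np.array([0.4]), np.array([1.5])   # sojourns cut by the window
res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(on, off, cens_on, cens_off))
lam_on_hat, lam_off_hat = np.exp(res.x)
```

With several observation windows, one would simply pool the complete and censored sojourns across realizations before evaluating the same likelihood.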