Article

Empirical Bayes Test for the Parameter of the Truncated-Type Distribution Families


Abstract

An empirical Bayes test (EBT) is proposed for testing $H_0: \theta \leq \theta_0$ versus $H_1: \theta > \theta_0$ in truncated-type distribution families. The proposed EBT is shown to be asymptotically optimal, and its convergence rate is obtained.
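The abstract does not spell out the form of the family. In the empirical Bayes literature a truncated-type family is commonly taken to mean densities whose support endpoint is the parameter, so the following display is a standard convention rather than a quotation from the paper:

\[ f_\theta(x) = \frac{h(x)}{H(\theta)}\,\mathbf{1}(0 < x < \theta), \qquad H(\theta) = \int_0^\theta h(u)\,du, \]

with the uniform distribution on $[0, \theta)$ (the subject of the first reference below) as the special case $h \equiv 1$.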


References
Article
An empirical Bayes test for testing $\vartheta \leq \vartheta_0$ against $\vartheta > \vartheta_0$ for the uniform distribution on $[0, \vartheta)$ is discussed. The relation is shown with the estimation of a decreasing density on $[0, \infty)$, and a monotone empirical Bayes test is derived based on the least-concave majorant of the empirical distribution function. The asymptotic distribution of the Bayes risk is obtained and some Monte Carlo results are given.
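The following Python sketch makes the construction concrete for the uniform case. Under the linear loss and Bayes rule quoted in the last abstract below (equations (2) and (4)), a short calculation for $f_\vartheta(x) = \vartheta^{-1}\mathbf{1}(0 \le x < \vartheta)$ gives $\alpha(x) = 1 - F(x) + (x - \vartheta_0) f(x)$, where $F$ and $f$ are the marginal CDF and density of an observation; the test plugs in the ECDF for $F$ and the Grenander estimator (the slopes of the least-concave majorant of the ECDF) for the decreasing density $f$. The function names are mine, and the paper's exact (monotone) test statistic may differ in detail.

import numpy as np

def grenander(sample):
    """Grenander estimator of a decreasing density on [0, inf):
    the slope of the least-concave majorant (LCM) of the ECDF.
    Assumes distinct sample values (a.s. for continuous data)."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    px = np.concatenate([[0.0], xs])        # ECDF knots: (0, 0), (x_(i), i/n)
    py = np.arange(n + 1) / n
    hull = [0]                              # indices of the LCM vertices
    for i in range(1, n + 1):
        # pop the last vertex while it lies on or below the chord to point i
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            if (py[b] - py[a]) * (px[i] - px[a]) <= (py[i] - py[a]) * (px[b] - px[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    def f_hat(t):
        for a, b in zip(hull, hull[1:]):    # density = slope of the LCM segment
            if px[a] <= t <= px[b]:
                return (py[b] - py[a]) / (px[b] - px[a])
        return 0.0                          # outside the sample range
    return f_hat

def eb_test_uniform(x_new, past, theta0):
    """EB test sketch for U[0, theta): decide theta <= theta0 iff
    alpha(x) = 1 - F(x) + (x - theta0) f(x) <= 0, with the ECDF and
    the Grenander estimator plugged in for F and f."""
    past = np.asarray(past, dtype=float)
    F = np.mean(past <= x_new)
    f = grenander(past)(x_new)
    alpha = 1.0 - F + (x_new - theta0) * f
    return "theta <= theta0" if alpha <= 0 else "theta > theta0"

# Example: theta_i ~ U[0.5, 2] (hypothetical prior), X_i | theta_i ~ U[0, theta_i)
rng = np.random.default_rng(0)
thetas = rng.uniform(0.5, 2.0, size=500)
past = rng.uniform(0.0, thetas)
print(eb_test_uniform(1.2, past, theta0=1.0))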
Article
The purpose of this paper is to investigate the asymptotic optimality and the convergence rates of a sequence of empirical Bayes decision rules for two-action decision problems where the observations follow the model $Y = X\beta + \varepsilon$, with $\varepsilon \sim N(0, \sigma^2 I)$ and $\sigma^2$ unknown. Using $X$, $Y$ and the information contained in the observation vectors obtained from $n$ independent past samples of the problem, empirical Bayes testing procedures for $\theta = (\beta', \sigma^2)'$ are exhibited. The testing procedures are compared with the optimal Bayes testing procedure and are shown to be asymptotically optimal with rate near $O(n^{-1/2})$.
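For reference, "asymptotically optimal with rate near $O(n^{-1/2})$" carries the standard empirical Bayes meaning (a standard definition, not a quotation from this paper):

\[ 0 \le r(\delta_n, G) - r^\ast(G) = O(n^{-1/2}) \quad \text{as } n \to \infty, \]

where $r(\delta_n, G)$ is the Bayes risk of the empirical Bayes rule $\delta_n$ built from $n$ past samples and $r^\ast(G)$ is the minimal Bayes risk attainable when the prior $G$ is known; compare equation (5) in the last abstract below.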
Article
A class of empirical Bayes estimators (EBE's) is proposed for estimating the natural parameter of a one-parameter exponential family. In contrast to related EBE's proposed and investigated to date, the EBE's presented in this paper possess the nice property of being monotone by construction. Based on an arbitrary reasonable estimator of the underlying marginal density, a simple algorithm is given to construct a monotone EBE. Two representations of these EBE's are given: one serves as a tool in establishing asymptotic results, while the other, related to isotonic regression, proves useful in the actual computation.
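As an illustration of the idea, the sketch below monotonizes Robbins' classical EBE for the Poisson family, a textbook one-parameter exponential family, by isotonic regression via the pool-adjacent-violators algorithm. The function names and the Gamma/Poisson example are hypothetical; the paper's algorithm operates on a general marginal density estimate and may differ in detail.

import numpy as np

def pava(y, w):
    """Pool-adjacent-violators: weighted least-squares fit of a
    non-decreasing sequence to y (isotonic regression)."""
    blocks = []  # each block: [mean, total weight, count of merged points]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        while len(blocks) >= 2 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            mean = (m1 * w1 + m2 * w2) / wt if wt > 0 else 0.5 * (m1 + m2)
            blocks.append([mean, wt, c1 + c2])
    return np.concatenate([[m] * c for m, _, c in blocks])

def monotone_eb_poisson(data):
    """Robbins' EBE (x+1) f(x+1)/f(x) for Poisson(theta) data, made
    monotone in x by frequency-weighted isotonic regression."""
    data = np.asarray(data)
    m = int(data.max())
    freq = np.bincount(data, minlength=m + 2) / len(data)
    x = np.arange(m + 1)
    raw = np.where(freq[x] > 0,
                   (x + 1) * freq[x + 1] / np.maximum(freq[x], 1e-12),
                   0.0)
    return x, pava(raw, freq[x])

# Example: theta_i ~ Gamma(3, 1) (hypothetical prior), X_i | theta_i ~ Poisson(theta_i)
rng = np.random.default_rng(1)
data = rng.poisson(rng.gamma(3.0, 1.0, size=2000))
xs, ebe = monotone_eb_poisson(data)  # ebe is non-decreasing in xs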
Article
This paper deals with the empirical Bayes estimation of the truncation position of the truncated family under Linex loss. A nonparametric empirical Bayes estimator is proposed, and its asymptotic optimality and convergence rate are investigated. Under certain mild conditions, without any differentiability assumption on either the prior or the marginal distribution, the proposed empirical Bayes estimator is shown to be asymptotically optimal, with its convergence rate given. Simulation results on the performance of the proposed empirical Bayes estimator are also presented.
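For reference, the Linex (linear-exponential) loss named above is, in its standard form,

\[ L(\hat\theta, \theta) = e^{a(\hat\theta - \theta)} - a(\hat\theta - \theta) - 1, \qquad a \neq 0, \]

which is asymmetric: the sign of $a$ determines whether over- or under-estimation is penalized exponentially rather than roughly linearly. Under this loss the Bayes estimator is $\hat\theta_G(x) = -\frac{1}{a}\log E\!\left[e^{-a\theta} \mid x\right]$, a standard fact; the paper's exact parametrization may differ.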
Article
For a general discussion of empirical Bayes problems and motivation of the present paper, see Section 1 of the previous paper [1]. In that paper we studied the convergence to Bayes optimality and its rate properties for empirical Bayes two-action problems in certain discrete exponential families. This paper continues that investigation for the continuous case. Under appropriate conditions, Theorems 3 and 4 yield convergence rates to Bayes risk of $O(n^{-\beta})$ for $0 < \beta < 1$, for the $(n+1)$st stage risk of the continuous-case empirical Bayes procedures of Section 2. These theorems provide, for the continuous case, convergence rate results for the empirical Bayes procedures of the general type considered by Robbins [5] and Samuel [6] for two different parameterizations of a model. The rate results given here in the continuous case involve upper bounds and are weaker than the discrete-case results in [1], wherein exact rates are reported. Specifically, in Section 2 we present the two cases to be considered and define the appropriate empirical Bayes procedures for each. Section 3 gives some technical lemmas, and Section 4 establishes the asymptotic optimality (the asymptotic Bayes property) of the procedures introduced. The main results on rates, Theorems 3 and 4, are given in Section 5. Section 6 examines in detail two specific examples, the negative exponential and the normal distributions, and gives corollaries to Theorems 3 and 4 which state convergence rates depending on moment properties of the unknown prior distribution of the parameters. Section 7 gives an example with $\beta$ arbitrarily close to 1 in the rate $O(n^{-\beta})$.

The model we consider is the following. Let $f_\lambda(x)$ be a family of Lebesgue densities indexed by a parameter $\lambda$ in an interval of the real line. As in [1], we wish to test the hypothesis $H_1: \lambda \leq c$ vs. $H_2: \lambda > c$ with the loss function

\[ L_1(\lambda) = \begin{cases} 0 & \text{if } \lambda \leq c \\ b(\lambda - c) & \text{if } \lambda > c \end{cases} \qquad L_2(\lambda) = \begin{cases} b(c - \lambda) & \text{if } \lambda \leq c \\ 0 & \text{if } \lambda > c \end{cases} \]

where $L_i(\lambda)$ indicates the loss when action $i$ (deciding in favor of $H_i$) is taken, $i = 1, 2$, and $b$ is a positive constant. Let $\delta(x) = \Pr\{\text{accepting } H_1 \mid X = x\}$ be a randomized decision rule for the above two-action problem. If $G = G(\lambda)$ is a prior distribution on $\lambda$, then the risk of the (randomized) decision procedure $\delta$ under prior distribution $G$ is given, as in [1], by

\[ r(\delta, G) = \iint \{L_1(\lambda) f_\lambda(x)\delta(x) + L_2(\lambda) f_\lambda(x)(1 - \delta(x))\}\, dx\, dG(\lambda) = b \int \alpha(x)\delta(x)\, dx + C_G \tag{1} \]

where $C_G = \int L_2(\lambda)\, dG(\lambda)$,

\[ \alpha(x) = \int \lambda f_\lambda(x)\, dG(\lambda) - c f(x), \tag{2} \]

and

\[ f(x) = \int f_\lambda(x)\, dG(\lambda). \tag{3} \]

From (1) it is clear that a Bayes rule (the minimizer of (1) given $G$) is

\[ \delta_G(x) = \begin{cases} 1 & \text{if } \alpha(x) \leq 0 \\ 0 & \text{if } \alpha(x) > 0. \end{cases} \tag{4} \]

Hence, the minimal attainable risk knowing $G$ (the Bayes risk) is

\[ r^\ast(G) = \inf_\delta r(\delta, G) = r(\delta_G, G). \tag{5} \]
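To make the procedure concrete, here is a minimal Python sketch for the negative exponential example mentioned above. For $f_\lambda(x) = \lambda e^{-\lambda x}$ one has $\int \lambda f_\lambda(x)\, dG(\lambda) = -f'(x)$, so (2) becomes $\alpha(x) = -f'(x) - c f(x)$ and rule (4) accepts $H_1$ iff $-f'(x) \le c f(x)$. The sketch plugs in a Gaussian kernel estimate of $f$ and its derivative; the paper's estimators are of this plug-in type but differ in detail, and the function name and bandwidth choice here are assumptions.

import numpy as np

def eb_two_action_negexp(x, past, c, h=None):
    """Empirical version of Bayes rule (4) for f_lambda(x) = lambda exp(-lambda x):
    accept H1 (lambda <= c) iff alpha_hat(x) = -f'_hat(x) - c f_hat(x) <= 0,
    where f_hat is a Gaussian kernel density estimate built from past data."""
    past = np.asarray(past, dtype=float)
    n = len(past)
    h = h if h is not None else past.std() * n ** (-1 / 5)  # rule-of-thumb bandwidth
    u = (x - past) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel values
    f_hat = k.mean() / h                               # estimate of f(x)
    fprime_hat = (-u * k).mean() / h ** 2              # estimate of f'(x)
    alpha_hat = -fprime_hat - c * f_hat                # empirical alpha(x), cf. (2)
    return 1 if alpha_hat <= 0 else 2                  # action i, cf. (4)

# Example: lambda_i ~ Gamma(2, 1) (hypothetical prior), X_i | lambda_i ~ Exp(lambda_i)
rng = np.random.default_rng(2)
lam = rng.gamma(2.0, 1.0, size=1000)
past = rng.exponential(1.0 / lam)
print(eb_two_action_negexp(0.4, past, c=1.5))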