Fig. 4: Transformation function for θ0 = 1, c = 0.4 and r = r1 on the horizontal axis and its nonparametric estimator on the vertical axis. The identity is displayed in red.

Table 3: Rejection probabilities at θ0 = 1 and θ0 = 2 for r = r1

Param.    Alternative    Original framework       Modified weighting
                         α = 0.05    α = 0.10     α = 0.05    α = 0.10
θ0 = 1    Null hyp.      0.03125     0.07750      0.02875     0.07875
          c = 0.2        0.010       0.015        0.040       0.100
          c = 0.4        0.000       0.015        0.205       0.320
          c = 0.6        0.085       0.150        0.590       0.715
          c = 0.8        0.505       0.645        0.950       0.980
          c = 1          0.975       0.985        1.000       1.000
θ0 = 2    Null hyp.      0.01625     0.05625      0.05500     0.10375
          c = 0.2        0.000       0.020        0.225       0.350
          c = 0.4        0.120       0.200        0.575       0.710
          c = 0.6        0.415       0.545        0.910       0.965
          c = 0.8        0.785       0.890        0.990       1.000
          c = 1          0.985       0.990        0.995       1.000
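The rejection probabilities reported here are empirical frequencies over repeated simulation runs. A minimal sketch of how such Monte Carlo estimates are computed in general (the data-generating process, test statistic, and replication count below are illustrative placeholders, not the paper's actual goodness-of-fit setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_probability(test_stat, critical_value, n_rep=400, n_obs=100):
    """Monte Carlo estimate of a test's rejection probability:
    simulate data, compute the statistic, count exceedances."""
    rejections = 0
    for _ in range(n_rep):
        sample = rng.normal(size=n_obs)  # placeholder data-generating process
        if test_stat(sample) > critical_value:
            rejections += 1
    return rejections / n_rep

# Toy example: a z-test statistic under its null (mean zero), so the
# estimated rejection probability should be near the nominal level 0.05.
z_stat = lambda x: abs(x.mean()) * np.sqrt(len(x))
p_hat = rejection_probability(z_stat, critical_value=1.96)
```

Under an alternative, the same loop with a shifted data-generating process yields the empirical power; a well-calibrated test has estimates near α under the null and estimates approaching 1 as the alternative moves away from it.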


Source publication
Article
In transformation regression models, the response is transformed before fitting a regression model to the covariates and the transformed response. We assume such a model where the errors are independent of the covariates and the regression function is modeled nonparametrically. We suggest a test for goodness-of-fit of a parametric transformation class ba...

Context in source publication

Context 1
... some alternatives the rejection probabilities are even smaller than the level. This behaviour indicates that, from the presented test's perspective, these models appear to fulfil the null hypothesis more convincingly than the null hypothesis models themselves. The reason for this is shown in Fig. 4 for the setting θ0 = 1, c = 0.4 and r = r1, which displays the relationship between the nonparametric estimator of the transformation function and the true transformation function. While the diagonal line represents the identity, the nonparametric estimator flattens the edges of the transformation function. In contrast to ...
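The flattening at the edges described above is a generic boundary effect of kernel-type smoothers: near the ends of the support, the local average only sees data on one side and is pulled toward the interior. A hedged illustration using a plain Nadaraya-Watson estimator (the monotone curve, noise level, and bandwidth are illustrative choices, not the paper's estimator):

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, bandwidth):
    """Gaussian-kernel Nadaraya-Watson regression estimate on a grid."""
    # w[i, j] = K((x_grid[i] - x[j]) / h), a (grid, sample) weight matrix
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-2.0, 2.0, size=300))
true_fn = lambda t: np.sign(t) * np.abs(t) ** 1.5  # illustrative monotone curve
y = true_fn(x) + 0.1 * rng.normal(size=x.size)

grid = np.linspace(-2.0, 2.0, 9)
est = nadaraya_watson(grid, x, y, bandwidth=0.3)

# Near the left boundary the kernel average only sees points to the right,
# so the estimate is pulled upward toward interior values ("flattened edge");
# in the middle of the support the bias is much smaller.
edge_bias = abs(est[0] - true_fn(grid[0]))
mid_bias = abs(est[4] - true_fn(grid[4]))
```

Plotting `est` against `true_fn(grid)` reproduces the qualitative picture of Fig. 4: close to the diagonal in the interior, bent toward it at the boundaries.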

Similar publications

Article
This study was conducted by considering that the data pattern differs from each independent variable to the dependent variable. If only one estimator is used to estimate the nonparametric regression curve, the resulting estimator does not match the data pattern, is less precise, and tends to produce large errors. Therefore, this study aimed to model t...

Citations

Article
Model averaging is an effective way to enhance prediction accuracy. However, most previous works focus on low-dimensional settings with completely observed responses. To attain an accurate prediction for the risk effect of survival data with high-dimensional predictors, we propose a novel method: rank-based greedy (RG) model averaging. Specifically, adopting the transformation model with splitting predictors as working models, we doubly use the smooth concordance index function to derive the candidate predictions and optimal model weights. The final prediction is achieved by weighted averaging all the candidates. Our approach is flexible, computationally efficient, and robust against model misspecification, as it neither requires the correctness of a joint model nor involves the estimation of the transformation function. We further adopt the greedy algorithm for high dimensions. Theoretically, we derive an asymptotic error bound for the optimal weights under some mild conditions. In addition, the summation of weights assigned to the correct candidate submodels is proven to approach one in probability when there are correct models included among the candidate submodels. Extensive numerical studies are carried out using both simulated and real datasets to show the proposed approach’s robust performance compared to the existing regularization approaches. Supplementary materials for this article are available online.
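The smooth concordance index at the core of this approach replaces the pairwise indicator of concordance with a sigmoid, making it differentiable in the predictions. A minimal sketch (the function name, smoothing parameter `sigma`, and the simple comparability rule below are simplifying assumptions; the actual estimator treats censoring more carefully):

```python
import numpy as np

def smooth_cindex(pred, time, event, sigma=0.1):
    """Sigmoid-smoothed concordance index: over comparable pairs
    (subject i observed to fail before subject j), reward pred[i] > pred[j]
    through a smooth sigmoid rather than a hard indicator."""
    n = len(pred)
    num = den = 0.0
    for i in range(n):
        if not event[i]:          # only observed failures anchor a pair
            continue
        for j in range(n):
            if time[i] < time[j]:  # comparable pair: i failed before j
                num += 1.0 / (1.0 + np.exp(-(pred[i] - pred[j]) / sigma))
                den += 1.0
    return num / den

# Toy check: higher predicted risk for earlier failures -> c-index near 1.
time = np.array([1.0, 2.0, 3.0, 4.0])
event = np.array([1, 1, 1, 1])
good = smooth_cindex(np.array([4.0, 3.0, 2.0, 1.0]), time, event)
bad = smooth_cindex(np.array([1.0, 2.0, 3.0, 4.0]), time, event)
```

Because the smoothed index is differentiable, it can serve both to score candidate predictions and as the objective when optimizing the model-averaging weights, which is the sense in which it is used "doubly" above.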