Ying Kuen Cheung

Columbia University, New York City, New York, United States

Publications (31) · 68.14 Total Impact Points

  • ABSTRACT: Objective: The objective of this cross-sectional analysis was to investigate the relation between two major high-density lipoprotein cholesterol (HDL-C) subfractions (HDL2-C and HDL3-C) and carotid plaque in a population-based cohort. Methods: We evaluated 988 stroke-free participants (mean age 66 ± 8 years; 40% men; 66% Hispanic and 34% non-Hispanic) with available data on HDL subfractions, measured by a precipitation method, and with carotid plaque area and thickness assessed by high-resolution 2D ultrasound. The associations between HDL-C subfractions and plaque measurements were analyzed by quantile regression. Results: Plaque was present in 56% of the study population. Among those with plaque, the mean ± SD plaque area was 19.40 ± 20.46 mm² and thickness 2.30 ± 4.45 mm. The mean ± SD total HDL-C was 46 ± 14 mg/dl, HDL2-C 14 ± 8 mg/dl, and HDL3-C 32 ± 8 mg/dl. After adjusting for demographics and vascular risk factors, there was an inverse association between HDL3-C and plaque area (per mg/dl: beta = −0.26 at the 75th percentile, p = 0.001, and beta = −0.32 at the 90th percentile, p = 0.02). A positive association was observed between HDL2-C and plaque thickness (per mg/dl: beta = 0.02 at the 90th percentile, p = 0.003). HDL-C was associated with plaque area (per mg/dl: beta = −0.18 at the 90th percentile, p = 0.01), but only among Hispanics. Conclusion: In our cohort we observed an inverse association between HDL3-C and plaque area and a positive association between HDL2-C and plaque thickness. HDL-C subfractions may make different contributions to the risk of vascular disease. More studies are needed to fully elucidate the anti-atherosclerotic functions of HDL-C in order to improve HDL-based treatments for the prevention of vascular disease and stroke.
    Atherosclerosis 11/2014; 237(1):163–168. · 3.71 Impact Factor
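
    A minimal sketch, assuming a hypothetical data frame and variable names, of the kind of quantile-regression fit described above (this uses the 'quantreg' package and is not the study's analysis code):

    ```r
    ## Quantile regression of plaque area on HDL3-C at the 75th and 90th
    ## percentiles, adjusting for illustrative covariates. The data frame
    ## 'noma' and its variable names are assumptions for illustration.
    library(quantreg)

    fit75 <- rq(plaque_area ~ hdl3c + age + sex + ldl + hypertension,
                tau = 0.75, data = noma)
    fit90 <- rq(plaque_area ~ hdl3c + age + sex + ldl + hypertension,
                tau = 0.90, data = noma)

    summary(fit75, se = "boot")   # bootstrap standard errors for the hdl3c beta
    summary(fit90, se = "boot")
    ```
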
  • ABSTRACT: An implementation study is an important tool for deploying state-of-the-art treatments from clinical efficacy studies into a treatment program, with the dual goals of learning about the effectiveness of the treatments and improving the quality of care for patients enrolled in the program. In this article, we deal with the design of a treatment program of dynamic treatment regimens (DTRs) for patients with depression after acute coronary syndrome. We introduce a novel adaptive randomization scheme for a sequential multiple assignment randomized trial of DTRs. Our approach adapts the randomization probabilities to favor treatment sequences having comparatively superior Q-functions, as used in Q-learning. The proposed approach addresses three main concerns of an implementation study: it allows incorporation of historical data or opinions, it includes randomization for learning purposes, and it aims to improve care via adaptation throughout the program. We demonstrate how to apply our method to design a depression treatment program using data from a previous study. By simulation, we illustrate that the inputs from historical data are important for the program performance as measured by the expected outcomes of the enrollees, but also show that the adaptive randomization scheme is able to compensate for poorly specified historical inputs by improving patient outcomes within a reasonable horizon. The simulation results also confirm that the proposed design allows efficient learning of the treatments by alleviating the curse of dimensionality.
    Biometrics 10/2014; · 1.41 Impact Factor
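
    An illustrative sketch of one way estimated Q-functions could be mapped to adaptive randomization probabilities; the exponential weighting rule and tuning constant below are assumptions, not the scheme proposed in the article:

    ```r
    ## Turning estimated Q-values into randomization probabilities for the
    ## next enrollee. 'qhat' and 'lambda' are hypothetical.
    adapt_rand_prob <- function(qhat, lambda = 1) {
      ## qhat: vector of estimated Q-values (expected outcomes), one per
      ##       candidate treatment sequence; larger = better.
      ## lambda: > 0, controls how strongly better sequences are favored.
      w <- exp(lambda * (qhat - max(qhat)))  # subtract max for numerical stability
      w / sum(w)
    }

    ## Example: three candidate sequences with estimated Q-values
    adapt_rand_prob(c(0.40, 0.55, 0.35), lambda = 5)
    ```
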
  • Xiaoyu Jia, Shing M. Lee, Ying Kuen Cheung
    ABSTRACT: This paper deals with the design of the likelihood continual reassessment method, which is an increasingly widely used model-based method for dose-finding studies. It is common to implement the method in a two-stage approach, whereby the model-based stage is activated after an initial sequence of patients has been treated. While this two-stage approach is practically appealing, it lacks a theoretical framework, and it is often unclear how the design components should be specified. This paper develops a general framework based on the coherence principle, from which we derive a design calibration process. A real clinical-trial example is used to demonstrate that the proposed process can be implemented in a timely and reproducible manner, while offering competitive operating characteristics. We explore the operating characteristics of different models within this framework and show the performance to be insensitive to the choice of dose-toxicity model.
    Biometrika 08/2014; 3(3). · 1.65 Impact Factor
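
    A minimal sketch of the two-stage logic described above, with an assumed skeleton and target; the likelihood stage uses the maximum-likelihood CRM in the dfcrm package rather than the trial's actual calibrated design:

    ```r
    ## Run a pre-specified escalating sequence until the first dose-limiting
    ## toxicity, then hand over to the likelihood CRM (method = "mle").
    library(dfcrm)

    skeleton <- c(0.05, 0.12, 0.25, 0.40, 0.55)   # initial guesses (assumed)
    target   <- 0.25
    x0       <- rep(1:5, each = 2)                # initial design: 2 patients/dose

    next_dose <- function(level, tox) {
      if (sum(tox) == 0) {
        ## Stage 1: no toxicity yet, keep following the initial sequence
        x0[length(level) + 1]
      } else {
        ## Stage 2: likelihood CRM using all accrued data
        crm(prior = skeleton, target = target, tox = tox, level = level,
            method = "mle")$mtd
      }
    }

    ## Example: 4 patients treated at doses 1,1,2,2 with one toxicity at dose 2
    next_dose(level = c(1, 1, 2, 2), tox = c(0, 0, 0, 1))
    ```
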
  • ABSTRACT: Background: Subclinical cerebrovascular disease has been associated with multiple adverse events related to aging, including stroke and dementia. The modifiable risk factors for subclinical cerebrovascular disease beyond hypertension have not been well characterized. Our objective was to examine the association between baseline lipid profile components, and changes in them over time, and subclinical cerebrovascular disease on magnetic resonance imaging (MRI). Methods: Fasting plasma lipids were collected from participants in the Northern Manhattan Study, a prospective cohort study examining risk factors for cardiovascular disease in a multiethnic elderly urban-dwelling population. A subsample of the cohort underwent brain MRI between 2003 and 2008 (a median of 6.2 years, range 0-14, after enrollment), when repeat fasting lipids were obtained. We used lipid profile components at the time of initial enrollment (n = 1,256 with lipids available) as categorical variables, as well as change in clinical categories over the two measures (n = 1,029). The main outcome measures were (1) total white matter hyperintensity volume (WMHV), analyzed using linear regression, and (2) silent brain infarcts (SBI), analyzed using logistic regression. Results: None of the plasma lipid profile components at the time of enrollment were associated with WMHV. The association between baseline lipids and WMHV was, however, modified by apolipoprotein E (apoE) status (χ² with 2 degrees of freedom, p = 0.03), such that among apoE4 carriers those with total cholesterol (TC) ≥200 mg/dl had a trend towards smaller WMHV than those with TC <200 mg/dl (difference in log WMHV −0.19, p = 0.07), while there was no difference among apoE3 carriers. When examining the association between WMHV and change in lipid profile components, we noted associations with change in high-density lipoprotein cholesterol (HDL-C) and TC. A transition from low-risk HDL-C (>50 mg/dl for women, >40 mg/dl for men) at baseline to high-risk HDL-C at the time of MRI (vs. starting and remaining low risk) was associated with greater WMHV (difference in log WMHV 0.34, p = 0.03). We noted a similar association with transitioning to TC ≥200 mg/dl at the time of MRI (difference in log WMHV 0.25, p = 0.006). There were no associations of baseline or change in lipid profile components with SBI. Conclusions: The association of plasma lipid profile components with greater WMHV may depend on apoE genotype and on worsening HDL-C and TC risk levels over time. © 2014 S. Karger AG, Basel.
    Cerebrovascular diseases (Basel, Switzerland). 07/2014; 37(6):423-430.
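
    A sketch, with a hypothetical data frame and variable names, of the effect-modification analysis described above; apoE status is simplified here to a binary apoE4 carrier indicator, whereas the paper reports a chi-square test with 2 degrees of freedom:

    ```r
    ## Testing whether apoE4 status modifies the association between baseline
    ## total cholesterol category and log white matter hyperintensity volume.
    ## 'nomas' and all variable names are hypothetical.
    base  <- lm(log_wmhv ~ tc_high + apoe4 + age + sex + hypertension, data = nomas)
    inter <- lm(log_wmhv ~ tc_high * apoe4 + age + sex + hypertension, data = nomas)
    anova(base, inter)      # test for the interaction term

    ## Silent brain infarcts as a binary outcome would use logistic regression:
    sbi_fit <- glm(sbi ~ tc_high + age + sex + hypertension,
                   family = binomial, data = nomas)
    summary(sbi_fit)
    ```
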
  • Ying Kuen Cheung
    ABSTRACT: While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
    Biometrics 02/2014; · 1.41 Impact Factor
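
    An illustrative single-toxicity version of the nonparametric benchmark (the article extends the idea to multiple toxicities and efficacy); the true dose-toxicity curve, target, and sample size are assumed values:

    ```r
    ## Each simulated patient gets a latent tolerance, so toxicity at every
    ## dose is known ("complete information"); the benchmark estimates each
    ## dose's toxicity rate from the full profiles and picks the dose closest
    ## to the target. Accuracy over replicates upper-bounds an actual design.
    benchmark_accuracy <- function(ptrue, target, n, nsim = 2000) {
      correct_mtd <- which.min(abs(ptrue - target))
      picks <- replicate(nsim, {
        u <- runif(n)                            # latent tolerance per patient
        profile <- outer(u, ptrue, "<=")         # n x k complete toxicity profiles
        phat <- colMeans(profile)                # complete-information estimates
        which.min(abs(phat - target))            # benchmark selection
      })
      mean(picks == correct_mtd)                 # probability of correct selection
    }

    benchmark_accuracy(ptrue = c(0.05, 0.12, 0.25, 0.40, 0.55),
                       target = 0.25, n = 24)
    ```
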
  • ABSTRACT: A meta-analysis was performed to examine differences in IQ profiles between individuals with Asperger's disorder (AspD) and high-functioning autism (HFA). Fifty-two studies were included. The results showed that (a) individuals with AspD had significantly higher full-scale IQ, verbal IQ (VIQ), and performance IQ (PIQ) than did individuals with HFA; (b) individuals with AspD had significantly higher VIQ than PIQ; and (c) VIQ was similar to PIQ in individuals with HFA. These findings seem to suggest that AspD and HFA are two different subtypes of autism. The implications of the present findings for the DSM-5 Autism Spectrum Disorder category are discussed.
    Journal of Autism and Developmental Disorders 12/2013; · 3.06 Impact Factor
  • Ying Kuen Cheung
    ABSTRACT: In the planning of a dose finding study, a primary design objective is to maintain high accuracy in terms of the probability of selecting the maximum tolerated dose. While numerous dose finding methods have been proposed in the literature, concrete guidance on sample size determination is lacking. With a motivation to provide quick and easy calculations during trial planning, we present closed form formulae for sample size determination associated with the use of the Bayesian continual reassessment method (CRM). We examine the sampling distribution of a nonparametric optimal design and exploit it as a proxy to empirically derive an accuracy index of the CRM using linear regression. We apply the formulae to determine the sample size of a phase I trial of PTEN-long in pancreatic cancer patients and demonstrate that the formulae give results very similar to simulation. The formulae are implemented by an R function 'getn' in the package 'dfcrm'. The results are developed for the Bayesian CRM and should be validated by simulation when used for other dose finding methods. The analytical formulae we propose give quick and accurate approximation of the required sample size for the CRM. The approach used to derive the formulae can be applied to obtain sample size formulae for other dose finding methods.
    Clinical Trials 08/2013; · 2.20 Impact Factor
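
    The closed-form calculation itself is provided by the 'getn' function in dfcrm, as noted above; the sketch below is a simple simulation cross-check of a candidate sample size for an unrestricted Bayesian CRM, with an assumed skeleton and true dose-toxicity curve:

    ```r
    ## Estimate the probability of correct selection at a candidate n by
    ## simulating an unrestricted Bayesian CRM patient by patient.
    library(dfcrm)

    simulate_crm_pcs <- function(ptrue, skeleton, target, n, nsim = 200) {
      correct <- which.min(abs(ptrue - target))
      hits <- replicate(nsim, {
        level <- 1; tox <- integer(0); given <- integer(0)
        for (i in seq_len(n)) {
          given <- c(given, level)
          tox   <- c(tox, rbinom(1, 1, ptrue[level]))
          level <- crm(prior = skeleton, target = target,
                       tox = tox, level = given)$mtd
        }
        level == correct
      })
      mean(hits)                                 # simulated accuracy at this n
    }

    simulate_crm_pcs(ptrue    = c(0.10, 0.18, 0.25, 0.36, 0.50),
                     skeleton = c(0.05, 0.12, 0.25, 0.40, 0.55),  # assumed
                     target   = 0.25, n = 24)
    ```
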
  • Chih-Chi Hu, Ying Kuen Cheung
    ABSTRACT: Dose-finding in clinical studies is typically formulated as a quantile estimation problem, for which a correct specification of the variance function of the outcomes is important. This is especially true for sequential studies, where the variance assumption is directly involved in the generation of the design points, and hence sensitivity analysis may not be possible after the data are collected. In this light, there is a strong reason for avoiding parametric assumptions on the variance function, although this may incur efficiency loss. In this article, we investigate how much information one may retrieve by making additional parametric assumptions on the variance in the context of a sequential least squares recursion. By asymptotic comparison, we demonstrate that assuming homoscedasticity achieves only a modest efficiency gain when compared to nonparametric variance estimation: when homoscedasticity in truth holds, the latter is at worst 88% as efficient as the former in the limiting case, and often achieves well over 90% efficiency for most practical situations. Extensive simulation studies concur with this observation under a wide range of scenarios.
    Journal of Statistical Planning and Inference 03/2013; 143(3):593-602. · 0.71 Impact Factor
  • Ying Kuen Cheung
    Clinical Trials 01/2013; 10(1):86-7. · 2.20 Impact Factor
  • Hsu-Min Chiang, Ying Kuen Cheung, Huacheng Li, Luke Y Tsai
    ABSTRACT: This study aimed to identify the factors associated with participation in employment for high school leavers with autism. A secondary data analysis of the National Longitudinal Transition Study 2 (NLTS2) data was performed. Potential factors were assessed using a weighted multivariate logistic regression. This study found that annual household income, parental education, gender, social skills, whether the child had intellectual disability, whether the child graduated from high school, whether the child received career counseling during high school, and whether the child's school contacted postsecondary vocational training programs or potential employers were the significant factors associated with participation in employment. These findings may have implications for professionals who provide transition services and post-secondary programs for individuals with autism.
    Journal of Autism and Developmental Disorders 12/2012; · 3.06 Impact Factor
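
    A sketch of a survey-weighted logistic regression of the kind described above, using the 'survey' package; the data frame, variable names, and weight column are hypothetical stand-ins for the NLTS2 files, and the design is simplified to weights only (no strata or clusters):

    ```r
    ## Survey-weighted logistic regression for employment outcomes.
    library(survey)

    des <- svydesign(ids = ~1, weights = ~wt_na, data = nlts2)
    fit <- svyglm(employed ~ income + parent_educ + gender + social_skills +
                    intellectual_disability + hs_graduate + career_counseling +
                    school_contacted_programs,
                  design = des, family = quasibinomial())
    summary(fit)
    ```
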
  • Yi-Hsuan Tu, Bin Cheng, Ying Kuen Cheung
    ABSTRACT: We are concerned with the problem of estimating the treatment effects at the effective doses in a dose-finding study. Under monotone dose-response, the effective doses can be identified through the estimation of the minimum effective dose, for which there is an extensive set of statistical tools. In particular, when a fixed-sequence multiple testing procedure is used to estimate the minimum effective dose, Hsu and Berger (1999) show that the confidence lower bounds for the treatment effects can be constructed without the need to adjust for multiplicity. Their method, called the dose-response method, is simple to use, but does not account for the magnitude of the observed treatment effects. As a result, the dose-response method will estimate the treatment effects at effective doses with confidence bounds invariably identical to the hypothesized value. In this paper, we propose an error-splitting method as a variant of the dose-response method to construct confidence bounds at the identified effective doses after a fixed-sequence multiple testing procedure. Our proposed method has the virtue of simplicity as in the dose-response method, preserves the nominal coverage probability, and provides sharper bounds than the dose-response method in most cases.
    Journal of Statistical Planning and Inference 11/2012; 142(11):2993-2998. · 0.71 Impact Factor
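
    A sketch of the fixed-sequence dose-response bounds that the error-splitting proposal sharpens (the error-splitting method itself is not reproduced); the per-dose estimates, standard errors, and margin are assumed values:

    ```r
    ## Doses are tested from highest to lowest; every dose that passes gets
    ## the hypothesized margin delta0 as its bound, and testing stops at the
    ## first failure, which keeps its ordinary one-sided lower bound.
    fixed_sequence_bounds <- function(est, se, df, delta0 = 0, alpha = 0.05) {
      k <- length(est)
      lower <- est - qt(1 - alpha, df) * se     # unadjusted one-sided lower bounds
      out <- rep(NA_real_, k)
      for (j in k:1) {                          # highest dose first
        if (lower[j] > delta0) {
          out[j] <- delta0                      # dose declared effective
        } else {
          out[j] <- lower[j]                    # first failure: keep its own bound
          break
        }
      }
      out
    }

    ## Example: 4 doses, effects estimated against control; the bounds for the
    ## effective doses collapse to delta0, which motivates the error-splitting idea.
    fixed_sequence_bounds(est = c(0.5, 1.8, 2.6, 3.1), se = rep(0.9, 4), df = 60)
    ```
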
  • Ying Kuen Cheung
    ABSTRACT: This chapter surveys the recent development of dose-finding designs for phase I trials of chemotherapy, reviews the statistical theory of dose-finding, and outlines the basic concepts of methods for dealing with specific clinical settings such as late-onset toxicities, gradation of toxicity severity, and bivariate outcomes. We will compare some of the promising approaches via simulations in the context of a combination chemotherapy trial in patients with lymphoma. Our goal is to highlight the relative advantages of each method, and provide guidance on the scenarios where some methods are more appropriate than the others. We will explore the robustness of these methods under violations of their underlying assumptions, with a particular focus on the model-based continual reassessment method. Finally, we will discuss the challenge of implementing these novel designs in practice, and introduce an R package for the planning and implementation of the continual reassessment method in a phase I trial.
    10/2011: pages 1-27;
  • ABSTRACT: This exploratory study was designed to identify the factors predictive of participation in postsecondary education for high school leavers with autism. A secondary data analysis of the National Longitudinal Transition Study 2 (NLTS2) data was performed. Potential predictors of participation in postsecondary education were assessed using a backward logistic regression analysis. This study found that the high school's primary post-high school goal for the student, parental expectations, high school type, annual household income, and academic performance were significant predictors of participation in postsecondary education. The findings of this study may provide critical information for parents of children with autism as well as educators and professionals who work with students with autism.
    Journal of Autism and Developmental Disorders 05/2011; 42(5):685-96. · 3.06 Impact Factor
  • Shing M Lee, Ying Kuen Cheung
    ABSTRACT: The continual reassessment method (CRM) is an adaptive model-based design used to estimate the maximum tolerated dose in phase I clinical trials. Asymptotically, the method has been shown to select the correct dose given that certain conditions are satisfied. When sample size is small, specifying a reasonable model is important. While an algorithm has been proposed for the calibration of the initial guesses of the probabilities of toxicity, the calibration of the prior distribution of the parameter for the Bayesian CRM has not been addressed. In this paper, we introduce the concept of least informative prior variance for a normal prior distribution. We also propose two systematic approaches to jointly calibrate the prior variance and the initial guesses of the probability of toxicity at each dose. The proposed calibration approaches are compared with existing approaches in the context of two examples via simulations. The new approaches and the previously proposed methods yield very similar results since the latter used appropriate vague priors. However, the new approaches yield a smaller interval of toxicity probabilities in which a neighboring dose may be selected.
    Statistics in Medicine 03/2011; 30(17):2081-9. · 2.04 Impact Factor
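
    A base-R sketch of where the prior variance enters the Bayesian CRM: posterior mean toxicity probabilities under the one-parameter empiric model with a normal prior on the parameter, computed by numerical integration (the calibration algorithms proposed in the paper are not reproduced; the skeleton and data are assumed):

    ```r
    ## Empiric model: p_i(beta) = skeleton_i ^ exp(beta), beta ~ N(0, sigma2).
    posterior_ptox <- function(skeleton, tox, level, sigma2) {
      lik <- function(beta) {
        p <- skeleton[level] ^ exp(beta)
        prod(p ^ tox * (1 - p) ^ (1 - tox))
      }
      post_kernel <- function(beta)
        sapply(beta, lik) * dnorm(beta, 0, sqrt(sigma2))
      const <- integrate(post_kernel, -10, 10)$value
      sapply(seq_along(skeleton), function(i) {
        integrate(function(beta) skeleton[i] ^ exp(beta) * post_kernel(beta),
                  -10, 10)$value / const
      })
    }

    skeleton <- c(0.05, 0.12, 0.25, 0.40, 0.55)    # assumed initial guesses
    ## After 6 patients: a larger prior variance lets the data move the
    ## posterior toxicity estimates more than a tighter prior does.
    posterior_ptox(skeleton, tox = c(0, 0, 0, 1, 0, 1),
                   level = c(1, 2, 3, 3, 3, 3), sigma2 = 1.34)
    posterior_ptox(skeleton, tox = c(0, 0, 0, 1, 0, 1),
                   level = c(1, 2, 3, 3, 3, 3), sigma2 = 0.25)
    ```
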
  • Shing M Lee, Bin Cheng, Ying Kuen Cheung
    ABSTRACT: This paper addresses the dose-finding problem in cancer trials in which we are concerned with the gradation of severe toxicities that are considered dose limiting. In order to differentiate the tolerance for different toxicity types and grades, we propose a novel extension of the continual reassessment method that explicitly accounts for multiple toxicity constraints. We apply the proposed methods to redesign a bortezomib trial in lymphoma patients and compare their performance with that of the existing methods. Based on simulations, our proposed methods achieve comparable accuracy in identifying the maximum tolerated dose but have better control of the erroneous allocation and recommendation of an overdose.
    Biostatistics 09/2010; 12(2):386-98. · 2.43 Impact Factor
  • Ying Kuen Cheung
    ABSTRACT: In 1951 Robbins and Monro published the seminal paper on stochastic approximation and made a specific reference to its application to the "estimation of a quantal using response, non-response data". Since the 1990s, statistical methodology for dose-finding studies has grown into an active area of research. The dose-finding problem is at its core a percentile estimation problem and is in line with what the Robbins-Monro method sets out to solve. In this light, it is quite surprising that the dose-finding literature has developed rather independently of the older stochastic approximation literature. The fact that stochastic approximation has seldom been used in actual clinical studies stands in stark contrast with its constant application in engineering and finance. In this article, I explore similarities and differences between the dose-finding and the stochastic approximation literatures. This review also sheds light on the present and future relevance of stochastic approximation to dose-finding clinical trials. Such connections will in turn steer dose-finding methodology on a rigorous course and extend its ability to handle increasingly complex clinical situations.
    Statistical Science 05/2010; 25(2):191-201. · 2.24 Impact Factor
  • Ying Kuen Cheung, Mitchell S V Elkind
    ABSTRACT: Phase I clinical studies are experiments in which a new drug is administered to humans to determine the maximum dose that causes toxicity with a target probability. Phase I dose-finding is often formulated as a quantile estimation problem. For studies with a biological endpoint, it is common to define toxicity by dichotomizing the continuous biomarker expression. In this article, we propose a novel variant of the Robbins-Monro stochastic approximation that utilizes the continuous measurements for quantile estimation. The Robbins-Monro method has seldom seen clinical applications, because it does not perform well for quantile estimation with binary data and it works with a continuum of doses that are generally not available in practice. To address these issues, we formulate the dose-finding problem as root-finding for the mean of a continuous variable, for which the stochastic approximation procedure is efficient. To accommodate the use of discrete doses, we introduce the idea of virtual observation that is defined on a continuous dosage range. Our proposed method inherits the convergence properties of the stochastic approximation algorithm and its computational simplicity. Simulations based on real trial data show that our proposed method improves accuracy compared with the continual re-assessment method and produces results robust to model misspecification.
    Biometrika 03/2010; 97(1):109-121. · 1.65 Impact Factor
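
    A schematic Robbins-Monro-type recursion for dose finding with a continuous biomarker and discrete dose levels; the dose panel, response model, and step size are assumptions, and simple rounding stands in for the paper's virtual-observation construction:

    ```r
    ## Keep a running dose on a continuous (log) scale, assign each patient
    ## the nearest available dose, observe a continuous biomarker, and take a
    ## Robbins-Monro step toward the level where the mean response hits target.
    doses  <- c(1, 2, 4, 8, 16)      # available dose levels (assumed)
    target <- 0                      # target mean of the (centered) biomarker
    x      <- log(doses[2])          # running dose on the continuous scale
    a      <- 2                      # step-size constant (assumed)
    n      <- 25

    set.seed(1)
    for (i in 1:n) {
      d_i <- doses[which.min(abs(log(doses) - x))]   # nearest available dose
      ## Hypothetical response model: biomarker rises with log-dose plus noise;
      ## the mean crosses the target at dose 4.
      y_i <- 0.8 * (log(d_i) - log(4)) + rnorm(1, sd = 0.5)
      x   <- x - (a / i) * (y_i - target)            # Robbins-Monro update
    }
    doses[which.min(abs(log(doses) - x))]            # recommended dose
    ```
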
  • Shing M Lee, Ying Kuen Cheung
    ABSTRACT: The continual reassessment method (CRM) is an adaptive model-based design used to estimate the maximum tolerated dose in dose finding clinical trials. A way to evaluate the sensitivity of a given CRM model, including the functional form of the dose-toxicity curve, the prior distribution on the model parameter, and the initial guesses of toxicity probability at each dose, is using indifference intervals. While the indifference interval technique provides a succinct summary of model sensitivity, there are infinitely many possible ways to specify the initial guesses of toxicity probability. In practice, these are generally specified by trial and error through extensive simulations. By using indifference intervals, the initial guesses used in the CRM can be selected by specifying a range of acceptable toxicity probabilities in addition to the target probability of toxicity. An algorithm is proposed for obtaining the indifference interval that maximizes the average percentage of correct selection across a set of scenarios of true probabilities of toxicity, providing a systematic approach for selecting initial guesses in a much less time-consuming manner than the trial-and-error method. The methods are compared in the context of two real CRM trials. For both trials, the initial guesses selected by the proposed algorithm had operating characteristics similar to those of the initial guesses used during the conduct of the trials (which were obtained by trial and error through a time-consuming calibration process), as measured by percentage of correct selection, average absolute difference between the true probability of the dose selected and the target probability of toxicity, percentage treated at each dose, and overall percentage of toxicity. The average percentage of correct selection across the scenarios considered was 61.5% versus 62.0% in the lymphoma trial, and 62.9% versus 64.0% in the stroke trial, for the trial-and-error method versus the proposed approach. We present detailed results only for the empiric dose-toxicity curve, although the proposed methods are applicable to other dose-toxicity models such as the logistic. The proposed method provides a fast and systematic approach for selecting initial guesses of the probabilities of toxicity used in the CRM that are competitive with those obtained by trial and error through a time-consuming process, thus simplifying the model calibration process for the CRM.
    Clinical Trials 07/2009; 6(3):227-38. · 2.20 Impact Factor
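
    The indifference-interval calibration described above is implemented by the getprior function in the dfcrm package; the half-width, target, prior MTD position, and number of dose levels below are assumed values for illustration:

    ```r
    ## Supply the target rate, the half-width of the acceptable interval of
    ## toxicity probabilities, the prior guess of the MTD position, and the
    ## number of dose levels; getprior returns the initial guesses (skeleton).
    library(dfcrm)

    skeleton <- getprior(halfwidth = 0.05, target = 0.25, nu = 3, nlevel = 5)
    round(skeleton, 3)
    ## A wider half-width gives a more spread-out skeleton:
    round(getprior(halfwidth = 0.10, target = 0.25, nu = 3, nlevel = 5), 3)
    ```
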
  • Ying Kuen Cheung
    ABSTRACT: The primary objective of Phase II cancer trials is to evaluate the potential efficacy of a new regimen in terms of its antitumor activity in a given type of cancer. Due to advances in oncology therapeutics and heterogeneity in the patient population, such evaluation can be interpreted objectively only in the presence of a prospective control group of an active standard treatment. This paper deals with the design problem of Phase II selection trials in which several experimental regimens are compared to an active control, with the objective of identifying an experimental arm that is more effective than the control or declaring futility if no such treatment exists. Conducting a multi-arm randomized selection trial is a useful strategy to prioritize experimental treatments for further testing when many candidates are available, but the sample size required in such a trial with an active control could raise feasibility concerns. In this study, we extend the sequential probability ratio test for normal observations to the multi-arm selection setting. The proposed methods allow frequent interim monitoring, offer a high likelihood of early trial termination, and as such enhance enrollment feasibility. The termination and selection criteria have closed form solutions and are easy to compute with respect to any given set of error constraints. The proposed methods are applied to design a selection trial in which combinations of sorafenib and erlotinib are compared to a control group in patients with non-small-cell lung cancer using a continuous endpoint of change in tumor size. The operating characteristics of the proposed methods are compared to those of a single-stage design via simulations: the sample size requirement is reduced substantially and is feasible at an early stage of drug development.
    Journal of Biopharmaceutical Statistics 02/2009; 19(3):494-508. · 0.73 Impact Factor
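
    A textbook Wald sequential probability ratio test for a single normal contrast against the control, with assumed design parameters; the paper's multi-arm termination and selection criteria and error allocation are not reproduced here:

    ```r
    ## 'diffs' are per-pair differences in the continuous endpoint
    ## (experimental minus control); sigma, delta1 (the improvement worth
    ## detecting), alpha, and beta are assumed.
    sprt_normal <- function(diffs, sigma, delta1, alpha = 0.05, beta = 0.10) {
      upper <- log((1 - beta) / alpha)     # cross above: select experimental arm
      lower <- log(beta / (1 - alpha))     # cross below: declare futility
      llr <- cumsum((delta1 / sigma^2) * diffs - delta1^2 / (2 * sigma^2))
      decision <- "continue"
      stop_at <- length(diffs)
      hit <- which(llr >= upper | llr <= lower)
      if (length(hit) > 0) {
        stop_at <- hit[1]
        decision <- if (llr[stop_at] >= upper) "select experimental arm" else "futility"
      }
      list(decision = decision, n = stop_at, llr = llr[stop_at])
    }

    set.seed(2)
    sprt_normal(diffs = rnorm(60, mean = 0.3, sd = 1), sigma = 1, delta1 = 0.5)
    ```
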

Publication Stats

380 Citations
68.14 Total Impact Points

Institutions

  • 2002–2014
    • Columbia University
      • Department of Biostatistics
      New York City, New York, United States
  • 2012
    • National Cheng Kung University
      • Department of Statistics
      Tainan, Taiwan
  • 2008
    • Amgen
      Thousand Oaks, California, United States
  • 2000–2001
    • University of Wisconsin, Madison
      • Department of Biostatistics and Medical Informatics
      Madison, Wisconsin, United States