
David M Rindskopf
CUNY Graduate Center | CUNY
About
103 Publications
27,787 Reads
6,879 Citations
Publications (103)
The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results we...
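As a rough sketch of the Bayesian side of this comparison (not the authors' actual models or priors), the following fits a two-level model with weakly informative priors to invented single-case-style data. It assumes PyMC is available; all variable names, data, and prior scales are placeholders.

```python
# Minimal sketch: Bayesian multilevel model for single-case-style data with
# weakly informative priors (illustrative only; not the article's model).
import numpy as np
import pymc as pm  # assumes PyMC >= 5 is installed

rng = np.random.default_rng(1)
n_cases, n_obs = 4, 20
case = np.repeat(np.arange(n_cases), n_obs)                  # case index per observation
phase = np.tile(np.r_[np.zeros(10), np.ones(10)], n_cases)   # 0 = baseline, 1 = treatment
y = 3 + 2 * phase + rng.normal(0, 1, n_cases * n_obs)        # fake outcome data

with pm.Model() as model:
    # Weakly informative priors: broad but proper
    mu_b0 = pm.Normal("mu_b0", mu=0, sigma=10)       # average baseline level
    mu_b1 = pm.Normal("mu_b1", mu=0, sigma=10)       # average treatment effect
    tau0 = pm.HalfNormal("tau0", sigma=5)            # between-case SD of baselines
    tau1 = pm.HalfNormal("tau1", sigma=5)            # between-case SD of effects
    sigma = pm.HalfNormal("sigma", sigma=5)          # within-case residual SD

    b0 = pm.Normal("b0", mu=mu_b0, sigma=tau0, shape=n_cases)  # case-specific intercepts
    b1 = pm.Normal("b1", mu=mu_b1, sigma=tau1, shape=n_cases)  # case-specific effects

    pm.Normal("y", mu=b0[case] + b1[case] * phase, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(float(idata.posterior["mu_b1"].mean()))  # posterior mean of the average effect
```

Replacing these priors with much flatter ones, or refitting the same model by maximum likelihood, is the kind of comparison the abstract describes.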
Purpose:
The purpose of the study was to identify the sex-specific characteristics that predict depression among adult women with diabetes.
Methods:
Data from the 2007-2012 National Health and Nutrition Examination Survey in the United States were used to identify the predictors of depression in a large sample of women ages 20 years and older wi...
For a variety of reasons, researchers and evidence-based clearinghouses synthesizing the results of multiple studies often have very few studies that are eligible for any given research question. This situation is less than optimal for meta-analysis as it is usually practiced, that is, by employing inverse variance weights, which allows more inform...
Objective:
We reanalyzed data from a previous randomized crossover design that administered high or low doses of intravenous immunoglobulin (IgG) to 12 patients with hypogammaglobulinaemia over 12 time points, with crossover after time 6. The objective was to see if results corresponded when analyzed as a set of single-case experimental designs v...
Depressed older adults with executive dysfunction (ED) may respond poorly to antidepressant treatment. ED is a multifaceted construct and different studies have measured different aspects of ED, making it unclear which aspects predict poor response. Meta-analytic methods were used to determine whether ED predicts poor antidepressant treatment respo...
Objectives:
We examined the potential for glycemic control monitoring and screening for diabetes in a dental setting among adults (n = 408) with or at risk for diabetes.
Methods:
In 2013 and 2014, we performed hemoglobin A1c (HbA1c) tests on dried blood samples of gingival crevicular blood and compared these with paired "gold-standard" HbA1c tes...
In this article, we respond to Wolery's critique of the What Works Clearinghouse (WWC) pilot Standards, which were developed by the current authors. We do so to provide additional information and clarify some points previously summarized in this journal. We also respond to several concerns raised by Maggin, Briesch, and Chafouleas after they applie...
Several authors have suggested the use of multilevel models for the analysis of data from single case designs. Multilevel models are a logical approach to analyzing such data, and deal well with the possible different time points and treatment phases for different subjects. However, they are limited in several ways that are addressed by Bayesian me...
For Latinas with fasting plasma glucose (FPG) levels in the prediabetes and diabetes ranges, early detection can support steps to optimize their health. Data collected in 2009–2010 indicate that 36.7% of Latinas in the United States had elevated FPG levels. Latinas with elevated FPG who were unaware of their diabetes status were significantly less...
Bayesian statistical methods have great potential advantages for the analysis of data from single case designs. Bayesian methods combine prior information with data from a study to form a posterior distribution of information about their parameters and functions. The interpretation of results from a Bayesian analysis is more natural than those from...
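The mechanics the abstract describes (prior information combined with data to form a posterior) can be shown with a toy conjugate example; this is not from the article and assumes SciPy is available.

```python
# Toy illustration of prior + data -> posterior (conjugate Beta-Binomial case).
from scipy import stats

# Prior belief about a success probability p: Beta(a, b)
a_prior, b_prior = 2, 2          # weakly informative, centered at 0.5

# Observed data: successes out of n trials
successes, n = 14, 20

# Posterior is Beta(a + successes, b + failures)
a_post = a_prior + successes
b_post = b_prior + (n - successes)
posterior = stats.beta(a_post, b_post)

print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```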
Children with attention deficit/hyperactivity disorder (ADHD) have poorer neuropsychological functioning relative to their typically developing peers. However, it is unclear whether early neuropsychological functioning predicts later ADHD severity and/or the latter is longitudinally associated with subsequent neuropsychological functioning; and whe...
Objective:
This longitudinal study examined if changes in neuropsychological functioning were associated with the trajectory of symptoms related to attention deficit hyperactivity disorder (ADHD) and impairment between preschool and school age.
Method:
The sample consisted of 3- and 4-year-old children (N=138) who were identified as being at ris...
Several authors have proposed the use of multilevel models to analyze data from single-case designs. This article extends that work in 2 ways. First, examples are given of how to estimate these models when the single-case designs have features that have not been considered by past authors. These include the use of polynomial coefficients to model n...
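As one possible illustration of the kind of extension described (a polynomial time trend inside a multilevel model), the sketch below fits a random-intercept model with linear and quadratic time terms using statsmodels; the data, column names, and model form are invented for illustration, not taken from the article.

```python
# Sketch: multilevel model for single-case-style data with a quadratic
# (polynomial) time trend, using statsmodels. Data and names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for case in range(5):
    for t in range(20):
        phase = int(t >= 10)                      # 0 = baseline, 1 = intervention
        y = 2 + 0.3 * t - 0.01 * t**2 + 1.5 * phase + rng.normal(0, 1)
        rows.append({"case": case, "time": t, "phase": phase, "y": y})
df = pd.DataFrame(rows)

# Random intercept per case; fixed linear and quadratic time terms plus phase
model = smf.mixedlm("y ~ time + I(time**2) + phase", df, groups=df["case"])
result = model.fit()
print(result.summary())
```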
Researchers in the single-case design tradition have debated the size and importance of the observed autocorrelations in those designs. All of the past estimates of the autocorrelation in that literature have taken the observed autocorrelation estimates as the data to be used in the debate. However, estimates of the autocorrelation are subject to g...
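A minimal sketch of the general idea, assuming the common large-sample approximation var(r1) ≈ 1/n rather than whatever the article actually uses: estimate a lag-1 autocorrelation per case, then pool the estimates with inverse-variance weights.

```python
# Rough sketch: lag-1 autocorrelation per case, then an inverse-variance-weighted
# average across cases. Uses the large-sample approximation var(r1) ~ 1/n,
# which is an assumption here, not the article's method.
import numpy as np

def lag1_autocorr(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.sum(x[:-1] * x[1:]) / np.sum(x * x)

rng = np.random.default_rng(42)
series = [rng.normal(size=n) for n in (12, 18, 25)]   # fake single-case series

r = np.array([lag1_autocorr(s) for s in series])
n = np.array([len(s) for s in series])
w = n                      # weights proportional to 1 / var(r1) ~ n

pooled = np.sum(w * r) / np.sum(w)
se_pooled = np.sqrt(1 / np.sum(w))
print(f"pooled lag-1 autocorrelation: {pooled:.3f} (SE ~ {se_pooled:.3f})")
```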
In an effort to responsibly incorporate evidence based on single-case designs (SCDs) into the What Works Clearinghouse (WWC) evidence base, the WWC assembled a panel of individuals with expertise in quantitative methods and SCD methodology to draft SCD standards. In this article, the panel provides an overview of the SCD standards recommended by th...
Muthén and Asparouhov (2012) made a strong case for the advantages of Bayesian methodology in factor analysis and structural equation models. I show additional extensions and adaptations of their methods and show how non-Bayesians can take advantage of many (though not all) of these advantages by using interval restrictions on parameters. By keepin...
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of cigarettes smoked per day, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively di...
Over the last 10 years, numerous authors have proposed effect size estimators for single-case designs. None, however, has been shown to be equivalent to the usual between-groups standardized mean difference statistic, sometimes called d. The present paper remedies that omission. Most effect size estimators for single-case designs use the within-per...
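Purely as an illustration of what a between-case standardized mean difference looks like (this is not the estimator developed in the paper), the sketch below divides the average baseline-to-treatment change by an SD that mixes within-case and between-case variability, using invented data.

```python
# Naive illustration of a d-like effect size for multiple-baseline data:
# average (treatment - baseline) change divided by an SD that combines
# within-case and between-case variability. NOT the article's estimator.
import numpy as np

rng = np.random.default_rng(7)
n_cases = 6
baseline = rng.normal(10, 2, size=(n_cases, 8))    # 8 baseline observations per case
treatment = rng.normal(13, 2, size=(n_cases, 8))   # 8 treatment observations per case

change = treatment.mean(axis=1) - baseline.mean(axis=1)   # per-case mean change
within_var = 0.5 * (baseline.var(axis=1, ddof=1) + treatment.var(axis=1, ddof=1)).mean()
between_var = baseline.mean(axis=1).var(ddof=1)           # variability of case baselines

d_like = change.mean() / np.sqrt(within_var + between_var)
print(f"illustrative standardized mean difference: {d_like:.2f}")
```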
This pilot study examines whether a novel diabetes screening approach using gingival crevicular blood (GCB) could be used to test for hemoglobin A1c (HbA1c) during periodontal visits.
Finger-stick blood (FSB) samples from 120 patients and GCB samples from those patients with adequate bleeding on probing (BOP) were collected on special blood collect...
This study investigated how study type, mean patient age, and amount of contact with research staff affected response rates to medication and placebo in acute antidepressant trials for pediatric depression.
Data were extracted from nine open, four active comparator, and 18 placebo-controlled studies of antidepressants for children and adolescents w...
To increase HCV-related support for patients in substance abuse treatment programs, we implemented an on-site staff training in 16 programs throughout the United States. It aimed to increase participants' self-efficacy in assisting patients with their HCV-related needs. Findings indicate that participants' self-efficacy increased both 1- and 3-mont...
In applications of generalized linear models to education, observations at one level are frequently nested within units at another level. For example, we have measurements on students, who are located within a class, which is within a school, and so on. Methods for analyzing such data go back several decades, but new methods have led to many extens...
In educational research, test scores are often summarized as if they emerge from a normal distribution. Such a distribution can be described by two parameters: the center (measured by the mean) and dispersion (measured by the variance or standard deviation). This article summarizes the most commonly used measures of dispersion: several types of ran...
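For concreteness, a small example computing the dispersion measures named above for a set of made-up test scores, assuming NumPy is available.

```python
# Common measures of dispersion for a set of test scores (made-up data).
import numpy as np

scores = np.array([62, 70, 74, 75, 78, 81, 83, 85, 90, 97], dtype=float)

value_range = scores.max() - scores.min()
q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
variance = scores.var(ddof=1)        # sample variance
sd = scores.std(ddof=1)              # sample standard deviation

print(f"range={value_range}, IQR={iqr}, variance={variance:.1f}, SD={sd:.1f}")
```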
In an effort to expand the pool of scientific evidence available for review, the What Works Clearinghouse (WWC) assembled a panel of national experts in single-case design (SCD) and analysis to draft SCD Standards. SCDs are adaptations of interrupted time-series designs and can provide a rigorous experimental evaluation of intervention effects. SCD...
Bock's model for multinomial responses considered contingency tables as consisting of two kinds of variables, sampling variables (that defined groups) and response variables. Contrasts among response variables were specified, and these were modeled as functions of contrasts among categories defined by the sampling variables. This neat separatio...
Alcohol-related problems are especially common among opioid treatment program (OTP) patients, suggesting that educating OTP patients about alcohol and its harmful effects needs to be a priority in OTPs. Using data collected in interviews with a nationwide U.S. sample of OTP directors (N = 200) in 25 states, we identified factors that differentiate...
Objectives: The bidirectional relationship between periodontitis and diabetes suggests that the dental visit may offer a largely untapped opportunity to screen for undiagnosed diabetes. To better examine this potential opportunity, data from the National Health and Nutrition Examination Survey (NHANES) 2003-2004 were used to determine if a larger p...
Because many HIV care providers fail to detect patients' hazardous drinking, we examined the potential use of the AUDIT-C, the first 3 of the 10 items comprising the Alcohol Use Disorders Identification Test (AUDIT), to efficiently screen patients for alcohol abuse. To perform this examination, we used Item Response Theory (IRT) involving individua...
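For readers unfamiliar with IRT, the two-parameter logistic item response function has the generic form below; the discrimination and difficulty values are placeholders, not the fitted AUDIT-C parameters from the article.

```python
# Generic two-parameter logistic (2PL) IRT item response function:
# probability of endorsing an item given latent trait theta, discrimination a,
# and difficulty b. Illustrative only; not the fitted model from the article.
import numpy as np

def irt_2pl(theta, a, b):
    """P(item endorsed | theta) under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(irt_2pl(theta, a=1.5, b=0.5))   # endorsement probabilities across trait levels
```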
Certain research tasks require extracting data points from graphs and charts. Using 91 graphs that presented results from single-case designs, we investigated whether pairs of coders extract similar data from the same graphs (reliability), and whether the extracted data match numerical descriptions of the graph that the original author may have pre...
The articles in the previous, special issue of Evidence-Based Communication Assessment and Intervention provided an excellent review of the meta-analysis of single-case designs. This article weaves commentary about those articles into a larger narrative about two major lines of attack on this problem: the use of parametric approaches like regressio...
Vascular depression has been proposed as a unique diagnostic subtype in late life, yet no study has evaluated whether the specified clinical features associated with the illness are jointly indicative of an underlying diagnostic class.
We applied latent class analysis to two independent clinical samples: the prospective, cohort design, Neurocogniti...
Qualitative reviews of late-life antidepressant clinical trials suggest that antidepressant response rates in comparator trials are higher than antidepressant response rates in placebo-controlled trials. No quantitative review has been conducted to test this hypothesis.
A meta-analysis was conducted of all published articles in peer-reviewed journa...
Summary judgment in federal courts has been widely regarded as an initially underused procedural device that was revitalized by the 1986 Supreme Court trilogy of Celotex, Anderson, and Matsushita. Some recent commentators believe summary judgment activity has expanded to the point that it threatens the right to trial. We examined summary judgment p...
Organizational researchers, including those carrying out occupational stress research, often conduct longitudinal studies. Hierarchical linear modeling (HLM; also known as multilevel modeling and random regression) can efficiently organize analyses of longitudinal data by including within- and between-person levels of analysis. A great deal of long...
Good quantitative evidence does not require large, aggregate group designs. The authors describe ground-breaking work in managing the conceptual and practical demands in developing meta-analytic strategies for single subject designs in an effort to add to evidence-based practice.
Summary judgment in federal courts has been widely regarded as an initially underused procedural device that was revitalized by the 1986 Supreme Court trilogy of Celotex, Anderson, and Matsushita. Some recent commentators believe summary judgment activity has expanded to the point that it threatens the right to trial. We examined summary judgment pr...
Hepatitis C virus (HCV) infection is a global health problem, and in many countries (including the U.S.), illicit drug users constitute the group at greatest risk for contracting and transmitting HCV. Drug treatment programs are therefore unique sites of opportunity for providing medical care and support for many HCV infected individuals. This pape...
In this article, multilevel latent class analysis is used to examine the structure of heavy alcohol use. A model with three latent classes (types) of people fits the data well: those who seldom suffer major consequences from heavy drinking, those who typically suffer only a small number of major consequences, and those who suffer serious consequenc...
Illegal drug use remains one of the United States' most serious health problems, and the “War on Drugs” continues without an end in sight. Antidrug programs, which offer the potential to reduce substance abuse problems, are a component of efforts to deal with the problem, but they operate absent adequate scientific analysis. Although policy has shi...
Modeling the process by which participants are selected into groups, rather than adjusting for preexisting group differences, provides the basis for several new approaches to the analysis of data from nonrandomized studies.
We demonstrate a model for categorical data that parallels the MIMIC model for continuous data. The model is equivalent to a latent class model with observed covariates; further, it includes simple handling of missing data. The model is used on data from a large-scale study of HIV that had both biological measures of infection and self-report (miss...
D. J. Bauer and P. J. Curran (2003) raised some interesting issues with respect to mixture models of growth curves. Many useful lessons can be learned from their work, and more can be learned by extending the inquiry in related directions. These lessons involve the following issues: (a) what a mixture distribution looks like, (b) the meaning of the...
This article examines whether relationships between individual characteristics and HIV status can be identified when self-report data are used as a proxy for HIV serotest results. The analyses use data obtained from HIV serotests and face-to-face interviews with 7,256 out-of-treatment drug users in ten sites from 1992 to 1998. Relationships between...
Many HIV positive drug users are unaware that they have the virus, either because they never obtained testing for HIV or because they submitted a biological specimen for testing but never returned to obtain the result of the test. Using data collected from a large multi-site sample of out-of-treatment HIV positive drug users (N=1,544), we identify...
Infinite parameter estimates in logistic regression are commonly thought of as a problem. This article shows that in principle an analyst should be happy to have an infinite slope in logistic regression, because it indicates that a predictor is perfect. Using simple approaches, hypothesis tests may be performed and confidence intervals calculated e...
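A sketch of the point with a 2x2 table containing a zero cell: the maximum likelihood estimate of the log odds ratio is infinite, yet a likelihood-ratio test of association is finite and well behaved. The counts are invented, and this is only one of the "simple approaches" the abstract alludes to.

```python
# Sketch: a 2x2 table with a zero cell. The MLE of the log odds ratio is
# infinite (the predictor is "perfect" in one row), but a likelihood-ratio
# test of independence is still well defined. Data are invented.
import numpy as np
from scipy import stats

table = np.array([[20, 0],     # group A: 20 successes, 0 failures
                  [8, 12]])    # group B:  8 successes, 12 failures

row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row * col / table.sum()

# G^2 = 2 * sum O * ln(O / E), with 0 * ln(0) taken as 0
observed = table.astype(float)
mask = observed > 0
g2 = 2 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))
p_value = stats.chi2.sf(g2, df=1)

print(f"G^2 = {g2:.2f}, p = {p_value:.4f}")   # finite test despite infinite estimate
```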
This study examined differences between the visibility of drugs and drug use in more than 2100 neighborhoods, challenging an assumption about drug use in poor, minority, and urban communities.
A telephone survey assessed substance use and attitudes across 41 communities in an evaluation of a national community-based demand reduction program. Three...
Reliability of measurement refers to the dependability or repeatability of the measurement process: Does one get about the same answer when one measures a construct several times, or using similar methods of measurement, or using different people? Relevant theory comes from classical test theory, item response theory, generalizability theory, and r...
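As one concrete instance from the classical-test-theory side, Cronbach's alpha can be computed in a few lines; the data are invented, and this is a generic formula rather than anything specific to the article.

```python
# Cronbach's alpha, a common classical-test-theory reliability estimate.
# Data are invented; rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
true_score = rng.normal(size=(100, 1))
responses = true_score + rng.normal(scale=0.8, size=(100, 5))  # 5 roughly parallel items

print(f"alpha = {cronbach_alpha(responses):.2f}")
```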
Analysts evaluating the strengths of relationships between variables in behavioral science research must often contend with the problem of missing data. Analyses are typically performed using data for cases that are either complete in all the variables, or assume that the data are missing at random. Often, these approaches yield biased results. Usi...
This study examines the concurrence of drug users' self-reports of current HIV status with serotest results. The analyses are based on data obtained from face-to-face interviews with 7,256 out-of-treatment injection drug and/or crack users in 10 sites that participated in the Cooperative Agreement for AIDS Community-Based Outreach/Intervention Rese...
Surveys to depict substance abuse rates and monitor trends in specific areas have become increasingly important policy tools. Yet, as illustrated by two national multiwave surveys, using small sample survey data and making longitudinal comparisons is fraught with interpretative problems. In the case of the metropolitan area "oversample" of the Nati...
A. von Eye and J. Brandtstädter (1998) proposed 3 new concepts for log-linear modeling of categorical data for causal dependency: the wedge, the fork, and the chain. These concepts are difficult to operationalize uniquely and therefore difficult to translate unambiguously into statistical models. In addition, the statistical models proposed by von...
Many meta-analysts incorrectly use correlations or standardized mean difference statistics to compute effect sizes on dichotomous data. Odds ratios and their logarithms should almost always be preferred for such data. This article reviews the issues and shows how to use odds ratios in meta-analytic data, both alone and in combination with other eff...
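A minimal sketch of the recommended practice, assuming fixed-effect inverse-variance pooling of log odds ratios computed from invented 2x2 tables.

```python
# Sketch: log odds ratios from 2x2 tables pooled with inverse-variance weights
# (fixed-effect form). Tables are invented.
import numpy as np

# Each table: [events_treated, nonevents_treated, events_control, nonevents_control]
tables = np.array([
    [12, 38, 20, 30],
    [ 8, 42, 15, 35],
    [ 5, 45,  9, 41],
], dtype=float)

a, b, c, d = tables.T
log_or = np.log((a * d) / (b * c))                 # log odds ratio per study
var = 1/a + 1/b + 1/c + 1/d                        # its approximate variance
w = 1 / var

pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*se):.2f} to {np.exp(pooled + 1.96*se):.2f})")
```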
Unfortunately, reading Chow's work is likely to leave the reader more confused than enlightened. My preferred solutions to the "controversy" about null-hypothesis testing are: (1) recognize that we really want to test the hypothesis that an effect is "small," not null, and (2) use Bayesian methods, which are much more in keeping with the way...
The evaluation of community-based programs poses special design and analysis problems. The present article focuses on two major types of errors that can occur in such evaluations: false positives (incorrectly declaring a program to be effective) and false negatives (incorrectly declaring a program to be ineffective). The evaluation of a national demo...
The devolution of national programs to state and local control creates special challenges for evaluators as policy ideas are implemented in a variety of ways in local communities. This paper describes the national evaluation of the Robert Wood Johnson Foundation's Fighting Back program, a multi-site demonstration of community-based substance abuse...
In obesity research it is common to have repeated measures on subjects. Traditional statistical analyses of repeated measures data are analysis of variance (ANOVA) for random effects and multivariate analysis of variance (MANOVA). Each assumes that every subject was measured (i) the same number of times, and (ii) at the same time points. Another ty...
A general approach for analyzing categorical data when there are missing data is described and illustrated. The method is based on generalized linear models with composite links. The approach can be used (among other applications) to fill in contingency tables with supplementary margins, fit loglinear models when data are missing, fit latent class...
This article demonstrates several useful varieties of nonstandard log-linear models. Some can be derived as nonhierarchical models by deleting lower-order effects in hierarchical models, but most often they will arise as the result of special hypotheses that the researcher wants to test. Three approaches to testing nonstandard models, partitioning...
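For orientation, the standard machinery such models build on is a Poisson GLM for cell counts; the sketch below fits an ordinary independence log-linear model with statsmodels. The nonstandard terms the article discusses are not shown, and the counts are invented.

```python
# Sketch: fitting a log-linear model of independence to a 2x3 contingency
# table as a Poisson GLM. Counts are invented.
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm

counts = pd.DataFrame({
    "count": [30, 15, 5, 20, 25, 10],
    "row":   ["a", "a", "a", "b", "b", "b"],
    "col":   ["x", "y", "z", "x", "y", "z"],
})

# Independence model: log(mu) = intercept + row effects + column effects
model = smf.glm("count ~ C(row) + C(col)", data=counts,
                family=sm.families.Poisson()).fit()
print(model.summary())
print("deviance (lack of fit):", model.deviance)
```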
This paper shows how confirmatory factor analysis can be used to test second- (and higher-) order factor models in the areas of the structure of abilities, allometry, and the separation of specific and error variance estimates. In the latter area, an idea of Jöreskog's is extended to include a new conceptualization of the estimation of validity. Se...
Reviews the books, Fundamental Statistics for Behavioral Sciences (4th ed.) by Robert B. McCall (1986); and Using Statistics for Psychological Research: An Introduction by James Thomas Walker (1985). The material covered in both books is standard for introductory texts at the undergraduate or beginning graduate level: frequency distributions, descr...
While psychology in general has moved away from using typologies and toward using continua to conceptualize dimensions of behavior, certain aspects of learning and development are still fruitfully considered in typological terms. One reason for the abandonment of typologies was that a small number of types seemed unable to explain the enormous dive...
Assessment of the value of diagnostic indicators such as symptoms and laboratory tests results from calculation of the sensitivity and specificity of the indicators. Knowledge of the rate of occurrence of the disease allows for additional calculations of the error rates in using an indicator. These calculations are accurate only when the data on wh...
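The calculations referred to follow directly from Bayes' rule; the sensitivity, specificity, and prevalence values below are invented for illustration.

```python
# Error rates of a diagnostic indicator given sensitivity, specificity, and
# disease prevalence (Bayes' rule). Numbers are invented.
sensitivity = 0.90     # P(test+ | disease)
specificity = 0.95     # P(test- | no disease)
prevalence = 0.02      # P(disease)

ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
npv = (specificity * (1 - prevalence)) / (
    specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)

print(f"P(disease | test+) = {ppv:.3f}")     # 1 - ppv is the error rate among positives
print(f"P(no disease | test-) = {npv:.4f}")  # 1 - npv is the error rate among negatives
```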
A book which summarizes many of the recent advances in the theory and practice of achievement testing, in the light of technological developments, and developments in psychometric and psychological theory. It provides an introduction to the two major psychometric models, item response theory and generalizability theory, and assesses their strengths...
Describes a simple method for implementing general linear equality restrictions on the parameters of linear models. These techniques, unlike those involving Lagrangian multipliers, can be easily implemented on common computer programs such as SPSS, BMDP, and SAS. For multiple regression, both homogeneous and nonhomogeneous constraints can be made, wh...
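The core trick can be shown in ordinary regression software: to impose the equality b1 = b2, regress on the single predictor x1 + x2. The sketch below uses statsmodels with invented data; the particular constraint and the F test are illustrative, not the paper's worked examples.

```python
# Equality restriction by reparameterization: forcing b1 = b2 in a regression
# by replacing x1 and x2 with their sum. Data are invented.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 1 + 2*x1 + 2*x2 + 0.5*x3 + rng.normal(size=n)   # true b1 = b2 = 2

# Unrestricted model
X_free = sm.add_constant(np.column_stack([x1, x2, x3]))
free = sm.OLS(y, X_free).fit()

# Restricted model: the single column (x1 + x2) forces a common coefficient
X_eq = sm.add_constant(np.column_stack([x1 + x2, x3]))
restricted = sm.OLS(y, X_eq).fit()

# An F test of the restriction compares the two fits
df_num = free.df_model - restricted.df_model
f_stat = ((restricted.ssr - free.ssr) / df_num) / (free.ssr / free.df_resid)
print("restricted coefficients:", restricted.params)
print("F test of b1 = b2: F =", round(float(f_stat), 3),
      "p =", round(float(stats.f.sf(f_stat, df_num, free.df_resid)), 3))
```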
Heywood cases represent the most common form of a series of related problems in confirmatory factor analysis and structural equation modeling. Other problems include factor loadings and factor correlations outside the usual range, large variances of parameter estimates, and high correlations between parameter estimates. The concept of empirical und...
Most theories in the social sciences involve relationships among constructs which are not directly observable. Behavioral measures exist of all constructs, such as intelligence, creativity, and other cognitive traits, aggressiveness, sociability, and other personality and affective characteristics; but these observed measures are usually assumed to...
The most widely used computer programs for structural equation modeling are the LISREL series of Jöreskog and Sörbom. The only types of constraints which may be made directly are fixing parameters at a constant value and constraining parameters to be equal. Rindskopf (1983) showed how these simple properties could be used to represent models...
Detecting bias in admissions to graduate and professional schools presents important problems to the data analyst. In this paper some traditionally used methods, such as multiple regression analysis, are compared with the newer methods of logistic regression and structural equation models. The problems faced in modeling decision rules in this sit...
Current computer programs for analyzing linear structural models will apparently handle only two types of constraints: fixed parameters, and equality of parameters. An important constraint not handled is inequality; this is particularly crucial for preventing negative variance estimates. In this paper, a method is described for imposing several kin...
Several articles in the past fifteen years have suggested various models for analyzing dichotomous test or questionnaire items which were constructed to reflect an assumed underlying structure. This paper shows that many models are special cases of latent class analysis. A currently available computer program for latent class analysis allows parame...
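The common form those special cases share is the latent class likelihood for a binary response pattern: a mixture, over classes, of independent Bernoulli items. A tiny sketch with invented parameters:

```python
# The latent class model for dichotomous items: the probability of a response
# pattern is a mixture, over classes, of independent Bernoulli items.
# Class proportions and item probabilities below are invented.
import numpy as np

def pattern_probability(pattern, class_probs, item_probs):
    """P(pattern) = sum_c pi_c * prod_j p_jc^x_j * (1 - p_jc)^(1 - x_j)."""
    pattern = np.asarray(pattern)                       # shape (n_items,)
    p = np.asarray(item_probs)                          # shape (n_classes, n_items)
    per_class = np.prod(p**pattern * (1 - p)**(1 - pattern), axis=1)
    return float(np.dot(class_probs, per_class))

class_probs = [0.6, 0.4]                     # two latent classes
item_probs = [[0.9, 0.8, 0.7, 0.9],          # class 1 endorses items often
              [0.2, 0.3, 0.1, 0.2]]          # class 2 rarely does

print(pattern_probability([1, 1, 0, 1], class_probs, item_probs))
```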
Most researchers who analyze data from studies using nonrandomized assignment to groups realize that regression artifacts plague the commonly used methods of analyzing such data (Campbell & Erlebacher, 1970). One method of dealing with nonequivalent groups is to do an analysis of covariance (ANCOVA) with corrections for errors in the covariate. T...
The article discusses randomized experiments for evaluating and planning local programs in the U.S. The simplest justification for assigning people randomly to alternative programs is tied to the idea that estimates of a program's costs and effects ought to be as fair and as unequivocal as possible. By unequivocal it is meant that any post-program d...
With perfect information from flawlessly designed and executed evaluations of social programs in short supply, evaluators are urged to look to gathering many kinds of evidence and analyzing it by multiple methods to reduce the incidence of erroneous conclusions.