Scott L Zeger

Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, United States

Publications (207) · 845.79 Total Impact

  • Source
    ABSTRACT: Great uncertainty exists around indoor biomass burning exposure-disease relationships due to a lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low-resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can be biased by particle growth in high-humidity environments and by differences in compositional and size-dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations in a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, the new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have one response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward.
    International journal of environmental research and public health. 01/2014; 11(6):6400-6416.
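
    A minimal sketch of the integrated RH-plus-gravimetric conversion idea described above: a single regression of gravimetric PM2.5 on the log-transformed nephelometer reading and RH over its full range, applied to new readings. Synthetic data; the variable names (pdr_pm, rh, grav_pm25) and coefficients are illustrative assumptions, not the published calibration.

```python
# Hedged sketch: integrated RH + gravimetric conversion in one regression with
# a single response (gravimetric PM2.5) and no RH threshold. Synthetic
# collocation data stand in for paired pDR-1000 and filter measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
calib = pd.DataFrame({
    "pdr_pm": rng.lognormal(6.0, 0.8, n),     # nephelometer reading (ug/m3)
    "rh": rng.uniform(20, 95, n),             # relative humidity (%)
})
calib["grav_pm25"] = np.exp(0.2 + 0.95 * np.log(calib["pdr_pm"])
                            - 0.004 * calib["rh"] + rng.normal(0, 0.15, n))

fit = smf.ols("np.log(grav_pm25) ~ np.log(pdr_pm) + rh", data=calib).fit()

# Apply the fitted conversion to new field readings.
field = pd.DataFrame({"pdr_pm": [850.0, 1200.0], "rh": [72.0, 55.0]})
field["pm25_est"] = np.exp(fit.predict(field))
print(field)
```
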
  •
    ABSTRACT: OBJECTIVE: Increasing evidence, including publication of the Transfusion Requirements in Critical Care trial in 1999, supports a lower hemoglobin threshold for RBC transfusion in ICU patients. However, little is known regarding the influence of this evidence on clinical practice over time in a large population-based cohort. DESIGN: Retrospective population-based cohort study. SETTING: Thirty-five Maryland hospitals. PATIENTS: Seventy-three thousand three hundred eighty-five nonsurgical adults with an ICU stay greater than 1 day between 1994 and 2007. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The unadjusted odds of patients receiving an RBC transfusion increased from 7.9% during the pre-Transfusion Requirements in Critical Care baseline period (1994-1998) to 14.7% during the post-Transfusion Requirements in Critical Care period (1999-2007). A logistic regression model, including 40 relevant patient and hospital characteristics, compared the annual trend in the adjusted odds of RBC transfusion during the pre- versus post-Transfusion Requirements in Critical Care periods. During the pre-Transfusion Requirements in Critical Care period, the trend in the adjusted odds of RBC transfusion did not differ between hospitals averaging > 200 annual ICU discharges and hospitals averaging ≤ 200 annual ICU discharges (odds ratio, 1.07 [95% CI, 1.01-1.13] annually and 1.03 [95% CI, 0.99-1.07] annually, respectively; p = 0.401). However, during the post-Transfusion Requirements in Critical Care period, the adjusted odds of RBC transfusion decreased over time in higher ICU volume hospitals (odds ratio, 0.96 [95% CI, 0.93-0.98] annually) but continued to increase in lower ICU volume hospitals (odds ratio, 1.10 [95% CI, 1.08-1.13] annually), p < 0.001. CONCLUSIONS: In this population-based cohort of ICU patients, the unadjusted odds of RBC transfusion increased in both higher and lower ICU volume hospitals both before and after Transfusion Requirements in Critical Care publication. After adjusting for relevant characteristics, the odds continued to increase in lower ICU volume hospitals in the post-Transfusion Requirements in Critical Care period but decreased in higher ICU volume hospitals. This suggests that evidence supporting restrictive RBC transfusion thresholds may not be uniformly translated into practice in different hospital settings.
    Critical care medicine 08/2013; · 6.37 Impact Factor
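
    A hedged sketch of the kind of trend comparison described above: a logistic model with a year-by-volume interaction, so each ICU volume stratum gets its own annual odds ratio. Synthetic data with only two covariates; the published model adjusted for roughly 40 patient and hospital characteristics.

```python
# Hedged sketch: annual trend in transfusion odds by ICU volume stratum via a
# year-by-volume interaction. Synthetic post-TRICC data, two covariates only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20_000
df = pd.DataFrame({
    "years_since_1999": rng.integers(0, 9, n),
    "high_volume": rng.integers(0, 2, n),     # >200 annual ICU discharges
})
logit_p = (-2.0 + 0.10 * df.years_since_1999 * (1 - df.high_volume)
           - 0.04 * df.years_since_1999 * df.high_volume)
df["transfused"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("transfused ~ years_since_1999 * high_volume", data=df).fit(disp=0)
print(np.exp(fit.params["years_since_1999"]),                  # lower-volume annual trend
      np.exp(fit.params["years_since_1999"]
             + fit.params["years_since_1999:high_volume"]))    # higher-volume annual trend
```
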
  •
    ABSTRACT: STUDY OBJECTIVE: We determine whether prescription information or services improve the medication adherence of emergency department (ED) patients. METHODS: Adult patients treated at one of 3 EDs between November 2010 and September 2011 and prescribed an antibiotic, central nervous system, gastrointestinal, cardiac, or respiratory drug at discharge were eligible. Subjects were randomly assigned to usual care or one of 3 prescription information or services intervention groups: (1) practical services to reduce barriers to prescription filling (practical prescription information or services); (2) consumer drug information from MedlinePlus (MedlinePlus prescription information or services); or (3) both services and information (combination prescription information or services). Self-reported medication adherence, measured by primary adherence (prescription filling) and persistence (receiving medicine as prescribed) rates, was determined during a telephone interview 1 week postdischarge. RESULTS: Of the 3,940 subjects enrolled and randomly allocated to treatment, 86% (N=3,386) completed the follow-up interview. Overall, primary adherence was 88% and persistence was 48%. Across the sites, primary adherence and persistence did not differ significantly between usual care and the prescription information or services groups. However, at site C, subjects who received the practical prescription information or services (odds ratio [OR]=2.4; 95% confidence interval [CI] 1.4 to 4.3) or combination prescription information or services (OR=1.8; 95% CI 1.1 to 3.1) were more likely to fill their prescription compared with usual care. Among subjects prescribed a drug that treats an underlying condition, subjects who received the practical prescription information or services were more likely to fill their prescription (OR=1.8; 95% CI 1.0 to 3.1) compared with subjects who received usual care. CONCLUSION: Prescription filling and receiving medications as prescribed were not meaningfully improved by offering patients patient-centered prescription information and services.
    Annals of emergency medicine 04/2013; · 4.33 Impact Factor
  •
    ABSTRACT: STUDY OBJECTIVE: We determine the validity of self-reported prescription filling among emergency department (ED) patients. METHODS: We analyzed a subgroup of 1,026 patients enrolled in a randomized controlled trial who were prescribed at least 1 medication at ED discharge, were covered by Medicaid insurance, and completed a telephone interview 1 week after the index ED visit. We extracted all pharmacy and health care use claims information from a state Medicaid database for all subjects within 30 days of their index ED visit. We used the pharmacy claims as the criterion standard and evaluated the accuracy of self-reported prescription filling obtained during the follow-up interview by estimating its sensitivity, specificity, positive likelihood ratio and negative likelihood ratio tests. We also examined whether the accuracy of self-reported prescription filling varied significantly by patient and clinical characteristics. RESULTS: Of the 1,635 medications prescribed, 74% were filled according to the pharmacy claims. Subjects reported filling 90% of prescriptions for a difference of 16% (95% confidence interval [CI] 14% to 18%). The self-reported data had high sensitivity (0.96; 95% CI 0.95 to 0.97) but low specificity (0.30; 95% CI 0.26 to 0.34). The positive likelihood ratio (1.37; 95% CI 1.29 to 2.46) and negative likelihood ratio (0.13; 95% CI 0.09 to 0.17) tests indicate that self-reported data are not a good indicator of prescription filling but are a moderately good indicator of nonfulfillment. Several factors were significantly associated with lower sensitivity (drug class and over-the-counter medications) and specificity (drug class, as needed, site and previous ED use). CONCLUSION: Self-reported prescription filling is overestimated and associated with few factors.
    Annals of emergency medicine 03/2013; · 4.33 Impact Factor
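
    The sketch below computes sensitivity, specificity, and the likelihood ratios of self-report against pharmacy claims as the criterion standard, the same summary measures reported above. The 2x2 counts are synthetic placeholders, not the study data.

```python
# Hedged sketch: accuracy of self-reported filling vs. pharmacy claims.
# Synthetic 2x2 counts, not the study data.
tp, fn = 1160, 50    # claims say filled: self-report yes / no
fp, tn = 290, 135    # claims say not filled: self-report yes / no

sens = tp / (tp + fn)
spec = tn / (tn + fp)
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```
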
  •
    ABSTRACT: Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models (MMMs) embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, there has been a gap in their practical application arising from a lack of readily available estimation procedures. We extend the marginalized multilevel model to allow for nonlinear functions in both the mean and association aspects. We then formulate marginal models through conditional specifications to facilitate estimation with mixed model computational solutions already in place. We illustrate the MMM and approximate MMM approaches on a cerebrovascular deficiency crossover trial using SAS and an epidemiological study on race and visual impairment using R. Datasets, SAS and R code are included as supplemental materials.
    Stat 01/2013; 2(1).
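
    One way to see the idea of recovering marginal quantities from a conditionally specified model is the familiar logistic-normal attenuation factor: coefficients from a random-intercept logit can be rescaled to approximate population-averaged coefficients. The sketch below is a hedged illustration of that classic approximation (Zeger, Liang & Albert, 1988), not the exact MMM estimator of the paper.

```python
# Hedged sketch: rescaling conditional (random-intercept) logistic coefficients
# to approximate marginal, population-averaged coefficients. The attenuation
# factor is the classic logistic-normal approximation, not the paper's MMM fit.
import numpy as np

def approx_marginal(beta_conditional, random_intercept_var):
    """Approximate marginal log-odds ratios from conditional ones."""
    c = 16.0 * np.sqrt(3.0) / (15.0 * np.pi)
    return np.asarray(beta_conditional) / np.sqrt(1.0 + c**2 * random_intercept_var)

# Illustrative conditional estimates with random-intercept variance 2.0.
print(approx_marginal([0.90, -0.40], random_intercept_var=2.0))
```
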
  • Source
    ABSTRACT: OBJECTIVE: To develop a model to produce real-time, updated forecasts of patients' intensive care unit length of stay using naturally generated provider orders. The model was designed to be integrated within a computerized decision support system to improve patient flow management. DESIGN: Retrospective cohort study. SETTING: Twenty-six-bed pediatric intensive care unit within an urban, academic children's hospital using a computerized order entry system. PATIENTS: A total of 2,178 consecutive pediatric intensive care unit admissions during a 16-month time period. MEASUREMENTS AND MAIN RESULTS: We obtained unit length of stay measurements, time-stamped provider orders, age, admission source, and readmission status. A joint discrete-time logistic regression model was developed to produce probabilistic length of stay forecasts from continuously updated provider orders. Accuracy was assessed by comparing forecasted expected discharge time with observed discharge time, rank probability scoring, and calibration curves. Cross-validation procedures were conducted. The distribution of length of stay was heavily right-skewed with a mean of 3.5 days (95% confidence interval 0.3-19.1). Provider orders were predictive of length of stay in real time, accurately forecasting discharge within a 12-hr window: 46% for patients within 1 day of discharge, 34% for patients within 2 days of discharge, and 27% for patients within 3 days of discharge. The forecast model incorporating predictive orders demonstrated significant improvements in accuracy compared with forecasts based solely on empirical and temporal information. Seventeen predictive orders were found, grouped by medication, ventilation, laboratory, diet, activity, foreign body, and extracorporeal membrane oxygenation. CONCLUSIONS: Provider orders reflect dynamic changes in patients' conditions, making them useful for real-time length of stay prediction and patient flow management. Patients' length of stay represents a major source of variability in intensive care unit resource utilization and, if accurately predicted and communicated, may lead to proactive bed management with more efficient patient flow.
    Critical care medicine 07/2012; 40(11):3058-3064. · 6.37 Impact Factor
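
    A hedged sketch of a discrete-time discharge model of the kind described above: each stay is expanded into one record per interval, a binary indicator marks whether discharge occurred in that interval, and a logistic regression relates the discharge hazard to elapsed time and to whether a "predictive" order is active. The diet_order variable, data, and coefficients are illustrative assumptions, not the published model.

```python
# Hedged sketch: a discrete-time discharge model. Each stay is expanded into
# 12-hour person-period records; "diet_order" is a hypothetical predictive
# order active from some point in the stay. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(300):
    los = int(rng.integers(2, 20))                 # stay length in 12-h intervals
    diet_on = int(rng.integers(1, los + 1))        # interval when diet order starts
    for t in range(1, los + 1):
        rows.append({"pid": pid, "interval": t,
                     "diet_order": int(t >= diet_on),
                     "discharged": int(t == los)})
pp = pd.DataFrame(rows)

# Discrete-time hazard of discharge in the current interval.
fit = smf.logit("discharged ~ interval + diet_order", data=pp).fit(disp=0)
print(np.exp(fit.params))                          # odds ratios
```
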
  • Source
    New England Journal of Medicine 01/2012; 366(3):250-7. · 54.42 Impact Factor
  •
    ABSTRACT: To examine the degree to which fast track (FT) treatment time varies among providers. A retrospective cohort study that included 105,783 FT visits at 3 emergency departments (EDs) during a 3-year period. We calculated the median treatment time for 80 primary providers (physicians and physician extenders) and 109 nurses (2 sites only). We used a hierarchical linear regression model that accounted for the clustering of patient visits to the same provider to estimate each provider's median treatment time controlling for patient, clinical, temporal, and ED demand (ie, number of arrivals) characteristics. Median FT treatment time across the 3 sites ranged from 48 to 134 minutes. Adjusted for other factors, the median FT treatment time of providers at the 90th versus 10th percentiles was 1.4 to 2.6 times longer across the 3 sites. The variation by FT nurses was also large. The median FT treatment time of nurses at the 90th versus 10th percentiles was 1.5 and 1.4 times longer at sites A and C, respectively. At all sites, provider and clinical factors explained more variation in FT treatment time than patient, ED demand, or temporal factors. There were clinically meaningful differences in FT treatment time among the providers at all sites. Given that the providers share the same environment and patient population, understanding why such large provider variation in FT treatment time exists warrants further investigation.
    Medical care 01/2012; 50(1):43-9. · 3.24 Impact Factor
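
    A hedged sketch of a provider-level random-intercept model for log treatment time, in the spirit of the hierarchical regression described above. Synthetic data; a real model would adjust for the full set of patient, clinical, temporal, and ED-demand covariates and report provider-specific medians.

```python
# Hedged sketch: random-intercept model for log fast-track treatment time with
# visits clustered within providers. Synthetic data and a single ED-demand
# covariate stand in for the full adjustment set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_providers, visits = 40, 100
provider_effect = rng.normal(0, 0.25, n_providers)
df = pd.DataFrame({
    "provider": np.repeat(np.arange(n_providers), visits),
    "arrivals_per_hr": rng.poisson(6, n_providers * visits),
})
df["log_tx_time"] = (4.0 + provider_effect[df["provider"].to_numpy()]
                     + 0.02 * df["arrivals_per_hr"]
                     + rng.normal(0, 0.5, len(df)))

fit = smf.mixedlm("log_tx_time ~ arrivals_per_hr", df, groups=df["provider"]).fit()
print(fit.summary())   # between-provider variance vs. residual variance
```
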
  • Scott L Zeger
    ABSTRACT: In this issue of the Journal, two different articles present epidemiologic evidence supporting the hypotheses that environmental exposures to particulate air pollution or higher temperatures modestly increase the risk of preterm birth. In this commentary, the author discusses environmental epidemiologic methods through the lens of these two papers with respect to the causal question, measurements, and quantification and interpretation of the evidence. Both groups of investigators present results from exploratory analyses that are at the hypothesis-generating end of the research spectrum as opposed to the confirmatory end. The present author describes in qualitative terms a method for decomposing evidence about the association of environmental exposures with prematurity into components representing different temporal and spatial scales. Finally, reproducible epidemiologic research methodology for studies like these is offered as one way to speed the transition from exploratory studies to confirmatory studies.
    American journal of epidemiology 12/2011; 175(2):108-10; discussion 111-3. · 5.59 Impact Factor
  • Source
    ABSTRACT: This consensus conference presentation article focuses on methods of measuring crowding. The authors compare daily versus hourly measures, static versus dynamic measures, and the use of linear or logistic regression models versus survival analysis models to estimate the effect of crowding on an outcome. Emergency department (ED) visit data were used to measure crowding and completion of waiting room time, treatment time, and boarding time for all patients treated and released or admitted to a single ED during 2010 (excluding patients who left without being seen). Crowding was characterized according to total ED census. First, total ED census on a daily and hourly basis throughout the 1-year study period was measured, and the ratios of daily and hourly census to the ED's median daily and hourly census were computed. Second, the person-based ED visit data set was transposed to person-period data. Multiple records per patient were created, whereby each record represented a consecutive 15-minute interval during each patient's ED length of stay (LOS). The variation in crowding measured statically (i.e., crowding at arrival or mean crowding throughout the shift in which the patient arrived) or dynamically (every 15 minutes throughout each patient's ED LOS) was compared. Within each phase of care, the authors divided each individual crowding value by the median crowding value of all 15-minute intervals to create a time-varying ED census ratio. For the two static measures, the ratio between each patient's ED census at arrival and the overall median ED census at arrival was computed, as well as the ratio between the mean shift ED census (based on the shift in which the patient arrived) and the study ED's overall mean shift ED census. Finally, the effect of crowding on the probability of completing different phases of emergency care was compared when estimated using a log-linear regression model versus a discrete time survival analysis model. During the 1-year study period, for 9% of the hours, total ED census was at least 50% greater than the median hourly census (median, 36). In contrast, on none of the days was total ED census at least 50% greater than the median daily census (median, 161). ED census at arrival and time-varying ED census yielded greater variation in crowding exposure compared to mean shift census for all three phases of emergency care. When estimating the effect of crowding on the completion of care, the discrete time survival analysis model fit the observed data better than the log-linear regression models. The discrete time survival analysis model also determined that the effect of crowding on care completion varied during patients' ED LOS. Crowding measured at the daily level will mask much of the variation in crowding that occurs within a 24-hour period. ED census at arrival demonstrated similar variation in crowding exposure as time-varying ED census. Discrete time survival analysis is a more appropriate approach for estimating the effect of crowding on an outcome.
    Academic Emergency Medicine 12/2011; 18(12):1269-77. · 2.20 Impact Factor
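
    The sketch below illustrates the person-period transposition described above: each visit becomes one record per consecutive 15-minute interval carrying a time-varying census ratio, and a discrete-time survival (logistic) model estimates how crowding shifts the probability of completing a phase of care. Data, names, and the median census value are synthetic assumptions.

```python
# Hedged sketch: person-period transposition with a time-varying census ratio
# and a discrete-time survival (logistic) model for completing a phase of care.
# Synthetic data; 36 stands in for the median ED census per 15-minute interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
records = []
for visit in range(500):
    n_intervals = int(rng.integers(2, 25))              # LOS in 15-min intervals
    census = rng.normal(36, 10, n_intervals).clip(5)
    ratio = census / 36.0                               # time-varying census ratio
    for t in range(n_intervals):
        records.append({"visit": visit, "t": t + 1, "census_ratio": ratio[t],
                        "completed": int(t == n_intervals - 1)})
pp = pd.DataFrame(records)

fit = smf.logit("completed ~ t + census_ratio", data=pp).fit(disp=0)
print(np.exp(fit.params["census_ratio"]))   # odds of completing care per unit of crowding
```
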
  • Source
    ABSTRACT: The objective was to determine the effect on patient satisfaction of providing patients with predicted service completion times. A randomized controlled trial was conducted in an urban, community teaching hospital. Emergency department (ED) patients triaged to fast track on weekdays between October 26, 2009, and December 30, 2009, from 9 am to 5 pm were eligible. Patients were randomized to: 1) usual care (n = 342), 2) provided ED process information (n = 336), or 3) provided ED process information plus predicted service delivery times (n = 333). Patients in group 3 were given an "average" and "upper range" estimate of their waiting room times and treatment times. The average and upper range predictions were calculated from quantile regression models that estimated the 50th and 90th percentiles of the waiting room time and treatment time distributions for fast track patients at the study site based on 2.5 years of historical data. Trained research assistants administered the interventions after triage. Patients completed a brief survey at discharge that measured their satisfaction with overall care, the quality of the information they received, and the timeliness of care. Satisfaction ratings of very good versus good, fair, poor, and very poor were modeled using logistic regression as a function of study group; actual service delivery times; and other patient, clinical, and temporal covariates. The study also modeled satisfaction ratings of fair, poor, and very poor compared to good and very good ratings as a function of the same covariates. Survey completion rates and patient, clinical, and temporal characteristics were similar by study group. Median waiting room time was 70 minutes (interquartile range [IQR] = 40 to 114 minutes), and median treatment time was 52 minutes (IQR = 31 to 81 minutes). Neither intervention affected any of the satisfaction outcomes. Satisfaction was significantly associated with actual waiting room time, individual providers, and patient age. Every 10-minute increase in waiting room time corresponded with an 8% decrease (odds ratio [OR] = 0.92; 95% confidence interval [CI] = 0.89 to 0.95) in the odds of reporting very good satisfaction with overall care. The odds of reporting very good satisfaction with care were lower for several triage nurses and fast track nurses, compared to the triage nurse and fast track nurse who treated the most study patients. Each 10-minute increase in waiting room time was also associated with a 10% increase in the odds of reporting very poor, poor, or fair satisfaction with overall care (OR = 1.10; 95% CI = 1.06 to 1.14). The odds of reporting very poor, poor, or fair satisfaction with overall care also varied significantly among the triage nurses, fast track doctors, and fast track nurses. The odds of reporting very poor, poor, or fair satisfaction with overall care were significantly lower among patients aged 35 years and older compared to patients aged 18 to 34 years. Satisfaction with overall care was influenced by waiting room time and the clinicians who treated them and not by service completion time estimates provided at triage.
    Academic Emergency Medicine 07/2011; 18(7):674-85. · 2.20 Impact Factor
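
    A minimal sketch of producing the "average" (median) and "upper range" (90th percentile) estimates given to patients at triage, computed here directly from a synthetic historical distribution of fast-track waits rather than from the quantile regression models used in the study.

```python
# Hedged sketch: the "average" and "upper range" service-time estimates given
# at triage, here simply the 50th and 90th percentiles of synthetic historical
# fast-track waiting times (the study estimated them with quantile regression).
import numpy as np

rng = np.random.default_rng(11)
historical_waits = rng.lognormal(mean=4.2, sigma=0.6, size=10_000)   # minutes

median_est, upper_est = np.quantile(historical_waits, [0.50, 0.90])
print(f"Typical wait: about {median_est:.0f} minutes; "
      f"most patients are seen within {upper_est:.0f} minutes.")
```
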
  •
    ABSTRACT: Triage standing orders are used in emergency departments (EDs) to initiate evaluation when there is no bed available. This study evaluates the effect of diagnostic triage standing orders on ED treatment time of adult patients who presented with a chief complaint for which triage standing orders had been developed. We conducted a retrospective nested cohort study of patients treated in one academic ED between January 2007 and August 2009. In this ED, triage nurses can initiate full or partial triage standing orders for patients with chest pain, shortness of breath, abdominal pain, or genitourinary complaints. We matched patients who received triage standing orders to those who received room orders with respect to clinical and temporal factors, using a propensity score. We compared the median treatment time of patients with triage standing orders (partial or full) to those with room orders, using multivariate linear regression. Of the 15,188 eligible patients, 25% received full triage standing orders, 56% partial triage standing orders, and 19% room orders. The unadjusted median ED treatment time for patients who did not receive triage standing orders was 282 minutes versus 230 minutes for those who received a partial triage standing order or full triage standing orders (18% decrease). Controlling for other factors, triage standing orders were associated with a 16% reduction (95% confidence interval -18% to -13%) in the median treatment time, regardless of chief complaint. Diagnostic testing at triage was associated with a substantial reduction in ED treatment time for 4 common chief complaints. This intervention warrants further evaluation in other EDs and with different clinical conditions and tests.
    Annals of emergency medicine 02/2011; 57(2):89-99.e2. · 4.33 Impact Factor
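
    A hedged sketch of propensity-score matching in the spirit of the design described above: estimate each patient's propensity to receive triage standing orders, form 1:1 nearest-neighbor matches, and compare median treatment times in the matched sample. Synthetic data and simplified covariates; the published analysis also modeled the median treatment time with multivariate regression.

```python
# Hedged sketch: propensity-score estimation and greedy 1:1 nearest-neighbor
# matching of triage-standing-order (TSO) patients to room-order patients,
# followed by a matched comparison of median treatment times. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({"chest_pain": rng.integers(0, 2, n),
                   "night_shift": rng.integers(0, 2, n)})
p_tso = 1 / (1 + np.exp(-(-0.5 + 1.0 * df.chest_pain - 0.5 * df.night_shift)))
df["tso"] = rng.binomial(1, p_tso)
df["treat_time"] = rng.lognormal(5.6 - 0.18 * df.tso + 0.1 * df.chest_pain, 0.4)

df["pscore"] = smf.logit("tso ~ chest_pain + night_shift", data=df).fit(disp=0).predict()

treated = df[df.tso == 1]
controls = df[df.tso == 0].copy()
pairs = []
for i, ps in treated["pscore"].items():          # greedy nearest-neighbor match
    j = (controls["pscore"] - ps).abs().idxmin()
    pairs.extend([i, j])
    controls = controls.drop(j)
    if controls.empty:
        break
matched = df.loc[pairs]
print(matched.groupby("tso")["treat_time"].median())   # matched medians (minutes)
```
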
  • Source
    ABSTRACT: There is substantial observational evidence that long-term exposure to particulate air pollution is associated with premature death in urban populations. Estimates of the magnitude of these effects derive largely from cross-sectional comparisons of adjusted mortality rates among cities with varying pollution levels. Such estimates are potentially confounded by other differences among the populations correlated with air pollution, for example, socioeconomic factors. An alternative approach is to study covariation of particulate matter and mortality across time within a city, as has been done in investigations of short-term exposures. In either event, observational studies like these are subject to confounding by unmeasured variables. Therefore the ability to detect such confounding and to derive estimates less affected by confounding are a high priority. In this article, we describe and apply a method of decomposing the exposure variable into components with variation at distinct temporal, spatial, and time by space scales, here focusing on the components involving time. Starting from a proportional hazard model, we derive a Poisson regression model and estimate two regression coefficients: the “global” coefficient that measures the association between national trends in pollution and mortality; and the “local” coefficient, derived from space by time variation, that measures the association between location-specific trends in pollution and mortality adjusted by the national trends. Absent unmeasured confounders and given valid model assumptions, the scale-specific coefficients should be similar; substantial differences in these coefficients constitute a basis for questioning the model. We derive a backfitting algorithm to fit our model to very large spatio-temporal datasets. We apply our methods to the Medicare Cohort Air Pollution Study (MCAPS), which includes individual-level information on time of death and age on a population of 18.2 million for the period 2000–2006. Results based on the global coefficient indicate a large increase in the national life expectancy for reductions in the yearly national average of PM2.5. However, this coefficient based on national trends in PM2.5 and mortality is likely to be confounded by other variables trending on the national level. Confounding of the local coefficient by unmeasured factors is less likely, although it cannot be ruled out. Based on the local coefficient alone, we are not able to demonstrate any change in life expectancy for a reduction in PM2.5. We use additional survey data available for a subset of the data to investigate sensitivity of results to the inclusion of additional covariates, but both coefficients remain largely unchanged.
    Journal of the American Statistical Association 01/2011; 106(494):396-406. · 1.83 Impact Factor
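
    A hedged illustration of the scale decomposition idea: split PM2.5 into a national yearly trend (the "global" component) and the location-specific deviation from it (the "local" component), and let a Poisson model for death counts estimate a separate coefficient for each. Synthetic data; this is not the paper's backfitting algorithm for 18.2 million individual records.

```python
# Hedged sketch: decompose PM2.5 into a national yearly trend ("global") and a
# county-specific deviation ("local"), then give each its own coefficient in a
# Poisson model for deaths with a person-years offset. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame([(c, y) for c in range(50) for y in range(2000, 2007)],
                  columns=["county", "year"])
df["pm25"] = 14 - 0.4 * (df.year - 2000) + rng.normal(0, 1.5, len(df))
df["person_years"] = rng.integers(5_000, 50_000, len(df))
df["deaths"] = rng.poisson(0.05 * df.person_years * np.exp(0.01 * df.pm25))

national = df.groupby("year")["pm25"].transform("mean")   # global component
df["pm_global"] = national
df["pm_local"] = df["pm25"] - national                    # space-by-time component

fit = smf.glm("deaths ~ pm_global + pm_local + C(county)", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["person_years"])).fit()
print(fit.params[["pm_global", "pm_local"]])   # compare scale-specific coefficients
```
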
  • Source
    ABSTRACT: The objective was to characterize service completion times by patient, clinical, temporal, and crowding factors for different phases of emergency care using quantile regression (QR). A retrospective cohort study was conducted on 1-year visit data from four academic emergency departments (EDs; N = 48,896-58,316). From each ED's clinical information system, the authors extracted electronic service information (date and time of registration; bed placement, initial contact with physician, disposition decision, ED discharge, and disposition status; inpatient medicine bed occupancy rate); patient demographics (age, sex, insurance status, and mode of arrival); and clinical characteristics (acuity level and chief complaint) and then used the service information to calculate patients' waiting room time, treatment time, and boarding time, as well as the ED occupancy rate. The 10th, 50th, and 90th percentiles of each phase of care were estimated as a function of patient, clinical, temporal, and crowding factors using multivariate QR. Accuracy of models was assessed by comparing observed and predicted service completion times and the proportion of observations that fell below the predicted 10th, 50th, and 90th percentiles. At the 90th percentile, patients experienced long waiting room times (105-222 minutes), treatment times (393-616 minutes), and boarding times (381-1,228 minutes) across the EDs. We observed a strong interaction effect between acuity level and temporal factors (i.e., time of day and day of week) on waiting room time at all four sites. Acuity level 3 patients waited the longest across the four sites, and their waiting room times were most influenced by temporal factors compared to other acuity level patients. Acuity level and chief complaint were important predictors of all phases of care, and there was a significant interaction effect between acuity and chief complaint. Patients with a psychiatric problem experienced the longest treatment times, regardless of acuity level. Patients who presented with an injury did not wait as long for an ED or inpatient bed. Temporal factors were strong predictors of service completion time, particularly waiting room time. Mode of arrival was the only patient characteristic that substantially affected waiting room time and treatment time. Patients who arrived by ambulance had shorter wait times but longer treatment times compared to those who did not arrive by ambulance. There was close agreement between observed and predicted service completion times at the 10th, 50th, and 90th percentile distributions across the four EDs. Service completion times varied significantly across the four academic EDs. QR proved to be a useful method for estimating the service completion experience of not only typical ED patients, but also the experience of those who waited much shorter or longer. Building accurate models of ED service completion times is a critical first step needed to identify barriers to patient flow, begin the process of reengineering the system to reduce variability, and improve the timeliness of care provided.
    Academic Emergency Medicine 08/2010; 17(8):813-23. · 2.20 Impact Factor
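
    A hedged sketch of quantile regression for waiting room time at the 10th, 50th, and 90th percentiles, showing how a covariate's effect can differ across the distribution. Synthetic data and simplified covariates; the study's models included many more patient, clinical, temporal, and crowding factors.

```python
# Hedged sketch: quantile regression for waiting room time at three percentiles.
# Synthetic data and two covariates only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 5000
df = pd.DataFrame({"acuity": rng.integers(1, 6, n),      # 1 = most acute
                   "ambulance": rng.integers(0, 2, n)})
df["wait_min"] = rng.lognormal(2.5 + 0.3 * df.acuity - 0.4 * df.ambulance, 0.6)

model = smf.quantreg("wait_min ~ acuity + ambulance", data=df)
for q in (0.10, 0.50, 0.90):
    fit = model.fit(q=q)
    print(q, fit.params[["acuity", "ambulance"]].round(1).to_dict())
```
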
  • Source
    ABSTRACT: When estimating the association between an exposure and outcome, a simple approach to quantifying the amount of confounding by a factor, Z, is to compare estimates of the exposure-outcome association with and without adjustment for Z. This approach is widely believed to be problematic due to the nonlinearity of some exposure-effect measures. When the expected value of the outcome is modeled as a nonlinear function of the exposure, the adjusted and unadjusted exposure effects can differ even in the absence of confounding (Greenland, Robins, and Pearl, 1999); we call this the nonlinearity effect. In this paper, we propose a corrected measure of confounding that does not include the nonlinearity effect. The performances of the simple and corrected estimates of confounding are assessed in simulations and illustrated using a study of risk factors for low birth-weight infants. We conclude that the simple estimate of confounding is adequate or even preferred in settings where the nonlinearity effect is very small. In settings with a sizable nonlinearity effect, the corrected estimate of confounding has improved performance.
    Biostatistics 03/2010; 11(3):572-82. · 2.43 Impact Factor
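
    The simulation below illustrates the nonlinearity (non-collapsibility) effect the paper corrects for: with Z generated independently of the exposure X there is no confounding, yet the unadjusted and Z-adjusted log odds ratios for X still differ, so their simple difference overstates confounding. Synthetic data for illustration only.

```python
# Hedged sketch of the non-collapsibility ("nonlinearity") effect: Z is
# independent of exposure X, so there is no confounding, yet adjusted and
# unadjusted log odds ratios for X differ. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 100_000
x = rng.integers(0, 2, n)
z = rng.integers(0, 2, n)                     # independent of x
p = 1 / (1 + np.exp(-(-1.0 + 1.0 * x + 2.0 * z)))
df = pd.DataFrame({"x": x, "z": z, "y": rng.binomial(1, p)})

b_unadj = smf.logit("y ~ x", data=df).fit(disp=0).params["x"]
b_adj = smf.logit("y ~ x + z", data=df).fit(disp=0).params["x"]
print(b_unadj, b_adj, b_adj - b_unadj)        # gap is nonlinearity, not confounding
```
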
  •
    ABSTRACT: Delirium is common after cardiac surgery, although under-recognized, and its long-term consequences are likely underestimated. The primary goal of this study was to determine whether patients with delirium after coronary artery bypass graft (CABG) surgery have higher long-term out-of-hospital mortality when compared with CABG patients without delirium. We studied 5,034 consecutive patients undergoing CABG surgery at a single institution from 1997 to 2007. Presence or absence of neurologic complications, including delirium, was assessed prospectively. Survival analysis was performed to determine the role of delirium in the hazard of death, including a propensity score to adjust for potential confounders. These analyses were repeated to determine the association between postoperative stroke and long-term mortality. Individuals with delirium had an increased hazard of death (adjusted hazard ratio [HR], 1.65; 95% confidence interval [CI], 1.38-1.97) up to 10 years postoperatively, after adjustment for perioperative and vascular risk factors. Patients with postoperative stroke had an HR of 2.34 (95% CI, 1.87-2.92). The effect of delirium on subsequent mortality was strongest among those without a prior stroke (HR 1.83 vs HR 1.11 [with a prior stroke] [p-interaction = 0.02]) or who were younger (HR 2.42 [<65 years old] vs HR 1.49 [≥65 years old] [p-interaction = 0.04]). Delirium after cardiac surgery is a strong independent predictor of mortality up to 10 years postoperatively, especially in younger individuals and in those without prior stroke. Future studies are needed to determine the impact of delirium prevention and/or treatment on long-term patient mortality.
    Annals of Neurology 03/2010; 67(3):338-44. · 11.19 Impact Factor
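
    A hedged sketch of a propensity-adjusted Cox proportional hazards model for long-term mortality with postoperative delirium as the exposure, echoing the survival analysis described above. Synthetic data; it assumes the third-party lifelines package and is not the published analysis.

```python
# Hedged sketch: Cox model for 10-year mortality with delirium as exposure and
# a propensity score as an adjustment covariate. Synthetic data; lifelines assumed.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n = 5000
delirium = rng.binomial(1, 0.12, n)
pscore = np.clip(rng.beta(2, 12, n) + 0.10 * delirium, 0, 1)   # propensity for delirium
hazard = 0.02 * np.exp(0.5 * delirium + 1.0 * pscore)
time = rng.exponential(1 / hazard)

df = pd.DataFrame({
    "years": np.minimum(time, 10.0),          # administrative censoring at 10 years
    "died": (time <= 10.0).astype(int),
    "delirium": delirium,
    "pscore": pscore,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()   # adjusted hazard ratio for delirium
```
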
  • Source
    ABSTRACT: The goal of the paper is to determine the inter-rater reliability of trained examiners performing standardized strength assessments using manual muscle testing (MMT). DESIGN, SUBJECTS, AND SETTING: The authors report on 19 trainees undergoing quality assurance within a multi-site prospective cohort study. Inter-rater reliability was evaluated for specially trained evaluators ("trainees") and a reference rater performing MMT on both simulated and actual patients recovering from critical illness. Across 26 muscle groups tested by 19 trainee-reference rater pairs, the median (interquartile range) percent agreement and intraclass correlation coefficient (ICC; 95% CI) were 96% (91, 98%) and 0.98 (0.95, 1.00), respectively. Across all 19 pairs, the ICC (95% CI) for the overall composite MMT score was 0.99 (0.98-1.00). When limited to actual patients, the ICC was 1.00 (95% CI 0.99-1.00). The agreement (kappa; 95% CI) in detecting clinically significant weakness was 0.88 (0.44-1.00). MMT has excellent inter-rater reliability in trained examiners and is a reliable method of comprehensively assessing muscle strength.
    Intensive Care Medicine 03/2010; 36(6):1038-43. · 5.17 Impact Factor
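
    A minimal sketch of two of the agreement measures reported above, percent agreement and Cohen's kappa, for one trainee-reference rater pair classifying clinically significant weakness across 26 muscle groups. Synthetic ratings; the ICCs for the continuous MMT scores are not computed here.

```python
# Hedged sketch: percent agreement and Cohen's kappa for one trainee-reference
# pair rating clinically significant weakness (yes/no) in 26 muscle groups.
# Synthetic ratings for illustration only.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(9)
reference = rng.integers(0, 2, 26)                       # reference rater
trainee = np.where(rng.random(26) < 0.92, reference, 1 - reference)

agreement = (reference == trainee).mean()
kappa = cohen_kappa_score(reference, trainee)
print(f"percent agreement={agreement:.2f}, kappa={kappa:.2f}")
```
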
  • Source
    ABSTRACT: Impaired cardiac function can adversely affect the brain via decreased perfusion. The purpose of this study was to determine if cardiac ejection fraction (EF) is associated with cognitive performance, and whether this is modified by low blood pressure. Neuropsychological testing evaluating multiple cognitive domains, measurement of mean arterial pressure (MAP), and measurement of EF were performed in 234 individuals with coronary artery disease. The association between level of EF and performance within each cognitive domain was explored, as was the interaction between low MAP and EF. Adjusted global cognitive performance, as well as performance in visuoconstruction and motor speed, was significantly directly associated with cardiac EF. This relationship was not entirely linear, with a steeper association between EF and cognition at lower levels of EF than at higher levels. Patients with low EF and low MAP at the time of testing had worse cognitive performance than either of these alone, particularly for the global and motor speed cognitive scores. Low EF may be associated with worse cognitive performance, particularly among individuals with low MAP and for cognitive domains typically associated with vascular cognitive impairment. Further care should be paid to hypotension in the setting of heart failure, as this may exacerbate cerebral hypoperfusion.
    Behavioural neurology 01/2010; 22(1-2):63-71. · 1.25 Impact Factor
  • Source
    ABSTRACT: Statin use before surgery has been associated with reduced morbidity and mortality after vascular surgery. The effect of preoperative statin use on stroke and encephalopathy after coronary artery bypass grafting (CABG) is unclear. A post hoc analysis was undertaken of a prospectively collected cohort of isolated CABG patients over a 10-year period at a single institution. Primary outcomes were stroke and encephalopathy. Univariable analyses identified risk factors for statin use, which were entered into a propensity score model using logistic regression, and patients were divided into quintiles of propensity for statin use. Controlling for propensity score quintile, the odds ratio (OR) of combined stroke and encephalopathy (primary endpoint), cardiovascular mortality, myocardial infarction, and length of stay were compared between statin users and nonusers. There were 5,121 CABG patients, of whom 2,788 (54%) were taking statin medications preoperatively. Stroke occurred in 166 (3.2%) and encephalopathy in 438 (8.6%), contributing to 604 patients (11.8%) who met the primary endpoint. The unadjusted OR of stroke/encephalopathy in statin users was 1.053 (95% confidence interval [CI] 0.888-1.248, p = 0.582). Adjustment based on propensity score resulted in balance of stroke risk factors among quintiles. The propensity score-adjusted OR of stroke/encephalopathy in statin users was 0.958 (95% CI 0.784-1.170, p = 0.674). There were no significant differences in cardiovascular mortality, myocardial infarction, or length of stay between statin users and otherwise similar nonusers. In this large data cohort study, preoperative statin use was not associated with a decreased incidence of stroke and encephalopathy after coronary artery bypass grafting.
    Neurology 11/2009; 73(24):2099-106. · 8.30 Impact Factor
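
    A hedged sketch of propensity-score quintile adjustment as described above: estimate each patient's propensity to be on a preoperative statin, cut the scores into quintiles, and include the quintile as a covariate when estimating the statin-outcome odds ratio. Synthetic data and simplified covariates; the published model balanced many more risk factors.

```python
# Hedged sketch: propensity-score quintile adjustment for preoperative statin
# use. Synthetic data with an outcome unrelated to statin use by construction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 5000
df = pd.DataFrame({"age": rng.normal(65, 10, n),
                   "diabetes": rng.integers(0, 2, n)})
p_statin = 1 / (1 + np.exp(-(-3.0 + 0.04 * df.age + 0.6 * df.diabetes)))
df["statin"] = rng.binomial(1, p_statin)
df["stroke_enceph"] = rng.binomial(1, 0.10 + 0.03 * df.diabetes)

df["pscore"] = smf.logit("statin ~ age + diabetes", data=df).fit(disp=0).predict()
df["ps_quintile"] = pd.qcut(df["pscore"], 5, labels=False)

fit = smf.logit("stroke_enceph ~ statin + C(ps_quintile)", data=df).fit(disp=0)
print(np.exp(fit.params["statin"]))   # quintile-adjusted odds ratio
```
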
  • Source
    ABSTRACT: Previous uncontrolled studies have suggested that there is late cognitive decline after coronary artery bypass grafting that may be attributable to use of the cardiopulmonary bypass pump. In this prospective, nonrandomized, longitudinal study, we compared cognitive outcomes among patients who underwent on-pump coronary artery bypass surgery (n = 152), patients who underwent off-pump bypass surgery (n = 75), nonsurgical cardiac comparison subjects (n = 99), and 69 heart-healthy comparison (HHC) subjects. The primary outcome measure was change from baseline to 72 months in the following cognitive domains: verbal memory, visual memory, visuoconstruction, language, motor speed, psychomotor speed, attention, executive function, and a composite global score. There were no consistent differences in 72-month cognitive outcomes among the three groups with coronary artery disease (CAD). The CAD groups had lower baseline performance and a greater degree of decline compared with the HHC subjects. The degree of change was small, with none of the groups having more than 0.5 SD decline. None of the groups was substantially worse at 72 months compared with baseline. Compared with subjects with no vascular disease risk factors, the CAD patients had lower baseline cognitive performance and greater degrees of decline over 72 months, suggesting that in these patients, vascular disease may have an impact on cognitive performance. We found no significant differences in the long-term cognitive outcomes among patients with various CAD therapies, indicating that management strategy for CAD is not an important determinant of long-term cognitive outcomes.
    The Annals of thoracic surgery 09/2009; 88(2):445-454. · 3.45 Impact Factor

Publication Stats

24k Citations
845.79 Total Impact Points

Institutions

  • 1994–2013
    • Johns Hopkins Bloomberg School of Public Health
      • Department of Biostatistics
      • Department of International Health
      • Department of Epidemiology
      Baltimore, Maryland, United States
  • 2011–2012
    • George Washington University
      • Department of Health Policy
      • Center for Health Care Quality
      Washington, DC, United States
  • 1989–2012
    • Johns Hopkins University
      • Department of Neurology
      • Department of Biostatistics
      • Department of Medicine
      Baltimore, MD, United States
  • 1995–2010
    • Fred Hutchinson Cancer Research Center
      Seattle, Washington, United States
  • 2008
    • Mayo Foundation for Medical Education and Research
      • Department of Medicine
      Scottsdale, AZ, United States
  • 2004–2008
    • Yale University
      • School of Forestry and Environmental Studies
      New Haven, CT, United States
    • University of Florida
      • Department of Statistics
      Gainesville, FL, United States
  • 2007
    • Mayo Clinic - Rochester
      • Department of General Internal Medicine
      Rochester, Minnesota, United States
  • 1996–2005
    • Johns Hopkins Medicine
      • Department of Biostatistics
      • Division of Geriatric Medicine and Gerontology
      • The Kelly Gynecologic Oncology Service
      • Department of Medicine
      Baltimore, MD, United States
    • INRCA Istituto Nazionale di Ricovero e Cura per Anziani
      Ancona, The Marches, Italy
  • 2002
    • Kennedy Krieger Institute
      • Department of Neurology
      Baltimore, MD, United States
  • 2001
    • Harvard University
      Cambridge, Massachusetts, United States
  • 2000
    • Iowa State University
      • Department of Statistics
      Ames, IA, United States
    • University of Washington Seattle
      • Department of Biostatistics
      Seattle, WA, United States
  • 1998
    • University of Chicago
      • Department of Medicine
      Chicago, IL, United States
  • 1990
    • The Ohio Environmental Protection Agency
      Columbus, Ohio, United States