ABSTRACT: Acute lower respiratory infections (ALRI) are a leading cause of death among young children in low- and middle-income countries. Low birthweight is highly prevalent in South Asia and is associated with increased risks of mortality, morbidity, and poor motor and cognitive development. High levels of indoor household air pollution caused by open burning of biomass fuels such as wood, animal dung, and crop waste are common in these settings and are associated with high rates of ALRI and low birthweight. Alternative stove designs that burn biomass fuel more efficiently have been proposed as one method for reducing these high exposures and lowering the rates of these disorders. We designed two randomized trials to test this hypothesis. Methods/design: We conducted a pair of community-based, randomized trials of alternative cookstove installation in a rural district in southern Nepal. Phase one was a cluster-randomized, modified step-wedge design using an alternative biomass stove with a chimney to vent smoke to the exterior. A pre-installation period of morbidity assessment and household environmental assessment was conducted for six months in all households. This was followed by a one-year step-wedge phase with 12 monthly steps for clusters of households to receive the alternative stove. The timing of alternative stove introduction was randomized. This step-wedge phase was followed in all households by another six-month follow-up phase. Eligibility criteria for phase one included household informed consent and the presence of a married woman of reproductive age (15-30 years) or a child < 36 months. Children were followed until 36 months of age or the end of the trial and then discharged.
Pregnancies were identified and followed until completion or the end of the trial. Phase two was an individually randomized trial of the same alternative biomass stove versus liquid propane gas stove installation in a subset of households that participated in phase one. Follow-up for phase two was 12 months following stove installation. Eligibility criteria included the same components as phase one, except that children were enrolled for morbidity follow-up only if they were less than 24 months old at the start. The primary outcomes were the incidence of ALRI in children and birthweight among newborn infants.
We have presented the design and methods of two randomized trials of alternative cookstoves on rates of acute lower respiratory infection and birthweight in a rural population in southern Nepal. Trial registration: Clinicaltrials.gov (NCT00786877, Nov. 5, 2008).
BMC Public Health 12/2014; 14(1):1271. · 2.32 Impact Factor
ABSTRACT: In 2006, Massachusetts expanded insurance coverage to many low-income individuals.
This study aimed to estimate the change in emergency department (ED) utilization per individual among a cohort who qualified for subsidized health insurance following the Massachusetts health care reform.
We obtained Massachusetts public health insurance enrollment data for the fiscal years 2004-2008 and identified 353,515 adults who enrolled in Commonwealth Care (CommCare), a program that subsidizes insurance for low-income adults. We merged the enrollment data with statewide ED visit claims and created a longitudinal file that indicated each enrollee's ED visits and insurance status each month during the preenrollment and postenrollment periods.
We estimated the ratio in an individual's odds of an ED visit during the postperiod versus preperiod by conditional logistic regression.
Among the 112,146 CommCare enrollees who made at least 1 ED visit during the study period, an individual's odds of an ED visit decreased 4% [odds ratio (OR)=0.96; 95% confidence interval (CI), 0.94, 0.98] postenrollment. However, this change varied significantly depending on preenrollment insurance status. A person's odds of an ED visit were 12% higher in the postperiod among enrollees who were not previously publicly insured (OR=1.12; 95% CI, 1.10, 1.25), but were 18% lower among enrollees who transitioned from the Health Safety Net, a program that pays for limited services for low-income individuals (OR=0.82; 95% CI, 0.78, 0.85).
Expanding subsidized health insurance did not uniformly change ED utilization for all newly insured low-income adults in Massachusetts.
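The pre/post comparison above can be sketched in miniature. For a 1:1 self-matched design in which each enrollee contributes one pre-enrollment and one post-enrollment observation, the conditional logistic regression estimate of the post-versus-pre odds ratio reduces to the ratio of discordant pairs. This is a simplification of the study's monthly person-period model, and the counts below are invented:

```python
# pairs of (pre_visit, post_visit) 0/1 indicators, one per enrollee; the
# conditional MLE of the post-vs-pre odds ratio uses only discordant pairs.
def paired_conditional_or(pairs):
    post_only = sum(1 for pre, post in pairs if post and not pre)
    pre_only = sum(1 for pre, post in pairs if pre and not post)
    return post_only / pre_only

# Invented counts: 90 enrollees visited only post, 100 only pre; concordant
# enrollees (visits in both periods or neither) drop out of the estimate.
pairs = [(0, 1)] * 90 + [(1, 0)] * 100 + [(1, 1)] * 50 + [(0, 0)] * 200
print(paired_conditional_or(pairs))  # 0.9
```

Conditioning on each person in this way removes all stable person-level characteristics, which is why the design needs no covariate adjustment for them.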
ABSTRACT: Great uncertainty exists around indoor biomass burning exposure-disease relationships due to a lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low-resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can be biased, however, by particle growth in highly humid environments and by differences in compositional and size-dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations in a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, a new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting only for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have a single response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward to apply.
International Journal of Environmental Research and Public Health 06/2014; 11(6):6400-6416. · 1.99 Impact Factor
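The integrated conversion idea in the abstract above can be illustrated with a small sketch: regress log gravimetric PM2.5 on the log nephelometer reading and RH in one model, so no separate humidity-threshold step is needed. All data and coefficients below are invented for illustration and are not the paper's calibration values:

```python
import numpy as np

# Hypothetical co-located calibration data: nephelometer readings (neph),
# relative humidity (rh, %), and gravimetric PM2.5 mass (grav, ug/m3).
rng = np.random.default_rng(0)
neph = rng.uniform(50.0, 2000.0, 200)
rh = rng.uniform(20.0, 95.0, 200)
grav = neph * np.exp(-0.004 * rh) * 1.1 * rng.lognormal(0.0, 0.05, 200)

# Integrated conversion: one regression with a single response variable
# (log gravimetric mass) and RH entering over its entire range.
X = np.column_stack([np.ones_like(neph), np.log(neph), rh])
beta, *_ = np.linalg.lstsq(X, np.log(grav), rcond=None)

def to_gravimetric(neph_reading, rh_pct):
    """Convert a nephelometer reading to a gravimetric-equivalent PM2.5."""
    return np.exp(beta[0] + beta[1] * np.log(neph_reading) + beta[2] * rh_pct)
```

Fitting on the log scale keeps predictions positive and lets multiplicative humidity effects appear as a simple additive RH term.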
ABSTRACT: Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models (MMMs) embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, a gap in their practical application has arisen from a lack of readily available estimation procedures. We extend the marginalized multilevel model to allow for nonlinear functions in both the mean and association aspects. We then formulate marginal models through conditional specifications to facilitate estimation with mixed model computational solutions already in place. We illustrate the MMM and approximate MMM approaches on a cerebrovascular deficiency crossover trial using SAS and an epidemiological study on race and visual impairment using R. Datasets, SAS and R code are included as supplemental materials.
ABSTRACT: Increasing evidence, including publication of the Transfusion Requirements in Critical Care trial in 1999, supports a lower hemoglobin threshold for RBC transfusion in ICU patients. However, little is known regarding the influence of this evidence on clinical practice over time in a large population-based cohort.
Retrospective population-based cohort study.
Thirty-five Maryland hospitals.
Seventy-three thousand three hundred eighty-five nonsurgical adults with an ICU stay greater than 1 day between 1994 and 2007.
The unadjusted odds of patients receiving an RBC transfusion increased from 7.9% during the pre-Transfusion Requirements in Critical Care baseline period (1994-1998) to 14.7% during the post-Transfusion Requirements in Critical Care period (1999-2007). A logistic regression model, including 40 relevant patient and hospital characteristics, compared the annual trend in the adjusted odds of RBC transfusion during the pre- versus post-Transfusion Requirements in Critical Care periods. During the pre-Transfusion Requirements in Critical Care period, the trend in the adjusted odds of RBC transfusion did not differ between hospitals averaging > 200 annual ICU discharges and hospitals averaging ≤ 200 annual ICU discharges (odds ratio, 1.07 [95% CI, 1.01-1.13] annually and 1.03 [95% CI, 0.99-1.07] annually, respectively; p = 0.401). However, during the post-Transfusion Requirements in Critical Care period, the adjusted odds of RBC transfusion decreased over time in higher ICU volume hospitals (odds ratio, 0.96 [95% CI, 0.93-0.98] annually) but continued to increase in lower ICU volume hospitals (odds ratio, 1.10 [95% CI, 1.08-1.13] annually), p < 0.001.
In this population-based cohort of ICU patients, the unadjusted odds of RBC transfusion increased in both higher and lower ICU volume hospitals both before and after Transfusion Requirements in Critical Care publication. After adjusting for relevant characteristics, the odds continued to increase in lower ICU volume hospitals in the post-Transfusion Requirements in Critical Care period, but it decreased in higher ICU volume hospitals. This suggests that evidence supporting restrictive RBC transfusion thresholds may not be uniformly translated into practice in different hospital settings.
Critical care medicine 08/2013; · 6.15 Impact Factor
ABSTRACT: STUDY OBJECTIVE: We determine whether prescription information or services improve the medication adherence of emergency department (ED) patients. METHODS: Adult patients treated at one of 3 EDs between November 2010 and September 2011 and prescribed an antibiotic, central nervous system, gastrointestinal, cardiac, or respiratory drug at discharge were eligible. Subjects were randomly assigned to usual care or one of 3 prescription information or services intervention groups: (1) practical services to reduce barriers to prescription filling (practical prescription information or services); (2) consumer drug information from MedlinePlus (MedlinePlus prescription information or services); or (3) both services and information (combination prescription information or services). Self-reported medication adherence, measured by primary adherence (prescription filling) and persistence (receiving medicine as prescribed) rates, was determined during a telephone interview 1 week postdischarge. RESULTS: Of the 3,940 subjects enrolled and randomly allocated to treatment, 86% (N=3,386) completed the follow-up interview. Overall, primary adherence was 88% and persistence was 48%. Across the sites, primary adherence and persistence did not differ significantly between usual care and the prescription information or services groups. However, at site C, subjects who received the practical prescription information or services (odds ratio [OR]=2.4; 95% confidence interval [CI] 1.4 to 4.3) or combination prescription information or services (OR=1.8; 95% CI 1.1 to 3.1) were more likely to fill their prescription compared with usual care. Among subjects prescribed a drug that treats an underlying condition, subjects who received the practical prescription information or services were more likely to fill their prescription (OR=1.8; 95% CI 1.0 to 3.1) compared with subjects who received usual care.
CONCLUSION: Offering patients patient-centered prescription information and services did not meaningfully improve prescription filling or receipt of medications as prescribed.
Annals of emergency medicine 04/2013; · 4.33 Impact Factor
ABSTRACT: STUDY OBJECTIVE: We determine the validity of self-reported prescription filling among emergency department (ED) patients. METHODS: We analyzed a subgroup of 1,026 patients enrolled in a randomized controlled trial who were prescribed at least 1 medication at ED discharge, were covered by Medicaid insurance, and completed a telephone interview 1 week after the index ED visit. We extracted all pharmacy and health care use claims information from a state Medicaid database for all subjects within 30 days of their index ED visit. We used the pharmacy claims as the criterion standard and evaluated the accuracy of self-reported prescription filling obtained during the follow-up interview by estimating its sensitivity, specificity, positive likelihood ratio and negative likelihood ratio tests. We also examined whether the accuracy of self-reported prescription filling varied significantly by patient and clinical characteristics. RESULTS: Of the 1,635 medications prescribed, 74% were filled according to the pharmacy claims. Subjects reported filling 90% of prescriptions, for a difference of 16% (95% confidence interval [CI] 14% to 18%). The self-reported data had high sensitivity (0.96; 95% CI 0.95 to 0.97) but low specificity (0.30; 95% CI 0.26 to 0.34). The positive likelihood ratio (1.37; 95% CI 1.29 to 2.46) and negative likelihood ratio (0.13; 95% CI 0.09 to 0.17) tests indicate that self-reported data are not a good indicator of prescription filling but are a moderately good indicator of nonfulfillment. Several factors were significantly associated with lower sensitivity (drug class and over-the-counter medications) and specificity (drug class, as needed, site and previous ED use). CONCLUSION: Self-report overestimates prescription filling, and its accuracy is associated with only a few factors.
Annals of emergency medicine 03/2013; · 4.33 Impact Factor
ABSTRACT: OBJECTIVE: To develop a model to produce real-time, updated forecasts of patients' intensive care unit length of stay using naturally generated provider orders. The model was designed to be integrated within a computerized decision support system to improve patient flow management. DESIGN: Retrospective cohort study. SETTING: Twenty-six-bed pediatric intensive care unit within an urban, academic children's hospital using a computerized order entry system. PATIENTS: A total of 2,178 consecutive pediatric intensive care unit admissions during a 16-month period. MEASUREMENTS AND MAIN RESULTS: We obtained unit length of stay measurements, time-stamped provider orders, age, admission source, and readmission status. A joint discrete-time logistic regression model was developed to produce probabilistic length of stay forecasts from continuously updated provider orders. Accuracy was assessed by comparing forecasted expected discharge time with observed discharge time, by rank probability scoring, and with calibration curves. Cross-validation procedures were conducted. The distribution of length of stay was heavily right-skewed with a mean of 3.5 days (95% confidence interval 0.3-19.1). Provider orders were predictive of length of stay in real time, accurately forecasting discharge within a 12-hr window: 46% for patients within 1 day of discharge, 34% for patients within 2 days of discharge, and 27% for patients within 3 days of discharge. The forecast model incorporating predictive orders demonstrated significant improvements in accuracy compared with forecasts based solely on empirical and temporal information. Seventeen predictive orders were found, grouped by medication, ventilation, laboratory, diet, activity, foreign body, and extracorporeal membrane oxygenation. CONCLUSIONS: Provider orders reflect dynamic changes in patients' conditions, making them useful for real-time length of stay prediction and patient flow management.
Patients' lengths of stay represent a major source of variability in intensive care unit resource utilization and, if accurately predicted and communicated, may enable proactive bed management with more efficient patient flow.
Critical care medicine 07/2012; 40(11):3058-3064. · 6.15 Impact Factor
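The order-based forecasting idea above can be sketched as a discrete-time logistic (hazard) model: in each interval a patient remains in the unit, the log-odds of discharge depend on which provider orders are currently active. The order names and coefficients below are invented placeholders, not the 17 predictive orders the study identified:

```python
import math

# Illustrative coefficients only: orders signaling recovery (diet advance,
# discharge labs) raise the discharge hazard; ventilation lowers it.
BASELINE = -2.0
ORDER_EFFECTS = {"regular_diet": 1.2, "ventilation": -1.5, "discharge_labs": 0.8}

def discharge_prob(active_orders):
    """Probability of discharge in the next interval, given active orders."""
    logit = BASELINE + sum(ORDER_EFFECTS.get(o, 0.0) for o in active_orders)
    return 1.0 / (1.0 + math.exp(-logit))

def expected_remaining_intervals(active_orders, horizon=60):
    """Expected further intervals, treating the hazard as constant at its
    current value -- a simplification; the real model updates as orders change."""
    p = discharge_prob(active_orders)
    return sum((1.0 - p) ** k for k in range(1, horizon + 1))
```

Because the hazard is re-evaluated whenever new orders arrive, the forecast updates continuously without any manual input from clinicians.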
ABSTRACT: To examine the degree to which fast track (FT) treatment time varies among providers.
A retrospective cohort study that included 105,783 FT visits at 3 emergency departments (EDs) during a 3-year period. We calculated the median treatment time for 80 primary providers (physicians and physician extenders) and 109 nurses (2 sites only). We used a hierarchical linear regression model that accounted for the clustering of patient visits to the same provider to estimate each provider's median treatment time controlling for patient, clinical, temporal, and ED demand (ie, number of arrivals) characteristics.
Median FT treatment time across the 3 sites ranged from 48 to 134 minutes. Adjusted for other factors, the median FT treatment time of providers at the 90th versus 10th percentiles was 1.4 to 2.6 times longer across the 3 sites. The variation by FT nurses was also large. The median FT treatment time of nurses at the 90th versus 10th percentiles was 1.5 and 1.4 times longer at sites A and C, respectively. At all sites, provider and clinical factors explained more variation in FT treatment time than patient, ED demand, or temporal factors.
There were clinically meaningful differences in FT treatment time among the providers at all sites. Given that the providers share the same environment and patient population, understanding why such large provider variation in FT treatment time exists warrants further investigation.
Medical care 01/2012; 50(1):43-9. · 2.94 Impact Factor
ABSTRACT: In this issue of the Journal, two different articles present epidemiologic evidence supporting the hypotheses that environmental exposures to particulate air pollution or higher temperatures modestly increase the risk of preterm birth. In this commentary, the author discusses environmental epidemiologic methods through the lens of these two papers with respect to the causal question, measurements, and quantification and interpretation of the evidence. Both groups of investigators present results from exploratory analyses that are at the hypothesis-generating end of the research spectrum as opposed to the confirmatory end. The present author describes in qualitative terms a method for decomposing evidence about the association of environmental exposures with prematurity into components representing different temporal and spatial scales. Finally, reproducible epidemiologic research methodology for studies like these is offered as one way to speed the transition from exploratory studies to confirmatory studies.
American journal of epidemiology 12/2011; 175(2):108-10; discussion 111-3. · 4.98 Impact Factor
ABSTRACT: This consensus conference presentation article focuses on methods of measuring crowding. The authors compare daily versus hourly measures, static versus dynamic measures, and the use of linear or logistic regression models versus survival analysis models to estimate the effect of crowding on an outcome.
Emergency department (ED) visit data were used to measure crowding and completion of waiting room time, treatment time, and boarding time for all patients treated and released or admitted to a single ED during 2010 (excluding patients who left without being seen). Crowding was characterized according to total ED census. First, total ED census on a daily and hourly basis throughout the 1-year study period was measured, and the ratios of daily and hourly census to the ED's median daily and hourly census were computed. Second, the person-based ED visit data set was transposed to person-period data. Multiple records per patient were created, whereby each record represented a consecutive 15-minute interval during each patient's ED length of stay (LOS). The variation in crowding measured statically (i.e., crowding at arrival or mean crowding throughout the shift in which the patient arrived) or dynamically (every 15 minutes throughout each patient's ED LOS) was compared. Within each phase of care, the authors divided each individual crowding value by the median crowding value of all 15-minute intervals to create a time-varying ED census ratio. For the two static measures, the ratio between each patient's ED census at arrival and the overall median ED census at arrival was computed, as well as the ratio between the mean shift ED census (based on the shift in which the patient arrived) and the study ED's overall mean shift ED census. Finally, the effect of crowding on the probability of completing different phases of emergency care was compared when estimated using a log-linear regression model versus a discrete time survival analysis model.
During the 1-year study period, for 9% of the hours, total ED census was at least 50% greater than the median hourly census (median, 36). In contrast, on none of the days was total ED census at least 50% greater than the median daily census (median, 161). ED census at arrival and time-varying ED census yielded greater variation in crowding exposure compared to mean shift census for all three phases of emergency care. When estimating the effect of crowding on the completion of care, the discrete time survival analysis model fit the observed data better than the log-linear regression models. The discrete time survival analysis model also determined that the effect of crowding on care completion varied during patients' ED LOS.
Crowding measured at the daily level will mask much of the variation in crowding that occurs within a 24-hour period. ED census at arrival demonstrated similar variation in crowding exposure as time-varying ED census. Discrete time survival analysis is a more appropriate approach for estimating the effect of crowding on an outcome.
Academic Emergency Medicine 12/2011; 18(12):1269-77. · 2.20 Impact Factor
ABSTRACT: The objective was to determine the effect on patient satisfaction of providing patients with predicted service completion times.
A randomized controlled trial was conducted in an urban, community teaching hospital. Emergency department (ED) patients triaged to fast track on weekdays between October 26, 2009, and December 30, 2009, from 9 am to 5 pm were eligible. Patients were randomized to: 1) usual care (n = 342), 2) provided ED process information (n = 336), or 3) provided ED process information plus predicted service delivery times (n = 333). Patients in group 3 were given an "average" and "upper range" estimate of their waiting room times and treatment times. The average and upper range predictions were calculated from quantile regression models that estimated the 50th and 90th percentiles of the waiting room time and treatment time distributions for fast track patients at the study site based on 2.5 years of historical data. Trained research assistants administered the interventions after triage. Patients completed a brief survey at discharge that measured their satisfaction with overall care, the quality of the information they received, and the timeliness of care. Satisfaction ratings of very good versus good, fair, poor, and very poor were modeled using logistic regression as a function of study group; actual service delivery times; and other patient, clinical, and temporal covariates. The study also modeled satisfaction ratings of fair, poor, and very poor compared to good and very good ratings as a function of the same covariates.
Survey completion rates and patient, clinical, and temporal characteristics were similar by study group. Median waiting room time was 70 minutes (interquartile range [IQR] = 40 to 114 minutes), and median treatment time was 52 minutes (IQR = 31 to 81 minutes). Neither intervention affected any of the satisfaction outcomes. Satisfaction was significantly associated with actual waiting room time, individual providers, and patient age. Every 10-minute increase in waiting room time corresponded with an 8% decrease (odds ratio [OR] = 0.92; 95% confidence interval [CI] = 0.89 to 0.95) in the odds of reporting very good satisfaction with overall care. The odds of reporting very good satisfaction with care were lower for several triage nurses and fast track nurses, compared to the triage nurse and fast track nurse who treated the most study patients. Each 10-minute increase in waiting room time was also associated with a 10% increase in the odds of reporting very poor, poor, or fair satisfaction with overall care (OR = 1.10; 95% CI = 1.06 to 1.14). The odds of reporting very poor, poor, or fair satisfaction with overall care also varied significantly among the triage nurses, fast track doctors, and fast track nurses. The odds of reporting very poor, poor, or fair satisfaction with overall care were significantly lower among patients aged 35 years and older compared to patients aged 18 to 34 years.
Satisfaction with overall care was influenced by waiting room time and by the clinicians who treated patients, not by the service completion time estimates provided at triage.
Academic Emergency Medicine 07/2011; 18(7):674-85. · 2.20 Impact Factor
ABSTRACT: There is substantial observational evidence that long-term exposure to particulate air pollution is associated with premature death in urban populations. Estimates of the magnitude of these effects derive largely from cross-sectional comparisons of adjusted mortality rates among cities with varying pollution levels. Such estimates are potentially confounded by other differences among the populations correlated with air pollution, for example, socioeconomic factors. An alternative approach is to study covariation of particulate matter and mortality across time within a city, as has been done in investigations of short-term exposures. In either event, observational studies like these are subject to confounding by unmeasured variables. Therefore the ability to detect such confounding and to derive estimates less affected by confounding are a high priority. In this article, we describe and apply a method of decomposing the exposure variable into components with variation at distinct temporal, spatial, and time by space scales, here focusing on the components involving time. Starting from a proportional hazard model, we derive a Poisson regression model and estimate two regression coefficients: the "global" coefficient that measures the association between national trends in pollution and mortality; and the "local" coefficient, derived from space by time variation, that measures the association between location-specific trends in pollution and mortality adjusted by the national trends. Absent unmeasured confounders and given valid model assumptions, the scale-specific coefficients should be similar; substantial differences in these coefficients constitute a basis for questioning the model. We derive a backfitting algorithm to fit our model to very large spatio-temporal datasets.
We apply our methods to the Medicare Cohort Air Pollution Study (MCAPS), which includes individual-level information on time of death and age on a population of 18.2 million for the period 2000–2006. Results based on the global coefficient indicate a large increase in the national life expectancy for reductions in the yearly national average of PM2.5. However, this coefficient based on national trends in PM2.5 and mortality is likely to be confounded by other variables trending on the national level. Confounding of the local coefficient by unmeasured factors is less likely, although it cannot be ruled out. Based on the local coefficient alone, we are not able to demonstrate any change in life expectancy for a reduction in PM2.5. We use additional survey data available for a subset of the data to investigate sensitivity of results to the inclusion of additional covariates, but both coefficients remain largely unchanged.
Journal of the American Statistical Association 06/2011; 106(494):396-406. · 2.11 Impact Factor
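The decomposition at the heart of the method above can be sketched numerically: split a city-by-year exposure matrix into a national trend and mean-zero local deviations, the two components whose regression coefficients ("global" and "local") the model contrasts. The data below are simulated placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
pm = rng.uniform(5.0, 25.0, size=(50, 7))  # hypothetical: 50 cities x 7 years

national = pm.mean(axis=0)  # "global" component: yearly national mean
local = pm - national       # "local" component: space-by-time deviations

# `national` and `local` would enter the Poisson mortality model as separate
# regressors; absent unmeasured confounding, their coefficients should agree.
assert np.allclose(national + local, pm)  # the decomposition is exact
```

Because the local deviations average to zero across cities in every year, any confounder that trends only at the national level cannot bias the local coefficient.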
ABSTRACT: Triage standing orders are used in emergency departments (EDs) to initiate evaluation when there is no bed available. This study evaluates the effect of diagnostic triage standing orders on ED treatment time of adult patients who presented with a chief complaint for which triage standing orders had been developed.
We conducted a retrospective nested cohort study of patients treated in one academic ED between January 2007 and August 2009. In this ED, triage nurses can initiate full or partial triage standing orders for patients with chest pain, shortness of breath, abdominal pain, or genitourinary complaints. We matched patients who received triage standing orders to those who received room orders with respect to clinical and temporal factors, using a propensity score. We compared the median treatment time of patients with triage standing orders (partial or full) to those with room orders, using multivariate linear regression.
Of the 15,188 eligible patients, 25% received full triage standing orders, 56% partial triage standing orders, and 19% room orders. The unadjusted median ED treatment time for patients who did not receive triage standing orders was 282 minutes versus 230 minutes for those who received a partial triage standing order or full triage standing orders (18% decrease). Controlling for other factors, triage standing orders were associated with a 16% reduction (95% confidence interval -18% to -13%) in the median treatment time, regardless of chief complaint.
Diagnostic testing at triage was associated with a substantial reduction in ED treatment time for 4 common chief complaints. This intervention warrants further evaluation in other EDs and with different clinical conditions and tests.
Annals of emergency medicine 02/2011; 57(2):89-99.e2. · 4.33 Impact Factor
ABSTRACT: The objective was to characterize service completion times by patient, clinical, temporal, and crowding factors for different phases of emergency care using quantile regression (QR).
A retrospective cohort study was conducted on 1-year visit data from four academic emergency departments (EDs; N = 48,896-58,316). From each ED's clinical information system, the authors extracted electronic service information (date and time of registration; bed placement, initial contact with physician, disposition decision, ED discharge, and disposition status; inpatient medicine bed occupancy rate); patient demographics (age, sex, insurance status, and mode of arrival); and clinical characteristics (acuity level and chief complaint) and then used the service information to calculate patients' waiting room time, treatment time, and boarding time, as well as the ED occupancy rate. The 10th, 50th, and 90th percentiles of each phase of care were estimated as a function of patient, clinical, temporal, and crowding factors using multivariate QR. Accuracy of models was assessed by comparing observed and predicted service completion times and the proportion of observations that fell below the predicted 10th, 50th, and 90th percentiles.
At the 90th percentile, patients experienced long waiting room times (105-222 minutes), treatment times (393-616 minutes), and boarding times (381-1,228 minutes) across the EDs. We observed a strong interaction effect between acuity level and temporal factors (i.e., time of day and day of week) on waiting room time at all four sites. Acuity level 3 patients waited the longest across the four sites, and their waiting room times were most influenced by temporal factors compared to other acuity level patients. Acuity level and chief complaint were important predictors of all phases of care, and there was a significant interaction effect between acuity and chief complaint. Patients with a psychiatric problem experienced the longest treatment times, regardless of acuity level. Patients who presented with an injury did not wait as long for an ED or inpatient bed. Temporal factors were strong predictors of service completion time, particularly waiting room time. Mode of arrival was the only patient characteristic that substantially affected waiting room time and treatment time. Patients who arrived by ambulance had shorter wait times but longer treatment times compared to those who did not arrive by ambulance. There was close agreement between observed and predicted service completion times at the 10th, 50th, and 90th percentile distributions across the four EDs.
Service completion times varied significantly across the four academic EDs. QR proved to be a useful method for estimating the service completion experience of not only typical ED patients, but also the experience of those who waited much shorter or longer. Building accurate models of ED service completion times is a critical first step needed to identify barriers to patient flow, begin the process of reengineering the system to reduce variability, and improve the timeliness of care provided.
Academic Emergency Medicine 08/2010; 17(8):813-23. · 2.20 Impact Factor
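A minimal sketch shows why QR suits service times: the minimizer of the asymmetric "pinball" loss at level tau is the tau-th quantile, so QR models the 10th, 50th, and 90th percentile experience directly rather than just the mean. The waiting-time data below are simulated, not the study's:

```python
import numpy as np

def pinball_loss(q_hat, y, tau):
    """Asymmetric check loss: under-predictions weighted tau, over- (1 - tau)."""
    r = y - q_hat
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

def qr_intercept(y, tau, grid):
    """Intercept-only QR fit: grid-search the pinball-loss minimizer."""
    losses = [pinball_loss(q, y, tau) for q in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(2)
wait = rng.exponential(scale=60.0, size=5000)  # hypothetical waiting times (min)
grid = np.linspace(0.0, 400.0, 4001)

# Each fitted value coincides with the corresponding sample quantile; adding
# covariates (acuity, arrival mode) yields conditional quantiles instead.
fits = {tau: qr_intercept(wait, tau, grid) for tau in (0.1, 0.5, 0.9)}
```

For right-skewed service times like these, the 90th percentile fit captures the long-wait experience that a mean model averages away.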
ABSTRACT: The goal of this study was to determine the inter-rater reliability of trained examiners performing standardized strength assessments using manual muscle testing (MMT). DESIGN, SUBJECTS, AND SETTING: The authors report on 19 trainees undergoing quality assurance within a multi-site prospective cohort study.
Inter-rater reliability was evaluated between specially trained evaluators ("trainees") and a reference rater performing MMT on both simulated patients and actual patients recovering from critical illness.
Across 26 muscle groups tested by 19 trainee-reference rater pairs, the median (interquartile range) percent agreement and intraclass correlation coefficient (ICC; 95% CI) were: 96% (91, 98%) and 0.98 (0.95, 1.00), respectively. Across all 19 pairs, the ICC (95% CI) for the overall composite MMT score was 0.99 (0.98-1.00). When limited to actual patients, the ICC was 1.00 (95% CI 0.99-1.00). The agreement (kappa; 95% CI) in detecting clinically significant weakness was 0.88 (0.44-1.00).
MMT has excellent inter-rater reliability in trained examiners and is a reliable method of comprehensively assessing muscle strength.
European Journal of Intensive Care Medicine 03/2010; 36(6):1038-43. · 5.17 Impact Factor
ABSTRACT: When estimating the association between an exposure and an outcome, a simple approach to quantifying the amount of confounding by a factor, Z, is to compare estimates of the exposure-outcome association with and without adjustment for Z. This approach is widely believed to be problematic due to the nonlinearity of some exposure-effect measures. When the expected value of the outcome is modeled as a nonlinear function of the exposure, the adjusted and unadjusted exposure effects can differ even in the absence of confounding (Greenland, Robins, and Pearl, 1999); we call this the nonlinearity effect. In this paper, we propose a corrected measure of confounding that does not include the nonlinearity effect. The performances of the simple and corrected estimates of confounding are assessed in simulations and illustrated using a study of risk factors for low birthweight infants. We conclude that the simple estimate of confounding is adequate or even preferred in settings where the nonlinearity effect is very small. In settings with a sizable nonlinearity effect, the corrected estimate of confounding has improved performance.
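The nonlinearity effect is easy to exhibit numerically: below, Z is independent of the exposure X (so Z is not a confounder), yet the Z-adjusted (conditional) and unadjusted (marginal) odds ratios differ, because the odds ratio is noncollapsible. The logistic coefficients are illustrative choices, not values from the paper:

```python
import math

def p_outcome(x, z, b0=-2.0, bx=1.0, bz=2.0):
    """True logistic model for the outcome given exposure x and covariate z."""
    return 1.0 / (1.0 + math.exp(-(b0 + bx * x + bz * z)))

pz = 0.5  # P(Z=1) is identical in exposed and unexposed: Z is NOT a confounder

def marginal_odds(x):
    p = (1 - pz) * p_outcome(x, 0) + pz * p_outcome(x, 1)
    return p / (1 - p)

conditional_or = math.exp(1.0)                     # Z-adjusted OR = exp(bx)
marginal_or = marginal_odds(1) / marginal_odds(0)  # unadjusted OR, ~2.23
# marginal_or < conditional_or despite the absence of confounding: this gap
# is the nonlinearity effect that the corrected confounding measure removes.
```

A simple adjusted-versus-unadjusted comparison would misread this gap as confounding, which is exactly the situation the corrected measure is designed for.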