Academic Season Does Not Influence Cardiac Surgical Outcomes at US Academic Medical Centers

Department of Surgery, University of Virginia Health System, Charlottesville, VA, USA.
Journal of the American College of Surgeons (Impact Factor: 5.12). 06/2011; 212(6):1000-7. DOI: 10.1016/j.jamcollsurg.2011.03.012
Source: PubMed


Previous studies have demonstrated the influence of academic season on outcomes in select surgical populations. However, the influence of academic season has not been evaluated nationwide in cardiac surgery. We hypothesized that cardiac surgical outcomes were not significantly influenced by time of year at both cardiothoracic teaching hospitals and non-cardiothoracic teaching hospitals nationwide.
From 2003 to 2007, a weighted 1,614,394 cardiac operations were evaluated using the Nationwide Inpatient Sample database. Patients undergoing cardiac operations at cardiothoracic teaching and non-cardiothoracic teaching hospitals were identified using the Association of American Medical Colleges' Graduate Medical Education Tracking System. Hierarchical multivariable logistic regression analyses were used to estimate the effect of academic quarter on risk-adjusted outcomes.
Mean patient age was 65.9 ± 10.9 years. Women accounted for 32.8% of patients. Isolated coronary artery bypass grafting was the most common operation performed (64.7%), followed by isolated valve replacement (19.3%). The overall incidence of operative mortality and composite postoperative complication rate were 2.9% and 27.9%, respectively. After accounting for potentially confounding risk factors, timing of operation by academic quarter did not independently increase risk-adjusted mortality (p = 0.12) or morbidity (p = 0.24) at academic medical centers.
Risk-adjusted mortality and morbidity for cardiac operations were not associated with time of year at US teaching and nonteaching hospitals. Patients can be reassured that cardiac operations are performed safely at academic medical centers throughout the academic year.
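The study's quarter-effect test can be illustrated with a simplified sketch: a logistic regression of operative mortality on academic quarter plus an adjustment covariate, compared against a base model without quarter by likelihood-ratio test. This is a toy version on simulated data, not the paper's hierarchical model on NIS data; the severity covariate, sample size, and coefficients are illustrative assumptions.

```python
import numpy as np

# Toy sketch (NOT the study's actual model): test whether academic
# quarter adds explanatory power to risk-adjusted mortality, via a
# likelihood-ratio test between nested logistic regressions. Data are
# simulated so that mortality depends on severity but not on quarter.
rng = np.random.default_rng(1)
n = 20_000
quarter = rng.integers(0, 4, n)            # academic quarter Q1..Q4
severity = rng.normal(0.0, 1.0, n)         # stand-in risk-adjustment covariate
p_death = 1.0 / (1.0 + np.exp(-(-3.5 + 0.8 * severity)))
died = rng.binomial(1, p_death)

def fit_logistic(X, y, iters=30):
    """Logistic regression by Newton-Raphson; returns (beta, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)                # Fisher information weights
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    eta = X @ beta
    return beta, float(np.sum(y * eta - np.logaddexp(0.0, eta)))

X_base = np.column_stack([np.ones(n), severity])
X_full = np.column_stack([X_base, np.eye(4)[quarter][:, 1:]])  # Q2..Q4 dummies

_, ll_base = fit_logistic(X_base, died)
b_full, ll_full = fit_logistic(X_full, died)

# LR statistic ~ chi-square with 3 df under "no quarter effect";
# values below the 5% critical value (7.81) mean quarter adds nothing.
lr_stat = 2.0 * (ll_full - ll_base)
print(f"LR statistic for quarter effect: {lr_stat:.2f}")
```

Because mortality was simulated without any quarter dependence, the likelihood-ratio statistic will typically fall below the chi-square critical value, mirroring the paper's null finding for academic quarter.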

  • ABSTRACT: In the beginning of the academic year, medical errors are often attributed to inexperienced medical staff. This potential seasonal influence on health care outcomes is termed the "July effect." No study has demonstrated the July effect in liver transplantation. We reviewed retrospectively collected data from the United Network for Organ Sharing for patients who underwent liver transplantation from October 1987 to June 2011 to determine if surgical outcomes were worse in July compared with the rest of the year. We found no clinically meaningful difference in early graft survival (91.11% vs 90.72%, p = 0.045) and no difference in early patient survival (94.71% vs 94.42%, p = 0.057). Survival at 1 year, 3 years, and 5 years was also compared, and no notable differences were detected. Because implementation of the Model for End-stage Liver Disease (MELD) score in 2002 affected the acuity of liver transplant recipients, we further stratified our data to compare pre- and post-MELD survival to remove subjectivity as a confounding factor. MELD stratification revealed no seasonal difference in outcomes. There was no difference in the rates of graft failure or acute and chronic rejection between groups. Our findings show no evidence of the July effect in liver transplantation. Each July, thousands of medical residents take on new responsibilities in patient care. It has been suggested that these new practitioners may produce errors that contribute to worse patient outcomes in the beginning of the academic year, a phenomenon called the "July effect." Currently, few studies offer evidence of poorer outcomes in July, and that evidence is conflicting; no articles address the effect of new medical staff in the setting of liver transplantation. Our study compares short-, medium-, and long-term graft and patient survival between July and August and the remaining months using national data. We also examine survival before and after the implementation of the MELD scoring system to determine its effect on outcomes in the beginning of the academic year.
    Journal of Surgical Education 09/2013; 70(5):669-79. DOI:10.1016/j.jsurg.2013.04.012 · 1.38 Impact Factor
  • ABSTRACT: Studies on the rate of adverse events in hospitalized patients seldom examine temporal patterns. This study presents evidence of both weekly and annual cycles. The study is based on a large and diverse data set, with nearly 5 years of data from a voluntary staff-incident reporting system of a large public health care provider in rural southeastern Australia. Data from 63 health care facilities were included, ranging from large non-metropolitan hospitals to small community and aged health care facilities. Poisson regression incorporating an observation-driven autoregressive effect using the GLARMA framework was used to explain daily error counts with respect to long-term trend and weekly and annual effects, with procedural volume as an offset. The annual pattern was modeled using a first-order sinusoidal effect. The rate of errors reported demonstrated an increasing annual trend of 13.4% (95% confidence interval [CI] 10.6% to 16.3%); however, this trend was significant only for errors of minor or no harm to the patient. A strong "weekend effect" was observed: the incident rate ratio for weekends versus weekdays was 2.74 (95% CI 2.55 to 2.93). The weekly pattern was consistent for incidents of all levels of severity, but it was more pronounced for less severe incidents. There was an annual cycle in the rate of incidents, with the number of incidents peaking in October, on the 282nd day of the year (spring in Australia), with an incident rate ratio of 1.09 (95% CI 1.05 to 1.14) compared to the annual mean. There was no so-called "killing season" or "July effect," as the peak in incident rate was not related to the commencement of work by new medical school graduates. The major finding of this study is that the rate of adverse events is greater on weekends and during spring. The annual pattern appears to be unrelated to the commencement of new graduates and potentially results from seasonal variation in the case mix of patients or in the health of the medical workforce that alters health care performance. These mechanisms will need to be elucidated with further research.
    Chronobiology International 06/2012; 29(7):947-54. DOI:10.3109/07420528.2012.672265 · 3.34 Impact Factor
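The annual component of the model above (a first-order sinusoid in a Poisson regression with procedural volume as offset) can be sketched on simulated data. The sketch omits the GLARMA autoregressive term and the trend and weekday effects; the seed, volume, and simulated peak are illustrative assumptions, with the amplitude chosen to match the reported incident rate ratio of 1.09.

```python
import numpy as np

# Simulated daily incident counts with an annual peak near day 282
# (the paper's estimate); amplitude 0.09 ~ log of the reported IRR 1.09.
rng = np.random.default_rng(0)
n_days = 5 * 365
doy = np.arange(n_days) % 365
volume = np.full(n_days, 300.0)            # procedural volume (exposure)
log_rate = np.log(volume) - 4.0 + 0.09 * np.cos(2 * np.pi * (doy - 282) / 365)
counts = rng.poisson(np.exp(log_rate))

# Design: intercept + first-order annual sinusoid (sin/cos pair)
X = np.column_stack([
    np.ones(n_days),
    np.sin(2 * np.pi * doy / 365),
    np.cos(2 * np.pi * doy / 365),
])
offset = np.log(volume)

# Poisson regression with log link and offset, fit by IRLS (Fisher scoring)
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta + offset)
    z = X @ beta + (counts - mu) / mu      # working response (offset removed)
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

# The sin/cos pair encodes one annual cycle; its phase gives the peak day
peak_day = (np.arctan2(beta[1], beta[2]) / (2 * np.pi) * 365) % 365
print(f"estimated peak day of year: {peak_day:.0f}")  # should land near 282
```

Fitting a sine/cosine pair rather than a single shifted sinusoid keeps the model linear in its parameters; the peak day is then recovered from the phase of the fitted pair.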
  • ABSTRACT: The July/August Phenomenon is a period during which the quality of care in hospitals is thought to decrease owing to summer vacation stand-ins and new staff. Results of studies on the veracity of this claim have been conflicting. This study investigates the situation in internal medicine. Registry data for patients treated in internal medicine wards between 1 July 2000 and 30 November 2009 were obtained and analysed. There were no differences in mortality for July admissions compared with those in November when adjusting for age, diagnosis, gender, and year [for the overall data, risk ratio (RR) = 1.10, 95% confidence interval (CI) 1.00-1.23, P = 0.06; for the university hospitals, RR = 1.10, 95% CI 0.91-1.33, P = 0.34; for the non-university hospitals, RR = 1.10, 95% CI 0.97-1.26, P = 0.13]. The duration of admission (overall mean 4.5 days, standard deviation 6.0) was equal between July and November when adjusted for age, diagnosis, gender, and year in all groups (overall data: RR = 1.00, 95% CI 0.99-1.02, P = 0.83; university hospitals: RR = 1.02, 95% CI 0.99-1.04, P = 0.13; non-university hospitals: RR = 1.00, 95% CI 0.98-1.01, P = 0.67). The quality of care in Finnish internal medicine wards in July appears to equal that in November. Our results do not support the existence of a July Phenomenon in Finland.
    Journal of Evaluation in Clinical Practice 04/2014; 20(4). DOI:10.1111/jep.12130 · 1.08 Impact Factor

