Educational Epidemiology: Applying Population-Based Design and Analytic Approaches to Study Medical Education

Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
JAMA: The Journal of the American Medical Association 10/2004; 292(9):1044-50. DOI: 10.1001/jama.292.9.1044 (Impact Factor: 35.29)
Source: PubMed


Conducting educational research in medical schools is challenging partly because interventional controlled research designs are difficult to apply. In addition, strict accreditation requirements and student/faculty concerns about educational inequality reduce the flexibility needed to plan and execute educational experiments. Consequently, there is a paucity of rigorous and generalizable educational research to provide an evidence-guided foundation to support educational effectiveness. "Educational epidemiology," ie, the application across the physician education continuum of observational designs (eg, cross-sectional, longitudinal, cohort, and case-control studies) and randomized experimental designs (eg, randomized controlled trials, randomized crossover designs), could revolutionize the conduct of research in medical education. Furthermore, the creation of a comprehensive national network of educational epidemiologists could enhance collaboration and the development of a strong educational research foundation.
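
To ground the observational half of this toolkit, here is a minimal sketch of a case-control analysis as it might look in an educational setting: computing an odds ratio with a Woolf 95% confidence interval for a hypothetical association between curriculum exposure and an outcome such as licensing-exam failure. The 2x2 counts are invented for illustration and do not come from the article.

```python
# Hypothetical case-control sketch: exposure to a curriculum feature among
# "cases" (e.g., trainees who failed a licensing exam) vs. "controls".
# All counts below are invented for illustration.
import math

# 2x2 table: rows = exposed/unexposed to the curriculum, cols = case/control
exposed_cases, exposed_controls = 18, 60
unexposed_cases, unexposed_controls = 42, 55

# Odds ratio: odds of exposure among cases vs. odds of exposure among controls
odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Woolf's method: standard error on the log-odds-ratio scale
se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                      + 1 / unexposed_cases + 1 / unexposed_controls)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```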

Cited in:
    • "Rather, the Framework should guide training program implementers and others in thinking about available resources, existing data and the rationale for evaluating outcomes at particular points along the continuum. Once this has been determined, a variety of evaluation research designs (including but not limited to randomized controlled trials) and methods can be developed and implemented to answer specific evaluation questions [7,26]. "
    ABSTRACT: In-service training is a key strategic approach to addressing the severe shortage of health care workers in many countries. However, there is a lack of evidence linking these health care worker trainings to improved health outcomes. In response, the United States President's Emergency Plan for AIDS Relief's Human Resources for Health Technical Working Group initiated a project to develop an outcome-focused training evaluation framework. This paper presents the methods and results of that project. A general inductive methodology was used for the conceptualization and development of the framework. Fifteen key informant interviews were conducted to explore contextual factors, perceived needs, barriers and facilitators affecting the evaluation of training outcomes. In addition, a thematic analysis of 70 published articles reporting health care worker training outcomes identified key themes and categories. These were integrated, synthesized and compared to several existing training evaluation models. This formed an overall typology which was used to draft a new framework. Finally, the framework was refined and validated through an iterative process of feedback, pilot testing and revision. The inductive process resulted in identification of themes and categories, as well as relationships among several levels and types of outcomes. The resulting framework includes nine distinct types of outcomes that can be evaluated, which are organized within three nested levels: individual, organizational and health system/population. The outcome types are: (1) individual knowledge, attitudes and skills; (2) individual performance; (3) individual patient health; (4) organizational systems; (5) organizational performance; (6) organizational-level patient health; (7) health systems; (8) population-level performance; and (9) population-level health. The framework also addresses contextual factors which may influence the outcomes of training, as well as the ability of evaluators to determine training outcomes. In addition, a group of user-friendly resources, the Training Evaluation Framework and Tools (TEFT) were created to help evaluators and stakeholders understand and apply the framework. Feedback from pilot users suggests that using the framework and accompanying tools may support outcome evaluation planning. Further assessment will assist in strengthening guidelines and tools for operationalization.
    Human Resources for Health 10/2013; 11(1):50. DOI: 10.1186/1478-4491-11-50 (Impact Factor: 1.83)
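
The framework's taxonomy (nine outcome types nested in three levels) maps naturally onto a simple data structure. The sketch below encodes the levels and types exactly as the abstract lists them; the variable name, comments, and the (level, outcome) plan format are our own illustration and are not part of the published TEFT tools.

```python
# The nine outcome types of the training evaluation framework, nested
# within its three levels, encoded as a plain dictionary. Numbering in
# comments follows the abstract's (1)-(9) list.
TRAINING_OUTCOME_FRAMEWORK = {
    "individual": [
        "knowledge, attitudes and skills",  # (1)
        "performance",                      # (2)
        "patient health",                   # (3)
    ],
    "organizational": [
        "systems",                          # (4)
        "performance",                      # (5)
        "patient health",                   # (6)
    ],
    "health system/population": [
        "health systems",                   # (7)
        "performance",                      # (8)
        "health",                           # (9)
    ],
}

# An evaluation plan can then be expressed as (level, outcome) pairs and
# validated against the framework:
plan = [("individual", "performance"), ("organizational", "systems")]
for level, outcome in plan:
    assert outcome in TRAINING_OUTCOME_FRAMEWORK[level]
```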
    • "Thus the first cohort acted as the active interventional arm (TBL) in NE while also serving as the control arm (PL) in NL for the second cohort, and vice versa for the second cohort. This design aims for both TBL and PL to be administered simultaneously rather than sequentially as would normally occur in a typical crossover study, avoiding a carryover effect of TBL learning principles to the control group [20]. "
    ABSTRACT: Team-based learning (TBL), a new active learning method, has not been reported for neurology education. We aimed to determine if TBL was more effective than passive learning (PL) in improving knowledge outcomes in two key neurology topics - neurological localization and neurological emergencies. We conducted a modified crossover study during a nine-week internal medicine posting involving 49 third-year medical undergraduates, using TBL as the active intervention, compared against self-reading as a PL control, for teaching the two topics. Primary outcome was the mean percentage change in test scores immediately after (post-test 1) and 48 hours after TBL (post-test 2), compared to a baseline pre-test. Student engagement was the secondary outcome. Mean percentage change in scores was greater in the TBL versus the PL group in post-test 1 (8.8% vs 4.3%, p = 0.023) and post-test 2 (11.4% vs 3.4%, p = 0.001). After adjustment for gender and second-year examination grades, mean percentage change in scores remained greater in the TBL versus the PL group for post-test 1 (10.3% vs 5.8%, mean difference 4.5%, 95% CI 0.7 to 8.3%, p = 0.021) and post-test 2 (13.0% vs 4.9%, mean difference 8.1%, 95% CI 3.7 to 12.5%, p = 0.001), indicating further score improvement 48 hours post-TBL. Academically weaker students, identified by poorer examination grades, showed a greater increase in scores with TBL versus strong students (p < 0.02). Measures of engagement were high in the TBL group, suggesting that continued improvements in scores 48 hours post-TBL may result from self-directed learning. Compared to PL, TBL showed greater improvement in knowledge scores, with continued improvement up to 48 hours later. This effect is larger in academically weaker students. TBL is an effective method for improving knowledge in neurological localization and neurological emergencies in undergraduates.
    BMC Medical Education 10/2011; 11(1):91. DOI: 10.1186/1472-6920-11-91 (Impact Factor: 1.22)
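
As a rough sketch of the primary-outcome analysis this study reports, the code below computes per-student percentage change from a baseline pre-test and compares arms with a Welch t-test. All scores are simulated with invented parameters, and the study's covariate adjustment for gender and prior examination grades is omitted.

```python
# Sketch of a mean-percentage-change comparison between a TBL arm and a
# PL (self-reading) control. Scores are fabricated; this is not the study's
# actual data or full analysis (the regression adjustment is omitted).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def pct_change(pre, post):
    """Per-student percentage change in test score relative to baseline."""
    return 100.0 * (post - pre) / pre

# Hypothetical pre/post scores for two arms of roughly 25 students each
pre_tbl = rng.normal(60, 8, 25)
post_tbl = pre_tbl + rng.normal(6, 5, 25)   # assume the TBL arm gains more
pre_pl = rng.normal(60, 8, 24)
post_pl = pre_pl + rng.normal(2, 5, 24)

tbl_change = pct_change(pre_tbl, post_tbl)
pl_change = pct_change(pre_pl, post_pl)

# Welch t-test on the percentage-change outcome
t, p = stats.ttest_ind(tbl_change, pl_change, equal_var=False)
print(f"TBL {tbl_change.mean():.1f}% vs PL {pl_change.mean():.1f}%, p = {p:.3f}")
```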
    • "We recognize the increasing use of qualitative research methods (Bordage 2007; Harris 2003; Shea et al. 2004), and calls for the use of case–control and cohort study designs (Carney et al. 2004). These methods can help answer questions that experiments cannot (Callahan et al. 2007; Kennedy and Lingard 2007; Papadakis et al. 2005; Tamblyn et al. 2007). "
    ABSTRACT: As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft-neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest-posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest-posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.
    Advances in Health Sciences Education 05/2008; 15(3):455-64. DOI: 10.1007/s10459-008-9117-3 (Impact Factor: 2.12)
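
The second and fifth points above can be made concrete with a toy simulation: a single-group pretest-posttest study conflates a pretest testing effect with the intervention's true benefit, while a randomized posttest-only comparison does not. Effect sizes, the testing-effect magnitude, and all other parameters below are invented.

```python
# Toy simulation: a single-group pretest-posttest design overstates the
# intervention effect when taking the pretest itself boosts posttest scores,
# whereas a randomized posttest-only comparison recovers the true effect.
import numpy as np

rng = np.random.default_rng(42)
n = 5000                  # large sample, per the abstract's caveat
true_effect = 3.0         # genuine benefit of the intervention (invented)
testing_effect = 2.0      # score boost from having seen the pretest (invented)

ability = rng.normal(70, 10, n)

# Design A: single group, pretest -> intervention -> posttest.
pre = ability + rng.normal(0, 5, n)
post = ability + true_effect + testing_effect + rng.normal(0, 5, n)
print(f"Pretest-posttest gain: {np.mean(post - pre):.2f} (true effect {true_effect})")

# Design B: randomized posttest-only; neither arm ever sees a pretest.
treat = rng.random(n) < 0.5
post_only = ability + true_effect * treat + rng.normal(0, 5, n)
estimate = post_only[treat].mean() - post_only[~treat].mean()
print(f"Randomized posttest-only estimate: {estimate:.2f}")
```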