Article

Utility of the AAMC's Graduation Questionnaire to study behavioral and social sciences domains in undergraduate medical education.

Department of Family Medicine, Oregon Health and Science University, Portland, Oregon 97239, USA.
Academic Medicine: Journal of the Association of American Medical Colleges (Impact Factor: 2.34). 01/2010; 85(1):169-76. DOI: 10.1097/ACM.0b013e3181c464c0
Source: PubMed

ABSTRACT: The Institute of Medicine (IOM) report on the social and behavioral sciences (SBS) indicated that 50% of morbidity and mortality in the United States is associated with SBS factors, yet found that these sciences are inadequately taught in medical school. A multischool collaborative explored whether the Association of American Medical Colleges Graduation Questionnaire (GQ) could be used to study changes in the six SBS domains identified in the IOM report.
A content analysis of the GQ identified 30 SBS variables, which were narrowed to 24 using a modified Delphi approach. Summary data were pooled from nine medical schools for 2006 and 2007, representing 1,126 students. The data captured students' perceptions of curricular experiences, attitudes toward SBS curricula, and confidence in relevant clinical knowledge and skills. To assess the utility of the GQ, the authors determined the sample sizes required to detect various effect sizes.
The 24 variables were classified into five of the six IOM domains, representing a total of nine analytic categories with cumulative scale means ranging from 60.8 to 93.4. Taking into account the correlations among measures over time, and assuming a two-sided test, 80% power, an alpha of .05, and a standard deviation of 4.1, the authors found that 34 medical schools would be required to detect an effect size of 0.50. With a sample of only nine schools, detecting change would require a very large effect size of 1.07.
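As a rough illustration of this power analysis, the sketch below applies the standard normal-approximation sample-size formula for a paired, two-sided test. It does not reproduce the authors' exact adjustment for correlated measures over time, so it only approximates the published figures of 34 schools and 1.07; the function names are illustrative, not from the paper.

    from math import ceil, sqrt
    from scipy.stats import norm

    def schools_needed(effect_size, alpha=0.05, power=0.80):
        # Normal-approximation sample size for a paired two-sided test:
        # n = ((z_{1-alpha/2} + z_{power}) / d)^2, where d is the
        # standardized effect size (mean change / SD).
        z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = .05
        z_power = norm.ppf(power)          # 0.84 for 80% power
        return ceil(((z_alpha + z_power) / effect_size) ** 2)

    def smallest_detectable_effect(n_schools, alpha=0.05, power=0.80):
        # Invert the same formula to get the minimum detectable effect.
        z_alpha = norm.ppf(1 - alpha / 2)
        z_power = norm.ppf(power)
        return (z_alpha + z_power) / sqrt(n_schools)

    # With SD = 4.1, an effect size of 0.50 corresponds to a change of
    # about 2 points on the cumulative scales described above.
    print(schools_needed(0.50))           # 32 (paper reports 34 after its
                                          # adjustment for correlated measures)
    print(smallest_detectable_effect(9))  # ~0.93 (paper reports ~1.07)

The unadjusted approximation lands close to, but below, the published numbers; the gap is consistent with the authors' accounting for correlations among repeated measures.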
Detecting SBS changes associated with curricular innovations would require a large collaborative of medical schools. Using a national measure such as the GQ to assess curricular innovations in most areas of SBS would be possible if enough medical schools participated in such an effort.

