Article

The assessment of emergency physicians by a regulatory authority.

Jocelyn Lockyer
Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Alberta, Canada.
Academic Emergency Medicine (Impact Factor: 2.2). 12/2006; 13(12):1296-1303. DOI:10.1197/j.aem.2006.07.030
Source: PubMed

ABSTRACT: To determine whether it is possible to develop a feasible, valid, and reliable multisource feedback (MSF; 360-degree evaluation) program for emergency physicians.
Surveys with 16, 20, 30, and 31 items were developed for completion by 25 patients, eight coworkers, eight medical colleagues, and the physician (self), respectively, using five-point scales with an "unable to assess" category. Items addressed key competencies related to communication skills, professionalism, collegiality, and self-management.
Data from 187 physicians who identified themselves as emergency physicians were available. The mean number of respondents per physician was 21.6 (SD 3.87; 93%) for patients, 7.6 (SD 0.89; 96%) for coworkers, and 7.7 (SD 0.61; 95%) for medical colleagues, suggesting the program was feasible. Only the patient survey had four items with "unable to assess" percentages ≥ 15%. The factor analysis indicated there were two factors on the patient questionnaire (communication/professionalism and patient education), two on the coworker survey (communication/collegiality and professionalism), and four on the medical colleague questionnaire (clinical performance, professionalism, self-management, and record management), accounting for 80.0%, 62.5%, and 71.9% of the variance on the respective surveys. The factors were consistent with the intent of the instruments, providing empirical evidence of their validity. Reliability was established for the instruments (Cronbach's alpha > 0.94) and for each physician (generalizability coefficients of 0.68 for patients, 0.85 for coworkers, and 0.84 for medical colleagues).
The psychometric examination of the data suggests that the instruments developed to assess emergency physicians are feasible and provides evidence of their validity and reliability.
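The reliability statistics quoted above follow from standard formulas. As a minimal illustration (Python; a sketch only, not the authors' analysis code, and the simulated ratings are hypothetical), Cronbach's alpha can be computed from a respondents-by-items score matrix and a one-facet generalizability coefficient Ep^2 from a physicians-by-raters matrix:

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for a (respondents x items) score matrix."""
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
        return k / (k - 1) * (1.0 - item_var / total_var)

    def g_coefficient(ratings):
        """Relative Ep^2 for a (physicians x raters) matrix, one-facet crossed
        person-by-rater design; variance components from ANOVA mean squares."""
        n_p, n_r = ratings.shape
        grand = ratings.mean()
        ss_p = n_r * ((ratings.mean(axis=1) - grand) ** 2).sum()
        ss_r = n_p * ((ratings.mean(axis=0) - grand) ** 2).sum()
        ss_res = ((ratings - grand) ** 2).sum() - ss_p - ss_r
        ms_p = ss_p / (n_p - 1)
        ms_res = ss_res / ((n_p - 1) * (n_r - 1))
        var_p = max((ms_p - ms_res) / n_r, 0.0)       # person variance component
        return var_p / (var_p + ms_res / n_r)

    # Hypothetical data: 187 physicians, 8 raters each, a 30-item instrument.
    rng = np.random.default_rng(0)
    true_level = rng.normal(4.0, 0.4, size=(187, 1))  # each physician's "true" level
    ratings = np.clip(true_level + rng.normal(0.0, 0.5, size=(187, 8)), 1, 5)
    items = np.clip(true_level + rng.normal(0.0, 0.6, size=(187, 30)), 1, 5)
    print(round(g_coefficient(ratings), 2))           # around the 0.84-0.85 reported
    print(round(cronbach_alpha(items), 2))            # high alpha for 30 correlated items

With eight raters and rater noise on the order of the between-physician spread, Ep^2 lands near the 0.85 and 0.84 reported for coworkers and medical colleagues; the lower patient coefficient (0.68) despite 25 respondents is consistent with noisier patient ratings.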

  • ABSTRACT: The use of multisource feedback (MSF) or 360-degree evaluation has become a recognized method of assessing physician performance in practice. The purpose of the present systematic review was to investigate the reliability, generalizability, validity, and feasibility of MSF for the assessment of physicians. The authors searched the EMBASE, PsycINFO, MEDLINE, PubMed, and CINAHL databases for peer-reviewed, English-language articles published from 1975 to January 2013. Studies were included if they met the following inclusion criteria: used one or more MSF instruments to assess physician performance in practice; reported psychometric evidence for the instrument(s) in the form of reliability, generalizability coefficients, and construct or criterion-related validity; and provided information regarding the administration or feasibility of the process of collecting the feedback data. Of the 96 full-text articles assessed for eligibility, 43 were included. MSF has been shown to be an effective method for providing feedback to physicians from a multitude of specialties about their clinical and nonclinical (i.e., professionalism, communication, interpersonal relationships, management) performance. In general, assessment of physician performance was based on completion of the MSF instruments by 8 medical colleagues, 8 coworkers, and 25 patients to achieve adequate reliability and generalizability coefficients of α ≥ 0.90 and Ep ≥ 0.80, respectively. The use of MSF employing medical colleagues, coworkers, and patients as a method to assess physicians in practice has been shown to have high reliability, validity, and feasibility. (A decision-study sketch of this rater-number calculation follows the list below.)
    Academic Medicine: Journal of the Association of American Medical Colleges 01/2014; DOI:10.1097/ACM.0000000000000147 · 2.34 Impact Factor
  • ABSTRACT: The American Board of Medical Specialties Maintenance of Certification Program (ABMS MOC) is designed to provide a comprehensive approach to physician lifelong learning, self-assessment, and quality improvement (QI) through its 4-part framework and coverage of the 6 competencies previously adopted by the ABMS and the Accreditation Council for Graduate Medical Education (ACGME). In this article, the theoretical rationale and illustrative empirical data regarding the MOC program and its individual parts are reviewed. The value of each part is considered in relation to 4 criteria about the relationship of the competencies addressed within that part to (1) patient outcomes, (2) physician performance, (3) validity of the assessment or educational methods utilized, and (4) learning or improvement potential. Overall, a sound theoretical rationale and a respectable evidence base exist to support the current structure and elements of the MOC program. However, it is incumbent on the ABMS and its member boards to continue to examine their programs, to assure the public and the profession that the programs meet expectations, are clinically relevant, and provide value to patients and participating physicians, and to refine and improve them as ongoing research indicates.
    Journal of Continuing Education in the Health Professions 09/2013; 33(S1):S7-S19. DOI:10.1002/chp.21201 · 1.32 Impact Factor
  • ABSTRACT: The purpose of this study was to conduct a meta-analysis of the construct and criterion validity of multisource feedback (MSF) for assessing physicians and surgeons in practice. In this study, we followed the guidelines for the reporting of observational studies included in a meta-analysis. In addition to the PubMed and MEDLINE databases, the CINAHL, EMBASE, and PsycINFO databases were searched from January 1975 to November 2012. All articles listed in the references of the MSF studies were reviewed to ensure that all relevant publications were identified. All 35 articles were independently coded by two authors (AA, TD), and any discrepancies (e.g., effect size calculations) were reviewed by the other authors (KA, AD, CV). Physician/surgeon performance measures from 35 studies were identified. A random-effects model of weighted mean effect size differences (d) resulted in: construct validity coefficients for the MSF system on physician/surgeon performance across different levels in practice ranging from d=0.14 (95% confidence interval [CI] 0.40-0.69) to d=1.78 (95% CI 1.20-2.30); construct validity coefficients for the MSF on physician/surgeon performance on two different occasions ranging from d=0.23 (95% CI 0.13-0.33) to d=0.90 (95% CI 0.74-1.10); concurrent validity coefficients for the MSF based on differences in assessor group ratings ranging from d=0.50 (95% CI 0.47-0.52) to d=0.57 (95% CI 0.55-0.60); and predictive validity coefficients for the MSF on physician/surgeon performance across different standardized measures ranging from d=1.28 (95% CI 1.16-1.41) to d=1.43 (95% CI 0.87-2.00). The construct and criterion validity of the MSF system is supported by small to large effect size differences based on the MSF process and physician/surgeon performance across different clinical and nonclinical domain measures. (A random-effects pooling sketch follows the list below.)
    Advances in Medical Education and Practice 01/2014; 5:39-51. DOI:10.2147/AMEP.S57236
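The rater numbers cited in the Academic Medicine review above (8 medical colleagues, 8 coworkers, and 25 patients for Ep ≥ 0.80) are the kind of output a decision (D) study produces: given estimated variance components, project Ep^2 for increasing numbers of raters and stop at the target. A minimal sketch, with made-up variance components chosen only to show the pattern:

    def raters_needed(var_person, var_error, target=0.80, max_raters=60):
        """Smallest number of raters n with Ep^2(n) = var_p / (var_p + var_e / n) >= target."""
        for n in range(1, max_raters + 1):
            ep2 = var_person / (var_person + var_error / n)
            if ep2 >= target:
                return n, round(ep2, 3)
        return None

    # Illustrative variance components only (not estimates from the review):
    print(raters_needed(0.10, 0.625))  # noisy, patient-like raters   -> (25, 0.8)
    print(raters_needed(0.10, 0.20))   # colleague-like raters        -> (8, 0.8)

The same Spearman-Brown-style relationship explains why many more patient raters than colleague raters can be required for the same Ep^2.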

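The pooled effect sizes with 95% CIs in the meta-analysis abstract above come from a random-effects model; the DerSimonian-Laird estimator is a standard choice for such pooling and is sketched below. The per-study inputs are hypothetical, for illustration only, and this is not the review's analysis code:

    import numpy as np

    def dersimonian_laird(d, var):
        """Random-effects pooled effect size (DerSimonian-Laird) with a 95% CI."""
        d, var = np.asarray(d, float), np.asarray(var, float)
        w = 1.0 / var                               # fixed-effect weights
        d_fixed = (w * d).sum() / w.sum()
        q = (w * (d - d_fixed) ** 2).sum()          # Cochran's Q (heterogeneity)
        c = w.sum() - (w ** 2).sum() / w.sum()
        tau2 = max((q - (len(d) - 1)) / c, 0.0)     # between-study variance
        w_re = 1.0 / (var + tau2)                   # random-effects weights
        d_re = (w_re * d).sum() / w_re.sum()
        se = (1.0 / w_re.sum()) ** 0.5
        return d_re, (d_re - 1.96 * se, d_re + 1.96 * se)

    # Hypothetical per-study standardized mean differences and their variances:
    print(dersimonian_laird([0.3, 0.6, 0.9, 1.2], [0.04, 0.06, 0.05, 0.08]))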