Assessing communication competence: A review of current tools

Family Practice Center, Maine Medical Center, 272 Congress Street, Portland, ME 04101, USA.
Family Medicine (Impact Factor: 1.17). 04/2005; 37(3):184-92.
Source: PubMed


The assessment of communication competence has become a major priority of medical education, policy, and licensing organizations in the United States and Canada. Multiple tools are available to assess communication competence, but few studies compare them.
A consensus panel of six family medicine educators evaluated 15 instruments for measuring the physician-patient interview. The primary evaluation criteria came from the Kalamazoo Consensus Statement (KCS), produced by a multidisciplinary panel of experts that defined seven essential elements of physician-patient communication. We evaluated the psychometric properties of the instruments, along with other assessment criteria felt to be important to family physicians (exploring family issues, interview efficiency, and usability/practicality).
Instruments that received the highest ratings on KCS elements were designed for faculty raters and varied in their practicality/usability ratings and psychometric properties. Few instruments were rated high on psychometric properties or exploring family issues.
The process successfully reviewed and provided a framework for assessing communication skills instruments. The study should be expanded, including a larger cohort of reviewers, to strengthen the validity of the results and minimize potential biases.

    • "There is undoubted value therefore in considering alternative forms of measurement, such as patient or learner self-report. These types of measures offer a perspective on competencies that are difficult to assess using objective measures alone such as the extent to which the clinician is mindfully aware and adaptive [63]. Indeed our teaching of agenda-mapping included specific skills as well as a cognitive cue (ask–listen–log), an approach consistent with other authors [8] [61] [64], was an aspect of potential learning that we were not able to fully assess using EAGL-I. "
    ABSTRACT: To develop and validate the Evaluation of AGenda-mapping skilL Instrument (EAGL-I). EAGL-I was constructed after a literature review and piloting. Simulated consultation recordings were collected in a workshop with third-year medical students at three time points: once pre-teaching and twice post-teaching. Three raters used EAGL-I to assess student agenda-mapping. We examined reliability, the ability to detect change, and the ability to predict full expression of patients' agendas. EAGL-I scores represented reliable assessment of agenda-mapping (Eρ² = 0.832; Φ = 0.675). Generalizability coefficients across items (Eρ² = 0.836) and raters (Φ = 0.797 with two raters) were acceptable. A one-way repeated-measures ANOVA with post hoc analysis found a statistically significant difference between the pre-teaching occasion of measurement and each post-teaching occasion (p < 0.001) and no significant difference between the two post-teaching occasions (p = 0.085). Multilevel logistic regression showed that scores predict the expression of scripted hidden agendas irrespective of occasion or patient scenario (n = 60, p = 0.005). Evidence of measure validation is shown. Reliability is optimised when two or more raters use EAGL-I and agenda-mapping has been taught. EAGL-I appears sensitive to change. Higher scores predict the likelihood that a patient will disclose their full agenda in a simulated environment. A validated tool for measuring agenda-mapping in teaching and research is now available. (An illustrative sketch of this kind of repeated-measures comparison follows this entry.)
    Patient Education and Counseling 07/2015; 98(10). DOI:10.1016/j.pec.2015.06.018 · 2.20 Impact Factor
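
    As a rough illustration of the repeated-measures comparison reported above, the following Python sketch runs a one-way repeated-measures ANOVA across three occasions of measurement using statsmodels' AnovaRM. The scores, the number of students, and the occasion labels are fabricated for illustration; this is not the study's data or exact analysis.

        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        # Hypothetical EAGL-I scores for six students at three occasions
        # (one pre-teaching, two post-teaching), mirroring the study design.
        scores = {
            "pre":   [2.1, 1.8, 2.4, 1.9, 2.2, 2.0],
            "post1": [3.6, 3.2, 3.9, 3.4, 3.7, 3.5],
            "post2": [3.5, 3.3, 3.8, 3.2, 3.6, 3.4],
        }
        rows = [
            {"student": s, "occasion": occ, "score": v}
            for occ, vals in scores.items()
            for s, v in enumerate(vals)
        ]

        # One observation per student per occasion (balanced), as AnovaRM requires.
        result = AnovaRM(pd.DataFrame(rows), depvar="score",
                         subject="student", within=["occasion"]).fit()
        print(result)  # F test for the occasion effect

    A significant occasion effect would then be followed up with post hoc pairwise comparisons, as in the study.
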
    • "These instruments, as well as the rationale for their choice, are described elsewhere [8,15,31-34]. In summary, the SEGUE framework is a 25-item (yes/no) checklist designed by Makoul to facilitate teaching and assessment of critical communication tasks [32]. "
    ABSTRACT: Background: Communication is important for the quality of clinical practice, and programs have been implemented to improve healthcare providers’ communication skills. However, the consistency of programs teaching communication skills has received little attention, and debate exists about whether acquired skills carry over to real patients. This study examines whether (1) results from a communication program are replicated with different samples, and (2) results with standardized patients apply to interviews with real patients. Methods: A structured, nine-month communication program was delivered in two consecutive years to two different samples of healthcare professionals (25 in the first year, 20 in the second year). Results were assessed at four points in time each year, covering participants’ confidence levels (self-rated), basic communication skills in interviews with standardized patients, and basic communication skills in interviews with real patients. Data were analyzed using GLM repeated-measures procedures. Results: Improvements were statistically significant in both years on all measures except the simulated patients’ assessment of the 2008 group. Differences between the two samples were non-significant. Differences between interviews with standardized and with real patients were also non-significant (a simplified paired-comparison sketch follows this entry). Conclusions: The program’s positive outcomes were replicated in different samples, and acquired skills were successfully applied to real-patient interviews. This reinforces this type of program structure as a valuable training tool, with results translating into real situations. It also adds to the reliability of the assessment instruments employed, though these may need adaptation in the case of real patients.
    BMC Medical Education 05/2014; 14(1):92. DOI:10.1186/1472-6920-14-92 · 1.22 Impact Factor
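
    As a simplified stand-in for the GLM repeated-measures procedure the study reports, the sketch below uses a paired t-test (scipy) to compare per-participant scores from standardized-patient and real-patient interviews. All values are fabricated for illustration.

        from scipy import stats

        # Fabricated skill scores for the same ten participants in two settings.
        standardized = [3.4, 3.8, 3.1, 3.6, 3.9, 3.2, 3.5, 3.7, 3.3, 3.6]
        real         = [3.5, 3.7, 3.0, 3.8, 3.8, 3.3, 3.4, 3.8, 3.2, 3.7]

        # Paired comparison: is there a systematic difference between settings?
        t, p = stats.ttest_rel(standardized, real)
        print(f"t = {t:.2f}, p = {p:.3f}")

    A large p-value here would mirror the study's non-significant difference between standardized- and real-patient interviews.
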
    • "The modified Calgary-Cambridge checklist, developed by Silverman et al. (2005) and Kurtz et al. (2003) with a high intra-observer reliability (Cronbach's Alpha ¼ 0.95) as reported by Schirmer et al. (2005), was used to assess the communication skills of undergraduate medical students from pre-recorded interview videos. The modification to the Calgary-Cambridge guide was performed to tailor to the assessments of communication skills competencies of the more junior medical students (Year 1 and 2), to suit the case scenario for the study better and to cater for the shorter duration of the interviews (five minutes; Appendix 1). "
    ABSTRACT: Introduction: The complexity of modern medicine creates new challenges for the teaching and assessment of communication skills in undergraduate medical programmes. This research was conducted to study the level of communication skills among undergraduate medical students and to determine the difference between simulated patients' and clinical instructors' assessments of communication skills. Methods: This comparative study was conducted over three months at the Clinical Skills and Simulation Centre of the International Medical University in Malaysia. The modified Calgary-Cambridge checklist was used to assess the communication skills of 50 first-year and 50 second-year medical students (five-minute pre-recorded interview videos on a sore-throat scenario). These videos were reviewed and scored by simulated patients (SPs), communication skills instructors (CSIs), and non-communication skills instructors (non-CSIs). Results: Better performance was observed among the undergraduate medical students who had formal training in communication skills, with a significant difference in overall scores between the first- and second-year medical students (p = 0.0008). A non-significant difference existed between the scores of SPs and CSIs for Year 1 (p = 0.151). Conclusions: SPs can be trained and involved in the assessment of communication skills. Formal training in communication skills is necessary in the undergraduate medical programme. (A short sketch of the Cronbach's alpha calculation quoted above follows this entry.)
    Medical Teacher 05/2014; 36(7). DOI:10.3109/0142159X.2014.899689 · 1.68 Impact Factor
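
    Cronbach's alpha, quoted above for the modified Calgary-Cambridge checklist, is straightforward to compute from a score matrix. The sketch below implements the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), on fabricated ratings; the item count and values are illustrative only, not those of the checklist itself.

        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            """Cronbach's alpha for an (observations x items) score matrix."""
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)      # variance of each item
            total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        # Fabricated ratings: six recorded interviews scored on four checklist items.
        ratings = np.array([
            [3, 4, 3, 4],
            [2, 2, 3, 2],
            [4, 4, 4, 5],
            [1, 2, 1, 2],
            [3, 3, 4, 3],
            [5, 4, 5, 5],
        ])
        print(f"alpha = {cronbach_alpha(ratings):.3f}")
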