Objective structured assessment of technical competence in transthoracic echocardiography: A validity study in a standardised setting

BMC Medical Education (Impact Factor: 1.22). 03/2013; 13(1):47. DOI: 10.1186/1472-6920-13-47
Source: PubMed


Competence in transthoracic echocardiography (TTE) correlates poorly with traditional proxy measures of TTE competence, such as duration of training and number of examinations performed. This study aims to explore aspects of validity of an instrument for structured assessment of echocardiographic technical skills.

The study included 45 physicians at three different clinical levels of echocardiography competence, who all scanned the same healthy male following national guidelines. An expert in echocardiography (OG) evaluated all the recorded, de-identified TTE images blindly using the developed instrument for assessment of TTE technical skills. The instrument consisted of both a global rating scale and a procedure-specific checklist. Two scores were calculated for each examination: a global rating score and a total checklist score. OG rated ten examinations twice for intra-rater reliability, and another expert rated the same ten examinations for inter-rater reliability. A small pilot study was then performed with a focus on content validity. This pilot study included nine physicians who scanned three patients with different pathologies and varying technical difficulty.

Validity of the TTE technical skills assessment instrument was supported by a significant correlation between level of expertise and both the global score (Spearman 0.76, p<0.0001) and the checklist score (Spearman 0.74, p<0.001). Both scores were able to distinguish between the three levels of competence represented in the physician group. Reliability was supported by acceptable inter- and intra-rater values. The pilot study showed a tendency toward improved scores with increasing expertise, suggesting that the instrument could also be used when pathologies were present.
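As a rough illustration of the analysis reported above, Spearman's rank correlation between expertise level and assessment score can be computed as below. The data are invented for illustration only and are not the study's; the tie-handling (average ranks) matches the standard definition of Spearman's rho.

```python
def average_ranks(values):
    """Assign 1-based ranks, sharing the average rank among tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: expertise level (1=novice, 2=intermediate, 3=expert)
# versus global rating score for six physicians.
expertise = [1, 1, 2, 2, 3, 3]
global_score = [10, 12, 15, 14, 20, 22]

rho = spearman(expertise, global_score)  # ≈ 0.96 for these invented data
```

A near-monotone relationship between expertise and score, as in the study, yields a rho close to 1 even when expertise levels are tied within groups.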

We designed and developed a structured instrument for assessment of echocardiographic technical skills that showed evidence of validity, in terms of high correlations between test scores obtained on a healthy subject and the level of physician competence, as well as acceptable inter- and intra-rater reliability. Further studies should, however, be performed to determine the number of assessments needed to ensure high content validity and reliability in a clinical setting.



Available from: Dorte Guldbrand Nielsen, Jun 03, 2014
    • "Classical parameters, such as operator caseload or certifications, have proven to be insufficient to guarantee a high-quality echocardiography.25,29 A structured and objective tool can be implemented in every healthcare institution according to the local needs (ie, audits regarding indications, availability, performance, and diagnostic accuracy). "
    ABSTRACT: Echocardiography accounts for nearly half of all cardiac imaging techniques. It is a widely available and adaptable tool, as well as a cost-effective and mainly non-invasive test. In addition, echocardiography provides extensive clinical data, related to the presence or advent of different modalities (tissue Doppler imaging, speckle tracking imaging, three-dimensional mode, contrast echo, etc.), different approaches (transesophageal, intravascular, etc.), and different applications (heart failure/resynchronization studies, ischemia/stress echo, etc.). In view of this, it is essential to conform to criteria of appropriate use and to maintain standards of competence. In this study, we sought to review and discuss the clinical practice of echocardiography in light of the criteria of appropriate clinical use, and we present an insight into echocardiographic technical competence and a quality improvement project.
    Clinical Medicine Insights: Cardiology 01/2014; 8:1-7. DOI:10.4137/CMC.S13645
    ABSTRACT: There has been a recent explosion of education and training in echocardiography in the specialties of anesthesiology and critical care. Echocardiographic devices, by their impact on clinical management, are changing the way surgery is performed and critical care is delivered. A number of international bodies have made recommendations for training and developed examinations and accreditations. The challenge to medical educators in this area is to deliver the training needed to achieve competence into already overstretched curricula. The authors found an apparent increase in the use of simulators, with proven efficacy in improving technical skills and knowledge. There is still an absence of evidence on how simulation should be included in training programs and in the accreditation of certain levels. There is a conviction that this form of simulation can enhance and accelerate the understanding and practice of echocardiography by anesthesiologists and intensivists, particularly at the beginning of the learning curve.
    Anesthesiology 11/2013; 120(1). DOI:10.1097/ALN.0000000000000072 · 5.88 Impact Factor
    ABSTRACT: Background: Transthoracic echocardiography (TTE) is a widely used cardiac imaging technique that all cardiologists should be able to perform competently. Traditionally, TTE competence has been assessed by unstructured observation or in test situations separated from daily clinical practice. An instrument for assessment of clinical TTE technical proficiency, comprising a global rating score and a checklist score, has previously shown reliability and validity in a standardised setting. As clinical test situations typically have several sources of error giving rise to variance in scores, a more thorough examination of the generalizability of the assessment instrument is needed. Methods: Nine physicians performed a TTE scan on the same three patients. Two raters then rated all 27 TTE scans using the TTE technical assessment instrument in a fully crossed, all-random generalizability study. Estimated variance components were calculated for both the global rating and checklist scores. Finally, dependability (phi) coefficients were also calculated for both outcomes in a decision study. Results: For global rating scores, 66.6% of score variance could be ascribed to true differences in performance; for checklist scores this was 88.8%. The difference was primarily due to physician-rater interaction. Four random cases rated by one random rater resulted in a phi value of 0.81 for global ratings, and two random cases rated by one random rater showed a phi value of 0.92 for checklist scores. Conclusions: Using the TTE checklist as opposed to the TTE global rating score minimised the largest source of error variance in test scores. Two cases rated by one rater using the TTE checklist are sufficiently reliable for high-stakes examinations. As global rating is less time consuming, performing four global rating assessments in addition to the checklist assessments could be considered, to account for both reliability and content validity of the assessment.
    BMC Medical Education 02/2015; 15(1):9. DOI:10.1186/s12909-015-0294-5 · 1.22 Impact Factor
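The dependability (phi) coefficients in the generalizability study described above come from a decision study over a fully crossed person x case x rater design. A minimal sketch of that calculation, using invented variance components (not the study's estimates), illustrates how increasing the number of cases or raters shrinks absolute error variance and raises phi:

```python
def phi(var, n_cases, n_raters):
    """Dependability coefficient for a fully crossed p x c x r design:
    true-score variance over true-score variance plus absolute error variance,
    with each error component divided by the number of conditions sampled."""
    abs_error = (var["c"] / n_cases
                 + var["r"] / n_raters
                 + var["pc"] / n_cases
                 + var["pr"] / n_raters
                 + var["cr"] / (n_cases * n_raters)
                 + var["pcr_e"] / (n_cases * n_raters))
    return var["p"] / (var["p"] + abs_error)

# Illustrative variance components (person, case, rater, and interactions;
# "pcr_e" is the residual). These numbers are hypothetical.
components = {"p": 4.0, "c": 0.3, "r": 0.2,
              "pc": 0.8, "pr": 0.6, "cr": 0.1, "pcr_e": 1.0}

phi_1case_1rater = phi(components, 1, 1)   # 4.0 / 7.0 ≈ 0.57
phi_4cases_1rater = phi(components, 4, 1)  # ≈ 0.75: more cases, less error
```

With real estimated variance components in place of these invented ones, the same formula reproduces the kind of D-study results the abstract reports (e.g. a phi of 0.81 for four cases and one rater).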