Mark R Raymond

National Board of Medical Examiners, Philadelphia, Pennsylvania, United States

Publications (11) · 17.45 total impact

  •
    ABSTRACT: This paper illustrates the utility of practice analysis for informing curriculum and assessment design in professions education. The paper accomplishes three objectives: (1) Introduces four healthcare utilization surveys administered by the National Center for Health Statistics (NCHS); (2) Summarizes selected results from one of these surveys, the National Hospital Ambulatory Medical Care Survey – Emergency Department (NHAMCS-ED); and (3) Illustrates how the data can inform decisions regarding the design of curricula and assessments in professions education. The survey tracks over 129 million patient visits to various healthcare facilities, documenting the health problems prompting those visits, the diagnostic studies performed, and the types of services provided. While the specific examples are relevant to nursing, medicine, and other healthcare fields, the general principles apply to other professions.
    American Educational Research Association Annual Meeting; 05/2013
  •
    ABSTRACT: PURPOSE: Previous studies on standardized patient (SP) exams reported score gains both across attempts when examinees failed and retook the exam and over multiple SP encounters within a single exam session. The authors analyzed the within-session score gains of examinees who repeated the United States Medical Licensing Examination Step 2 Clinical Skills to answer two questions: How much do scores increase within a session? Can the pattern of increasing first-attempt scores account for across-session score gains? METHOD: Data included encounter-level scores for 2,165 U.S. and Canadian medical students and graduates who took Step 2 Clinical Skills twice between April 1, 2005 and December 31, 2010. The authors modeled examinees' score patterns using smoothing and regression techniques and applied statistical tests to determine whether the patterns were the same or different across attempts. In addition, they tested whether any across-session score gains could be explained by the first-attempt within-session score trajectory. RESULTS: For the first and second attempts, the authors attributed examinees' within-session score gains to a pattern of score increases over the first three to six SP encounters followed by a leveling off. Model predictions revealed that the authors could not attribute the across-session score gains to the first-attempt within-session score gains. CONCLUSIONS: The within-session score gains over the first three to six SP encounters of both attempts indicate that there is a temporary "warm-up" effect on performance that "resets" between attempts. Across-session gains are not due to this warm-up effect and likely reflect true improvement in performance.
    Academic medicine: journal of the Association of American Medical Colleges 03/2013; · 2.34 Impact Factor
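    One hedged way to picture the within-session trajectory described above is to smooth mean encounter scores against encounter position. The sketch below simulates encounter-level scores with an assumed warm-up shape and applies a LOWESS smoother; the data, score scale, and smoother settings are illustrative assumptions, not the operational Step 2 CS data or the authors' actual smoothing and regression models.

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(1)
    n_examinees, n_encounters = 500, 12

    # Simulated encounter-level scores with a warm-up shape: scores rise over the
    # first few encounters and then level off, plus random noise (assumed values).
    position = np.arange(1, n_encounters + 1)
    warmup = 70 + 4 * (1 - np.exp(-(position - 1) / 2.5))
    scores = warmup + rng.normal(0, 6, size=(n_examinees, n_encounters))

    # Smooth the mean score at each encounter position across examinees.
    mean_by_position = scores.mean(axis=0)
    for pos, fit in lowess(mean_by_position, position, frac=0.6):
        print(int(pos), round(fit, 2))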
  •
    ABSTRACT: Although a few studies report sizable score gains for examinees who repeat performance-based assessments, research has not yet addressed the reliability and validity of inferences based on ratings of repeat examinees on such tests. This study analyzed scores for 8,457 single-take examinees and 4,030 repeat examinees who completed a 6-hour clinical skills assessment required for physician licensure. Each examinee was rated in four skill domains: data gathering, communication-interpersonal skills, spoken English proficiency, and documentation proficiency. Conditional standard errors of measurement computed for single-take and multiple-take examinees indicated that ratings were of comparable precision for the two groups within each of the four skill domains; however, conditional errors were larger for low-scoring examinees regardless of retest status. In addition, on their first attempt, multiple-take examinees exhibited less score consistency across the skill domains, but on their second attempt their scores became more consistent. Further, the median correlation between scores on the four clinical skill domains and three external measures was .15 for multiple-take examinees on their first attempt but increased to .27 for their second attempt, a value comparable to the median correlation of .26 for single-take examinees. The findings support the validity of inferences based on scores from the second attempt.
    Journal of Educational Measurement 12/2012; 49(4):339-361. · 1.00 Impact Factor
  •
    ABSTRACT: Item-level information, such as difficulty and discrimination, is invaluable to test assembly, equating, and scoring practices. Estimating these parameters within the context of large-scale performance assessments is often hindered by the use of unbalanced designs for assigning examinees to tasks and raters because such designs result in very sparse data matrices. This article addresses some of these issues using a multistage confirmatory factor analytic approach. The approach is illustrated using data from a performance test in medicine for which examinees encounter multiple patients with medical problems (tasks), with each problem portrayed by a different trained patient (rater). A series of models was fit to rating data (1) to obtain alternative task difficulty and discrimination parameters and (2) to evaluate the improvement in model fit obtained by accounting for rater and test-site effects. The results suggest that the availability of alternative task parameter estimates can be useful in practice for making decisions related to task banking, rater training, and test assembly.
    Applied Measurement in Education 01/2012; 25(1):79-95.
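    As a hedged sketch of the link between factor-analytic parameters and the task statistics mentioned above, a single-factor model for continuous task scores can be written as follows, where the task intercept plays the role of a difficulty (easiness) parameter and the loading plays the role of a discrimination parameter; the multistage model in the article itself is more elaborate and also accounts for rater and test-site effects.

        X_{pj} = \tau_j + \lambda_j \theta_p + \varepsilon_{pj}, \qquad
        \theta_p \sim \mathcal{N}(0, 1), \quad \varepsilon_{pj} \sim \mathcal{N}(0, \sigma^2_j)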
  •
    ABSTRACT: Examinees who initially fail and later repeat a standardized patient (SP)-based clinical skills exam typically exhibit large score gains on their second attempt, suggesting the possibility that examinees were not well measured on one of those attempts. This study evaluates score precision for examinees who repeated an SP-based clinical skills test administered as part of the US Medical Licensing Examination sequence. Generalizability theory was used as the basis for computing conditional standard errors of measurement (SEM) for individual examinees. Conditional SEMs were computed for approximately 60,000 single-take examinees and 5,000 repeat examinees who completed the Step 2 Clinical Skills Examination® between 2007 and 2009. The study focused exclusively on ratings of communication and interpersonal skills. Conditional SEMs for single-take and repeat examinees were nearly indistinguishable across most of the score scale. US graduates and international medical graduates (IMGs) were measured with equal levels of precision at all score levels, as were examinees with differing levels of spoken English proficiency. There was no evidence that examinees with the largest score changes were measured poorly on either their first or second attempt. The large score increases for repeat examinees on this SP-based exam probably cannot be attributed to unexpectedly large errors of measurement.
    Advances in Health Sciences Education 10/2011; 17(3):325-37. · 2.06 Impact Factor
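    A minimal sketch of how a conditional absolute SEM of this kind can be computed for a simple persons-by-encounters design, where the examinee's score of record is the mean over n encounters. The function and the example ratings are illustrative assumptions, not the operational Step 2 CS scoring.

    import numpy as np

    def conditional_sem(encounter_scores):
        # Conditional absolute SEM for one examinee in a p x i design:
        # the standard error of the examinee's mean across n encounters.
        x = np.asarray(encounter_scores, dtype=float)
        return np.sqrt(x.var(ddof=1) / x.size)

    # Illustrative example: communication ratings for one examinee over 12 encounters.
    print(round(conditional_sem([6, 7, 5, 6, 7, 8, 6, 7, 7, 6, 8, 7]), 3))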
  •
    ABSTRACT: Studies completed over the past decade suggest the presence of a gap between what students learn during medical school and their clinical responsibilities as first-year residents. The purpose of this survey was to verify on a large scale the responsibilities of residents during their initial months of training. Practice analysis surveys were mailed in September 2009 to 1,104 residency programs for distribution to an estimated 8,793 first-year residents. Surveys were returned by 3,003 residents from 672 programs; 2,523 surveys met inclusion criteria and were analyzed. New residents performed a wide range of activities, from routine but important communications (e.g., obtaining informed consent) to complex procedures (e.g., thoracentesis), often without the attending physician present or otherwise involved. Medical school curricula and the content of competence assessments prior to residency should consider more thorough coverage of the complex knowledge and skills required early in residency.
    Academic medicine: journal of the Association of American Medical Colleges 10/2011; 86(10 Suppl):S59-62. · 2.34 Impact Factor
  •
    ABSTRACT: Prior studies report large score gains for examinees who fail and later repeat standardized patient (SP) assessments. Although research indicates that score gains on SP exams cannot be attributed to memorizing previous cases, no studies have investigated the empirical validity of scores for repeat examinees. This report compares single-take and repeat examinees in terms of both internal (construct) validity and external (criterion-related) validity. Data consisted of test scores for examinees who took the United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam between July 16, 2007, and September 12, 2009. The sample included 12,090 examinees who completed Step 2 CS on one occasion and another 4,030 examinees who completed the exam on two occasions. The internal measures included four separately scored performance domains of the Step 2 CS examination, whereas the external measures consisted of scores on three written assessments of medical knowledge (Step 1, Step 2 clinical knowledge, and Step 3). The authors subjected the four Step 2 CS domains to confirmatory factor analysis and evaluated correlations between Step 2 CS scores and the three written assessments for single-take and repeat examinees. The factor structure for repeat examinees on their first attempt was markedly different from the factor structure for single-take examinees, but it became more similar to that for single-take examinees by their second attempt. Scores on the second attempt correlated more highly with all three external measures. The findings support the validity of scores for repeat examinees on their second attempt.
    Academic medicine: journal of the Association of American Medical Colleges 08/2011; 86(10):1253-9. · 2.34 Impact Factor
  • Mark R. Raymond, Polina Harik, Brian E. Clauser
    ABSTRACT: Prior research indicates that the overall reliability of performance ratings can be improved by using ordinary least squares (OLS) regression to adjust for rater effects. The present investigation extends previous work by evaluating the impact of OLS adjustment on standard errors of measurement (SEM) at specific score levels. In addition, a cross-validation (i.e., resampling) design was used to determine the extent to which any improvements in measurement precision would be realized for new samples of examinees. Conditional SEMs were largest for scores toward the low end of the score distribution and smallest for scores at the high end. Conditional SEMs for adjusted scores were consistently less than conditional SEMs for observed scores, although the reduction in error was not uniform throughout the distribution. The improvements in measurement precision held up for new samples of examinees at all score levels.
    Applied Psychological Measurement 01/2011; 35(3):235-246. · 1.49 Impact Factor
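    The general idea of an OLS adjustment for rater effects can be sketched as follows: regress ratings on examinee and rater indicators, then subtract each rater's estimated (centered) effect from the ratings that rater provided. The toy data, dummy-coded main-effects model, and variable names are assumptions for illustration; the study itself used operational rating data and a more elaborate adjustment procedure.

    import numpy as np

    # Toy records: (examinee_id, rater_id, rating); values are purely illustrative.
    data = [(0, 0, 6.0), (0, 1, 7.5), (1, 0, 5.0), (1, 2, 6.5),
            (2, 1, 8.0), (2, 2, 7.0), (3, 0, 4.5), (3, 1, 6.0)]
    exm = np.array([d[0] for d in data])
    rtr = np.array([d[1] for d in data])
    y = np.array([d[2] for d in data])
    n_exm, n_rtr = exm.max() + 1, rtr.max() + 1

    # Main-effects design matrix: intercept + examinee dummies + rater dummies
    # (first level of each factor dropped to avoid collinearity).
    X = np.column_stack([np.ones_like(y)]
                        + [(exm == j).astype(float) for j in range(1, n_exm)]
                        + [(rtr == k).astype(float) for k in range(1, n_rtr)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Estimated rater effects, centered so the adjustment leaves the overall scale unchanged.
    rater_fx = np.concatenate([[0.0], beta[n_exm:]])
    rater_fx -= rater_fx.mean()

    # Adjusted rating = observed rating minus the effect of the rater who provided it.
    print(np.round(y - rater_fx[rtr], 2))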
  • Mark R Raymond, Brian E Clauser, Gail E Furman
    ABSTRACT: The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution, and that the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution, with the most improvement occurring in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
    Advances in Health Sciences Education 10/2010; 15(4):587-600. · 2.06 Impact Factor
  • Mark R Raymond, Ulana A Luciw-Dubas
    ABSTRACT: Years of research with high-stakes written tests indicates that although repeat examinees typically experience score gains between their first and subsequent attempts, their pass rates remain considerably lower than pass rates for first-time examinees. This outcome is consistent with expectations. Comparable studies of the performance of repeat examinees on oral examinations are lacking. The current research evaluated pass rates for more than 50,000 examinees on written and oral exams administered by six medical specialty boards for several recent years. Pass rates for first-time examinees were similar for both written and oral exams, averaging about 84% across all boards. Pass rates for repeat examinees on written exams were expectedly lower, ranging from 22% to 51%, with an average of 36%. However, pass rates for repeat examinees on oral exams were markedly higher than for written exams, ranging from 53% to 77%, with an average of 65%. Four explanations for the elevated repeat pass rates on oral exams are proposed, including an increase in examinee proficiency, construct-irrelevant variance, measurement error (score unreliability), and memorization of test content. Simulated data are used to demonstrate that roughly one third of the score increase can be explained by measurement error alone. The authors suggest that a substantial portion of the score increase can also likely be attributed to construct-irrelevant variance. Results are discussed in terms of their implications for making pass-fail decisions when retesting is allowed. The article concludes by identifying areas for future research.
    Evaluation & the Health Professions 09/2010; 33(3):386-403. · 1.48 Impact Factor
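    The measurement-error explanation can be illustrated with a small regression-toward-the-mean simulation: examinees near the cut score who fail on one noisy attempt will often pass a second noisy attempt even when nothing true has changed. The ability distribution, reliability, and cut score below are assumed values chosen only to mimic an approximately 84% first-time pass rate, not the specialty-board data analyzed in the article.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000          # simulated examinees
    reliability = 0.85   # assumed score reliability
    cut = -1.0           # cut score in z units (first-time pass rate near 84%)

    # True ability plus independent measurement error on each attempt;
    # error variance is set so that var(observed) = 1 at the stated reliability.
    theta = rng.normal(0.0, np.sqrt(reliability), n)
    err_sd = np.sqrt(1.0 - reliability)
    attempt1 = theta + rng.normal(0.0, err_sd, n)
    attempt2 = theta + rng.normal(0.0, err_sd, n)   # no true change between attempts

    failed_first = attempt1 < cut
    print("first-time pass rate:", round(1 - failed_first.mean(), 3))
    print("repeat pass rate from error alone:",
          round((attempt2[failed_first] >= cut).mean(), 3))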
  •
    ABSTRACT: Previous research has shown that ratings of English proficiency on the United States Medical Licensing Examination Clinical Skills Examination are highly reliable. However, the score distributions for native and nonnative speakers of English are sufficiently different to suggest that reliability should be investigated separately for each group. Generalizability theory was used to obtain reliability indices separately for native and nonnative speakers of English (N = 29,084). Conditional standard errors of measurement were also obtained for both groups to evaluate measurement precision for each group at specific score levels. Overall indices of reliability (phi) exceeded 0.90 for both native and nonnative speakers, and both groups were measured with nearly equal precision throughout the score distribution. However, measurement precision decreased at lower levels of proficiency for all examinees. The results of this and future studies may be helpful in understanding and minimizing sources of measurement error at particular regions of the score distribution.
    Academic medicine: journal of the Association of American Medical Colleges 10/2009; 84(10 Suppl):S83-5. · 2.34 Impact Factor
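    A minimal generalizability-theory sketch under an assumed fully crossed persons-by-raters design: variance components are estimated from the ANOVA mean squares and combined into the absolute-error dependability coefficient (phi) of the kind reported above. The toy rating matrix is an assumption for illustration; the operational design behind the study is more complex.

    import numpy as np

    # Toy ratings: rows = examinees (p), columns = raters (r), fully crossed.
    X = np.array([[7, 8, 7], [5, 6, 5], [8, 8, 9], [4, 5, 4], [6, 7, 6]], dtype=float)
    n_p, n_r = X.shape
    grand, p_means, r_means = X.mean(), X.mean(axis=1), X.mean(axis=0)

    # Mean squares for the p x r design (residual = interaction confounded with error).
    ms_p = n_r * np.sum((p_means - grand) ** 2) / (n_p - 1)
    ms_r = n_p * np.sum((r_means - grand) ** 2) / (n_r - 1)
    ms_pr = np.sum((X - p_means[:, None] - r_means[None, :] + grand) ** 2) / ((n_p - 1) * (n_r - 1))

    # Expected-mean-square equations give the variance components.
    var_pr = ms_pr
    var_p = max((ms_p - ms_pr) / n_r, 0.0)
    var_r = max((ms_r - ms_pr) / n_p, 0.0)

    # Dependability (phi) of a mean score over n_r raters; absolute error includes rater effects.
    phi = var_p / (var_p + (var_r + var_pr) / n_r)
    print(round(phi, 3))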