
A Score Comparability Study for the NBDHE: Paper-Pencil Versus Computer Versions

Evaluation &amp the Health Professions (Impact Factor: 1.67). 05/2012; 36(2). DOI: 10.1177/0163278712445203
Source: PubMed

ABSTRACT This study evaluated the comparability of a paper-pencil (PP) version and two computer-based (CB) versions of the National Board Dental Hygiene Examination. Comparability was evaluated by validity and psychometric criteria. Data were collected from the following sources: (1) 4,560 candidates enrolled in accredited dental hygiene programs who took the PP version in spring 2009, (2) 973 and 1,033 candidates enrolled in accredited dental hygiene programs who took two separate CB versions in 2009, and (3) survey data from 2,486 candidates who took the CB versions in 2009. The results from the PP and CB versions were found to be comparable on several criteria.
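The abstract reports comparability on "several criteria" without naming them. As one concrete illustration of the kind of check such studies run, the minimal sketch below computes a Welch t-test and a standardized mean difference (Cohen's d) between simulated PP and CB total scores. The score distributions, seed, and thresholds are invented for illustration; they are not the study's actual criteria, data, or results.

```python
# Hypothetical sketch: one psychometric comparability check between a
# paper-pencil (PP) and a computer-based (CB) form -- a standardized mean
# difference on total scores. All numbers here are simulated, not the
# NBDHE study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pp_scores = rng.normal(loc=82.0, scale=8.0, size=4560)   # simulated PP totals
cb_scores = rng.normal(loc=81.6, scale=8.2, size=1033)   # simulated CB totals

# Welch's t-test: does the mean difference between modes reach significance?
t_stat, p_value = stats.ttest_ind(pp_scores, cb_scores, equal_var=False)

# Cohen's d with a pooled standard deviation, a common mode-effect metric.
n1, n2 = len(pp_scores), len(cb_scores)
pooled_sd = np.sqrt(((n1 - 1) * pp_scores.var(ddof=1) +
                     (n2 - 1) * cb_scores.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (pp_scores.mean() - cb_scores.mean()) / pooled_sd

print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.3f}")
```

A |d| near zero alongside a nonsignificant t would support, though not by itself establish, score comparability; operational studies typically also compare reliabilities, pass rates, and item-level statistics across modes.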

  • ABSTRACT: In addition to the potential that computer-based testing (CBT) offers, empirical evidence has shown that identical computerized and paper-and-pencil tests do not always produce equivalent test-taker performance, a phenomenon referred to as the "mode effect." Previous literature has identified many factors that may be responsible for this differential performance. The aim of this review was to explore these factors, which typically fall into two categories, participant issues and technological issues, and to highlight their potential impact on performance.
    International Journal of Testing 03/2006; 6(1):1-24. DOI: 10.1207/s15327574ijt0601_1
  • ABSTRACT: In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode of test delivery in the future. Although CBT offers many advantages over traditional paper-and-pencil testing, assessment experts, researchers, practitioners, and users have expressed concern about the comparability of scores across the two administration modes. To address this issue, a meta-analysis was conducted to synthesize the administration mode effects of CBTs and paper-and-pencil tests on K-12 student reading assessments. Findings indicate that administration mode had no statistically significant effect on K-12 student reading achievement scores. Four moderator variables (study design, sample size, computer delivery algorithm, and computer practice) made statistically significant contributions to predicting effect size; three others (grade level, type of test, and computer delivery method) did not affect the differences in reading scores between test modes.
    Educational and Psychological Measurement (Impact Factor: 1.17). 01/2007; 68(3). DOI: 10.1177/0013164407305592
  • ABSTRACT: This study conducted a meta-analysis of computer-based and paper-and-pencil administration mode effects on K-12 student mathematics tests. Both initial and final results based on fixed- and random-effects models are presented (a sketch of this kind of pooling follows the list). The results based on the final selected studies, which had homogeneous effect sizes, show that administration mode had no statistically significant effect on K-12 student mathematics tests. Only the moderator variable of computer delivery algorithm contributed to predicting the effect size: differences in scores between test modes were larger for linear tests than for adaptive tests. Variables such as study design, grade level, sample size, type of test, computer delivery method, and computer practice did not lead to differences in student mathematics scores between the two modes.
    Educational and Psychological Measurement (Impact Factor: 1.17). 04/2007; 67(2):219-238. DOI: 10.1177/0013164406288166
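Both meta-analytic abstracts above invoke fixed- and random-effects models and homogeneous effect sizes. The sketch below shows the standard inverse-variance machinery those terms describe, including Cochran's Q for homogeneity and the DerSimonian-Laird estimate of between-study variance. The per-study effect sizes and variances are invented, and this is the generic method, not the authors' exact procedure.

```python
# Hypothetical sketch of fixed- vs random-effects meta-analytic pooling.
# Effect sizes and variances below are invented for illustration.
import numpy as np

d = np.array([0.05, -0.10, 0.02, 0.08, -0.03])     # per-study mode effects
v = np.array([0.010, 0.020, 0.015, 0.012, 0.018])  # per-study variances

# Fixed-effects: weight each study by the inverse of its variance.
w_fe = 1.0 / v
d_fe = np.sum(w_fe * d) / np.sum(w_fe)

# Cochran's Q tests homogeneity of the effect sizes.
Q = np.sum(w_fe * (d - d_fe) ** 2)
df = len(d) - 1

# DerSimonian-Laird estimate of between-study variance (tau^2).
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (Q - df) / c)

# Random-effects: add tau^2 to each study's variance before weighting.
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"fixed d = {d_fe:.3f}, Q = {Q:.2f} (df = {df}), tau^2 = {tau2:.4f}")
print(f"random d = {d_re:.3f}, 95% CI = "
      f"({d_re - 1.96 * se_re:.3f}, {d_re + 1.96 * se_re:.3f})")
```

When Q does not exceed its degrees of freedom, tau^2 estimates to zero and the random-effects pooled estimate collapses to the fixed-effects one, which is why the two models agree once the final selected studies have homogeneous effect sizes.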