The impact of next and back buttons on time to complete and measurement reliability in computer-based surveys

UCLA Department of Medicine, 911 Broxton Avenue, Room 110, Los Angeles, CA 90024-2801, USA.
Quality of Life Research (Impact Factor: 2.49). 10/2010; 19(8):1181-4. DOI: 10.1007/s11136-010-9682-9
Source: PubMed

ABSTRACT: To assess the impact of including next and back buttons on response burden and measurement reliability of computer-based surveys.
A sample of 807 participants (mean age 53; 64% women; 83% non-Hispanic white; 81% with some college or a college degree) from the YouGov Polimetrix panel was administered 56 items assessing performance of social/role activities and 56 items measuring satisfaction with social/role activities. Participants were randomly assigned to one of four conditions: (1) automatic advance to the next question with no opportunity to go back (auto/no back); (2) automatic advance to the next question with an opportunity to go back (auto/back); (3) next button to go to the next question with no opportunity to go back (next/no back); or (4) next button to go to the next question with an opportunity to go back (next/back).
We found no differences in missing data, internal consistency reliability, or domain scores by group. Time to complete the survey was about 50% longer when respondents were required to use a next button to advance.
Given the similarity in missing data, reliability, and mean scale scores with or without use of the next button, we recommend automatic advancement to the next item with the option to go back to the previous item.
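
A note on the reliability outcome: internal consistency reliability is conventionally summarized with a coefficient such as Cronbach's alpha (the abstract does not name the specific coefficient, so alpha is used here only for illustration). For a scale of k items with item variances \sigma^2_{Y_i} and total-score variance \sigma^2_X,

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right), \]

and "no difference by group" means this coefficient was essentially the same across the four button conditions.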

  • ABSTRACT: To address the need for brief, reliable, valid, and standardized quality of life (QOL) assessment applicable across neurologic conditions. Drawing from larger calibrated item banks, we developed short measures (8-9 items each) of 13 different QOL domains across physical, mental, and social health and evaluated their validity and reliability. Three samples were utilized during short form development: general population (Internet-based, n = 2,113); clinical panel (Internet-based, n = 553); and clinical outpatient (clinic-based, n = 581). All short forms are expressed as T scores with a mean of 50 and SD of 10 (the T-score metric is sketched in the note after this list). Internal consistency (Cronbach α) of the 13 short forms ranged from 0.85 to 0.97. Correlations between short form and full-length item bank scores ranged from 0.88 to 0.99 (0.82-0.96 after removing common items from banks). Online respondents were asked whether they had any of 19 different chronic health conditions, and whether or not those reported conditions interfered with ability to function normally. All short forms, across physical, mental, and social health, were able to separate people who reported no health condition from those who reported 1-2 or 3 or more. In addition, scores on all 13 domains were worse for people who acknowledged being limited by the health conditions they reported, compared to those who reported conditions but were not limited by them. These 13 brief measures of self-reported QOL are reliable and show preliminary evidence of concurrent validity inasmuch as they differentiate people based upon number of reported health conditions and whether those reported conditions impede normal function.
    Neurology 05/2012; 78(23):1860-7. DOI:10.1212/WNL.0b013e318258f744 · 8.29 Impact Factor
  • ABSTRACT: Purpose of review: Clinical trials evaluating supportive and palliative care treatments have missing data concerns that differ from those of other clinical trials. This study reviews the literature on missing data as it may apply to these trials. Recent findings: Prevention of missing data through study design and conduct is a recent area of focus. Missing data can be minimized by simplifying trial participation for patients, their caregivers, and trialists. Run-in periods with active drug or collecting data from observer (proxy) respondents may complicate a trial but may be used to address some specific concerns. Many analyses can accommodate data missing because of nonresponse by multiple imputation, using carefully chosen imputation models. Analysis of trials evaluating end-of-life care should distinguish between missing data and truncation because of death. Summary: Likely patterns of missing data should be discussed when planning a clinical trial, as modifications to trial design can minimize missing data while still addressing study aims. Many statistical analysis methods are available to accommodate missing data, but the robustness of study conclusions to assumptions about the mechanisms underlying the missingness should be evaluated by sensitivity analyses.
    Current opinion in supportive and palliative care 10/2012; 6(4). DOI:10.1097/SPC.0b013e328358441d · 1.66 Impact Factor
  • ABSTRACT: Pre-Conference Workshop in conjunction with the Annual Meeting of the Geriatrics Society of America, San Diego Convention Center, San Diego, CA, USA, 14 November 2012. In 2004, the NIH awarded contracts to initiate the development of high-quality psychological and neuropsychological outcome measures for the improved assessment of health-related outcomes. The workshop introduced these measurement development initiatives, the measures created, and the NIH-supported resource (Assessment Center) for internet- or tablet-based test administration and scoring. Presentations covered item response theory and assessment of test bias, construction of item banks and computerized adaptive testing, and the different ways in which qualitative analyses contribute to the definition of construct domains and the refinement of outcome constructs. The panel discussion included questions about representativeness of samples and the assessment of cultural bias.
    Expert Review of Pharmacoeconomics & Outcomes Research 04/2013; 13(2):183-6. DOI:10.1586/erp.13.10 · 1.67 Impact Factor
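
A note on the T-score metric mentioned in the Neurology short-forms abstract above (a standard scoring convention, stated as general background rather than as detail taken from that paper): a T score linearly rescales a standardized score z (mean 0, SD 1 in the reference population) so that the reference mean becomes 50 and the SD becomes 10,

\[ T = 50 + 10z, \]

so a respondent one standard deviation above the reference mean receives T = 60, and one SD below receives T = 40.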