The Reliability of Survey Attitude Measurement


ABSTRACT Several theoretical hypotheses are developed concerning the relation of question and respondent characteristics to the reliability of survey attitude measurement. To test these hypotheses, reliability is estimated for 96 survey attitude measures using data from five 3-wave national reinterview surveys: three Michigan Election Panel Surveys and two reinterview studies conducted by the General Social Survey. As hypothesized, a number of question attributes are linked to estimated reliability. Attitude questions with more response options tended to have higher reliabilities, although there are some important exceptions. More extensive verbal labeling of numbered response options was associated with higher reliability, but questions explicitly offering a "don't know" alternative were not found to be more reliable. Question characteristics were confounded to an unknown degree with differences in question topic, which were significantly linked to reliability, leaving the influence of question characteristics on reliability somewhat ambiguous. Respondent characteristics were also related to levels of reliability: older respondents and those with less schooling provided the least reliable attitude reports. These results are discussed within a general framework for the consideration of survey errors and their sources.

Peer Reviewed



Available from: Duane F. Alwin, Mar 18, 2015
    • "Over time, repeated interaction with an attitude object forms the basis of an attitude which acts as a roadmap for a response when faced with the same, or a similar, attitude object in the future (Olson and Zanna 1993). Thus, attitudes serve as a mental shortcut for the individual when evaluating an attitude object, cutting down on the costs of decision-making and possibly influencing behavior (Alwin and Krosnick 1991, Eagly and Chaiken 1993, Olson and Zanna 1993). Attitude patterns are assumed to be socialized early and then generally strengthened over time as a result of confirmation bias (Eagly and Chaiken 1993, McFarlane and Boxall 2003, Heberlein and Ericsson 2005), making them stable mental structures that govern the creation of our identity, our world view and our actions (Olson and Zanna 1993). "
    Wildlife Biology 05/2015; 21(3):131-137. DOI:10.2981/wlb.00062 · 1.07 Impact Factor
    • "Unique variance comprises two components: random error and systematic error. Random error may be due to situational factors, the wording and response format of the item (Alwin & Krosnick, 1991) or administration errors (Saris & Andrews, 1991). However, random measurement errors associated with each item are assumed to be independent. "
    ABSTRACT: A series of Monte Carlo simulations were carried out to examine the performance of Cronbach’s alpha as an index of reliability. Data were generated to be consistent with a single factor measured with six items. The magnitude of the factor loadings, systematic error and sample size were manipulated, and alpha was calculated from random samples. The results showed that alpha is influenced by factors other than the reliability of the items that comprise a scale. In particular, the amount of systematic error, or deviation from unidimensionality, increased the estimate of alpha. The results are discussed in terms of the traditional interpretation of alpha.
    Personality and Individual Differences 02/2000; 28(2):229-237. DOI:10.1016/S0191-8869(99)00093-8 · 1.86 Impact Factor
    • "Recently there has been renewed interest in the issue of response scales in general and in particular about how people distribute their responses amongst the offered categories (Schwarz, et al., 1991; Alwin and Krosnick, 1991; and Greenleaf, 1992). Of particular interest has been the work of Schwarz, et al. (1991) which shows that people respond to 11-point, numerical scales differently according to the numbering convention used. "
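The Cronbach's alpha setup described in the Monte Carlo abstract above (a single factor measured with six items) can be sketched as follows. This is a minimal illustration, not the authors' simulation code: the function name, sample size, and loading value are assumptions chosen for clarity.

```python
import random
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    `items` is a list of k columns, each a list of scores for one item."""
    k = len(items)
    total = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    item_var = sum(statistics.pvariance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / statistics.pvariance(total))

# Simulate six items loading on a single common factor, echoing the
# single-factor, six-item design described in the abstract.
random.seed(0)
n, k, loading = 500, 6, 0.7  # assumed values for illustration
factor = [random.gauss(0, 1) for _ in range(n)]
items = [[loading * f + random.gauss(0, (1 - loading**2) ** 0.5) for f in factor]
         for _ in range(k)]

alpha = cronbach_alpha(items)
```

With a common loading of 0.7, each item's reliability is about 0.49, so alpha for six items should land in the mid-0.8 range; raising the loadings or adding correlated (systematic) error would push the estimate higher, which is the confound the abstract examines.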