Special Issue: Cognitive Interviewing Reporting Framework
Original Article
The Cognitive Interviewing
Reporting Framework (CIRF)
Towards the Harmonization of Cognitive Testing Reports
Hennie Boeije¹ and Gordon Willis²
¹Department of Methodology and Statistics, Faculty of Social and Behavioural Sciences, Utrecht University, The Netherlands
²Division of Cancer Control and Population Sciences, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
Abstract. Cognitive interviewing is an important qualitative tool for the testing, development, and evaluation of survey questionnaires. Despite
the widespread adoption of cognitive testing, there remain large variations in the manner in which specific procedures are implemented, and it is
not clear from reports and publications that have utilized cognitive interviewing exactly what procedures have been used, as critical details are
often missing. Especially for establishing the effectiveness of procedural variants, it is essential that cognitive interviewing reports contain a
comprehensive description of the methods used. One approach to working toward more complete reporting would be to develop and adhere to a
common framework for reporting these results. In this article we introduce the Cognitive Interviewing Reporting Framework (CIRF), which
applies a checklist approach, and which is based on several existing checklists for reviewing and reporting qualitative research. We propose that
researchers apply the CIRF in order to test its usability and to suggest potential adjustments. Over the longer term, the CIRF can be evaluated
with respect to its utility in improving the quality of cognitive interviewing reports.
Keywords: cognitive interviewing, comparative effectiveness, questionnaire pretesting, standard report format, survey research, checklist
Cognitive Interviewing and
Questionnaire Design
A commonly used tool for the development of self-report
survey items is the cognitive interview. Most generally,
cognitive interviewing (or cognitive testing) is engaged as
a means for applying qualitative research methods to the
understanding of the functioning of survey questions – as
well as of other materials, such as advance letters (Beatty,
2004; DeMaio & Rothgeb, 1996; Willis, 1999, 2005). The
premise of this approach is that intensive interviewing of a
targeted individual provides rich information that is useful
for providing the questionnaire designer with information
concerning how questionnaires, and individual survey ques-
tions, provide (or fail to provide) desired information.
Although cognitive interviewing is increasingly viewed
as a qualitative method within the sociological tradition, its
origins are explicitly psychological in nature – as the prac-
tice evolved out of the movement known as CASM (Cog-
nitive Aspects of Survey Methodology). During the 1980s,
a heavy emphasis was placed on the intersection of survey
methods and cognitive psychology, in a manner that pro-
duced cognitive interviewing as an interdisciplinary
approach to questionnaire development and evaluation
(Jabine, Straf, Tanur, & Tourangeau, 1984).
Cognitive Interviewing relies on several key premises.
First, its practitioners assert that it is effective to interview
small numbers of individuals who serve as ‘‘stand ins’’ for
survey respondents, and to conduct these interviews in such
a way that we ask about the tested survey questions – rather
than simply collecting answers to those questions. Second,
such interviews are viewed as providing a ‘‘window into the
mind’’ that in turn can be used to unearth insights about the
evaluated questions. Early cognitive interviews, such as by
Loftus (1984), examined issues such as how survey respon-
dents retrieved information from memory, when asked a
specific question (e.g., on recall of health care visits). Spe-
cifically, when counting such visits over a specific time per-
iod, do respondents engage in a forward, backward, or
idiosyncratic temporal recall order?
Despite the intrinsic variability in respondent
approaches to answering such survey questions, in order
to supply a common theoretical framework, Tourangeau
(1984) developed a simple yet elegant model of the survey
response process that has stood for over 30 years, and expli-
cates four major cognitive processes that respondents are
generally presumed to engage in when attempting to
answer survey questions: Comprehension of the question;
Retrieval of relevant information needed to answer it; a
range of Judgment or Estimation processes that are used
to integrate and edit this information; and finally, a
Response process in which the individuals convert their
internally constructed representation of the answer to one
that constitutes their answer to the question, either in spo-
ken or written form (e.g., saying ‘‘yes’’ rather than provid-
ing a meandering, conversational response).
Since its inception, notions surrounding the classic cog-
nitive model have been embellished in at least three ways.
First is the recognition that motivational, as well as purely
mechanistic cognitive processes, are vital to the task of
responding to survey questions. Krosnick (1991) in partic-
ular has established that respondents make an executive
decision concerning the amount of effort they will apply
to the question-answering task, and may engage in ‘‘satis-
ficing,’’ in order to produce an answer that matches the
requirements, but which may not be the result of complete
processing. Second, it has been recognized that the cogni-
tive stages may not be carried out in invariant fashion,
but are more akin to subroutines which are selectively
engaged by the respondent, and that may be missed,
repeated, or reordered (Herman, Johnson, McEvoy, Herzog,
& Hertel, 1996). Third, and perhaps most importantly, more
recent conceptualizations of the survey response process
have viewed the entire activity in a wider manner, by
appropriating concepts from disciplines other than psychol-
ogy. In particular, linguistic and sociological-anthropologi-
cal viewpoints have broadened the scope of answering
survey questions to include a range of sociocultural ele-
ments, in which the question is answered in a situational
or life context, rather than within a world that consists only
of the respondent interacting with the survey question (Col-
lins, 2007; Gerber & Wellens, 1997; Miller, 2003, 2011).
For example, questions concerning physical activity that
specify leisure-time activities of high-income Non-Hispanic
respondents in the United States (e.g., tennis, golf, weight-
lifting, running) may fail to capture the types of activities
that are carried out by low-income women within Hispanic
cultures.
Conduct of the Cognitive Interview
The conceptualization of the cognitive interview, as dis-
cussed above, impacts the manner in which the interview
is carried out. Classically – based on a purely cognitive
point of view – the investigator carries out the activity by
focusing on the tested subject's cognitive processes, to view
what is presumably happening inside the ‘‘black box’’ of
the mind (again, Comprehension, Retrieval, Decision/Judg-
ment, and Response). This is accomplished through two
operational means. The first is the ‘‘think-aloud,’’ in which
the participants are asked to report everything that comes to
mind as they are mentally processing a presented survey
item, and then answering it (Ericsson & Simon, 1980).
The second fundamental approach is for the interviewer
to administer verbal probe questions that are specifically
designed to target the key underlying cognitive processes;
that is, to metaphorically ‘‘probe the mind’’ for specific
information. Hence, the notion is that one can test the sur-
vey question – for example, ‘‘Overall, how happy are you
these days?’’ – by following up the response (‘‘Somewhat
happy’’) with the probe ‘‘And what, to you, is happiness
as it is used in this question?’’ To the extent that subjects
have access to useful information concerning their own
conceptualization of what it means to be ‘‘happy,’’ and
are able to articulate that in response to a probe question,
cognitive testing may inform the questionnaire design and
evaluation process.
The specific activities involved in collecting informa-
tion through either think-aloud or verbal probing are spec-
ified in detail in a separate book (Willis, 2005). To
summarize, critical points are as follows:
(1) Think-aloud and verbal probing can be used in con-
junction, and are in fact normally combined within
the same study, although practitioners have increas-
ingly come to rely on targeted probing, in part
because many individuals are not adept at think-
aloud. Further, probing is under the control of the
investigator and puts less demand on the subject;
(2) Verbal probing may be either concurrent or retro-
spective: The former involves probing immediately
after the subject has answered a tested survey ques-
tion, prior to administering the next evaluated ques-
tion. Retrospective probing is also known as
debriefing: The interviewer defers probing until all
evaluated survey questions are administered. Both
concurrent and retrospective probing forms persist,
as there are tradeoffs in the utility of their usage
(Willis, 2005).
(3) Probing is done with a specific purpose – normally
the aim is viewed as locating problems in survey
questions (e.g., a term, such as ‘‘abdomen,’’ is not
well understood; or a question like ‘‘how many times
have you ridden in a passenger airplane?’’ poses a
difficult recall task). However, an alternative to a
viewpoint emphasizing the identification of design
defects instead advocates the objective of under-
standing how a question works, and ‘‘what it cap-
tures’’ – without necessarily seeking to remediate
sources of error. For example, the investigator may
endeavor to capture the full range concerning what
individuals think when asked about ‘‘health in gen-
eral,’’ and to determine how this conceptualization
varies across cultural groups, without necessarily
seeking a solution for any particular deficiency with
respect to item construction.
Regardless of the approach taken to gathering data in the
cognitive interview, or the underlying objectives, this infor-
mation is overwhelmingly qualitative, in the common and
well-accepted sense of the term (Collins, 2007; Conrad &
Blair, 2004; Miller, 2011). Decisions concerning item func-
tioning mainly derive from written, descriptive information
that is gathered by the trained cognitive interviewer (e.g.,
‘‘None of the subjects I tested was able to recall how many
times they had consulted the internet for health information
within the past 12 months’’). As such, it is necessary to gain
a sufficient level of expertise, in interviewing (gathering
meaningful and clear qualitative data), performing
qualitative data-reduction activities, and then in interpreting
the information collected (to avoid idiosyncratic or biased
interpretation). It is the analysis of cognitive interviewing
in particular that may be the least-developed aspect of the
entire process. Again, the various approaches to analysis
that have been used over the past 30 years are detailed in
Willis (2005), and additional approaches that are especially
appropriate for cross-cultural or cross-national research are
suggested by Miller (2011), Miller, Mont, Maitland,
Altman, and Madans (2010), and by Fitzgerald, Widdop,
Gray, and Collins (2011).
Towards the Harmonization of
Cognitive Interviewing Reports
Variation in Cognitive Interviewing
Approaches
As mentioned previously, there are significant differences
in the nature of data collection (think-aloud vs. probing),
probing (concurrent vs. retrospective), and analysis.
Beyond this, there are also key issues concerning selection
of appropriate sample sizes, and the way in which cognitive
interviews are divided into ‘‘rounds’’ of testing. A common
approach has been to conduct a small number of interviews
(8–12), and then to stop and assess results, making modifi-
cations to tested questions before retesting through a subse-
quent round. Such testing is iterative in nature, in that the
sequence of testing-and-modification may be carried out
through three or even more such rounds. Alternatively,
however, a researcher could decide to conduct all the inter-
views without stopping to make changes – or, the complete
opposite: through modifying questions after each interview,
such that the size of the ‘‘testing round’’ is effectively one
interview. The relative benefits of these potential alterna-
tive approaches have not been well studied, and little
empirical evidence exists to suggest that common practices
represent optimal (or even minimally acceptable) solutions.
Further sources of variation in cognitive interviewing
practice exist with respect to a multitude of variables: for
example, the number of cognitive interviewers who con-
duct the interviews (e.g., one interviewer conducting all
the interviews, vs. a larger number who conduct several
interviews each); the nature of selection and training of
interviewers; or the details of the probing approach used
(with respect to what are labeled by Willis (2005) as Proac-
tive vs. Reactive probes).
In itself, variation in procedures is not necessarily a neg-
ative feature. It does, however, produce two vexing prob-
lems for practitioners: (a) Because procedures tend not to
be well described, it can be difficult to determine the meth-
odological steps that a particular investigation has taken;
and (b) It therefore follows that a determination of the effi-
cacy of any particular variant of cognitive interviewing will
be exceedingly difficult to ascertain. Given the nature of
current practices in the production of cognitive interviewing
reports, resolving these issues, and developing a set of best
practices, relies on a body of evidence that simply does not
exist at this time. However, in order to reach the point at
which comparative effectiveness of varied approaches can
be established (outcome evaluation), it is first necessary
to take the initial step of creating a means for systematically
describing the procedures that have been used in cognitive
interviewing studies, so that the requisite body of evidence
in fact exists (i.e., to support the initiation of the process
evaluation of cognitive interviewing approaches).
Benefits of a Standard Reporting Format
We argue that the lack of comprehensiveness of the
information contained in cognitive testing reports is largely
because there is currently no well-defined, standard report
format, or even a standard for specifying the minimal level
of information that should be contained within a report. As
a result, reports tend to be idiosyncratic in both the types
and ordering of information contained, certainly between
organizations, and sometimes even within. Hence, reports
may be missing crucial pieces of information (e.g., the
number of cognitive interviews, or cognitive interviewers,
involved in the testing project), or may present these in
ways that make searching for the information difficult.
Developing a Reporting Framework
The problems mentioned above call for the development of
a unified approach to the types of information to be con-
tained in a cognitive testing report. The current article,
therefore, is intended to present a solution to this challenge,
through the introduction of a conceptual reporting frame-
work. As previously mentioned, cognitive interviewing rep-
resents a qualitative research approach. Hence, cognitive
interviewing practices can benefit from what has been
learned in qualitative research in fields other than survey
question evaluation, with regard to the reporting and eval-
uation of the conducted research project.
Procedure
We began by examining instruments that have been devel-
oped for assessing and reviewing the quality of qualitative
studies. Quality is a much-debated topic in qualitative
research, and the lack of consensus on what constitutes
quality is reflected in the large number of different instru-
ments to review qualitative investigations (Boeije, van
Wesel, & Alisic, 2011; Cohen & Crabtree, 2008). We first
selected four quality-oriented checklists as a source for
generating items for our own purposes (see Table 1). We
preferred checklists with explanations attached to the different
items, so as to be sure about their meaning. We sought
checklists that originated in different disciplines or
branches, that had been generated over different time
periods, and that varied in length, in order to cover a broad
spectrum.
Second, we extracted items from the four quality check-
lists and grouped them into eight relevant clusters that
became the skeleton of the framework (analysis is available
from authors on request): (1) research objectives, (2)
research design, (3) ethics, (4) sampling, (5) data collection,
(6) data analysis, (7) results, and (8) documenting the study
(auditability). As a consequence of combining the different
checklists, some items were redundant and were eliminated
within each category. The first draft of the resulting framework
was developed by the first author and checked by the
second.
Third, the framework that resulted was assessed by con-
sulting six other checklists assessing quality of qualitative
research (see Table 1). On the basis of these new materials
we decided to divide the cluster ‘‘results’’ into (a) ‘‘findings’’
and (b) ‘‘conclusions, implications, and discussion.’’
The cluster ‘‘documenting the study (auditability)’’ was
split into (a) ‘‘quality and auditability’’ and (b) ‘‘report for-
mat.’’ This operation resulted in 10 total clusters.
Fourth, in addition to consulting checklists devoted to
aspects of quality, we consulted a checklist intended for
the reporting of qualitative research (see Table 1), to
explicitly formulate criteria for providing information con-
cerning studies, using the appropriate style, and degree of specificity.
Table 1. Checklists used to generate the Cognitive Interviewing Reporting Framework (CIRF)

Quality checklists used as a start
1. Qualitative research checklist (12 items). British Medical Journal.
   http://www.bmj.com/about-bmj/resources-authors/article-types/research/editors-checklists
2. Critical Review Form – Qualitative Studies, Version 2.0 (20 items). Letts, L., Wilkins, S., Law, M., Stewart, D., Bosch, J., & Westmorland, M. (2007). McMaster University.
   http://www.srs-mcmaster.ca/Portals/20/pdf/ebp/qualreview_version2.0.pdf
3. Quality in qualitative evaluation: A framework for assessing research evidence (18 items). Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2003). National Centre for Social Research.
   http://collections-r.europarchive.org/tna/20070705130742/ http://www.policyhub.gov.uk/docs/qqe_rep.pdf
4. Criteria for the evaluation of qualitative research papers (20 items). British Sociological Association Medical Sociology Group. Seale, C. (1999). Quality in qualitative research. London: Sage.

Quality checklists used to check coverage
5. Critical Appraisal Skills Programme (CASP) (10 items). CASP (2011). http://www.casp-uk.net
6. Step-by-step guide to critiquing research. Part 2: Qualitative research (16 items). Ryan, F., Coughlan, M., & Cronin, P. (2007). British Journal of Nursing, 16(12), 738–744.
7. Critical appraisal checklist for qualitative research studies (10 items). Treloar, C., Champness, S., Simpson, P. L., & Higginbotham, N. (2000). Indian Journal of Pediatrics, 67(5), 347–351.
8. Critical appraisal of focus group research articles (13 items). Vermeire, E., Van Royen, P., Griffiths, F., Coenen, S., Peremans, L., & Hendrickx, K. (2002). European Journal of General Practice, 8(3), 104–108.
9. Qualitative research review guidelines – RATS (Relevance, Appropriateness, Transparency, Soundness) (10 items). Clark, J. P. (2003). How to peer review a qualitative manuscript. In F. Godlee & T. Jefferson (Eds.), Peer review in health sciences (pp. 219–235). London: BMJ Books.
   http://www.biomedcentral.com/info/ifora/rats
10. Evolving guidelines for publication of qualitative research studies in psychology and related fields (14 items). Elliott, R., Fischer, C., & Rennie, D. L. (1999). British Journal of Clinical Psychology, 38, 215–229.

Reporting guidelines for qualitative research
11. Consolidated criteria for reporting qualitative research (COREQ) (32 items). Tong, A., Sainsbury, P., & Craig, J. (2007). International Journal for Quality in Health Care, 19(6), 349–357.
12. Reading qualitative studies (15 items). Sandelowski, M., & Barroso, J. (2002). International Journal of Qualitative Methods, 1(1), 74–108. creativecommons.org/licenses/by/2.0
Finally, we adjusted the new checklist, through
adoption of terminology that adheres to cognitive inter-
viewing methods and results. For instance, the word ‘‘sam-
pling’’ was replaced with ‘‘participant selection,’’ and
‘‘quality and auditability’’ was replaced with ‘‘strengths
and limitations of the study.’’ The resultant framework
was labeled the Cognitive Interviewing Reporting Frame-
work, or CIRF (see Table 2).
The CIRF: Cognitive Interviewing
Reporting Framework
In this section we describe the categories in the CIRF con-
ceptual framework, to facilitate its use by other researchers.
Research Objectives
As in all research projects, researchers need to formulate
and describe the research objectives of the investigation.
Cognitive testing research normally involves the pretesting
of new questions prior to fielding, or else conducting qual-
ity assessment of already developed questions. Efforts can
also be aimed at finding problem areas, and discovering
resolutions, with respect to the entire survey. They can also
be aimed at examining possible problems in only one target
group or finding problem areas in only the items that were
recently adjusted. Alternately, the objective might be to
determine if an item, or full questionnaire, is interpreted
similarly across cultural groups or languages. This initial
section is also an appropriate place (but not the only one)
in which the to-be-evaluated items can be listed or
referred to.
A justification for cognitive pretesting or evaluation
could be the identification of clearly anticipated problems,
within the initial stages of developing a questionnaire. In par-
ticular, a previous expert review can indicate some antici-
pated problem areas that are subsequently dealt with in a
cognitive interviewing project. Alternatively, when an
instrument is closer to fielding, the research objectives
can be more confirmatory in nature, and intended to deter-
mine that no serious problems remain.
Concerning the context of cognitive pretesting, the
report can address questions such as: Who has decided to
evaluate the instrument (e.g., a client or sponsor, research-
ers internal to the organization, students, and so on)? What
parties are involved in the pretest? How much modification
to items will be allowed, and what other constraints exist?
In particular, are there any different agendas involved that
pose conflicting demands (e.g., pressure for maintaining
data trends which may restrict capacity for item
modification)?
A review of background literature may be relevant, for
users to know whether any previous research has been con-
ducted into the use of the current items or the questionnaire
as a whole; for instance, where the questions have been
used previously, and whether other pretesting studies were
done. Or, for newly developed items, a literature review
might reveal existing instruments that measure the phenom-
enon of interest, and identify gaps or limitations to be
redressed.
We also make reference to theoretical perspective,
because cognitive interviewing is rooted in diverse disci-
plines and it is important to report on the specific theoreti-
cal perspective that underlies the pretesting of the survey.
Again, some researchers make use of the well-known cog-
nitive model developed by Tourangeau (1984); whereas
those having a sociological perspective may make explicit
use of a Grounded Theory approach (e.g., Boeije, 2010).
Cross-cultural studies may rely on relevant classification
schemes such as those by Fitzgerald, Widdop, Gray, and
Collins (2009).
Research Design
Research design is concerned with the study's methodol-
ogy. The study might be designed as an experiment com-
paring different cognitive interviewing methods; or may
emphasize subgroup comparison. For instance, Willis and
Zahnd (2007) included groups based on both cultural group
membership (Koreans vs. non-Koreans) and level of accul-
turation to the US society (high vs. low). Also relevant to
research design is the process used for cognitive probing
(e.g., based on interviewer-participant verbal interaction
with concurrent probing, versus unprobed self-administra-
tion followed by a debriefing session). Concerning the
structural design of the pretest, the project might also
involve a clear sequencing of identifiable testing stages –
for example, the Three-Step-Test-Interview (TSTI) intro-
duced by Hak, van der Veer, and Jansen (2004). Finally,
researchers can indicate the degree to which the procedures
were fixed, as opposed to flexible and modifiable, through
the course of pretesting or evaluation. It is important that
cognitive interview researchers provide adequate descrip-
tion, and optimally justification, for their choice of each
component of the research design.
Ethics
This category describes the relevant ethical issues, and how
possible benefits and costs to participants were considered,
for example with respect to appropriate level of monetary
compensation for participation. We acknowledge that
issues of ethics and of human subjects protection may not
be pronounced in many cognitive interviewing investiga-
tions. There are, however, situations in which this topic
can be significant, where sensitive or emotional content is
involved, or where the establishment of a relationship with
participants might influence their expectations concerning
services or advice provided by the interviewer or agency
conducting the interview (e.g., for a cognitive interview
involving risk behavior or medical care, is the interviewer
precluded from providing advice or information?). In gen-
eral, any potential harm or threats posed by the interview
Table 2. Cognitive Interviewing Reporting Framework
1. Research objectives
Define the research objectives
What are the aims of the study?
What is the context that gave rise to pretesting the instrument?
Provide a review of relevant background literature
What is the theoretical perspective for the cognitive interviewing study?
2. Research design
Describe the features of the overall research design
What was the basis for each feature of the design?
3. Ethics
Present evidence of thoughtfulness about research contexts and participants
Was the study approved by an ethics committee or IRB? (consent procedures)
How was the research project introduced to settings and participants?
How were people motivated to participate?
How was confidentiality and anonymity of participants/sources protected?
4. Participant selection
Describe the participant selection methods used
What are the participants' details with respect to demographics and other project-specific items of information?
Did the selection of participants satisfy the study objectives?
5. Data collection
Provide information about the data collection methods
Who conducted the interviews and how many interviewers were involved?
How were the interviewers trained?
Were sessions recorded and if so, was audio or video used?
Were notes taken, and if so, how were these used?
What type of verbal reporting method was employed, that is, think-aloud, probing, or combinations?
Was the interview protocol adjusted during the research process and if so, how?
Was saturation achieved?
6. Data analysis
Describe methods of data analysis in this research project
How were raw data transformed into categories representing problem areas and solutions?
What software programs were used?
Has reliability been considered, including the repetition of (parts of) the analysis by more than one researcher?
How did researchers work together and how were systematic analysis procedures encouraged, especially between laboratories or
testing locations?
Were there any efforts for seeking diverse observations, that is, triangulation?
Was quantitative evidence used to supplement qualitative evidence?
7. Findings
Present findings in a systematic and clear way, either per-item, per meaningful part of the questionnaire, or per entire questionnaire
What was observed concerning subject behavior with respect to each evaluated item?
To what extent did results differ as a function of subject characteristics, behaviors, or status?
8. Conclusions, implications, and discussion
Address the realization of the objectives
If possible, include a copy of the modified questions if one was produced as a product of testing.
How do findings and solutions relate to previous evidence?
9. Strengths and limitations of the study
Discuss strengths and limitations of the design and employment of the study and how these could have affected the findings
What were relevant a priori expectations or previous experiences?
What are the implications of findings for generalization to the wider population from which the participants were drawn, or
applicability to other settings?
What is the study's contribution to methodological development and future practice?
10. Report format
Use a structured and accepted format for organizing the report
Include main study documents that are relevant for independent inspection by others as appendix or online materials.
should be discussed. Further, if the research was reviewed
by an Institutional Review Board (IRB) or other body, that
should be stated. Note that information on Ethics need not
constitute a separate section, but should be included where
it is appropriate and readily identifiable.
Participant Selection
Reports should identify the target population for the ques-
tionnaire and in particular the (sub) populations that were
involved in pretesting the questionnaire. Describe how set-
ting (e.g., organization providing facilities or space; physi-
cal interviewing location) was chosen and how participants
were selected. Describe recruitment strategies used (adver-
tisement, word-of-mouth, etc.). In particular, if critical sub-
groups were included, these should be indicated (e.g., for a
project that tests tobacco questions, indication of how many
participants were current smokers, former smokers, and
never smokers).
Data Collection
Details of data collection are sometimes seen as mundane
and unworthy of mention, yet several key issues are vital
for purposes of full disclosure of methods, and enabling
replication of the processes used. Reports should make
clear details such as whether a single cognitive interviewer,
as opposed to a team, was used, and their relevant training
and backgrounds. Data collection varies significantly,
depending on whether this involved simple note-taking dur-
ing the interview, as opposed to audio/video recordings
which are reviewed later. Procedural variables include the
nature of cognitive probes used, and whether they were
standardized or administered flexibly. Because best prac-
tices have yet to be developed with respect to all of these
areas, it behooves investigators to indicate which practices
they have made use of, as the field works toward identify-
ing those that prove most effective.
Critically, indicate the degree to which saturation was
achieved. Saturation is in theory obtained when no new
information is gathered once additional participants are
recruited and interviewed. It is possible that researchers
decide to stop data collection when the most serious
problems seem to have been detected and/or resolved.
There might be other reasons to stop data collection as well,
such as time constraints and lack of funds or staff. In
particular, determinants of the sizes of the participant
groups, and number of testing rounds, should be well
explicated.
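
Because claims that saturation was (or was not) achieved are easier to evaluate when they are backed by a simple record of what each successive interview added, a team may find it useful to log new observation codes per interview. The following Python sketch is purely illustrative and assumes a hypothetical stopping rule (no new codes for a fixed number of consecutive interviews); neither the code labels nor the threshold are prescribed by the CIRF.

# Minimal sketch: flag the interview at which a run of interviews has added
# no new observation codes. Codes and the threshold are invented examples.
from typing import List, Optional, Set

def saturation_point(interviews: List[Set[str]], stable_runs: int = 3) -> Optional[int]:
    """Return the 1-based interview number at which `stable_runs` consecutive
    interviews have added no new codes, or None if that never happens."""
    seen: Set[str] = set()
    runs_without_new = 0
    for i, codes in enumerate(interviews, start=1):
        new_codes = codes - seen
        seen |= codes
        runs_without_new = 0 if new_codes else runs_without_new + 1
        if runs_without_new >= stable_runs:
            return i
    return None

# Invented codes for one tested item across six interviews
interviews = [
    {"unclear_term", "long_recall_period"},
    {"unclear_term"},
    {"double_barrelled"},
    set(), set(), set(),
]
print(saturation_point(interviews))  # -> 6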
Data Analysis
Analysis may be the single most serious ‘‘black box’’ in
cognitive interviewing reports, as investigators rarely
describe how they moved from data collection to the pro-
duction of results and recommendations. As such, reports
must be careful to describe how the analysis took place
within the current project. Overall, investigators should
indicate the method of data management and data reduction
they used to transform a series of individual comments, per-
taining to separate survey items across multiple interviews,
into a coherent set of summary findings that transcend the
individual interview level. This may have involved the use
of a coding scheme, or else the use of thematic reduction
intended to capture major themes. For investigations that
involve multiple investigators, or groups of investigators,
data review and summary steps are sometimes done inde-
pendently by each researcher subgroup, before being com-
pared and further combined. In other cases, original
comments consisting of raw data from interviews are
reviewed by all investigators – that is, at the lowest possible
level – before being further processed. Again, current
guides to cognitive interviewing provide little guidance in
these areas, so a series of reports that chronicle the
approaches commonly taken would at least provide a rich
description of the strategies that are currently in use.
Reports should also describe the ways in which discus-
sions with those not directly involved in the testing, such as
clients, stakeholders, or other researchers, were used to
inform analysis and interpretation. Finally, given the
increased application of mixed-method approaches for pre-
testing and question evaluation, if quantitative methods
were used in association with qualitative ones (e.g., psycho-
metric evaluation of responses), these should also be
described.
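
As a concrete, if deliberately simplified, illustration of such data reduction, the sketch below collapses coded interviewer notes (one row per interview, item, and problem code) into an item-level summary counting how many interviews raised each problem. The coding scheme, item labels, and notes are hypothetical; the CIRF itself does not prescribe any particular analysis software or procedure.

# Minimal sketch of qualitative data reduction: collapse per-interview,
# per-item coded notes into item-level problem counts. All codes are invented.
from collections import Counter, defaultdict
from typing import Dict, List, Set, Tuple

# (interview_id, item_id, problem_code, free-text note)
Observation = Tuple[str, str, str, str]

def summarize(observations: List[Observation]) -> Dict[str, Counter]:
    """Count, for each item, how many distinct interviews raised each problem code."""
    raised: Dict[Tuple[str, str], Set[str]] = defaultdict(set)
    for interview_id, item_id, code, _note in observations:
        raised[(item_id, code)].add(interview_id)
    summary: Dict[str, Counter] = defaultdict(Counter)
    for (item_id, code), interview_ids in raised.items():
        summary[item_id][code] = len(interview_ids)
    return summary

observations = [
    ("int01", "Q3", "comprehension", "did not know the term 'abdomen'"),
    ("int02", "Q3", "comprehension", "asked what 'abdomen' refers to"),
    ("int02", "Q7", "recall", "could not count airplane trips"),
]
for item, counts in summarize(observations).items():
    print(item, dict(counts))  # e.g., Q3 {'comprehension': 2}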
Findings
Findings need to be presented in a systematic and clear
way, either per-item, per meaningful part of the question-
naire, or per the entire questionnaire. Preferably, findings
not only include descriptions of problem areas, but also
of insights into what caused these problem areas, and indi-
cations of potential solutions. If the intent of the investiga-
tion is not to identify problems, but simply to capture the
range of interpretation concerning each evaluated item, then
this full range should be described, for every target item
(that is, reporting should not be selective or arbitrary).
Reports of findings may involve examples, but these should
not be ‘‘cherry picked’’ to support the researchers' a priori
hypothesis – rather, the totality of the relevant findings
should be presented.
Conclusion, Implications, and Discussion
Conclusions can be narrow – that is, restricted to the use of
the current set of questions within the context of the current
survey; or they can be wide, to the extent that they address
the use of the evaluated items more generally, whether in
terms of other surveys, or of measurement objectives other
than those within the current investigation. Often the over-
all conclusions can be made prominent through their inclu-
sion in an executive summary; whereas more detailed,
question-by-question conclusions can be listed under each
evaluated survey item.
Strengths and Limitations of the Study
In all cases, researchers should indicate their level of con-
fidence in the results they have obtained, based partly on
the clarity of the results, and on the extensiveness of the
investigation. If sample sizes were small, if testing was not continued
until saturation, or if the reports by different interviewers were in
conflict, it would be imperative to state these limitations.
Report Format
The CIRF does not imply a strict sequence of elements: our
suggested section ordering may be appropriate for some
reports but not others. It may be especially useful to include
an executive summary at the beginning of the report for
readers uninterested in methodological details.
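
Because the CIRF is essentially a checklist, teams that draft reports electronically may find it convenient to hold the ten categories as structured data and run a simple coverage check before a report is circulated. The sketch below is only a hypothetical convenience: the category names follow Table 2, but the audit function and the way a draft report is represented are our own assumptions, not part of the framework.

# Minimal sketch: the ten CIRF categories as data, plus a trivial coverage check.
# The audit logic and the draft-report representation are illustrative only.
from typing import Dict, List

CIRF_CATEGORIES: List[str] = [
    "Research objectives",
    "Research design",
    "Ethics",
    "Participant selection",
    "Data collection",
    "Data analysis",
    "Findings",
    "Conclusions, implications, and discussion",
    "Strengths and limitations of the study",
    "Report format",
]

def missing_sections(report_sections: Dict[str, str]) -> List[str]:
    """Return the CIRF categories for which the draft report has no non-empty text."""
    return [category for category in CIRF_CATEGORIES
            if not report_sections.get(category, "").strip()]

draft = {
    "Research objectives": "Pretest of twelve new health items prior to fielding...",
    "Data collection": "Two trained interviewers, concurrent verbal probing...",
}
print(missing_sections(draft))  # lists the eight categories still to be written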
Discussion
Our overall assertion is that cognitive interviews can be
considered to be a type of qualitative research, and that cog-
nitive interviewing practice can benefit from what has been
learned and realized in qualitative research in other fields.
On the basis of 12 existing quality and reporting checklists
that have been developed for qualitative research reports,
we generated the CIRF as a framework for reporting cogni-
tive face-to-face interviewing studies. The ten-category
CIRF checklist is meant to encourage researchers to report
on their cognitive interview projects, and to do so in a clear
and comprehensive way. In our experience, many research-
ers refrain from disseminating cognitive interview reports widely.
However, within other areas of research, there is evi-
dence that the quality of reporting is improved when an
organizing scheme is used, and results are shared. As a
precedent, the use of a statement for reporting randomized
controlled trials (RCTs), the CONSORT statement, has been evaluated
for its effects on the quality of reports of RCTs (Moher,
Jones, & Lepage, 2001). These authors found a decrease
of unclear reporting, and an improvement in the quality
of reports. This finding supports the use of a reporting
framework to encourage the development of best practices
within the field of cognitive interviewing. As a conse-
quence, surveys in turn will benefit from these systematic
reports, as our cognitive pretesting processes become more
reliable and effective.
We have refrained from being prescriptive and rigid
with respect to best practices for the conduct of cognitive
interviewing. Rather, we strive for clear reporting of cogni-
tive interviewing studies, in order for the users to be able to
assess the value of these studies. However, the framework
offers the users flexibility. This is deemed necessary, as not
all studies contain all elements that the checklist covers (or,
they may include additional elements).
A logical first step is the use of the CIRF framework by
authors to report their work, and then to reflect upon the
framework, in terms of its clarity and usefulness. Within
this special issue, several authors take that step. First is
‘‘Examining the personal experience of Aging Scale with
the Three Step Test Interview,’’ by Bode and Jansen
(2013). The investigators apply a particular pretesting var-
iant that includes cognitive interviewing – the TSTI – and
attempt to fashion their write-up in a manner that utilizes
the CIRF framework. Following this, they consider poten-
tial positive and negative effects of following the CIRF.
Next, Vis-Visschers and Meertens (2013) present
‘‘Evaluating the CIRF by rewriting a Dutch pretesting
report of a European Health Survey Questionnaire.’’ The
investigators reformatted an existing cognitive testing
report so that it matches the CIRF format, and upon reflec-
tion, make conclusions concerning the comprehensiveness
of the CIRF in providing relevant information.
The subsequent case study, ‘‘Obtaining validity evi-
dence by cognitive interviewing to interpret psychometric
results,’’ by Padilla, Bentez, and Castillo (2013), broadens
the potential application of the CIRF, by including both
cognitive interviewing and psychometric approaches
according to a mixed-method pretesting and evaluation
model. The authors consider the extent to which a CIRF-
type checklist can be expanded to include pretesting proce-
dures other than cognitive testing.
To round out the Special Issue, Willis and Boeije (2013)
summarize the research reported, in ‘‘Reflections on the
Cognitive Interviewing Reporting Framework: Efficacy,
expectations, and promise for the future.’’ The authors pres-
ent common themes from the three case studies, in terms of
how the CIRF can be applied across a range of survey ques-
tionnaire pretesting environments. Then, they suggest
further directions that can be taken for purposes of
(a) facilitating the future use of the CIRF by a range of
researchers; (b) evaluating and modifying the CIRF further;
and (c) proposing specific points of entry within existing
systems (e.g., the extant Q-Bank database system) into
which the CIRF may be incorporated.
Following this set of enhancements, we expect to fur-
ther the objective of producing clear reports of cognitive
interviewing studies, through further elaboration of the
CIRF checklist. Eventually, we hope to produce a clear,
logical, and easy-to-use standard that is also effective in
enhancing the quality of cognitive testing research.
References
Beatty, P. (2004). The dynamics of cognitive interviewing. In S.
Presser et al. (Eds.), Questionnaire development evaluation
and testing methods (pp. 45–66). Hoboken, NJ: Wiley.
Bode, C., & Jansen, H. (2013). Examining the personal
experience of Aging Scale with the Three Step Test
Interview. Methodology, 9, 96–103.
Boeije, H. R. (2010). Analysis in qualitative research. London,
UK: Sage.
Boeije, H. R., van Wesel, F., & Alisic, E. (2011). Making a
difference: Towards a method for weighing the evidence in a
qualitative synthesis. Journal of Evaluation in Clinical
Practice, 17, 657–663.
Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for
qualitative research in health care: Controversies and
recommendations. Annals of Family Medicine, 6, 331–339.
Collins, D. (2007). Analysing and interpreting cognitive inter-
view data: A qualitative approach. In Proceedings of the 6th
Questionnaire Evaluation Standard for Testing Conference
(pp. 64–73). Ottawa: Statistics Canada.
Conrad, F., & Blair, J. (2004). Data quality in cognitive
interviews: The case for verbal reports. In S. Presser, et al.
(Eds.), Questionnaire development evaluation and testing
methods (pp. 67–87). Hoboken, NJ: Wiley.
DeMaio, T. J., & Rothgeb, J. M. (1996). Cognitive interviewing
techniques: In the lab and in the field. In N. Schwarz & S.
Sudman (Eds.), Answering questions: Methodology for
determining cognitive and communicative processes in
survey research (pp. 175–195). San Francisco, CA: Jossey-
Bass.
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data.
Psychological Review, 87, 215–251.
Fitzgerald, R., Widdop, S., Gray, M., & Collins, D. (2011).
Identifying sources of error in cross-national questionnaires:
Application of an error source typology to cognitive
interview data. Journal of Official Statistics, 27, 1–32.
Fitzgerald, R., Widdop, S., Gray, M., & Collins, D. (2009).
Testing for equivalence using cross-national cognitive
interviewing. Center for comparative social surveys. Work-
ing Paper Series, No 01.
Gerber, E. R., & Wellens, T. R. (1997). Perspectives on
pretesting: Cognition in the cognitive interview? Bulletin
de Methodologie Sociologique, 55, 18–39.
Hak, T., van der Veer, K., & Jansen, H. (2004). The three-step
test-interview (TSTI): An observational instrument for
pretesting self-completion questionnaires. In ERIM Report
ERS-2004-029-ORG. Rotterdam, The Netherlands: Erasmus
Research Institute of Management. Retrieved from http://
hdl.handle.net/1765/1265
Herman, D., Johnson, M., McEvoy, C., Herzog, C., & Hertel, P.
(1996). Basic and applied memory research: Vol. 2.
Practical application. Hillsdale, NJ: Erlbaum.
Jabine, T. B., Straf, M. L., Tanur, J. M., & Tourangeau, R.
(Eds.). (1984). Cognitive aspects of survey methodology:
Building a bridge between disciplines. Washington, DC:
National Academy Press.
Krosnick, J. A. (1991). Response strategies for coping with the
cognitive demands of attitude measures in surveys. Applied
Cognitive Psychology, 5, 213–236.
Loftus, E. (1984). Protocol analysis of responses to survey recall
questions. In T. B. Jabine, M. L. Straf, J. M. Tanur, & R.
Tourangeau (Eds.), Cognitive aspects of survey methodol-
ogy: Building a bridge between disciplines (pp. 61–64).
Washington, DC: National Academy Press.
Miller, K. (2003). Conducting cognitive interviews to under-
stand question-response limitations. American Journal of
Health Behavior, 27(Suppl. 3), S264–S272.
Miller, K. (2011). Cognitive interviewing. In K. Miller, J.
Madans, A. Maitland, & G. Willis (Eds.), Question evalu-
ation methods: Contributing to the science of data quality.
New York, NY: Wiley and Sons.
Miller, K., Mont, D., Maitland, A., Altman, B., & Madans, J.
(2010). Results of a cross-national structured cognitive
interviewing protocol to test measures of disability. Quality
& Quantity, 4, 801–815.
Moher, D., Jones, A., & Lepage, L. (2001). Use of the
CONSORT statement and quality of reports of randomized
trials: A comparative before-and-after evaluation. JAMA,
285, 1992–1995.
Padilla, J.-L., Bentez, I., & Castillo, M. (2013). Obtaining
validity evidence by cognitive interviewing to interpret
psychometric results. Methodology, 9, 113–122.
Tourangeau, R. (1984). Cognitive science and survey methods:
A cognitive perspective. In T. Jabine, M. Straf, J. Tanur, &
R. Tourangeau (Eds.), Cognitive aspects of survey methodology:
Building a bridge between disciplines (pp. 73–100).
Washington, DC: National Academy Press.
Vis-Visschers, R., & Meertens, V. (2013). Evaluating the
Cognitive Interviewing Reporting Framework (CIRF) by
rewriting a Dutch pretesting report of a European Health
Survey Questionnaire. Methodology, 9, 104–112.
Willis, G. B. (1999). Cognitive interviewing: A how-to guide.
Retrieved from http://appliedresearch.cancer.gov/areas/cognitive/interview.pdf
Willis, G. B. (2005). Cognitive interviewing: A tool for
improving questionnaire design. Thousand Oaks, CA: Sage.
Willis, G., & Boeije, H. (2013). Reflections on the Cognitive
Interviewing Reporting Framework: Efficacy, expectations,
and promise for the future. Methodology, 9, 123–128.
Willis, G., & Zahnd, E. (2007). Questionnaire design from a
cross-cultural perspective: An empirical investigation of
Koreans and Non-Koreans. Journal of Health Care for the
Poor and Underserved, 18, 197–217.
Received August 9, 2012
Accepted April 22, 2013
Published online August 2, 2013
Gordon Willis, PhD
National Cancer Institute
6130 Executive Blvd., MSC 7344, EPN 4005
Bethesda, MD 20892
USA
Tel. 001-301-594-6652
E-mail willisg@mail.nih.gov
... Trained interviewers (KO, MS, SL) conducted cognitive interviews using a semi-structured interview guide (Additional file 1). Cognitive interviewing is a qualitative research method that is used to understand whether questionnaires and survey questions work as intended [13]. There are two approaches that can be used to conduct a cognitive interview-think-aloud and verbal probing. ...
... Think-aloud is a technique in which participants are instructed to say anything that comes to mind as they go through the survey. Verbal probing is another technique in which the interviewer follows up with another question to elicit a more detailed response from the participant [13,14]. Our team used both the think-aloud and verbal probing methods concurrently [13]. ...
... Verbal probing is another technique in which the interviewer follows up with another question to elicit a more detailed response from the participant [13,14]. Our team used both the think-aloud and verbal probing methods concurrently [13]. The interviews took between 60 and 90 min. ...
Article
Full-text available
Background The Kansas City Cardiomyopathy Questionnaire (KCCQ) is a Patient-Reported Outcome Measure (PROM) used to evaluate the health status of patients with heart failure (HF) but has predominantly been tested in settings serving predominately white, male, and economically well-resourced populations. We sought to examine the acceptability of the shorter version of the KCCQ (KCCQ-12) among racially and ethnically diverse patients receiving care in an urban, safety-net setting. Methods We conducted cognitive interviews with a diverse population of patients with heart failure in a safety net system to assess their perceptions of the KCCQ-12. We conducted a thematic analysis of the qualitative data then mapped themes to the Capability, Opportunity, Motivation Model of Behavior framework. Results We interviewed 18 patients with heart failure and found that patients broadly endorsed the concepts of the KCCQ-12 with minor suggestions to improve the instrument’s content and appearance. Although patients accepted the KCCQ-12, we found that the instrument did not adequately measure aspects of health care and quality of life that patients identified as being important components of managing their heart failure. Patient-important factors of heart failure management coalesced into three main themes: social support, health care environment, and mental health. Conclusions Patients from this diverse, low-income, majority non-white population experience unique challenges and circumstances that impact their ability to manage disease. In this study, patients were receptive to the KCCQ-12 as a tool but perceived that it did not adequately capture key health components such as mental health and social relationships that deeply impact their ability to manage HF. Further study on the incorporation of social determinants of health into PROMs could make them more useful tools in evaluating and managing HF in diverse, underserved populations.
... Therefore, given the cognitive difficulties in the ABI sample in the current study, verbal probing was selected as the most appropriate method. The design and reporting of the study was guided by the Cognitive Interviewing Reporting Framework (CIFR; Boeije & Willis, 2013) and is included in Appendix A. ...
... The Cognitive Interviewing Reporting Framework (CIFR; Boeije & Willis, 2013 ...
Article
Background The accurate evaluation of valued living in people with acquired brain injury (ABI) is important for measuring the outcome of interventions targeting valued living. The Valued Living Questionnaire (VLQ) is one of the most widely used measures, however its validity in an ABI cohort may be affected by the cognitive demands associated with evaluating the value-consistency of actions in the past week. Objectives We aimed to systematically identify common difficulties or errors associated with the comprehension and completion of the VLQ in people with ABI in order to guide a potential adaptation of the measure. Methods Adults with an ABI (traumatic brain injury, stroke, tumour), experiencing cognitive difficulties and/or emotional distress impacting participation in valued activities, were invited to participate in a cognitive interview which probed their understanding of the VLQ. Concurrent verbal probing was used, whereby scripted verbal probes were asked alongside each questionnaire item as it was being rated by participants. Interviews were transcribed and analysed by combining data pertaining to each item and aggregating these across interviews to highlight common comprehension errors or difficulties. Results There were 11 participants (mean age = 59.55 years, SD = 12.84; mean education = 14.73 years, SD = 2.87) with a range of ABI aetiologies (7 stroke, 2 TBI, 2 tumour). Common difficulties with the VLQ included confusion caused by question phrasing and structure of the measure, errors due to the cognitive demands associated with rating the importance of abstract values and value-consistency of actions in the last week, and problems with the rating scale. Conclusions Key problems with the validity of the VLQ within an ABI sample were identified due to comprehension errors relating to its structure and content. Findings will inform an adapted version, suited to the needs of individuals with ABI-associated cognitive difficulties.
... Parmi les 27 items présentés au tour 2, 14 ont obtenu un consensus supérieur à 80% pour leur niveau d'importance et la qualité de leur formulation (items 1, 5, 6, 7, 13, 18, 19, 20, 26, 27, 34, 36, 40, 41) (Montreuil, 2009 ;Sprangers et al., 1993), de 10 à 50 patients selon l'instrument (Falissard, 2008 (Falissard, 2008). recueil des entretiens n'est en effet recommandée en particulier (Sprangers et al., 1993), les meilleures pratiques en la matière n'ayant pas encore été développées (Boeije et Willis, 2013). ...
Thesis
Context: The quality of life (QoL) of autistic people should be the ultimate target of interventions. This area of research remains underdeveloped, particularly for autistic children of preschool age.
Objectives: This study aims to (a) develop a module adapted to preschool-aged autistic children, to be administered together with the Pediatric Quality of Life Inventory QoL scale (PedsQL™ 4.0, 2-4 years version), (b) evaluate the psychometric properties of the PedsQL™ 4.0 (2-4 years version), whose French translation had not been validated, and of the "autism" module, and (c) explore the factors that may influence the QoL of autistic children in this age group.
Methods: Ten verbal autistic adults took part in a semi-structured interview about the criteria they considered important for their life to have been satisfying when they were children. A thematic content analysis provided an initial bank of items for the "autism" module. This bank was then evaluated by a panel of experts and pretested with ten parents of autistic children. 279 parents of typically developing preschool-aged children completed the PedsQL™ 4.0, and 157 parents of autistic children of the same age completed the PedsQL™ 4.0 together with the "autism" module. The age and gender of the participating parent and of their child, the parent's marital status, level of education, and occupation, the place of residence, and the sibling composition were collected for both samples. The psychological flexibility of the parents of autistic children, as well as their child's temperament, were measured with the Acceptance and Action Questionnaire (AAQ-II) and the Emotionality, Activity and Sociability (EAS) instrument, respectively.
Results: The content analysis of the interviews revealed four major themes: interests, regularity of the environment, sensory perception, and social relationships. The last theme was subdivided into two themes (social interactions and communication), and an initial bank of 44 items divided into five dimensions was constituted. Following the expert panel evaluation and the pretest, the 27 retained items constitute the operational module for assessing parent-reported QoL adapted to preschool-aged autistic children, to be used jointly with the PedsQL™ 4.0 (2-4 years version). The psychometric study (a) showed that the PedsQL™ 4.0 can be used reliably with French autistic or typically developing children, and (b) led to a revision of the operational version of the "autism" module, which ultimately comprises 24 items divided into three dimensions. The analysis of influencing factors mainly revealed that the QoL of preschool-aged autistic children is negatively associated with the child's emotionality, this relationship being influenced by the parent's psychological flexibility.
Conclusion: This study provides information on the QoL of preschool-aged autistic children. It provides a QoL measurement tool adapted to this population, which clinicians can use to evaluate the early interventions they implement. Finally, the results of this research contribute to a better understanding of the factors influencing the QoL of young autistic children, notably by opening avenues for intervention with their parents.
... Cognitive interviewing is a qualitative method used to refine and improve the clarity of questions in self-reported questionnaires. One goal of cognitive interviewing is to identify misalignment between how respondents interpret questions and the developer's intent, and to improve those items based on participant feedback (Boeije & Willis, 2013; Patrick et al., 2011; K. Ryan et al., 2012). ...
Article
Purpose: Preference assessment is integral to person-centered treatment planning for older adults with communication impairments. There is a need to validate photographs used in preference assessment for this population. Therefore, this study aimed to establish preliminary face validity of photographs selected to enhance comprehension of questions from the Preferences for Everyday Living Inventory-Nursing Home (PELI-NH) and describe themes in older adults' recommendations for revising photographic stimuli.
Method: This qualitative, cognitive interviewing study included 21 participants with an average age of 75 years and no known cognitive or communication deficits. Photographic stimuli were randomized and evaluated across one to two interview sessions. Participants were asked to describe what the preference stimuli represented to them. Responses were scored to assess face validity. Participants were then shown the PELI-NH written prompt and asked to evaluate how well the photograph(s) represented the preference. A semideductive thematic analysis was conducted on interview transcripts to summarize themes in participant feedback.
Results: Forty-six (64%) stimuli achieved face validity criteria without revisions. Six (8%) stimuli achieved face validity after one partial revision. Twenty (28%) stimuli required multiple revisions and reached feedback saturation, requiring team review for finalization. Thematic analysis revealed challenges interpreting stimuli (e.g., multiple meanings) and participant preferences for improving photographs (e.g., aesthetics).
Conclusions: Cognitive interviewing was useful for improving face validity of stimuli pertaining to personal care topics. Abstract and subjective preferences (e.g., cultural traditions) may be more challenging to represent. This study provides a framework for further testing with older adults with cognitive, communication, and hearing impairments.
... Alterations to the questionnaire were made either during or immediately after each interview, resulting in a new draft before the next interview. All modifications to items were allowed without any predetermined constraints [25]. Phase II ended after saturation in feedback [26]. ...
Article
Full-text available
Background: A validated questionnaire to assess medication management of hip fracture patients within and outside the hospital setting was lacking. The study aims were to describe the hip fracture patient pathway, and to develop a valid and feasible questionnaire to assess clinicians' experience with medication management of hip fracture patients in different care settings throughout the patient pathway.
Methods: This qualitative, descriptive methodological study used strategic and snowball sampling. The questionnaire was developed, and face and content validity explored, through interviews with stakeholders. Phase I described the hip fracture patient pathway and identified questionnaire dimensions in semi-structured interviews with management and clinicians (n = 37). The patient pathway was also discussed in six meetings (n = 70). Phase II refined a first draft of the questionnaire through cognitive interviews with future respondents (n = 23). The draft was modified after each interview. Post hoc, cognitive interview data were analysed using matrix analysis to condense problems and solutions into themes and subthemes. Phase III converted the final version to a digital format and tested its feasibility with a subset of the cognitive interview participants (n = 21), who completed the questionnaire and provided feedback.
Results: Phase I: Hip fracture patients were cared for in at least three different care settings and went through at least four handovers between and within primary and secondary care. Three questionnaire dimensions were identified: 1) Medication reconciliation and review, 2) Communication of key information, and 3) Profession and setting. Phase II: The MedHipPro-Q was representative of how the different professions experienced medication management in all settings, and hence showed face and content validity. Post hoc analysis: Problem themes (with subthemes) were Representativeness (of the patient pathway and of respondent reality) and Presentation (Language and Appearance). Solution themes (with subthemes) were Content (added or deleted) and Presentation (modified appearance or corrected language). Phase III: Participants did not identify technical, linguistic, or content flaws in the questionnaire, and the digital version was considered feasible for use.
Conclusion: The novel MedHipPro-Q showed good face and content validity and was feasible for use throughout the hip fracture patient pathway. The rigorous development process supports its construct validity and reliability.
... Researchers within these laboratories codified their cognitive interview methods in protocols and training manuals (e.g. [48,49]) that became the basis for current paradigms [43,50]; recent scholarship puts emphasis on qualitative aspects of the method [51,52]. ...
Article
To design and operate energy-efficient and comfortable buildings, it is important to know the occupants' preferences for indoor environmental quality. These preferences are related to a range of personal characteristics that occupants may or may not be willing to share. In preparing materials for a forthcoming stated preference discrete choice experiment (SPDCE) investigating the willingness of building occupants to share information, we conducted cognitive-interview pretesting with 12 participants to find out whether these materials were interpretable and meaningful. Qualitative analysis identified seven important limitations, including misinterpretations and uncertainties arising from language and difficulties imagining the situation and options being described. Most participants expressed some desire for a deeper understanding and were not satisfied with the choices they were asked to make. We discuss how identifying these limitations assisted in refining the SPDCE materials, the potential of cognitive interviewing for enhancing the validity of study materials, and the importance of better understanding when researching occupant behaviours.
... Willis proposes writing cognitive testing reports using a standardized reporting format called the Cognitive Interviewing Reporting Framework (CIRF). The CIRF applies a 10-category checklist to make clear what was done during the cognitive interviews and how conclusions were drawn from the procedures and results of those interviews (Boeije & Willis, 2013). He also recommends archiving the reports in the widely accessible Q-Bank database developed by a group of U.S. Federal interagency researchers. ...
Technical Report
Full-text available
Comparative surveys are surveys that study more than one population with the purpose of comparing various characteristics of the populations. The purpose of these types of surveys is to facilitate research on social phenomena across populations and, frequently, over time. Researchers often refer to comparative surveys that take place in multinational, multiregional, and multicultural contexts as "3MC" surveys. To achieve comparability, these surveys need to be carefully designed according to state-of-the-art principles and standards. The main purposes of this task force report, commissioned jointly by the American Association for Public Opinion Research (AAPOR) and the World Association for Public Opinion Research (WAPOR), are to identify the most pressing challenges concerning data quality, promote best practices, recommend priorities for future study, and foster dialogue and collaboration on 3MC methodology. The intended audience for this report includes those involved in all aspects of 3MC surveys, including data producers, data archivists, data users, funders and other stakeholders, and those who wish to know more about this discipline.
Article
Background: Cognitive interviewing is a technique that can be used to improve and refine questionnaire items. We describe the basic methodology of cognitive interviewing and illustrate its utility through our experience using cognitive interviews to refine a questionnaire assessing parental understanding of concepts related to preterm birth. Methods: Cognitive interviews were conducted using current best practices. Results were analyzed by the multidisciplinary research team, and questionnaire items that were revealed to be problematic were revised. Results: Revisions to the questionnaire items were made to improve clarity and to elicit responses that truly reflected the participants' understanding of the concept. Conclusion: Cognitive interviewing is a useful methodology for improving the validity of questionnaire items; we recommend that researchers developing new questionnaire items design and complete cognitive interviews to improve their items and increase confidence in study conclusions.
Article
Full-text available
This article evaluates a Cross National Error Source Typology that was developed as a tool for making cross-national questionnaire design more effective. Cross-national questionnaire design has a number of potential error sources that are either not present or are less common in single nation studies. Tools that help to identify these error sources better inform the survey researcher when improving a source questionnaire that serves as the basis for translation. This article outlines the theoretical and practical development of the typology and evaluates an attempt to apply it to cross-national cognitive interviewing findings from the European Social Survey.
Article
Full-text available
Based on the experiences of three research groups using and evaluating the Cognitive Interviewing Reporting Framework (CIRF), we draw conclusions about the utility of the CIRF as a guide to creating cognitive testing reports. Authors generally found the CIRF checklist to be usable, and that it led to a more complete description of key steps involved. However, despite the explicit direction by the CIRF to include a full explanation of major steps and features (e.g., research objectives and research design), the three cognitive testing reports tended to simply state what was done, without further justification. Authors varied in their judgments concerning whether the CIRF requires the appropriate level of detail. Overall, we believe that current cognitive interviewing practice will benefit from including, within cognitive testing reports, the 10 categories of information specified by the CIRF. Future use of the CIRF may serve to direct the overall research project from the start, and to further the goal of evaluation of specific cognitive interviewing procedures.
Article
Full-text available
The latest edition of the Standards for Educational and Psychological Testing (APA, 1999) promotes the analysis of respondents' response processes in order to obtain evidence about the fit between the intended construct and the response process produced. The aim of this paper was twofold: first, to assess whether cognitive interviewing can be used to gather such validity evidence, and second, to analyze the usefulness of the evidence provided for interpreting the results of traditional psychometric analysis. The usefulness of the Cognitive Interviewing Reporting Framework (Boeije & Willis, 2013) for reporting the cognitive interviewing findings was also evaluated. As an empirical example, we tested the (Spanish-language) APGAR family function scale. A total of 21 pretest cognitive interviews were conducted, and psychometric analyses were performed on data from 28,371 respondents who were administered the APGAR scale. Results and the utility of the CIRF as a reporting framework are discussed.
Article
We used the Cognitive Interviewing Reporting Framework (CIRF) to restructure the report of a pretest of a European health survey questionnaire. This pretest was conducted by the Questionnaire Laboratory of Statistics Netherlands, and the original report was written according to a standard Statistics Netherlands format for pretesting reports. This article contains the rewritten report with highlights from the case study. The authors reflect on the process of rewriting and the usefulness of the CIRF. We conclude that expanded use of the CIRF as a reporting format for articles on cognitive pretests would enhance international comparability, completeness, and uniformity of research designs, terminology, and reporting. A limitation of the CIRF is that it does not provide an exhaustive list of items that could be included in a report; rather, it is a "minimal standard": that is, a report on how a cognitive pretest was conducted should at least contain a description of the CIRF items.
Article
A study on the aging experience of patients with rheumatic diseases showed low statistical consistency (Cronbach's alpha) for the Physical Decline (PD) subscale of the Personal Experience of Aging Scale, whereas the scale had functioned very well in preceding surveys. This discrepancy led to the decision to examine the contextual validity of the subscale with a cognitive interview methodology. In the current study we applied the Three Step Test Interview as a cognitive interviewing device to explore misinterpretations and other problems in answering the PD questionnaire items, with the aim of identifying item improvements that might also benefit the statistical quality of the PD subscale. Some problems were identified that were related neither to the statistical inter-item correlation rates nor to the arthritis context of this sample, but were instead due to faulty item formulation. Reformulations were designed to improve the items. The report of this study follows the Cognitive Interviewing Reporting Framework. Experiences with this new guideline are discussed.
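For reference, the internal-consistency statistic mentioned in the abstract above, Cronbach's alpha for a scale of k items, is conventionally defined as follows; this is the standard textbook formula, not a computation reported in the study itself:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item \(i\) and \(\sigma^{2}_{X}\) is the variance of the total scale score. Low values indicate that the items do not covary strongly, which is the discrepancy that prompted the cognitive-interview follow-up described above.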
Book
Data quality evaluation; sample design and the way designs are carried out; measurement characteristics of estimates; evaluation results, providing the user with critical information on whether a data source is appropriate for a given objective; Question Evaluation Methods (QEM): the development and use of methods for evaluating questions on surveys and censuses.
Article
Acknowledgements
1. Introduction
Part One: Interactional Analysis
2. Using Behavioral Coding to Identify Cognitive Problems with Survey Questions (Floyd Jackson Fowler Jr. and Charles F. Cannell)
3. Questionnaire Pretesting: Computer-Assisted Coding of Concurrent Protocols (Ruth N. Bolton and Tina M. Bronkhorst)
4. From Paradigm to Prototype and Back Again: Interactive Aspects of Cognitive Processing in Standardized Survey Interviews (Nora Cate Schaeffer and Douglas W. Maynard)
Part Two: Verbal Protocols
5. The Validity and Consequences of Verbal Reports About Attitudes (Timothy D. Wilson, Suzanne J. LaFleur, and D. Eric Anderson)
6. Expanding and Enhancing the Use of Verbal Protocols in Survey Research (Barbara Bickart and E. Marla Felcher)
7. Integrating Questionnaire Design with a Cognitive Computational Model of Human Question Answering (Arthur C. Graesser, Sailaja Bommareddy, Shane Swamer, and Jonathan M. Golding)
Part Three: Other Methods for Determining Cognitive Processes
8. Cognitive Interviewing Techniques: In the Lab and in the Field (Theresa J. DeMaio and Jennifer M. Rothgeb)
9. Cognitive Techniques in Interviewing Older People (Jared B. Jobe, Donald M. Keller, and Albert F. Smith)
10. An Individual Differences Perspective in Assessing Cognitive Processes (Richard E. Petty and W. Blair G. Jarvis)
11. A Coding System for Appraising Questionnaires (Judith T. Lessler and Barbara H. Forsyth)
12. Exemplar Generation: Assessing How Respondents Give Meaning to Rating Scales (Thomas M. Ostrom and Katherine M. Gannon)
13. The How and Why of Response Latency Measurement in Telephone Surveys (John N. Bassili)
14. Implicit Memory and Survey Measurement (Mahzarin R. Banaji, Irene V. Blair, and Norbert Schwarz)
15. Use of Sorting Tasks to Assess Cognitive Structures (Marilynn B. Brewer and Layton N. Lui)
Part Four: Conclusion
16. How Do We Know What We Think They Think Is Really What They Think? (Robert M. Groves)
Book
The design and evaluation of questionnaires—and of other written and oral materials—is a challenging endeavor, fraught with potential pitfalls. Cognitive Interviewing: A Tool for Improving Questionnaire Design describes a means of systematically developing survey questions through investigations that intensively probe the thought processes of individuals who are presented with those inquiries. The work provides general guidance about questionnaire design, development, and pre-testing sequence, with an emphasis on the cognitive interview. In particular, the book gives detailed instructions about the use of verbal probing techniques, and how one can elicit additional information from subjects about their thinking and about the manner in which they react to tested questions. These tools help researchers discover how well their questions are working, where they are failing, and determine what they can do to rectify the wide variety of problems that may surface while working with questionnaires.