Patrick Dunleavy (LSE) and Timothy Monteath (UCL/LSE)
1 December 2022
How can interview-based social science
become more open?
Part of The Open Social Science Handbook project
What are research interviews?
•Structured conversations, normally face-to-face, between:
  •the researcher(s), who (mostly) pose (initial) questions and give follow-up responses to guide the conversation, and
  •the interviewee(s), who (mostly) provide answers or responses to questions that may extend into conversational or narrative threads
•Commonly both (sets of) actors are present in person in the same location, but interviews also increasingly take place remotely
•Structured (or semi-structured) interviews follow a pre-set sequence of pre-notified questions (but less rigidly than in a mass-survey session)
•Elite and specialized interviewing focuses on a common spine of topic questions, tackled in variable and responsive sequences and depth, depending on the interviewees and the purposes of the research
What interviews are good for
•Helping to grasp people's complex meanings, understandings, emotions, colour
•Selectively recovering memorable details and quotes, especially those subsequently rehearsed by the interviewee
•Understanding post hoc interpretations (rationalizations) of events or actions
•Revealing things about an interviewee's personality via quotations
•Ostensively demonstrating the 'organizational culture' of firms, agencies, NGOs etc via quoting interviewees' discourse and reactions to questions
•Achieving a 'first draft of history' by quickly recording how participants explain and represent what they were doing during major events or policy shifts
•Surfacing different individuals' (or groups') perspectives and accounts of the same events – especially for policy histories and traumatic events
•Understanding the 'life world' of people whose experiences and meanings could otherwise be difficult or impossible to appreciate
•Creating an interaction that may lead to accessing documentary evidence
•Getting leads to further potential interviewees (e.g. cascades/waterfalls)
What interviews are bad at
•Producing quantitative response datapoints, because of variations in the detailed phrasing and sequencing of questions
•Surfacing detailed information on dates, details of arguments or actions, levels of funding, or regulation decisions – unless interview statements can be backed up by public documentation, or the interview leads to access to hitherto private documents
•Getting past 'public relations' answers
•Accessing 'unsanitized' quotations
•Interviewees may stray into controversies, creating risks for themselves, or defame others – making editing inevitable
•Keeping elite interviewees on track in the conversation, so as to avoid:
  – digressions into off-topic experiences
  – irrelevance, not addressing questions
  – re-hashing only well-known facts
  – over-compliance, where the interviewee tells you what they think you want to hear
  – over-extension or 'boosterism' only on issues the interviewee knows about
•Readers or re-users must rely heavily on the interviewer's view of what was said and why – very hard to gainsay
•Similarly, interviewer summaries and comparisons across people talked to are very hard to check or counteract
Key uncertainties around interview-based
studies
•Can we be sure which interviews took place?
•Who did you speak to? Maybe different people would have said other things
•Were the people interviewed in a position to credibly answer the questions asked – in terms of their level of seniority in an organization, their involvement in key events, the time period of their involvement, and other situational factors?
•Did the questions asked, or your sequencing of topics, 'lead' respondents to answer in a particular (desired) way?
•Is your interpretation or analysis legitimate or credible, given what interviewees actually said?
E.g. two scandals around interview-based research
•An article on gay marriage in Science, by a PhD student and a top Yale political scientist, claimed rare evidence that persuading interviewees to become more tolerant was feasible across repeat interviews, especially where the interviewer revealed their own gay sexuality. It later emerged that the estimated costs of the interview fieldwork were $2 million, and that the PhD student could not provide any transcript evidence of interviews to back the claimed quantitative coded evidence. The article was retracted. See Konnikova (2015) in the references
•Amazon's 'Mechanical Turk' system for spot contracting between survey sponsors and 'gig' respondents (used by businesses, and heavily by impoverished social scientists) has suffered from gig workers using VPNs to fake their locations, roles and qualifications. See Ahler, Roush and Sood (2020) in the references
Three ways of using interviews to produce codes and quantitative metrics
•Pre-coding frames for short 'open text' comments or write-in boxes in surveys
•Very strong interview frames that spark an extended in-person or remote qualitative discussion with an expert interviewee/informant on each topic point in the frame. The range of variation is restricted to a consistent three- or five-point scale between two poles across each topic. Each informant's response is post-classified onto the frame on every topic. See Bloom and Van Reenen (2006)
•Free-text responses (normally) transcribed and then analysed quantitatively for vocabulary, concepts, conjunctures, memes and bits of phrasing using NVivo-type text analysis software. Hand coding of items of interest is also feasible with small-N interviews (a minimal coding sketch follows)
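To make the third route concrete, here is a minimal sketch, in Python, of dictionary-based concept coding across a folder of transcripts. The folder name, concept codes and vocabularies are illustrative assumptions only; NVivo-type packages support this kind of coding interactively and far more flexibly.

```python
# A minimal sketch (assumptions: a 'transcripts/' folder of plain-text files,
# and an invented concept dictionary) of how anonymized transcripts can be
# coded quantitatively for vocabulary and concepts.
from collections import Counter
from pathlib import Path
import re

# Hypothetical coding frame: each concept code maps to the vocabulary that
# counts as evidence for it.
CONCEPTS = {
    "accountability": ["accountable", "oversight", "scrutiny"],
    "austerity": ["cuts", "savings", "efficiency"],
}

def code_transcript(text: str) -> Counter:
    """Count how often each concept's vocabulary appears in one transcript."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for code, vocab in CONCEPTS.items():
        counts[code] = sum(words.count(w) for w in vocab)
    return counts

# Tally concept frequencies per interview, one row of output per transcript.
for path in sorted(Path("transcripts").glob("*.txt")):
    print(path.stem, dict(code_transcript(path.read_text(encoding="utf-8"))))
```

The same per-interview counts can then be summarized quantitatively across the whole set of transcripts.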
Main types of social science interviews

| Purpose of interview | 1. Off the record (may not be cited or quoted at all) | 2. Non-attributable (may be quoted verbatim, but only in a fully anonymized way) | 3. On the record (may be quoted verbatim and attributed to the named individual) |
| --- | --- | --- | --- |
| Understanding a whole 'life world' | Anthropology of indigenous people | Anthropology, sociology | Ethnographic studies |
| Capturing one stream/aspect of experience | Quantitative survey open-ended questions | Medical and health studies, social work, consumer studies | Some of column 2 types |
| Understanding interviewee's specific or historical role | Biography | Dominant form in elite political science/IR/public policy, social policy, business, management | Management case studies, plus some of column 2 types |
Gains and limits of recording modes

| Mode of recording | Information lost | Compression | Practical issues |
| --- | --- | --- | --- |
| Written notes, made in (or near) interview | Any full or reliable record of discourse | Very high | Taking notes while listening/talking |
| Typescript of audio | Character of interviewee's voice and conversation | Medium | Easy to deposit. Maybe helps anonymization if NON-ATTRIBUTABLE |
| Full audio | Facial expressions, body language. Visual cues to interview context | Low | Here and in all rows below: only feasible if ON THE RECORD |
| Video | Interviewer or second person not covered | Low | Covers only person(s) in focus |
| Zoom, Teams online | Not much. Transcripts also generated | Very low | Few – easy, works well with multiple people |
| Podcast | Facial expressions, body language. Any edits made | Low | Can be more complex to set up. Editing skills maybe needed |
| Videocast | Any edits made | Very low | |
Practical steps towards greater openness with off-the-record interviews
•Pre-register core questions (especially prompts that might elicit documentation)
•Send interviewees the core questions – basic 'no surprises' courtesy
•Outline your interviewee selection strategy in a private document, and update it narratively as interviewing unfolds – e.g. record 'cascade' interviewing links
•Publicize a general outline in a blog or on social media, covering the main focus of study questions and the rough interviewee sub-set
•Invite comments and suggestions from other researchers and potential interviewees
•Systematically 'stand up' key points from an off-the-record interview from documentary sources or other interviewees
•Try to move interviewees to accept on-the-record or non-attributable status
Achieving greater openness with non-attributable interviews 1
•Take all six previous steps recommended for off-the-record interviews
•Notify all interviewees of the non-attributability safeguards and the planned ultimate disposition of the data, and record them agreeing to it
•Outline your interviewee selection strategy in a non-public but date-stamped document. Update it narratively as interviewing unfolds
•Record 'cascade' interviewing links – who recommended whom (a minimal logging sketch follows this list)
•Record biases or perspectives as interviewees accumulate, and seek balance
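As a concrete illustration of the private, date-stamped record suggested above, here is a minimal Python sketch. The file name, field names and example entry are all hypothetical, not a prescribed format.

```python
# A minimal, hypothetical sketch of the date-stamped private log suggested
# above. The file name, field names and example entry are all invented for
# illustration; nothing here identifies a real interviewee.
import json
from datetime import date, datetime

LOG_FILE = "interviewee_log.jsonl"  # assumed to stay private and non-public

def log_interviewee(code: str, recommended_by: str | None,
                    perspective: str, interviewed_on: date) -> None:
    """Append one anonymized interviewee record to the running log."""
    record = {
        "code": code,                      # e.g. "INT-07", never a real name
        "recommended_by": recommended_by,  # 'cascade' link, or None if direct
        "perspective": perspective,        # helps track bias and seek balance
        "interviewed_on": interviewed_on.isoformat(),
        "logged_at": datetime.now().isoformat(timespec="seconds"),  # date-stamp
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: INT-07 was recommended by INT-03 (a cascade link).
log_interviewee("INT-07", "INT-03", "ministry insider", date(2022, 11, 14))
```

An append-only log like this preserves the order in which interviewees accumulated, which is what makes the selection narrative checkable later.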
Achieving greater openness with non-attributable interviews 2
•Use long quotations to convey meanings where it is feasible to preserve source anonymity (such discourse is checkable)
•Use multiple short quotations more widely, with clear conventions
•Use interviews to create codings that can be summarized quantitatively
•Use text analysis software to fragment memes and discourse in anonymized ways, yielding quantifications and new summary perspectives on meanings (a minimal anonymization sketch follows this list)
•Try to move interviewees to accept on-the-record status
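One simple way to approach the anonymized text analysis suggested above is to apply a hand-built name-to-code mapping before any quantitative processing. The sketch below is a minimal illustration with invented names and codes; a real project might add named-entity recognition, and would keep the mapping itself off the record.

```python
# A minimal sketch of anonymizing transcripts before quantitative text
# analysis: a hand-built mapping from real names to codes is applied to each
# transcript. Names and codes here are invented for illustration only.
import re

ANONYMIZATION_MAP = {
    "Jane Smith": "OFFICIAL-A",
    "Treasury": "DEPARTMENT-1",
}

def anonymize(text: str) -> str:
    """Replace every mapped name with its code, longest names first."""
    for name in sorted(ANONYMIZATION_MAP, key=len, reverse=True):
        text = re.sub(re.escape(name), ANONYMIZATION_MAP[name], text)
    return text

print(anonymize("Jane Smith said the Treasury resisted the change."))
# -> "OFFICIAL-A said the DEPARTMENT-1 resisted the change."
```

Anonymized transcripts produced this way can then be fed into the coding and counting step sketched earlier.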
Enhancing the openness of on-the-record interviews
•Take the previous applicable steps from off-the-record and non-attributable interviews
•Record full discourse systematically and digitally, in easily archived forms
•Speedily complete documentation for interviews as soon as they are conducted, and link it to your recorded subjective impressions of the person and content involved
•Move towards podcasting or video-casting all interviews, without substantive editing, and potentially generating some immediate open commentary or debate
•Reactions and comments may be added to the initial record, on the open-source rationale that 'with many eyeballs, all bugs are shallow'
•Deposit the full-discourse record in a permanent archive with good documentation, and publicize the database's location in open access articles, books and your university repository (a minimal metadata sketch follows this list)
•Respond positively and quickly to other researchers' enquiries and requests
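As an illustration of the 'good documentation' an archived on-the-record interview might carry, here is a minimal, hypothetical metadata record in Python/JSON. All field names and values are assumptions; a real deposit would follow the archive's own metadata schema.

```python
# A minimal, hypothetical sketch of a metadata record deposited alongside an
# on-the-record interview. All field names and values are assumptions.
import json

metadata = {
    "interview_id": "2022-11-14-INT-07",   # invented identifier
    "status": "on the record",
    "consent_recorded": True,               # interviewee agreed to this status
    "date": "2022-11-14",
    "mode": "Zoom (video plus auto-generated transcript)",
    "files": ["INT-07.mp4", "INT-07-transcript.txt"],
    "interviewer_notes": "INT-07-impressions.md",  # subjective impressions
    "archive_doi": None,  # fill in once the deposit is assigned a DOI
}

# Write the record next to the recording and transcript for deposit.
with open("INT-07-metadata.json", "w", encoding="utf-8") as f:
    json.dump(metadata, f, indent=2)
```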
Single-study benefits of greater openness with interview-based research
•PROVENANCE – Researchers (and readers generally) gain extra assurance that your interviews took place, with appropriate people, and were undertaken with due care and appropriate methods
•REPRODUCIBILITY – Applying the same approach as your analysis, other researchers can check the accuracy and validity of your interpretation against the complete context of what interviewees said, or at least a wider context of discourse during the interview (Turing Way criterion)
•ENHANCED ROBUSTNESS – Other researchers can apply different methods to the same interview database, hopefully reaching similar conclusions (Turing Way criterion)
•REPUTATION – Analysis clearly supported by open data evidence will be better regarded in the discipline and by outside readers
Wider benefits of greater openness with interview evidence
•REPLICABILITY – Researchers can use the same method (i.e. initial questions, concepts/vocabularies and follow-on prompts) with other (but validly comparable or similar) sets of interviewees, and find if the same results emerge (Turing Way criterion)
•GENERALIZABILITY – Applying different interview approaches or methods, with different sets of interviewees, across different contexts and perhaps time periods, other researchers can find if the same results emerge (Turing Way criterion)
•LEARNING GAINS with positive results – Replicated results and greater generalizability boost sub-field learning, consensus and authoritativeness
•LEARNING GAINS with negative results – Establishing the reasons for non-replication with different methods can be faster and more definite. Similarly, firming up the reasons for non-generalization to other samples of interviewees is easier
References
•Nick Bloom and John Van Reenen (2006) 'Measuring and Explaining Management Practices Across Firms and Countries', CEP Discussion Paper No. 716, March 2006. https://cep.lse.ac.uk/_new/publications/abstract.asp?index=7411
•David Broockman and Joshua Kalla (2016) 'Durably reducing transphobia: A field experiment on door-to-door canvassing', Science, 8 April, Vol. 352, Issue 6282. https://www.ocf.berkeley.edu/~broockma/broockman_kalla_transphobia_canvassing_experiment.pdf
•Maria Konnikova (2015) 'How a Gay-Marriage Study Went Wrong', New Yorker, 22 May. https://www.newyorker.com/science/maria-konnikova/how-a-gay-marriage-study-went-wrong
•Douglas J. Ahler, Carolyn E. Roush and Gaurav Sood (2020) 'The Micro-Task Market for Lemons: Data Quality on Amazon's Mechanical Turk'. https://gsood.com/research/papers/turk.pdf
Thanks for listening