
Avoiding cognitive biases: promoting good decision making in research methods courses

Author:
  • Paul Stapleton (paulstapleton@gmail.com), Department of English Language Education, The Education University of Hong Kong, Taipo, Hong Kong

Abstract

The content of research methods courses in tertiary education, as revealed by popular textbooks, tends to focus largely on procedural aspects of research investigations. Although following proper procedures is essential for reaching valid and reliable outcomes, there is another attribute of good quality research that is often missing from research methods courses and textbooks: appropriate decision making and the avoidance of cognitive biases, such as confirmation bias and base rate neglect. In this essay, some of the more common cognitive biases are highlighted with examples from everyday life in order to illustrate how these biases can influence decision-making processes when reading about or conducting research investigations.
Teaching in Higher Education
ISSN: 1356-2517 (Print), 1470-1294 (Online). Journal homepage: http://www.tandfonline.com/loi/cthe20
To cite this article: Paul Stapleton (2018): Avoiding cognitive biases: promoting good decision making in research methods courses, Teaching in Higher Education, DOI: 10.1080/13562517.2018.1557137
To link to this article: https://doi.org/10.1080/13562517.2018.1557137
Published online: 12 Dec 2018.
ARTICLE HISTORY: Received 16 January 2018; Accepted 5 December 2018
KEYWORDS: Research methods courses; cognitive biases; decision making; confirmation bias
Introduction
I recently read about a study reported in the newspaper whose results indicated that school-aged children who eat breakfast receive higher grades than those who don't. I then showed this news story to a group of first-year doctoral students (in education) and asked them to try to explain the causes behind the research results. Unsurprisingly, most students focused on the nutritional value of breakfast and the need for calories to maintain blood sugar levels, and by association the ability to concentrate in class. So far so good. But what the media did not report was the fact that children who do not eat breakfast are more likely to come from single-parent families where the parent has less time to help with homework or afford after-school tutoring, and all of the other family- and socio-economic-related elements that lead to good school performance among children. In other words, eating breakfast is more of a proxy for a caring and economically stable family environment which leads to better academic results, rather than a nutritional benefit.
Although in this case most of my students went in the right direction towards supplying a causal reason, they appeared to be missing the forest for the trees. Uncovering deeper causes and confounding effects is a critically important part of any analysis of data, as illustrated through simple examples such as this one. However, jumping to the most obvious causes, referred to as the "availability heuristic" (Tversky and Kahneman 1974), or
confusing correlation with causation, are common cognitive errors in research studies as well as everyday life. Thus, cognitive biases, i.e. mistakes in reasoning related to holding onto personal preferences and beliefs despite contrary information, can have an important influence on data analysis when conducting research, but they tend to be overlooked by research methods textbooks and the courses that use these books. And while it is understood that research methods textbooks and courses cannot possibly cover the myriad elements involved in research processes, such as dealing with unique social contexts or the special interests sometimes served by research, this essay takes the position that a discussion of cognitive biases should become an integral part of research methods courses in higher education.
Research methods courses are a common fixture in master's- and doctoral-level programs. One recent survey, for example, reveals that among master's programs in language teaching, research methods courses are the fifth most common among core courses (Stapleton 2018). These courses are perceived as necessary for graduate students entering into a community of scholarship where they must understand and critically assess the course readings, which often consist of journal articles describing empirical studies. At the doctoral level, or in master's programs that require a dissertation, the content of research methods courses also provides a grounding for those about to embark on their own scholarly investigations. Thus, master's- and doctoral-level programs typically include research methods courses as part of their degree requirements. In this paper, it is the content of these courses that is of interest because, as I will argue, the methods used in scientific inquiry are multifaceted, involving not only the common operational elements (such as how to design surveys or perform statistical tests) focused on in these courses, but also cognitive aspects related to decision making that are not always given the attention they deserve.
One way to uncover the content taught in these courses is to review popular research methods textbooks. A preliminary search of a few frequently cited textbooks (based on Google Scholar (GS) metrics) identified in the broad area of research methods in education (Cohen, Manion, and Morrison 2011 (GS 37k); Creswell 2012 (GS 29k); Denzin and Lincoln 2003 (GS 32k); Miles and Huberman 2014 (GS 83k)) and in my own area, applied linguistics (Dörnyei 2007 (GS 4k); Mackey and Gass 2005 (GS 2.4k)), reveals that none has chapters or sections directly addressing cognitive biases. Rather, in each case, following an introductory chapter or two discussing broad elements of research, the focus of each of these textbooks is on procedural elements of research investigations. And while this search is not meant to be exhaustive, these books may well be characteristic of most research methods textbooks in lacking discussions about how to make good decisions while avoiding cognitive biases common to all humans.
Research methods courses and their associated textbooks certainly do not neglect discussions about the importance of rigor. These courses, at least as reflected by their textbooks, typically introduce, and offer extended explanations of, the many threats to reliability and validity in both quantitative and qualitative studies, to ensure students understand that procedures are subject to many pitfalls when research does not follow proper protocols. These cannot be learned overnight because there are so many ways things can go wrong if a study is not performed properly according to established norms, and these norms appear to be getting increasingly rigorous. For example, in quantitative studies, effect size, not often mentioned in social science-related publications a decade or two ago, is demanded by more reviewers now (Lindstromberg 2016). Similarly, in qualitative studies involving coding, reviewers increasingly want to see independent coding with inter-rater reliability scores, member checking and audit trails.
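The effect sizes now demanded by reviewers are straightforward to compute. As a minimal sketch, Cohen's d for two independent groups is the mean difference divided by the pooled standard deviation; the scores below are invented purely for illustration:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    # statistics.variance() is the sample variance (n - 1 denominator)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

# Hypothetical post-test scores for a treatment class and a control class
treatment = [78, 84, 81, 90, 76, 88, 83, 79]
control = [72, 75, 80, 70, 74, 78, 73, 77]
print(round(cohens_d(treatment, control), 2))  # standardized mean difference
```

A d of this size would conventionally be read as a "large" effect, though the interpretation always depends on the field and the measure.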
Although these textbooks, which presumably reflect the content of research methods courses, contain the necessary elements and procedures for graduate students to apply to their own critical reading and investigations, curiously, often missing are elements of human decision making that play an essential role in analyzing data. It is true that some textbooks, and presumably courses, do have discussions about research traditions explaining the nature of inquiry (see Seliger and Shohamy 2013). Cohen, Manion, and Morrison (2011) have an excellent chapter on searching for causation, which has parallels with the issues discussed in this paper; Miles and Huberman (2014) also have a section on causal chains and networks; however, these chapters and sections are not framed in terms of cognitive biases. The reason for this exclusion, or perhaps it is better described as an absence, is that our cognitive biases may be viewed as falling outside of the domain of what is typically considered "research methods." A deeper look at some of these cognitive biases, however, suggests that research methods courses could benefit from an inclusion of some common thinking biases and errors because they are so closely tied to scientific inquiry.
In the past decade, a number of books written for a general audience focusing on decision making and the avoidance of cognitive errors have been published, perhaps the best known of which is Thinking, Fast and Slow, by Nobel laureate (Economics) Daniel Kahneman (2011), largely based on his work with Amos Tversky in cognitive psychology. In their collaborative research over about 14 years starting in the early 1970s, richly described in biographical terms by Michael Lewis recently in The Undoing Project (Lewis 2016), they demonstrated, via various experiments, numerous types of irrational behavior exhibited by a majority of people. The effect that their work, along with that of others in the fields of behavioral economics and cognitive psychology, has had on our understanding of human behavior has been profound and may be deserving of closer attention from research methods courses, which, as noted above, tend to focus on procedural techniques. Unlike the operational skills required for ensuring research rigor, analytical thinking is needed to generate both accurate and innovative outcomes; however, as revealed by Tversky and Kahneman (1974), human intuition can often lead lay people and researchers alike into reaching flawed conclusions.
Thus, I would like to suggest that research methods courses at the graduate level add another level of content and instruction dealing with common biases and heuristics that could overlay the current largely operationally oriented ones. Such instruction could be represented by a chapter in a general research methods textbook, or perhaps a lecture or two in a course, and would serve as a foundational part of research methods courses. The purpose would be to draw attention to some very common cognitive biases and related elements that can inhibit creativity or even lead to flawed conclusions. In a course I co-teach at the doctoral level called "Conceptualizing Research," I introduce a few of the more common cognitive biases, some of which were uncovered by Tversky and Kahneman. I then try to link these to the students' broader research processes in the hope that they will have a heightened awareness of the natural pitfalls that can lead them down paths that are either excessively narrow or even flawed. In each case, I begin with a story or example from the non-academic world and then attempt to draw wider implications about the world of research.
Cognitive elements
Confirmation bias
When I was a child, for many years my mother was a nurse working the evening shift on the psychiatric ward in a hospital. Once in a while, I remember her leaving in the early evening and remarking, "Full moon tonight; we'll be busy." A common belief among nurses on that ward, and to some extent the general public, it seems, was that the lunar cycle somehow had an effect on psychiatric hospital admissions. Although such a belief has been completely debunked (Gorvin and Roberts 1994), its emergence most likely had its origins in some actual occurrences when psychiatric ward admissions coincided with a cloudless night when the moon was full. The belief may have been reinforced over the years when a few instances of abnormally many admissions coincided with the full moon, which further confirmed the belief. However, as Gilbert (1991) explains, humans understand a statement by first attempting to believe it, after which we decide whether or not to disbelieve it. Thus, in the case of the lunar cycle belief, there is an immediate tendency to first recall instances from memory to support or confirm the belief and use those as evidence. This cognitive process that selectively recalls information that supports existing beliefs is called "confirmation bias." Thus, the nurses probably ignored the many instances when a normal number of admissions coincided with the full moon because "normal" is less noticeable; plus, it would not have reinforced their existing beliefs.
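The nurses' experience is easy to reproduce in a quick simulation. In the sketch below (all numbers invented), admissions are generated with no reference to the moon at all, yet chance alone still supplies plenty of memorably busy full-moon nights for confirmation bias to feed on, while the averages barely differ:

```python
import random

random.seed(42)
NIGHTS = 3650            # roughly ten years of evening shifts
FULL_MOON_EVERY = 29     # one full moon per lunar cycle (a simplification)

# Admissions are drawn independently of the moon: there is no lunar effect here.
admissions = [random.randint(0, 8) for _ in range(NIGHTS)]
full_moon = [i % FULL_MOON_EVERY == 0 for i in range(NIGHTS)]

BUSY = 6  # a night busy enough to be memorable and retold
busy_full = sum(1 for a, f in zip(admissions, full_moon) if f and a >= BUSY)
mean_full = sum(a for a, f in zip(admissions, full_moon) if f) / sum(full_moon)
mean_other = sum(a for a, f in zip(admissions, full_moon) if not f) / (NIGHTS - sum(full_moon))

print(busy_full)                                   # "confirming" busy full-moon nights, by chance alone
print(round(mean_full, 2), round(mean_other, 2))   # ...yet the two averages are essentially equal
```

The dozens of busy full-moon nights are exactly the ones a believer remembers; the far more numerous ordinary full-moon nights are the ones that go unnoticed.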
To illustrate the power of confirmation bias, I have often played a little game with my students. At the beginning of my lecture on cognitive biases, I ask students to identify a number, from one to 12, that I have placed next to their sun sign (according to their month of birth), e.g. #10 for Scorpio, and then invite them to come to the front of the class to pick up a piece of paper based on their number. On the paper is a version of the following description from Forer (1949) that I have led them to believe is an individualized depiction of their character based on their horoscope. However, in fact, all students receive the same description on their numbered papers.
Some of your aspirations tend to be pretty unrealistic. At times you are extroverted, affable, sociable, while at other times you are introverted, wary, and reserved. You have found it unwise to be too frank in revealing yourself to others. You pride yourself as being an independent thinker and do not accept others' opinions without satisfactory proof. You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations. At times you have serious doubts as to whether you have made the right decision or done the right thing. Disciplined and controlled on the outside, you tend to be worrisome and insecure on the inside. Your sexual adjustment has presented some problems for you. While you have some personality weaknesses, you are generally able to compensate for them. You have a great deal of unused capacity which you have not turned to your advantage. You have a tendency to be critical of yourself. You have a strong need for other people to like you and for them to admire you.
Variations of this ploy have been tried over the past few decades, often with students, and the results tend to be uniform. Before disclosing the trick, most students, after reading the description, have a strong tendency to attribute some accuracy to it based on their own self-perceptions. I have tried this ploy over a dozen times and have found (based on the reaction from students before revealing the trick) that most of them have attributed some deeper meaning to astrology. When asked to debunk the exercise, most students ascribe their belief to the fact that the explanation is so broad that it could apply to anyone, which of course is correct. However, they miss an even more important notion, which is related to their own cognitive biases. When reading through, they have unconsciously collected statements that confirm their pre-existing beliefs about themselves, and as their cherry-picking piles up, the belief is reinforced. In the meantime, character traits that do not align with their own beliefs tend to be ignored.
Confirmation bias may be the single most difficult cognitive bias to overcome, both in everyday life and when we analyze our research data. Normally, we have some expectation about what story the results of our studies will tell. Upon crunching the numbers or coding the data, we hope that our bottom line reveals statistical significance, or its qualitative equivalent, in order to confirm our preconceived beliefs; and if it doesn't, there is a temptation to revisit the data or use a different test to wedge it into those beliefs, especially if the results are not too far off them. Even in qualitative research, where patterns are supposed to emerge from the data, it is difficult not to have preconceived notions about where the data is heading. Knowledge about, and a heightened awareness of, confirmation bias can encourage graduate students to look more deeply at their results and consider alternative explanations rather than cherry-picking a choice quote, especially when results are not aligned with their current beliefs or expectations. A deep awareness of our tendencies towards confirmation bias can alert us to the significance of outliers, which also may be indicators of opportunities leading to new discoveries and creative explanations.
Pattern seeking, randomness and regression to the mean
To illustrate the next cognitive bias, I ask students to imagine the gender sequence of six
babies born in a hospital (Kahneman 2011) as follows:
1. BBBGGG
2. GGGGGG
3. BGBBGB
I then ask them to rank the likelihood of each sequence. The usual response is that #3 is the most likely (although I should disclose that my students are much warier of following their intuition if I have deliberately misled them earlier in a lecture). Because we are all aware that our sex at birth is random, #2 appears to be non-random, and this non-randomness automatically triggers the human mind to seek a cause. "We are pattern seekers, believers in a coherent world, in which regularities (such as a sequence of six girls) appear not by accident but as a result of mechanical causality or of someone's intention" (Kahneman 2011, 115).
With a little high school arithmetic, we realize that all three sequences are equally likely (2 × 2 × 2 × 2 × 2 × 2 = 64 possible sequences); in other words, on average, "GGGGGG" randomly happens once in every 64 possible birth sequences of six babies, just as #1 and #3 do.
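The arithmetic can also be checked by brute force. A few lines of Python enumerate all 64 equally likely orderings and confirm that each specific sequence, however "patterned" it looks, occurs exactly once:

```python
from itertools import product

# Enumerate every possible gender sequence for six births.
sequences = [''.join(s) for s in product('BG', repeat=6)]
print(len(sequences))  # 64 equally likely orderings

for target in ('BBBGGG', 'GGGGGG', 'BGBBGB'):
    # Each specific sequence appears exactly once among the 64,
    # so each has the same probability: 1/64.
    print(target, sequences.count(target))
```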
Just as humans have proclivities to cherry-pick evidence that confirms our beliefs, we also have a strong tendency to look for causes when we see patterns. However, seldom do we attribute the observed pattern to random occurrences. Basically, humans are hardwired to seek causality because it brought evolutionary advantages. (That rustling in the grass is probably just the wind, but best to jump away anyway in case it's a snake.) This is a particularly important notion for graduate students to understand because they tend not to be funded, and therefore work with smaller sample sizes, which of course tend to randomly produce more patterns.
While randomness is a phenomenon that can explain some occurrences, another common causal factor that receives less consideration than it should is regression to the mean, or the way measurements of a variable tend to fluctuate around an average. However, regression to the mean tends to be ignored as a possible cause because it provokes little excitement and provides no sexy appeal to the many pedagogy-related studies that involve interventions. Like randomness, regression to the mean is a naturally occurring phenomenon unrelated to the independent variables that are introduced in many studies.
In a classroom activity, in order to provide an everyday example of regression to the mean, I ask my students to open their YouTube app, view the top tracks from Psy, a Korean singer, and count the "views" of his top several songs. The worldwide phenomenon "Gangnam Style" clearly stands out as an aberration, while his later hits have regressed to his mean.
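Regression to the mean is also easy to demonstrate with simulated data. In the sketch below (all parameters invented), each score is a stable ability plus luck; the top scorers on one test, selected partly for their good luck, land closer to the population average on a retest even though nothing about them has changed:

```python
import random

random.seed(0)
N = 10_000
ability = [random.gauss(0, 1) for _ in range(N)]    # stable "true" component
test1 = [a + random.gauss(0, 1) for a in ability]   # observed score = ability + luck
test2 = [a + random.gauss(0, 1) for a in ability]   # fresh luck on the retest

# Select the top 1% on the first test, then see how the same people do on the second.
top = sorted(range(N), key=lambda i: test1[i], reverse=True)[:100]
m1 = sum(test1[i] for i in top) / len(top)
m2 = sum(test2[i] for i in top) / len(top)
print(round(m1, 2), round(m2, 2))  # the retest mean falls back toward the overall mean of 0
```

Had these simulated top scorers received an "intervention" between the two tests, the drop (or, for the bottom scorers, the rise) could easily be mistaken for a treatment effect.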
Confusing correlation with causation while ignoring confounding effects
Closely related to our intuitive propensity to ignore randomness is confusing correlation with causation, as noted in the Introduction. Nowhere is this more common than in medical studies that connect disease with diet. The media often report studies that purport to show that those who consume a certain food, e.g. avocados, olive oil, walnuts, blueberries or whatever other superfood is in fashion, have greater longevity. In other words, a pattern or correlation between the consumption of a certain food and length of life is found, and causation is assumed. However, particularly regarding human health, which is enormously complex, simple dietary choices seldom bring about specific health effects. As an example of this complexity, the eating of avocados may also be associated with access to fresh fruit, higher income brackets, knowledge of nutrition and exercise, etc., all of which have a bearing on health, which raises the issue of confounding effects when analyzing results.
Another study similar to the one described in the Introduction (eating breakfast resulting in high grades) also relates to hidden causes. In this one, I ask students to explain the deeper causes behind the results of a study that found elderly participants who played mahjong and bet on horse races had lower rates of dementia. The surface takeaway from this study seems clear: playing mahjong or betting on the races are intensely cognitive activities that can help delay or avoid dementia. However, it can also be argued that those who are attracted to these activities have certain existing biological characteristics that predispose them to avoiding dementia. In other words, it may not be the activities that are working to help them avoid dementia, but rather their inherent cognitive abilities that lead them to enjoy these forms of entertainment, which in turn coincide with lower rates of dementia.
This principle can be extended to the types of studies graduate students conduct in their research, e.g. experiments that purport to show the efficacy of a certain treatment over a control. Often when the treatment results in some kind of improvement, especially when small samples are involved, primacy is attributed to the new treatment while downplaying or ignoring confounding effects. Thus, building an awareness of correlation and causation requires repeated practice in order to develop a deep-seated sense of caution when looking for patterns, while developing a healthy skepticism toward the seduction of what appear to be the most available causes.
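A small simulation makes the danger concrete. In the sketch below (all numbers invented), a hidden "family support" variable drives both breakfast eating and grades; a naive comparison then shows a large grade gap between breakfast eaters and skippers even though breakfast has no causal effect at all in the simulated data:

```python
import random

random.seed(1)
N = 5000

def mean(xs):
    return sum(xs) / len(xs)

# A hidden confounder drives BOTH breakfast habits and grades.
support = [random.gauss(0, 1) for _ in range(N)]              # family support/stability
breakfast = [s + random.gauss(0, 1) > 0 for s in support]     # supported kids eat breakfast more often
grades = [70 + 8 * s + random.gauss(0, 5) for s in support]   # grades depend on support, NOT breakfast

eaters = [g for g, b in zip(grades, breakfast) if b]
skippers = [g for g, b in zip(grades, breakfast) if not b]
print(round(mean(eaters) - mean(skippers), 1))  # a sizeable gap appears with zero causal link
```

Stratifying by (or regressing out) the support variable would make the gap vanish, which is exactly the check that an awareness of confounding prompts a researcher to perform.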
Representativeness, base rate and sample sizes
Most people have an intuitive understanding that larger sample sizes provide more accurate readings of a measured criterion than smaller ones. Most also understand that samples must be representative if they are meant to reflect characteristics of a larger population. However, representativeness can have a deceptive side. The following quiz is a version of one described by Kahneman (2011) that I introduce to my students.
Susan is 25 years old. Her hobbies are paragliding and scuba diving. She has had several boyfriends and she has tattoos on her ankles and back. What is her job?
1. A yoga instructor
2. A school teacher
3. A librarian
4. A race car driver
Generally speaking, only a small percentage of my students will choose "a school teacher," which is the correct answer. Instead, they will respond based on their perceived representativeness of Susan's character, which is more likely to match the image of the other three professions. However, base rate informs us that there are most likely dozens of times more school teachers than there are yoga instructors, librarians and female race car drivers all combined, meaning there is a high probability that Susan is a school teacher. It is true that representativeness can serve as a modest indicator, but a much more powerful predictive force is base rate. However, ascribing greater significance to representativeness while neglecting base rate, in other words, failing to take statistical probabilities into account, is a very common cognitive error that has implications during data analysis when looking to explain patterns or reasons for certain results. This may be especially the case in studies that survey attitudes; resulting analyses that invoke causative narratives based on representativeness tend to provide much more compelling reading than an explanation based on base rate.
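The pull of base rate can be made explicit with Bayes' rule. In the sketch below, every number is a hypothetical assumption chosen only for illustration; even when the description fits a yoga instructor or race car driver far better, the teachers' overwhelming base rate dominates the posterior:

```python
# Hypothetical base rates among the four listed jobs (teachers vastly outnumber the rest).
priors = {
    'school teacher': 0.96,
    'yoga instructor': 0.02,
    'librarian': 0.015,
    'race car driver': 0.005,
}
# Hypothetical P(description fits | job): representativeness favors the rare jobs.
likelihood = {
    'school teacher': 0.05,
    'yoga instructor': 0.60,
    'librarian': 0.02,
    'race car driver': 0.80,
}

# Bayes' rule: the posterior is proportional to prior * likelihood.
unnorm = {job: priors[job] * likelihood[job] for job in priors}
total = sum(unnorm.values())
posterior = {job: p / total for job, p in unnorm.items()}
best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 2))  # school teacher 0.75
```

Intuition supplies only the likelihood row; base rate neglect consists precisely in forgetting to multiply by the prior row.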
Conclusion
The cognitive biases discussed here tend to be absent from research methods textbooks; and while these phenomena may be deemed outside of the normal domain of coverage in research methods courses, I argue here that they can have a significant influence on both the research process as well as the outcomes of a study. Because an intuitive response tends to serve humans well, we can get lulled into a certain complacency even when doing apparently rigorous research. This may occur for several reasons:
- the rigorous adherence to proper research procedures is viewed as paramount, while good decision making receives less focus;
- statistical realities such as randomness, regression to the mean and base rate are banal and provide a much less compelling narrative than representativeness;
- confirmation bias acts as a strong force preventing any confounding effects from challenging established beliefs and positions.
Because our cognitive biases typically become entrenched by early adulthood, simple didactic teaching, or awareness-raising about them, is insufficient. Repeated exposure to and practice with various research scenarios, similar to the ones outlined throughout this essay, are required to reduce or avoid our tendencies to be attracted to the most obvious explanations and causes.
Finally, it should be apparent that many of these cognitive biases have connections with
each other, and collectively can have an impact on research outcomes. However, these
forces do not tend to be included in research methods courses.
They should be.
Disclosure statement
No potential conflict of interest was reported by the author.
ORCID
Paul Stapleton http://orcid.org/0000-0003-0733-7944
References
Cohen, Louis, Lawrence Manion, and Keith Morrison. 2011.Research Methods in Education. 7th ed.
New York: Routledge.
Creswell, John W. 2012.Educational Research: Planning, Conducting, and Evaluating Quantitative
and Qualitative Research. 4th ed. Boston, MA: Pearson.
Denzin, Norman K., and Yvonna S. Lincoln. 2003.The Landscape of Qualitative Research. 2nd ed.
Thousand Oaks, CA: Sage.
Dornyei, Z. 2007.Research Methods in Applied Linguistics: Quantitative, Qualitative and Mixed
Methodologies. Oxford: Oxford University Press.
Forer, B. R. 1949.The Fallacy of Personal Validation: A Classroom Demonstration of Gullibility.
The Journal of Abnormal and Social Psychology 44: 118123.
Gilbert, Daniel Todd. 1991.How Mental Systems Believe.American Psychologist 46: 107119.
Gorvin, J. J., and M. S. Roberts. 1994.Lunar Phases and Psychiatric Hospital Admissions.
Psychological Reports 75: 14351440.
Kahneman, D. 2011.Thinking Fast and Slow. New York: Farrar, Straus and Giroux.
Lewis, M. 2016.The Undoing Project: A Friendship that Changed Our Minds. New York: Norton.
Lindstromberg, S. 2016.Inferential Statistics in Language Teaching Research: A Review and Ways
Forward.Language Teaching Research 20 (6): 741768.
MacKay, Alison, and Susan M Gass. 2005.Second Language Research Methods. Mahwah, NJ:
Lawrence Erlbaum Associates, Inc.
8P. STAPLETON
Miles, Matthew. B., and A. Michael Huberman. 2014.Qualitative Data Analysis: An Expanded
Sourcebook. Thousand Oaks, CA: Sage.
Seligar, H. W., and E. Shohamy. 2013.Second Language Research Methods. Oxford: Oxford
University Press.
Stapleton, Paul., and Shao Qing. 2018. A worldwide survey of MATESOL programs in 2014:
Patterns and perspectives." Language Teaching Research, 22: 10-28.
Tversky, Amos, and Daniel Kahneman. 1974.Judgment under Uncertainty: Heuristics and
Biases.Science 185: 11241131.
... According to Encyclopedia Britannica, "[c]onfirmation bias is strong and widespread" [10] in contexts including making decisions. Concern with its negative effects on scientific research have been made in contexts of research methods courses [41] and information literacy education [25]. O'Neil [33] stated that "many [statistical] models [encode] human prejudice, misunderstanding, and bias." ...
Article
More research is needed involving middle school students' engagement in the statistical problem-solving process, particularly the beginning process steps: formulate a question and make a plan to collect data/consider the data. Further, the increased availability of large-scale electronically accessible data sets is an untapped area of study. This interpretive study examined middle school students' understanding of statistical concepts involved in making a plan to collect data to answer a statistical question within a social issue context using data available on the internet. Student artifacts, researcher notes, and audio and video recordings from nine groups of 20 seventh-grade students in two gifted education pull-out classes at a suburban middle school were used to answer the study research questions. Data were analyzed using a priori codes from previously developed frameworks and by using an inductive approach to find themes. Three themes that emerged from data related to confirmation bias. Some middle school students held preconceptions about the social issues they chose to study that biased their statistical questions. This in turn influenced the sources of data students used to answer their questions. Confirmation bias is a serious issue that is exacerbated due to endless sources of data electronically available. We argue that this type of bias should be addressed early in students' educational experiences. Based on the findings from this study, we offer recommendations for future research and implications for statistics and data science education.
... When research conceptualisation is taught, it is explained in research methods courses and textbooks. However, research methods and textbooks are necessarily simplified because they cannot cover the many elements involved in research processes (Stapleton 2018). ...
Article
Research conceptualization is challenging for doctoral and master’s writers, particularly multilingual students engaging in thesis writing or writing for publication. In doctoral and master’s student writing, research conceptualization appears in three genres: problem statements, research proposals and introduction sections or chapters. Swale’s (1990; Feak and Swales, 2011) CARS model is most often used to analyze conceptualization in these genres. While very useful as an analytical tool, the CARS model does not translate well to pedagogy. I argue that Merriam’s (2009) problem/purpose statement and questions (PPS&Q) format provides a flexible and accessible technique to make the process of research conceptualization visible and to help students focus their research throughout the writing process. Navigating problem formulation and gap spotting requires highly complex literacies and Merriam’s method allows students to begin simply and build complexity. While genre visibility provides a way for doctoral and master’s students to access high-level literacies demands, it can also be formulaic and constraining and needs to be taught with critical awareness.
Chapter
This chapter provides an analysis and evaluation of an institutional research project in higher education using the principles of QuantCrit (Gillborn et al., Race Ethnicity and Education, 21(2), 158–179, 2018). Mirroring the sentiments of Cross (Ethnicity and Disease, 28(1), 267–270, 2018), the chapter suggests that all those involved in institutional research and evaluation "should engage in critical self-reflection to avoid perpetuating racist narratives through data" (Cross, Ethnicity and Disease, 28(1), 268, 2018). Austen uses a typology of institutional research and evaluation (Austen, 'It ain't what we do, it's the way that we do it' – Researching student voices. WonkHE, 2018; Austen, The amplification of student voices via institutional research and evaluation. In T. Lowe & Y. El Hakim (Eds.), A handbook of student engagement: Theory into practice. Abingdon: Taylor & Francis, 2020) to discuss typical data collection approaches within an institution. Digital storytelling is used as one example of such an approach, and case studies are included. The focus of this chapter is primarily on student stories; however, the hidden voices of staff are acknowledged and discussed.
Chapter
A core concept taught to forensic investigators is the practice of equivocal forensic analysis, which is strongly advocated by researchers and practitioners to prevent investigators from reaching incorrect conclusions, whether due to their own bias or as a result of subjectivity introduced by others. The process is, however, a time-consuming one, and students may not see the value in following it amidst a busy academic schedule. This paper examines how the red herring plot mechanism in a game-based storytelling environment can be used in a computer forensics semester module to highlight the importance of evaluating the available evidence objectively and thus encourage students to avoid the trap of developing and following preconceived theories.
Article
This article reports on a survey of 241 Master of Arts in TESOL (MATESOL) programs in 16 countries, serving as a snapshot of second language teacher education in 2014. After an initial screening against a set of criteria, these programs were identified, and their course offerings, along with other features such as entrance and capstone requirements, were categorized. In total, 3,877 courses across 15 knowledge fields were coded, with frequency counts taken. Our analysis revealed that the most frequently appearing course offerings tended to focus on teaching methods. However, large differences appeared among the programs with regard to the offering of courses in various knowledge fields. Differences also appeared between US and non-US programs, particularly with regard to practicum requirements. Several other patterns and themes emerged from the data, including the extensive coverage of social and cultural aspects of language learning in elective courses and the lack of focus on specific English as a foreign language (EFL) contexts in course offerings.
Article
The Undoing Project examines the relationship between two psychologists, Amos Tversky and Daniel Kahneman, whose work altered how we understand the functioning of the mind. In this book, Lewis embarks on a journey to understand and explain psychological research to a popular audience. Lewis is an expert writer who knows what sells books. The Undoing Project is an informative, entertaining, and quick read. Lewis has produced a well-researched book that is accessible to a broad audience.
Article
This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997–2015) of Language Teaching Research (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence to traditional methods and practices, some of which are suboptimal. Accordingly, a number of improvements are recommended. Topics covered include the implications of small average sample sizes, the unsuitability of p values as indicators of replicability, statistical power and implications of low power, the non-robustness of the most commonly used significance tests, the benefits of reporting standardized effect sizes such as Cohen’s d, options regarding control of the familywise Type I error rate, analytic options in pretest–posttest designs, ‘meta-analytic thinking’ and its benefits, and the mistaken use of a significance test to show that treatment groups are equivalent at pretest. An online companion article elaborates on some of these topics plus a few additional ones and offers guidelines, recommendations, and additional background discussion for researchers intending to submit to LTR an article reporting a (quasi)experimental study.
Article
Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size: these judgments are all based on data of limited validity, which are processed according to heuristic rules. For example, the apparent distance of an object is determined in part by its clarity, and reliance on this rule leads to systematic errors in the estimation of distance. This chapter describes three heuristics that are employed in making judgments under uncertainty. The first is representativeness, which is usually employed when people are asked to judge the probability that an object or event belongs to a class. The second is the availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development. The third is adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.
Book
Preface

Part I. Foundations of Research
1. Science, Schooling, and Educational Research (Learning About the Educational World; The Educational Research Approach; Educational Research Philosophies; Conclusions)
2. The Process and Problems of Educational Research (Educational Research Questions; Educational Research Basics; The Role of Educational Theory; Educational Research Goals; Educational Research Proposals, Part I; Conclusions)
3. Ethics in Research (Historical Background; Ethical Principles; Conclusions)
4. Conceptualization and Measurement (Concepts; Measurement Operations; Levels of Measurement; Evaluating Measures; Conclusions)
5. Sampling (Sample Planning; Sampling Methods; Sampling Distributions; Conclusions)

Part II. Research Design and Data Collection
6. Causation and Research Design (Causal Explanation; Criteria for Causal Explanations; Types of Research Designs; True Experimental Designs; Quasi-Experimental Designs; Threats to Validity in Experimental Designs; Nonexperiments; Conclusions)
7. Evaluation Research (What Is Evaluation Research?; What Can an Evaluation Study Focus On?; How Can the Program Be Described?; Creating a Program Logic Model; What Are the Alternatives in Evaluation Design?; Ethical Issues in Evaluation Research; Conclusions)
8. Survey Research (Why Is Survey Research So Popular?; Errors in Survey Research; Questionnaire Design; Writing Questions; Survey Design Alternatives; Combining Methods; Survey Research Design in a Diverse Society; Ethical Issues in Survey Research; Conclusions)
9. Qualitative Methods: Observing, Participating, Listening (Fundamentals of Qualitative Research; Participant Observation; Intensive Interviewing; Focus Groups; Combining Qualitative and Quantitative Methods; Ethical Issues in Qualitative Research; Conclusions)
10. Single-Subject Design (Foundations of Single-Subject Design; Measuring Targets of Intervention; Types of Single-Subject Designs; Analyzing Single-Subject Designs; Ethical Issues in Single-Subject Design; Conclusions)
11. Mixing and Comparing Methods and Studies (Mixed Methods; Comparing Research Designs; Performing Meta-Analyses; Conclusions)
12. Teacher Research and Action Research (Teacher Research: Three Case Studies; Teacher Research: A Self-Planning Outline for Creating Your Own Project; Action Research and How It Differs From Teacher Research; Validity and Ethical Issues in Teacher Research and Action Research; Conclusions)

Part III. Analyzing and Reporting Data
13. Quantitative Data Analysis (Why We Need Statistics; Preparing Data for Analysis; Displaying Univariate Distributions; Summarizing Univariate Distributions; Relationships (Associations) Among Variables; Presenting Data Ethically: How Not to Lie With Statistics; Conclusions)
14. Qualitative Data Analysis (Features of Qualitative Data Analysis; Techniques of Qualitative Data Analysis; Alternatives in Qualitative Data Analysis; Computer-Assisted Qualitative Data Analysis; Ethics in Qualitative Data Analysis; Conclusions)
15. Proposing and Reporting Research (Educational Research Proposals, Part II; Reporting Research; Ethics, Politics, and Research Reports; Conclusions)

Appendix A: Questions to Ask About a Research Article
Appendix B: How to Read a Research Article
Appendix C: Finding Information, by Elizabeth Schneider and Russell K. Schutt
Appendix D: Table of Random Numbers
Glossary; References; Author Index; Subject Index; About the Authors
Article
Is there a difference between believing and merely understanding an idea? R. Descartes (e.g., 1641 [1984]) thought so. He considered the acceptance and rejection of an idea to be alternative outcomes of an effortful assessment process that occurs subsequent to the automatic comprehension of that idea. This article examined B. Spinoza's (1982) alternative suggestion that (1) the acceptance of an idea is part of the automatic comprehension of that idea and (2) the rejection of an idea occurs subsequent to, and more effortfully than, its acceptance. In this view, the mental representation of abstract ideas is quite similar to the mental representation of physical objects: People believe in the ideas they comprehend, as quickly and automatically as they believe in the objects they see. Research in social and cognitive psychology suggests that Spinoza's model may be a more accurate account of human belief than is that of Descartes. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Acceptance of the claims made by researchers in any field depends in large part on the appropriateness of the methods used to gather data. In this chapter I focus on two approaches to research in second language acquisition: (a) various types of acceptability judgments or probes aimed at assessing acquisition of syntactic structure; and (b) various types of stimulated recall designed to gather learners' accounts of their own thought processes. Both methods attempt to overcome a principal problem in psycholinguistics: the desire to describe a learner's knowledge about a language based on the incomplete evidence stemming from learner production. Refinements in acceptability judgments have come from some newer multiple-choice or truth-value story tasks that allow researchers to determine the level of learner knowledge about particular syntactic structures (in the examples here, reflexives). Stimulated recall offers some additional perspectives, but its usefulness can be greatly affected by the temporal proximity of the recall to the original task; the amount of support provided to prompt the recall; and the nature and amount of training given to both interviewer and interviewee. While these newer research methods can improve the accuracy and variety of data available to SLA investigators, research methods drawn from L1 acquisition or L1 research cannot necessarily be assumed to be equally valid when used to examine L2 acquisition.