
Statistical issues that will get your manuscript rejected

Author: Ali H. Al-Hoorie

Abstract

An important part of professional development is engaging in scholarly research. However, many ELT researchers do not have sufficient training in quantitative analysis, which can be a major obstacle to successfully publishing one's findings in high-impact journals. This presentation provides an overview of the most common statistical problems found in second language journals (Al-Hoorie & Vitta, 2019). Each statistical issue will be described in detail with examples, followed by an explanation of how to avoid it. The statistical issues reviewed include reliability, validity, making inferences from descriptive statistics, incomplete reporting of results, not reporting effect sizes, not adjusting for multiple comparisons, and not checking assumptions. These problems are prevalent in published research in second language and applied linguistics journals. Tables and figures, both authentic and researcher-created for pedagogical purposes, will be used to explain these issues. The presentation will also discuss emerging and cutting-edge issues in research design, such as questionable research practices, the replication crisis, preregistration and registered reports, multi-lab collaborations, and the importance of a cumulative science based on meta-analytic syntheses (Marsden et al., 2018). It is hoped that attendees will gain a better understanding of the statistical requirements of academic publishing, allowing them not only to write better manuscripts but also to evaluate the statistical rigor of published work.

Al-Hoorie, A. H., & Vitta, J. P. (2019). The seven sins of L2 research: A review of 30 journals' statistical quality and their CiteScore, SJR, SNIP, JCR Impact Factors. Language Teaching Research, 23(6), 727–744. https://doi.org/10.1177/1362168818767191

Marsden, E., Morgan-Short, K., Trofimovich, P., & Ellis, N. C. (2018). Introducing Registered Reports at Language Learning: Promoting transparency, replication, and a synthetic ethic in the language sciences. Language Learning, 68(2), 309–320. https://doi.org/10.1111/lang.12284
STATISTICAL ISSUES
That Will Get Your Manuscript Rejected
Ali H. Al-Hoorie
ELT Saudi 2020 Conference Expo Summit
Jeddah, 29 January 2020
JOURNAL PUBLISHING
Can be frustrating
Final outcome seems random
Long process
QUANTITATIVE METHODOLOGY
High-impact journals favor quantitative methods
Hard to learn, especially for ELT and applied linguistics researchers
You could hire a statistician, but
you would always remain dependent
How can you critically evaluate literature you read?
Quantitative methodology is dynamic & evolving:
Statistics
Design
Practices
OUTLINE
Part 1: Common mistakes
Part 2: Modern research practices
COMMON MISTAKES
1) No reliability (see the sketch after this list)
2) No validity
3) Inferences from descriptive statistics
4) Incomplete reporting, incl. non-significant results
5) No effect sizes
6) No adjustment for multiple comparisons
7) Not checking assumptions
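A minimal sketch of how issue 1 might be checked, in Python, assuming a hypothetical matrix of Likert-type item scores; the `cronbach_alpha` helper is illustrative and not part of the original presentation:

```python
# Cronbach's alpha for a (respondents x items) score matrix.
# The scores below are randomly generated just to make the sketch runnable.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(100, 8))  # 100 respondents, 8 items scored 1-5
print(f"alpha = {cronbach_alpha(scores):.2f}")  # random data => alpha near 0
```

Reviewers conventionally look for alpha (or a comparable coefficient such as McDonald's omega) of roughly .70 or above before scale scores are interpreted.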
PART 1: COMMON MISTAKES
EXAMPLE 1: COURSE SATISFACTION (OUT OF 10)
n = 50 per group:

Groups    N     M     SD     t      df    p      d
Female    50    9.0   10.0   1.50   98    .137   0.30
Male      50    6.0   10.0

n = 150 per group:

Groups    N     M     SD     t      df    p      d
Female    150   9.0   10.0   2.60   298   .010   0.30
Male      150   6.0   10.0
TAKE HOME POINTS
Report not only the mean and SD, but also a statistical test
Report results in full, even if non-significant
Report the effect size
Aim for a larger sample
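These points can be illustrated by recomputing the two tables above from their summary statistics alone. A minimal sketch using scipy (with raw data, you would also check assumptions first, e.g. with scipy.stats.shapiro or scipy.stats.levene):

```python
# Same mean difference and effect size; only the sample size changes.
from scipy import stats

def report(m1, sd1, n1, m2, sd2, n2):
    t, p = stats.ttest_ind_from_stats(m1, sd1, n1, m2, sd2, n2)  # Student's t
    sd_pooled = ((sd1**2 + sd2**2) / 2) ** 0.5  # pooled SD (equal-n shortcut)
    d = (m1 - m2) / sd_pooled                   # Cohen's d: unaffected by n
    print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")

report(9.0, 10.0, 50, 6.0, 10.0, 50)    # t(98) = 1.50, p = .137, d = 0.30
report(9.0, 10.0, 150, 6.0, 10.0, 150)  # t(298) = 2.60, p = .010, d = 0.30
```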
EXAMPLE 2: COURSE SATISFACTION (CONT.)
Do female students like the course better?
Do female students studying Business like it better?
Do female students under the age of 20 like it better?
Do female students coming from outside Jeddah like it better?
Do female students who visited a foreign country like it better?
Once you find sig, rewrite the lit review and RQs
THE TEXAS SHARPSHOOTER FALLACY
TAKE HOME POINTS
Determine your RQs in advance
Avoid data “fishing expeditions”
“If you torture the data long enough, it will confess to anything”
Adjust for multiple comparisons (sketch below)
List all analyses you did before you reached a sig result
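A minimal sketch of the adjustment step, assuming five hypothetical p-values, one for each subgroup comparison above; the Holm method is shown (Bonferroni is available via method="bonferroni"):

```python
# Adjust over ALL comparisons that were run, not just the significant one.
from statsmodels.stats.multitest import multipletests

p_values = [0.04, 0.12, 0.03, 0.25, 0.009]  # hypothetical, one per comparison

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for p, p_adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p:.3f} -> adjusted p = {p_adj:.3f}, significant: {sig}")
```

After adjustment, only the smallest raw p-value survives; a lone significant result among many comparisons often disappears once the full list of analyses is taken into account.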
TAKE HOME POINTS
Confirmatory vs. exploratory research? (though see Szollosi & Donkin, 2019)
Predicted a priori vs. ad hoc rationalization
HARKing (Hypothesizing after Results are Known)
p-hacking
Cherry-picking
Selective omission
Data snooping
Avoid piecemeal publication
Questionable research practices (QRPs)
PART 2: MODERN RESEARCH PRACTICES
OPEN SCIENCE
Share your:
Instrument
Data
Code (syntax)
www.iris-database.org
Badges
REPLICATION
Replication crisis
Direct
Partial
Conceptual
ADVERSARIAL COLLABORATION
Competing predictions
Two parties agree on a design
Conduct the study & report the findings
Each party can interpret the results
Arbiter may co-author
PRE-REGISTRATION
Decide in advance on:
RQs
Research design
Sample size (via a power analysis; sketch below)
Statistical analyses
Register it online, time-stamped
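For the sample-size decision, a power analysis is the standard tool. A minimal sketch, assuming the d = 0.30 effect from the earlier course-satisfaction example:

```python
# How many participants per group to detect d = 0.30 with 80% power?
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.30, alpha=0.05, power=0.80)
print(f"n per group: {n:.0f}")  # ~175 for a two-tailed independent-samples t-test
```

The answer (about 175 per group) also shows why the n = 50 version of the earlier example was underpowered.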
MULTI-LAB COLLABORATION
Follow pre-determined procedures
Pre-registered
Usually replication
Can be adversarial
MULTI-LAB COLLABORATION
Long list of authors
APA 7th edition changes (October 2019)
References now list up to 20 authors (previously 7)
Professorial promotions?
REGISTERED REPORTS
Write the lit review, RQs & proposed method
1st review: before conducting the study
Get “in-principle acceptance”
Conduct the study, adhere to the proposed method
2nd review: for adherence to method only
Acceptance rate after in-principle acceptance approaches 100%
(Chambers, 2019)
CUMULATIVE MINDSET
Meta-analysis (sketch below)
Report non-significant results
Made a mistake?
Like medical studies
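A minimal sketch of the pooling arithmetic behind a fixed-effect meta-analysis, with hypothetical per-study effect sizes and sampling variances:

```python
# Inverse-variance weighting: precise studies count more toward the pooled effect.
import numpy as np

d = np.array([0.30, 0.10, 0.45, -0.05])  # hypothetical per-study Cohen's d
v = np.array([0.04, 0.09, 0.06, 0.12])   # hypothetical sampling variances

w = 1 / v                                 # inverse-variance weights
d_pooled = (w * d).sum() / w.sum()        # weighted mean effect
se = w.sum() ** -0.5                      # standard error of the pooled effect
print(f"pooled d = {d_pooled:.2f}, "
      f"95% CI [{d_pooled - 1.96 * se:.2f}, {d_pooled + 1.96 * se:.2f}]")
```

Non-significant results still shift the pooled estimate, which is why reporting them in full matters.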
REFERENCES
Chambers, C. (2019). What's next for registered reports? Nature, 573, 187–189.
Szollosi, A., & Donkin, C. (2019, September 21). Arrested theory development: The misguided distinction between exploratory and confirmatory research. https://doi.org/10.31234/osf.io/suzej