Project

The Scientific Method & Guidelines for Science (usefulsciencepapers.org)

Goal: To help improve the practice of science by developing and testing operational checklists of guidelines that help scientists follow the scientific method in their research, and that help researchers and others evaluate whether a research paper or report provides useful scientific findings.

The Cambridge University Press page for our book "The Scientific Method: A Guide to Finding Useful Knowledge" is at https://www.cambridge.org/core/books/scientific-method/AA207C88D913403F2D55DEB534F7DF1B#fndtn-information

Amazon's page is https://protect-au.mimecast.com/s/ynOZCvl1o9HR61vDFXbi2K?domain=amazon.com


Project log

Kesten Green
added a research item
Ten checklists for Armstrong & Green's "The Scientific Method" book, in the form of PDF forms. For more information on "The Scientific Method", see thescientificmethod.info.
J. Scott Armstrong
added 5 research items
In Armstrong (1982a), I examined alternative explanations to the empirical findings that supported the use of formal planning. In considering the possibility that researcher bias might lead to such results, I used Terpstra’s (1981) evaluation scheme. Based on this test, poor methodology did not seem responsible for the conclusions on the value of formal planning.
Modest support was found for the "Dr. Fox Phenomenon": Management scientists gain prestige by unintelligible writing. A positive correlation (+0.7) was found between the prestige of 10 management journals and their "fog indices" (reading difficulty). Furthermore, 32 faculty members were asked to rate the prestige of four passages from management journals. The content of the passages was held constant while readability was varied. Those passages that were more difficult to read were rated higher in research competence.
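The "fog index" used in the study above is Gunning's readability measure: 0.4 times the sum of the average sentence length (in words) and the percentage of words with three or more syllables. A minimal pure-Python sketch follows; the vowel-group syllable heuristic is a rough approximation, and the example sentences are illustrative rather than passages from the study.

```python
import re

def count_syllables(word):
    """Rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    """Gunning fog index: 0.4 * (avg sentence length + % complex words).

    A word counts as 'complex' here if it has three or more
    (estimated) syllables.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

On this measure, a plain six-word sentence scores around 2.4, while a sentence of the same length built from polysyllabic jargon scores above 40, which is the kind of gap the prestige ratings tracked.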
I briefly summarize prior research showing that tests of statistical significance are improperly used even in leading scholarly journals. Attempts to educate researchers to avoid pitfalls have had little success. Even when done properly, however, statistical significance tests are of no value. Other researchers have discussed reasons for these failures. I was unable to find empirical evidence to support the use of significance tests under any conditions. I then show that tests of statistical significance are harmful to the development of scientific knowledge because they distract the researcher from the use of proper methods. I illustrate the dangers of significance tests by examining a re-analysis of the M3-Competition. Although the authors of the re-analysis conducted a proper series of statistical tests, they suggested that the original M3-Competition was not justified in concluding that combined forecasts reduce errors, and that the selection of the best method is dependent on the selection of a proper error measure. I show that the original conclusions were correct. Authors should avoid tests of statistical significance; instead, they should report on effect sizes, confidence intervals, replications/extensions, and meta-analyses. Practitioners should ignore significance tests and journals should discourage them.
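As an illustration of the recommended alternatives to significance tests, here is a minimal standard-library sketch of an effect size (Cohen's d, the standardized mean difference) and a normal-approximation 95% confidence interval for a difference in means. The function names and any data are illustrative, not from the paper.

```python
import math
import statistics

def cohens_d(a, b):
    """Cohen's d: standardized mean difference using the pooled
    sample standard deviation."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

def mean_diff_ci(a, b, z=1.96):
    """Normal-approximation 95% confidence interval for the
    difference in means."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (diff - z * se, diff + z * se)
```

Reporting the effect size and interval conveys both the magnitude of a difference and its uncertainty, which a bare p-value does not.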
J. Scott Armstrong
added 6 research items
Brownlie and Saren (this issue) claim that “few innovative papers appear in the top marketing journals.” They attribute this problem to incentive structures. They ask what steps might be taken by the various stakeholders to encourage the development and transmission of useful innovative ideas. Presumably, this means findings that might contribute to better practices in marketing management. I address the first two issues (the problem and why it occurs) by drawing on empirical research by myself and others. I then speculate about the third issue: procedures for improving the publication prospects for useful innovations.
When we first began publication of the Journal of Forecasting, we reviewed policies that were used by other journals and also examined the research on scientific publishing. Our findings were translated into a referee's rating form that was published in the journal [Armstrong (1982a)]. These guidelines were favorably received. Most referees used the Referee's Rating Sheet (Exhibit 1 provides an updated version) and some of them wrote to tell us that they found it helpful in communicating the aims and criteria of the journal.
Research with the potential to produce controversial findings is important to progress in the sciences. But scientific innovators often meet with resistance from the scientific community. Much anecdotal evidence has been provided about the reception accorded to researchers who have obtained controversial findings. While many of these cases occurred long ago (e.g., Copernicus and Galileo), the problem continues to the present. This problem has been addressed to some extent in that nearly all universities grant their faculty tenure to protect their right to publish their findings. Still, the right to publish one's findings does not remove the barriers to publication of controversial findings. Perhaps the major barrier to publication is peer review. Peer review serves many useful functions such as correcting errors and providing a fair way to allocate journal space and research funds. But it also suppresses innovation. Below, I discuss how peer review affects the publication of controversial findings, describe what is currently being done, and then recommend another solution to this problem.
J. Scott Armstrong
added 2 research items
Romer (1993) suggests that universities should undertake experiments that would test the value of mandatory attendance for economics courses. He presents evidence showing that those who attended his classes received higher grades on his exams and concluded that “an important part of the relationship [to the course grade] reflects a genuine effect of attendance.” This conclusion is likely to be welcomed by some economics professors. In this note, I address two issues. First, what does prior research imply about a relationship between attendance and learning? Second, does Romer’s own evidence support his conclusion that mandatory attendance is beneficial?
In general, I thought that the Boal and Willis "Note on the Armstrong/Mitroff Debate" provided an interesting and fair discussion. The summary of the consequences of the subjective versus objective approaches (Table 1 in their paper) was helpful. It clearly outlined the dilemma faced by scientists: "Should I strive for personal gain or for scientific contributions?" It also described what is likely to happen to the theories generated from the subjective and objective approaches. For example, the authors claimed that the subjective approach will yield a fuller hearing for a theory. Given my preference for empirical evidence, I was disappointed that Boal and Willis had little evidence to report. Fortunately, recent research has been done on the above topics. This research supports some of Boal and Willis's conclusions, but it falsifies their conclusion that the subjective approach will provide a fuller hearing for theories. The evidence seems consistent with Boal and Willis's summary of the conflict between the advancement of scientists and scientific advancement. My summary of the empirical evidence on this conflict led to the "Author's Formula" (Armstrong, 1982a, p. 197). This states that scientists who are interested in career advancement should: (a) not select an important problem, (b) not challenge existing beliefs, (c) not obtain surprising results, (d) not use simple methods, (e) not provide full disclosure, and (f) not write clearly. These rules for scientists conflict with the aims of science. Unfortunately, many scientists use these rules and profit from them. Those who break the rules are often dealt with harshly by the scientific community.
J. Scott Armstrong
added 8 research items
Armstrong and Hubbard (1991), in a survey of editors of 20 psychology journals, found a bias against the publication of papers with controversial findings. The 16 editors who responded said that they received few papers with controversial findings during the last two years. When they did receive such papers, the referees usually rejected them. Some of these editors expressed dismay over this situation. The study encountered only one instance where the reviewers agreed that a paper with controversial findings should be published. The editor who handled this case was blunt: he picked referees who would agree to its publication.
This paper examines the additional evidence produced by the seven scientists on each of the issues. The issues were: (1) "Should econometricians use the method of multiple hypotheses rather than advocacy?" (2) "Do econometric methods provide the most accurate approach to short-range forecasting?" (Table 2 of "Folklore versus Fact") (3) "Are complex econometric methods more accurate than simple econometric methods?" (Table 4 of "Folklore versus Fact")
Problems in the use of factor analysis for deriving theory are illustrated by means of an example in which the underlying factors are known. The actual underlying model is simple and it provides a perfect explanation of the data. While the factor analysis 'explains' a large proportion of the total variance, it fails to identify the known factors in the model. The illustration is used to emphasize that factor analysis, by itself, may be misleading as far as the development of theory is concerned. The use of a comprehensive and explicit a priori analysis is proposed so that there will be independent criteria for the evaluation of the factor analytic results.
J. Scott Armstrong
added 2 research items
Non-directive interviewing gets my vote as the most important marketing research technique. Furthermore, it can be mastered in a few hours. With the following rules and some practice, you could become a fairly good non-directive interviewer.
Excel spreadsheet checklists for comparing written and oral reports against evidence-based persuasion principles supported by Armstrong's Persuasive Advertising book, or by logic.
J. Scott Armstrong
added a research item
The checklist relates to making persuasive oral presentations for problem solving. Many of the guidelines draw upon the principles in Persuasive Advertising.
J. Scott Armstrong
added a research item
When financial columnist James Surowiecki wrote The Wisdom of Crowds, he wished to explain the successes and failures of markets (an example of a "crowd") and to understand why the average opinion of a crowd is frequently more accurate than the opinions of most of its individual members. In this expanded review of the book, Scott Armstrong asks a question of immediate relevance to forecasters: Are the traditional face-to-face meetings an effective way to elicit forecasts from forecast crowds (i.e. teams)? Armstrong doesn't believe so. Quite the contrary, he explains why he considers face-to-face meetings a detriment to good forecasting practice, and he proposes several alternatives that have been tried successfully.
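The book's core claim, that a crowd's average is frequently more accurate than most of its individual members, can be checked directly for any set of estimates. A minimal sketch follows, with made-up numbers rather than data from the book or the review.

```python
import statistics

def crowd_vs_members(estimates, truth):
    """Compare the crowd average's error with each member's error.

    Returns the absolute error of the crowd's mean estimate and the
    number of individual members whose error is larger than it.
    """
    crowd_error = abs(statistics.mean(estimates) - truth)
    member_errors = [abs(e - truth) for e in estimates]
    beaten = sum(err > crowd_error for err in member_errors)
    return crowd_error, beaten
```

Because individual over- and under-estimates tend to cancel, the mean's error is often smaller than most members' errors, which is the statistical effect behind Surowiecki's examples.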
J. Scott Armstrong
added 14 research items
The following checklist relates to making persuasive oral presentations. Many of the guidelines draw upon research studies reported in my forthcoming book, Advertising and the Science of Persuasion (forthcoming 2005). Some of them were surprising to me.
… show you respect the client. This has the added advantage that a high-status spokesperson is more credible; you can enhance status by formal dress and by wearing glasses (or harm credibility with sunglasses).
___ 4. Casting. If working in a group, select one speaker who is similar to the client (e.g., in accent and manner). Pick someone who is good at listening. If you have weak content, pick a presenter who is attractive (in the eyes of the audience); attractiveness is not needed for strong content.
___ 5. Handouts. Do not hand out material to read when you begin the talk. This may cause listeners to get out of step with the speaker.
A review of editorial policies of leading journals and of research relevant to scientific journals revealed conflicts between 'science' and 'scientists'. Owing to these conflicts, papers are often weak on objectivity and replicability. Furthermore, papers often fall short on importance, competence, intelligibility, or efficiency. Suggestions were made for editorial policies such as: (1) structured guidelines for referees, (2) open peer review, (3) blind reviews, and (4) full disclosure of data and method. Of major importance, an author's “Note to Referees” (describing the hypotheses and design, but not the results) was suggested to improve the objectivity of the ratings of importance and competence. Also, recommendations are made to authors for improving contributions to science (such as the use of multiple hypotheses) and for promoting their careers (such as using complex methods and obtuse writing).
Honesty is vital to scientific work and, clearly, most scientists are honest. However, recent publicity about cases involving cheating, including cases of falsification of data and plagiarism, raises some questions: Is cheating a problem? Does it affect management science? Should anything be done?
Kesten Green
added a research item
We examined two papers with long-range forecasts of global mean temperatures (IPCC 2007, and Green, Armstrong, & Soon 2009), and one paper that used the IPCC projections to forecast a dramatic decline in the population of polar bears, for compliance with the scientific method. Ratings of compliance with science by the authors and other raters found that the IPCC projections and the forecasts of polar bear endangerment were not scientific. On the other hand, Green, Armstrong, and Soon's forecast of no change in global mean temperatures was the product of the scientific method. In particular, Green, Armstrong & Soon tested a reasonable alternative hypothesis.
Kesten Green
added an update
The latest version of "Guidelines for Science: Evidence-based Checklists" provides the following improvements over the previous version:
  1. Additional research findings, thanks to help from our reviewers.
  2. Errors corrected, thanks to our reviewers.
  3. Less critical information dropped to save readers' time.
  4. Improvements in the instructions for using the checklist, and in the wording of the checklist items, following more reliability testing of the checklist for compliance with science. We found that inter-rater reliability is already high.
  5. Organization of the material improved.
  6. Many mistakes corrected by our copy editors.
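Inter-rater reliability of the kind tested for the checklist is commonly quantified with Cohen's kappa, which corrects the raw agreement rate between two raters for the agreement expected by chance. A minimal pure-Python sketch follows; the ratings used in testing it are made up, not the project's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    `rater_a` and `rater_b` are equal-length sequences of category
    labels, one entry per item rated.
    """
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

A kappa near 1 indicates high reliability; values near 0 indicate agreement no better than chance.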
Kesten Green
8 June, 2018
 
Kesten Green
added an update
We have revised the Checklist after some testing using several raters. In response to feedback from the raters, we changed some of the wording including the instructions, reversed the order of two of the items, and provided more space for comments by expanding the checklist to two pages. Importantly, we have required raters to provide reasons when they rate a paper as complying with a criterion.
To help raters use the checklist, we have provided it in the form of a MS-Word document as a supporting resource for the Guidelines for Science paper. Users can type in reasons next to the blue index characters in the body of the checklist and copy the blue checked box in the instructions to replace unchecked boxes to reflect their ratings.
We welcome feedback on the revised Checklist.
Kesten Green
 
Kesten Green
added an update
The “Guidelines for Science: Evidence-based Checklists” Working Paper was updated on September 13, 2017 with version #416. It replaces the previous April ResearchGate revision #395. We have made substantial changes since April:
  1. The paper’s subtitle is changed. It is now “Evidence-based Checklists” in order to put more emphasis on the use of evidence-based checklists as the way to improve practice in any field.
  2. The most important changes involved the wording of the checklists. The description of the “Criteria for Scientific Research Checklist” (Exhibit 1) is more explicit and comprehensive about how to evaluate “compliance to science.”
  3. Compliance with the checklist could be used as a condition of scientists’ contract with funders and other stakeholders, and research-based PhD candidates’ contracts with their supervisors and university.
  4. We are not aware of any other way to ensure compliance with science.
  5. Many of the “Guidelines for Scientists” checklist (Exhibit 2) items are accompanied by improved explanations of the operational steps that scientists can use to comply with science.
  6. The paper is now organized in a more logical way. It starts with the “Criteria for Scientific Research Checklist” (Exhibit 1) that can be used by any stakeholder to assess whether a research paper amounts to useful scientific research, and then provides operational steps for scientists to follow in order to meet those criteria (Exhibit 2).
  7. We sought, and received, additional replies from researchers we cited in order to ensure that we correctly summarized their findings.
  8. Thanks to our reviewers, we added much additional evidence. Despite these additions, we managed to reduce the length slightly. We will continue to improve the paper.
  9. Please send me suggestions for improvement. We are especially interested in relevant experimental findings that we might have overlooked, especially ones that conflict with our findings.
We are gratified by the interest of readers. We initially posted the paper on ResearchGate about one year ago. Since that time, there have been about 8,000 “Reads”; far more than for any other paper that either of us had previously posted on ResearchGate.
Kesten C. Green
University of South Australia
17 September 2017
 
Kesten Green
added a research item
Problem: The scientific method is unrivaled for generating useful knowledge, yet papers published in scientific journals frequently violate the scientific method. Methods: A definition of the scientific method was developed from the writings of pioneers of the scientific method including Aristotle, Newton, and Franklin. The definition was used as the basis of a checklist of eight criteria necessary for compliance with the scientific method. The extent to which research papers follow the scientific method was assessed by reviewing the literature on the practices of researchers whose papers are published in scientific journals. Findings of the review were used to develop an evidence-based checklist of 20 operational guidelines to help researchers comply with the scientific method. Findings: The natural desire to have one’s beliefs and hypotheses confirmed can tempt funders to pay for supportive research and researchers to violate scientific principles. As a result, advocacy has come to dominate publications in scientific journals, and has led funders, universities, and journals to evaluate researchers’ work using criteria that are unrelated to the discovery of useful scientific findings. The current procedure for mandatory journal review has led to censorship of useful scientific findings. We suggest alternatives, such as accepting all papers that conform with the eight criteria of the scientific method. Originality: This paper provides the first comprehensive and operational evidence-based checklists for assessing compliance with the scientific method and for guiding researchers on how to comply. Usefulness: The “Criteria for Compliance with the Scientific Method” checklist could be used by journals to certify papers. Funders could insist that research projects comply with the scientific method. Universities and research institutes could hire and promote researchers whose research complies. Courts could use it to assess the quality of evidence. Governments could base policies on evidence from papers that comply, and citizens could use the checklist to evaluate evidence on public policy. Finally, scientists could ensure that their own research complies with science by designing their projects using the “Guidelines for Scientists” checklist.
Keywords: advocacy; checklists; data models; experiment; incentives; knowledge models; multiple reasonable hypotheses; objectivity; regression analysis; regulation; replication; statistical significance
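A compliance checklist of the kind described can be represented as a simple scored list. The sketch below shows one way a rater's judgments might be tallied into a compliance rate; the item names are hypothetical placeholders (loosely echoing the paper's keywords), not the actual eight criteria from the checklist.

```python
# Hypothetical checklist items for illustration only; the actual eight
# criteria are those defined in the "Criteria for Compliance with the
# Scientific Method" checklist.
CRITERIA = [
    "important_problem",
    "prior_knowledge_reviewed",
    "multiple_reasonable_hypotheses",
    "valid_reliable_data",
    "valid_simple_methods",
    "experimental_evidence",
    "full_disclosure",
    "conclusions_follow_from_evidence",
]

def compliance_rate(ratings):
    """Fraction of checklist criteria rated as complied with.

    `ratings` maps a criterion name to True (complies) or False;
    missing criteria count as not complied with.
    """
    return sum(bool(ratings.get(c)) for c in CRITERIA) / len(CRITERIA)
```

Recording a reason alongside each True rating, as the project's revised checklist requires, would be a straightforward extension (e.g., mapping each criterion to a reason string instead of a bare boolean).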
Kesten Green
added a project goal
To help improve the practice of science by developing and testing operational checklists of guidelines that help scientists follow the scientific method in their research, and that help researchers and others evaluate whether a research paper or report provides useful scientific findings.
The Cambridge University Press page for our book "The Scientific Method: A Guide to Finding Useful Knowledge" is at https://www.cambridge.org/core/books/scientific-method/AA207C88D913403F2D55DEB534F7DF1B#fndtn-information