Project
The Scientific Method & Guidelines for Science (usefulsciencepapers.org)
Project log
Ten checklists for Armstrong & Green's "The Scientific Method" book, in the form of PDF forms. For more information on "The Scientific Method", see thescientificmethod.info.
In Armstrong (1982a), I examined alternative explanations for the empirical findings that supported the use of formal planning. In considering the possibility that researcher bias might lead to such results, I used Terpstra’s (1981) evaluation scheme. Based on this test, poor methodology did not seem responsible for the conclusions on the value of formal planning.
Modest support was found for the "Dr. Fox Phenomenon": Management scientists gain prestige by unintelligible writing. A positive correlation (+0.7) was found between the prestige of 10 management journals and their "fog indices" (reading difficulty). Furthermore, 32 faculty members were asked to rate the prestige of four passages from management journals. The content of the passages was held constant while readability was varied. Those passages that were more difficult to read were rated higher in research competence.
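The abstract does not say which readability formula was used, but "fog index" conventionally refers to Gunning's. A minimal sketch of that formula, using a common vowel-group heuristic to approximate syllable counts (the passage is an invented example, not one of the study's four):

```python
import re

def gunning_fog(text: str) -> float:
    """Gunning fog index: 0.4 * (words per sentence + 100 * complex words / words).

    Words of three or more syllables count as "complex". Syllables are
    approximated by counting vowel groups, a common rough heuristic.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

# Invented example of the kind of prose the study varied for readability.
passage = ("The heterogeneity of organizational isomorphism necessitates "
           "a reconceptualization of institutional legitimation processes.")
print(f"Fog index: {gunning_fog(passage):.1f}")
```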
I briefly summarize prior research showing that tests of statistical significance are improperly used even in leading scholarly journals. Attempts to educate researchers to avoid pitfalls have had little success. Even when done properly, however, statistical significance tests are of no value. Other researchers have discussed reasons for these failures. I was unable to find empirical evidence to support the use of significance tests under any conditions. I then show that tests of statistical significance are harmful to the development of scientific knowledge because they distract the researcher from the use of proper methods. I illustrate the dangers of significance tests by examining a re-analysis of the M3-Competition. Although the authors of the re-analysis conducted a proper series of statistical tests, they suggested that the original M3-Competition was not justified in concluding that combined forecasts reduce errors, and that the selection of the best method is dependent on the selection of a proper error measure. I show that the original conclusions were correct. Authors should avoid tests of statistical significance; instead, they should report on effect sizes, confidence intervals, replications/extensions, and meta-analyses. Practitioners should ignore significance tests and journals should discourage them.
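As an illustration of the recommended alternative to significance testing, the sketch below reports a standardized effect size (Cohen's d) with an approximate 95% confidence interval for two sets of forecast errors. The data and function are hypothetical, not taken from the M3-Competition re-analysis:

```python
import math
import statistics

def effect_size_with_ci(a, b, z=1.96):
    """Cohen's d for two independent samples, with an approximate 95% CI.

    Uses the large-sample normal approximation for the standard error of d;
    replications and meta-analyses would sharpen the estimate further.
    """
    na, nb = len(a), len(b)
    pooled_var = (((na - 1) * statistics.variance(a)
                   + (nb - 1) * statistics.variance(b))
                  / (na + nb - 2))
    d = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)
    se = math.sqrt((na + nb) / (na * nb) + d * d / (2 * (na + nb)))
    return d, (d - z * se, d + z * se)

# Hypothetical absolute forecast errors: a combined vs. a single method.
combined = [2.1, 1.8, 2.4, 1.9, 2.0, 2.2, 1.7, 2.3]
single = [2.6, 2.9, 2.4, 3.1, 2.7, 2.8, 3.0, 2.5]
d, (lo, hi) = effect_size_with_ci(combined, single)
print(f"Cohen's d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```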
Brownlie and Saren (this issue) claim that “few innovative papers appear in the top marketing journals.” They attribute this problem to incentive structures. They ask what steps might be taken by the various stakeholders to encourage the development and transmission of useful innovative ideas. Presumably, this means findings that might contribute to better practices in marketing management. I address the first two issues (the problem and why it occurs) by drawing on empirical research by myself and others. I then speculate about the third issue: procedures for improving the publication prospects for useful innovations.
When we first began publication of the Journal of Forecasting, we reviewed policies that were used by other journals and also examined the research on scientific publishing. Our findings were translated into a referee's rating form that was published in the journal [Armstrong (1982a)]. These guidelines were favorably received. Most referees used the Referee's Rating Sheet (Exhibit 1 provides an updated version) and some of them wrote to tell us that they found it helpful in communicating the aims and criteria of the journal.
Research with the potential to produce controversial findings is important to progress in the sciences. But scientific innovators often meet with resistance from the scientific community. Much anecdotal evidence has been provided about the reception accorded to researchers who have obtained controversial findings. While many of these cases occurred long ago (e.g., Copernicus and Galileo), the problem continues to the present. This problem has been addressed to some extent in that nearly all universities grant their faculty tenure to protect their right to publish their findings. Still, the right to publish one's findings does not remove the barriers to publication of controversial findings.
Perhaps the major barrier to publication is peer review. Peer review serves many useful functions such as correcting errors and providing a fair way to allocate journal space and research funds. But it also suppresses innovation. Below, I discuss how peer review affects the publication of controversial findings, discuss what is currently being done, and then recommend another solution to this problem.
Romer (1993) suggests that universities should undertake experiments that would test the value of mandatory attendance for economics courses. He presents evidence showing that those who attended his classes received higher grades on his exams and concluded that “an important part of the relationship [to the course grade] reflects a genuine effect of attendance.” This conclusion is likely to be welcomed by some economics professors. In this note, I address two issues. First, what does prior research imply about a relationship between attendance and learning? Second, does Romer’s own evidence support his conclusion that mandatory attendance is beneficial?
In general, I thought that the Boal and Willis "Note on the Armstrong/Mitroff Debate" provided an interesting and fair discussion. The summary of the consequences of the subjective versus objective approaches (Table 1 in their paper) was helpful. It clearly outlined the dilemma faced by scientists: "Should I strive for personal gain or for scientific contributions?" It also described what is likely to happen to the theories generated from the subjective and objective approaches. For example, the authors claimed that the subjective approach will yield a fuller hearing for a theory.
Given my preference for empirical evidence, I was disappointed that Boal and Willis had little evidence to report. Fortunately, recent research has been done on the above topics. This research supports some of Boal and Willis's conclusions, but it falsifies their conclusion that the subjective approach will provide a fuller hearing for theories.
The evidence seems consistent with Boal and Willis's summary of the conflict between the advancement of scientists and scientific advancement. My summary of the empirical evidence on this conflict led to the "Author's Formula" (Armstrong, 1982a, p. 197). This states that scientists who are interested in career advancement should: (a) not select an important problem, (b) not challenge existing beliefs, (c) not obtain surprising results, (d) not use simple methods, (e) not provide full disclosure, and (f) not write clearly. These rules for scientists conflict with the aims of science. Unfortunately, many scientists use these rules and profit from them. Those who break the rules are often dealt with harshly by the scientific community.
Armstrong and Hubbard (1991), in a survey of editors of 20 psychology journals, found a bias against the publication of papers with controversial findings. The 16 editors who responded said that they had received few papers with controversial findings during the previous two years, and that when such papers were submitted, their referees usually rejected them. Some of these editors expressed dismay over this situation. The study encountered only one instance where the reviewers agreed that a paper with controversial findings should be published; the editor who handled this case was blunt: he picked referees who would agree to its publication.
This paper examines the additional evidence produced by the seven scientists on each of the issues. The issues were:
(1) "Should econometricians use the method of multiple hypotheses rather than advocacy?"
(2) "Do econometric methods provide the most accurate approach to short-range forecasting?" (Table 2 of "Folklore versus Fact")
(3) "Are complex econometric methods more accurate than simple econometric methods?" (Table 4 of "Folklore versus Fact")
Problems in the use of factor analysis for deriving theory are illustrated by means of an example in which the underlying factors are known. The actual underlying model is simple and it provides a perfect explanation of the data. While the factor analysis 'explains' a large proportion of the total variance, it fails to identify the known factors in the model. The illustration is used to emphasize that factor analysis, by itself, may be misleading as far as the development of theory is concerned. The use of a comprehensive and explicit a priori analysis is proposed so that there will be independent criteria for the evaluation of the factor analytic results.
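A minimal sketch of the kind of demonstration the abstract describes, with illustrative variables of my own choosing rather than the paper's actual data: six observed measures are generated exactly from three known dimensions, yet the extracted factors "explain" most of the variance without corresponding to any of those dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known simple model: every observed variable is an exact function of
# three underlying box dimensions (length, width, height).
n = 200
length, width, height = rng.uniform(1, 10, (3, n))
X = np.column_stack([
    length * width,                  # top surface area
    length * height,                 # side surface area
    width * height,                  # end surface area
    2 * (length + width),            # top perimeter
    np.sqrt(length**2 + width**2),   # top diagonal
    length * width * height,         # volume
])

# Principal-factor extraction from the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2  # retain the factors with the largest eigenvalues
explained = eigvals[:k].sum() / eigvals.sum()
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
print(f"Variance 'explained' by {k} factors: {explained:.0%}")
print("Loadings:\n", np.round(loadings, 2))
# The factors "explain" most of the variance, yet none of them corresponds
# to length, width, or height -- the true, known generators of the data.
```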
Non-directive interviewing gets my vote as the most important marketing research technique. Furthermore, it can be mastered in a few hours. With the following rules and some practice, you could become a fairly good non-directive interviewer.
Excel spreadsheet checklists for comparing written and oral reports to persuasion principles that are supported by evidence from Armstrong's Persuasive Advertising book, or by logic.
The checklist relates to making persuasive oral presentations for problem solving.
Many of the guidelines draw upon the principles in Persuasive Advertising.
When financial columnist James Surowiecki wrote The Wisdom of Crowds, he wished to explain the successes and failures of markets (an example of a "crowd") and to understand why the average opinion of a crowd is frequently more accurate than the opinions of most of its individual members. In this expanded review of the book, Scott Armstrong asks a question of immediate relevance to forecasters: Are the traditional face-to-face meetings an effective way to elicit forecasts from forecast crowds (i.e. teams)? Armstrong doesn't believe so. Quite the contrary, he explains why he considers face-to-face meetings a detriment to good forecasting practice, and he proposes several alternatives that have been tried successfully.
The following checklist relates to making persuasive oral presentations. Many of the guidelines draw upon research studies reported in my forthcoming book, Advertising and the Science of Persuasion (forthcoming 2005). Some of them were surprising to me.
… show you respect the client. This has the added advantage that a high-status spokesperson is more credible; you can enhance status with formal dress and by wearing glasses (or harm credibility with sunglasses).
___ 4. Casting. If working in a group, select one speaker who is similar to the client (e.g., in accent and manner). Pick someone who is good at listening. If you have weak content, pick a presenter who is attractive (in the eyes of the audience); attractiveness is not needed for strong content.
___ 5. Handouts. Do not hand out material to read when you begin the talk. This may cause listeners to get out of step with the speaker.
A review of editorial policies of leading journals and of research relevant to scientific journals revealed conflicts between “science” and “scientists.” Owing to these conflicts, papers are often weak on objectivity and replicability. Furthermore, papers often fall short on importance, competence, intelligibility, or efficiency. Suggestions were made for editorial policies such as: (1) structured guidelines for referees, (2) open peer review, (3) blind reviews, and (4) full disclosure of data and method. Of major importance, an author's “Note to Referees” (describing the hypotheses and design, but not the results) was suggested to improve the objectivity of the ratings of importance and competence. Also, recommendations were made to authors for improving contributions to science (such as the use of multiple hypotheses) and for promoting their careers (such as using complex methods and obtuse writing).
Honesty is vital to scientific work and, clearly, most scientists are honest. However, recent publicity about cases involving cheating, including cases of falsification of data and plagiarism, raises some questions: Is cheating a problem? Does it affect management science? Should anything be done?
We examined three papers for compliance with the scientific method: two with long-range forecasts of global mean temperatures (IPCC 2007; Green, Armstrong, & Soon 2009) and one that used the IPCC projections to forecast a dramatic decline in the population of polar bears. Ratings of compliance with science by the authors and other raters found that the IPCC projections and the forecasts of polar bear endangerment were not scientific. In contrast, Green, Armstrong, and Soon's forecast of no change in global mean temperatures was the product of the scientific method; in particular, they tested a reasonable alternative hypothesis.
Problem: The scientific method is unrivaled for generating useful knowledge, yet papers published in scientific journals frequently violate the scientific method.
Methods: A definition of the scientific method was developed from the writings of pioneers of the scientific method including Aristotle, Newton, and Franklin. The definition was used as the basis of a checklist of eight criteria necessary for compliance with the scientific method. The extent to which research papers follow the scientific method was assessed by reviewing the literature on the practices of researchers whose papers are published in scientific journals. Findings of the review were used to develop an evidence-based checklist of 20 operational guidelines to help researchers comply with the scientific method.
Findings: The natural desire to have one’s beliefs and hypotheses confirmed can tempt funders to pay for supportive research and researchers to violate scientific principles. As a result, advocacy has come to dominate publications in scientific journals, and has led funders, universities, and journals to evaluate researchers’ work using criteria that are unrelated to the discovery of useful scientific findings. The current procedure for mandatory journal review has led to censorship of useful scientific findings. We suggest alternatives, such as accepting all papers that conform with the eight criteria of the scientific method.
Originality: This paper provides the first comprehensive and operational evidence-based checklists for assessing compliance with the scientific method and for guiding researchers on how to comply.
Usefulness: The “Criteria for Compliance with the Scientific Method” checklist could be used by journals to certify papers. Funders could insist that research projects comply with the scientific method. Universities and research institutes could hire and promote researchers whose research complies. Courts could use it to assess the quality of evidence. Governments could base policies on evidence from papers that comply, and citizens could use the checklist to evaluate evidence on public policy. Finally, scientists could ensure that their own research complies with science by designing their projects using the “Guidelines for Scientists” checklist.
Keywords: advocacy; checklists; data models; experiment; incentives; knowledge models; multiple reasonable hypotheses; objectivity; regression analysis; regulation; replication; statistical significance