Chapter

Principles of Meta-Analysis

Abstract

Meta-analysis is a common feature of quantitative synthesis for systematic reviews, one of the four archetypes in this book.

... First, the clinical question was refined using the PICOS framework (patient, intervention, comparison, outcome, study design) [16]: P: children with acute lymphoblastic leukemia; I: chemotherapy; C: 6-thioguanine vs 6-mercaptopurine; O: effectiveness and safety; S: RCTs. ...
Article
Full-text available
Background: To systematically review the efficacy and safety of 6-thioguanine (6-TG) as a substitute for 6-mercaptopurine (6-MP) in the maintenance-phase treatment of childhood acute lymphoblastic leukemia (ALL), to explore its clinical value, and to provide evidence-based guidance for the maintenance treatment of ALL in children. Methods: Chinese databases (Chinese Biomedical Database (CBM), China National Knowledge Infrastructure (CNKI), Chongqing Weipu Database (VIP), and Wanfang Database) and international databases (PubMed, The Cochrane Library, Embase, and Web of Science) were searched by computer for randomized controlled trials (RCTs) of 6-TG in childhood acute lymphoblastic leukemia. Documents without an electronic edition and related conference papers were retrieved by hand. The search covered the period from database inception to September 1, 2019. Three researchers completed literature screening, data extraction, and methodological quality evaluation according to the inclusion and exclusion criteria. RevMan 5.3 software was used to evaluate the quality of the included studies, and Stata 12.0 software was used to conduct the meta-analysis of the outcome indicators. Results: This study systematically evaluated the efficacy and safety of 6-TG as a substitute for 6-MP as a maintenance drug for childhood acute lymphoblastic leukemia. Through the key outcome indicators, it is expected to yield a scientific, practical conclusion on 6-TG in the treatment of childhood acute lymphoblastic leukemia and to provide an evidence-based direction for clinical treatment. Conclusion: The efficacy and safety of 6-TG as a substitute for 6-MP in the maintenance treatment of childhood acute lymphoblastic leukemia will be evaluated in this study, and the conclusions will be published in relevant academic journals. Registration: PROSPERO (registration number CRD42020150466).
Article
Full-text available
Conventional reviews of research on the efficacy of psychological, educational, and behavioral treatments often find considerable variation in outcome among studies and, as a consequence, fail to reach firm conclusions about the overall effectiveness of the interventions in question. In contrast, meta-analytic reviews show a strong, dramatic pattern of positive overall effects that cannot readily be explained as artifacts of meta-analytic technique or generalized placebo effects. Moreover, the effects are not so small that they can be dismissed as lacking practical or clinical significance. Although meta-analysis has limitations, there are good reasons to believe that its results are more credible than those of conventional reviews and to conclude that well-developed psychological, educational, and behavioral treatment is generally efficacious.
Article
Full-text available
This paper reviews the use of Bayesian methods in meta-analysis. Whilst there has been an explosion in the use of meta-analysis over the last few years, driven mainly by the move towards evidence-based healthcare, so too Bayesian methods are being used increasingly within medical statistics. Whilst in many meta-analysis settings the Bayesian models used mirror those previously adopted in a frequentist formulation, there are a number of specific advantages conferred by the Bayesian approach. These include: full allowance for all parameter uncertainty in the model, the ability to include other pertinent information that would otherwise be excluded, and the ability to extend the models to accommodate more complex, but frequently occurring, scenarios. The Bayesian methods discussed are illustrated by means of a meta-analysis examining the evidence relating to electronic fetal heart rate monitoring and perinatal mortality in which evidence is available from a variety of sources.
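To make the Bayesian random-effects formulation concrete, the sketch below approximates the joint posterior of the overall effect mu and the between-study standard deviation tau on a grid, using the marginal likelihood y_i ~ N(mu, v_i + tau^2) and flat priors. It is a minimal illustration only: the study estimates are invented and this is not the fetal-monitoring analysis from the paper.

```python
import numpy as np

# Hypothetical study-level effect estimates (e.g., log odds ratios) and variances
y = np.array([-0.35, -0.10, -0.42, 0.05, -0.22])
v = np.array([0.04, 0.09, 0.06, 0.12, 0.05])

# Grid over the overall effect mu and the between-study SD tau
mu_grid = np.linspace(-1.0, 0.5, 301)
tau_grid = np.linspace(0.0, 1.0, 201)
MU, TAU = np.meshgrid(mu_grid, tau_grid, indexing="ij")

# Marginal likelihood of each study: y_i ~ N(mu, v_i + tau^2); flat priors on the grid
log_post = np.zeros_like(MU)
for yi, vi in zip(y, v):
    var = vi + TAU**2
    log_post += -0.5 * np.log(2 * np.pi * var) - 0.5 * (yi - MU) ** 2 / var

post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior summaries from the marginal distributions
mu_marg, tau_marg = post.sum(axis=1), post.sum(axis=0)
print(f"posterior mean of mu  = {np.sum(mu_grid * mu_marg):.3f}")
print(f"posterior mean of tau = {np.sum(tau_grid * tau_marg):.3f}")
```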
Article
Full-text available
Standard methods for meta‐analysis are limited to pooling tasks in which a single effect size is estimated from a set of independent studies. However, this setting can be too restrictive for modern meta‐analytical applications. In this contribution, we illustrate a general framework for meta‐analysis based on linear mixed‐effects models, where potentially complex patterns of effect sizes are modeled through an extended and flexible structure of fixed and random terms. This definition includes, as special cases, a variety of meta‐analytical models that have been separately proposed in the literature, such as multivariate, network, multilevel, dose‐response, and longitudinal meta‐analysis and meta‐regression. The availability of a unified framework for meta‐analysis, complemented with the implementation in a freely available and fully documented software, will provide researchers with a flexible tool for addressing nonstandard pooling problems.
Article
Full-text available
For meta‐analysis of studies that report outcomes as binomial proportions, the most popular measure of effect is the odds ratio (OR), usually analyzed as log(OR). Many meta‐analyses use the risk ratio (RR) and its logarithm, because of its simpler interpretation. Although log(OR) and log(RR) are both unbounded, use of log(RR) must ensure that estimates are compatible with study‐level event rates in the interval (0, 1). These complications pose a particular challenge for random‐effects models, both in applications and in generating data for simulations. As background we review the conventional random‐effects model and then binomial generalized linear mixed models (GLMMs) with the logit link function, which do not have these complications. We then focus on log‐binomial models and explore implications of using them; theoretical calculations and simulation show evidence of biases. The main competitors to the binomial GLMMs use the beta‐binomial (BB) distribution, either in BB regression or by maximizing a BB likelihood; a simulation produces mixed results. Two examples and an examination of Cochrane meta‐analyses that used RR suggest bias in the results from the conventional inverse‐variance‐weighted approach. Finally, we comment on other measures of effect that have range restrictions, including risk difference, and outline further research.
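As a reminder of the study-level quantities these models start from, the following sketch computes log(OR) and log(RR) with their usual large-sample variances from 2x2 tables (hypothetical counts; the paper's GLMMs and beta-binomial models are not reproduced here).

```python
import numpy as np

# Hypothetical 2x2 tables: events and totals in treatment and control arms
# (add 0.5 to every cell of a table that contains a zero before using these formulas)
a  = np.array([12.0, 5.0, 30.0])     # treatment events
n1 = np.array([100.0, 60.0, 250.0])  # treatment totals
c  = np.array([20.0, 9.0, 41.0])     # control events
n2 = np.array([100.0, 55.0, 245.0])  # control totals
b, d = n1 - a, n2 - c                # non-events

# log odds ratio and its large-sample variance
log_or = np.log((a * d) / (b * c))
var_log_or = 1/a + 1/b + 1/c + 1/d

# log risk ratio and its large-sample variance
log_rr = np.log((a / n1) / (c / n2))
var_log_rr = 1/a - 1/n1 + 1/c - 1/n2

for i in range(a.size):
    print(f"study {i+1}: logOR {log_or[i]:+.3f} (var {var_log_or[i]:.3f}), "
          f"logRR {log_rr[i]:+.3f} (var {var_log_rr[i]:.3f})")
```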
Article
Full-text available
Systematic reviews often encounter primary studies that report multiple effect sizes based on data from the same participants. These have the potential to introduce statistical dependency into the meta‐analytic data set. In this paper we provide a tutorial on dealing with effect size multiplicity within studies in the context of meta‐analyses of intervention and association studies, recommending a three‐step approach. The first step is to define the research question and consider the extent to which it mainly reflects interest in mean effect sizes (which we term a ‘convergent’ approach) or an interest in exploring heterogeneity (which we term a ‘divergent’ approach). A second step is to identify the types of multiplicities that appear in the initial database of effect sizes relevant to the research question, and we propose a categorization scheme to differentiate them. The third step is to select a strategy for dealing with each type of multiplicity. The researcher can choose between a ‘reductionist’ meta‐analytic approach, which is characterized by inclusion of a single effect size per study, or an ‘integrative’ approach characterized by inclusion of multiple effect sizes per study. We present an overview of available analysis strategies for dealing with effect size multiplicity within studies, and provide recommendations intended to help researchers decide which strategy might be preferable in particular situations. Last, we offer caveats and cautions about addressing the challenges multiplicity poses for systematic reviews and meta‐analyses.
Article
Full-text available
Meta-analyses often include only a small number of studies (≤5). Estimating between-study heterogeneity is difficult in this situation. An inaccurate estimate of heterogeneity can result in biased effect estimates and confidence intervals that are too narrow. The beta-binomial model has shown good statistical properties for meta-analysis of sparse data. We compare the beta-binomial model with different inverse variance random-effects (e.g., DerSimonian-Laird, modified Hartung-Knapp, Paule-Mandel) and fixed-effect methods (Mantel-Haenszel, Peto) in a simulation study. The underlying true parameters were obtained from empirical data of actually performed meta-analyses to best mirror real-life situations. We show that valid methods for meta-analysis of a small number of studies are available. In fixed-effect situations the Mantel-Haenszel and Peto methods performed best. In random-effects situations the beta-binomial model performed best for meta-analysis of few studies, considering the balance between coverage probability and power. We recommend the beta-binomial model for practical application. If very strong evidence is needed, using the Paule-Mandel heterogeneity variance estimator combined with modified Hartung-Knapp confidence intervals might be useful to confirm the results. Notably, most inverse variance random-effects models showed unsatisfactory statistical properties even when more studies (10-50) were included in the meta-analysis.
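For reference, the Mantel-Haenszel fixed-effect estimator mentioned above can be computed directly from the 2x2 tables; the sketch below uses hypothetical counts and the Robins-Breslow-Greenland variance for the confidence interval. It does not implement the beta-binomial model the authors recommend.

```python
import numpy as np

# Hypothetical 2x2 tables from a few small trials: events and totals per arm
a  = np.array([4.0, 2.0, 6.0, 1.0])      # treatment events
n1 = np.array([40.0, 25.0, 55.0, 20.0])  # treatment totals
c  = np.array([7.0, 4.0, 9.0, 3.0])      # control events
n2 = np.array([38.0, 27.0, 52.0, 21.0])  # control totals
b, d = n1 - a, n2 - c
N = n1 + n2

# Mantel-Haenszel pooled odds ratio (fixed effect, no per-study variance needed)
R_i, S_i = a * d / N, b * c / N
or_mh = R_i.sum() / S_i.sum()

# Robins-Breslow-Greenland variance of log(OR_MH)
P_i, Q_i = (a + d) / N, (b + c) / N
R, S = R_i.sum(), S_i.sum()
var_log = (P_i * R_i).sum() / (2 * R**2) \
        + (P_i * S_i + Q_i * R_i).sum() / (2 * R * S) \
        + (Q_i * S_i).sum() / (2 * S**2)
se_log = np.sqrt(var_log)

lo = np.exp(np.log(or_mh) - 1.96 * se_log)
hi = np.exp(np.log(or_mh) + 1.96 * se_log)
print(f"MH OR = {or_mh:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```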
Article
Full-text available
Purpose Meta-regression is widely used and misused today in meta-analyses in psychology, organizational behavior, marketing, management, and other social sciences, as an approach to the identification and calibration of moderators, with most users being unaware of serious problems in its use. The purpose of this paper is to describe nine serious methodological problems that plague applications of meta-regression. Design/methodology/approach This paper is methodological in nature and is based on well-established principles of measurement and statistics. These principles are used to illuminate the potential pitfalls in typical applications of meta-regression. Findings The analysis in this paper demonstrates that many of the nine statistical and measurement pitfalls in the use of meta-regression are nearly universal in applications in the literature, leading to the conclusion that few meta-regressions in the literature today are trustworthy. A second conclusion is that in almost all cases, hierarchical subgrouping of studies is superior to meta-regression as a method of identifying and calibrating moderators. Finally, a third conclusion is that, contrary to popular belief among researchers, the process of accurately identifying and calibrating moderators, even with the best available methods, is complex, difficult, and data demanding. Practical implications This paper provides useful guidance to meta-analytic researchers that will improve the practice of moderator identification and calibration in social science research literatures. Social implications Today, many important decisions are made on the basis of the results of meta-analyses. These include decisions in medicine, pharmacology, applied psychology, management, marketing, social policy, and other social sciences. The guidance provided in this paper will improve the quality of such decisions by improving the accuracy and trustworthiness of meta-analytic results. Originality/value This paper is original and valuable in that there is no similar listing and discussion of the pitfalls in the use of meta-regression in the literature, and there is currently a widespread lack of knowledge of these problems among meta-analytic researchers in all disciplines.
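To fix ideas about what is being criticized, a bare-bones weighted (inverse-variance) meta-regression on one study-level moderator is sketched below, alongside the simple subgrouping alternative the paper favours. The data are invented, and the sketch ignores the measurement-error corrections the authors discuss.

```python
import numpy as np

# Hypothetical effect sizes (d), their variances, and one study-level moderator
d   = np.array([0.30, 0.55, 0.10, 0.45, 0.25, 0.60])
v   = np.array([0.02, 0.04, 0.03, 0.05, 0.02, 0.06])
mod = np.array([1.0, 3.0, 0.0, 2.0, 1.0, 3.0])   # e.g., treatment intensity

# Inverse-variance weighted meta-regression: d_i = b0 + b1 * mod_i + e_i
w = 1.0 / v
X = np.column_stack([np.ones_like(mod), mod])
XtW = X.T * w
beta = np.linalg.solve(XtW @ X, XtW @ d)   # (X'WX)^-1 X'W d
se = np.sqrt(np.diag(np.linalg.inv(XtW @ X)))
print(f"intercept = {beta[0]:.3f} (SE {se[0]:.3f}), slope = {beta[1]:.3f} (SE {se[1]:.3f})")

# The subgrouping alternative discussed in the paper: pool within subsets of studies
for label, mask in [("low moderator", mod <= 1), ("high moderator", mod >= 2)]:
    wm = w[mask]
    print(f"{label}: pooled d = {np.sum(wm * d[mask]) / wm.sum():.3f}")
```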
Article
Full-text available
Available as Open Access through DOI 10.1002/jrsm.1260. We present a new tool for meta-analysis, Meta-Essentials, which is free-of-charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Though free-of-charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students.
Article
Full-text available
Meta-analysis is a statistical procedure for analyzing the combined data from different studies, and can be a major source of concise up-to-date information. The overall conclusions of a meta-analysis, however, depend heavily on the quality of the meta-analytic process, and an appropriate evaluation of the quality of meta-analysis (meta-evaluation) can be challenging. We outline ten questions biologists can ask to critically appraise a meta-analysis. These questions could also act as simple and accessible guidelines for the authors of meta-analyses. We focus on meta-analyses using non-human species, which we term ‘biological’ meta-analysis. Our ten questions are aimed at enabling a biologist to evaluate whether a biological meta-analysis embodies ‘mega-enlightenment’, a ‘mega-mistake’, or something in between.
Article
Full-text available
In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of ICC on bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n⩾100. The Mantel-Haenszel-based estimator of OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs≠1, but this bias is not very large in practical settings. Developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs.
Article
Full-text available
A typical behavioral research paper features multiple studies of a common phenomenon that are analyzed solely in isolation. Because the studies are of a common phenomenon, this practice is inefficient and forgoes important benefits that can be obtained only by analyzing them jointly in a single paper meta-analysis (SPM). To facilitate SPM, we introduce meta-analytic methodology that is user-friendly, widely applicable, and specially tailored to the SPM of the set of studies that appear in a typical behavioral research paper. Our SPM methodology provides important benefits for study summary, theory-testing, and replicability that we illustrate via three case studies that include papers recently published in the Journal of Consumer Research and the Journal of Marketing Research. We advocate that authors of typical behavioral research papers use it to supplement the single-study analyses that independently discuss the multiple studies in the body of their papers as well as the "qualitative meta-analysis" that verbally synthesizes the studies in the general discussion of their papers. When used as such, this requires only a minor modification of current practice. We provide an easy-to-use website that implements our SPM methodology.
Article
Full-text available
There is no simple method of correcting for publication bias in systematic reviews. We suggest a sensitivity analysis in which different patterns of selection bias can be tested against the fit to the funnel plot. Publication bias leads to lower values, and greater uncertainty, in treatment effect estimates. Two examples are discussed. An appendix lists the S-plus code needed for carrying out the analysis.
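The S-Plus code from the appendix is not reproduced here. As a simpler, related diagnostic (not the authors' selection-model sensitivity analysis), an Egger-type regression of the standardized effect on precision can flag funnel-plot asymmetry; a sketch with hypothetical data follows.

```python
import numpy as np
from scipy import stats

# Hypothetical study effects (log odds ratios) and their standard errors
y  = np.array([-0.60, -0.35, -0.48, -0.15, -0.70, -0.05, -0.40])
se = np.array([0.35, 0.20, 0.28, 0.12, 0.40, 0.10, 0.25])

# Egger-type test: regress the standardized effect on precision;
# an intercept far from zero suggests funnel-plot asymmetry (small-study effects).
z, prec = y / se, 1.0 / se
X = np.column_stack([np.ones_like(prec), prec])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ beta
sigma2 = resid @ resid / (len(z) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_int = beta[0] / np.sqrt(cov[0, 0])
p_int = 2 * stats.t.sf(abs(t_int), df=len(z) - 2)
print(f"Egger intercept = {beta[0]:.3f}, t = {t_int:.2f}, p = {p_int:.3f}")
```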
Article
Full-text available
As more complex meta-analytical techniques such as network and multivariate meta-analyses become increasingly common, further pressures are placed on reviewers to extract data in a systematic and consistent manner. Failing to do this appropriately wastes time, resources and jeopardises accuracy. This guide (data extraction for complex meta-analysis (DECiMAL)) suggests a number of points to consider when collecting data, primarily aimed at systematic reviewers preparing data for meta-analysis. Network meta-analysis (NMA), multiple outcomes analysis and analysis combining different types of data are considered in a manner that can be useful across a range of data collection programmes. The guide has been shown to be both easy to learn and useful in a small pilot study.
Article
Full-text available
Women commit sexual offenses, but the proportion of sexual offenders who are female is subject to debates. Based on 17 samples from 12 countries, the current meta-analysis found that a small proportion of sexual offenses reported to police are committed by females (fixed-effect meta-analytical average = 2.2%). In contrast, victimization surveys indicated prevalence rates of female sexual offenders that were six times higher than official data (fixed-effect meta-analytical average = 11.6%). Female sexual offenders are more common among juvenile offenders than adult offenders, with approximately 2 percentage points more female juvenile sex offenders than female adult sex offenders. We also found that males were much more likely to self-report being victimized by female sex offenders compared with females (40% vs. 4%). The current study provides a robust estimate of the prevalence of female sexual offending, using a large sample of sexual offenses across diverse countries.
Article
Full-text available
Individual participant data (IPD) meta-analysis is the gold standard of meta-analyses. This paper points out several advantages of IPD meta-analysis over classical meta-analysis, such as avoiding aggregation bias (e.g., ecological fallacy or Simpson’s paradox) and shows how its two main disadvantages (time and cost) can be overcome through Internet-based research. Ideally, we recommend carrying out IPD meta-analyses that consider online versus offline data gathering processes and examine data quality. Through a comprehensive literature search, we investigated whether IPD meta-analyses published in the field of educational psychology already follow these recommendations; this was not the case. For this reason, the paper demonstrates characteristics of ideal meta-analysis on teachers’ judgment accuracy and links it to recent meta-analyses on that topic. The recommendations are important for meta-analysis researchers and for readers and reviewers of meta-analyses. Our paper is also relevant to current discussions within the psychological community on study replication.
Article
Full-text available
The r package ecosystem is rich in tools for the statistics of meta‐analysis. However, there are few resources available to facilitate research synthesis as a whole. Here, I present the metagear package for r . It is a comprehensive, multifunctional toolbox with capabilities aimed to cover much of the research synthesis taxonomy: from applying a systematic review approach to objectively assemble and screen the literature, to extracting data from studies, and to finally summarize and analyse these data with the statistics of meta‐analysis. Current functionalities of metagear include the following: an abstract screener GUI to efficiently sieve bibliographic information from large numbers of candidate studies; tools to assign screening effort across multiple collaborators/reviewers and to assess inter‐reviewer reliability using kappa statistics; PDF downloader to automate the retrieval of journal articles from online data bases; automated data extractions from scatter‐plots, box‐plots and bar‐plots; PRISMA flow diagrams; simple imputation tools to fill gaps in incomplete or missing study parameters; generation of random‐effects sizes for Hedges' d , log response ratio, odds ratio and correlation coefficients for Monte Carlo experiments; covariance equations for modelling dependencies among multiple effect sizes (e.g. with a common control, phylogenetic correlations); and finally, summaries that replicate analyses and outputs from widely used but no longer updated meta‐analysis software. Research synthesis practices are vital to many disciplines in the sciences, including ecology and evolutionary biology, and metagear aims to enrich the scope, quality and reproducibility of what can be achieved with the systematic review and meta‐analysis of research outcomes.
Article
Full-text available
Background: Perfectionism is implicated in a range of psychiatric disorders, impedes treatment and is associated with poorer treatment outcomes. Aims: The aim of this systematic review and meta-analysis was to summarize the existing evidence for psychological interventions targeting perfectionism in individuals with psychiatric disorders associated with perfectionism and/or elevated perfectionism. Method: Eight studies were identified and were analysed in meta-analyses. Meta-analyses were carried out for the Personal Standards and Concern over Mistakes subscales of the Frost Multi-Dimensional Perfectionism Scale (FMPS) and the Self Orientated Perfectionism and Socially Prescribed Perfectionism subscales of the Hewitt and Flett MPS (HMPS) in order to investigate change between pre and postintervention. Results: Large pooled effect sizes were found for the Personal Standards and Concern over Mistakes subscales of the FMPS and the Self Orientated Perfectionism subscale of the HMPS, whilst a medium sized effect was found for change in Socially Prescribed Perfectionism. Medium pooled effect sizes were also found for symptoms of anxiety and depression. Conclusions: There is some support that it is possible to significantly reduce perfectionism in individuals with clinical disorders associated with perfectionism and/or clinical levels of perfectionism. There is also some evidence that such interventions are associated with decreases in anxiety, depression, eating disorder and obsessive compulsive symptoms. Further research is needed in order to investigate the optimal dosage and format of such interventions as well as into specific disorders where there is a lack of evidence for their effectiveness.
Article
Full-text available
To examine empirically whether the mean difference (MD) or the standardised mean difference (SMD) is more generalizable and statistically powerful in meta-analyses of continuous outcomes when the same unit is used. From all the Cochrane Database (March 2013), we identified systematic reviews that combined 3 or more randomised controlled trials (RCT) using the same continuous outcome. Generalizability was assessed using the I-squared (I2) and the percentage agreement. The percentage agreement was calculated by comparing the MD or SMD of each RCT with the corresponding MD or SMD from the meta-analysis of all the other RCTs. The statistical power was estimated using Z-scores. Meta-analyses were conducted using both random-effects and fixed-effect models. 1068 meta-analyses were included. The I2 index was significantly smaller for the SMD than for the MD (P < 0.0001, sign test). For continuous outcomes, the current Cochrane reviews pooled some extremely heterogeneous results. When all these or less heterogeneous subsets of the reviews were examined, the SMD always showed a greater percentage agreement than the MD. When the I2 index was less than 30%, the percentage agreement was 55.3% for MD and 59.8% for SMD in the random-effects model and 53.0% and 59.8%, respectively, in the fixed effect model (both P < 0.0001, sign test). Although the Z-scores were larger for MD than for SMD, there were no differences in the percentage of statistical significance between MD and SMD in either model. The SMD was more generalizable than the MD. The MD had a greater statistical power than the SMD but did not result in material differences.
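The two effect measures compared in this study are easy to compute from arm-level summaries; the sketch below derives the MD and the SMD (Hedges' g) with their usual variances from made-up trial data.

```python
import numpy as np

# Hypothetical two-arm summary data for k trials measuring the same scale
m1, s1, n1 = np.array([23.1, 20.4, 25.0]), np.array([6.0, 5.5, 7.2]), np.array([40, 55, 32])
m2, s2, n2 = np.array([26.0, 24.1, 27.5]), np.array([6.3, 5.9, 6.8]), np.array([42, 50, 30])

# Mean difference and its variance
md = m1 - m2
var_md = s1**2 / n1 + s2**2 / n2

# Standardized mean difference (Cohen's d with Hedges' small-sample correction)
sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))  # pooled SD
d = md / sp
J = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # Hedges' correction factor
g = J * d
var_g = J**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

for i in range(len(md)):
    print(f"trial {i+1}: MD = {md[i]:+.2f} (var {var_md[i]:.3f}), "
          f"SMD g = {g[i]:+.2f} (var {var_g[i]:.3f})")
```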
Article
Full-text available
The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2-20 trials with varying sample sizes and between-study heterogeneity, and allowing trials to have various sizes, e.g. 25% of the trials being 10-times larger than the smaller trials. We also compared the number of "positive" (statistically significant at p < 0.05) findings using empirical data of recent meta-analyses with ≥3 studies of interventions from the Cochrane Database of Systematic Reviews. The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL, and, far less so, HKSJ had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. Even with the HKSJ method, extra caution is needed when there are ≤5 studies of very unequal sizes.
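A minimal sketch of the two approaches being compared, using the standard DerSimonian-Laird tau-squared and the Hartung-Knapp(-Sidik-Jonkman) adjustment on hypothetical data (the "modified" HKSJ variant and the authors' conversion formula are not shown).

```python
import numpy as np
from scipy import stats

# Hypothetical study effects and within-study variances
y = np.array([0.42, 0.15, 0.61, 0.05, 0.33, 0.50])
v = np.array([0.030, 0.050, 0.080, 0.020, 0.040, 0.100])
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1 / v
mu_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fe) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate
ws = 1 / (v + tau2)
mu = np.sum(ws * y) / np.sum(ws)

# Conventional DL confidence interval (normal quantile)
se_dl = np.sqrt(1 / np.sum(ws))
ci_dl = mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_dl

# HKSJ adjustment: modified variance and a t quantile with k-1 df
q_hk = np.sum(ws * (y - mu) ** 2) / (k - 1)
se_hk = np.sqrt(q_hk / np.sum(ws))
ci_hk = mu + np.array([-1, 1]) * stats.t.ppf(0.975, df=k - 1) * se_hk

print(f"mu = {mu:.3f}, tau^2 = {tau2:.3f}")
print(f"DL 95% CI   : ({ci_dl[0]:.3f}, {ci_dl[1]:.3f})")
print(f"HKSJ 95% CI : ({ci_hk[0]:.3f}, {ci_hk[1]:.3f})")
```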
Article
Full-text available
Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance to communicate the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow.
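A few of the standard conversion formulas this primer covers, applied to made-up test statistics (the article's supplementary spreadsheet is not reproduced here).

```python
import numpy as np

# Standard textbook conversions; all numbers below are invented for illustration.

# Cohen's d from an independent-samples t-test
t, n1, n2 = 2.45, 28, 30
d_between = t * np.sqrt(1 / n1 + 1 / n2)

# Cohen's d_z from a paired-samples t-test (the correlation is absorbed in t)
t_paired, n_pairs = 3.10, 25
d_z = t_paired / np.sqrt(n_pairs)

# Eta-squared from a one-way ANOVA F-test
F, df_effect, df_error = 4.20, 2, 57
eta_sq = (F * df_effect) / (F * df_effect + df_error)

print(f"d (between) = {d_between:.3f}, d_z (within) = {d_z:.3f}, eta^2 = {eta_sq:.3f}")
```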
Article
In recent years, many studies link supply chain sustainability practices to firm performance, since more and more firms are implementing sustainable practices in their manufacturing/services supply chains. This study uses a psychometric meta-analysis to synthesize the results from 167 effect sizes, collected from 129 articles, to understand the impact of various types of sustainability practices (i.e., environmental, social, and combined) on firm performance (Financial and Operational). A sub-group analysis, using industry (manufacturing/service) and economy (developed/developing), was also performed to study the relative strength of sustainability-firm performance relationships in respective categories. The study confirms a positive association between the various aspects of sustainability and firm performance and finds that the strength of sustainability-firm performance relationships grows over time. Findings also suggest a stronger relationship between sustainability-firm performances in manufacturing industries than in service industries. This study provides interesting insights for policymakers and companies in various economies, and it augments the understanding of the impact of sustainable supply chain practices on firm performance.
Article
This methodological guidance article discusses the elements of a high-quality meta-analysis that is conducted within the context of a systematic review. Meta-analysis, a set of statistical techniques for synthesizing the results of multiple studies, is used when the guiding research question focuses on a quantitative summary of study results. In this guidance article, we discuss the systematic review methods that support high-quality meta-analyses and outline best practice meta-analysis methods for describing the distribution of effect sizes in a set of eligible studies. We also provide suggestions for transparently reporting the methods and results of meta-analyses to influence practice and policy. Given the increasing use of meta-analysis for important policy decisions, the methods and results of meta-analysis should be both transparent and reproducible.
Article
The decline in theoretical and empirical reviews of Brownian motion is worth noticing, not just because of its relevance to mathematical physics but also because of the unavailability of a suitable statistical technique. The ongoing debate on the transport phenomenon and thermal performance of various fluids in the presence of the haphazard motion of tiny particles, as explained by Albert Einstein using kinetic theory and by Robert Brown, is further addressed in this report. This report presents the outcome of a detailed inspection of the significance of Brownian motion on the flow of various fluids, as reported in forty-three (43) published articles, using the method of slope linear regression through the data points. The technique of slope regression through the data points of each physical property of the flow against the Brownian motion parameter was established and used to generate four forest plots. The outcome of the study indicates that an increase in Brownian motion corresponds to an enhancement of the haphazard motion of tiny particles. In view of this, there would always be a significant difference between the corresponding effects when Brownian motion is small and when it is large in magnitude. The maximum heat transfer rate can be achieved due to Brownian motion in the presence of thermal radiation and thermal and mass convective conditions at the wall in three-dimensional flow. In the presence of heat and mass convective conditions at the wall, together with thermal radiation, a significant increase in the Nusselt number due to Brownian motion is guaranteed. A decrease in the concentration of the fluid substance due to an increase in Brownian motion is bound to occur; this is not achievable, however, in the case of high entropy generation and a homogeneous-heterogeneous quartic autocatalytic kind of chemical reaction.
Article
Every meta-analysis involves a number of choices made by the analyst. These choices may refer to, for example, estimator of effect, model for analysis (fixed effects or random effects), or the treatment of varying study quality. The choices made can affect the results of the analysis. Every meta-analysis should therefore include a sensitivity analysis, designed to probe how choices made as part of the analysis affect its results. This paper describes a systematic approach to sensitivity analysis in meta-analyses. An index intended to summarize the results of a sensitivity analysis, the robustness score, is developed. The robustness score varies from 0 to 1. A value of 1 indicates that the results of a meta-analysis are robust; they are not at all affected by the choices made by the analyst. It is proposed that every meta-analysis include a sensitivity analysis for (a) the potential presence of publication bias, (b) the choice of estimator of effect (if relevant), (c) the possible presence of outlier bias (a single result having decisive influence on the summary estimate), (d) statistical weighting of individual estimates of effect, and (e) assessment of study quality. A recently reported meta-analysis of studies that have evaluated the effects on road safety of daytime running lights for cars is used as a case to explain the proposed approach to sensitivity analysis.
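The robustness score proposed in the paper is not reproduced here. As one simple ingredient of such a sensitivity analysis, the sketch below runs a leave-one-out check on a hypothetical fixed-effect summary to show how much any single study drives the result.

```python
import numpy as np

# Hypothetical effects and variances (e.g., log rate ratios for a safety outcome)
y = np.array([-0.25, -0.10, -0.40, -0.05, -0.30])
v = np.array([0.02, 0.03, 0.05, 0.01, 0.04])
w = 1 / v

def pooled(yy, ww):
    """Inverse-variance (fixed-effect) weighted average."""
    return np.sum(ww * yy) / np.sum(ww)

full = pooled(y, w)
print(f"all studies: {full:+.3f}")

# Leave-one-out: how much does the summary move when each study is excluded?
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    loo = pooled(y[keep], w[keep])
    print(f"without study {i+1}: {loo:+.3f} (shift {loo - full:+.3f})")
```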
Article
At the beginning of the development of meta‐analysis, understanding the role of moderators was given the highest priority, with meta‐regression provided as a method for achieving this goal. Yet in current practice, meta‐regression is not as commonly used as anticipated. This paper seeks to understand this mismatch by reviewing the history of meta‐regression methods over the past 40 years. We divide this time span into four periods and examine three types of methodological developments within each period: technical, conceptual, and practical. Our focus is broad and includes development of methods in the fields of education, psychology, and medicine. We conclude the paper with a discussion of five consensus points, as well as open questions and areas of research for the future.
Article
The definition of second order interaction in a (2 × 2 × 2) table given by Bartlett is accepted, but it is shown by an example that the vanishing of this second order interaction does not necessarily justify the mechanical procedure of forming the three component 2 × 2 tables and testing each of these for significance by standard methods.
Article
Meta-analyses are an important tool within systematic reviews to estimate the overall effect size and its confidence interval for an outcome of interest. If heterogeneity between the results of the relevant studies is anticipated, then a random-effects model is often preferred for analysis. In this model, a prediction interval for the true effect in a new study also provides additional useful information. However, the DerSimonian and Laird method - frequently used as the default method for meta-analyses with random effects - has been long challenged due to its unfavourable statistical properties. Several alternative methods have been proposed that may have better statistical properties in specific scenarios. In this paper, we aim to provide a comprehensive overview of available methods for calculating point estimates, confidence intervals and prediction intervals for the overall effect size under the random-effects model. We indicate whether some methods are preferable than others by considering the results of comparative simulation and real-life data studies.
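One commonly recommended set of formulas from this literature, sketched on hypothetical data: a DerSimonian-Laird tau-squared, the random-effects summary with its confidence interval, and a t-based prediction interval for the true effect in a new study (the alternative estimators discussed in the paper are not shown).

```python
import numpy as np
from scipy import stats

# Hypothetical study effects and within-study variances
y = np.array([0.12, 0.35, 0.08, 0.50, 0.28, 0.19, 0.41])
v = np.array([0.010, 0.030, 0.015, 0.060, 0.025, 0.020, 0.045])
k = len(y)

# DerSimonian-Laird tau^2 and the random-effects summary
w = 1 / v
mu_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fe) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
ws = 1 / (v + tau2)
mu = np.sum(ws * y) / np.sum(ws)
se_mu = np.sqrt(1 / np.sum(ws))

# 95% confidence interval for the mean effect
ci = mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_mu

# 95% prediction interval for the true effect in a new study
# (t quantile with k-2 df, as commonly recommended)
t_q = stats.t.ppf(0.975, df=k - 2)
pi = mu + np.array([-1, 1]) * t_q * np.sqrt(tau2 + se_mu**2)

print(f"mu = {mu:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f}), 95% PI ({pi[0]:.3f}, {pi[1]:.3f})")
```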
Article
The term “multilevel meta-analysis” is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term “multilevel meta-analysis” is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
Article
Meta-analysis is a common tool for synthesizing results of multiple studies. Among methods for performing meta-analysis, the approach known as ‘fixed effects’ or ‘inverse variance weighting’ is popular and widely used. A common interpretation of this method is that it assumes that the underlying effects in contributing studies are identical, and for this reason it is sometimes dismissed by practitioners. However, other interpretations of fixed effects analyses do not make this assumption, yet appear to be little known in the literature. We review these alternative interpretations, describing both their strengths and their limitations. We also describe how heterogeneity of the underlying effects can be addressed, with the same minimal assumptions, through either testing or meta-regression. Recommendations for the practice of meta-analysis are given; it is hoped that these will foster more direct connection of the questions that meta-analysts wish to answer with the statistical methods they choose.
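The "inverse variance weighting" summary and the accompanying homogeneity test are easily stated; a minimal sketch with hypothetical data follows.

```python
import numpy as np
from scipy import stats

# Hypothetical study effects and variances
y = np.array([0.20, 0.32, 0.15, 0.27, 0.40])
v = np.array([0.015, 0.040, 0.020, 0.030, 0.060])
k = len(y)

# Fixed-effect (inverse-variance-weighted) summary
w = 1 / v
mu = np.sum(w * y) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"fixed-effect estimate = {mu:.3f} (SE {se:.3f})")

# Cochran's Q test of homogeneity, with the I^2 descriptive statistic
Q = np.sum(w * (y - mu) ** 2)
p = stats.chi2.sf(Q, df=k - 1)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100
print(f"Q = {Q:.2f} on {k-1} df (p = {p:.3f}), I^2 = {I2:.1f}%")
```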
Article
The impact of information and communication technology (ICT) on economic performance has been the subject of academic research for several decades, and despite the remarkable and significant innovation in computer technology, usage, and investments, only a small growth in productivity has been observed. This observation has been termed the productivity paradox. This paper uses meta-analytical methods to examine publication bias and the size of the ICT elasticity. The empirical part is based on a collection of more than 800 estimates of ICT payoff effects from more than 70 studies written in the last 20 years. The meta-analysis reveals a strong presence of publication bias within the ICT productivity literature and, using a mixed effect multilevel model, estimates the ICT elasticity to be only 0.3%, which is more than ten times smaller than what was reported by a previous meta-analysis from 10 years ago.
Article
Student evaluation of teaching (SET) ratings are used to evaluate faculty's teaching effectiveness based on a widespread belief that students learn more from highly rated professors. The key evidence cited in support of this belief are meta-analyses of multisection studies showing small-to-moderate correlations between SET ratings and student achievement (e.g., Cohen, 1980, 1981; Feldman, 1989). We re-analyzed previously published meta-analyses of the multisection studies and found that their findings were an artifact of small sample sized studies and publication bias. Whereas the small sample sized studies showed large and moderate correlation, the large sample sized studies showed no or only minimal correlation between SET ratings and learning. Our up-to-date meta-analysis of all multisection studies revealed no significant correlations between the SET ratings and learning. These findings suggest that institutions focused on student learning and career success may want to abandon SET ratings as a measure of faculty's teaching effectiveness.
Article
Objective: The dose-response of short sleep duration in mortality has been studied, in addition to the incidences of notable health complications and diseases such as diabetes mellitus, hypertension, cardiovascular diseases, stroke, coronary heart diseases, obesity, depression, and dyslipidemia. Methods: We collected data from prospective cohort studies with follow-ups of one year or more on associations between short sleep duration and the outcomes. For the independent variable, we divided participants at baseline into short sleepers and normal sleepers. The primary outcomes were defined as mortality and an incident of each health outcome in the long-term follow-up. Risk ratios (RRs) for each outcome were calculated through meta-analyses of adjusted data from individual studies. Sub-group and meta-regression analyses were performed to investigate the association between each outcome and the duration of short sleep. Results: Data from a cumulative total of 5,172,710 participants were collected from 153 studies. Short sleep was significantly associated with the mortality outcome (RR, 1.12; 95% CI, 1.08-1.16). Similar significant results were observed in diabetes mellitus (1.37, 1.22-1.53), hypertension (1.17, 1.09-1.26), cardiovascular diseases (1.16, 1.10-1.23), coronary heart diseases (1.26, 1.15-1.38), and obesity (1.38, 1.25-1.53). There was no sufficient usable evidence for meta-analyses in depression and dyslipidemia. Meta-regression analyses found a linear association between a statistically significant increase in mortality and sleep duration at less than six hours. No dose-response was identified in the other outcomes. Conclusions: Based on our findings, future studies should examine the effectiveness of psychosocial interventions to improve sleep on reducing these health outcomes in general community settings.
Article
Systematic reviews provide a method for collating and synthesizing research, and are used to inform healthcare decision making by clinicians, consumers and policy makers. A core component of many systematic reviews is a meta-analysis, which is a statistical synthesis of results across studies. In this review article, we introduce meta-analysis, focusing on the different meta-analysis models, their interpretation, how a model should be selected and discuss potential threats to the validity of meta-analyses. We illustrate the application of meta-analysis using data from a review examining the effects of early use of inhaled corticosteroids in the emergency department treatment of acute asthma.
Article
Since the 1990s, a growing body of research has sought to quantify the relationship between women’s representation in leadership positions and organizational financial performance. Commonly known as the “business case” for women’s leadership, the idea is that having more women leaders is good for business. Through meta-analysis (k = 78, n = 117,639 organizations) of the direct effects of women’s representation in leadership (as CEOs, on top management teams, and on boards of directors) on financial performance, and tests that proxy theoretical arguments for moderated relationships, we call attention to equivocal findings. Our results suggest women’s leadership may affect firm performance in general and sales performance in particular. And women’s leadership—overall and, specifically, the presence of a female CEO—is more likely to positively relate to firms’ financial performance in more gender egalitarian cultures. Yet taking our findings as a whole, we argue that commonly used methods of testing the business case for women leaders may limit our ability as scholars to understand the value that women bring to leadership positions. We do not advocate that the business case be abandoned altogether but, rather, improved and refined. We name exemplary research studies to show how different perspectives on gender, alternative conceptualizations of value, and the specification of underlying mechanisms linking leadership to performance can generate changes in both the dominant ontology and the epistemology underlying this body of research.
Article
Despite a sizeable theoretical and empirical literature, no firm conclusions have been drawn regarding the impact of political democracy on economic growth. This article challenges the consensus of an inconclusive relationship through a quantitative assessment of the democracy-growth literature. It applies meta-regression analysis to the population of 483 estimates derived from 84 studies on democracy and growth. Using traditional meta-analysis estimators, the bootstrap, and Fixed and Random Effects meta-regression models, it derives several robust conclusions. Taking all the available published evidence together, it concludes that democracy does not have a direct impact on economic growth. However, democracy has robust, significant, and positive indirect effects through higher human capital, lower inflation, lower political instability, and higher levels of economic freedom. Democracies may also be associated with larger governments and less free international trade. There also appear to be country- and region-specific democracy-growth effects. Overall, democracy's net effect on the economy does not seem to be detrimental.
Article
Problem: Localities and states are turning to land planning and urban design for help in reducing automobile use and related social and environmental costs. The effects of such strategies on travel demand have not been generalized in recent years from the multitude of available studies. Purpose: We conducted a meta-analysis of the built environment-travel literature existing at the end of 2009 in order to draw generalizable conclusions for practice. We aimed to quantify effect sizes, update earlier work, include additional outcome measures, and address the methodological issue of self-selection. Methods: We computed elasticities for individual studies and pooled them to produce weighted averages. Results and conclusions: Travel variables are generally inelastic with respect to change in measures of the built environment. Of the environmental variables considered here, none has a weighted average travel elasticity of absolute magnitude greater than 0.39, and most are much less. Still, the combined effect of several such variables on travel could be quite large. Consistent with prior work, we find that vehicle miles traveled (VMT) is most strongly related to measures of accessibility to destinations and secondarily to street network design variables. Walking is most strongly related to measures of land use diversity, intersection density, and the number of destinations within walking distance. Bus and train use are equally related to proximity to transit and street network design variables, with land use diversity a secondary factor. Surprisingly, we find population and job densities to be only weakly associated with travel behavior once these other variables are controlled. Takeaway for practice: The elasticities we derived in this meta-analysis may be used to adjust outputs of travel or activity models that are otherwise insensitive to variation in the built environment, or be used in sketch planning applications ranging from climate action plans to health impact assessments. However, because sample sizes are small, and very few studies control for residential preferences and attitudes, we cannot say that planners should generalize broadly from our results. While these elasticities are as accurate as currently possible, they should be understood to contain unknown error and have unknown confidence intervals. They provide a base, and as more built-environment/travel studies appear in the planning literature, these elasticities should be updated and refined. Research support: U.S. Environmental Protection Agency.
Article
This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects.
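One way to obtain an unrestricted weighted least squares average of the kind described here is to regress each study's standardized effect on its precision with no intercept and keep the ordinary regression standard error; the sketch below does this on hypothetical data. Readers should consult the paper for the exact estimator and its properties.

```python
import numpy as np
from scipy import stats

# Hypothetical study effects and standard errors
y  = np.array([0.25, 0.10, 0.42, 0.05, 0.31, 0.18])
se = np.array([0.10, 0.06, 0.18, 0.05, 0.14, 0.09])
k = len(y)

# OLS of the standardized effect on precision, with no intercept.
# The slope equals the fixed-effect estimate; its conventional OLS standard
# error absorbs any excess heterogeneity multiplicatively.
t_vals = y / se
prec = 1.0 / se
X = prec.reshape(-1, 1)
b = np.linalg.lstsq(X, t_vals, rcond=None)[0][0]
resid = t_vals - b * prec
sigma2 = resid @ resid / (k - 1)              # multiplicative over/under-dispersion
se_uwls = np.sqrt(sigma2 / np.sum(prec**2))
ci = b + np.array([-1, 1]) * stats.t.ppf(0.975, df=k - 1) * se_uwls

# Conventional fixed-effect SE for comparison
se_fe = 1 / np.sqrt(np.sum(1 / se**2))
print(f"UWLS estimate = {b:.3f}, SE = {se_uwls:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"(fixed-effect SE would be {se_fe:.3f})")
```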
Article
We studied publication bias in the social sciences by analyzing a known population of conducted studies—221 in total—in which there is a full accounting of what is published and unpublished. We leveraged Time-sharing Experiments in the Social Sciences (TESS), a National Science Foundation–sponsored program in which researchers propose survey-based experiments to be run on representative samples of American adults. Because TESS proposals undergo rigorous peer review, the studies in the sample all exceed a substantial quality threshold. Strong results are 40 percentage points more likely to be published than are null results and 60 percentage points more likely to be written up. We provide direct evidence of publication bias and identify the stage of research production at which publication bias occurs: Authors do not write up and submit null findings.
Article
A common conjecture in the study of publication bias is that studies reporting a significant result are more likely to be selected for review than studies whose results are inconclusive. We envisage a population of studies following the standard random-effects model of meta-analysis, and a selection probability given by a function of the study's ‘t-statistic’. In practice it is difficult to estimate this function, and hence difficult to estimate its associated bias correction. The paper suggests the more modest aim of a sensitivity analysis in which the treatment effect is estimated by maximum likelihood constrained by given values of the marginal probability of selection. This gives a graphical summary of how the inference from a meta-analysis changes as we allow for increasing selection (as the marginal selection probability decreases from 1), with an associated diagnostic plot comparing the observed treatment effects with their fitted values implied by the corresponding selection model. The approach is motivated by a medical example in which the highly significant result of a published meta-analysis was subsequently overturned by the results of a large-scale clinical trial.
Article
Cancer is a leading cause of death worldwide. Mind-body interventions are widely used by cancer patients to reduce symptoms and cope better with disease- and treatment-related symptoms. In the last decade, many clinical controlled trials of qigong/tai chi as a cancer treatment have emerged. This study aimed to quantitatively evaluate the effects of qigong/tai chi on the health-related outcomes of cancer patients. Five databases (Medline, CINAHL, Scopus, the Cochrane Library, and the CAJ Full-text Database) were searched until June 30, 2013. Randomized controlled trials (RCTs) of qigong/tai chi as a treatment intervention for cancer patients were considered for inclusion. The primary outcome for this review was changes in quality of life (QOL) and other physical and psychological effects in cancer patients. The secondary outcome for this review was adverse events of the qigong/tai chi intervention. A total of 13 RCTs with 592 subjects were included in this review. Nine RCTs involving 499 subjects provided enough data to generate pooled estimates of effect size for health-related outcomes. For cancer-specific QOL, the pooled weighted mean difference (WMD) was 7.99 [95% confidence interval (CI): 4.07, 11.91; Z score=4.00, p<0.0001]. The standardized mean differences (SMDs) for changes in depression and anxiety score were -0.69 (95% CI: -1.51, 0.14; Z score=1.64, p=0.10), and -0.93 (95% CI: -1.80, -0.06; Z score=2.09, p=0.04), respectively. The WMDs for changes in body mass index and body composition from baseline to 12 weeks follow-up were -1.66 (95% CI: -3.51, 0.19; Z score=1.76, p=0.08), and -0.67 (95% CI: -2.43, 1.09; Z score=0.75, p=0.45) respectively. The SMD for changes in the cortisol level was -0.37 (95% CI: -0.74, -0.00; Z score=1.97, p=0.05). This study found that qigong/tai chi had positive effects on the cancer-specific QOL, fatigue, immune function and cortisol level of cancer patients. However, these findings need to be interpreted cautiously due to the limited number of studies identified and high risk of bias in included trials. Further rigorous trials are needed to explore possible therapeutic effects of qigong/tai chi on cancer patients.
Article
A meta-analysis is a statistical treatment of a dataset derived from a literature review. Meta-analysis appears to be a promising approach in agricultural and environmental sciences, but its implementation requires special care. We assessed the quality of the meta-analyses carried out in agronomy, with the intent to formulate recommendations, and we illustrate these recommendations with a case study relative to the estimation of nitrous oxide emission in legume crops. Eight criteria were defined for evaluating the quality of 73 meta-analyses from major scientific journals in the domain of agronomy. Most of these meta-analyses focused on production aspects and the impact of agriculture activities on the environment or biodiversity. None of the 73 meta-analyses reviewed satisfied all eight quality criteria and only three satisfied six criteria. Based on this quality assessment, we formulated the following recommendations: (i) the procedure used to select papers from scientific databases should be explained, (ii) individual data should be weighted according to their level of precision when possible, (iii) the heterogeneity of data should be analyzed with random-effect models, (iv) sensitivity analysis should be carried out and (v) the possibility of publication bias should be investigated. Our case study showed that meta-analysis techniques would be beneficial to the assessment of environmental impacts because they make it possible to study between site-year variability, to assess uncertainty and to identify the factors with a potential environmental impact. The quality criteria and recommendations presented in this paper could serve as a guide to improve future meta-analyses made in this area.
Article
This paradox is the possibility of P(A ∣ B) < P(A ∣ B′) even though P(A ∣ B) ≥ P(A ∣ B′) both under the additional condition C and under the complement C′ of that condition. Details are given on why this can happen and how extreme the inequalities can be. An example shows that Savage's sure-thing principle ("If you would definitely prefer g to f, either knowing that the event C obtained, or knowing that C did not obtain, then you definitely prefer g to f.") is not applicable to alternatives f and g that involve sequential operations.
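A tiny numeric illustration of the reversal described in the first sentence, using counts in the style of the classic kidney-stone example (not data from this paper).

```python
# Illustrative (success, patients) counts for two treatments across two conditions;
# these echo the well-known kidney-stone example and are not from the paper.
treat_B      = {"C": (81, 87),   "C'": (192, 263)}
treat_Bprime = {"C": (234, 270), "C'": (55, 80)}

def rate(counts):
    s, n = counts
    return s / n

for cond in ("C", "C'"):
    print(f"under {cond:>2}: B {rate(treat_B[cond]):.1%}  vs  B' {rate(treat_Bprime[cond]):.1%}")

overall = lambda t: sum(s for s, _ in t.values()) / sum(n for _, n in t.values())
print(f"overall : B {overall(treat_B):.1%}  vs  B' {overall(treat_Bprime):.1%}")
# B wins within each condition (93.1% > 86.7% and 73.0% > 68.8%) yet loses overall
# (78.0% < 82.6%), because B was applied mostly to the harder cases.
```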
Article
Previous meta‐analytic reviews of research concerning the door‐in‐the‐face (DITF) influence strategy have used the correlation coefficient as the index of effect size, an arguably incorrect choice; the odds ratio provides a much more appropriate effect‐size index for the kinds of outcomes studied. Correlations and odds ratios can yield quite different descriptions of experimental results, which makes for uncertainty about previously‐reported meta‐analytic results. This paper reports a meta‐analysis of the DITF research literature using the odds ratio as the effect‐size index. The results largely confirm those of the most recent DITF meta‐analysis.
Article
Meta-analysis is the dominant approach to research synthesis in the organizational sciences. We discuss seven meta-analytic practices, misconceptions, claims, and assumptions that have reached the status of myths and urban legends (MULs). These seven MULs include issues related to data collection (e.g., consequences of choices made in the process of gathering primary-level studies to be included in a meta-analysis), data analysis (e.g., effects of meta-analytic choices and technical refinements on substantive conclusions and recommendations for practice), and the interpretation of results (e.g., meta-analytic inferences about causal relationships). We provide a critical analysis of each of these seven MULs, including a discussion of why each merits being classified as an MUL, their kernels of truth value, and what part of each MUL represents misunderstanding. As a consequence of discussing each of these seven MULs, we offer best-practice recommendations regarding how to conduct meta-analytic reviews.