J.P.T. Higgins's scientific contributions

Publications (13)

Citations

... Therefore, it improves the precision of the results by estimating the size and direction of the effect, and it clarifies whether the effect size is consistent across studies. Thus, together with study design and risk-of-bias assessment, the meta-analysis allows the strength of the evidence to be assessed [34,39]. ...
... Results were pooled in a meta-analysis when studies reported consistent data-analysis methods, study designs and mental disorder outcomes. As suggested by Borenstein et al. (2009), when studies reported results in different ways (e.g., prevalence ratios versus odds ratios), summary data (i.e., the number of 'cases' versus 'controls' in exposed and unexposed groups) were used to compute and then combine the same index for all studies [37]. For binary outcomes, hazard ratios and odds ratios with 95% confidence intervals (95% CIs) were used. ...
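The summary-data approach described above can be sketched as follows. This is a minimal illustration, not the cited authors' code: it computes an odds ratio and its Wald-type 95% CI from a 2x2 table of cases and non-cases in exposed versus unexposed groups, following the standard log-odds-ratio formulas; the function name is illustrative.

```python
import math

def odds_ratio_ci(cases_exp, noncases_exp, cases_unexp, noncases_unexp):
    """Odds ratio and 95% CI from a 2x2 cases/controls table.

    Uses the usual large-sample approach: the standard error of
    log(OR) is sqrt(1/a + 1/b + 1/c + 1/d).
    """
    a, b, c, d = cases_exp, noncases_exp, cases_unexp, noncases_unexp
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = 1.96  # approximate 97.5th percentile of the standard normal
    log_or = math.log(or_)
    lo = math.exp(log_or - z * se_log_or)
    hi = math.exp(log_or + z * se_log_or)
    return or_, lo, hi
```

Once every study is expressed on the same index (here, a log odds ratio and its variance), the studies can be combined in a single pooled analysis.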
... Effect sizes of 0.2, 0.5, and 0.8 will be considered small, moderate, and large, respectively (Cohen, 1977). For clinical significance, we will use the risk difference as the effect size (Borenstein et al., 2009). If the included studies differ in their outcome measures, we will transform the different effect size metrics into Hedges' g to enable a single analysis. ...
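The conversion to Hedges' g mentioned above is, in the simplest two-group case, Cohen's d multiplied by the small-sample correction factor J = 1 - 3/(4*df - 1). A minimal sketch (the function name is illustrative, and real conversions from other metrics would first recover d):

```python
def hedges_g(d, n1, n2):
    """Convert Cohen's d to Hedges' g via the small-sample
    correction J = 1 - 3 / (4*df - 1), with df = n1 + n2 - 2."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return d * j
```

Because J < 1, g is always slightly smaller in magnitude than d, which matters most for small samples.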
... Lakens, 2013) and converted binary outcomes (provided as proportions) to Hedges' g_av (Borenstein et al., 2009). Effect sizes were converted when necessary so that positive values always represent effects favouring the intervention group or period over the control group or baseline period (Tables S2, S4, and S5). ...
... where V_Ȳ is the variance of the mean ES, V is the variance of the 1RM outcome ESs, m is the number of 1RM outcomes within the study, and r is the correlation coefficient that describes the extent to which the outcomes correlate [18]. The correlation coefficients were calculated from the raw data of the included study by Eifler [19], which consisted of raw data from 200 participants and eight 1RM outcomes. ...
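The quantity described above, the variance of the mean of m correlated effect sizes, can be sketched with the composite formula V_Ȳ = (1/m)² Σⱼ Σₖ rⱼₖ √(Vⱼ·Vₖ) given by Borenstein et al. (2009) for combining multiple outcomes within a study; here a single common correlation r is assumed between all outcome pairs, and the function name is illustrative:

```python
import math

def var_mean_es(variances, r):
    """Variance of the mean of m correlated effect sizes,
    V = (1/m)^2 * sum_j sum_k r_jk * sqrt(V_j * V_k),
    assuming a common pairwise correlation r (r_jj = 1)."""
    m = len(variances)
    total = 0.0
    for j, vj in enumerate(variances):
        for k, vk in enumerate(variances):
            rho = 1.0 if j == k else r
            total += rho * math.sqrt(vj * vk)
    return total / m ** 2
```

With equal variances V this reduces to (V/m)(1 + (m - 1)r): at r = 1 averaging gains nothing (variance stays V), while at r = 0 it gives the usual V/m of independent outcomes.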
... Pooled effect sizes, moderator and publication bias analyses were conducted using Comprehensive Meta-Analysis Software, Version 3.0 (Borenstein et al., 2013). ...
... The inverse-variance model was adopted to construct the weights of the meta-analysis, with random-effects models to account for possible heterogeneity among the included studies. The results were expressed with 95% confidence intervals and presented as standardized mean differences, because outcomes were measured with different assessment instruments (Borenstein et al., 2009a). ...
... Researchers who report a synthesis and heterogeneity effect are missing the crucial synthesis.35 The results of our meta-analysis confirmed that a relationship between exposure to ionizing radiation and the incidence of mesothelioma could be assessed. In addition, differences in the distribution of data were highlighted between the studies with radiotherapy exposure (North America for the most part) and those studying occupational exposures (United Kingdom, France, and Australia). ...
... Meta-analysis is essential to cumulative science.1 However, a common concern in meta-analysis is the overestimation of effect size due to publication bias, the preferential publishing of statistically significant studies.2,3,4,5,6 In addition, this effect-size exaggeration can be further increased by questionable research practices, that is, researchers' tendency to manipulate their data in a way that inflates the effect size and the evidence for an effect. ...
... We carried out our meta-analysis in line with the recommendations of Borenstein, Hedges, Higgins, and Rothstein (2009). First, because the studies in our corpus reported effect sizes in a variety of ways, including t tests (Stone et al., 1994), chi-square tests (Fointiat, 2008), and F tests (Morrongiello & Mark, 2008), we used Arthur, Bennett, and Huffcutt's (2001) formulae to transform all reported effect sizes into the correlation coefficient, r. ...
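The conversions described above follow standard textbook identities for one-degree-of-freedom tests: r = √(t²/(t² + df)), r = √(F/(F + df_error)) when the F has one numerator df (so F = t²), and r = φ = √(χ²/N) for a one-df chi-square. A minimal sketch (function names are illustrative):

```python
import math

def r_from_t(t, df):
    """r from an independent-samples t statistic: sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def r_from_f(f, df_error):
    """r from a one-numerator-df F statistic (F = t^2):
    sqrt(F / (F + df_error))."""
    return math.sqrt(f / (f + df_error))

def r_from_chi2(chi2, n):
    """r (the phi coefficient) from a one-df chi-square: sqrt(chi2 / N)."""
    return math.sqrt(chi2 / n)
```

Note these return the magnitude of r; the sign must be assigned from the direction of the reported effect before pooling.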