September 2020 · 163 Reads · 953 Citations · International Coaching Psychology Review
September 2020 · 168 Reads · 2,671 Citations · International Coaching Psychology Review
January 2011 · 344 Reads · 82 Citations
January 2009 · 298 Reads · 109 Citations
January 2009 · 157 Reads · 57 Citations
January 2009 · 61 Reads · 105 Citations
January 2009 · 485 Reads · 6,461 Citations
January 2009 · 753 Reads · 41 Citations
January 2009 · 373 Reads · 181 Citations
January 2009 · 57 Reads · 116 Citations
... Two evaluators assessed the methodological quality of the studies using the Cochrane Risk of Bias Assessment Tool from the Cochrane Handbook (Higgins and Green, 2008). This tool rates bias as low, high, or unclear across six domains, including randomization, allocation concealment, blinding of participants and outcome assessors, incomplete outcome data, and selective reporting. ...
September 2020 · International Coaching Psychology Review
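The domain-by-domain rating scheme described in the excerpt above can be sketched as a small tally of judgements per domain. The study names and ratings below are hypothetical placeholders, not data from the cited review:

```python
# Sketch: tallying Cochrane risk-of-bias judgements across six domains.
# Studies and ratings are hypothetical illustrations only.
from collections import Counter

DOMAINS = [
    "randomization", "allocation concealment", "blinding",
    "incomplete outcome data", "selective reporting", "other bias",
]

# One judgement per domain for each study: "low", "high", or "unclear"
ratings = {
    "Study A": ["low", "low", "unclear", "low", "low", "low"],
    "Study B": ["low", "high", "unclear", "low", "unclear", "low"],
}

def summarize(ratings):
    """Count low/high/unclear judgements per domain across all studies."""
    per_domain = {d: Counter() for d in DOMAINS}
    for judgements in ratings.values():
        for domain, judgement in zip(DOMAINS, judgements):
            per_domain[domain][judgement] += 1
    return per_domain

summary = summarize(ratings)
print(summary["blinding"]["unclear"])  # both hypothetical studies -> 2
```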
... The primary outcome of interest was the standardized mean difference (Hedges' g) [32] in PTSD severity between comparator groups. Random-effects NMAs were conducted because high heterogeneity in outcomes was expected. ...
September 2020 · International Coaching Psychology Review
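A standardized mean difference with Hedges' small-sample correction, as named in the excerpt above, can be sketched as follows; the group means, standard deviations, and sizes are hypothetical:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp            # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)     # bias-correction factor J
    return j * d

# Hypothetical severity scores in two comparator groups
g = hedges_g(20.0, 5.0, 30, 24.0, 5.0, 30)
print(round(g, 3))  # → -0.79
```

The correction factor J shrinks d slightly toward zero; it matters most when the combined sample is small.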
... Sorting the studies by sample size displays the potential impact of publication bias. By sorting the studies in reverse chronological order, we were able to identify when the treatment effect first reached conventional levels of statistical significance (Egger, Smith, & Altman, 2001; Borenstein, Hedges, Higgins, & Rothstein, 2009). ...
January 2009
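The sorting procedure described above amounts to a cumulative meta-analysis: studies are added one at a time in a chosen order and the pooled estimate is recomputed after each addition. A minimal fixed-effect sketch, with hypothetical (effect, variance) pairs:

```python
import math

def cumulative_meta(studies):
    """Fixed-effect cumulative meta-analysis over ordered (effect, variance)
    pairs; returns (estimate, standard error, z) after each added study."""
    out, w_sum, wy_sum = [], 0.0, 0.0
    for effect, var in studies:
        w = 1.0 / var              # inverse-variance weight
        w_sum += w
        wy_sum += w * effect
        est = wy_sum / w_sum
        se = math.sqrt(1.0 / w_sum)
        out.append((est, se, est / se))
    return out

# Hypothetical studies, already sorted (e.g. chronologically)
path = cumulative_meta([(0.30, 0.04), (0.25, 0.03), (0.35, 0.02)])
for step, (est, se, z) in enumerate(path, 1):
    # |z| > 1.96 marks the first point of conventional significance
    print(step, round(est, 3), round(z, 2))
```

With these hypothetical inputs the pooled effect first crosses |z| > 1.96 at the second study.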
... Thus, such approaches may yield effect size estimates that are somewhat difficult to interpret. In meta-analyses, it is important to account for the fact that experimental studies, such as randomized controlled trials (RCTs) of interventions, generate effect sizes in fundamentally different ways than observational studies do (Borenstein et al., 2009; Deeks et al., 2019). For instance, effects from RCTs (or quasi-experimental trials) are generated by comparing the degree to which groups change over time, whereas observational studies generate effects by quantifying the strength of association between variables. ...
January 2009
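One common way to reconcile the two kinds of effect described above is to convert between correlation-based and group-contrast metrics before pooling. A sketch of the standard r↔d conversions (the input values are hypothetical):

```python
import math

def r_to_d(r):
    """Convert a correlation-based effect (observational) to Cohen's d
    so it can be pooled with group-contrast effects (experimental)."""
    return 2 * r / math.sqrt(1 - r**2)

def d_to_r(d, n1, n2):
    """Inverse conversion; the factor a corrects for unequal group sizes
    (a = 4 when the groups are equal)."""
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d**2 + a)

print(round(r_to_d(0.371), 2))  # → 0.8
```

For equal group sizes the two conversions are exact inverses, so no information is lost in the round trip.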
... The meta-analysis method synthesizes the similarities and differences between the results of primary studies on a particular subject by bringing them together. Although perspectives in the literature differ, researchers generally define meta-analysis as a method in which the statistical results of quantitative studies are systematically combined and evaluated (Borenstein et al., 2009; Glass, 1976; Lipsey & Wilson, 2001). This research employed a meta-analysis approach to investigate the outcomes of experimental studies on the impact of GenAI use on learning outcomes in educational settings. ...
January 2009
... In line with our hypothesis that moderators of individual responses differ from those at the population level, we grouped the effect sizes of metrics by level of biological organization. For each level, random-effects models were adopted to estimate the mean response of the metrics to droughts across observations, accounting for the interdependence among observations within each study in addition to the studies' sampling variances (Borenstein et al., 2009). Moreover, because effect sizes of various power were ...
January 2009
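A random-effects pooled mean of the kind described above can be sketched with the DerSimonian-Laird estimator of the between-study variance τ²; the effect sizes and sampling variances below are hypothetical:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled mean using the DerSimonian-Laird tau^2 estimate."""
    k = len(effects)
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    mean_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    mean_re = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return mean_re, se_re, tau2

# Hypothetical per-study effect sizes and sampling variances
mean_re, se_re, tau2 = dersimonian_laird([0.10, 0.30, 0.35, 0.60],
                                         [0.02, 0.02, 0.03, 0.03])
```

Adding τ² to every study's variance flattens the weights, so heterogeneous studies pull the pooled mean toward the unweighted average.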
... Effect size estimates were calculated (F or t values were used when means and standard deviations were not reported) with a bias correction using Hedges' g (Lakens, 2013). To obtain a precise estimate of the studies' effects and to account for variance within and between studies, the effect sizes and variances within individual studies were aggregated (Borenstein et al., 2009) based on the seven outcome categories described above using RStudio Version 4.3.3 (RStudio Team, 2024) and its available packages (compute.es, ...
January 2009
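Recovering an effect size from a reported t statistic and then applying the Hedges bias correction, as the excerpt describes, can be sketched as follows (the t value and group sizes are hypothetical):

```python
import math

def d_from_t(t, n1, n2):
    """Cohen's d recovered from an independent-samples t statistic."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def hedges_correction(d, n1, n2):
    """Apply the small-sample bias-correction factor J to obtain Hedges' g."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Hypothetical reported statistics: t = 2.5 with two groups of 20
d = d_from_t(2.5, 20, 20)
g = hedges_correction(d, 20, 20)
```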
... Statistical heterogeneity was evaluated using Higgins' I² statistic and Cochran's Q (χ² test) [18]. Meta-analysis was performed using Review Manager version 5.4 (RevMan 5.4) and Comprehensive Meta-Analysis v3 (CMA V3) software [19,20]. A difference was considered significant when the probability value (P) was < 0.05. ...
January 2005
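The two heterogeneity measures named above can be computed directly from per-study effects and variances: Q is the weighted sum of squared deviations from the fixed-effect mean, and I² = (Q − df)/Q expresses the share of total variability attributable to between-study heterogeneity. A sketch with hypothetical inputs:

```python
def q_and_i2(effects, variances):
    """Cochran's Q statistic and Higgins' I^2 (in percent)."""
    w = [1.0 / v for v in variances]            # inverse-variance weights
    mean_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    # I^2 is clamped at 0 when Q falls below its degrees of freedom
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical per-study effects and sampling variances
q, i2 = q_and_i2([0.10, 0.30, 0.35, 0.60], [0.02, 0.02, 0.03, 0.03])
```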
... The current meta-analysis was conducted with a random-effects model using Comprehensive Meta-Analysis software (version 4, Biostat, Englewood, NJ, USA) [26]. A two-tailed p-value of less than 0.05 was established as the threshold for statistical significance. ...
January 2009
... By analyzing the effect sizes of predictors across different experiments, a weighted average effect size can be produced that is more generalizable than what is apparent in any one study [62-65]. However, meta-analyses may be hindered by publication bias, as studies that do not present marked results tend not to be published [66]. A small-scale meta-analysis can be conducted using results from several experiments carried out by a single research team. ...
January 2009
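Publication bias of the kind described above is often probed with Egger's regression test: the standardized effect is regressed on precision, and an intercept far from zero signals small-study effects. A minimal sketch with hypothetical data constructed so that smaller (noisier) studies show larger effects:

```python
import math

def egger_test(effects, variances):
    """Egger's regression: regress z = effect/SE on precision = 1/SE.
    Returns (intercept, slope); a non-zero intercept suggests bias."""
    se = [math.sqrt(v) for v in variances]
    x = [1.0 / s for s in se]                       # precision
    y = [e / s for e, s in zip(effects, se)]        # standardized effect
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical: effects shrink as study precision grows
intercept, slope = egger_test([0.8, 0.6, 0.4, 0.3],
                              [0.20, 0.10, 0.05, 0.02])
```

Standard practice tests the intercept against zero with a t-test; this sketch returns only the point estimates.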