Methods of Meta-Analysis: Correcting Error and Bias in Research Findings
... The study employed the meta-analysis approach developed by Hunter and Schmidt (2004) and Hunter et al. (1982). This technique is widely used because it provides precise procedures for analyzing and reconciling conflicting results from the reviewed studies. ...
... If a study did not report Pearson's correlation coefficient (r), the reported t-value or Z-value was converted into a correlation coefficient. This conversion follows the formulas suggested by Hunter and Schmidt (2004) and Cooper et al. (2019). The formulas are presented as follows: ...
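The formulas themselves are elided in the excerpt above. As a hedged illustration, the conversions usually attributed to these sources are r = t / sqrt(t^2 + df) and r = Z / sqrt(N); a minimal Python sketch, assuming a simple two-variable test where df = n - 2:

```python
import math

def r_from_t(t: float, df: int) -> float:
    """Convert a t statistic (with its degrees of freedom) to Pearson's r."""
    return t / math.sqrt(t**2 + df)

def r_from_z(z: float, n: int) -> float:
    """Convert a standard-normal Z statistic to Pearson's r, given sample size n."""
    return z / math.sqrt(n)

# Hypothetical example: t = 2.5 with df = 48 corresponds to r of roughly 0.34.
print(round(r_from_t(2.5, 48), 2))
```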
... The next procedure for analyzing the meta-analysis data follows the three steps proposed by Hunter and Schmidt (2004). The first step is to calculate the mean correlation (r) to obtain an accurate estimate of the population's average correlation. ...
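In the bare-bones version of this step, each study's correlation is weighted by its sample size. A minimal sketch with hypothetical values:

```python
def mean_correlation(rs, ns):
    """Sample-size-weighted mean correlation (bare-bones first step)."""
    return sum(n * r for r, n in zip(rs, ns)) / sum(ns)

# Hypothetical study correlations and sample sizes.
print(round(mean_correlation([0.25, 0.31, 0.18], [120, 340, 95]), 3))
```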
The significance of gender diversity in enhancing the quality of reporting has been a focal point of theoretical and empirical research in recent decades. However, discrepancies persist between empirical findings and theoretical perspectives regarding the role of female representation in improving disclosure quality. This study aims to elucidate the relationship between female representation and disclosure. It employs a series of meta-analytic techniques on 189 empirical studies spanning 39 countries to investigate the relationship between female representation and disclosure. It further examines how gender parity, shareholder protection, types of disclosure, and publication quality moderate this relationship. The meta-analysis results indicate that female representation is significantly and positively associated with disclosure. This positive association is more pronounced in countries with high gender parity and low shareholder protection, underscoring the crucial monitoring role female representation can play in safeguarding shareholders’ interests, particularly when women have greater influence in boardroom decision-making (i.e., gender parity). Additionally, the findings reveal that female representation is more strongly associated with social and environmental disclosure than with financial and governance disclosure. The study provides valuable insights for regulators, directors, and shareholders by advocating for the empowerment of female representation and an increase in the number of women in senior positions within firms.
... To ensure a representative and comprehensive meta-analysis (Schmidt & Hunter, 2015; White, 2009), we conducted a systematic search to identify all relevant sources in the literature meeting the inclusion criteria above. First, we searched online research databases and search engines (i.e., EBSCO and Google Scholar) in April and May 2020. ...
... To analyze the coded samples, we followed the psychometric meta-analytic methods outlined in Schmidt and Hunter (2015). Corrections were made for sampling error as well as measurement error in both predictor and criterion measures (using artifact distributions based on internal consistency values). ...
... Data preparation also involved reflecting negatively valenced correlations (e.g., reflecting a negative correlation between resilience and depression) to ensure consistent keying across all aggregated correlations. In addition, nonindependent correlations (i.e., correlations from the same sample falling into the same meta-analytic aggregation) and their respective reliability estimates (if provided) were combined into composites using procedures recommended by Schmidt and Hunter (2015). This required a source to provide intercorrelations among the variables to be combined; in select cases where these were not reported, the simple means of the correlations and reliabilities were calculated and used (Arthur et al., 2008; Schmidt & Hunter, 2015). ...
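One common way to form such composites, described in Schmidt and Hunter (2015) for unit-weighted composites, divides the sum of the component correlations by the square root of the sum of the component intercorrelation matrix. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def composite_correlation(r_x_components, component_intercorr):
    """Correlation between x and a unit-weighted composite of several components.

    r_x_components: correlations of x with each component.
    component_intercorr: k x k intercorrelation matrix of the components
    (ones on the diagonal).
    """
    r_x_components = np.asarray(r_x_components, dtype=float)
    R = np.asarray(component_intercorr, dtype=float)
    return r_x_components.sum() / np.sqrt(R.sum())

# Two overlapping measures correlating 0.30 and 0.40 with an outcome and
# 0.50 with each other (hypothetical values) give a composite r of about 0.40.
print(round(composite_correlation([0.30, 0.40], [[1.0, 0.5], [0.5, 1.0]]), 3))
```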
Resilience is a potentially important factor for stress management, well-being, and success in the workplace. Consistent with this, there is a growing literature base on resilience at work. At the same time, there remains confusion and inconsistency about basic issues such as how to conceptualize resilience and the nature of resilience–outcome relationships. This meta-analysis focused on three distinct conceptualizations of individual-level resilience (i.e., capacity, enactment, demonstration), as well as their associations with valued workplace outcomes (i.e., performance, job attitudes, psychological well-being, physical health) and further considered potential moderators of these relationships. Overall, based on an examination of 279 primary sources including 297 independent samples (total N = 432,458), it was found that the three resilience components generally exhibited positive associations with each other and with four outcome domains. Several moderating variables were examined, including the focus of resilience measurement, presence of adversity, occupational risk, outcome source, and publication status; unexplained variability in resilience–outcome relationships remained even after accounting for these factors. The authors discuss the implications of these findings and provide suggestions for future research.
... In this study, a coding sheet was created using Microsoft Excel, comprising author(s)/year, sample size, r-value (correlation coefficient), Cronbach's α, performance type, agility type, country, and scores on Hofstede's cultural dimensions. If more than 75% of the items within a construct closely aligned with our defined criteria, we categorized that construct as the relevant dimension of performance type (Schmidt and Hunter, 2004). However, if the FP scale was unidimensional and combined financial and nonfinancial indicators without one set of indicators dominating by more than 75%, it was coded as MXP. ...
... Before conducting the analysis, the correlation coefficients from each study were corrected for reliability (Schmidt and Hunter, 2004). When a study's reliability coefficient was missing, the weighted average reliability was substituted (Lipsey and Wilson, 2001). ...
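The standard disattenuation correction divides the observed correlation by the square root of the product of the two reliabilities. A minimal sketch with hypothetical values:

```python
import math

def correct_for_unreliability(r, rel_x, rel_y):
    """Disattenuate an observed correlation for measurement error in x and y."""
    return r / math.sqrt(rel_x * rel_y)

# Observed r = 0.30 with hypothetical reliabilities (e.g., Cronbach's alpha) of 0.80 and 0.85.
print(round(correct_for_unreliability(0.30, 0.80, 0.85), 3))  # about 0.364
```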
... When the I² value exceeds 75%, it suggests high heterogeneity among studies (Huedo-Medina et al.). Moderation and mediation analysis procedures. In this study, the moderator NC comprises six dimensions treated as continuous variables, and weighted least squares (WLS) regression is recommended for their estimation (Schmidt and Hunter, 2004). WLS regression is employed to explicate and synthesize the estimated regression coefficients, as it accommodates both heteroscedasticity and excess heterogeneity (Stanley and Doucouliagos, 2017). ...
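As a hedged illustration of such a WLS meta-regression (not the authors' code), effect sizes can be regressed on a continuous moderator with study sample sizes as weights; all values below are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical effect sizes, a continuous cultural moderator, and sample sizes.
effect_sizes = np.array([0.21, 0.35, 0.28, 0.12, 0.40])
individualism = np.array([20, 80, 55, 30, 91])   # hypothetical Hofstede scores
sample_sizes = np.array([150, 320, 210, 95, 400])

X = sm.add_constant(individualism)               # intercept + moderator
model = sm.WLS(effect_sizes, X, weights=sample_sizes).fit()
print(model.params)    # intercept and moderator slope
print(model.pvalues)
```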
Big data analytics (BDA) is widely adopted by many firms to gain competitive advantages. However, some empirical studies have found an inconsistent relationship between BDA and firm performance (FP). Therefore, an underlying mediating mechanism may exist that facilitates their relationship. Based on the dynamic capabilities view (DCV) theory, this study aims to investigate the relationships among BDA, organizational agility (OA), and FP through meta-analysis. Additionally, we explore the mediating effect of OA on the BDA-FP relationship and the moderating effect of national culture (NC) on the BDA–OA–FP relationship. Furthermore, we examine potential methodological moderators in the BDA-FP relationship. Using the random-effect model, meta-analytic structural equation modeling, subgroup analysis, and meta-regression, we analyzed 34 studies with 42 independent samples conducted between 2019 and 2024. The results indicate that, firstly, BDA has a positive impact on OA and FP. Secondly, OA partially mediates the BDA–FP relationship, especially at the process level. Moreover, individualism and indulgence moderate the BDA–OA relationship, while uncertainty avoidance and long-term orientation moderate the OA–FP relationship at the marginal significance level. Regarding methodological moderators, the time dimension and analytical technique also moderate the BDA–FP relationship. This study contributes to the DCV theory in information system research and provides practical insights for firms.
... In this respect, the study provides a comprehensive perspective to the literature by using a multidimensional conceptualization of performance indicators. Secondly, although individual studies examine the relationship between green entrepreneurial orientation and sustainable firm performance [6][7][8], this study is important because it presents a current evaluation of the literature using meta-analysis. The meta-analysis method reveals the relationship between green entrepreneurial orientation and sustainable firm performance with a larger sample. ...
... Meta-analysis is the process of reviewing and re-examining individual studies previously conducted [8]. The critical point in meta-analysis is to present the relationships between the variables examined with the most accurate effect size value possible. ...
... Hunter and Schmidt's psychometric meta-analysis method was used in this study, and procedures and interpretations were based on correlation values [8]. Due to the nature of correlation-based studies, there is an implicit causality assumption in the correlation tests examined in this study. ...
The purpose of this study is to examine the relationship between green entrepreneurial orientation and sustainable firm performance. To examine this relationship, a meta-analysis was conducted using the Comprehensive Meta-Analysis Software (CMA) v4 package program. Through a systematic literature review, a sample of 23 articles, 42 effect sizes, and 6666 enterprises was reached. The studies included in the research were accessed by searching the keywords “green entrepreneurial orientation” and “sustainable firm performance” in the Web of Science, EBSCO Host, Scopus, and Google Scholar databases; only articles were included, without any year limit. Throughout the study, statistical analyses were performed on Fisher z values and conducted under the random effects model. The effect size, heterogeneity, and publication bias analyses of green entrepreneurial orientation and sustainable firm performance and its sub-dimensions were tested separately, and the findings were interpreted by converting them back into correlation coefficients. The analyses found that the relationship between green entrepreneurial orientation and sustainable firm performance is positive and highly significant (p < 0.05). In addition, the relationships between green entrepreneurial orientation and the sub-dimensions of sustainable firm performance, namely financial, environmental, social, sustainable, entrepreneurial, and green innovation performance, were found to be high and significant (p < 0.05). However, it was concluded that there is no significant relationship between green innovation performance, another dimension of sustainable firm performance, and green entrepreneurial orientation. Moderator analyses revealed that sector and continent have a moderating effect on the relationship between green entrepreneurial orientation and sustainable firm performance.
... However, techniques such as funnel plots can at least help highlight these problems [23]. Lack of heterogeneity (perhaps due to methodological differences or the existence of meaningful sub-populations) can also be detected and investigated [16,23]. ...
... Avoid close replications, since these may violate the independence assumption underlying meta-analysis [14]. Also, consider corrections to the meta-analysis [23] that may be needed because publication bias [10] can inflate the effect size estimate from the first study. ...
CONTEXT: There is growing interest in establishing software engineering as an evidence-based discipline. To that end, replication is often used to gain confidence in empirical findings, as opposed to reproduction, where the goal is to show the correctness or validity of the published results. OBJECTIVE: To consider what is required for a replication study to confirm the original experiment and apply this understanding in software engineering. METHOD: Simulation is used to demonstrate why the prediction interval for confirmation can be surprisingly wide. This analysis is applied to three recent replications. RESULTS: It is shown that because the prediction intervals are wide, almost all replications are confirmatory, so in that sense there is no 'replication crisis'; however, the contributions to knowledge are negligible. CONCLUSIONS: Replicating empirical software engineering experiments, particularly if they are under-powered or under-reported, is a waste of scientific resources. By contrast, meta-analysis is strongly advocated, so that all relevant experiments are combined to estimate the population effect.
... We integrated effect sizes using a combined method by Hedges and Olkin (1985) and Hunter and Schmidt (2004). Before integrating effect sizes, we corrected correlation coefficients for measurement error (Hunter and Schmidt, 2004), and we divided the correlations by the square root of the product of the reliabilities of the two constructs. When a specific study did not report the required reliability information, we used the average reliability of that construct. ...
... We also reported the number of effect sizes (k), cumulative sample size (N), 95% confidence intervals, Q-statistic, I² statistic, and fail-safe N for each evaluated relationship. A significant Q-test indicates substantial variance in the effect size distribution (Hunter and Schmidt, 2004). ...
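As a hedged illustration of these heterogeneity statistics (hypothetical data, not the authors' code), Q and I² can be computed on Fisher-z transformed correlations with inverse-variance weights:

```python
import numpy as np

def q_and_i2(rs, ns):
    """Cochran's Q and I^2 = (Q - df) / Q (as a percentage) on Fisher-z values."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)              # Fisher z transformation
    w = ns - 3                      # inverse sampling variance of z
    z_bar = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_bar) ** 2)
    df = len(rs) - 1
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return Q, i2

# Hypothetical correlations and sample sizes.
print(q_and_i2([0.10, 0.25, 0.40, 0.32], [80, 150, 200, 120]))
```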
Purpose
This study aims to clarify the direct impact of digitalization on export performance (EP) by synthesizing previous research and testing this relationship empirically. Furthermore, the study investigates digitalization types, contextual moderators and method moderators affecting the impact of digitalization on EP.
Design/methodology/approach
The study uses meta-analysis to test the digitalization–EP relationship (k = 81) using data from 106 independent samples involving 62,082 respondents across nearly 30 countries.
Findings
The study finds digitalization’s positive and significant effect on EP (r = 0.36). The impact of digitalization on EP is also subject to different moderators, including digitalization type (i.e. digital capabilities), contextual factors (i.e. institutions, export experience, development of the region and industry) and method factors (i.e. back translation and strategy measurement).
Originality/value
Scholars have initiated studies on the impacts of diverse digitalization types on EP, while empirical findings on these effects remain inconclusive. Based on resource-based theory, the study develops and validates a comprehensive meta-analytic framework, revealing the important influence of digitalization on EP. The moderator findings further highlight the impact of internal and external contingencies on the outcomes of exporting firms’ digitalization.
... This study focused on the overall correlation coefficients between leadership and innovation. In a few instances where studies only reported sub-dimension correlation coefficients between constructs, weighted average correlation coefficients were calculated to merge the sub-dimension correlations into an overall correlation coefficient (Hunter & Schmidt, 2004). ...
... To address this gap, this study proposes a theoretical model based on SDT and tests it using meta-analysis. Individual empirical studies are often subject to sampling and other methodological constraints, leading to potentially varying conclusions (Hunter & Schmidt, 2004). For example, the relationship between a specific leadership style and employee innovation can differ across types of organizations or depending on the hierarchical level of the surveyed employees. ...
Leadership plays a crucial role in fostering employee innovation. Which kind of leadership style is more closely associated with higher levels of employee innovation? This question remains a matter of debate in empirical research. To answer it, this study proposes a theoretical framework based on self-determination theory (SDT) to explain the variation in correlation coefficients between different leadership styles and employee innovation. Using meta-analysis, this study synthesizes evidence from 432 independent empirical studies (229 in Chinese and 203 in English, with a total sample size of 161,599) to test the hypotheses. The results show that: (1) transactional leadership, ethical leadership, transformational leadership, servant leadership, leader-member exchange (LMX), empowering leadership, inclusive leadership, and authentic leadership all have significant positive correlations with employee creative performance, with the correlations increasing in the order listed; (2) factors such as individualism, the method of performance appraisal, the time point of data collection, the measurement of leadership styles, the measurement of employee creative performance, and publication language partially moderate the relationship between leadership styles and employee creative performance. The results align with the theoretical predictions and contribute to the development of SDT. More importantly, this study provides key practical insights for managers by suggesting that employing the appropriate leadership style can effectively enhance employee creative performance.
... For studies that reported multiple correlations between two constructs, we averaged them and treated them as a single effect size [83]. ANOVA (F) values were converted into correlation coefficients using Rosenthal's formula [84]: ...
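The formula itself is elided above; a commonly cited form of Rosenthal's conversion, valid for F statistics with one numerator degree of freedom, is r = sqrt(F / (F + df_error)). A minimal sketch with hypothetical values:

```python
import math

def r_from_f(f_value: float, df_error: int) -> float:
    """Convert an F statistic with one numerator df to a correlation coefficient."""
    return math.sqrt(f_value / (f_value + df_error))

# Hypothetical example: F(1, 120) = 6.5 corresponds to r of roughly 0.23.
print(round(r_from_f(6.5, 120), 2))
```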
... To ensure the robustness and validity of this study, we double-checked the correlation coefficients and meticulously corrected them following the methodology outlined by Schmidt & Hunter [83]. This correction aimed to mitigate potential measurement errors in both dependent and independent variables. ...
This study provides a comprehensive understanding of online impulse buying (OIB) by integrating multiple theoretical frameworks through a meta-analytic structural equation modelling approach. We identified the most influential antecedents of OIB and highlighted the significant role of urge to buy impulsively (UBI) as a key mediator with a strong R² value of 0.75. Our findings reveal that UBI and OIB are distinct, with UBI serving as a major predictor rather than actual impulse buying behavior. By focusing on online environments, our research addresses gaps in the literature and offers a tailored framework for digital contexts. Additionally, the study explores the moderating effects of cultural factors and sample types, providing new insights for future research. For practitioners, our findings emphasize the importance of creating personalized and engaging online shopping experiences that align with consumer psychology and cultural nuances, enabling marketers to design more effective strategies for driving OIB. This research advances both theoretical and practical understanding of OIB, paving the way for future investigations into the evolving nature of consumer behavior in digital environments.
... We followed random-effects procedures described by Schmidt and Hunter (2015) to statistically integrate the data. We corrected effect sizes for sampling and measurement error of both stereotype threat and workplace outcomes. ...
... In cases where 95% CIs do not overlap, a moderation effect is likely present. Comparison of the CIs for overlap is the recommended approach for testing categorical moderators and serves as a stringent and reliable test (Hwang & Schmidt, 2011;Morris, 2023;Rudolph et al., 2020;Schmidt, 2017;Schmidt & Hunter, 2015). There were not enough studies considering meta-stereotyping in either of the relationships, so we were only able to compare stereotype threat and stigma consciousness. ...
Stereotype threat refers to the concern of being judged based on stereotypes about one’s social group. This preregistered meta-analysis examines the correlates of stereotype threat in the workplace ( k = 61 independent samples, N = 40,134). Results showed that stereotype threat was positively related to exhaustion, identity separation, negative affect, turnover intentions, and behavioral coping, and negatively related to career aspirations, job satisfaction, organizational commitment, job engagement, job performance, positive affect, self-efficacy, and work authenticity. In addition, moderator analyses for constructs represented in at least k = 10 samples in the focal analyses showed that relations did not differ for measures of stereotype threat and stigma consciousness. However, the negative relationships between stereotype threat and career aspirations, job satisfaction, and job engagement were stronger for older employees compared with female employees as the stereotyped group. Overall, the findings suggest that stereotype threat constitutes an important stressor in the workplace.
... 2. Also, undertake power calculations in advance to determine the likelihood that the study will find an effect of the expected size. Where possible, when relevant previous studies exist, use a bias-corrected pooled estimate [90] to help address likely exaggerated estimates. ...
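As one hedged illustration of such a power calculation (not taken from the source), the Fisher-z normal approximation gives the sample size needed to detect a correlation of a given magnitude:

```python
import math
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n needed to detect correlation r with a two-sided test,
    based on the Fisher-z normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

# Hypothetical target effect: roughly 194 participants for r = .20 at 80% power.
print(n_for_correlation(0.20))
```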
... The key point here is that this variability is very small; i.e., differences between the median results are often much larger than the variability introduced by a stochastic selection of the training set (but, of course, such visual impressions require a sanity check, which is the role of statistical tests). Meta-analysis is extremely difficult without the dispersion statistics needed to construct confidence intervals for the results [12,90]. ...
CONTEXT: There has been rapid growth in the use of data analytics to underpin evidence-based software engineering. However, the combination of complex techniques, diverse reporting standards and poorly understood underlying phenomena is causing some concern as to the reliability of studies. OBJECTIVE: Our goal is to provide guidance for producers and consumers of software analytics studies (computational experiments and correlation studies). METHOD: We propose using "bad smells", i.e., surface indications of deeper problems, a notion popular in the agile software community, and consider how they may be manifest in software analytics studies. RESULTS: We list 12 "bad smells" in software analytics papers (and show their impact by examples). CONCLUSIONS: We believe the metaphor of bad smells is a useful device. Therefore we encourage more debate on what contributes to the validity of software analytics studies (so we expect our list will mature over time).
... Elorinne et al., 2019). For these cases, a composite correlation coefficient was calculated based on the data provided by the authors (Hunter and Schmidt, 2004). Some articles (n=2) contained multiple studies (e.g. ...
... If only one study within a multi-study article was relevant, only the study information and results of the relevant study were included. If multiple studies within one multi-study article were relevant, composite correlation coefficients (Hunter and Schmidt, 2004) and the average sample size across the studies were calculated. ...
... In addition, an informative meta-analytic test of cultural differences in advice-taking might require a psychometric meta-analysis rather than the bare-bones meta-analysis reported by Bailey et al. (2023). The crucial difference is that psychometric meta-analysis considers, and corrects for, measurement error in between-study comparisons (Schmidt and Hunter, 2015). In the context of advice taking, this would result in more accurate estimates of moderating variables such as participants' culture. ...
Advice taking is a crucial part of decision-making and has attracted the interest of scholars across the world. Laboratory research on advice taking has revealed several robust phenomena, such as sensitivity to advice quality or a tendency to underutilize advice. Despite extensive investigations in different countries, cultural differences in advice taking remain understudied. Knowing whether such cultural differences exist would not only be interesting from an academic standpoint but might also have consequences for multinational organizations and businesses. Here, we argue that prior laboratory research on cultural differences in advice taking is hindered by confounding factors, particularly the confound between participants’ cultural background and task difficulty. To draw a valid conclusion about cultural differences in advice taking, it is vital to develop a decision task devoid of this confound. Here, we develop such a judgment task and demonstrate that the core phenomena of advice taking manifest in a sample of German participants. We then use this task in a cross-national comparison of German and Chinese participants. While the core phenomena of advice taking consistently manifested in both samples, some differences emerged. Most notably, Chinese participants were more receptive of advice, even though they still underutilized it. This greater reliance on advice was driven by Chinese participants’ greater preference for averaging their own and the advisor’s judgments. We discuss how our findings extend current understanding of the nuanced interplay between cultural values and the dynamics of advice taking.
... The meta-analysis was run in JASP (version 0.18.3). A random-effects model was used to estimate the effect sizes between compulsive exercise and perfectionism [56]. Wald-type confidence intervals were calculated [18]. ...
Purpose
There is a consistent link between perfectionism and compulsive exercise, and both are implicated in the maintenance of eating disorders, however no meta-analysis to date has quantified this relationship. We hypothesised that there would be significant, small-moderate pooled correlations between perfectionism dimensions and compulsive exercise.
Methods
Published, peer-reviewed articles with standardised measures of perfectionism and the Compulsive Exercise Test were included. Seven studies were included (N = 3117 participants, mean age = 21.78 years, 49% female).
Results
Total perfectionism (r = 0.37), perfectionistic strivings (r = 0.33), and perfectionistic concerns (r = 0.32) had significant pooled positive associations with compulsive exercise. Most studies (67%) were rated as fair or good quality as an indication of risk of bias. Limitations included the low number of available studies, the inclusion of only one clinical sample, and predominately cross-sectional studies which precluded causal inference.
Conclusion
Higher perfectionism was associated with higher compulsive exercise. More research is needed on compulsive exercise to determine the best intervention approach given its relationship to perfectionism and relevance in the context of eating disorders.
Level of evidence
Level I: Evidence obtained from a systematic review and meta-analysis.
... The effect sizes were weighted by inverse variances. Random-effects models were fitted using the Hunter and Schmidt method because this method corrects for variance due to artifacts such as sampling error, reliability of measurement and range restriction (Hunter & Schmidt, 2004). The I² statistic was used to test heterogeneity among the effect sizes, as recommended by Higgins and Thompson (2002). ...
Online comments have become an essential component of online media consumption. A meta-analysis was conducted to understand how online comment valence affects message perception, issue-relevant beliefs and attitudes, issue-relevant behaviors and behavioral intentions, communication behaviors and intentions, and emotions. Comment valence is defined as the distinction between positive comments, which align with, support, or favor the opinions expressed in the original message, and negative comments, which oppose, criticize, or disagree with the opinions expressed in the original message. After a comprehensive search and systematic screening and coding of existing studies, we identified 44 studies that are eligible to be included in the meta-analysis. We found that positive (vs. negative) comments led to significantly more positive evaluations of original messages (r = .22), stronger beliefs and attitudes that align with the positive comments (r = .29), higher likelihood to engage in behaviors that align with the positive comments (r = .09), higher likelihood to express opinions that align with the positive comments (r = .26), and more positive emotions (r = .16). Moreover, the number of comments, whether comment valence was mixed or not, and whether the original message was news or non-news moderated the effects of online comment valence on several outcomes. The findings suggest integrating these outcomes and moderators to develop a media effect theory and guide media practices in light of comment valence effects.
... The amount of variation not attributable to sampling error (the percentage of heterogeneity within or between studies) was quantified using the var.comp function ('dmetar' package), with the I² statistic computing the part of the variance attributable to each level of the three-level meta-analysis. Based on the Schmidt and Hunter (1990) rule, heterogeneity can be regarded as substantial if less than 75% of the total amount of variance can be attributed to sampling variance (at level 1). Therefore, when this rule was satisfied, the moderator analysis was performed. ...
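A hedged sketch of the 75% rule referred to here (bare-bones variance decomposition, hypothetical values, not the authors' code):

```python
import numpy as np

def percent_sampling_error_variance(rs, ns):
    """Share of the observed variance in correlations attributable to sampling error."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    r_bar = np.sum(ns * rs) / np.sum(ns)                     # weighted mean r
    var_obs = np.sum(ns * (rs - r_bar) ** 2) / np.sum(ns)    # observed variance
    var_err = (1 - r_bar ** 2) ** 2 / (ns.mean() - 1)        # sampling error variance
    return 100 * var_err / var_obs

# Hypothetical correlations and sample sizes.
pct = percent_sampling_error_variance([0.15, 0.30, 0.45, 0.25], [90, 200, 150, 120])
print(round(pct, 1), "% due to sampling error; below 75% suggests real heterogeneity")
```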
The enhancement of invertebrate generalist predator populations through habitat management is a promising way to control pest populations and could contribute to pesticide use reduction in arable agriculture. The majority of studies on invertebrate ground-dwelling predators focus on the activity-density of adults during their period of activity and provide limited insight into their overwintering ecology. Semi-natural habitats (SNH) are frequently considered key winter refuges, but their contribution is often not compared with the contribution of adjacent arable crops. We performed a meta-analysis to investigate whether SNH are key overwintering sites relative to adjacent crops for two abundant and widespread generalist predator groups in agroecosystems: carabid beetles and spiders. We identified a corpus limited to 19 studies and 114 comparisons between SNH (linear or patch) and arable crops (autumn-sown and spring-sown crops) that monitored predators with traps avoiding predator movement during their overwintering. Our analysis revealed that SNH sheltered significantly higher densities of overwintering spiders than adjacent crops. Concerning carabid populations, densities of overwintering carabids were influenced by the shape of the SNH, with higher overwintering densities in linear elements (grass strips, flower strips, hedges) than in arable crops. In addition, carabid overwintering density and diversity were higher in SNH when the adjacent crop was a spring-sown crop, indicating a higher sensitivity to agricultural disturbances or low trophic resources. These findings highlight the predator- and agricultural-context-dependent role of semi-natural habitats as overwintering refuges and underline the increased consideration that should be granted to autumn-sown crops as suitable overwintering habitat.
... A psychometric meta-analysis attempts to estimate the effect sizes of different studies while removing any bias caused by low reliability of measures or small participant samples (Schmidt, 2010; Schmidt & Hunter, 2014; Vadillo et al., 2022; Wiernik & Dahlke, 2020). In this study, we conducted a psychometric meta-analysis of the correlations between masked priming and visibility in the five experiments included in Berkovitch and Dehaene (2019). ...
Research on unconscious processing has been a valuable source of evidence in psycholinguistics for shedding light on the cognitive architecture of language. The automaticity of syntactic processing, in particular, has long been debated. One strategy to establish this automaticity involves detecting significant syntactic priming effects in tasks that limit conscious awareness of the stimuli. Criteria for assessing unconscious priming include the visibility (d’) of masked words not differing significantly from zero and no positive correlation between visibility and priming. However, such outcomes could also arise for strictly methodological reasons, such as low statistical power in visibility tests or low reliability of dependent measures. In this study, we aimed to address these potential limitations. Through meta-analysis and Bayesian re-analysis, we find evidence of low statistical power and of participants having above-chance awareness of ‘subliminal’ words. Moreover, we conducted reliability analyses on a dataset from Berkovitch and Dehaene (2019), finding that low reliability in both syntactic priming and visibility tasks may better explain the absence of a significant correlation. Overall, these findings cast doubt on the validity of previous conclusions regarding the automaticity of syntactic processing based on masked priming effects. The results underscore the importance of revisiting the methods employed when exploring unconscious processing in future psycholinguistic research.
... As a result, it is unreasonable to assume there was a single true effect size, and a random effects model was used in all relevant analyses. A Hunter-Schmidt correction was used for correlation coefficients (Schmidt & Hunter, 2015). This was selected primarily because the observed correlations were already normally distributed and because it allows the data to be kept, analyzed, and reported on in their original scale. ...
In higher education, motivational factors are considered one of "the strongest predictors of academic performance" (Honike et al., 2020, p. 1). A meta-analysis of face-to-face (f2f) courses (Richardson et al., 2012) supports these claims, finding a strong correlation between performance self-efficacy and academic performance (r = 0.59), as well as accounting for 14% of the variation in academic performance using locus of control, performance self-efficacy, and grade goal as predictors. These f2f results are compelling enough that self-efficacy is often used synonymously with online learning in primary research. However, the results of prior f2f meta-analytic reviews have yet to be extended to online and blended learning contexts. We explored student motivation, specifically subscales for attributional style, self-efficacy, achievement goal orientation, self-determination, and task value, in relation to student academic performance. Informed by 94 outcomes from 52 studies, our results diverge from f2f findings. The highest correlation was for mastery avoidance goals (r = 0.22); academic self-efficacy (r = 0.19) was substantially lower than in f2f findings (r = 0.31; r = 0.59) in Richardson et al. (2012). A parsimonious model of students' average academic performance (i.e., delivery mode, learning self-efficacy, and mastery approach goals) failed to identify statistically significant predictors. These results call into question the assumption that student motivation is a strong predictor of academic performance in online and blended courses. The lack of strong relationships and the lack of predictive power hold clear implications for researchers, practitioners, and policymakers who assume these relationships are stronger.
... Disproportionate stratified sampling involves selecting an equal number of participants from each stratum, regardless of the representation rates of the strata in the population (Schmidt & Hunter, 2014). The disproportionate stratified sampling method is preferred to ensure that each stratum is represented in the study with a significant size (Morgan & Morgan, 2008). ...
This study aimed to examine mathematical bullying victimization among middle school students and its relationship with socio-demographic variables using the Mathematical Bullying Victimization Scale (MBV-S). The sample consisted of 493 middle school students selected using a disproportionate stratified sampling method from five different schools in Konya, Türkiye, during the spring semester of the 2021-2022 academic year. Data were collected using the relational screening model and analyzed with nonparametric statistical methods. The results showed that female students were more frequently subjected to mathematical bullying compared to males. No significant differences were found in MBV-S scores based on grade levels. However, the analysis of school types revealed that students in girls-only religious middle schools experienced higher levels of mathematical bullying than students in other types of schools. Additionally, students with below-average mathematics achievement were more likely to be victims of mathematical bullying, whereas students with above-average achievement reported lower levels of victimization. The findings highlight the importance of developing strategies to address mathematical bullying that consider students' mathematical abilities and social dynamics. Supportive and inclusive environments should be prioritized, particularly for students with low achievement and those in specific school settings.
... Students asked potential participants whether they would participate in a predictive online study on organizational behavior and job success and if they were willing to invite coworkers to provide assessments of work-related behaviors. To reduce range restriction (Schmidt & Hunter, 2015), we aimed to attract a broad range of participants by offering three different types of incentives: having the researchers donate 1€ to a charitable organization, entry into a prize drawing for gift vouchers, and individual feedback on participants' emotion recognition ability. All participants received an e-mail containing information on the study and an individualized link to access the online survey. ...
Earning a living is the manifest function of vocational careers, with strong implications for psychological well-being. We hypothesize that some forms of vocational behaviors, namely interpersonal facilitation (people pleasing), i.e., helping, cooperation, and encouraging others at work, predict pay decreases, while other forms, namely internal networking, i.e., building, maintaining, and using contacts within one’s organization, predict pay increases. In a multisource predictive career analysis with 210 target-coworker data sets and a time interval of more than a year, we find that women, mediated by their higher trait emotionality, tend to provide more interpersonal facilitation behaviors to coworkers. This contributes to individual pay decreases. Engaging in internal networking, however, can counterbalance this disadvantage and contribute to pay increases over time. We discuss four implications: improving salary negotiation skills, improving self-presentation skills, sociotechnical job redesign, and implementing new organizational remuneration schemes.
... For a description and comparison of these estimators, see Jiang and Kopp-Schneider [14]. In this study, the Hunter-Schmidt estimator [20,21] is used. ...
... We used the benchmarks of Bosco et al. (2015) to determine the practical significance of effect sizes. To assess heterogeneity in effect sizes, we report the percentage of variance attributable to sampling error and the I² index using the Hunter and Schmidt (2004) estimator, which indicates the proportion of observed variance due to real differences between effect sizes rather than chance (Higgins et al., 2003). For all relationships of (primary and secondary) psychopathy with workplace-related behavior, the variance attributable to sampling error was less than 75% and I² was greater than 25%, indicating likely moderation (Higgins et al., 2003). ...
Despite the large and growing number of studies on psychopathy in the workplace, the field lacks a comprehensive understanding of the link between psychopathy and core workplace-related behaviors. Basing assumptions on social exchange theory, the purpose of this meta-analytic review (k = 166; N = 49,350) is (a) to test the relationship of psychopathy with task performance, organizational citizenship behavior, and counterproductive work behavior, (b) to differentiate the relationships of primary versus secondary psychopathy with these behaviors, and (c) to test for relevant moderating influences by actor- and target-/exchange-partner factors. In contrast to earlier significant but weak meta-analytic findings (O’Boyle et al., 2012), both meta-analytic overall effects and meta-analytic structural equation modeling suggest that psychopathy substantially reduces task performance and organizational citizenship behavior and enhances counterproductive work behavior. Compared to primary psychopathy, effects were mostly more pronounced for secondary psychopathy. Besides methodological factors, moderator analyses revealed relationships to vary by actor (age, organizational tenure, hierarchical level) but not by target. Together, these findings point toward new and relevant directions for future research on the effects of psychopathy in the workplace.
... First, we suggest evaluating other control variables that could affect the subsidiary's performance due to SFDI. Several academic studies highlight the importance of moderating variables (Becker, 2005; Hunter & Schmidt, 2004; Spector & Brannick, 2011). Therefore, this paper considers the country's attractiveness (market size) and its interaction with the learning variables. ...
... A random-effects model and the Hedges' g effect size estimate were applied (Schmidt & Hunter, 2014), with the correction for correlated samples (Tanner-Smith et al., 2016). We selected the Hedges' g effect size estimate because it includes a correction factor for small samples. ...
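The small-sample correction in Hedges' g is usually the factor J = 1 - 3/(4*df - 1) applied to Cohen's d. A minimal sketch, assuming two independent groups with hypothetical values:

```python
def hedges_g(d: float, n1: int, n2: int) -> float:
    """Apply the small-sample correction factor J to Cohen's d to obtain Hedges' g."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Hypothetical example: d = 0.50 with two groups of 12 gives g of about 0.483.
print(round(hedges_g(0.50, 12, 12), 3))
```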
... When compared to typical literature reviews, meta-analysis provides an efficient way to combine study data (Hunter & Schmidt, 2004), shifting the emphasis away from individual studies and toward a comprehensive investigation of a particular area of study. However, it is important to highlight that the quality and reporting of the original investigations are beyond the control of the meta-analyst. ...
A sense of school belonging has been receiving an increasing amount of attention in the educational community due to its numerous developmental and educational benefits. Nevertheless, research on the sense of school belonging has been scattered and has lacked clarity because of terminological ambiguity. Therefore, it is necessary to conduct a meta-analysis to resolve the ambiguity surrounding the term “sense of school belonging” and to determine the relationship between school belonging and academic achievement. The aim of this study is to determine the effect of the sense of school belonging on academic achievement and to explore the factors that may moderate this relationship. A pool of 6891 studies was created, including titles containing the terms “sense of school belonging” and “academic achievement”. Twenty-two studies that met the inclusion criteria were selected for the analysis. The findings reveal a statistically significant yet small effect of the sense of school belonging on academic achievement. The effect of the sense of school belonging on academic achievement varied across publication years. Moreover, the study's findings showed no statistically significant difference among the various measurement tools used. These findings highlight the need for further investigation of the relationships between these variables by researchers employing diverse measurement tools.
... where e^x = exp(x) denotes the exponential function. However, both Hunter et al. (1996) and Hunter & Schmidt (2004) caution against the indiscriminate use of Fisher's z-transformation. They argue that while the transformation mitigates a negative bias in the untransformed correlation coefficients, it introduces a positive bias in the transformed values. ...
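For reference, Fisher's z-transformation and its inverse are z = arctanh(r) = 0.5*ln((1 + r)/(1 - r)) and r = tanh(z), with an approximate standard error of 1/sqrt(n - 3). A minimal sketch with hypothetical values:

```python
import numpy as np

r = np.array([0.10, 0.30, 0.50])   # hypothetical correlations
z = np.arctanh(r)                  # Fisher's z-transformation
r_back = np.tanh(z)                # back-transformation to the correlation metric
se_z = 1 / np.sqrt(50 - 3)         # standard error of z for a sample of n = 50
print(z, r_back, se_z)
```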
Meta-analysis is the use of statistical methods to combine the results of individual studies to estimate the overall effect size for a specific outcome of interest. The direction and magnitude of this estimated effect, along with its confidence interval, provide insights into the phenomenon or relationship being investigated. As an extension of the standard meta-analysis, meta-regression analysis incorporates multiple moderators representing identifiable study characteristics into the meta-analysis model, thereby explaining some of the heterogeneity in true effect sizes across studies. This form of meta-analysis is especially designed to quantitatively synthesize empirical evidence in economics. This study provides an overview of the meta-analytic procedures tailored for economic research. By addressing key challenges, including between-study heterogeneity, publication bias, and effect size dependence, it aims to equip researchers with the tools and insights needed to conduct rigorous and informative meta-analytic studies in economics and related disciplines.
... where se(d_IG) is the standard error of the d-statistic in Eq. (2) and is calculated by [33,35,36]: (1) ...
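The equation itself is elided here. A commonly used large-sample expression for the standard error of d with two independent groups, offered as an assumption rather than the paper's Eq. (1), is sqrt((n1 + n2)/(n1*n2) + d^2/(2*(n1 + n2))):

```python
import math

def se_d_independent_groups(d: float, n1: int, n2: int) -> float:
    """Large-sample standard error of Cohen's d for two independent groups."""
    return math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

# Hypothetical example: d = 0.4 with groups of 30 and 30 gives an SE of about 0.261.
print(round(se_d_independent_groups(0.4, 30, 30), 3))
```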
As a newly emerging evolutionary framework, evolutionary multitasking aims to optimize multiple tasks simultaneously. Knowledge transfer is an important component of evolutionary multitasking. How knowledge is extracted and transferred significantly affects the performance of the algorithm. A serious challenge for evolutionary multitasking is inappropriate knowledge transfer or insufficient exploration and exploitation. To address this challenge, an evolutionary multitasking algorithm with an adaptive tradeoff selection strategy (EMT-ATS) is proposed. To enhance global exploration and local exploitation during evolution, an adaptive tradeoff selection mechanism is developed to select promising offspring during different stages and guide the population toward more promising solution regions. In addition, a Cohen's d indicator-based strategy is used to adjust knowledge transfer. To verify the effectiveness of the proposed EMT-ATS, a series of experiments is conducted with several popular evolutionary multitasking algorithms on multitasking benchmark problems. In addition, a multitask optimization problem involving two real-world problems is used to validate the practicability of the proposed EMT-ATS. Experimental results demonstrate the effectiveness of the proposed EMT-ATS.
... method of Hunter and Schmidt (2004) was adopted, which involves the sequential steps of extraction, transformation, and standardization. ...
The heightened awareness of environmental and social concerns has prompted businesses, including financial institutions (hereafter FIs), to incorporate non-financial factors into their operations for long-term sustainability and value creation. Despite FIs' crucial role in resource allocation and national stability, there remains a notable gap in comprehensive research on the significance of sustainable practices for driving financial performance (hereafter FP) within these institutions. This study addresses this gap through a systematic review of 533 articles published between 1983 and 2024, employing bibliometric analysis to map key contributors, themes, and future research directions. Additionally, a meta-analysis of 40 articles assesses the relationship between sustainable practices and banks' FP. The results show a positive yet weak association between the two, suggesting that banks have not fully embraced sustainable principles in their operations. Social and governance practices are positively correlated with FP, while environmental factors exhibit a negative relationship. This may be due to the investment required, the time it takes for such investments to pay off, and other factors affecting the relationship, such as innovation, the development of sustainable products, or adherence to sustainable processes for screening investment choices. The study concludes by discussing implications and offering suggestions for further research.
... At the 5% significance level, the Q-test provides sufficient evidence to reject the null hypothesis of a consistent effect size across all 136 sampled studies (see Table 3). This result indicates statistically significant variability between the primary studies, which could influence the meta-analysis findings [39]. Additionally, our sample's high heterogeneity is confirmed by an I² value of 91.64%, which exceeds the threshold of 25% [38]. ...
... In this paper, the author used a method different from that of Arlow and Gannon (1982): Albertini (2013) utilised meta-analysis to investigate the relationship. Meta-analysis is a quantitative statistical technique that can be used to identify and quantify associations across the published literature (Hunter and Schmidt, 2004). After applying the selection criteria, the author collected and examined 52 studies relevant to this topic covering the period from 1975 to 2011. ...
The increase of eco-tourism in Greece can be attributed to the growing consciousness among tourists regarding the environmental consequences of their journeys. Consequently, the aim of this research is to investigate guests’ perceptions of eco-tourism in Greece and identify the strategies employed by hotels to attract eco-conscious travelers. It is crucial to recognize the factors that motivate tourists to opt for eco-tourism and to comprehend the mindset of eco-tourists, as this knowledge would facilitate the development of effective marketing strategies for hotels. The study also examines the distinctive characteristics of eco-tourists and their willingness to pay for staying in eco-friendly hotels. The data were collected between May 2023 and July 2023 using a structured questionnaire. The research results show that a significant percentage of guests choose eco-friendly hotels when they travel. Moreover, based on the respondents’ views, many hotels in Greece are adopting practices to preserve the environment. Based on the research results, it is proposed that environmentally friendly hotels should mainly target young people. Moreover, the need to educate people on environmental sustainability and the concept of ecotourism becomes evident.
... different aspects when designing and implementing digital environments in education. The basic idea of a meta-analytic synthesis is to find all studies on a specific research question (such as: What is the effect of learning with animations compared to learning from textbooks?), document the effects that were found in each of these studies, and aggregate effect sizes across studies (with studies with larger samples receiving a higher weight than studies with smaller samples; Schmidt & Hunter, 2015). The benefits of this approach are clear: First, statements on the effects of certain kinds of digital technologies are based on a much larger sample size than can ever be achieved in one primary study. ...
... Publication bias was assessed by visual inspection of funnel plots [50]. In the absence of publication bias, the plot should resemble a symmetrical inverted funnel. ...
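As a hedged illustration (not the authors' code), a basic funnel plot places each study's effect size on the x-axis and its standard error on an inverted y-axis, so larger, more precise studies cluster at the top around the pooled estimate:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical effect sizes (Fisher z) and their standard errors from six studies.
effects = np.array([0.12, 0.25, 0.31, 0.08, 0.40, 0.22])
ses = np.array([0.15, 0.10, 0.08, 0.20, 0.06, 0.12])

plt.scatter(effects, ses)
plt.gca().invert_yaxis()   # smaller standard errors (larger studies) at the top
plt.axvline(np.average(effects, weights=1 / ses**2), linestyle="--")
plt.xlabel("Effect size (Fisher z)")
plt.ylabel("Standard error")
plt.title("Funnel plot: asymmetry may indicate publication bias")
plt.show()
```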
Background
Breast thermography originated in the 1950s but was later abandoned due to the contradictory results obtained in the following decades. However, advances in infrared technology and image processing algorithms in the twenty-first century led to a renewed interest in thermography. This work aims to provide an updated and objective picture of the recent scientific evidence on its effectiveness, both as a screening and as a diagnostic tool.
Methods
We searched for clinical studies published between 2001 and May 31, 2023, in the databases PubMed and Scopus, that aimed to evaluate the effectiveness of digital, long-wave infrared imaging for detecting breast cancer. Additional documents were retrieved from the studies included in the systematic reviews that resulted from the search and by searching for the names of commercial systems. We limited our selection to studies that reported the sensitivity and specificity of breast thermography (or the data needed to calculate them) using images collected by themselves, with at least five breast cancer cases. Studies that considered breast diseases other than cancer to be positive or that did not use standard tests to set the ground truth diagnosis were excluded, as well as articles written in a language other than English and documents we could not access. We also conducted meta-analyses of proportions of the sensitivity and specificity values reported in the selected studies and a bivariate meta-analysis to account for the correlation between these metrics.
Results
Our systematic search resulted in 22 studies, with average pooled sensitivity and specificity of 88.5% and 71.8%, respectively. However, the differences in patient recruitment, sample size, imaging protocol, equipment, and interpretation criteria yielded high heterogeneity measures (79.3% and 99.1%, respectively).
Conclusions
Overall, thermography showed a high sensitivity in the selected studies, whereas specificity started off lower and increased over time. The most recent studies reported a combination of sensitivity and specificity comparable to standard diagnostic tests. Most of the selected studies were small and tend to include only patients with a suspicious mass that requires biopsy. However, larger studies with a wider variety of patient types (asymptomatic, women with dense breasts, etc.) have been published in the latest years.
... Given that object-level effect size estimates are underpowered, it may appear compelling that meta-analytical procedures (Hunter & Schmidt, 2004; Stanley et al., 2018) could aggregate a large number of (topically related) studies to produce better effect size estimates (Fletcher, 2022). However, the quality of a meta-analytical effect size estimate primarily depends on the quality of the underlying object-level estimates. ...
An often-cited convention for discovery-oriented behavioral science research states that the general relative seriousness of the antecedently accepted false positive error rate of α = 0.05 be mirrored by a false negative error rate of β = 0.20. In 1965, Jacob Cohen proposed this convention to decrease a β-error rate typically in vast excess of 0.20. Thereby, we argue, Cohen (unintentionally) contributed to the wide acceptance of strongly uneven error rates in behavioral science. Although Cohen’s convention can appear epistemically reasonable for an individual researcher, the comparatively low probability that published effect size estimates are replicable renders his convention unreasonable for an entire scientific field. Appreciating Cohen’s convention helps to understand why even error rates (α = β) are “non-conventional” in behavioral science today, and why Cohen’s explanatory reason for β = 0.20 (that resource restrictions keep researchers from collecting larger samples) can easily be mistaken for the justificatory reason it is not.
... After screening the literature against the researchers' inclusion criteria, 36 independent studies drawn from 24 primary studies were identified. Hunter and Schmidt (2004) explain that a meta-analysis should comprise at least ten primary studies. Table 1 summarizes the research included in this meta-analysis, listing the authors, year, sample size (N), effect size (r), education level, and publication. ...
This study aims to conduct a statistical evaluation of the correlation between character education and Islamic Religious Education (PAI) learning outcomes among students in Indonesia, addressing the variations in findings from previous research. A meta-correlation analysis was employed, analyzing data from 36 independent studies derived from 24 primary sources published between 2019 and 2023. The data was processed using JASP software (version 0.16.4). The analysis yielded a combined effect size of 0.30 (p < 0.001) based on a random effects model, indicating a small but statistically significant effect. These results suggest that improvements in character education are positively associated with academic achievement in Islamic Religious Education. The findings underscore the importance of integrating character education into the curriculum to enhance students' learning outcomes in PAI.
Pioneering research suggested that sexual afterglow (lingering sexual satisfaction following an act of sex) lasts 2 but not 3 days and predicts subsequent relationship satisfaction. Nevertheless, recent research highlights the importance of considering the differential impacts of sexual acceptance and rejection. We used 2-week, daily-diary data from 576 participants to demonstrate that sexual afterglow lasted at least 1 day on average, particularly following partner-initiated and mutually initiated sex, and did not depend on individual differences in the importance of sex or sexual rejection, although the negative aftereffects of sexual rejection lasted 3 days. Furthermore, lingering sexual (dis)satisfaction often predicted subsequent relationship satisfaction. Mini-meta-analyses combining the current data with all published data suggest that sexual afterglow lasts at least 1 day and predicts relationship quality, whereas sexual rejection does not reliably produce aftereffects. The conclusions direct future research toward other factors that may contribute to differences in sexual afterglow and reactions to other discrete events.
Cronbach’s α is the most widely reported metric of the reliability of psychological measures. Decisions about an observed α’s adequacy are often made using rule-of-thumb thresholds, such as α of at least .70. Such thresholds can put pressure on researchers to make their measures meet these criteria, similar to the pressure to meet the significance threshold with p values. We examined whether α values reported in the psychology literature are inflated at the rule-of-thumb thresholds (αs = .70, .80, .90) because of, for example, overfitting to in-sample data (α-hacking) or publication bias. We extracted reported α values from three very large data sets covering the general psychology literature (> 30,000 α values taken from > 74,000 published articles in American Psychological Association [APA] journals), the industrial and organizational (I/O) psychology literature (> 89,000 α values taken from > 14,000 published articles in I/O journals), and the APA’s PsycTests database, which aims to cover all psychological measures published since 1894 (> 67,000 α values taken from > 60,000 measures). The distributions of these values show robust evidence of excesses at the α = .70 rule-of-thumb threshold that cannot be explained by justifiable measurement practices. We discuss the scope, causes, and consequences of α-hacking and how increased transparency, preregistration of measurement strategy, and standardized protocols could mitigate this problem.
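As a reminder of what the reported α values measure, the following is a small, self-contained sketch that computes Cronbach's α from a simulated items-by-respondents matrix using the standard formula α = k / (k - 1) · (1 - Σ item variances / variance of the total score); the simulated data are purely illustrative.

```python
# Sketch: Cronbach's alpha from an items-by-respondents matrix, using
# alpha = k / (k - 1) * (1 - sum of item variances / variance of total score).
# The item responses below are simulated (a shared factor plus noise).
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 300, 8
factor = rng.normal(size=(n_respondents, 1))
items = factor + rng.normal(scale=1.0, size=(n_respondents, n_items))

def cronbach_alpha(x):
    """x: 2-D array with rows = respondents and columns = items."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```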
Performance information (PI) has received significant attention in public administration research. However, evaluating the impact of public sector PI on stakeholders is challenging due to varying empirical results. Drawing on information propagation theory, as well as social and cognitive psychology, we conduct a meta‐analysis to examine the effect of public sector PI. Using 461 effect sizes from 75 studies, the meta‐analysis reveals PI's positive effects on stakeholder attitudes, behaviors, and perceptions of performance. Moreover, the effects tend to be stronger when PI is sent by third parties, received by citizens, delivered with positive valence, presented in absolute forms, and disseminated in law enforcement administrative subfields and in societies characterized by low power distance. The findings reinforce the significance of public sector PI and illuminate the complex interplay between it and stakeholder responses.
The literature on the antecedents and consequences of knowledge hiding remains fragmented, limiting its practical applications. Social exchange theory (SET), one of the most widely adopted sociological frameworks, offers unique insights into the dynamics of knowledge hiding. This study synthesizes the application of SET in analyzing the nomological framework of knowledge hiding through a systematic literature review and meta-analysis. A meta-analysis was conducted based on the random-effects model and the meta-analytic structural equation modeling method, incorporating 66 primary studies with a total of 20,603 participants. Additionally, we examined the mediating role of knowledge hiding by linking key antecedents and consequences. Moreover, an exploratory analysis was conducted to investigate the moderating effects of national culture and research methodology, providing evidence that accounts for the true heterogeneity in the pairwise relationships between knowledge hiding and its antecedents. The results generally support most pairwise relationships between knowledge hiding and its correlates, which were theoretically derived from SET. This study is the first attempt to explore the explanatory power of SET in analyzing the knowledge-hiding phenomenon and to examine whether establishing a knowledge exchange loop contributes to a deeper understanding of this dyadic construct.
Meta-analysis is a powerful statistical technique used to synthesize findings from multiple studies, offering a comprehensive understanding of specific research questions. This study explores the application of meta-analysis to assess correlational data in diabetic patients, focusing on relationships between key variables such as glycemic control and demographic factors. The primary objective is to consolidate fragmented research findings and provide a unified framework to inform clinical practices and policy decisions. A systematic literature review was conducted across major databases to identify studies reporting correlational data in diabetic populations. Relevant data were extracted, coded, and analyzed using advanced meta-analytic techniques. Heterogeneity among studies was addressed using a random-effects model, and publication bias was evaluated using funnel plots and Egger's test. Results reveal consistent and statistically significant correlations involving poor glycemic control. The findings highlight critical areas requiring targeted intervention. The study concludes that meta-analysis provides robust insights into complex relationships within diabetic populations, enhancing evidence-based decision-making. It recommends the adoption of standardized reporting protocols and further research into less-explored psychosocial and environmental determinants of diabetes outcomes.
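Since the abstract mentions Egger's test without detail, here is a minimal sketch of the regression version of the test, in which standardized effects are regressed on precision and a non-zero intercept signals funnel-plot asymmetry; the effect estimates and standard errors are hypothetical, not from the study.

```python
# Sketch: Egger's regression test for funnel-plot asymmetry. The
# standardized effect (estimate / SE) is regressed on precision (1 / SE);
# an intercept far from zero suggests small-study effects. The effect
# estimates and standard errors below are hypothetical.
import numpy as np
import statsmodels.api as sm

effect = np.array([0.42, 0.35, 0.50, 0.22, 0.61, 0.30, 0.48])
se = np.array([0.10, 0.08, 0.15, 0.05, 0.20, 0.07, 0.12])

standardized = effect / se
precision = 1 / se
fit = sm.OLS(standardized, sm.add_constant(precision)).fit()

print(f"Egger intercept = {fit.params[0]:.3f}, p = {fit.pvalues[0]:.3f}")
```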
Students' acceptance of various technologies in their learning process has long been of interest to academics and practitioners. Amongst other models, the Technology Acceptance Model (TAM) has predominated in this area. Nevertheless, no single empirical study has analyzed and compiled the results of all these TAM-based studies. As a result, we perform a comprehensive meta-analysis and apply Meta-Analytic Structural Equation Modelling (MASEM) to a conceptual framework, assessing 2,462 effect sizes and a total sample size of 158,096 drawn from 299 primary studies. Our findings reveal that enjoyment is the most prominent antecedent in the TAM and plays a critical role in students’ technology acceptance behavior. Further, several methodological, cultural, and contextual moderators were confirmed. We then address the implications of these findings for both research and practice.
Interaction is typically at the core of the value co-creation process through operant resource exchange in online collaborative innovation communities (OCICs). While some studies emphasize the facilitating effect of interaction on value co-creation, others have drawn the opposite conclusion, for example that more peer interaction leads to less idea generation. Thus, the purpose of this paper is to utilize the service ecosystem framework to clarify the overall relationship between interaction and value co-creation and to explore the moderating factors that may have contributed to the divergence and inconsistency of previous studies. We conducted a meta-analysis of 65 effect sizes obtained from 63 articles published between 2004 and 2023, with a cumulative sample size of 25,185, using a random-effects model. The results indicate that interaction has a significantly positive impact on user value co-creation within OCICs (r = 0.453, 95% CI [0.405, 0.499]), and the heterogeneity among studies was significant (Q = 1409.29, p < 0.001). The strength of this correlation was moderated by the type of interaction (human–computer or human–human), the type of OCIC (business-sponsored or socially constructed online communities), and the number of involved OCICs (one or multiple online communities), but not by cultural background. These findings support the service ecosystem perspective rather than resource scarcity theory by resolving the mixed findings regarding the relationship between interaction and user value co-creation. Furthermore, this study systematically examined the contingent factors across three levels, micro (types of actor interactions), meso (types and number of OCICs), and macro (cultural background), combining whole- and part-level insights and, for the first time, empirically adopting service ecosystems as the foundational paradigm and unit of analysis for value co-creation research. This research contributes to theoretical frameworks in service ecosystems and offers actionable insights for management practices in business and marketing.
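The categorical moderator results described above are typically obtained with subgroup analyses. The sketch below shows a simple fixed-effect version (pool within each subgroup, then compute a between-groups Q with one degree of freedom for two groups); the effect sizes, variances, and subgroup labels are hypothetical and are not taken from the study.

```python
# Sketch: a categorical moderator (subgroup) analysis of the kind used to
# compare, e.g., human-computer vs. human-human interaction studies.
# Fisher-z effect sizes, variances, and subgroup labels are hypothetical.
import numpy as np

z = np.array([0.45, 0.52, 0.38, 0.60, 0.30, 0.35, 0.48, 0.41])
v = np.array([0.010, 0.015, 0.008, 0.020, 0.012, 0.009, 0.011, 0.014])
group = np.array(["hc", "hc", "hc", "hc", "hh", "hh", "hh", "hh"])

def fixed_pool(effects, variances):
    """Inverse-variance pooled estimate and its variance."""
    w = 1 / variances
    return np.sum(w * effects) / np.sum(w), 1 / np.sum(w)

# Pool within each subgroup, then test whether subgroup means differ
# (Q_between, chi-square with number of subgroups - 1 degrees of freedom).
est, var = {}, {}
for g in np.unique(group):
    est[g], var[g] = fixed_pool(z[group == g], v[group == g])

overall, _ = fixed_pool(np.array(list(est.values())), np.array(list(var.values())))
q_between = sum((est[g] - overall) ** 2 / var[g] for g in est)

print({g: round(float(est[g]), 3) for g in est}, f"Q_between = {q_between:.2f}")
```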
This paper explores six widely used research methods in PhD studies: meta-analysis, Partial Least Squares Structural Equation Modeling (PLS-SEM) (survey data), empirical analysis (panel data analysis), bibliometric analysis, qualitative interviews and thematic content analysis, and mathematical and computational proof. Each method is systematically analyzed with respect to its theoretical foundation, variables, measurement constructs, research design, and application scenarios. By providing a detailed comparative analysis, this study aims to guide researchers in selecting the most appropriate methodology for their academic pursuits. Findings indicate that methodological choices significantly affect research outcomes and theoretical contributions.
Introduction
PhD research demands rigorous and reliable methodologies to address complex academic inquiries and contribute meaningfully to the existing body of knowledge. Researchers must choose methods that not only align with their research questions but also provide robust, replicable results. Over the years, various methods have gained prominence across diverse disciplines, each offering unique strengths, applications, and theoretical foundations. These methodologies range from statistical techniques, such as meta-analysis and Partial Least Squares Structural Equation Modeling (PLS-SEM), to qualitative approaches, such as interviews and thematic analysis. Each method provides distinct insights, from testing hypotheses in large datasets to exploring nuanced, contextualized experiences in small sample groups. Understanding the strengths and limitations of these methods is crucial for researchers seeking to ensure the validity and reliability of their findings.
This paper provides a detailed overview of six commonly employed research methods, elucidating their theoretical underpinnings, application contexts, and methodological strengths. The methods under discussion (meta-analysis, PLS-SEM, empirical analysis using panel data, bibliometric analysis, qualitative interviews with thematic content analysis, and mathematical and computational proof) are critical tools for generating meaningful research outputs in both the social and natural sciences. Each method offers specific advantages depending on the research goals, ranging from identifying generalizable patterns across large datasets to offering deep insights into complex, qualitative phenomena. By examining these methods in detail, this paper aims to equip researchers with the knowledge necessary for informed decision-making in their research designs. The objective is to guide them in selecting the methodological approach that best addresses their research questions while ensuring the rigor and credibility of their academic contributions.
The central research question driving this study is: RQ1. How do different research methodologies influence the reliability, validity, and overall outcomes of PhD-level research across various academic disciplines? This inquiry is critical because researchers are often faced with a diverse range of methods that vary in complexity, application, and appropriateness to specific research contexts. Given the extensive range of research methods available, each with its own theoretical assumptions and methodological strengths, understanding which methods provide the most reliable, replicable results is of paramount importance.
Furthermore, as PhD research aims to advance knowledge within specific fields, it is essential for researchers to employ methods that not only yield robust findings but also contribute meaningfully to the theoretical development of the discipline. ...
After decades of scholarly focus on studying trust from the trustor's perspective, there has been a rapidly growing interest in understanding trust from the trustee's perspective, with a particular focus on felt trust (i.e., a trustee's perception of being trusted by a trustor). The fundamental assumption underlying this trustee‐centric perspective is that it complements the dominant trustor‐centric perspective and enables a more comprehensive understanding of how trust manifests and operates in the workplace. Unfortunately, our critical review of 121 felt trust studies reported in 87 manuscripts reveals major problems in multiple areas (conceptualization, measurement, theorizing, and research methods) that limit this field's ability to achieve this potential. To remedy this, we build on existing frameworks, best practices, and exemplars from the (felt) trust and meta‐perceptions literature to outline a constructive redirection of the field. We subsequently empirically test the field's fundamental assumption by meta‐analytically exploring the distinctiveness and incremental validity of felt trust beyond other trust concepts. Taken together, our envisioned redirection and meta‐analytic findings enable the field of felt trust to live up to its promise and enrich our understanding of organizational trust.
From the publisher:
This fully updated fourth edition of Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data. The guiding philosophy is to provide a strong conceptual foundation so that readers can generalize to new situations they encounter in their research, including new developments in data analysis.
Key features include:
Emphasis on basic concepts such as sampling distributions, design efficiency, and expected mean squares, relating the research designs and data analyses to the statistical models that underlie the analyses.
Detailed instructions on performing analyses using both R and SPSS.
Pedagogical exercises mapped to key topic areas to support students as they review their understanding and strive to reach their higher learning goals.
Incorporating the analyses of both experimental and observational data, and with coverage that is broad and deep enough to serve a two-semester sequence, this textbook is suitable for researchers, graduate students and advanced undergraduates in psychology, education, and other behavioral, social, and health sciences.
The book is supported by a robust set of digital resources, including data files and exercises from the book in Excel format for easy import into R or SPSS; R scripts for running the example analyses and generating figures; and a solutions manual.
Examination copies can be requested at https://www.routledge.com/Research-Design-and-Statistical-Analysis/Rotello-Myers-Well-LorchJr/p/book/9781032897288