Article

Statistical Methods for Research Workers

Authors: Ronald A. Fisher
... (2.3) and $\nabla_P \otimes \nabla_P\,\gamma = 0$. The MLE uncertainties are [6] $\sigma_{P_i} = -H\big|^{-1}$ ...
... The approximation behind the propagation assumes that the relative uncertainties are small. Therefore, in small samples where the relative uncertainties are large, the propagation approximation is invalid and the formula is erroneous [6,11]. This influences the minimal possible bin width. ...
... According to Wilks' theorem for log-likelihood-ratio statistics, for a large number of events, 2Λ has a χ² distribution with ν degrees of freedom, where ν is the difference between the numbers of parameters in the compared models [6,11]. In this case, one 'model', the experimental data, has 3 parameters ($r_{i,1}$), and the theoretical model (the 'null model') has no parameters (since $r_i = 1$), so ν = 3. ...
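For concreteness, here is a minimal sketch (in Python, with an invented value of 2Λ) of how Wilks' theorem turns a log-likelihood-ratio statistic into a p-value, assuming 2Λ follows a χ² distribution with ν = 3 degrees of freedom as in the snippet above:

```python
# Minimal sketch (not from the cited paper): converting a log-likelihood-ratio
# statistic into a p-value via Wilks' theorem, assuming 2*Lambda ~ chi2(nu)
# with nu = 3 as in the snippet above. The value of two_lambda is invented.
from scipy.stats import chi2

nu = 3            # difference in number of free parameters between the models
two_lambda = 7.8  # hypothetical observed value of 2*Lambda

p_value = chi2.sf(two_lambda, df=nu)  # right-tail probability under the null
print(f"2*Lambda = {two_lambda}, nu = {nu}, p = {p_value:.3f}")
```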
Preprint
We present a novel analysis method for measurements of polarization transferred in $A(\vec{e},e'\vec{N})$ experiments, which can be applied to other kinds of polarization measurements as well. In this method the polarization transfer components are presented in spherical coordinates using an efficient likelihood numerical maximization based on an analytic derivation. We also propose a formalism that accounts for multi-parameter models, and which yields a smooth and continuous representation of the data (rather than using standard binning). Applying this method on simulated data generates results with reduced statistical and systematic uncertainties and enables revealing physical information that is lost in standard binning of the data. The obtained results can be compared easily to theoretical models and other measurements. Furthermore, CPU time is significantly reduced using this method.
... Already Fisher (1925), who proposed p = 0.05 as a conventional rejection criterion for H₀, had offered no more than a convenience justification for its specific value (Hubbard, 2015; Kennedy-Shaffer, 2019): ...
... So, "we shall not often be astray if we draw a conventional line at .05, and consider that higher values of χ² indicate a real [rather than a mistaken] discrepancy" (Fisher, 1925, p. 79; italics added). As a threshold for rejecting H₀, then, already Fisher's (1925) statistical inference system (on which NHST is based) only offers a conventional justification for p = 0.05. Similarly, when Edgeworth coined the term 'statistical significance', in 1885, he merely wanted "a tool to indicate when a result warrants further scrutiny; [but] statistical significance was never meant to imply scientific importance" (Di Leo et al., 2020, p. 2; italics added; see Kennedy-Shaffer, 2019, p. 84). ...
Article
Full-text available
An often-cited convention for discovery-oriented behavioral science research states that the general relative seriousness of the antecedently accepted false positive error rate of α = 0.05 be mirrored by a false negative error rate of β = 0.20. In 1965, Jacob Cohen proposed this convention to decrease a β-error rate typically in vast excess of 0.20. Thereby, we argue, Cohen (unintentionally) contributed to the wide acceptance of strongly uneven error rates in behavioral science. Although Cohen’s convention can appear epistemically reasonable for an individual researcher, the comparatively low probability that published effect size estimates are replicable renders his convention unreasonable for an entire scientific field. Appreciating Cohen’s convention helps to understand why even error rates (α = β) are “non-conventional” in behavioral science today, and why Cohen’s explanatory reason for β = 0.20—that resource restrictions keep from collecting larger samples—can easily be mistaken for the justificatory reason it is not.
... Think of statistical methods: they have been developed and expanded over hundreds of years by Gauss, Laplace, Pearson, Fisher and many others [62]. Our cumulative statistical methods make it possible to test most types of hypotheses, analyse vast data and make systematic predictions in fields ranging from physics and biology to economics. ...
... The year 1925 is applied here as the year modern statistics was developed, as this is when Ronald Fisher, commonly viewed as the father of the field, published 'Statistical Methods for Research Workers', which marked the first full-length book on statistical methods and was critical in establishing and spreading modern statistics. (62) The standardised use of controlled studies became more commonly applied around 1920. Animal experimentation techniques became standardised and widespread from around 1890, including the production and preparation of animals for experimental purposes. ...
Article
Full-text available
How can scientific progress be conceived best? Does science mainly undergo revolutionary paradigm shifts? Or is the evolution of science mainly cumulative? Understanding whether science advances through cumulative evolution or through paradigm shifts can influence how we approach scientific research, education and policy. The most influential and cited account of science was put forth in Thomas Kuhn’s seminal book The structure of scientific revolutions. Kuhn argues that science does not advance cumulatively but goes through fundamental paradigm changes in the theories of a scientific field. There is no consensus yet on this core question of the nature and advancement of science that has since been debated across science. Examining over 750 major scientific discoveries (all Nobel Prize and major non-Nobel Prize discoveries), we systematically test this fundamental question about scientific progress here. We find that three key measures of scientific progress—major discoveries, methods and fields—each demonstrate that science evolves cumulatively. First, we show that no major scientific methods or instruments used across fields (such as statistical methods, X-ray methods or chromatography) have been completely abandoned, i.e. subject to paradigm shifts. Second, no major scientific fields (such as biomedicine, chemistry or computer science) have been completely abandoned. Rather, they have all continuously expanded over time, often over centuries, accumulating extensive bodies of knowledge. Third, scientific discoveries including theoretical discoveries are also predominantly cumulative, with only 1% of over 750 major discoveries having been abandoned. The continuity of science is most compellingly evidenced by our methods and instruments, which enable the creation of discoveries and fields. We thus offer here a new perspective and answer to this classic question in science and the philosophy and history of science by utilizing methods from statistics and empirical sciences.
... The RPF overcomes the degradation induced by an increase in the number of sensors by using several potatoes of low dimension in parallel, each one designed to capture a particular class of artifacts that affects specific spatial areas at specific frequency bands. Eventually, the output z-scores of all potatoes (i.e., a potato field) are combined into a single p-value using the right-tail Fisher's combination function [23], allowing a Signal Quality Index (SQI) for each epoch ranging from 1 (clean) to 0 (noisy) [24]. ...
... When it comes to muscular artifacts, multiple potatoes can be defined using external electrodes and a high-pass filter above 20 Hz. These may include a potato defined using temporal electrodes to identify jaw clenching and swallowing. After defining a set of J potatoes comprising the RPF, their output z-scores are combined into a single p-value using Fisher's combination method [23]. For z-scores z_j, j = 1, . . . ...
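A hedged sketch of the right-tail combination step described above, assuming each potato's z-score is first converted to a one-sided p-value and the p-values are then pooled with Fisher's method (the z-scores are invented; this is not the RPF authors' code):

```python
# Hedged sketch of a right-tail Fisher combination: each potato j yields a
# z-score z_j; these are mapped to one-sided p-values and combined into a
# single p-value, which can serve as a signal-quality score between 0 (noisy)
# and 1 (clean). The z-scores below are illustrative only.
import numpy as np
from scipy.stats import norm, chi2

z = np.array([0.4, 1.1, 2.7, 0.2])      # hypothetical potato z-scores
p = norm.sf(z)                          # right-tail p-value for each potato
fisher_stat = -2.0 * np.sum(np.log(p))  # Fisher's combination statistic
combined_p = chi2.sf(fisher_stat, df=2 * len(p))  # chi-squared with 2J dof
print(f"combined p-value (SQI-like score): {combined_p:.3f}")
```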
... Since the introduction of Fisher's combination test statistic in Fisher (1932), there have been numerous studies and long-standing interest focused on multiple hypothesis testing. Early studies predominantly focused on deriving combination test statistics by summing transformed p-values. ...
... It was commonly assumed that these p-values were independent, facilitating the establishment of the statistical distribution under the null hypothesis. Among these methods, the seminal work by Fisher (1932) proposed the statistic $-2\sum_{i=1}^{K} \ln p_i$, which has a chi-squared distribution. Pearson (1933) developed the statistic $-\sum_{i=1}^{K} \ln(1-p_i)$, following a gamma distribution under the null, and Edgington (1972) presented the approach of summing individual p-values, $\sum_{i=1}^{K} p_i$. ...
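For reference, when the K p-values are independent and uniform under the global null, these three combination statistics have standard null distributions (textbook results, stated here for orientation rather than taken from the cited papers):

$$ T_F = -2\sum_{i=1}^{K}\ln p_i \;\sim\; \chi^2_{2K}, \qquad T_P = -\sum_{i=1}^{K}\ln(1-p_i) \;\sim\; \mathrm{Gamma}(K,\,1), \qquad T_E = \sum_{i=1}^{K} p_i \;\sim\; \text{Irwin-Hall}(K). $$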
Preprint
Full-text available
In the field of multiple hypothesis testing, combining p-values represents a fundamental statistical method. The Cauchy combination test (CCT) (Liu and Xie, 2020) excels among numerous methods for combining p-values with powerful and computationally efficient performance. However, large p-values may diminish the significance of testing, even when extremely small p-values exist. We propose a novel approach named the positive Cauchy combination test (PCCT) to surmount this flaw. Building on the relationship between the PCCT and CCT methods, we obtain critical values by applying the Cauchy distribution to the PCCT statistic. We find, however, that the PCCT tends to be effective only when the significance level is substantially small or the test statistics are strongly correlated. Otherwise, it becomes challenging to control type I errors, a problem that also pertains to the CCT. Thanks to the theories of stable distributions and the generalized central limit theorem, we have demonstrated critical values under weak dependence, which effectively controls type I errors for any given significance level. For more general scenarios, we correct the test statistic using the generalized mean method, which can control the size under any dependence structure and cannot be further optimized. Our method exhibits excellent performance, as demonstrated through comprehensive simulation studies. We further validate the effectiveness of our proposed method by applying it to a genetic dataset.
... The following treatments were included in the study: weedy check-T0, Pendimethalin @ 0.75kg/ha (PE)-T1, Imazethapyr @100g/ha (PoE)-T2, Quizalofop-ethyl @40 g/ha (PoE)-T3, Pendimethalin @ 0.75kg/ha (PE) + Imazethapyr @100g/ha (PoE)-T4, Pendimethalin @ 0.75kg/ha (PE) + Quizalofop-ethyl @40g/ha (PoE)-T5, Pendimethalin @ 0.75kg/ha (PE) + one hand weeding-T6, Pendimethalin @ 0.75kg/ha (PE) + Straw mulch (3 t/ha)-T7 and Pendimethalin @ 0.75kg/ha (PE) + Straw Mulch (5 t/ha)-T8 and Two-hand weeding (20-25 DAS, 40-45 DAS) (Weed free)-T9. The herbicidal observations were recorded up to harvest and were analysed by statistical methods (Fisher, 1950) [1]. ...
... Famous methods for combining p-values include Fisher's method (Fisher, 1992), Bonferroni's method (Wasserman, 2004), Simes' method (Simes, 1986), and the BH procedure (Benjamini and Hochberg, 1995). Each method is associated with a test statistic t(q) that rejects the global null H₀ when t(q) ≤ τ, for an appropriately chosen threshold τ. ...
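An illustrative sketch of two of these global-null rules, Bonferroni and Simes, written as threshold tests on a vector of p-values q (standard textbook procedures; the p-values below are invented, not from the cited work):

```python
# Illustrative sketch: global-null tests based on the Bonferroni and Simes
# combination rules for a vector of p-values q. Reject H0 when t(q) <= alpha.
import numpy as np

def bonferroni_stat(q):
    """t(q) = K * min(q); compare against the significance level alpha."""
    q = np.asarray(q)
    return len(q) * q.min()

def simes_stat(q):
    """t(q) = min_i K * q_(i) / i, with q_(1) <= ... <= q_(K)."""
    q = np.sort(np.asarray(q))
    K = len(q)
    return np.min(K * q / np.arange(1, K + 1))

q = [0.012, 0.09, 0.20, 0.35]          # hypothetical p-values
alpha = 0.05
print("Bonferroni rejects:", bonferroni_stat(q) <= alpha)
print("Simes rejects:", simes_stat(q) <= alpha)
```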
Preprint
Full-text available
In out-of-distribution (OOD) detection, one is asked to classify whether a test sample comes from a known inlier distribution or not. We focus on the case where the inlier distribution is defined by a training dataset and there exists no additional knowledge about the novelties that one is likely to encounter. This problem is also referred to as novelty detection, one-class classification, and unsupervised anomaly detection. The current literature suggests that contrastive learning techniques are state-of-the-art for OOD detection. We aim to improve on those techniques by combining/ensembling their scores using the framework of null hypothesis testing and, in particular, a novel generalized likelihood ratio test (GLRT). We demonstrate that our proposed GLRT-based technique outperforms the state-of-the-art CSI and SupCSI techniques from Tack et al. 2020 in dataset-vs-dataset experiments with CIFAR-10, SVHN, LSUN, ImageNet, and CIFAR-100, as well as leave-one-class-out experiments with CIFAR-10. We also demonstrate that our GLRT outperforms the score-combining methods of Fisher, Bonferroni, Simes, Benjamini-Hochberg, and Stouffer in our application.
... Split-plot factorials were first invented by Fisher [14] for applications in agriculture, when he was studying the impact of different fertilizing methods on the crop yield of plots of land and smaller patches within these plots. Hence, the terms split-plot, whole plot, and sub-plot. ...
Article
Full-text available
Considerable effort has been put over the last few decades into clarifying the correct design and analysis of split-plot factorial experiments. However, the information found in the literature is scattered and sometimes still not easy to grasp for non-experts. Because of the importance of split-plots for the industry and the fact that any experimenter may need to use them at some point, a detailed and step-by-step guide collecting all the available information on the fundamental methodology in one place was deemed necessary. More specifically, this paper discusses the simple case of an unreplicated split-plot factorial experiment with more than one whole-plot (WP) factors and all factors set at two levels each. Explanations on how to properly design the experiment, analyze the data, and assess the proposed model are provided. Special attention is given to clarifications on the calculations of contrasts, effects, sum of squares (SS), parameters, WP and sub-plot (SP) residuals, as well as the proper division of the proposed model into its sub-designs and sub-models for calculating measures of adequacy correctly. The application of the discussed theory is showcased by a case study on the recycling of molybdenum (Mo) from CIGS solar cells. Factors expected to affect Mo recovery were investigated and the analysis showed that all of them are significant, while the way they affect the response variable was also revealed. After reading this guide, the reader is expected to acquire a good understanding of how to work with split-plots smoothly and handle with confidence more complex split-plot types.
... The independence meta-metric quantifies how much information in one metric is already captured by a set of other metrics. Our meta-metrics are based on ideas which have a long history in statistics (e.g., analysis of variance) and psychometrics (e.g., Cronbach's alpha) (Fisher, 1925; Cronbach, 1951; Kuder and Richardson, 1937) but have not received widespread treatment in sports. The limited work quantifying the reliability of metrics in sports mostly appears in blogs (Sprigings, 2014; Blackport, 2014; Arthur, 2015) and our hope is to formalize and generalize some of the ideas discussed in these articles. ...
Preprint
In sports, there is a constant effort to improve metrics which assess player ability, but there has been almost no effort to quantify and compare existing metrics. Any individual making a management, coaching, or gambling decision is quickly overwhelmed with hundreds of statistics. We address this problem by proposing a set of "meta-metrics" which can be used to identify the metrics that provide the most unique, reliable, and useful information for decision-makers. Specifically, we develop methods to evaluate metrics based on three criteria: 1) stability: does the metric measure the same thing over time? 2) discrimination: does the metric differentiate between players? and 3) independence: does the metric provide new information? Our methods are easy to implement and widely applicable, so they should be of interest to the broader sports community. We demonstrate our methods in analyses of both NBA and NHL metrics. Our results indicate the most reliable metrics and highlight how they should be used by sports analysts. The meta-metrics also provide useful insights about how to best construct new metrics which provide independent and reliable information about athletes.
... which go back to Fisher [1925, 1935] and Neyman [1923/1990, 1935], clarify how the act of randomization allows for the testing for the presence of treatment effects and the unbiased estimation of average treatment effects. Traditionally these methods have not been used much in economics. ...
Preprint
In this paper we discuss recent developments in econometrics that we view as important for empirical researchers working on policy evaluation questions. We focus on three main areas, where in each case we highlight recommendations for applied work. First, we discuss new research on identification strategies in program evaluation, with particular focus on synthetic control methods, regression discontinuity, external validity, and the causal interpretation of regression methods. Second, we discuss various forms of supplementary analyses to make the identification strategies more credible. These include placebo analyses as well as sensitivity and robustness analyses. Third, we discuss recent advances in machine learning methods for causal effects. These advances include methods to adjust for differences between treated and control units in high-dimensional settings, and methods for identifying and estimating heterogeneous treatment effects.
... The simple regression method was used to show the variations among the studied parameters as a result of insect infestation. This method follows Fisher (1950). In addition, the coefficient of determination and the percentage of variance explained were calculated to provide important information about the extent of variation among the variables studied. ...
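A minimal sketch of the quantities mentioned here, a simple linear regression and its coefficient of determination, on invented data (not the study's measurements):

```python
# Minimal sketch (illustrative data only): simple linear regression with the
# coefficient of determination, i.e. the percentage of variance in the
# response explained by the predictor.
import numpy as np
from scipy.stats import linregress

infestation = np.array([5, 12, 20, 28, 35, 44])   # hypothetical % infestation
yield_loss = np.array([2, 6, 9, 15, 18, 22])      # hypothetical % yield loss

fit = linregress(infestation, yield_loss)
r_squared = fit.rvalue ** 2
print(f"slope = {fit.slope:.3f}, R^2 = {r_squared:.3f}, "
      f"variance explained = {100 * r_squared:.1f}%")
```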
Article
Full-text available
The fall armyworm (Spodoptera frugiperda) is a serious crop pest that destroys maize plants in Egypt and the world, leading to reduced quality and quantity of the maize crop. We conducted this study to monitor and determine the damage status and infestation frequency of S. frugiperda on maize plants in Luxor Governorate, southern Egypt. The sampling date was set with the first observable occurrence of fall armyworm at the study site. Forty randomly selected corn plants (ten plants from each replicate) were evaluated and estimated weekly until harvest. The total number of plants used for sampling was 960 plants during the two seasons. The invasion and damage of maize plants by S. frugiperda started at the age of 16 days after sowing until the time of harvest, i.e. S. frugiperda larvae were detected on maize plants during the period from the third week of June until the maize harvest. In terms of number of larvae, number of plants infested with larvae, percentage of infestation, and percentage of damage intensity, three peaks were recorded in each season, occurring at 30, 58 and 86 days after sowing in 2021 and 2022, respectively. Our study shows that the number of damaged plants was higher than the number of infected plants throughout the season. Thus, the percentage of plants damaged by S. frugiperda increased as the timing of corn plant inspections increased during the two seasons. The results indicate that monitoring plant inspections at key times during the growing season can provide crucial data to help farmers implement timely control measures.
... Statistical Parametric Mapping (SPM) was used to perform statistical analysis on time series level (Friston et al., 2006; Pataky, Robinson & Vanrenterghem, 2016). When a seemingly single effect was split into multiple significant clusters, a combined p-value, denoted by p*, was obtained using Fisher's method (Fisher, 1936). In case of significant differences between sessions, post-hoc two-tailed repeated-measures t-tests were performed between sessions (Pre-Post1, Pre-Post2, Post1-Post2) with Bonferroni correction. ...
Article
Full-text available
Background As we age, avoiding falls becomes increasingly challenging. While balance training can mitigate such challenges, the specific mechanisms through which balance control improves remains unclear. Methods We investigated the impact of balance training in older adults on feedback control after perturbations, focusing on kinematic balance recovery strategies and muscle synergy activation. Twenty older adults aged over 65 underwent short-term (one session) and long-term (3-weeks, 10 sessions) balance training, and their recovery from unpredictable mediolateral perturbations was assessed. Perturbations consisted of 8° rotations of a robot-controlled platform on which participants were balancing on one leg. We measured full-body 3D kinematics and activation of 15 leg and trunk muscles, from which linear and rotational kinematic balance recovery responses and muscle synergies were obtained. Results Our findings revealed improved balance performance after long-term training, characterized by reduced centre of mass acceleration and (rate of change of) angular momentum. Particularly during the later stage of balance recovery the use of angular momentum to correct centre of mass displacement was reduced after training, decreasing the overshoot in body orientation. Instead, more ankle torque was used to correct centre of mass displacement, but only for perturbations in medial direction. These situation and strategy specific changes indicate adaptations in feedback control. Activation of muscle synergies during balance recovery was also affected by training, specifically the synergies responsible for leg stiffness and ankle torques. Training effects on angular momentum and the leg stiffness synergy were already evident after short-term training. Conclusion We conclude that balance training in older adults refines feedback control through the tuning of control strategies, ultimately enhancing the ability to recover balance.
... Research into feature interactions dates back many decades within the field of statistics. For example, two-way ANOVA (Fisher, 1925) uncovers interactions between two variables on a dependent variable by decomposing it into a sum of main effects, stemming from a single feature, and interaction effects, stemming from interactions between groups of features. The behaviour of a neural network can be explained in terms of these effects as well. ...
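As an illustration of the two-way ANOVA decomposition described here, a short sketch on synthetic data (factor names and effect sizes are invented) that separates main effects from the interaction effect:

```python
# Hedged sketch of a two-way ANOVA decomposition into main effects and an
# interaction effect, using statsmodels on a small synthetic dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "a": np.repeat(["low", "high"], 20),
    "b": np.tile(np.repeat(["x", "y"], 10), 2),
})
# Response with a main effect of a, a main effect of b, and an a:b interaction.
df["y"] = (
    (df["a"] == "high") * 1.0
    + (df["b"] == "y") * 0.5
    + ((df["a"] == "high") & (df["b"] == "y")) * 0.8
    + rng.normal(scale=0.5, size=len(df))
)

model = smf.ols("y ~ C(a) * C(b)", data=df).fit()
print(anova_lm(model, typ=2))  # sums of squares for main effects and interaction
```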
Preprint
Full-text available
When we speak, write or listen, we continuously make predictions based on our knowledge of a language's grammar. Remarkably, children acquire this grammatical knowledge within just a few years, enabling them to understand and generalise to novel constructions that have never been uttered before. Language models are powerful tools that create representations of language by incrementally predicting the next word in a sentence, and they have had a tremendous societal impact in recent years. The central research question of this thesis is whether these models possess a deep understanding of grammatical structure similar to that of humans. This question lies at the intersection of natural language processing, linguistics, and interpretability. To address it, we will develop novel interpretability techniques that enhance our understanding of the complex nature of large-scale language models. We approach our research question from three directions. First, we explore the presence of abstract linguistic information through structural priming, a key paradigm in psycholinguistics for uncovering grammatical structure in human language processing. Next, we examine various linguistic phenomena, such as adjective order and negative polarity items, and connect a model's comprehension of these phenomena to the data distribution on which it was trained. Finally, we introduce a controlled testbed for studying hierarchical structure in language models using various synthetic languages of increasing complexity and examine the role of feature interactions in modelling this structure. Our findings offer a detailed account of the grammatical knowledge embedded in language model representations and provide several directions for investigating fundamental linguistic questions using computational methods.
... This test is particularly suitable for small sample sizes. Fisher originally proposed this test in 1934 for analyzing contingency tables, especially in small sample sizes [42]. The null hypothesis was as follows: There is no association between Clusters and symptoms. ...
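A minimal sketch of Fisher's exact test on a 2x2 contingency table, with invented counts standing in for the cluster-by-symptom cross-tabulation described above:

```python
# Minimal sketch of Fisher's exact test on a 2x2 contingency table
# (the counts are invented; "cluster" and "symptom" labels are placeholders).
from scipy.stats import fisher_exact

#                symptom present | symptom absent
table = [[8, 2],   # cluster A
         [1, 9]]   # cluster B

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```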
... It is worth noting that the Law of Total Variance [11] can be seen as a special case of this decomposition, where d = 1, n = 1 and N ≥ 2, or in other words, a 'Law of Total Fréchet Variance' defines a Pythagorean-like theorem in the 2-Wasserstein space for empirical probability measures supported on ℝ^d. This decomposition thus also relates to the F-statistic used in One-Way Analysis of Variance [9,26]: observe that if the μ_ℓ ∈ P(ℝ) (treated as sample data rather than measures) all have uniform weights, and we take n = 1, one can calculate an F-statistic as ...
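For background, the standard law of total variance and the classical one-way ANOVA F-statistic that this snippet alludes to are (textbook forms, with k groups of sizes n_ℓ and N total observations; the cited paper's exact expression is not reproduced here):

$$ \operatorname{Var}(Y) = \mathbb{E}\!\left[\operatorname{Var}(Y \mid G)\right] + \operatorname{Var}\!\left(\mathbb{E}[Y \mid G]\right), \qquad F = \frac{\sum_{\ell=1}^{k} n_\ell\,(\bar{y}_\ell - \bar{y})^2 / (k-1)}{\sum_{\ell=1}^{k}\sum_{j=1}^{n_\ell} (y_{\ell j} - \bar{y}_\ell)^2 / (N-k)}. $$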
Preprint
Full-text available
Wasserstein distances form a family of metrics on spaces of probability measures that have recently seen many applications. However, statistical analysis in these spaces is complex due to the nonlinearity of Wasserstein spaces. One potential solution to this problem is Linear Optimal Transport (LOT). This method allows one to find a Euclidean embedding, called LOT embedding, of measures in some Wasserstein spaces, but some information is lost in this embedding. So, to understand whether statistical analysis relying on LOT embeddings can make valid inferences about original data, it is helpful to quantify how well these embeddings describe that data. To answer this question, we present a decomposition of the Fréchet variance of a set of measures in the 2-Wasserstein space, which allows one to compute the percentage of variance explained by LOT embeddings of those measures. We then extend this decomposition to the Fused Gromov-Wasserstein setting. We also present several experiments that explore the relationship between the dimension of the LOT embedding, the percentage of variance explained by the embedding, and the classification accuracy of machine learning classifiers built on the embedded data. We use the MNIST handwritten digits dataset, IMDB-50000 dataset, and Diffusion Tensor MRI images for these experiments. Our results illustrate the effectiveness of low dimensional LOT embeddings in terms of the percentage of variance explained and the classification accuracy of models built on the embedded data.
... Rosenthal's Failsafe N of 9,556 (z = 21.74) (101) and Fisher Failsafe-N of 3,571 (Fisher's chi-squared p < 0.0001) (102) suggested that large numbers of unpublished data would be required to negate the current meta-analysis. Reiteration to account for these influences led to an adjusted combined effect size of 0.75 (95% CI, 0.67-0.83). ...
Article
Full-text available
Reduced natural killer (NK) cell cytotoxicity is the most consistent immune finding in myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). Meta-analysis of the published literature determined the effect size of the decrement in ME/CFS. Databases were screened for papers comparing NK cell cytotoxicity in ME/CFS and healthy controls. A total of 28 papers and 55 effector:target cell ratio (E:T) data points were collected. Cytotoxicity in ME/CFS was significantly reduced to about half of healthy control levels, with an overall Hedges’ g of 0.96 (0.75–1.18). Heterogeneity was high but was explained by the range of E:T ratios, different methods, and potential outliers. The outcomes confirm reproducible NK cell dysfunction in ME/CFS and will guide studies using the NK cell model system for pathomechanistic investigations. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42024542140.
... Estimating a PVAR model under GMM estimation requires the variables used to be stationary, which can be tested in panel modelling using tests developed by Maddala and Wu (1999), Choi (2001), Hadri (2000), Levin et al. (2002), and Im et al. (2003). The Fisher-type tests are the most popular in the literature, based on the idea developed by Fisher (1932). Therefore, the current research uses a Fisher-type test based on Phillips-Perron tests to test for stationarity of variables, with the null hypothesis of a unit root in each cross-section and an alternative of at least one stationary cross-section. ...
... To examine the effect of family clustering, Fisher exact tests were run to investigate the associations between unmet financial needs of (1) people with psychosis and siblings, and (2) people with psychosis and parents within the same family [66]. Effect sizes were indicated by Cramer's V, and interpreted as small (≤0.10), medium (0.11-0.30), or large (>0.30; [67]). ...
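A short sketch of the effect-size step described here, computing Cramér's V from a contingency table via the chi-squared statistic (counts are invented; the interpretation thresholds follow the snippet above):

```python
# Illustrative computation of Cramer's V from a contingency table via the
# chi-squared statistic (counts are invented; thresholds per the snippet:
# small <= 0.10, medium 0.11-0.30, large > 0.30).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10],
                  [18, 22]])            # hypothetical 2x2 cross-tabulation

chi2_stat, p_value, dof, expected = chi2_contingency(table)
n = table.sum()
k = min(table.shape) - 1
cramers_v = np.sqrt(chi2_stat / (n * k))
print(f"p = {p_value:.4f}, Cramer's V = {cramers_v:.2f}")
```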
Article
Full-text available
Background. Psychotic disorders have a strong negative impact on people’s lives, including their financial situation. This study aimed to examine differences in unmet financial needs between people with psychotic disorders, parents, siblings, and controls. Secondly, we aimed to examine whether family clustering contributes to unmet financial needs. Lastly, we aimed to examine to what extent demographic, economic, psychiatric, functional, and cognitive characteristics and substance use predict unmet financial needs in people with psychosis. Methods. Data from the first assessment of people with psychosis (n = 956), siblings (n = 889), parents (n = 858), and controls (n = 496) included in the Genetic Risk and Outcome of Psychosis study were used. Group differences were assessed with Kruskal–Wallis tests (aim 1), while a mixed-effects logistic regression analysis and explorative and confirmative ordinal logistic regression analyses were conducted for aims 2 and 3, respectively. Results. Twenty-four percent of people with psychotic disorders reported unmet financial needs. These levels of unmet financial needs were significantly higher than levels for siblings, parents, and controls. We found a negligible influence of (direct) familial clustering on unmet financial needs. Lastly, cannabis and tobacco use significantly and consistently predicted higher levels of unmet financial needs of people with psychosis. Conclusions. Relatively high levels of unmet financial needs occurred in a heterogeneous group of people with psychosis, especially when people used cannabis or tobacco. Unmet financial needs can have detrimental consequences for mental health, stigmatization, leisure time activities, and social engagement. Thus, it is pivotal to recognize unmet financial needs, especially combined with substance use, as a crucial stressor for people with psychosis.
... As explained in 2.3, the imposition of an angular threshold ensures that the camera-to-target relative pose lies within proper acceptability ranges, ensuring pseudo-parallelism between the camera and the plane containing the crack. A two-way ANOVA [48] analysis was performed on the collected data in the six different angular scenarios to examine the effect of operator, crack type and angular threshold. Results are reported in Tables 1 and 2. The results show there is a significant interaction between the target crack (i.e., Crack ID) and the operator (i.e., p < 0.05, considering a significance threshold level of 0.05); the null hypothesis can be rejected, and the test is statistically significant considering all angular scenarios. ...
... Note. EM = mean age; Mean ± Standard Deviation; F = analysis of variance (Fisher, 1925); p = statistical significance (Fisher, 1959); α = significance level; η² = partial eta-squared effect size (Cohen, 1988); d = effect size for between-group differences, Cohen's d (Cohen, 1988); t = Student's t-test (Gosset, 1908); β = beta coefficient; R² = coefficient of determination; r = Pearson correlation coefficient; OR = odds ratio; SEM model, structural equation modelling: CFI, Comparative Fit Index; TLI, Non-Normed Fit Index; RMSEA, Root Mean Square Error of Approximation (Bentler, 1990); Z = statistic of the non-parametric Wilcoxon test. UM, ultramarathon; REBT, rational emotive behaviour therapy; SDT, self-determination theory; IE, Emotional Intelligence; EEG, electroencephalogram. ...
Article
Full-text available
Objective: This systematic review explores the psychological profile of Ultra Trail runners, analysing its relationship with athletes' performance and well-being. Method: A systematic review was conducted following the PRISMA Statement, using several databases: Web of Science, Scopus, PubMed, PsycINFO, PsycARTICLES, Psicodoc, Dialnet, ScienceDirect and ResearchGate. Inclusion criteria (scientific articles in sports contexts exploring psychological variables in Ultra Trail Running, published between 2014 and 2024 in Spanish or English) and exclusion criteria (studies in sports contexts that do not address psychological characteristics or that are not specifically about Ultra Trail Running) were applied, and methodological quality was assessed using the STROBE checklist, the Physiotherapy Evidence Database (PEDro) scale and the Spanish Critical Appraisal Skills Programme (CASPe). This study was registered on the international PROSPERO platform for prospective systematic reviews under registration number 512424. Results: From the thirty-one selected studies, the key psychological characteristics associated with Ultra Trail runners were identified and grouped into the following topics: motivation; psychological skills; and stress management. The trends indicate that a favourable psychological profile, that is, an athlete oriented towards intrinsic motivation with adequate psychological skills and stress management, positively influences sports performance and well-being. Conclusions: The need for personalised approaches to mental training by sports professionals is highlighted, pointing to the importance of high-quality research focused on the psychological particularities, mental health and psychological preparation of these athletes.
... To assess the meaning of the presented metrics and their implications, subjecting them to a statistical test against a null distribution is essential. To this end, hypothesis testing methodologies have been extensively discussed [28][29][30], which provided the theoretical foundations and practical applications of statistical hypothesis testing, offering insights into optimal testing procedures and efficiency. ...
Article
Full-text available
Network analysis has found widespread utility in many research areas. However, assessing the statistical significance of observed relationships within networks remains a complex challenge. Traditional node permutation tests are often insufficient in capturing the effect of changing network topology by creating reliable null distributions. We propose two randomization alternatives to address this gap: random rewiring and controlled rewiring. These methods incorporate changes in the network topology through edge swaps. However, controlled rewiring allows for more nuanced alterations of the original network than random rewiring. In this sense, this paper introduces a novel evaluation tool, the Expanded Quadratic Assignment Procedure (EQAP), designed to calculate a specific p-value and interpret statistical tests with enhanced precision. The combination of EQAP and controlled rewiring provides a robust network comparison and statistical analysis framework. The methodology is exemplified through two real-world examples: the analysis of an organizational network structure, illustrated by the Enron-Email dataset, and a social network case, represented by the UK Faculty friendship network. The utility of these statistical tests is underscored by their capacity to safeguard researchers against Type I errors when exploring network metrics dependent on intricate topologies.
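As a rough illustration of the random-rewiring idea (not the authors' EQAP or controlled-rewiring implementation), a degree-preserving edge-swap null distribution for a network metric might look like the following, assuming the networkx library and a standard example graph:

```python
# Sketch of a random-rewiring null distribution for a network metric using
# degree-preserving double edge swaps (a generic null model, not the authors'
# EQAP or controlled-rewiring code).
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                     # example observed network
observed = nx.transitivity(G)                  # metric of interest (clustering)

rng = np.random.default_rng(42)
null_values = []
for _ in range(200):
    H = G.copy()
    nx.double_edge_swap(H, nswap=4 * H.number_of_edges(),
                        max_tries=40 * H.number_of_edges(),
                        seed=int(rng.integers(1_000_000)))
    null_values.append(nx.transitivity(H))

p_value = (1 + sum(v >= observed for v in null_values)) / (1 + len(null_values))
print(f"observed transitivity = {observed:.3f}, one-sided p = {p_value:.3f}")
```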
... The experiment was laid out in a split-plot design (SPD) with three replications and 18 treatments comprising combinations of two factors of panchgavya, i.e. application doses (control, 2%, 4%, 6%, 8% and 10%) and application stages (branching, flowering, and branching + flowering). The parameters were analysed by statistical methods (Fisher, 1950) [2]. ...
Article
Full-text available
Intraclass correlation coefficients are widely used not only in medicine but also in various other fields. This study focuses on the intraclass correlation coefficient based on the one-way random effects model. We review the methods for constructing confidence intervals and compare their performance through simulations. Donner and Wells (Biometrics 42:401–412, 1986) and Ukoumunne (Stat Med 21(24):3757–3774, 2002) conducted simulation studies to compare some methods. However, in this study, we also include methods developed after their studies. These include a method based on the nonparametric bootstrap by Ukoumunne et al. (Stat Med 22(24):3805–3821, 2003), a method based on the restricted maximum likelihood (REML) by Burch (Comput Stat Data Anal 55(2):1018–1028, 2011), and a method based on the beta distribution by Demetrashvili et al. (Stat Methods Med Res 25(5):2359–2376, 2016). Our simulations reveal that under the normality of random effects and errors, the REML-based method performs best overall in terms of coverage probability of confidence intervals, upper and lower error rates, and mean interval width.
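For orientation, a minimal sketch of the one-way random-effects intraclass correlation, ICC(1), computed from between- and within-group mean squares on synthetic data (the point estimate only, not any of the confidence-interval methods reviewed in the article):

```python
# Minimal sketch of the one-way random-effects intraclass correlation, ICC(1),
# computed from between- and within-group mean squares (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
k, n = 10, 5                                   # 10 groups, 5 ratings per group
group_effects = rng.normal(scale=1.0, size=k)  # random group effects
data = group_effects[:, None] + rng.normal(scale=1.5, size=(k, n))

group_means = data.mean(axis=1)
grand_mean = data.mean()
ms_between = n * np.sum((group_means - grand_mean) ** 2) / (k - 1)
ms_within = np.sum((data - group_means[:, None]) ** 2) / (k * (n - 1))

icc1 = (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)
print(f"ICC(1) = {icc1:.3f}")
```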
Article
Full-text available
Wind turbines used to combat climate change pose a green-green dilemma when endangered and protected wildlife species are killed by collisions with rotating blades. Here, we investigated the geographic origin of bats killed by wind turbines along an east-west transect in France to determine the spatial extent of this conflict in Western Europe. We analysed stable hydrogen isotopes in the fur keratin of 60 common noctule bats (Nyctalus noctula) killed by wind turbines during summer migration in four regions of France to predict their geographic origin using models based on precipitation isoscapes. We first separated migratory from regional individuals based on fur isotope ratios of local bats. Across all regions, 71.7% of common noctules killed by turbines were of regional and 28.3% of distant origin, the latter being predominantly females from northeastern Europe. We observed a higher proportion of migratory individuals from western sites compared to eastern sites. Our study suggests that wind-turbine-related losses of common noctule bats may impact distant breeding populations across whole Europe, confirming that migratory bats are highly vulnerable to wind turbines and that effective conservation measures, such as temporary curtailment of turbine operation, should be mandatory to protect them from colliding with the rotating blades of wind turbines.
Article
Full-text available
Routine least squares regression analyses may sometimes miss important aspects of data. To exemplify this point we analyse a set of 1171 observations from a questionnaire intended to illuminate the relationship between customer loyalty and perceptions of such factors as price and community outreach. Our analysis makes much use of graphics and data monitoring to provide a paradigmatic example of the use of modern robust statistical tools based on graphical interaction with data. We start with regression. We perform such an analysis and find significant regression on all factors. However, a variety of plots show that there are some unexplained features, which are not eliminated by response transformation. Accordingly, we turn to robust analyses, intended to give answers unaffected by the presence of data contamination. A robust analysis using a non-parametric model leads to the increased significance of transformations of the explanatory variables. These transformations provide improved insight into consumer behaviour. We provide suggestions for a structured approach to modern robust regression and give links to the software used for our data analyses.
Article
Full-text available
In this work, we address the question of how to enhance signal-agnostic searches by leveraging multiple testing strategies. Specifically, we consider hypothesis tests relying on machine learning, where model selection can introduce a bias towards specific families of new physics signals. Focusing on the New Physics Learning Machine, a methodology to perform a signal-agnostic likelihood-ratio test, we explore a number of approaches to multiple testing, such as combining p-values and aggregating test statistics. Our findings show that it is beneficial to combine different tests, characterised by distinct choices of hyperparameters, and that performances comparable to the best available test are generally achieved, while also providing a more uniform response to various types of anomalies. This study proposes a methodology that is valid beyond machine learning approaches and could in principle be applied to a larger class of model-agnostic analyses based on hypothesis testing.
Chapter
Many questions are answered using quantitative data. The main options are to analyse data using a narrative approach of tables and narrative explanation, or the formal pooling of data using meta-analysis. In this chapter, we outline a four-step approach to understanding quantitative data: what is the point estimate, how much variability or uncertainty is there about this, what is the clinical significance, and what is the statistical significance? When undertaking a meta-analysis, there are key decisions such as the outcome measure and meta-analysis model to be used. Key results include the point estimate with its confidence interval, prediction interval, and measures of heterogeneity. When the key assumption of studies being independent is violated, there are different approaches, namely, multilevel and multivariate analyses.
Article
This paper presents likelihood–based inference methods for the family of univariate gamma–normal distributions $\textrm{GN}(\alpha, r, \mu, \sigma^2)$ that result from summing independent $\gamma(\alpha, r)$ and $N(\mu, \sigma^2)$ random variables. First, the probability density function of a gamma–normal variable is provided in compact form with the use of parabolic cylinder functions, along with key properties. We then provide analytic expressions for the maximum–likelihood score equations and the Fisher information matrix, and discuss inferential methods for the gamma–normal distribution. Given the widespread use of the two constituting distributions, the gamma–normal distribution is a general purpose tool for a variety of applications. In particular, we discuss two distributions that are obtained as special cases and that are featured in a variety of statistical applications: the exponential–normal distribution and the chi–squared–normal (or overdispersed chi–squared) distribution.
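Since the gamma–normal variable is simply the sum of independent gamma and normal components, its basic behaviour can be checked by simulation; a hedged sketch with arbitrary parameter values follows (this is not the paper's parabolic-cylinder-function density, only a moment check of the convolution):

```python
# Hedged sketch: a gamma-normal variable is the sum of independent gamma and
# normal components, so its moments can be verified by simulation
# (parameter values are arbitrary).
import numpy as np

alpha, r = 3.0, 2.0        # gamma shape and rate
mu, sigma = 1.0, 0.5       # normal mean and standard deviation

rng = np.random.default_rng(0)
x = (rng.gamma(shape=alpha, scale=1.0 / r, size=200_000)
     + rng.normal(loc=mu, scale=sigma, size=200_000))

# Moments of the sum follow directly from the two components.
print("sample mean:", x.mean(), " theory:", alpha / r + mu)
print("sample var: ", x.var(),  " theory:", alpha / r**2 + sigma**2)
```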
Article
Full-text available
While the construct of the market is one of the basic concepts in Economics, the term competition became the elemental paradigm in apprehending the organisation of markets. For economic assessment, it is crucial to understand how markets are organized, how they function, and how firms operating within them behave. Economists remain interested in how different market structures and the concentration of sellers affect market prices and quantities. Long before the advent of neoclassical economics and its equilibrium in a perfectly competitive economy, Adam Smith advocated for competitive markets as the preferred market structure because they lead to socially optimal economic outcomes. This concept essentially encapsulates the entire theoretical background of microeconomics. There are numerous arguments for clarifying the EU competition rules, as some of them are aimed at improving market efficiency for the benefit of consumers while others are driven by purely political and/or competitiveness concerns. Hence, the goal of this study is to test perfect competition conditions, and therefore the competitive dynamics among European countries, in order to analyse the disparity between theoretical positions and empirical reality. The evaluation is based on testing the equality of prices and marginal cost in the long run, as well as in the short run, within a panel-structured sample of 38 European countries for the period 1960-2022. Various estimation methods indicated the absence of equality between prices and marginal costs across the panel sample and different sub-samples, but with the presence of long-term cointegration between these variables, indicating that they share a common long-run trend.
Article
Biochemical research on mungbean focuses on its nutritional value, antioxidant properties, and potential health benefits. Studies often explore the composition of proteins, vitamins, and minerals, as well as phytochemicals such as flavonoids and phenolic compounds that contribute to its health effects. Research also examines mungbean's role in sustainable agriculture and its ability to improve soil health through nitrogen fixation. Overall, mungbean is valued for its nutritional benefits and potential in promoting health and environmental sustainability. In the present study, 10 genotypes were used for the assessment of correlation coefficients and simple linear regression analysis. According to the correlation analysis, plant stand, days to 50% flowering, branches/plant, pod length, pods/plant, seeds/plant, plant height, days to maturity and 100-seed weight were highly significantly correlated with yield. The implication of a significant correlation is that the relationship is not due to random chance; it is statistically robust. In practical terms, this means that by selecting for or improving the trait that contributes most to yield, the overall yield per plant is likely to be enhanced. This could lead to more efficient farming practices and better resource utilization, and it suggests that enhancing or selecting for this character could lead to improved yields in the crop. This is a valuable finding for breeding programs, as it identifies a key trait that could be targeted to optimize production. The simple linear regression results help to understand the extent and nature of the relationship between the two variables, which can be useful for prediction and further analysis. Linear regression algorithms are used to make precise predictions, and having a large dataset can enhance the effectiveness of the decision-making model.
Article
Full-text available
We update and extend the non-parametric test proposed in Ashley and Patterson (J Financ Quant Anal 21:221–227, 2014) – of the proposition that the (pre-whitened) daily stock returns for a firm are serially independent, and hence unpredictable from their own past. That paper applied this test to daily returns from 1962 to 1981 for several U.S. corporations and aggregate indices, finding mixed evidence against this null hypothesis of serial independence. The returns dataset is updated here to include thirteen firms which are currently more relevant, and the sample is extended through the end of 2023. We also update the simulation methodology here to properly account for the conditional heteroskedasticity in the daily returns data, so that the present results should now be more statistically reliable. The results are broadly in line with our earlier results, but they do suggest further avenues of research in this area.
Article
Full-text available
Drug-drug interactions may amplify or diminish their intended effects, or even produce entirely new effects. Multicomponent mixture HPLC analysis offers a thorough and effective method for understanding the makeup and behavior of complicated materials, advancing research and development across a range of scientific and industrial domains. A novel experimental design-assisted HPLC methodology for the concurrent investigation of the drug-drug interaction of pholcodine, ephedrine, and guaifenesin in biological fluids has been established. Rather than the routine methodology, the application of the factorial design-HPLC method offers a powerful and efficient tool for the analysis of these compounds. Both mixed and full factorial designs were employed to assess the impact of variable factors on chromatographic results. Utilizing an isocratic elution mode on a C18 column, the chromatographic separation was carried out. The mobile phase, flowing at a rate of 1.0 mL/min, consists of 15% methanol, 5% acetonitrile, and 80% phosphate buffer with 0.1% (v/v) triethylamine adjusted to pH 3. The calibration curves of the drugs show excellent linearity over the concentration ranges 0.20–13.0 µg/mL for PHO, 0.50–20.0 µg/mL for EPH and 0.70–20.0 µg/mL for GUA, with LOQ values of 0.18, 0.38 and 0.50 µg/mL, respectively. The fast separation and quantitation in less than 6 min is an advantage. Also, the method includes a robust sample preparation protocol for the analysis of complex biological samples, ensuring high selectivity and precision. The ease, speed and cost-effectiveness of the method are ideal for supporting in vitro studies, including drug-drug interaction investigations, especially in bioanalytical labs.
Article
Full-text available
We present a nation-wide network analysis of non-fatal opioid-involved overdose journeys in the United States. Leveraging a unique proprietary dataset of Emergency Medical Services incidents, we construct a journey-to-overdose geospatial network capturing nearly half a million opioid-involved overdose events spanning 2018–2023. We analyze the structure and sociological profiles of the nodes, which are counties or their equivalents, characterize the distribution of overdose journey lengths, and investigate changes in the journey network between 2018 and 2023. Our findings include that authority and hub nodes identified by the HITS algorithm tend to be located in urban areas and involved in overdose journeys with particularly long geographical distances.
Article
Full-text available
Across three online studies, we examined the relationship between the Fear of Missing Out (FoMO) and moral cognition and behavior. Study 1 (N = 283) examined whether FoMO influenced moral awareness, judgments, and recalled and predicted behavior of first-person moral violations in either higher or lower social settings. Study 2 (N = 821) examined these relationships in third-person judgments with varying agent identities in relation to the participant (agent = stranger, friend, or someone disliked). Study 3 (N = 604) examined the influence of recalling activities either engaged in or missed out on these relationships. Using the Rubin Causal Model, we created hypothetical randomized experiments from our real-world randomized experimental data with treatment conditions for lower or higher FoMO (median split), matched for relevant covariates, and compared differences in FoMO groups on moral awareness, judgments, and several other behavioral outcomes. Using a randomization-based approach, we examined these relationships with Fisher Tests and computed 95% Fisherian intervals for constant treatment effects consistent with the matched data and the hypothetical FoMO intervention. All three studies provide evidence that FoMO is robustly related to giving less severe judgments of moral violations. Moreover, those with higher FoMO were found to report a greater likelihood of committing moral violations in the past, knowing people who have committed moral violations in the past, being more likely to commit them in the future, and knowing people who are likely to commit moral violations in the future.
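A minimal sketch of a Fisher randomization (permutation) test of the sharp null of no treatment effect, in the spirit of the randomization-based approach described above (toy data; not the study's matching or Fisherian-interval code):

```python
# Sketch of a Fisher randomization (permutation) test for a sharp null of zero
# treatment effect on a toy two-group dataset.
import numpy as np

rng = np.random.default_rng(7)
treated = np.array([3.1, 2.4, 3.8, 2.9, 3.5])      # hypothetical outcomes
control = np.array([2.2, 2.6, 2.1, 2.8, 2.0])

observed = treated.mean() - control.mean()
pooled = np.concatenate([treated, control])
n_treated = len(treated)

diffs = []
for _ in range(10_000):
    perm = rng.permutation(pooled)
    diffs.append(perm[:n_treated].mean() - perm[n_treated:].mean())

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed difference = {observed:.2f}, two-sided p = {p_value:.4f}")
```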
Article
Full-text available
Climate change is a critical global issue with wide-ranging impacts, particularly on agriculture. This study examines how climate change influences food prices and poverty in underdeveloped countries. Rising temperatures and extreme weather events are diminishing agricultural productivity, leading to increased food prices and worsening poverty. The research involved developing a climate change index using an autoencoder model, which can learn the important features of data and translate it into a lower-dimensional representation. This index was based on variables such as carbon emission rates, annual average rainfall, forest cover, fossil fuel consumption, renewable energy use, and temperature changes. The relationship between this climate change index and food prices and poverty was analyzed using panel causality methods. Additionally, food prices from 2020 to 2030 were projected using various time series forecasting techniques to determine the most accurate predictive model. The findings indicate that while climate change does not significantly affect poverty when considering all countries as a panel, it does have a notable impact on food prices. This underscores the need for effective policy measures to address the effects of climate change on food costs. To mitigate these impacts, it is essential for policymakers to enhance agricultural resilience through sustainable practices and targeted interventions. Future research should expand the dataset and include a broader range of countries to gain a more comprehensive understanding of how climate change affects food prices and poverty.
Article
This study aims to investigate the relationship between military expenditures and geopolitical risk using the Panel Fourier Toda-Yamamoto Causality test over the 1993–2020 period. Considering structural changes, the findings reveal that geopolitical risk fluctuations in Colombia, India, South Korea, Russia, Saudi Arabia, Ukraine, and the USA affect military expenditures. Conversely, the results point out that for Chile, Israel, Russia, Taiwan, and the UK, military expenditures appear to cause geopolitical risk. This highlights that changes in military spending across nations trigger an arms race due to the perception of increased threat by neighbours and/or interest groups. In a nutshell, the results show a complex interplay between military expenditures and geopolitical risk, where changes in one can affect the other. Based upon this, policymakers must prioritize diplomacy, utilize international mediation/peacekeeping initiatives, develop military alliances, and commit to non-threatening military expenditures for regional stability.
Article
Full-text available
Purpose We provide the first systematic review and meta-analysis of research examining multidimensional perfectionism—perfectionistic strivings and perfectionistic concerns—and orthorexia. Methods The systematic review and meta-analysis was pre-registered and conducted using a search of PsycINFO, MEDLINE, Education Abstracts, and Oxford Academic, and ScienceDirect up to April 2023. PRISMA guidelines were also followed. Meta-analysis using random-effects models was used to derive independent and unique effects of perfectionism, as well as total unique effects (TUE), and relative weights. Moderation of effects were examined for age, gender, domain, perfectionism and orthorexia instruments, and methodological quality. Results Eighteen studies, including 19 samples (n = 7064), met the eligibility criteria with 12 of these studies (with 13 samples; n = 4984) providing sufficient information for meta-analysis. Meta-analysis revealed that perfectionistic strivings (r⁺ = 0.27, 95% CI [0.21, 0.32]) and perfectionistic concerns (r⁺ = 0.25, 95% CI [0.18, 0.31]) had positive relationships with orthorexia. After controlling for the relationship between perfectionism dimensions, only perfectionistic strivings predicted orthorexia which also contributed marginally more to an overall positive total unique effect of perfectionism (TUE = 0.35; 95% CI [0.28, 0.42]). There was tentative evidence that orthorexia instrument moderated the perfectionistic concerns-orthorexia relationship. Discussion Research has generally found that both dimensions of perfectionism are positively related to orthorexia. More high-quality research is needed to examine explanatory mechanisms while also gathering further evidence on differences in findings due to how orthorexia is measured, as well as other possible moderating factors. Level of evidence Level 1, systematic review and meta-analysis.
Article
Full-text available
The ability of atmospheric pressure plasma jets to treat complex non-planar surfaces is often cited as their advantage over other atmospheric plasmas. However, the effect of complex surfaces on plasma parameters and treatment efficiency has seldom been studied. Herein, we investigate the interaction of the atmospheric pressure plasma slit jet (PSJ) with block polypropylene samples of different thicknesses (5 and 30 mm) moving at two different speeds. Even though the distance between the slit outlet and the sample surface was kept constant, the treatment efficiency of PSJ ignited in the Ar and Ar/O₂ gas feeds varied with the sample thickness due to the plasma parameters such as filament count and speed being affected by the different distances of the ground (the closer the ground is, the higher the discharge electric field). On the other hand, the Ar/N₂ PSJ diffuse plasma plumes were less affected by the changes in the electric field, and the treatment efficiency was the same for both sample thicknesses. Additionally, we observed a difference in the efficiency and uniformity of the PSJ treatment of the edges and the central areas in some working conditions. The treatment efficiency near the edges depended on the duration of the filament contact, i.e., how long the local electric field trapped the filaments. Conversely, the treatment uniformity near the edges and in the central areas was different if the number of filaments changed rapidly as the discharge moved on and off the sample (the 5 mm samples treated by easily sustained Ar PSJ).