Article

A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity

Wiley
Econometrica
Authors: Halbert White

Abstract

This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator to those of the usual covariance estimator, one obtains a direct test for heteroskedasticity, since in the absence of heteroskedasticity, the two estimators will be approximately equal, but will generally diverge otherwise. The test has an appealing least squares interpretation.
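As a concrete illustration of the estimator the abstract describes, here is a minimal NumPy sketch of the "HC0" sandwich form (X'X)⁻¹(Σᵢ eᵢ² xᵢxᵢ')(X'X)⁻¹; the data and names are synthetic and illustrative, not code from the paper.

```python
import numpy as np

def hc0_covariance(X, y):
    """OLS coefficients plus White's (1980) HC0 covariance matrix."""
    XtX_inv = np.linalg.inv(X.T @ X)          # "bread" of the sandwich
    beta = XtX_inv @ X.T @ y                  # OLS estimate
    resid = y - X @ beta
    meat = (X * resid[:, None] ** 2).T @ X    # sum_i e_i^2 * x_i x_i'
    return beta, XtX_inv @ meat @ XtX_inv     # bread @ meat @ bread

# Synthetic data whose error variance grows with the regressor.
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 2.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + x, size=n)

beta, cov = hc0_covariance(X, y)
print("coefficients:", beta)
print("robust standard errors:", np.sqrt(np.diag(cov)))
```

In statsmodels the same estimator is available by fitting OLS with cov_type="HC0".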

... State whether you believe there is any seasonal, cyclical, or random component. 36. Investigate the monthly electricity consumption over some 10-year period for a country of your choice. ...
... White's (1980) [36] test of no heteroskedasticity is based on an auxiliary regression of the squared residuals (errors) ẑ² on all the original regressors x_it, their squares x_it², and the cross-products of the regressors. The test contrasts the null hypothesis of homoskedasticity H0 against the alternative hypothesis of heteroskedasticity Ha as follows: H0: E(Z_i²) = σ² for every value of i vs. Ha: E(Z_i²) ≠ σ² for at least one value of i. ...
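The auxiliary-regression test sketched in the snippet above is packaged in statsmodels as het_white, which regresses the squared residuals on the regressors, their squares, and their cross-products and refers n·R² to a χ² distribution; a brief illustration on synthetic data (all names illustrative):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(1)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2]))
# Error scale depends on x1, so H0 (homoskedasticity) is false here.
y = 1 + x1 - x2 + rng.normal(scale=np.exp(0.5 * x1), size=n)

resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pval, f_stat, f_pval = het_white(resid, X)
print(f"White LM statistic = {lm_stat:.2f}, p-value = {lm_pval:.4f}")
```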
Book
Full-text available
This text presents some basic elements for the mathematical study of univariate time series. It is aimed at any student in the sciences with intermediate undergraduate knowledge of probability and statistics. In particular, students of Actuarial Science, who must study these topics as part of their academic training, may benefit from this material. A time series is a systematic record over time of a variable of interest. Such successive records are kept in many areas of human knowledge, including engineering, ecology, economics, finance, epidemiology, medicine, and demography, among many others. The goal is to understand the behavior of the variable of interest over time in order to propose models and forecasts, and thereby support better decision-making in the field of study in question. The work begins with an introduction to the first graphical, numerical, and modeling elements associated with a time series. Four main components that may be present and identifiable in a time series are then defined: the trend, the cyclical component, seasonality, and finally the erratic or random element. The autoregressive (AR) model, the moving average (MA) model, and the ARMA, ARIMA, and SARIMA models are also reviewed. The work closes with a brief introduction to the problem of producing forecasts from the fitted models. Our treatment is traditional in the sense of studying the evolution of the variables as functions of time. It is important to note that the text is brief and mathematically oriented, so we have sought to present procedures, statements, and proofs with some degree of mathematical justification wherever such treatment was possible without resorting to more advanced results. On the other hand, most data are nowadays stored in databases or spreadsheets, and the practical analysis of time series has been streamlined because many of the procedures are implemented in the available software packages. This work, however, is not tied to any particular software package. Even so, some R code used in several of the examples has been included and can be consulted at the end of the book. Readers interested in accompanying this work with R are advised to consult the book by Coghlan (2018) [12], which offers a practical, elementary introduction to using R for time series analysis. Other, more complete texts that rely on R in their presentation include Cowpertwait and Metcalfe (2009) [14], Cryer and Chan (2008) [15], and Shumway and Stoffer (2017) [33], to mention only a few. We thank DGAPA UNAM for the support provided through project PAPIME PE102321 "Estadística y simulación", which made the publication of this work possible. We also sincerely thank the reviewers of this work for their valuable comments, as well as the Publications Committees of the Department of Mathematics and the Faculty of Sciences for their excellent editorial work. The authors, November 2023
... For a 95% confidence set, two standard deviations were considered (Bland and Altman 1999) to indicate a range within which 95% of the differences between manual and LiDAR-based measurements are anticipated to fall. The extent of the potential sampling error can be estimated using the 95% confidence interval (CI) of the agreement limits (Gia-; Breusch-Pagan 1979; White 1980). Although the White test concept is similar to that of Breusch and Pagan, it rests on weaker assumptions about the form that heteroscedasticity takes. ...
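As a small illustration of the Bland-Altman agreement limits the snippet refers to (synthetic measurements and illustrative names; 1.96 standard deviations is the conventional multiplier behind the "two standard deviations"):

```python
import numpy as np

rng = np.random.default_rng(2)
manual = rng.uniform(5, 50, 60)                # reference tape measurements (cm)
lidar = manual + rng.normal(0.1, 0.8, 60)      # scanner measurements with noise

diff = lidar - manual
bias = diff.mean()                             # mean difference (systematic bias)
loa = 1.96 * diff.std(ddof=1)                  # half-width of the agreement range
print(f"bias = {bias:.2f} cm, 95% limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}] cm")
```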
Article
Full-text available
The evaluation of soil impact of forest operations has been done using professional platforms and time-consuming traditional methods. However, today low-cost LiDAR technology may achieve a potentially effective 3D mapping of soil impact. This work aimed at evaluating the accuracy of smartphone and GeoSLAM Zeb-Revo LiDAR platforms, by comparing the scanned data to a manual reference. Manual measurements using a tape were taken on four sample plots to obtain reference data, followed by scanning with LiDAR platforms to obtain data in the form of point clouds. CloudCompare was then used to process the LiDAR data, and the Bland and Altman's method was used to check the agreement between the manually taken and scanned data. The results showed that the low-cost LiDAR technology of iPhone has the potential for mapping and estimating soil impact with a high accuracy. The Mean Absolute Error was estimated at 0.64 cm for the iPhone measurements with SiteScape App, while the figure ranged from 0.68 to 0.91 cm for the iPhone measurements done with 3D Scanner App. Zeb-Revo measurements, however, had an estimated MAE of 0.61 cm. The Root Mean Squared Error was estimated at 0.95 cm for the iPhone measurements with SiteScape, whereas the iPhone with 3D Scanner App and Zeb-Revo measurements produced RMSEs of 0.99-1.51 cm and 1.11 cm, respectively. These findings might provide the basis for further studies on the applicability of low-cost LiDAR technology to larger sample sizes and different operating conditions.
... These predicted probabilities facilitate interpretation of the results by providing estimates of how likely respondents were to report each level of the dependent variable (e.g., believing "too little" is spent on education) in different time periods. Robust standard errors were used in all models to account for potential heteroskedasticity, which is common in cross-sectional data and can lead to inefficient estimates and invalid statistical inference if not addressed (White, 1980). The analysis specifically sought to identify significant temporal trends in these dependent variables, with particular attention to whether perceptions about the value of higher education have become more positive or negative over time. ...
Article
This study examines the evolving perceptions of higher education's value across four decades (pre-1980 to 2020s) in the United States, analyzing data from the General Social Survey (N=34,388). Using ordered logistic regression models, this research investigates three dimensions of higher education's perceived value: public support for education spending, financial satisfaction among college graduates, and happiness levels of those with higher education credentials. The regression models, which control for demographic and socioeconomic factors, reveal a paradoxical pattern in how higher education's value has transformed over time: while public support for education spending has increased significantly over time and financial satisfaction among college graduates has remained relatively stable, happiness among the college-educated has declined dramatically in recent years, particularly in the 2020s. The regression results show that compared to the pre-1980 period, the odds of believing more should be spent on education were significantly higher in subsequent decades, reaching 2.57 times higher in the 2020s (β = 0.944, p < 0.001). For college graduates specifically, this effect was even stronger, with odds 2.82 times higher in the 2020s (β = 1.035, p < 0.001). In contrast, regression models examining financial satisfaction among college graduates showed no statistically significant differences across time periods, suggesting stability in economic returns despite changing conditions. Most strikingly, happiness models showed significant positive coefficients for the 1990s (β = 0.179, p < 0.05) and 2000s (β = 0.256, p < 0.01) compared to pre-1980, but a large negative coefficient for the 2020s (β = -0.785, p < 0.001), indicating a dramatic decline in subjective well-being. Additionally, the analysis of return on educational investment revealed a substantial decrease in respondents achieving high income without college degrees (from 43.82% to 12.90%) alongside an increase in college graduates experiencing low returns (from 13.57% to 23.63%). These findings suggest that higher education has become increasingly necessary yet decreasingly sufficient for ensuring positive life outcomes. The study contributes to theoretical understandings of education's evolving social contract and has implications for educational institutions and policymakers navigating the changing landscape of higher education's value proposition in contemporary society.
... A series of diagnostic checks were performed to ensure data quality and robustness. We have used the robust standard error technique to address possible heteroscedasticity and autocorrelation among observations pertaining to the same country (White, 1980). ...
Article
Full-text available
Purpose Intrapreneurship is to the organization what entrepreneurship is to the economy, yet much remains to be understood about the role of individuals’ human capital behind intrapreneurship. While previous research suggests that intrapreneurs can be “made”, extant research have predominantly focused on the role of formal education and training. As a result, the impact of informal training, often more prevalent in intrapreneurship, remains largely unexplored. Drawing on human capital theory (HCT), this study aims to examine the impact of various training sources on intrapreneurship. Design/methodology/approach This study employs secondary data from the Global Entrepreneurship Monitor across 36 countries. Hypotheses are tested using logistic regression on a sample of 2,141 observations. Findings This study finds that both formal and informal training sources matter for intrapreneurial engagement. Particularly, the findings showcase the positive impact of training provided by local business associations, past/present employers and online sources, offering novel insights into the importance of informal training. Originality/value This study is among the first attempts to examine the impact of formal and informal training simultaneously. It contributes to the ongoing conversation whether intrapreneurs are born or made, providing empirical evidence that intrapreneurial skills are indeed learnable through diverse forms of training. While formal training remains important, our findings suggest that informal training obtained outside formal education may be more prevalent. In addition, while traditionally focused on enhancing human capital to meet the expected job performance, this study extends the application of HCT by highlighting the role of training as a means toward new opportunities, namely, intrapreneurship. Practical implications are also discussed.
... To compare utility scores from the EQ-5D and EQ VAS across different sample group characteristics, independent samples t-tests were employed. The association between these factors and caregivers' HRQoL was examined using multiple linear regression analysis, with robust standard errors [16]. An alpha of 0.05 was applied to determine statistical significance. ...
... Notes: (i) the Akaike (Akaike, 1974) information criterion and the Schwarz (Schwarz, 1978) criterion were used to select the number of lags required in the cointegration test; (ii) all the model specifications pass the diagnostic tests: the Ramsey (Ramsey, 1969) test for specification; the Jarque-Bera (Jarque and Bera, 1980) test for normality; the Breusch-Godfrey (Breusch-Godfrey, 1978) test for serial correlation; the White (White, 1980) test for heteroscedasticity; the ARCH-LM (Engle, 1982) test for autoregressive conditional heteroscedasticity; the CUSUM and CUSUM-square tests (Brown et al., 1975) confirm the stability of the models; (iii) values in parentheses are t statistics; (iv) significance levels denoted by ***(1%), **(5%), and *(10%). ...
... Heteroscedasticity denotes the uneven distribution of error variances among observations, leading to "heteroskedastic" residual variance. Methods to detect this phenomenon include tests for constancy of error variance, such as Cameron and Trivedi's IM-test decomposition [136] and White's General Heteroscedasticity test [137]. In this study, the White test was employed (Table 1), revealing rejection of the null hypothesis (H₀) and indicating heteroscedasticity in the model (χ² = 3.3×10⁵, p < 0.001). ...
Article
Full-text available
This study examines the relationship between integrated reporting quality (IRQ) disclosures, corporate governance quality (CGQ), and the implied cost of equity capital (ICC) in developed markets, focusing on Australia and New Zealand. The increasing adoption of integrated reporting and its potential implications for firms' ICC motivates this research. Moreover, the study highlights the role of IRQ in mitigating information asymmetry between firms and investors, emphasizing the need for high-quality disclosures. Using a quantitative approach with panel data analysis, the research analyzes a sample of the top 174 companies by Standard and Poor's market capitalization in Australia and New Zealand from 2018 to 2022, encompassing 870 observations post-IRQ implementation. Statistical methods, including fixed-effects, IV2SLS, two-step system-GMM, pooled OLS, and median quantile regression, were applied to ensure robust findings. The results reveal a significant negative relationship between IRQ disclosure and ICC, with CGQ playing a moderating role in strengthening this association. Consistent with agency theory, the findings suggest that firms issue more information to reduce information asymmetry, which in turn lowers the cost of capital. Therefore, more comprehensive firm reporting, including information about strategy and risks, increases investors' confidence and hence may reduce the cost of capital. This study provides valuable insights for regulators and policymakers by emphasizing the importance of integrated reporting frameworks and robust corporate governance practices to promote transparency, reduce information asymmetry, and optimize capital allocation efficiency in developed markets.
... Homoscedasticity: to assess whether the residuals have constant variance, we further perform the Breusch-Pagan (Breusch & Pagan, 1979) and White (White, 1980) tests. Table 39 displays the code snippet, followed by a residuals-vs.-fitted scatter plot. ...
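The book's Table 39 is not reproduced here, but the same two checks can be sketched in Python with statsmodels (synthetic data, illustrative names):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(3)
x = rng.normal(size=300)
X = sm.add_constant(x)
# Error scale depends on |x|, so both tests should tend to reject H0.
y = 2 + 0.5 * x + rng.normal(scale=1 + 0.5 * np.abs(x), size=300)

fit = sm.OLS(y, X).fit()
for name, test in [("Breusch-Pagan", het_breuschpagan), ("White", het_white)]:
    lm, lm_p, f, f_p = test(fit.resid, X)
    print(f"{name}: LM = {lm:.2f}, p = {lm_p:.4f}")
```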
Book
This book provides a comprehensive guide to econometric modeling, combining theory with practical implementation using Python. It covers key econometric concepts, from data collection and model specification to estimation, inference, and prediction. Readers will explore linear regression, data transformations, and hypothesis testing, along with advanced topics like the Capital Asset Pricing Model and dynamic modeling techniques. With Python code examples, this book bridges theory and practice, making it an essential resource for students, finance professionals, economists, and data scientists seeking to apply econometrics in real-world scenarios.
... To detect heteroscedasticity, there are several similar tests. Among these, the White [14] test is designed to detect heteroscedasticity, which undermines the consistency of the conventional estimates of the various parameters used in linear regressions. We can conclude from Table 6 that the null hypothesis cannot be rejected, indicating homogeneity of variances with an identical distribution of the model errors. ...
Article
Full-text available
This paper examines the impact of capital flight on economic growth in Africa, highlighting the crucial role of institutional quality. Through an econometric analysis of data from 28 African countries observed during the 2000-2022 period, it is shown that capital flight significantly hinders growth by reducing the resources available for economic development. However, strong institutions can mitigate this negative impact through transparent and effective governance, notably by reducing corruption and enhancing political stability. The study recommends improving institutional quality to limit illicit financial flows and promote sustainable growth, while also emphasizing significant income-based disparities between countries.
... This estimator can take account of the nonindependence of observations resulting from cluster sampling and correct the potential bias in estimation that may result from sampling differences, and it has been shown to provide robust estimates of standard errors (Huber, 1967; Rogers, 1993; White, 1980). Our results remained robust even when these control variables were not included in the models. ...
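A hedged sketch of the cluster-robust ("sandwich") standard errors the snippet refers to, using statsmodels on synthetic country-clustered data (all names illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, n_countries = 600, 30
df = pd.DataFrame({
    "country": rng.integers(0, n_countries, n),
    "x": rng.normal(size=n),
})
# Shared country-level shock induces within-cluster correlation.
country_effect = rng.normal(size=n_countries)[df["country"]]
df["y"] = 1 + 0.8 * df["x"] + country_effect + rng.normal(size=n)

fit = smf.ols("y ~ x", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]}
)
print(fit.bse)  # standard errors robust to within-country correlation
```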
Article
Full-text available
When first joining an organization, newcomers need to obtain information about relationships and tasks, as well as about the organization itself. Although many scholars have emphasized the role of information provided by organizational insiders (supervisors and coworkers) in facilitating newcomers' successful adjustment to the organization, the meaningful role of information from sources external to the organization has rarely been included in this line of research. In this study, we propose that both social media and customers can provide information about organizational performance and social image. Based on affective events theory and two fundamental social judgements of competence and warmth, we explore how positive information about organizational performance and social image from social media and customers, along with their interactive effects, affect newcomers' learning behaviours and socialization outcomes through promoting their pride in the organization. In an experiment and a four‐wave, two‐source survey, the results show that positive information from social media and customers plays a critical role in newcomer socialization. We discuss the implications for theory and practice.
... Finally, we computed a multiple regression model predicting psychological distress from all variables. As the assumption of homoskedasticity was not confirmed for all regression models, we employed heteroskedasticity-consistent, robust estimations using the HC3 method in all regression analyses (Hayes & Cai, 2007; Long & Ervin, 2000; White, 1980). Where the assumption of normality of residuals was violated, we conducted bootstrapped regression analyses with 5000 bootstrap samples to assess the stability of the original regression models' results. ...
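For reference, the HC3 correction mentioned in the snippet is available in statsmodels by passing cov_type="HC3" when fitting; a minimal synthetic-data illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=200)
X = sm.add_constant(x)
y = 1 + 0.3 * x + rng.normal(scale=1 + np.abs(x), size=200)

ols = sm.OLS(y, X)
print("classical SEs:", ols.fit().bse)
print("HC3 SEs:      ", ols.fit(cov_type="HC3").bse)
```

HC3 rescales each squared residual by (1 - hᵢᵢ)², where hᵢᵢ is the leverage, which Long and Ervin (2000) recommend in smaller samples.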
Article
Full-text available
Background Characterized by uncertainty and recurring periods of social isolation, the COVID-19 pandemic resulted in increases of loneliness and distress in young adults, such as university students. Despite the lifting of the last restrictions in Germany in April 2023, the state of mental health in vulnerable groups after the three-year global crisis remains to be investigated. Therefore, we aimed to assess university students’ mental health after the pandemic. Method Between April and July 2023, N = 886 university students throughout Germany participated in a fully anonymous cross-sectional online survey. Psychological distress (BSI; Brief Symptom Inventory), loneliness (LS-SOEP; Loneliness Scale), and emotion regulation strategies (ERQ; Emotion Regulation Questionnaire) were assessed by standardized questionnaires, and mental health was compared to a survey of students in April 2020 ( N = 1,062). Results Unexpectedly, we found higher levels of distress in 2023 than in 2020. Overall, R ² adj = 41% of variance in psychological distress was accounted for in a multiple linear regression, with loneliness emerging as the most important predictor. Additionally, emotion regulation, gender identity, and health behaviors such as keeping daily routines, sufficient sleep, and regular exercise were significant predictors. Analyses of variance (ANOVAs) revealed that students with past or present mental health conditions were significantly lonelier than those without. Conclusion These findings highlight the ongoing mental health challenges of university students in the aftermath of the COVID-19 pandemic, identifying non-binary and female students, as well as students with current or past mental health conditions as particularly lonely and distressed.
... The pooled cross-sectional analysis allowed us to increase the number of observations and to assess the relationship across time (Wooldridge, 2009). In order to address any concerns on the heteroscedasticity and serial correlation in the error terms, the estimation included heteroscedasticity-robust standard errors, in line with White (1980). ...
Article
Full-text available
Purpose This longitudinal study explores how socio-demographic factors, including ethnicity and Living Standards Measures (LSM), alongside trust in banks, influence mobile banking adoption in South Africa. Design/methodology/approach Using 10 years of FinScope survey data, logistic regression analyses were conducted to assess the impact of socio-demographics and trust on mobile banking use. Findings Trust in banks positively influenced adoption, though its economic impact was modest. Individuals aged 30–39, higher-income earners and those with advanced education were most likely to adopt mobile banking, with income increasing adoption probability by 9.44%. Black individuals showed higher adoption rates compared to other ethnic groups. Practical implications Findings enable banks and fintechs to tailor strategies for targeted consumer segments while providing evidence for policymakers to address socio-demographic barriers to enhance financial inclusion. Originality/value This study uniquely integrates socio-demographic factors with trust in banks, offering fresh insights into financial inclusion and the distinct roles of ethnicity and living standards in mobile banking adoption.
... (1)-(5) was heteroskedasticity, particularly with respect to income. We adjusted our estimates following White (1980) so that they were robust to heteroskedasticity. ...
Article
Full-text available
Deliberation-the process of group discussion and consideration-has been increasingly integrated to valuation of ecosystem services. In an online stated preference survey on the Fanjing Mountain National Nature Reserve in the Southwestern China, we assessed participants' willingness to pay (WTP) for cultural services (non-material benefits gained through interacting with nature, including its ecological and geological elements and characteristics) before and after deliberation. However, among the initial participants, only a subset completed deliberation and the full survey. This dropout of participants may occur in any deliberation-based valuation survey, introducing sample selection bias for estimating the impacts of deliberation. To control sample selection bias, we applied the Heckman correction approach which uses the probability of a given observation being included in the sample based on its other observed characteristics. Overall, deliberation led to a more concentrated distribution of WTP and reduced the effects of sociodemographic drivers of WTP. Deliberation also had varied impacts on different participants' WTP, including increases, decreases, and no change. The median WTP remained unchanged, although the mean WTP became significantly lower after deliberation (even when controlling for sample selection bias that significantly influenced the effects of deliberation). The use value of the Reserve's cultural services for visitors was estimated at approximately 520 million CNY per year based on the pre-deliberation mean WTP, and 314 million CNY based on the post-deliberation mean WTP. This value reflects the Reserve's natural, cultural, and economic significance and the need for continued support for both nature conservation and sustainable tourism management.
... First, the absence of autocorrelation in the errors is confirmed by the non-significance of the Wooldridge test (2002) [34], with a p-value exceeding 10%, both for the relationship between the Human Development Index (HDI) and the digital economy and financial technology indicators, as well as for the relationship between the Environmental Performance Index (EPI) and these same explanatory variables. Second, the absence of heteroscedasticity is validated by the White test (1980) [35], whose non-significant statistics indicate a homogeneous variance of the errors. Finally, we rule out any endogeneity issues using the Durbin-Wu-Hausman test (1978), whose non-significant results confirm the exogeneity of the explanatory variables in both relationships studied. ...
Article
Full-text available
Purpose: Our study investigates the combined effects of financial technologies (fintech) and the digital economy on sustainable development, considering geopolitical risks as a moderating factor. Origin: While sustainable development is a global imperative, the integrated roles of digital transformation and fintech remain insufficiently explored. Our research addresses this gap by analyzing their impacts on socioeconomic advancement and environmental sustainability across diverse contexts. Methodology: Employing panel data from 30 developed and developing countries between 1990 and 2023, we assess sustainable development using the Environmental Performance Index (EPI) and the Human Development Index (HDI). Independent variables include proxies for the digital economy (e.g., internet usage, mobile subscriptions, and high-tech exports) and fintech (e.g., digital payments, digital currency, and peer-to-peer lending). The Geopolitical Risk Index (GPRI) is used to evaluate the effect of political instability. We apply generalized least squares (GLS) and fixed-effects estimation (within) to ensure robustness. Findings: Our results indicate that digital transformation and fintech significantly foster socioeconomic development and environmental performance, even amidst geopolitical instability. Key variables such as digital payments and internet access show substantial positive impacts, providing valuable insights for policymakers aiming to enhance resilience and sustainability. Contributions: Our article offers a comprehensive evaluation of how the digital economy and fintech jointly influence sustainable development under geopolitical risks, providing a nuanced understanding for policymakers and researchers.
... Depending on the error structure specified in equation (1), the study estimates a set of six such equations. Multicollinearity will be checked using the correlation matrix, while heteroskedasticity and autocorrelation will be checked using White's (36) and Durbin-Watson tests, respectively. ...
... Homogeneity of variance: the variance of the errors across different time points and locations is constant (homoskedasticity). If the error variance changes (heteroskedasticity), this could indicate problems within the model and affect the accuracy of parameter estimation (Cribari-Neto & Galvão, 2003; Engle, 1982; White, 1980). Autocorrelation: the assumption is that there is a significant correlation between the values of the variable at adjacent times and locations. ...
Article
Full-text available
The GSTARIMA (Generalized Space–Time Autoregressive Integrated Moving Average) model is commonly used to analyse time series and spatial data with temporal and spatial dependencies. This paper focuses on estimating the autoregressive and moving average parameters of the GSTARIMA model using Maximum Likelihood Estimation (MLE). We theoretically demonstrate the unbiasedness of these estimates, proving that the expected values of the estimates match the true parameters. Empirical experiments further verify this property, both before and after applying Deep Neural Network (DNN) interventions to correct model errors. The results show that the parameter estimates remain unbiased, and error properties (zero mean and constant variance) are preserved even after DNN processing. This study highlights the robustness of MLE in providing unbiased estimates within the GSTARIMA framework, even when integrated with machine learning techniques.
... Another widely used test for homoscedasticity, which does not require knowledge of the form of the heteroscedasticity, was proposed by White (1980). This test compares the covariance estimates of the OLS coefficients under homoscedasticity and under heteroscedasticity. ...
Article
Full-text available
Heteroscedasticity in regression analysis occurs when the variance of the error term changes across different levels of the independent variable(s), leading to inefficient estimates and incorrect inference. In Generalized Linear Models (GLMs), heteroscedasticity significantly impacts prediction and inference accuracy. This study evaluates White's test for detecting heteroscedasticity in GLMs through Monte Carlo simulations. We investigate the test's power, Type II errors, and Type I errors at different sample sizes (100, 250, and 500). Our findings reveal that the White test performs well in detecting strong heteroscedasticity, particularly for exponential heteroscedasticity structures (EHS), but poorly for weaker forms like linear heteroscedasticity structures (LHS) and square-root heteroscedasticity structures (SQRTHS). While increased sample size enhances performance, the test remains susceptible to over-rejection of homoscedasticity. We recommend cautious use, especially with weaker heteroscedasticity or specific structures. For improved performance, use the test with moderate to high sample sizes (e.g., n = 500), particularly for EHS and quadratic heteroscedasticity structures (QHS). Alternative tests may be considered by researchers with limited sample sizes or those dealing with LHS and SQRTHS. Finally, we emphasize the importance of assessing the underlying structure of heteroscedasticity in the dataset to choose the most suitable test and interpretation.
... Researchers often apply a statistical procedure to their data without validating the assumptions of the procedure they chose to adopt. If one or more of the assumptions of a given statistical procedure are violated, that procedure is likely to produce misleading results (White, 1980). In many real-life data sets and applications, the variances of the errors vary from one observation to another, which is commonly referred to as heteroscedasticity. ...
Article
Full-text available
This study provides a comprehensive evaluation of the Breusch-Pagan test's performance in detecting heteroscedasticity across various structures and levels, addressing a significant gap in the existing literature. Through Monte Carlo simulations, we investigate the test's power and Type II errors (σ ≠ 0), and its Type I errors (σ = 0) in confirming homoscedasticity assumptions, at different sample sizes (100, 250, and 500). Our objectives include assessing the test's ability to detect heteroscedasticity at various levels and structures, examining the impact of sample size on its performance, comparing its performance across different structures, and identifying its limitations and potential biases. Our findings reveal that the Breusch-Pagan test's performance varies across different heteroscedasticity structures and levels, with poor detection of low-level heteroscedasticity but improved performance at higher levels, particularly for exponential heteroscedasticity structures (EHS). While increased sample size enhances the test's performance, it remains inadequate for linear heteroscedasticity structures (LHS) and square-root heteroscedasticity structures (SQRTHS). Based on our results, we recommend cautious use of the Breusch-Pagan test, especially when dealing with low-level heteroscedasticity or specific structures like LHS and SQRTHS. We suggest using the test with moderate to high sample sizes for improved performance, particularly for EHS and quadratic heteroscedasticity structures (QHS). For researchers with limited sample sizes or dealing with LHS and SQRTHS, alternative tests for heteroscedasticity may be considered. Finally, we emphasize the importance of assessing the underlying structure of heteroscedasticity in the dataset to choose the most suitable test and interpretation.
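A compact sketch of the kind of Monte Carlo evaluation described in this and the preceding abstract: rejection rates of the Breusch-Pagan and White tests under an exponential-type heteroskedasticity structure. The design and settings below are illustrative assumptions, not those of the studies.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

def rejection_rates(n=250, reps=1000, gamma=0.5, alpha=0.05, seed=6):
    """Share of replications in which each test rejects at level alpha."""
    rng = np.random.default_rng(seed)
    hits = {"BP": 0, "White": 0}
    for _ in range(reps):
        x = rng.normal(size=n)
        X = sm.add_constant(x)
        # gamma = 0 gives homoskedastic errors (H0 true).
        y = 1 + x + rng.normal(scale=np.exp(gamma * x), size=n)
        resid = sm.OLS(y, X).fit().resid
        hits["BP"] += het_breuschpagan(resid, X)[1] < alpha
        hits["White"] += het_white(resid, X)[1] < alpha
    return {k: v / reps for k, v in hits.items()}

print(rejection_rates())           # power under heteroskedasticity
print(rejection_rates(gamma=0.0))  # size: should be near alpha
```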
... We performed NLA using previously described methods. 121,122 Briefly, FC values at each edge were studentized using the cluster-robust sandwich estimator approach described above 185,186 to obtain edge-level FC t-values. The average FC t-value of edges within each network pair (e.g. ...
Preprint
Full-text available
Prescription stimulants such as methylphenidate are being used by an increasing portion of the population, primarily children. These potent norepinephrine and dopamine reuptake inhibitors promote wakefulness, suppress appetite, enhance physical performance, and are purported to increase attentional abilities. Prior functional magnetic resonance imaging (fMRI) studies have yielded conflicting results about the effects of stimulants on the brain's attention, action/motor, and salience regions that are difficult to reconcile with their proposed attentional effects. Here, we utilized resting-state fMRI (rs-fMRI) data from the large Adolescent Brain Cognitive Development (ABCD) Study to understand the effects of stimulants on brain functional connectivity (FC) in children (n = 11,875; 8-11 years old) using network level analysis (NLA). We validated these brain-wide association study (BWAS) findings in a controlled, precision imaging drug trial (PIDT) with highly-sampled (165-210 minutes) healthy adults receiving high-dose methylphenidate (Ritalin, 40 mg). In both studies, stimulants were associated with altered FC in action and motor regions, matching patterns of norepinephrine transporter expression. Connectivity was also changed in the salience (SAL) and parietal memory networks (PMN), which are important for reward-motivated learning and closely linked to dopamine, but not the brain's attention systems (e.g. dorsal attention network, DAN). Stimulant-related differences in FC closely matched the rs-fMRI pattern of getting enough sleep, as well as EEG- and respiration-derived brain maps of arousal. Taking stimulants rescued the effects of sleep deprivation on brain connectivity and school grades. The combined noradrenergic and dopaminergic effects of stimulants may drive brain organization towards a more wakeful and rewarded configuration, explaining improved task effort and persistence without direct effects on attention networks.
... These assessments may be complemented by statistical testing. Among other possibilities, the presence of autocorrelation in the residuals may be detected through the Durbin-Watson test; heteroscedasticity (i.e., non-constant variance of the residuals) may be detected using the Breusch-Pagan and the White tests (Breusch and Pagan, 1979;White, 1980). On R software, these tests can be found in the 'lmtest', 'car' or 'skedastic' packages (Farrar, 2024;Zeileis and Hothorn, 2002). ...
Article
Full-text available
While empirical modelling remains a popular practice in environmental sciences, an alarming number of misuses of correlation- and regression-based techniques are encountered in recent research, although these techniques are described in courses and textbooks. This position paper reviews the most common issues, and provides theoretical background for understanding the interests and limitations of these methods, based on their underlying assumptions. We call for a reconsideration of misleading practices, including: the application of linear regression to data points that do not display a linear pattern, the failure to pinpoint influential points, the inappropriate extrapolation of empirical relationships, the overrated search for “statistical significance”, the pooling of data belonging to different populations, and, most importantly, calculations without data visualization. We urge reviewers to be vigilant on these aspects. We also recall the existence of alternative approaches to overcome the highlighted shortcomings, and thus contribute to a more accurate interpretation of the results.
Article
We investigate intraday return dynamics in currency markets around FOMC announcements. Using comprehensive high-frequency exchange rate data, we reveal that post-FOMC announcement returns are significantly low, cancelling out approximately 65% of positive pre-FOMC announcement drifts. These post-announcement reversals mainly result from uncertainty resolution and are mostly realized between 12 and 24 hours after FOMC announcements. This return behavior is significantly related to the negative jump volatilities driven by FOMC announcements. Our findings suggest that our signed jump volatility measures capture informational shocks and uncertainty resolutions and tend to be high under illiquid market conditions. (JEL G14, G15)
Article
Full-text available
How the price of giving affects charitable donations has been subject to extensive scrutiny in the literature, but the empirical evidence so far has been inconsistent. We conduct a meta-analysis to synthesize the empirical findings on the price-donation relationship, estimate a generalized effect and explore underlying moderators. After combining 386 effect sizes from 52 existing studies, we find that the price of giving generally has a significant, negative association with the level of charitable donations. Further meta-regression analysis suggests that this price effect on charitable donations is moderated by donor type and data year. Overall, donors are sensitive to the price of giving, and the price effect varies under certain circumstances.
Article
Reproducibility and replicability of study results are crucial for advancing scientific knowledge. However, achieving these goals is often challenging, which can compromise the credibility of research and incur immeasurable costs for the progression of science. Despite efforts to standardize reporting with guidelines, the description of statistical methodology in manuscripts often remains insufficient, limiting the possibility of replicating scientific studies. A thorough, transparent, and complete report of statistical methods is essential for understanding study results and mimicking statistical strategies implemented in previous studies. This review outlines the key statistical reporting elements required to replicate statistical methods in most current veterinary pharmacology studies. It also offers a protocol for statistical reporting to aid in manuscript preparation and to assist trialists and editors in the collective strive for advancing veterinary pharmacology research.
Article
Full-text available
Tacit knowledge is considered a source of competitive advantage for organizations. The present paper used an international esports competition—the Fortnite Champion Series (FNCS)—to determine the contexts in which shared tacit knowledge aids performance. We used censored-panel-data tobit models to analyze a sample of 4277 observations relating to teams that qualified for the European final of the 2022 FNCS Duo tournament. The tournament format allowed us to determine whether the nature (degree of coordination required) and context (pressure on the teams, strength of the competition) of a task modify the effect of tacit knowledge on team performance. Shared tacit knowledge had a positive effect on overall team performance but a negative effect on individual tasks such as kills. In addition, the positive link between shared tacit knowledge and team performance became weaker as the pressure and stakes of the competition increased. Variables such as player talent were more important than tacit knowledge in high-pressure, high-stakes situations. Results showed that several factors impact tacit knowledge’s effect on organizational performance. These factors include the pressure and stakes of the context in which a task is carried out and the degree of coordination a task requires. Esports is a valuable new setting in which to test hypotheses relating to organizational performance, as it provides new and precise ways of controlling for the effects of variables such as individual talent.
Article
Full-text available
The perception of a voice in the absence of an external auditory source—an auditory verbal hallucination—is a characteristic symptom of schizophrenia. To better understand this phenomenon requires integration of findings across behavioural, functional, and neurochemical levels. We address this with a locally adapted MEGA-PRESS sequence incorporating interleaved unsuppressed water acquisitions, allowing concurrent assessment of behaviour, blood-oxygenation-level-dependent (BOLD) functional changes, Glutamate + Glutamine (Glx), and GABA, synchronised with a cognitive (flanker) task. We acquired data from the anterior cingulate cortex (ACC) of 51 patients with psychosis (predominantly schizophrenia spectrum disorder) and hallucinations, matched to healthy controls. Consistent with the notion of an excitatory/inhibitory imbalance, we hypothesized differential effects for Glx and GABA between groups, and aberrant dynamics in response to task. Results showed impaired task performance, lower baseline Glx and positive association between Glx and BOLD in patients, contrasting a negative correlation in healthy controls. Task-related increases in Glx were observed in both groups, with no significant difference between groups. No significant effects were observed for GABA. These findings suggest that a putative excitatory/inhibitory imbalance affecting inhibitory control in the ACC is primarily observed as tonic, baseline glutamate differences, rather than GABAergic effects or aberrant dynamics in relation to a task. Supplementary Information The online version contains supplementary material available at 10.1038/s41598-025-03644-x.
Chapter
In this chapter we translate the theoretical concepts from earlier chapters into practical application by developing and evaluating multiple regression models. The objective is to expand our theoretical knowledge into empirical analysis by implementing CAPM and APT followed by various statistical analysis. Each model undergoes a series of diagnostic checks, including multicollinearity assessment, linearity verification, residuals versus fitted plots, tests for homoscedasticity, and analysis of error independence. These diagnostic procedures are critical to ensure the validity and robustness of the regression outcomes. This chapter serves as a bridge between theory and real-world data analysis, reinforcing the value of rigorous statistical methods in economic and financial research.
Article
Although there is broad consensus on a robust momentum effect in Australia, the interaction between momentum and capital structure has been underexplored in the literature. This paper explicitly examines whether capital structure promotes momentum trading in the Australian stock market. The data sample includes over 1800 stocks listed on the Australian Stock Exchange from 2000 to 2023. We construct momentum portfolios using the monthly rolling and overlapping techniques. Two ratios are calculated to measure the firms’ capital structure: the book‐value and market‐value financial leverages. Irrespective of the capital structure measure, the superior returns of the Winner quintile are concentrated in highly leveraged stocks. In contrast, the high‐leverage Loser performs worst among the Loser quintile. The return of momentum strategy enhanced with capital structure is more than 1.5 times the original momentum profit. The risk‐adjusted analysis paints a similar return pattern. Additionally, we observe high volatility in earnings and cash flows for highly leveraged stocks, leading to significant mispricing. Thus, the interaction between momentum and capital structure may stem from increased misvaluation, consistent with a behavioral explanation.
Article
Full-text available
Background Mammary adipocyte size reflects both local excess of adiposity and adipose tissue dysfunction relevant to breast cancer biology. Objective To identify modifiable factors that are associated with both traditional adiposity indicators and mammary adipocyte size in women with breast cancer, and to compare the individual and simultaneous effect of these factors. Methods Data were collected prospectively from 160 consecutive breast cancer patients (biobank of a breast cancer reference center): factors that may influence body weight and composition (telephone interview), dietary intakes (DHQ-I) and adiposity measurements (anthropometric indices and mammary adipocyte size). Relationships between determinants of adiposity identified in the literature were summarized in a directed acyclic graph. Principal component analysis was conducted to capture dietary intakes from major nutrient intakes. Robust univariate and multivariable linear regression models were used to estimate the associations. Results Menopausal status, ever smoking, tumour grade and higher weight at 18 years old were consistently associated with higher adiposity. Higher animal fat intakes was consistently associated with higher body mass index (BMI). High educational attainment was consistently associated with lower BMI and waist-to-height ratio. Higher physical activity was associated with lower adiposity and adipocyte cell size, whereas higher age was associated with higher adiposity and adipocyte cell size only in univariate models. Only menopausal status was consistently associated with higher mammary adipocyte size. Conclusions While excess adiposity is a complex condition that cannot be attributed to a single factor, menopausal status seems to be the main determinant of excess adiposity in women with breast cancer and the only independent determinant of mammary adipocyte size. Among lifestyle factors, ever smoking was the strongest independent determinant of higher adiposity, followed by high intakes of fats, particularly animal fats. If targeted efficiently, some of these modifiable factors could reduce the burden among breast cancer patients.
Article
This study aimed to create clusters of elite athletes based on their perceptions of organisational culture. Elite athletes’ development is framed by an interplay of several stakeholders who provide a social environment in which an organisational culture emerges. Perceptions of this culture are studied. The competing values framework with market, clan, hierarchy, and adhocracy organisational cultures informed this study. Data from German elite athletes were collected using an online survey (n = 1,115). A two-step cluster analysis revealed that four distinct organisational culture clusters were prevalent. The findings revealed that all four values existed in the four organisational culture clusters. However, one or two cultural types (market, clan, hierarchy, and adhocracy) were emphasised. The first derived cluster was driven by all cultures and favoured an Analytical Environment. The second cluster, labelled Competitive-Personal Environment, was very much characterised by the market culture and also the clan culture. Elements of the clan, adhocracy, and hierarchy cultures – less strong than in the first culture – were found in Cluster 3 which was named Development Environment. The fourth cluster perceived their environment as performance-driven (market culture), yet, with an insufficient provision of adequate resources. Athletes in this cluster were more likely to end their sporting career. This cluster was named Competitive-Disillusioning Environment. This work contributes to the growing discourse on how organisational culture can be leveraged to create sustainable, athlete-centred development systems, guiding policymakers in crafting evidence-based strategies for elite sport environments.
Article
Research Question/Issue This study investigates the effects of independent outside directors on the performance of new venture firms. We also study several factors moderating the relationship between the percentage of independent directors on boards and the new ventures' performance. Research Findings/Insights Using a large‐scale sample of 5183 Danish new ventures active in 2015–2020, we find support for our hypothesis that the percentage of independent directors has an inverted U‐shaped effect on new venture performance. The results also show that firm‐level factors, that is, founder equity and venture age, cushion and amplify this nonlinear effect. Theoretical/Academic Implications Drawing on resource dependence theory as an exchange theory, we propose that increasing the proportion of independent directors yields linearly increasing gains (i.e., resource‐provisioning benefits) but also induces costs of sacrificing managerial control, which accelerate nonlinearly. Our findings show that a smaller proportion of independent directors benefits new ventures, but as their proportion grows, the performance gains become smaller, eventually leading to negative marginal effects. Practitioner/Policy Implications Practitioners will find our findings of interest because, contrary to the recommendations of policymakers and many prior studies, they show that having a high proportion of independent directors on the boards of new ventures does not always improve new venture performance. Founders of new ventures should be aware of the trade‐off between the benefits of obtaining additional resources and the costs of sacrificing managerial control.
Article
Full-text available
The international Financial Crisis of 2008/2009 is used as a case study, with a unique dataset of 210 countries to examine potential resilience factors and with particular focus on population size, along with other pre-crisis determinants. The cross-section regressions suggest that smaller country size was associated with higher vulnerability (ceteris paribus), which is reflected in a larger initial impact magnitude of the Financial Crisis shock. The smallness disadvantage began to unfold for countries with a population size of between 1 and 10 million, and was particularly severe for the very small states with considerably less than 1 million inhabitants. This result emphasizes that small state characteristics drive economic resilience and volatility beyond factors already identified in the literature. There is also significant evidence that the shock impact persistence was prolonged by smaller country size. Thus, the disadvantage of smallness in terms of higher exposure has dominated the advantages of small countries in terms of flexibility and adaptation speed. Moreover, smaller states were on average hit earlier.
Article
Full-text available
Over a billion of the world's population reside in slums. However, the configurations of their social and structural ties in these environments remain unsubstantiated. Drawing on a psychosociological rational choice perspective, we introduce theoretical explanations distinguishing slum dwellers' behavior, actions, and attitudes in community asset voucher (CAV) networks. Regression and equation modeling results, based on 19,892 slum dwellers' CAV transactions, reveal intersecting socioeconomic dynamics suggesting that twenty-five percent (25%) of these dwellers are entrepreneurially-minded. They leverage CAVs for consumption and enterprise purposes. They also time when to transact in their slum-based CAV networks. Their central position indicates that they influence CAV transactions. But seventy-five percent (75%) of these dwellers are passive. They accept that they cannot change their seemingly unyielding poverty situation; hence, they only receive and use CAVs for consumption purposes. This stark contrast in behavior and attitude carries academic, economic, and social ramifications.
Article
Introduction Medicare Merit-based Incentive Payment System (MIPS), established by CMS to transition Medicare reimbursement toward value-based care, has faced criticism for its administrative complexity and potential inequities affecting safety-net providers (SNPs). Methods This study analyzed five-year data (2018-2022) to evaluate the performance and financial outcomes of clinicians consistently participating in MIPS, focusing on disparities between SNPs and non-SNPs. Results We found that safety-net specialists were 31% more likely than non–safety-net specialists to consistently receive positive payment adjustments and earned modestly higher average adjustment rates (0.35 percentage points). However, despite this superior performance, safety-net specialists did not achieve greater cumulative financial rewards due to MIPS's percentage-based adjustment structure, which disadvantages clinicians with smaller billing volumes. Our analysis also showed MIPS financial incentives were generally modest—ranging from $300 to $4,000 over five years—far below the estimated $12,000 in annual administrative compliance costs per physician reported in prior research. Conclusion To address these disparities and inefficiencies, policymakers should consider alternative models such as the American Medical Association's proposed Data-Driven Performance Payment System, which reduces administrative burden by simplifying the reporting process and ensures fairer financial rewards by uncoupling incentive payments from billing volume—thereby improving equity for safety-net clinicians.
Chapter
The prices of financial securities are often shaken by large and time-varying shocks. The amplitudes of these price movements are not constant over time. There are periods of high volatility and periods of low volatility. Within these periods, volatility seems to be positively autocorrelated: high amplitudes are likely to be followed by high amplitudes and low amplitudes by low amplitudes. This observation which is particularly relevant for high-frequency data, such as daily stock market returns, implies that the conditional variance of the one-period forecast error is no longer constant (homoskedastic), but time-varying (heteroskedastic) and autocorrelated.
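This clustering is easy to reproduce numerically. The following minimal sketch (our illustration, with hypothetical parameter values) simulates an ARCH(1)-type process, in which the conditional variance of each return depends on the previous squared return; the returns themselves are serially uncorrelated while their squares are positively autocorrelated, exactly the pattern described above.

```python
# A minimal sketch (hypothetical parameters): simulate an ARCH(1)-type
# process to illustrate volatility clustering. sigma2_t = omega + alpha*r_{t-1}^2.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
omega, alpha = 0.1, 0.8
r = np.zeros(T)
for t in range(1, T):
    sigma2 = omega + alpha * r[t - 1] ** 2   # time-varying conditional variance
    r[t] = np.sqrt(sigma2) * rng.standard_normal()

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

print(f"lag-1 autocorrelation of returns:         {acf1(r):+.3f}")     # near zero
print(f"lag-1 autocorrelation of squared returns: {acf1(r ** 2):+.3f}")  # clearly positive
```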
Article
Purpose This study aims to investigate the impact of financial technology (FinTech) disclosure on financial stability for Tunisian conventional banks. Design/methodology/approach This study examines 122 annual reports of Tunisian-listed conventional banks from 2010 to 2022. It uses the z-score to assess financial stability and content analysis to measure FinTech disclosure. Findings The empirical findings reveal that FinTech disclosure differs widely among Tunisian conventional banks. The findings show that the banks were overall financially stable during the study period. Moreover, FinTech disclosure has a negative impact on the bank’s financial stability. Originality/value Setting appropriate keywords for the Tunisian context to assess the FinTech application adds to the existing research trying to measure FinTech in different contexts. This study provides the first empirical evidence of the impact of FinTech disclosure on financial stability of Tunisian banks. It has theoretical and practical implications for both regulators and financial institutions.
Article
It is shown that in a standard linear regression model ordinary least-squares estimators are best linear unbiased if and only if the errors have the same variance and the same nonnegative coefficient of correlation between each pair.
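In matrix form (our notation, not the article's), the characterization says that OLS is BLUE exactly when the error covariance has the equicorrelated, or intraclass-correlation, structure

\[
\operatorname{Var}(\varepsilon) \;=\; \sigma^2\bigl[(1-\rho)I_n + \rho\,\mathbf{1}\mathbf{1}^{\top}\bigr], \qquad 0 \le \rho \le 1,
\]

with a single variance $\sigma^2$ and a single nonnegative correlation $\rho$ shared by every pair of errors.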
Article
In regression analysis with heteroscedastic and/or correlated errors, the usual assumption is that the covariance matrix Σ of the errors is completely specified, except perhaps for a scalar multiplier. This condition is relaxed in this paper by assuming only that Σ has a certain pattern; for example, that Σ is diagonal or partitionable into a diagonal matrix of sub-matrices. The method used for estimating Σ is the standard procedure of equating certain quadratic forms of the observations (in this case, squares and products of residuals from regression) to their expectations, and solving for the unknown variances and covariances. A numerical example illustrates the method.
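For the diagonal pattern the procedure is particularly transparent. With the residual-maker matrix $M = I - X(X^{\top}X)^{-1}X^{\top}$ and residuals $e = My$, the expectation of each squared residual is a known linear combination of the unknown variances (standard algebra, our notation):

\[
E(e_i^2) \;=\; \sum_{j=1}^{n} m_{ij}^2\,\sigma_j^2, \qquad i = 1, \ldots, n,
\]

so equating the observed $e_i^2$ to these expectations yields the linear system $(M \odot M)\,\hat\sigma^2 = e \odot e$ (with $\odot$ denoting elementwise products), which can be solved for the variance estimates whenever the Hadamard square $M \odot M$ is nonsingular.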
Article
A new solution to the problem of variance estimation with one unit per stratum is presented. This method may lead to smaller bias in variance estimation, in many situations, than the methods of ‘collapsed strata’. It requires that we can associate with the strata concomitant variables which are correlated with the strata means. Several numerical examples with one or two concomitant variables are considered.
Article
Two exact tests are presented for testing the hypothesis that the residuals from a least squares regression are homoscedastic. The results can be used to test the hypothesis that a linear [ratio] model explains the relationship between variables as opposed to the alternative that the ratio [linear] specification is correct. The first test is parametric and uses the F-statistic. The second test is nonparametric and uses the number of peaks in the ordered sequence of unsigned residuals. In conclusion, the results of some experimental calculations of the powers of the tests are discussed.
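The parametric test amounts to a simple recipe: order the observations by the variable suspected of driving the variance, omit a central block, fit OLS separately to the two tails, and compare residual variances with an F statistic. A minimal sketch on simulated data follows (split sizes and parameters are hypothetical).

```python
# A minimal sketch of the parametric (F-statistic) test for
# homoscedasticity: compare residual variances from the two tails of
# observations ordered by a suspect variable z (a middle block omitted).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k, n_drop = 120, 2, 20                 # sample size, parameters per fit, omitted middle block
z = np.sort(rng.uniform(1, 5, n))         # data ordered by the suspect variable
X = np.column_stack([np.ones(n), z])
y = X @ np.array([1.0, 2.0]) + z * rng.standard_normal(n)   # error sd grows with z

def ssr(Xs, ys):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    resid = ys - Xs @ beta
    return resid @ resid

n1 = (n - n_drop) // 2
F = (ssr(X[-n1:], y[-n1:]) / (n1 - k)) / (ssr(X[:n1], y[:n1]) / (n1 - k))
p = 1 - stats.f.cdf(F, n1 - k, n1 - k)
print(f"F = {F:.2f}, p-value = {p:.4f}")  # small p rejects homoscedasticity
```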
Article
Let $Y = X\beta + e$ be a Gauss-Markoff linear model such that $E(e) = 0$ and $D(e)$, the dispersion matrix of the error vector, is a diagonal matrix $\Delta$ whose $i$th diagonal element is $\sigma_i^2$, the variance of the $i$th observation $y_i$. Some of the $\sigma_i^2$ may be equal. The problem is to estimate all the different variances. In this article, a new method known as MINQUE (MInimum Norm Quadratic Unbiased Estimation) is introduced for the estimation of the heteroscedastic variances. This method satisfies some intuitive properties: (i) if $S_1$ is the MINQUE of $\sum p_i\sigma_i^2$ and $S_2$ that of $\sum q_i\sigma_i^2$, then $S_1 + S_2$ is the MINQUE of $\sum (p_i + q_i)\sigma_i^2$; (ii) it is invariant under orthogonal transformation; etc. Some sufficient conditions for the estimation of all linear functions of the $\sigma_i^2$ are given. The use of estimated variances in problems of inference on the $\beta$ parameters is briefly indicated.
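For the fully heteroscedastic diagonal case, the MINQUE solution can be computed from the same linear system sketched above for the patterned-covariance entry: solve $(M \odot M)\hat\sigma^2 = e \odot e$. A numerical illustration follows (simulated data; our implementation, valid only when $M \odot M$ is nonsingular, and individual estimates can be noisy or even negative in small samples).

```python
# A minimal sketch of MINQUE for heteroscedastic variances in the fully
# diagonal case: solve (M ⊙ M) sigma2_hat = e ⊙ e, with M the residual maker.
# Our implementation; requires the Hadamard square M ⊙ M to be nonsingular.
import numpy as np

rng = np.random.default_rng(2)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
sigma2_true = np.linspace(0.5, 3.0, n)              # one variance per observation
y = X @ np.array([1.0, 2.0]) + np.sqrt(sigma2_true) * rng.standard_normal(n)

M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # residual-maker matrix
e = M @ y                                           # OLS residuals
sigma2_hat = np.linalg.solve(M * M, e * e)          # elementwise (Hadamard) squares

print(f"mean true variance:   {sigma2_true.mean():.3f}")
print(f"mean MINQUE estimate: {sigma2_hat.mean():.3f}")
```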
Article
A method is suggested for obtaining a multiple linear regression equation which permits the variance, as well as the mean, of normally distributed random variables Y to be a function of known constants $X_1, \ldots, X_p$. The method is applicable to large samples, and yields an approximation of maximum likelihood estimates of regression coefficients, which may be used to construct confidence intervals for the parameters of the normal distribution and tolerance intervals for individual Y. A likelihood ratio test may be used to test this model against the usual homoscedastic least squares model. Two examples of this type of analysis on data from the literature are presented.
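The mechanics can be imitated with a direct likelihood maximization in which both the mean and the variance depend on covariates. The sketch below is ours and swaps in an exponential variance function $\exp(z^{\top}\gamma)$, purely to keep the variance positive during unconstrained optimization; the original article works with a linear variance function.

```python
# A minimal sketch (not the article's exact method): joint ML estimation
# of a mean regression and a variance regression. The exp() variance
# link is our substitution, chosen to keep variances positive.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])      # covariates for the mean
Z = X                                     # covariates for the variance (same here)
beta0, gamma0 = np.array([1.0, 2.0]), np.array([-1.0, 1.0])
y = X @ beta0 + np.exp(0.5 * (Z @ gamma0)) * rng.standard_normal(n)

def negloglik(theta):
    """Negative normal log-likelihood with mean X@beta, variance exp(Z@gamma)."""
    beta, gamma = theta[:2], theta[2:]
    mu, sigma2 = X @ beta, np.exp(Z @ gamma)
    return 0.5 * np.sum(np.log(sigma2) + (y - mu) ** 2 / sigma2)

res = minimize(negloglik, x0=np.zeros(4), method="BFGS")
print("beta_hat :", res.x[:2])    # should be near (1, 2)
print("gamma_hat:", res.x[2:])    # should be near (-1, 1)
```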
Article
The quite general test for heteroskedasticity presented here regresses the absolute values of the residuals obtained by ordinary least-squares on some variable(s). Denoting the O.L.S. residuals by $\hat u$, one obtains, for instance, a regression like $|\hat u| = a + b z + v$, where $z$ is a variable, $a$ and $b$ regression coefficients, and $v$ the residuals of the new regression. We call the acceptance of a non-zero value for both $a$ and $b$ a case of “mixed heteroskedasticity”, which we deem frequent in practice though neglected in handbooks. The paper also summarizes another test due to S. M. Goldfeld and R. E. Quandt and examines the powers of the two by using Monte-Carlo simulations: the new test seems to compare favourably, except perhaps in the case of large samples.
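Operationally the test is two regressions and one t-statistic: fit OLS, regress the absolute residuals on $z$, and examine the significance of the slope $b$. A minimal sketch on simulated data follows (parameters hypothetical).

```python
# A minimal sketch of the absolute-residual regression test: fit OLS,
# regress |residuals| on z, and t-test the slope b of the auxiliary fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 100
z = rng.uniform(1, 5, n)
X = np.column_stack([np.ones(n), z])
y = X @ np.array([1.0, 2.0]) + z * rng.standard_normal(n)   # error sd proportional to z

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
abs_resid = np.abs(y - X @ beta)

# Auxiliary regression |u_hat| = a + b*z + v
aux = stats.linregress(z, abs_resid)
print(f"b = {aux.slope:.3f}, t = {aux.slope / aux.stderr:.2f}, p = {aux.pvalue:.4f}")
```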
Article
The linear model given by \begin{equation*} y_t = \sum_{k=1}^K z_{tk}(\beta_k + v_{tk}), \quad t = 1, 2, \cdots, T \end{equation*} is considered. The $\beta_k$ represent average responses of $y_t$, the dependent variable, to unit changes in the independent variables $z_{tk}$. The $v_{tk}$ are independently distributed random errors. A number of consistent estimators of the coefficients $\beta_k$ and the variances of the errors are developed and a few properties of the estimators are noted. Further investigations of sampling properties are needed.
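Collecting the random parts into a single error term makes the implied heteroscedasticity explicit (our rearrangement of the model above):

\[
y_t = \sum_{k=1}^{K} z_{tk}\beta_k + u_t, \qquad u_t = \sum_{k=1}^{K} z_{tk} v_{tk}, \qquad \operatorname{Var}(u_t) = \sum_{k=1}^{K} z_{tk}^2\,\sigma_{v_k}^2,
\]

so the model is an ordinary fixed-coefficient regression whose error variance is a linear function of the squared regressors; regressing squared OLS residuals on the $z_{tk}^2$ therefore yields estimators of the $\sigma_{v_k}^2$.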
Article
This paper deals with linear regressions \begin{equation*}\tag{1.1} y_k = x_{k1}\beta_1 + \cdots + x_{kq}\beta_q + \epsilon_k, \quad k = 1, 2, \cdots \end{equation*} with given constants $x_{km}$ and with error random variables $\epsilon_k$ that are (a) uncorrelated or (b) independent. Let $E\epsilon_k = 0$ and $0 < E\epsilon_k^2 < \infty$ for all $k$. The individual error distribution functions (d.f.'s) are not assumed to be known, nor need they be identical for all $k$. They are assumed, however, to be elements of a certain set $F$ of d.f.'s. Consider the family of regressions associated with the family of all the error sequences possible under these restrictions. Then conditions on the set $F$ and on the $x_{km}$ are obtained such that the least squares estimators (LSE) of the parameters $\beta_1, \cdots, \beta_q$ are consistent in Case (a) (Theorem 1) or asymptotically normal in Case (b) (Theorem 2) for every regression of the respective families. The motivation for these theorems lies in the fact that under the given assumptions statements based only on the available knowledge must always concern the regression family as a whole. It will be noticed moreover that the conditions of the theorems do not require any knowledge about the particular error sequence occurring in (1.1). Most of the conditions are necessary as well as sufficient, with the consequence that they cannot be improved upon under the limited information assumed to be available about the model. Since the conditions are very mild, the results apply to a large number of actual estimation problems. We denote by $\mathfrak{F}(F)$ the set of all sequences $\{\epsilon_k\}$ that occur in the regressions of a family as characterized above. Thus, $\mathfrak{F}(F)$ comprises all sequences of uncorrelated (Case (a)) or independent (Case (b)) random variables whose d.f.'s belong to $F$ but are not necessarily the same from term to term of the sequence. For each $G \in F$ the relations $\int x\,dG = 0$ and $0 < \int x^2\,dG < \infty$ hold. In this paper, $\mathfrak{F}(F)$ may be looked upon as a parameter space. A parameter point then is a sequence of $\mathfrak{F}(F)$. Correspondingly, we say that a statement holds on $\mathfrak{F}(F)$ (briefly, on $F$) if it holds for all $\{\epsilon_k\} \in \mathfrak{F}(F)$. The statements of Theorems 1 and 2 are of this kind. The proof of Theorem 1, as well as the proof of the sufficiency in Theorem 2, is elementary and straightforward. Theorem 2 is a special case of a central limit theorem (holding uniformly on $\mathfrak{F}(F)$) for families of random sequences [3]. Some similarity between the roles of the parameter spaces $\mathfrak{F}(F)$ in our theorems and of the parameter spaces that occur, e.g., in the Gauss-Markov and related theorems may be seen in the fact that these theorems remain true only as long as the conclusions in the theorems hold for every parameter point in the respective spaces. As is well known, the statements in the Gauss-Markov and related theorems hold for every parameter vector $\beta_1, \cdots, \beta_q$ in a q-dimensional vector space (see, e.g., Scheffé 1959, pp. 13, 14). A result in the theory of linear regressions that bears some resemblance with the theorems of this paper has been obtained by Grenander and Rosenblatt (1957, p. 244). Let the error sequence $\{\epsilon_k\}$ in (1.1) be a weakly stationary random sequence with piecewise continuous spectral density, and let the regression vectors admit a joint spectral representation. Under these assumptions Grenander and Rosenblatt give necessary and sufficient conditions for the regression spectrum and for the family of admissible spectral densities in order that the LSE are asymptotically efficient for every density of the family. In Sections 3 and 6 we discuss some examples relevant to Theorems 1 and 2.
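This independent-but-not-identically-distributed setting is precisely the one in which the usual OLS variance formula fails and which heteroskedasticity-consistent covariance estimation targets. With $y = X\beta + \epsilon$, $E(\epsilon) = 0$ and $\operatorname{Var}(\epsilon) = \Delta = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)$, the exact covariance of the LSE has the sandwich form (standard algebra, our notation):

\[
\operatorname{Var}\bigl(\hat\beta_{LS}\bigr) \;=\; (X^{\top}X)^{-1} X^{\top}\Delta X\,(X^{\top}X)^{-1},
\]

which collapses to the textbook $\sigma^2 (X^{\top}X)^{-1}$ only under homoscedasticity, $\Delta = \sigma^2 I$.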
Article
This paper reviews a number of cost studies conducted by the Cost Finding Section of the ICC purporting to show that the elasticity of total railroad costs with respect to output (percent variable) is 0.8. These studies are criticized primarily for using deflated and also irrelevant data. Deflation by miles of track leads them to evaluate this elasticity at the conceptually wrong "average" railroad level. In the course of the paper the statistical reasons for and against using deflated variables are explored in some detail, and alternative computations are presented indicating that the same data are consistent with constant or only mildly increasing returns to scale to a proportionate expansion of traffic. Several published econometric studies of railroad costs are also reviewed and found to be consistent with the absence of significant increasing returns to scale to an indiscriminate expansion of traffic.
Article
Points out the severe limitations of the Taylor series approximation interpretation for OLS and provides in its place an approximation interpretation with general validity. - from Author
Article
This paper investigates the impact of international migration on technical efficiency, resource allocation and income from agricultural production of family farming in Albania. The results suggest that migration is used by rural households as a pathway out of agriculture: migration is negatively associated with both labour and non-labour input allocation in agriculture, while no significant differences can be detected in terms of farm technical efficiency or agricultural income. Whether the rapid demographic changes in rural areas triggered by massive migration, possibly combined with propitious land and rural development policies, will ultimately produce the conditions for a more viable, high-return agriculture attracting larger investments remains to be seen.
Article
This paper is a revised version of a paper originally entitled "Asymptotic Properties of Nonlinear Weighted Least Squares Estimators with Independent Not Identically Distributed Regressors." In Section 2, the strong consistency of a class of weighted least squares (WLS) estimators is proven under general conditions, as is the strong consistency of weighted least squares with estimated weights (EWLS). Conditions which ensure asymptotic normality of the estimators are provided in Section 3, and a general statistic for testing hypotheses is given. In Section 4, consequences of misspecification are discussed and a test for misspecification is given. Section 5 contains a summary and concluding remarks. As should be expected, the conditions obtained are natural extensions of those found in the fixed-regressor case. Also, the unconditional covariance matrix of the parameter estimates has a more general form than the usual conditional covariance matrix.
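A common concrete instance of EWLS is the two-step feasible procedure: fit OLS, model the squared residuals to estimate the skedastic function, then refit with the inverted fitted variances as weights. The sketch below is ours (simulated data; the exponential skedastic model is our choice, made for positivity), not the paper's exact estimator.

```python
# A minimal sketch of weighted least squares with estimated weights
# (two-step feasible procedure). The exponential skedastic model for
# the squared residuals is our assumption, chosen for positivity.
import numpy as np

rng = np.random.default_rng(5)
n = 200
z = rng.uniform(1, 3, n)
X = np.column_stack([np.ones(n), z])
y = X @ np.array([1.0, 2.0]) + z * rng.standard_normal(n)   # Var(error) = z^2

# Step 1: OLS residuals.
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b_ols

# Step 2: skedastic regression of log(e^2) on the regressors.
g, *_ = np.linalg.lstsq(X, np.log(e ** 2), rcond=None)
sigma2_hat = np.exp(X @ g)                                  # fitted variances

# Step 3: WLS with weights 1/sigma2_hat.
W = 1.0 / sigma2_hat
b_ewls = np.linalg.solve((X * W[:, None]).T @ X, (X * W[:, None]).T @ y)
print("OLS coefficients :", b_ols)
print("EWLS coefficients:", b_ewls)
```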
Some Tests for Specification Error
  • L. G. Godfrey
GODFREY, L. G.: "Some Tests for Specification Error," unpublished paper, University of York, February 1978.