Article

Enhancing market forecast accuracy: A structural equation model analysis of technical indicators in the Bank Nifty index

Authors:
  • University of Technology and Applied Sciences, Salalah, Oman

Abstract

The growing intricacy of international financial markets requires sophisticated approaches to managing investments and minimizing losses. This paper evaluates the use of Structural Equation Modeling (SEM) to improve forecast accuracy by integrating multiple technical indicators within the Bank Nifty Index. The study employs SEM to estimate the effect of key technical indicators such as the Simple Moving Average (SMA), Relative Strength Index (RSI), Volume Weighted Average Price (VWAP), and Moving Average Convergence Divergence (MACD) on trading volumes and closing values. The model considers both direct and indirect relationships among these indicators to determine their overall impact. The study highlights the significance of certain technical indicators in predicting market trends. It demonstrates SEM’s effectiveness in estimating interrelationships among these indicators and formulating predictive models. This study underscores SEM’s effectiveness in financial forecasting by showing that incorporating multiple technical indicators enhances prediction accuracy and improves decision-making in financial markets. Investors and traders can use these findings to develop better trading strategies, improve market stability, and maximize returns. This analysis supports the case for a multi-indicator approach in forecasting models.
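The four indicators named in the abstract have standard textbook definitions. As a minimal plain-Python sketch (the parameter defaults and the simple-average RSI variant are common conventions, not taken from this paper):

```python
def sma(prices, n):
    """Simple Moving Average: None until n prices are available."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def ema(prices, n):
    """Exponential Moving Average with the usual smoothing factor 2/(n+1)."""
    k = 2 / (n + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(p * k + out[-1] * (1 - k))
    return out

def macd(prices, fast=12, slow=26):
    """MACD line: fast EMA minus slow EMA."""
    return [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]

def rsi(prices, n=14):
    """Relative Strength Index over the last n price changes (simple-average form)."""
    deltas = [b - a for a, b in zip(prices, prices[1:])][-n:]
    avg_gain = sum(max(d, 0) for d in deltas) / n
    avg_loss = sum(max(-d, 0) for d in deltas) / n
    return 100.0 if avg_loss == 0 else 100 - 100 / (1 + avg_gain / avg_loss)

def vwap(prices, volumes):
    """Volume Weighted Average Price over the session."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)
```

In an SEM setting, series like these would serve as observed indicator variables for latent constructs rather than as standalone trading signals.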


References
Conference Paper
Many countries worldwide have adopted e-government because of its advantages. However, there is not enough research to support a measurement model for gauging electronic government (e-government) uptake in developing nations. The objective of this research is to create and verify a measurement model that evaluates the adoption of e-government in developing countries from the viewpoint of their residents. Data was gathered from 560 participants through a survey questionnaire and used to conduct normality testing, descriptive analysis of latent variables, construct validity, convergent validity, average variance extracted (AVE), reliability testing, discriminant validity via the heterotrait-monotrait ratio (HTMT), and collinearity tests. The measurement model was validated in Iraq, and this study further substantiates it with empirical evidence. Structural equation modeling (SEM) and careful analysis of the data show that a set of nine constructs comprising 48 items is valid for investigating e-government implementation in emerging nations. The survey tool presented herein offers a valuable collection of improved measures that can be employed in analogous settings to examine e-government implementation.
Article
Purpose: The objective is to employ a stochastic model to develop a new technical analysis indicator that could compute the variation of any index. We demonstrate the superiority and applicability of our proposed model and show that our proposed indicator could help investors and market analysts to anticipate the market trend in the short term and make better trading decisions by using our proposed model to analyze the variation of the NASDAQ Composite Index (IXIC). Design/methodology/approach: This study uses a stochastic process without mean-reverting property to develop a stochastic model that could compute the variation of any index. To show the superiority and applicability of our proposed model in computing the variation of any index, we employ our proposed model to compute the daily closing values of the IXIC over 10 years and derive the variation of the IXIC index. Findings: Our findings indicate that, based on the mean absolute percentage error, the calibrated model we proposed provides a more accurate estimate of the short-term index that outperforms both the simple moving average and the MACD in predictive accuracy. It delivers a robust anticipation of the overall market trend by offering a 95% confidence interval for the value of the composite NASDAQ index. Practical Implications: Our proposed indicator could help investors and market analysts to anticipate the market trend in the short term and make better trading decisions. Our proposed model provides market analysts with a forecasting tool by using our proposed technical analysis indicator to anticipate the market trend, which outperforms some traditional indicators of technical analysis, including Simple Moving Averages and Moving Average Convergence Divergence. Originality/value: Our approach, results, and conclusions are original and new in the literature. 
Our proposed model is a new technical indicator for predicting any index based on a stochastic process, and it has been found to outperform some classical indicators. This research contributes to decision sciences because the indicator enables better buying and selling decisions based on market trend predictions estimated with the proposed model, offering added value to professionals making investment decisions. The results also contribute to the development of new technical analysis indicators: the IXIC index serves here as an example, but the indicator applies more widely to any stock market index and any share. The work thus enriches the literature and opens new avenues for researchers who want to use stochastic processes to develop technical indicators for different financial assets.
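The model above is evaluated with the mean absolute percentage error; a minimal sketch of that metric (plain Python, assuming nonzero actual values):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent; actual values must be nonzero."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)
```

A lower MAPE on held-out index values is how the calibrated stochastic model is judged against the SMA and MACD baselines.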
Article
Purpose: The study examined the effect of e-compensation management on the organizational performance of selected brewery firms in Southwest Nigeria. Method: The case study was carried out with a sample drawn from the population through a multistage sampling procedure, arriving at 332 employees of the selected firms using Slovin's (1960) sample size determination formula. A questionnaire was used as the research instrument to source data from employees of the selected firms. Partial Least Squares Structural Equation Modelling (PLS-SEM) was deployed as the statistical tool for data analysis. Results: The findings revealed that e-compensation has a significant positive effect on organizational performance in the brewery sector of Nigeria. Policy Implication: Organizations should ensure that employees' compensation data are progressively managed electronically to enhance satisfaction, which in turn influences organizational performance.
Article
Aim The aim was to investigate the validity evidence of the Perceived Efficacy and Goal Setting System, second edition, for Brazilian children. Methods A total of 258 children participated, of both sexes (n = 133 girls; 51.6%), 5 to 9 years old (total sample Mage = 7.1, SD = 1.4), from four regions of Brazil. The Perceived Efficacy and Goal Setting System, second edition, was used. Results Experts showed agreement about the high clarity and practical pertinence of the items (content validity coefficient from 98.4 to 100%; Gwet’s agreement coefficient from 0.85 to 1.00, p < 0.001). Confirmatory factor analysis showed adequate adjustment indexes (RMSEA [0.048, 90% C.I. = 0.043 to 0.053], SRMR [0.243], CFI [0.91], RNI [0.91], TLI [0.91], χ²/df [1.962]). The multigroup analysis showed configural, metric and scalar invariance of two models for gender (CFI = 0.97; RMSEA, [90%C.I.] = 0.05 [0.03 to 0.07]; metric: ΔRMSEA = 0.001; scalar: ΔRMSEA = −0.004) and age band (5–7 years-old and 8–9 years-old; CFI = 0.94; RMSEA, [90%C.I.] = 0.05 [0.03 to 0.07]; metric: ΔRMSEA = 0.002; scalar: ΔRMSEA = 0.010). The Heterotrait-Monotrait ratio test showed adequate discriminant validity among three dimensions (self-care and productivity [value = 0.76]; self-care and leisure [value = 0.57], productivity and leisure [value = 0.76]). Alpha for polychoric correlations showed adequate internal consistency for all items and the total scale (all α values > 0.70). Composite reliability (Self-care = 0.8; Productivity = 0.81; Leisure = 0.8) reinforces the evidence of reliability. Percentage agreement showed adequate item-level test-retest reliability (values between 76 and 92%). Conclusion This scale showed adequate content and internal structure validity evidence to assess perceived self-efficacy for Brazilian children.
Article
Prediction of the economy in global markets is of crucial importance for individuals, decision-makers, and policies. To this end, effectiveness in modeling and forecasting the directions of such leading indicators is of crucial importance. For this purpose, we analyzed the Baltic Dry Index (BDI), Investor Sentiment Index (VIX), and Global Stock Market Indicator (MSCI) for their distributional characteristics leading to proposed econometric methods. Among these, the BDI is an economic indicator based on shipment of dry cargo costs, the VIX is a measure of investor fear, and the MSCI represents an emerging and developed country stock market indicator. By utilizing daily data for a sample covering 1 November 2007–30 May 2022, the BDI, VIX, and MSCI indices are investigated with various methods for nonlinearity, chaos, and regime-switching volatility. The BDS independence test confirmed dependence and nonlinearity in all three series; Lyapunov exponent, Shannon, and Kolmogorov entropy tests suggest that the series follow chaotic processes. Smooth transition autoregressive (STAR) type nonlinearity tests favored two-regime GARCH and Asymmetric Power GARCH (APGARCH) nonlinear conditional volatility models where regime changes are governed by smooth logistic transitions. Nonlinear LSTAR-GARCH and LSTAR-APGARCH models, in addition to their single-regime variants, are estimated and evaluated for in-sample and out-of-sample forecasts. The findings determined significant prediction and forecast improvement of LSTAR-APGARCH, closely followed by LSTAR-GARCH models. Overall results confirm the necessity of models integrating nonlinearity and volatility dynamics to utilize the BDI, VIX, and MSCI indices as effective leading economic indicators for investors and policymakers to predict the direction of the global economy.
Article
In 2020 and 2021, the cryptocurrency market attracted millions of new traders and investors. Lack of regulation, high liquidity, and modern exchanges significantly lowered the entry threshold for new market participants. In 2021, over 5 million Americans were regularly involved in cryptocurrency trading. At that time, the interest in market indicators and trading strategies remained low, leading to the conclusion that most investors did not use decision-support indicators. The correct and backtested use of technical analysis signals can give the trader a significant advantage over most market participants. This work introduces an algorithmic approach to examining the effectiveness of the signals generated by one of the most popular market indicators, the Relative Strength Index (RSI). A model corresponding to an actual cryptocurrency exchange was used to backtest the strategies. The results show that the RSI as a momentum indicator in the cryptocurrency market involves high risk. Using alternative RSI applications can allow traders to gain an advantage in the cryptocurrency market. Comparing the results with the traditional buy and hold strategy shows the credible potential of the indicated method and the usage of signals generated by the technical analysis indicators.
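The momentum use of the RSI examined above is conventionally a threshold rule; a minimal sketch (the 30/70 thresholds are the textbook defaults, not values from the paper):

```python
def rsi_signals(rsi_values, low=30, high=70):
    """Classic momentum rule: oversold -> buy, overbought -> sell, else hold."""
    return ['buy' if r < low else 'sell' if r > high else 'hold'
            for r in rsi_values]
```

The paper's point is that this naive rule carries high risk in crypto markets and that alternative RSI applications, validated by backtesting, fare better.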
Article
Predicting equity market trends is one of the most challenging tasks for market participants. This study applies machine learning algorithms to aid accurate trend predictions for the Nifty 50 index. The paper compares four forecasting methods: artificial neural networks (ANN), support vector machines (SVM), naive Bayes (NB), and random forest (RF). Eight technical indicators are used, and a deterministic trend layer translates the indicators into trend signals. The principal component analysis (PCA) method is then applied to these deterministic trend signals. The study's main contribution is using PCA to find the essential components among multiple technical indicators affecting stock prices, reducing data dimensionality and improving model performance. As a result, a PCA-machine learning (ML) hybrid forecasting model is proposed. The experimental findings suggest that the technical factors are well represented as trend signals and that the PCA approach combined with ML models outperforms the comparative models in prediction performance. Utilizing the first three principal components (80% of explained variance), experiments on the Nifty 50 index show that a support vector classifier (SVC) with radial basis function (RBF) kernel achieves an accuracy of 0.9968 and F1-score of 0.9969, and the RF model achieves an accuracy of 0.9969 and F1-score of 0.9968. In area under the curve (AUC) performance, SVC (RBF and linear kernels) and RF each have AUC scores of 1. Keywords: National Stock Exchange Fifty; principal component analysis; stock market; technical indicators; time series forecasting.
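The 80%-explained-variance cutoff used above amounts to keeping the smallest number of principal components whose eigenvalues cover that share of total variance; a plain-Python sketch of that selection step (the eigenvalues themselves would come from a PCA of the indicator matrix):

```python
def n_components_for(eigenvalues, var_target=0.80):
    """Smallest k such that the top-k eigenvalues explain var_target of total variance."""
    total = sum(eigenvalues)
    cum = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        cum += ev
        if cum / total >= var_target:
            return k
    return len(eigenvalues)
```

With eight indicators, the abstract reports that three components were enough to pass the 80% threshold.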
Article
This paper proposes an EEMD-Hurst-LSTM prediction method based on the ensemble learning framework, which is applied to the prediction of typical commodities in China’s commodity futures market. This method performs ensemble empirical mode decomposition (EEMD) on commodity futures prices, and incorporates the components obtained by EEMD decomposition and the adaptive fractal Hurst index calculated by using intraday high-frequency data as new features into the LSTM model to decompose its correlation with the external market to detect changes in market conditions. The results show that the EEMD-Hurst-LSTM method has better predictive performance compared to other horizontal single models and longitudinal deep learning combined models. Meanwhile, the trading strategy designed according to this ensemble model can obtain more returns than other trading strategies and have the best risk control level. The research of this paper provides important implications for the trend following of commodity markets and the investment risk management of statistical arbitrage strategies.
Article
The paper seeks to answer which forecasting techniques give the most accurate price estimates in the commodity futures market. Two families of models (decision trees and artificial intelligence) were used to produce estimates for 2018 and 2022 over 21- and 125-day periods. The main findings of the study are that estimation accuracy is higher in a calm economic environment (1.5% vs. 4% error), and that the AI-based estimation methods provide the most accurate estimates for both time horizons. These models provide the most accurate forecasts over short and medium time periods. Incorporating these forecasts into the ERM can significantly help to hedge purchase prices. Artificial intelligence-based models are becoming increasingly widely available and can achieve significantly better accuracy than other approximations.
Article
Correctly predicting the stock price movement direction is of immense importance in the financial market. In recent years, with the expansion of dimension and volume in data, the nonstationary and nonlinear characteristics of finance data have made it difficult to predict stock movement accurately. In this article, we propose a methodology that combines technical analysis and sentiment analysis to construct predictor variables and then apply the improved LASSO-LSTM to forecast stock direction. First, the financial textual content and historical stock transaction data are crawled from websites. Then the transfer-learning model FinBERT is used to recognize the emotion of the textual data, and the TTR package is used to calculate technical indicators based on historical price data. To eliminate the multicollinearity of predictor variables after combination, we improve the long short-term memory neural network (LSTM) model with the Least Absolute Shrinkage and Selection Operator (LASSO). In the prediction phase, we use the screened variables as the input vector to train the LASSO-LSTM model. To evaluate model performance, we compare the LASSO-LSTM and baseline models on accuracy and robustness metrics. In addition, we introduce the Wilcoxon signed rank test to evaluate the difference in results. The experiments show that the LASSO-LSTM with technical and sentiment indicators achieves an average 8.53% accuracy improvement over standard LSTM. Consequently, this study proves that utilizing historical transactions and financial sentiment data can capture critical information affecting stock movement. Also, effective variable selection can retain the key variables and improve model prediction performance.
Article
Portfolio optimization is one of the most complex problems in the financial field, and technical analysis is a popular tool to find an optimal solution that maximizes the yields. This paper establishes a portfolio optimization model consisting of a weighted unidirectional dual-layer LSTM model and an SMA-slope strategy. The weighted unidirectional dual-layer LSTM model is developed to predict the daily prices of gold/Bitcoin, which addresses the traditional problem of prediction lag. Based on the predicted prices and a comparison of two representative investment strategies, simple moving average (SMA) and Bollinger bands (BB), this paper adopts a new investment strategy, the SMA-slope strategy, which introduces the concept of k-slope to measure the daily ups and downs of gold/Bitcoin. As two typical financial products, gold and Bitcoin are opposite in terms of their characteristics, which may represent many existing financial products in investors’ portfolios. With a principal of $1000, this paper conducts a five-year simulation of gold and Bitcoin trading from 11 September 2016 to 10 September 2021. To compensate for the SMA and BB potentially missing buying and selling points, 4 different parameters’ values in the k-slope are obtained through particle swarm optimization simulation. Also, the simulation results imply that the proposed portfolio optimization model helps investors make investment decisions with high profitability.
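Of the two baseline strategies compared above, Bollinger Bands are a moving average bracketed by k standard deviations; a minimal sketch (the 20-period, 2-sigma defaults are the usual convention, not parameters from the paper):

```python
import statistics

def bollinger(prices, n=20, k=2):
    """(upper, middle, lower) Bollinger Bands over the last n prices."""
    window = prices[-n:]
    mid = sum(window) / len(window)
    sd = statistics.pstdev(window)
    return mid + k * sd, mid, mid - k * sd
```

A touch of the upper or lower band is the classic BB trading trigger that the paper's k-slope strategy aims to improve on.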
Article
Stock price prediction is a significant research field due to its importance in terms of benefits for individuals, corporations, and governments. This research explores the application of a new approach to predict the adjusted closing price of a specific corporation. A new set of features is used to enhance the possibility of giving more accurate results with fewer losses by creating a six-feature set (High, Low, Volume, Open, HiLo, OpSe) rather than the traditional four-feature set (High, Low, Volume, Open). The study also investigates the effect of data size by using datasets (Apple, ExxonMobil, Tesla, Snapchat) of different sizes to boost open innovation dynamics. The effect of the business sector in terms of the loss result is also considered. Finally, the study includes six deep learning models, MLP, GRU, LSTM, Bi-LSTM, CNN, and CNN-LSTM, to predict the adjusted closing price of the stocks. The six variables used (High, Low, Open, Volume, HiLo, and OpSe) are evaluated according to model outcomes, showing fewer losses than the original approach, which utilizes the original feature set. The results show that LSTM-based models improved using the new approach, although all models showed comparable results, with no model continuously outperforming the others. Finally, the added features positively affected the prediction models' performance.
Article
This research is the first attempt to create machine learning (ML) algorithmic systems that would be able to automatically trade precious metals. The algorithm uses three forecast methodologies: linear regression (LR), Darvas boxes (DB), and Bollinger bands (BB). Our data consists of 20 years of daily price data concerning five precious metals futures: gold, silver, copper, platinum, and palladium. We found that all of the examined precious metals’ current daily returns are negatively autocorrelated to their former day’s returns and identified lagged interdependencies among the examined metals. Silver futures prices were found to be best forecasted by our systems, and platinum the worst. Moreover, our system better forecasts price-up trends than downtrends for all examined techniques and commodities. Linear regression was found to be the best technique to forecast silver and gold prices trends, while the Bollinger band technique best fits palladium forecasting.
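Of the three forecast methodologies above, linear regression is the simplest: fitting an OLS trend line to price against a time index reduces to a closed-form slope (a generic sketch, not the authors' trading system):

```python
def trend_slope(prices):
    """Ordinary least-squares slope of price against its time index."""
    n = len(prices)
    x_mean, y_mean = (n - 1) / 2, sum(prices) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(prices))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den
```

A positive slope over the lookback window would be read as an up-trend forecast, matching the paper's observation that up-trends were forecast more reliably than down-trends.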
Article
Stock price prediction is one of the major challenges for investors who participate in the stock markets. Therefore, different methods have been explored by practitioners and academicians to predict stock price movement. Artificial intelligence models are one of the methods that have attracted many researchers in the field of financial prediction in the stock market. This study investigates the prediction of the daily stock prices for Commerce International Merchant Bankers (CIMB) using technical indicators in a NARX neural network model. The methodology employs comprehensive parameter trials for different combinations of input variables and different neural network designs. The study seeks to investigate the optimal artificial neural network (ANN) parameters and settings that enhance the performance of the NARX model. Therefore, extensive parameter trials were studied for various combinations of input variables and NARX neural network configurations. The proposed model is further enhanced by preprocessing and optimising the NARX model's input and output parameters. The prediction performance is assessed based on the mean squared error (MSE), R-squared, and hit rate. The performance of the proposed model is compared with other models, and it is shown that the utilisation of technical indicators with the NARX neural network improves the accuracy of one-step-ahead prediction for CIMB stock in Malaysia. The performance of the proposed model is further improved by optimising the input data and neural network parameters. The improved prediction of stock prices could help investors increase their returns from investment in stock markets.
Article
In the financial market, commodity prices change over time, yielding profit opportunities. Various trading strategies have been proposed to yield good earnings. Pairs trading is one such critical, widely-used strategy with good effect. Given two highly correlated paired target stocks, the strategy suggests buying one when its price falls behind, selling it when its stock price converges, and operating the other stock inversely. In the existing approach, the genetic Bollinger Bands and correlation-coefficient-based pairs trading strategy (GBCPT) utilizes optimization technology to determine the parameters for correlation-based candidate pairs and discover Bollinger Bands-based trading signals. The correlation coefficients are used to calculate the relationship between two stocks through their historical stock prices, and the Bollinger Bands are indicators composed of the moving averages and standard deviations of the stocks. In this paper, to achieve more robust and reliable trading performance, AGBCPT, an advanced GBCPT algorithm, is proposed to take into account volatility and more critical parameters that influence profitability. It encodes six critical parameters into a chromosome. To evaluate the fitness of a chromosome, the encoded parameters are utilized to observe the trading pairs and their trading signals generated from Bollinger Bands. The fitness value is then calculated by the average return and volatility of the long and short trading pairs. The genetic process is repeated to find suitable parameters until the termination condition is met. Experiments on 44 stocks selected from the Taiwan 50 Index are conducted, showing the merits and effectiveness of the proposed approach.
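The pairs-trading logic described above can be reduced to a z-score rule on the price spread between the two paired stocks; a minimal sketch (the 2-sigma entry threshold and leg labels are illustrative, not GBCPT's optimized parameters):

```python
import statistics

def pair_signal(spread_history, entry_z=2.0):
    """Z-score of the latest spread between paired stocks; large |z| triggers a trade."""
    mu = statistics.mean(spread_history)
    sd = statistics.pstdev(spread_history)
    z = (spread_history[-1] - mu) / sd
    if z > entry_z:
        return 'short A / long B'
    if z < -entry_z:
        return 'long A / short B'
    return 'no trade'
```

AGBCPT's genetic search effectively tunes parameters like this threshold, along with the Bollinger-Band settings, to balance average return against volatility.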
Article
When estimating path coefficients among psychological constructs measured with error, structural equation modeling (SEM), which simultaneously estimates the measurement and structural parameters, is generally regarded as the gold standard. In practice, however, researchers usually first compute composite scores or factor scores, and use those as observed variables in a path analysis, for purposes of simplifying the model or avoiding model convergence issues. Whereas recent approaches, such as reliability adjustment methods and factor score regression, have been proposed to mitigate the bias induced by ignoring measurement error in composite/factor scores with continuous indicators, those approaches are not yet applicable to models with categorical indicators. In this article, we introduce the two-stage path analysis (2S-PA) with definition variables as a general framework for path modeling to handle categorical indicators, in which the estimation of factor scores and path coefficients is separated. It thus allows for different estimation methods in the measurement and the structural path models and easier diagnoses of violations of model assumptions. We conducted three simulation studies, ranging from latent regression to mediation analysis with categorical indicators, and showed that 2S-PA generally produced similar estimates to those using SEM in large samples, but gave better convergence rates, less standard error bias, and better control of Type I error rates in small samples. We illustrate 2S-PA using data from a national data set, and show how researchers can implement it in Mplus and OpenMx. Possible extensions and future directions of 2S-PA are discussed. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
Article
Many studies have been proposed to prove that technical analysis can help investors make trading decisions. The moving average (MA) is a widely used technical indicator that plays an important role in this field since it directly reflects stock fluctuations. However, most studies ignore the parameter settings of the MA, which leads to underestimation of its potential. Therefore, this paper is the first attempt to remove all restrictions and extend the limits of the MA to take full advantage of its capability. It also uses different kinds of MA, such as the weighted moving average (WMA) and the exponential moving average (EMA), to compose trading strategies. Our system proposes the global best-guided quantum-inspired tabu search algorithm (GQTS), which is better at searching than traditional algorithms, to optimize trading strategies based on the MA. Furthermore, an innovative 2-phase sliding window is invented to consider more investment situations in changeable stock markets. In summary, this paper investigates the capability of the MA and proposes dynamic and intelligent trading strategies based on the MA, GQTS, and the 2-phase sliding window to assist investors in making trading decisions. The experiments show that the proposed system flexibly discovers better trading points. Our system outperforms traditional methods and beats the buy-and-hold strategy to yield significant profits in both developed and emerging stock markets.
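The WMA and EMA variants mentioned above differ only in how they weight past prices; a minimal sketch of the standard definitions (the window lengths that GQTS would optimize are left as plain parameters):

```python
def wma(prices, n):
    """Weighted moving average: linearly increasing weights, newest price heaviest."""
    window = prices[-n:]
    weights = range(1, len(window) + 1)
    return sum(w * p for w, p in zip(weights, window)) / sum(weights)

def ema_last(prices, n):
    """Final value of an exponential moving average with smoothing 2/(n+1)."""
    k = 2 / (n + 1)
    out = prices[0]
    for p in prices[1:]:
        out = p * k + out * (1 - k)
    return out
```

Both put more weight on recent prices than the SMA does, which is why the choice of variant and window length materially changes the trading signals.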
Article
Stock market trends forecast is one of the most current topics and a significant research challenge due to its dynamic and unstable nature. The stock data is usually non-stationary, and attributes are non-correlative to each other. Several traditional Stock Technical Indicators (STIs) may incorrectly predict the stock market trends. To study the stock market characteristics using STIs and make efficient trading decisions, a robust model is built. This paper aims to build an Evolutionary Deep Learning Model (EDLM) to identify stock price trends by using STIs. The proposed model implements a Deep Learning (DL) model to establish the concept of the Correlation-Tensor. For the analysis of the datasets of three of the most popular banking organizations, obtained from the live stock market of the National Stock Exchange (NSE), India, a Long Short-Term Memory (LSTM) network is used. The datasets encompassed the trading days from the 17th of Nov 2008 to the 15th of Nov 2018. This work also conducted exhaustive experiments to study the correlation of various STIs with stock price trends. The model built with the EDLM has shown significant improvements over two benchmark ML models and a deep learning one. The proposed model aids investors in making profitable investment decisions as it presents trend-based forecasting and has achieved a prediction accuracy of 63.59%, 56.25%, and 57.95% on the datasets of HDFC, Yes Bank, and SBI, respectively. Results indicate that the proposed EDLM with a combination of STIs can often provide improved results over the other state-of-the-art algorithms.
Article
Previous research in the area of using deep learning algorithms to forecast stock prices was focused on news headlines, company reports, and a mix of daily stock fundamentals, but few studies achieved excellent results. This study uses a convolutional neural network (CNN) to predict stock prices by considering a great amount of data, consisting of financial news headlines. We call our model N-CNN to distinguish it from a CNN. The main concept is to narrow the diversity of specific stock prices as they are impacted by news headlines, then horizontally expand the news headline data to a higher level for increased reliability. This model solves the problem that the number of news stories produced by a single stock does not meet the standard of previous research. In addition, we then use the number of news headlines for every stock on the China stock exchange as input to predict the probability of the highest next day stock price fluctuations. In the second half of this paper, we compare a traditional Long Short-Term Memory (LSTM) model for daily technical indicators with an LSTM model compensated by the N-CNN model. Experiments show that the final result obtained by the compensation formula can further reduce the root-mean-square error of LSTM.
Article
This study compares the two widely used methods of Structural Equation Modeling (SEM): Covariance-based Structural Equation Modeling (CB-SEM) and Partial Least Squares based Structural Equation Modeling (PLS-SEM). The first approach is based on covariance, and the second one is based on variance (partial least squares). It further assesses the difference between the PLS and Consistent PLS algorithms. Empirical data is used for the assessment: 466 respondents from India, Saudi Arabia, South Africa, the USA, and a few other countries are considered. The structural model is tested with both approaches. Findings indicate that item loadings are usually higher in PLS-SEM than CB-SEM. The structural relationship is closer to CB-SEM if a Consistent PLS algorithm is undertaken in PLS-SEM. It is also found that average variance extracted (AVE) and composite reliability (CR) values are higher in the PLS-SEM method, indicating better construct reliability and validity. CB-SEM is better at providing model fit indices, whereas PLS-SEM fit indices are still evolving. CB-SEM models are better for factor-based models like ours, whereas composite-based models provide excellent outcomes in PLS-SEM. This study contributes to the existing literature significantly by providing an empirical comparison of all three methods for predictive research domains. The multi-national context makes the study relevant and replicable universally. We call for researchers to revisit the widely used SEM approaches, especially using appropriate SEM methods for factor-based and composite-based models.
Article
Full-text available
Purpose One popular method to assess discriminant validity in structural equation modeling is the heterotrait-monotrait ratio of correlations (HTMT). However, the HTMT assumes tau-equivalent measurement models, which are unlikely to hold for most empirical studies. To relax this assumption, the authors modify the original HTMT and introduce a new consistent measure for congeneric measurement models: the HTMT2. Design/methodology/approach The HTMT2 is designed in analogy to the HTMT but relies on the geometric mean instead of the arithmetic mean. A Monte Carlo simulation compares the performance of the HTMT and the HTMT2. In the simulation, several design factors are varied, such as loading patterns, sample sizes, and inter-construct correlations, in order to compare the estimation bias of the two criteria. Findings The HTMT2 provides less biased estimations of the correlations among the latent variables compared to the HTMT, in particular if indicator loading patterns are heterogeneous. Consequently, the HTMT2 should be preferred over the HTMT to assess discriminant validity in the case of congeneric measurement models. Research limitations/implications However, the HTMT2 can only be determined if all correlations between the involved observable variables are positive. Originality/value This paper introduces the HTMT2 as an improved version of the traditional HTMT. Compared to other approaches assessing discriminant validity, the HTMT2 provides two advantages: (1) the ease of its computation, since the HTMT2 is based only on the indicator correlations, and (2) the relaxed assumption of tau-equivalence. The authors highly recommend the HTMT2 criterion over the traditional HTMT for assessing discriminant validity in empirical studies.
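The difference between the two criteria comes down to replacing arithmetic means with geometric means. A minimal sketch, assuming two constructs whose within-block (monotrait) and between-block (heterotrait) indicator correlations have already been extracted; the correlation values below are hypothetical:

```python
from statistics import geometric_mean

def htmt(between, within1, within2):
    """Classic HTMT: arithmetic mean of the heterotrait correlations divided by
    the geometric mean of the two average monotrait correlations."""
    hetero = sum(between) / len(between)
    mono = ((sum(within1) / len(within1)) * (sum(within2) / len(within2))) ** 0.5
    return hetero / mono

def htmt2(between, within1, within2):
    """HTMT2: geometric means throughout (requires all correlations > 0)."""
    mono = (geometric_mean(within1) * geometric_mean(within2)) ** 0.5
    return geometric_mean(between) / mono

# Hypothetical correlations for two constructs with two indicators each.
between = [0.30, 0.35, 0.28, 0.33]  # cross-construct (heterotrait) correlations
within1 = [0.70]                    # within-construct correlation, construct 1
within2 = [0.64]                    # within-construct correlation, construct 2
print(htmt(between, within1, within2), htmt2(between, within1, within2))
```

By the AM-GM inequality, HTMT2 can never exceed HTMT for the same inputs; both are typically judged against the conventional 0.85 or 0.90 cut-offs.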
Article
Full-text available
A stock market crash is a drop of more than 10% in stock prices across the major indices. Stock crisis prediction is a difficult task due to the high volatility of the stock market. Stock price sell-offs occur for various reasons, such as company earnings, geopolitical tension, financial crises, and pandemics. Crisis prediction is a challenging task for researchers and investors. We propose a stock crisis prediction model based on a Hybrid Feature Selection (HFS) technique. First, the HFS algorithm removes the irrelevant financial features of a stock. Second, the Naive Bayes method is used to classify fundamentally strong stocks. Third, the Relative Strength Index (RSI) method is used to find bubbles in stock prices. Fourth, moving-average statistics are used to identify the crisis point in stock prices. Fifth, stock crises are predicted using Extreme Gradient Boosting (XGBoost) and Deep Neural Network (DNN) regression methods. The performance of the model is evaluated based on Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The HFS-based XGBoost method performed better than the HFS-based DNN method for predicting stock crises. The experiments were carried out on Indian datasets. In the future, researchers can explore other technical indicators to predict crisis points, and there is further scope to improve and fine-tune the XGBoost method with different optimizers.
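Two of the building blocks mentioned here, the RSI for bubble detection and a moving average for locating the crisis point, can be sketched in a few lines. The simple-average RSI variant and the thresholds below are illustrative assumptions, not the paper's exact specification:

```python
def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes
    (simple-average variant; Wilder's smoothing is also common)."""
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    recent = deltas[-period:]
    avg_gain = sum(d for d in recent if d > 0) / period
    avg_loss = sum(-d for d in recent if d < 0) / period
    if avg_loss == 0:
        return 100.0                      # all gains: fully overbought
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

def sma(prices, window):
    """Simple moving average over the trailing window (shorter if few prices)."""
    w = min(window, len(prices))
    return sum(prices[-w:]) / w

def bubble_flag(prices, rsi_period=14, ma_window=20):
    """Illustrative heuristic: flag a bubble when RSI is overbought and the
    price is stretched well above its moving average (thresholds are assumed)."""
    return rsi(prices, rsi_period) > 70 and prices[-1] > 1.2 * sma(prices, ma_window)
```

A classifier such as Naive Bayes or XGBoost would then consume flags like these, alongside the HFS-selected fundamentals, as input features.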
Article
Full-text available
Structural equation modeling (SEM) is a widespread approach to test substantive hypotheses in psychology and other social sciences. Whenever hypothesis tests are performed, researchers should ensure that the sample size is sufficiently large to detect the hypothesized effect. Power analyses can be used to determine the required sample size to identify the effect of interest with a desired level of statistical power (i.e., the probability to reject an incorrect null hypothesis). Vice versa, power analyses can also be used to determine the achieved power of a test, given an effect and a particular sample size. However, most studies involving SEM neither conduct a power analysis to inform sample size planning nor evaluate the achieved power of the performed tests. In this tutorial, we show and illustrate how power analyses can be used to identify the required sample size to detect a certain effect of interest or to determine the probability of a conducted test to detect a certain effect. These analyses are exemplified regarding the overall model as well as regarding individual model parameters, whereby both, models referring to a single group as well as models assessing differences between multiple groups are considered.
Article
Full-text available
The stock market is an aggregation of investor sentiment that affects daily changes in stock prices. Investor sentiment has long remained a mystery and a challenge, inviting researchers to comprehend market trends. The entry of behavioral scientists in and around the 1980s brought human dimensions into market trading. Shortly after that, due to the digitization of exchanges, the mix of traders changed as institutional traders started using algorithmic trading (AT) on computers. Nevertheless, the effects of investor sentiment did not disappear and continued to intrigue market researchers. Though market sentiment plays a significant role in timing investment decisions, classical finance models largely ignored its role in asset pricing. To know whether the market price is value-driven, an investor must isolate the components of irrationality from the price, as reflected in sentiment. Investor sentiment is an expression of irrational expectations of a stock's risk-return profile that is not justified by available information. In this context, the paper aims to predict the next-day trend in index prices for the centralized Indian National Stock Exchange (NSE), deploying machine learning algorithms such as support vector machines, random forests, gradient boosting, and deep neural networks. The training set is historical NSE closing price data from June 1st, 2013 to June 30th, 2020. Additionally, the authors include technical indicators as features, such as the moving average (MA), moving average convergence-divergence (MACD), the K (%) oscillator and its corresponding three-day moving average D (%), the relative strength indicator (RSI), and the LW (R%) indicator for the same period. The predictive power of deep neural networks over other machine learning techniques is established in the paper, demonstrating the future scope of deep learning in multi-parameter time series prediction.
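Several of the indicator features listed here (moving averages, MACD, the %K oscillator) can be computed directly from price series. A minimal sketch, assuming the standard textbook definitions rather than the authors' exact parameterization:

```python
def ema(prices, span):
    """Exponential moving average with smoothing 2/(span+1), seeded at the first price."""
    alpha = 2 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """MACD line = EMA(fast) - EMA(slow); signal line = EMA of the MACD line."""
    fast_e, slow_e = ema(prices, fast), ema(prices, slow)
    line = [f - s for f, s in zip(fast_e, slow_e)]
    return line, ema(line, signal)

def stochastic_k(highs, lows, closes, period=14):
    """%K oscillator: position of the latest close within the recent high-low range."""
    hh, ll = max(highs[-period:]), min(lows[-period:])
    return 100 * (closes[-1] - ll) / (hh - ll)
```

In a pipeline like the one described, each trading day's indicator values form one feature row, with the next day's up/down move as the label.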
Article
Full-text available
We provide evidence that the use of technical trading rules provides traders the opportunity to generate profits from actively buying and selling individual stocks across Asian markets. We test the trading performance of three widely used technical trading strategies, the Arithmetic Moving Average, the Relative Strength Index, and the Stochastic Oscillator, as well as variations of each trading strategy. We compare the results of these trading rules to a long-term buy-and-hold strategy across 4822 stocks traded in 39 Asian countries. Our results, when applying a simple behavioral intervention filter of only selling a position when the trade is profitable, show that these technical trading rules, on average, were able to outperform the buy-and-hold strategy for 66% of the stocks in our sample. Additionally, given any of the listed Asian stocks, we found that, on average, a trader could apply any technical trading strategy and have a greater than 50–50 chance of outperforming the buy-and-hold strategy for that stock for 63% of all stocks.
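A toy version of such a rule, assuming a simple moving-average crossover and the "only sell when profitable" behavioral filter described above (window lengths and rule details are illustrative, not the study's exact setup):

```python
def sma(xs, n, i):
    """Simple moving average of xs[i-n+1 .. i]."""
    return sum(xs[i - n + 1 : i + 1]) / n

def ma_rule_with_profit_filter(prices, short=3, long=5):
    """Buy when the short SMA is above the long SMA; exit on the opposite
    cross, but only if the trade is profitable (the behavioral filter).
    Returns the gross return factor of the strategy."""
    cash, entry = 1.0, None
    for i in range(long, len(prices)):
        s, l = sma(prices, short, i), sma(prices, long, i)
        if entry is None and s > l:
            entry = prices[i]                       # open a position
        elif entry is not None and s < l and prices[i] > entry:
            cash *= prices[i] / entry               # profitable exit only
            entry = None
    if entry is not None:                           # close any open position
        cash *= prices[-1] / entry
    return cash

def buy_and_hold(prices):
    """Gross return factor of the benchmark buy-and-hold strategy."""
    return prices[-1] / prices[0]
```

Comparing `ma_rule_with_profit_filter(prices)` against `buy_and_hold(prices)` over each stock's history is the stock-by-stock comparison the study performs at scale.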
Article
Full-text available
We employ data-based approaches to identify the transmissions of structural shocks among investor attention measured by Google search queries, realized volatilities and trading volumes in the US, the UK and the German stock market. The two identification approaches adopted for the structural VAR analysis are based on independent component analysis and the informational content of disproportional variance changes. Our results show robust evidence that investors’ attention affects both volatilities and trading volumes contemporaneously, whereas the latter two variables lack immediate impacts on investors’ attention. Some movements in investors’ attention can be traced back to market sentiment.
Article
Full-text available
We develop a heterogeneous autoregressive model of downside volatility (HAR-DV) with structural changes (HAR-DV-SC) to investigate the effects of structural changes on predicting downside volatility. We then employ the HAR-DV and HAR-DV-SC models to forecast downside volatilities in the S&P 500 index, crude oil, gold, copper, and soybean futures markets. The in-sample analysis shows that structural changes contain in-sample information for predicting downside volatility. The out-of-sample analysis indicates that the HAR-DV-SC model outperforms the HAR-DV model, suggesting that structural changes contain incremental out-of-sample information on future downside volatility. These results are robust and have important implications for stakeholders' risk management.
Article
Full-text available
The optimal execution of stock trades is a relevant and interesting problem, as it is key to maximizing profits and reducing risks when investing in the stock market. In the case of large orders, the problem becomes even more complex because the impact of the order on the market has to be taken into account. The usual solution is to split large orders into a set of smaller suborders that must be executed within a prescribed time window. This leads to the problem of deciding when in the time window to execute each suborder. There are popular ways of executing these split orders, such as those that try to track the “Time Weighted Average Price” or the “Volume Weighted Average Price”, usually called TWAP and VWAP orders. This paper presents a strategy to optimize the splitting of large trade orders over a given time window. The strategy is based on the solution of an optimization problem that is applied following a receding horizon approach. This approach reduces the impact of prediction errors due to uncertain market dynamics by using new values of the price time series as they become available. Suborder size constraints are taken into account in both market and limit orders. The strategy relies on price and traded volume forecasts but is independent of the prediction technique used. The performance index weighs not only the financial cost of the suborders, but also the impact on the market and the forecasting accuracy. A tailored optimization algorithm is proposed for efficiently solving the corresponding optimization problem, and most of its computations can be parallelized. Finally, the proposed approach has been tested through a case study composed of stocks of the Chinese A-share market.
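The baseline TWAP and VWAP splitting schemes that the paper builds on can be sketched as follows; the VWAP variant assumes a per-slice volume forecast is available, and both omit the optimization layer the paper adds on top:

```python
def twap_schedule(total_shares, n_slices):
    """TWAP: split a parent order into (near-)equal child orders across the window."""
    base = total_shares // n_slices
    sched = [base] * n_slices
    for i in range(total_shares - base * n_slices):  # spread the remainder
        sched[i] += 1
    return sched

def vwap_schedule(total_shares, volume_forecast):
    """VWAP-style: size each child order in proportion to forecast market volume."""
    total_vol = sum(volume_forecast)
    sched = [round(total_shares * v / total_vol) for v in volume_forecast]
    sched[-1] += total_shares - sum(sched)   # absorb rounding drift in the last slice
    return sched
```

A receding-horizon optimizer like the one proposed would recompute the remaining schedule at each slice as updated price and volume forecasts arrive, instead of fixing it up front.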
Article
From External Quality Assessment data, current harmonization of CRP measuring systems appears to be satisfactory, the inter-assay CV being well below 10%. The inter-method variability is even better (close to 3%) when the widely used measuring systems are compared at CRP concentrations employed as cut-off for detecting sub-clinical infection (i.e., 10.0 mg/L) and measurement variability estimated, according to ISO 20914:2019 Technical Specification, from the intermediate within-lab reproducibility of 6-month consecutive measurement data. According to the state-of-the-art model (which is better suited for CRP), the maximum allowable measurement uncertainty (MAU) for CRP measurement on clinical samples with 10.0 mg/L concentrations is 3.76% (desirable quality). As measurement uncertainty (MU) of the only available reference material (ERM-DA474/IFCC) is ∼3%, to fulfil desirable MAU on clinical samples, IVD manufacturers should work to keep the contribution of remaining MU sources (commercial calibrator and intermediate within-lab reproducibility) lower than 2.3%.
Article
Jöreskog’s covariance-based approach (JCA) has been considered a standard method for structural equation modeling. However, JCA is prone to the occurrence of improper solutions and cannot make probabilistic inferences about the true factor scores. To address the enduring issues of JCA, we propose a data matrix-based alternative, termed structured factor analysis (SFA). Given a data matrix of indicators, SFA begins by estimating both measurement model parameters and factor scores by minimizing a single cost function via an alternating least squares algorithm, which mathematically guarantees convergence to proper solutions. It then employs the factor score estimates to estimate structural model parameters. Once all parameters are estimated, SFA further estimates the probability distribution of the factor scores that can generate the data matrix of indicators, which can be used for probabilistic inferences about the true factor scores. We investigate SFA’s performance and empirical utility through simulated and real data analyses.
Article
In this study, we consider an on-line monitoring procedure to detect a parameter change in general time series models, featuring location-scale heteroscedastic time series models and their conditional quantiles. To resolve this statistical process control (SPC) problem, we employ a residual-based cumulative sum (CUSUM) process specially designed to effectively detect both upward and downward changes in the conditional mean, variance, and quantiles of time series. To attain control limits analytically, limit theorems are provided for the proposed CUSUM monitoring process. A simulation study and real data analysis are conducted to illustrate its validity empirically.
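A retrospective (offline) version of a residual-based CUSUM statistic can be sketched as follows; the cited work's on-line monitoring procedure and analytic control limits are more involved, so this shows only the core idea:

```python
def cusum_statistic(residuals):
    """Retrospective CUSUM change statistic:
    max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)),
    where S_k is the k-th partial sum of the residuals. Large values signal a
    parameter (mean) change somewhere in the sample."""
    n = len(residuals)
    mean = sum(residuals) / n
    sigma = (sum((r - mean) ** 2 for r in residuals) / n) ** 0.5
    s, s_n, stat = 0.0, sum(residuals), 0.0
    for k, r in enumerate(residuals, start=1):
        s += r
        stat = max(stat, abs(s - k * s_n / n))   # deviation from the no-change path
    return stat / (sigma * n ** 0.5)
```

In an on-line setting the same partial-sum idea is applied to incoming residuals, with an alarm raised as soon as the running statistic crosses a precomputed control limit.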
Article
Large volume, random fluctuations, and distracting patterns in raw price data lead to overfitting in stock price prediction. As a result, research papers in this area suffer from multiple limitations: very short prediction periods (one day to one week), consideration of only a few stocks rather than the whole stock market spectrum, and limited exploration of more suitable machine learning algorithms. By overcoming the problems of raw data, these limitations can be conquered. The proposed work uses a supervised machine learning approach on statistically learned macro features obtained from the gist of the input data, free from raw-data drawbacks, to predict the price band for the upcoming month and a half for almost all NIFTY50 stocks. The predicted bands are tested for precision against actual stock price bands. The motivating outcomes so obtained were used for automated sensing of the opportunity to make buy/sell/wait decisions using fuzzy logic. The results show that the price bands are quite accurate within reasonable tolerance. The monetization capability of the predicted bands has also been enhanced by using an opportunity controller k.
Article
For enhancing the charging infrastructure network coverage in populated areas, photovoltaic pavement (PVP) has been considered an innovative option that caters to distributed power generation and electric vehicles (EVs) charging in motion. However, the promotion of PVP has not yet served its intended purpose. This paper develops a gross net present value (GNPV) model to investigate the optimal timing to invest in PVP under stochastic electricity prices and time-dependent investment costs. Based on the photovoltaic highway project in Shandong, China, our numerical results reveal that the volatility of electricity prices, construction investment costs, and solar panels’ energy conversion efficiency significantly restricts the diffusion of PVP, resulting in the current dilemma of PVP adoption. However, the rapid decline in investment costs and the rapid increase in the number of EVs are not always effective in accelerating the PVP investment process, especially in some high electricity price regions, such as the southern areas of China. The findings of this paper will provide theoretical support and decision-making recommendations for the management evaluation of PVP projects, the optimization of charging facility networks, and the formulation of EV incentive measures.
Article
Researchers often stress the predictive goals of their partial least squares structural equation modeling (PLS-SEM) analyses. However, the method has long lacked a statistical test to compare different models in terms of their predictive accuracy and to establish whether a proposed model offers a significantly better out-of-sample predictive accuracy than a naïve benchmark. This paper aims to address this methodological research gap in predictive model assessment and selection in composite-based modeling. Recent research has proposed the cross-validated predictive ability test (CVPAT) to compare theoretically established models. This paper proposes several extensions that broaden the scope of CVPAT and explains the key choices researchers must make when using them. A popular marketing model is used to illustrate the CVPAT extensions’ use and to make recommendations for the interpretation and benchmarking of the results. This research asserts that prediction-oriented model assessments and comparisons are essential for theory development and validation. It recommends that researchers routinely consider the application of CVPAT and its extensions when analyzing their theoretical models. The findings offer several avenues for future research to extend and strengthen prediction-oriented model assessment and comparison in PLS-SEM. Guidelines are provided for applying CVPAT extensions and reporting the results to help researchers substantiate their models’ predictive capabilities. This research contributes to strengthening the predictive model validation practice in PLS-SEM, which is essential to derive managerial implications that are typically predictive in nature.
Article
Generalized structured component analysis (GSCA) and partial least squares path modeling (PLSPM) are two key component-based approaches to structural equation modeling that facilitate the analysis of theoretically established models in terms of both explanation and prediction. This study is the first to offer a comparative evaluation of GSCA and PLSPM in a predictive modeling framework. A simulation study compares the predictive performance of GSCA and PLSPM under various simulation conditions and different prediction types of correctly specified and misspecified models. The results suggest that GSCA with reflective composite indicators (GSCA_R) is the most versatile approach. For observed prediction, which uses the component scores to generate prediction for the indicators, GSCA_R performs slightly better than PLSPM with mode A. For operative prediction, which considers all parameter estimates to generate predictions, both methods perform equally well. GSCA with formative composite indicators and PLSPM with mode B generally lag behind the other methods. Future research may further assess the methods’ prediction precision, considering more experimental factors with a wider range of levels, including more extreme ones. When prediction is the primary study aim, researchers should generally revert to GSCA_R, considering its performance for observed and operative prediction together. This research is the first to compare the relative efficacy of GSCA and PLSPM in terms of predictive power.
Article
This paper contributes to the literature on forecasting the realized volatility of oil and gold by (i) utilizing the Infinite Hidden Markov (IHM) switching model within the Heterogeneous Autoregressive (HAR) framework to accommodate structural breaks in the data and (ii) incorporating, for the first time in the literature, various sentiment indicators that proxy for the speculative and hedging tendencies of investors in these markets as predictors in the forecasting models. We show that accounting for structural breaks and incorporating sentiment-related indicators in the forecasting model does not only improve the out-of-sample forecasting performance of volatility models but also has significant economic implications, offering improved risk-adjusted returns for investors, particularly for short-term and mid-term forecasts. We also find evidence of significant cross-market information spilling over across the oil, gold, and stock markets that also contributes to the predictability of short-term market fluctuations due to sentiment-related factors. The results highlight the predictive role of investor sentiment-related factors in improving the forecast accuracy of volatility dynamics in commodities with the potential to also yield economic gains for investors in these markets.
Article
To analyze the psychometric performance of the Resilience Scale for Chinese Adolescents (RSCA) for undergraduates in Guangdong, a stratified random sampling method was used to select 1628 undergraduates from 7 colleges in Guangdong. The RSCA and the Academic Burnout Scale for Chinese Undergraduates (ABSCU) were used to survey them. Cronbach's α coefficient and split-half reliability were used to analyze the internal consistency of the RSCA. Convergent validity, discriminant validity, factor analysis, and criterion validity were used to evaluate its validity. Ceiling and floor effects were used to analyze its sensitivity. Cronbach's α coefficients of the total questionnaire, 2 domains, and 5 factors were 0.72-0.86, which met the requirements for group comparison. Spearman-Brown split-half coefficients of the total questionnaire, 2 domains, and 5 factors were 0.71-0.89. The calibration success rates of convergent and discriminant validity of the 5 factors were all 100%, and those of the 2 domains were all above 86.7%. Five principal components were obtained from the 27 items, with a cumulative variance contribution rate of 48.79%, and two principal components were obtained from the 5 factors, with a cumulative variance contribution rate of 65.23%, which basically met the theoretical conception of the RSCA. The total score of the RSCA significantly predicted the total score of the ABSCU (Radj²=0.158, P<0.001). The total score, the scores of the 2 domains, and the 5 factors of the RSCA were all normally distributed, without any ceiling or floor effects. The psychometric performance of the RSCA for undergraduates in Guangdong was valid and reliable.
Article
Traffic flow prediction is a basic aspect to be considered in transportation management and modeling. Attaining precise information on near-term and current traffic flows has an extensive range of applications and further aids in managing congestion. Numerous conventional models have failed to offer precise prediction results because they are “shallow in architecture and hand-engineered in features”. Moreover, raw traffic flow information contains noise that can lead to poor prediction results. Therefore, this paper designs an enhanced traffic flow prediction model using an optimized Deep Convolutional Neural Network (DCNN). The input features, or technical indicators, fed to the optimized DCNN are the Average True Range (ATR), Exponential Moving Average (EMA), Relative Strength Indicator (RSI), and Rate of Change (ROC). Moreover, for precise prediction, the weights of the DCNN are optimally tuned using a new improved Lion Algorithm (LA), termed the Lion with New Territorial Takeover Update (LN-TU) model. In the end, the betterment of the implemented work is compared with and proven over conventional models in terms of error analysis and prediction analysis.
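Two of the listed input indicators, ATR and ROC, have compact textbook definitions (EMA and RSI are equally standard). A minimal sketch, using a simple average of true ranges rather than Wilder's smoothing:

```python
def true_range(high, low, prev_close):
    """True range: the largest of the intraperiod range and the gaps
    from the prior close."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(highs, lows, closes, period=14):
    """Average True Range: simple average of the last `period` true ranges."""
    trs = [true_range(h, l, pc) for h, l, pc in zip(highs[1:], lows[1:], closes)]
    return sum(trs[-period:]) / min(period, len(trs))

def roc(prices, n=12):
    """Rate of Change: percentage change over the last n observations."""
    return 100 * (prices[-1] - prices[-1 - n]) / prices[-1 - n]
```

In the paper's setting these same formulas are applied to traffic-flow series rather than prices, producing the feature vectors fed to the optimized DCNN.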
Article
The impact of COVID-19 on the stock markets of US, UK, and India has been analyzed. Daily market returns of the stock indices (Dow Jones Industrial Average, FTSE-100, Nifty 50 Index, and Nifty Bank Index) have been examined using paired t-test for 40 days before and after the reporting of the first case. Index performance has also been investigated for the quarter ending June 2020 along with comparative performance analysis of the indices with Nifty Bank Index. The results showed that markets have borne substantially negative returns, but they are not statistically significant. This indicates the resilience of these markets to restore to previous index levels after taking a short-term hit. This paper adds value to the literature by acting as a resource for academia as well as industry by spelling out changes in markets during this pandemic and supporting evidence from Indian banks that are catalysts of growth for businesses in uncertain times.
Article
The prediction of stock price return volatilities is important for financial companies and investors to help measure and manage market risk and to support financial decision-making. The literature points out alternative prediction models - such as the widely used heterogeneous autoregressive (HAR) specification - which attempt to forecast realized volatilities accurately. However, recent variants of artificial neural networks, such as the echo state network (ESN), which is a recurrent neural network based on the reservoir computing paradigm, have the potential to improve time series prediction. This paper proposes a novel hybrid model, named HAR-PSO-ESN, that combines the feature design of the HAR specification, the prediction power of the ESN, and the consistent particle swarm optimization (PSO) metaheuristic for hyperparameter tuning. The proposed model is benchmarked against existing specifications, such as autoregressive integrated moving average (ARIMA), HAR, multilayer perceptron (MLP), and ESN, in forecasting daily realized volatilities of three Nasdaq (National Association of Securities Dealers Automated Quotations) stocks, considering 1-day, 5-day, and 21-day ahead forecasting horizons. The predictions are evaluated in terms of R-squared and mean squared error performance metrics, and the statistical comparison is made through a Friedman test followed by a post-hoc Nemenyi test. Results show that the proposed HAR-PSO-ESN hybrid model produces more accurate predictions in most cases, with an average R² (coefficient of determination) of 0.635, 0.510, and 0.298, and an average mean squared error of 5.78x10⁻⁸, 5.78x10⁻⁸, and 1.16x10⁻⁷, for 1, 5, and 21 days ahead on the test set, respectively. The improvement is statistically significant, with an average rank of 1.44 across the three datasets and forecasting horizons.
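The HAR feature design referenced throughout can be sketched by building the daily, weekly, and monthly lag regressors; any OLS routine (or, as proposed here, an ESN) can then be fit on these features. The 5-day and 22-day horizons are the conventional HAR choices:

```python
def har_features(rv):
    """Build (X, y) rows for the HAR regression
    RV_{t+1} ~ const + RV_t + mean(RV_{t-4..t}) + mean(RV_{t-21..t}),
    given a list of daily realized volatilities `rv`."""
    X, y = [], []
    for t in range(21, len(rv) - 1):
        daily = rv[t]
        weekly = sum(rv[t - 4 : t + 1]) / 5      # 5-day average
        monthly = sum(rv[t - 21 : t + 1]) / 22   # 22-day average
        X.append([1.0, daily, weekly, monthly])  # leading 1.0 is the intercept
        y.append(rv[t + 1])
    return X, y
```

In the hybrid model described, these three aggregated lags replace raw price histories as inputs, which is what makes the HAR parameterization so parsimonious.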
Article
Within the area of stock market prediction, forecasting price values or movements is one of the most challenging issue. Because of this, the use of machine learning techniques in combination with technical analysis indicators is receiving more and more attention. In order to tackle this problem, in this paper we propose a hybrid approach to generate trading signals. To do so, our proposal consists of applying a technical indicator combined with a machine learning approach in order to produce a trading decision. The novelty of this approach lies in the simplicity and effectiveness of the hybrid rules as well as its possible extension to other technical indicators. In order to select the most suitable machine learning technique, we tested the performances of Linear Model (LM), Artificial Neural Network (ANN), Random Forests (RF) and Support Vector Regression (SVR). As technical strategies for trading, the Triple Exponential Moving Average (TEMA) and Moving Average Convergence/Divergence (MACD) were considered. We tested the resulting technique on daily trading data from three major indices: Ibex35 (IBEX), DAX and Dow Jones Industrial (DJI). Results achieved show that the addition of machine learning techniques to technical analysis strategies improves the trading signals and the competitiveness of the proposed trading rules.