Article

Time Series Analysis, Forecasting, and Control

Authors: George E. P. Box, Gwilym M. Jenkins, Gregory C. Reinsel
... AR(m) and MA(n) models require many terms when modeled separately, which makes these models complex. Box, Jenkins and Reinsel [12] introduced the ARMA model as a combination of the AR(m) and MA(n) models to handle the dependencies in the series. The ARMA(m,n) model is represented as in [6] ...
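As a minimal illustration, the sketch below (Python with statsmodels; the simulated data and the orders m = n = 1 are my assumptions, not the cited paper's) fits an ARMA(m, n) by casting it as ARIMA(m, 0, n):

```python
# Hypothetical sketch: an ARMA(1,1) fit on simulated data via statsmodels.
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(0)
# Simulate y_t = 0.6*y_{t-1} + e_t + 0.4*e_{t-1} (lag-polynomial convention flips AR signs).
y = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.4], nsample=500)

# ARMA(m, n) is ARIMA(m, 0, n): AR and MA terms combined in one parsimonious model.
res = ARIMA(y, order=(1, 0, 1)).fit()
print(res.summary())
```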
... C. GARCH analysis based on counties Table 5: Summary of GARCH analysis based on the ARMA(1,1)-GARCH(2,2) model presented in Table 3 for climatic variables and tea production for counties. Table 6 suggests that we transform Eq. (7) into Eq. (8) to show the volatility relationship between tea production and climatic variables in Embu. Table 7 suggests that we transform Eq. (7) into Eq. (9) to show the volatility relationship between tea production and climatic variables in Embu. Table 9: Summary of GARCH analysis based on the ARMA(1,1)-GARCH(2,2) model presented in Table 3. Table 9 suggests that we transform Eq. (7) into Eq. (11) to show the volatility relationship between tea production and climatic variables in Kisii. Table 10 suggests that we transform Eq. (7) into Eq. (12) to show the volatility relationship between tea production and climatic variables in Meru. Tables 10-11 report similar results, suggesting the GARCH volatility effect was unchanged between the two counties (Meru and Nyeri). Table 11 suggests that we transform Eq. (7) into Eq. (13) to show the volatility relationship between tea production and climatic variables in Nyeri. ...
Article
Full-text available
The influence of many climatic variables, such as NDVI, minimum and maximum humidity, rainfall, minimum and maximum temperature, and solar radiation, makes tea production volatile. The clustering of low- and high-volatility periods makes determining a suitable GARCH and ARIMA model difficult. Climatic variables are risk measures useful in understanding tea production data. A proper understanding of the variables that help monitor and forecast the volatility in tea production output is paramount in applied statistics. The apparent consensus on the standard performance of GARCH(1,1) can be misleading due to variation in data volatility. The nonlinear nature of tea production and climatic variables creates enduring interest among scholars in forecasting future tea production based on volatile climatic conditions. We use the Box-Jenkins methodology to outline 63 combinations of ARMA(m,n)-GARCH(p,q) models in tables, where m and n are each 0, 1, or 2. We use the AIC, BIC, and LogL criteria to select the best model. The results based on this rubric indicate that ARMA(1,1)-GARCH(2,2) is the most suitable model.
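For concreteness, here is a hedged sketch of such a grid search (Python; the `arch` and `statsmodels` packages and the random placeholder series are my assumptions). Since the `arch` mean equation offers no MA terms, the GARCH part is fitted to ARMA residuals in a two-step approximation rather than jointly, which may differ from the paper's exact procedure:

```python
# Hypothetical two-step ARMA(m,n)-GARCH(p,q) selection by AIC/BIC/log-likelihood.
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

y = np.random.default_rng(1).normal(size=600)  # placeholder for the tea-production series

best = None
for m, n in itertools.product(range(3), repeat=2):          # ARMA orders m, n in {0, 1, 2}
    resid = ARIMA(y, order=(m, 0, n)).fit().resid
    for p, q in itertools.product(range(1, 3), repeat=2):   # GARCH orders p, q in {1, 2}
        g = arch_model(resid, mean="Zero", vol="GARCH", p=p, q=q).fit(disp="off")
        if best is None or g.aic < best[0]:
            best = (g.aic, g.bic, g.loglikelihood, (m, n, p, q))

print("best (AIC, BIC, LogL, (m, n, p, q)):", best)
```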
... Recent studies [10,11,16] have shown that the ANN model can be more advantageous than SVM or LR models, and its advantage in accuracy increases with multiple attributes [3]. It works well even if the attributes and the output do not have a clear relation. ...
... The criterion for choosing the most appropriate model is lower values of AIC, SIC, and HQC, together with a DW value less than 2. According to the Durbin-Watson (DW) test and the AIC, SIC, and HQC, the best-fitted model is ARMA(2,1) for cycles 1, 6, 7, 9, 10, and 11. For cycles 2, 3, 4, and 5, the most appropriate model is ARMA(3,1). For cycle 8 the best fit is ARMA(1,1). Table 3 shows the forecasted evolution of each KSE-100 index cycle. ...
... Descriptive statistics are given for the KSE-100 index cycles (1-12). Mean absolute percentage error (MAPE) and Theil's U-statistic are used to calculate the forecast accuracy of the best-fitted model for KSE-100 index cycles 1-12. Most cycles follow ARMA(2,1) as the most appropriate model, while some cycles show ARMA(3,1) as the best adequate model. ...
Article
Full-text available
The main financial market of every country is its stock exchange, which is considered an imperative means for corporations to raise capital. One novelty of this study is to explore machine learning techniques applied to financial stock market data, to understand how machine learning algorithms can be applied, and to compare the results with time series analysis on real-life series data, which is helpful for any investor. Investors constantly review past pricing history and use it to influence their future investment decisions. Another novelty of this study is the use of news sentiments: the values are processed into lists displaying and representing the stock and predicting future rates to describe the market and to compare investments, which helps avoid uncertainty among investors regarding the stock index. An artificial neural network technique is used for prediction on the KSE-100 index closing-day data. In this regard, six-month data cycles were used to train the data, and statistical inference was applied using an ARMA(p, q) model to calculate numerical results. A further novelty of this study is to find the relation between them, whether they are strongly correlated or not, using machine learning techniques and the ARMA(p, q) process to forecast the behavior of KSE-100 index cycles. Model adequacy is assessed via the lowest values of the Akaike information criterion (AIC), the Schwarz Bayesian information criterion (SIC), and the Hannan-Quinn information criterion (HQC). The Durbin-Watson (DW) test is also applied; DW values (< 2) show that all cycles are strongly correlated. Most of the KSE-100 index cycles indicate that the appropriate model is ARMA(2,1). Cycles 2, 3, 4, and 5 show that ARMA(3,1) is the best fit, cycle 8 shows ARMA(1,1) as the best fit, and cycle 12 shows that the most appropriate model is ARMA(4,1).
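A minimal sketch of this selection procedure (Python with statsmodels; the random placeholder series and the candidate order list are illustrative assumptions):

```python
# Sketch: compare ARMA(p, q) candidates for one index cycle by AIC/SIC/HQC,
# then check residual autocorrelation with the Durbin-Watson statistic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.stattools import durbin_watson

cycle = np.random.default_rng(2).normal(size=180)  # placeholder for a KSE-100 cycle

for p, q in [(1, 1), (2, 1), (3, 1), (4, 1)]:      # candidate orders reported in the paper
    res = ARIMA(cycle, order=(p, 0, q)).fit()
    dw = durbin_watson(res.resid)
    print(f"ARMA({p},{q}): AIC={res.aic:.1f}  BIC={res.bic:.1f}  "
          f"HQIC={res.hqic:.1f}  DW={dw:.2f}")
```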
... In fact, noisy, nonlinear and random characteristics exist in financial time series data, and the influencing factors are numerous and complex [32]. However, Edwards et al. [14] showed that trends in financial time series tend to repeat, and some special time series patterns reappear in future series in a very similar way [4]. Historical trading data, such as opening prices, closing prices, maximum prices, minimum prices and trading volumes, directly reflect changes in the financial market and generate other technical indicators to assist in judging the trends of stocks. ...
... Historical trading data, such as opening prices, closing prices, maximum prices, minimum prices and trading volumes, directly reflect the changes in the financial market and generate other technical indicators to assist in judging the trends of stocks. The autoregressive (AR), autoregressive moving average (ARMA), and autoregressive integrated moving average (ARIMA) models and optimization models use trading time series data for linear analysis [4,10,38], and some models, such as RNNs, CNNs and long short-term memory (LSTM), process historical trading data and derivative indexes into tensors for nonlinear analysis and the prediction of stock market fluctuations [19]. Roondiwala et al. [42] used an RNN and LSTM to predict Nifty50 stocks. ...
Article
Full-text available
The study of the prediction of stock market volatility is of great significance to rationally control financial market risks and increase excessive investment returns and has received extensive attention from academic and commercial circles. However, as a dynamic and complex system, the stock market is affected by multiple factors and has a comprehensive capability to include complex financial data. Given that the explanatory variables of influencing factors are diverse, heterogeneous and complex, the existing intelligent algorithms have great limitations for the analysis and processing of multi-source heterogeneous data in the stock market. Therefore, this study adopts the edge weight and information transmission mechanism suitable for subgraph data to complete node screening, the gate recurrent unit (GRU) and long short-term memory (LSTM) to aggregate subgraph nodes. The compiled data contain the metapaths of three types of index data, and the introduction of the association relationship attention dimension effectively mines the implicit meanings of multi-source heterogeneous data. The metapath attention mechanism is combined with a graph neural network to complete the classification of multi-source heterogeneous graph data, by which the prediction of stock market volatility is realized. The results show that the above method is feasible for the fusion of heterogeneous stock market data and the mining of implicit semantic information of association relations. The accuracy of the proposed method for the prediction of stock market volatility in this study is 16.64% higher than that of the dimensional reduction index and 14.48% higher than that of other methods for the fusion and prediction of heterogeneous data using the same model.
... Autocorrelation can be used to determine whether a time series is stationary. The decorrelation time, on the other hand, is defined as the autocorrelation function's first zero-crossing (Box et al. 2008). When the interval between the first and second zero-crossings approaches zero, the signal samples are less correlated. ...
... For seizure prediction, the spectral power in different frequency bands of the EEG was also taken into account. The transfer of power from lower to higher frequencies was seen before the seizure onset, according to Box et al. (2008). Each sub-spectral band's power is computed by performing a Fast Fourier Transform (FFT) on the segmented signals and then summing the resulting Fourier coefficients within that sub-band. ...
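A small sketch of the decorrelation-time computation described above (Python with statsmodels; the 256 Hz sampling rate and the white-noise segment are assumptions for illustration):

```python
# Sketch: decorrelation time as the first zero-crossing of the ACF.
import numpy as np
from statsmodels.tsa.stattools import acf

fs = 256                                  # assumed sampling rate (Hz) of a 5-s EEG segment
x = np.random.default_rng(3).normal(size=5 * fs)  # white-noise stand-in for one segment

rho = acf(x, nlags=fs, fft=True)
crossing = int(np.argmax(rho < 0))        # first lag with a sign change (0 if none occurs)
print(f"first zero-crossing at lag {crossing} = {crossing / fs:.3f} s")
```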
Article
Full-text available
Predicting seizures before they happen can help prevent them through medication. In this research, first, a total of 22 features were extracted from 5-s segmented EEG signals. Second, tensors were developed as inputs for different deep transfer learning models to find the best model for predicting epileptic seizures. The effect of Pre-ictal state duration was also investigated by selecting four different intervals of 10, 20, 30, and 40 min. Then, nine models were created by combining three ImageNet convolutional networks with three classifiers and were examined for predicting seizures patient-dependently. The Xception convolutional network with a Fully Connected (FC) classifier achieved an average sensitivity of 98.47% and a False Prediction Rate (FPR) of 0.031 h⁻¹ in a 40-min Pre-ictal state for ten patients from the European database. The most promising result of this study was the patient-independent prediction of epileptic seizures; the MobileNet-V2 model with an FC classifier was trained with one patient’s data and tested on six other patients, achieving a sensitivity rate of 98.39% and an FPR of 0.029 h⁻¹ for a 40-min Pre-ictal scheme.
... The definition of the autoregressive integrated moving average (ARIMA) model was suggested by [37]. A stationary time series x_t is called an ARIMA model of order (p, d, q), represented as ARIMA(p, d, q), if ...
... Model fitting information criterion is commonly used in model selection. Typically applied information criteria are Akaike information criterion (AIC), Schwarz Bayesian information criterion (BIC), Corrected AIC (AICc), and Hannan-Quinn (HQ) information criterion [37], [47]- [49]. The optimal model demonstrates the minimum AIC, BIC, AICc, and HQ criterion values. ...
Preprint
Full-text available
Change detection is an important synthetic aperture radar (SAR) application, usually used to detect changes in ground scene measurements at different moments in time. Traditionally, a change detection algorithm (CDA) is designed for two SAR images retrieved at different instants. However, more images can be used to improve an algorithm's performance, which has emerged as a research topic in SAR change detection. Image stack information can be treated as a data series over time and can be modeled by autoregressive (AR) models. Thus, we present some initial findings on SAR change detection based on image stacks considering AR models. Applying an AR model to each pixel position in the image stack, we obtain an estimated image of the ground scene, which can be used as a reference image for a CDA. The experimental results reveal that ground scene estimates from the AR models are accurate and can be used for change detection applications.
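A toy sketch of the per-pixel AR idea (Python with statsmodels; the gamma-distributed random stack and the AR(1) order are my assumptions, not the authors' exact setup):

```python
# Sketch: per-pixel AR(1) estimates over an image stack yield a reference image;
# large deviations of the newest image from it flag candidate changes.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

stack = np.random.default_rng(4).gamma(2.0, 1.0, size=(12, 32, 32))  # toy speckle-like stack

reference = np.empty(stack.shape[1:])
for i in range(stack.shape[1]):
    for j in range(stack.shape[2]):
        res = AutoReg(stack[:, i, j], lags=1).fit()
        # in-sample one-step prediction for the latest instant as the pixel estimate
        reference[i, j] = res.predict(start=len(stack) - 1, end=len(stack) - 1)[0]

change_map = np.abs(stack[-1] - reference)  # large residuals indicate possible changes
```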
... The data on hand injuries from 2012 to 2018 was used as the forecasting dataset to derive an R 2 value. We established and selected the best SARIMA model (p, d, q) × (P, D, Q) according to the steps introduced by Box and Jenkins [21,34] (Fig. 1), including minimizing the Bayesian information criterion (BIC). Autoregressive lags, moving average lags, seasonal autoregressive lags, and seasonal moving average lags are indicated by p, q, P, and Q, respectively [35]. ...
... While there are several methods for time series forecasting, including SARIMA and exponential smoothing methods, SARIMA models have shown better performance in predicting road traffic injuries in relation to gender [19]. Advantages of the SARIMA model include its autoregressive, moving-average and seasonal components, which capture trend, autocorrelation, smoothing and seasonality [34]. Previously acknowledged disadvantages of the SARIMA model include the requirement of longitudinal data with a large sample size [19]. ...
Article
Full-text available
Purpose This study aims to analyse the correlation between the incidence rate of hand injuries and various major economic indicators in Singapore. We hypothesise that the number of hand injuries is correlated with activity in the construction and manufacturing industries in Singapore. Methods Twenty thousand seven hundred sixty-four patients who underwent hand surgeries in a tertiary institution between 2012 and 2018 were reviewed. Two independent, blinded observers extracted the frequency of hand surgeries performed from Electronic Medical Records. Economic indicators pertinent to Singapore's economic activity were collected and smoothed by a simple moving average of the prior 3 months. Results were analysed using IBM SPSS v25.0. Results Significant independent univariate variables were the Purchasing-Manager-Index and Industrial-Production-Index. Multiple linear regression of quarterly reported figures showed that Total-Livestock-Slaughtered, Total-Seafood-Handled, Purchasing-Manager-Index, Industrial-Production-Index, Gas-Tariffs, Construction-Index, Consumer-Price-Index, Total-Air-Cargo-Handled, Total-Container-Throughput, Total-Road-Traffic-Accident-Casualties, and Food-&-Beverage-Services-Index were significantly correlated (p < 0.05) with hand injuries, with R² = 62.3%. Conclusion Quarterly economic indicators from major economic industries can be used to predict the incidence of hand injuries with a 62.3% correlation. These findings may be useful for anticipating healthcare resource allocation to treat hand injuries. Type of study and level of evidence Economic and decision, Level II.
... A general reference for weather data requirements for index-based insurance applications is to have at least 20 years of historical weather data (Dick et al., 2011). This paper uses a statistical approach to build an ARIMA model (e.g., Box et al., 1994) to forecast monthly and yearly solar PV power generation in Changhua, Taiwan, based on the daily solar irradiation data obtained from NASA over 35 years. The estimation results help to understand the impact of solar irradiation on the energy production of the solar PV power plant. ...
... Considering a stochastic seasonal process, we employ the multiplicative seasonal ARIMA model, denoted as ARIMA (p,d,q) × (P,D,Q)s, to handle the complicated fluctuations. For details about this model, please refer to Chapter 9 of the study by Box et al. (1994). One of the optimal models for fitting the solar energy data during the period 01/1984-12/2017 is ARIMA (0,0,2) × (0,1,1) 12 : Table 1 lists the estimation result using R (https://www.r-project.org/). ...
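The following sketch refits that specification on synthetic monthly data (Python with statsmodels; the sinusoidal toy series merely stands in for the NASA irradiation data):

```python
# Sketch: the reported ARIMA(0,0,2)x(0,1,1)_12 refitted on synthetic monthly data.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("1984-01", "2017-12", freq="MS")
y = pd.Series(100 + 30 * np.sin(2 * np.pi * idx.month / 12)
              + np.random.default_rng(5).normal(0, 5, len(idx)), index=idx)

res = SARIMAX(y, order=(0, 0, 2), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
forecast = res.get_forecast(steps=12).predicted_mean  # next year's monthly estimates
print(forecast)
```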
Article
Solar energy is one of the fastest‐growing renewable energy resources globally, with solar photovoltaic (PV) technology being a promising application designed to add usable solar power to the national energy mix of several countries. The Taiwan government announced that the targeted amount of electricity generated from renewable sources shall increase to 20% of its total energy supply by 2025, in which 20 GW capacity is expected to be achieved by solar PV power. This study aims to design index‐based insurance to manage the volatility risk of solar radiation on energy production. We apply auto‐regressive integrated moving average models to predict monthly and annual energy production for a solar PV power plant in Taiwan and use the estimated results to calculate the pure premium rates for the designed insurance product. The daily data on solar irradiation are from NASA surface meteorology and solar energy, spanning 35 years from January 1984 to December 2018. The analyzed results reveal that the index‐based insurance approach can protect against the impact of insufficient solar radiation on solar PV energy production to strengthen investment security for solar PV projects.
... This assumption implies multiple means and, consequently, various scales or standard deviations. The first developments in statistics and econometrics assumed that a given time series could be modeled with a linear conditional mean model (such as a time series model or a multifactor regression one [35][36][37][38]), along with either a linear scale parameter or a non-linear one, as is the case of a Generalized Autoregressive Conditional Heteroskedastic (GARCH) model [39][40][41][42]: ...
... For this purpose, we used the weighting method in (35): ...
Article
Full-text available
Abstract: In this paper, we tested the benefit of using Markov-Switching (M-S) models to forecast the views of the 26 most traded stocks in the U.S. in a Black-Litterman (B-L) optimal selection context. With weekly historical data of these stocks from January 1, 1980, we estimated and simulated (from January 7, 2000, to February 7, 2022) three portfolios that used M-S views in each stock and blended them with the market equilibrium views in a B-L context. Our position was that the B-L optimal portfolios could generate alpha (extra return) against a buy-and-hold and an actively managed portfolio with sample portfolio parameters (à la Markowitz, SampP). Our results suggest that the outperformance of the B-L managed portfolios holds only in the short term. In the long term, the performances of the B-L portfolios, the SampP, and the market portfolio are statistically equal in terms of returns or their mean-variance efficiency in an ex-ante or ex-post analysis. Keywords: Markov-Switching; optimal portfolio selection; Black-Litterman; active portfolio management; algorithmic trading; mean-variance portfolio efficiency; optimal portfolio selection uncertainty
... F(x_1, ..., x_n; t_1, ..., t_n) = F(x_1, ..., x_n; t_1 + τ, ..., t_n + τ) (A.29) for any τ, e.g., [127]. For the first-order distribution applies ...
... As stated in [117], a random process is SSS if the ACF and autocovariance function do not depend on the time t. It is worth mentioning that a linear combination of stationary processes is also stationary [127]. Ergodicity is (almost) always assumed in physical applications, e.g., for the analysis of time series. ...
Thesis
Full-text available
URL for Download: https://doi.org/10.22032/dbt.52114 This thesis addresses two topics that play a significant role in modern control theory: design of experiments (DoE) and parameter estimation methods for continuous-time (CT) models. In this context, DoE focuses on the impact of experimental design on the accuracy of a subsequent estimation of unknown model parameters, applying the theory to real-world applications with detailed analysis. We introduce the Fisher information matrix (FIM), consisting of the parameter sensitivities, and the resulting highly nonlinear optimization task. Using a first-order system, we demonstrate the computation of the information content, its visualization, and an illustration of the effects of higher Fisher information on parameter estimation quality. After that, the topic of optimal input design (OID), a subarea of DoE, is thoroughly explored on the practice-relevant linear and nonlinear models of a 1D-position servo system. Comparison with standard excitation signals shows that OID signals generally provide higher information content and lead to more accurate parameter estimates using least-squares methods. Besides, this approach allows taking into account constraints on input, output, and state variables. In the second major topic of this thesis, we treat parameter estimation methods for CT systems, which provide several advantages over identifying discrete-time (DT) systems, e.g., allowing physical insight into the model parameters. We focus on the modulating function method (MFM) or Poisson moment functionals (PMF) and least squares to estimate unknown model parameters. In the case of noisy measurement data, the problem of biased parameter estimation arises immediately, which is why we discuss the computation and compensation of the so-called estimation bias in detail. Besides the detailed elaboration of a bias-compensating estimation method, this work's main contribution is, based on PMF and least squares for linear systems, the extension to at least slightly nonlinear systems. The derived bias-compensated ordinary least-squares (BC-OLS) approach for obtaining asymptotically unbiased parameter estimates is tested on a nonlinear 1D-servo model in simulation and measurement. A comparison with other methods for bias compensation or avoidance, e.g., total least squares (TLS), is performed. Additionally, the BC-OLS method is applied to the more general MFM. Furthermore, a practical issue of parameter estimation is discussed, which occurs when the system behavior leaves and re-enters the space covered by the identification equation. Using the 1D-servo system, one can show that disabling and re-enabling the PMF filters with appropriate initialization can solve this problem.
... The temporal autocorrelation characteristic of daily mean precipitation amounts can be accurately represented by a parametric first-order autoregressive (AR) model (Hannachi, 2014;Kim and Ryu, 2015;Wiuff, 2020), and for modeling the spatial cross-correlation characteristics of multisite precipitation amounts, the copula has become the most popular method (Bárdossy and Pegram, 2009;Ben Alaya et al., 2016;Lee, 2018;Li and Babovic, 2019). Recently, the vector autoregressive (VAR) model (Box et al., 2008) has received much attention since it can simultaneously model both the temporal autocorrelation and spatial cross-correlations of multisite precipitation amounts (Ben Alaya et al., 2015;Serinaldi and Kilsby, 2014). ...
... Among multisite time series models, the vector autoregressive (VAR) model is most widely used in practice due to its relative ease of implementation (Box et al., 2008), which is suitable for modeling the spatiotemporal correlations of multisite time series processes both contemporaneously and across time lags. In this study, the 1-order vector autoregressive model, or VAR(1) model is used to reproduce the spatiotemporal correlations of actual multisite daily mean precipitation amounts. ...
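A minimal VAR(1) sketch (Python with statsmodels; the three hypothetical stations and the gamma-distributed toy amounts are assumptions):

```python
# Sketch: a VAR(1) captures lag-1 temporal and cross-site dependence simultaneously.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

data = pd.DataFrame(np.random.default_rng(6).gamma(1.5, 2.0, size=(1000, 3)),
                    columns=["station_A", "station_B", "station_C"])

res = VAR(data).fit(1)             # today's field regressed on yesterday's
print(res.coefs[0])                # 3x3 lag-1 coefficient matrix
sim = res.simulate_var(steps=365)  # synthetic year preserving spatiotemporal correlation
```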
Article
Full-text available
The nonhomogeneous hidden Markov model (NHMM) is a popular statistical downscaling (SD) approach that can be used for multisite precipitation estimation. However, the NHMM assumes that once the days with the occurrence of precipitation have been determined, the precipitation amounts at each station on wet days can be conditionally independently generated from a parametric family of probability distribution functions without considering impacts from the past precipitation amounts and the neighboring stations. Such assumptions may lead to underestimation of the spatiotemporal correlations (i.e., temporal autocorrelation and spatial cross-correlations) of actual multisite precipitation amounts. Thus, this study developed a stepwise downscaling method called the nonhomogeneous hidden Markov model-vector autoregressive (NHMM-VAR), in which the NHMM is used to determine the probabilities for dry days and marginal distributions of multisite daily mean precipitation amounts on wet days based on the large-scale atmospheric covariates, then a VAR(1) model is used to reproduce the spatiotemporal correlations of actual multisite daily mean precipitation amounts. A comparison experiment between the NHMM and the NHMM-VAR models is conducted against outputs from eight global climate models (GCMs) of Phase 6 of the Coupled Model Intercomparison Project (CMIP6) over the Pearl River basin (PRB) of South China. The results show that the NHMM-VAR model performs well not only in modeling the statistical properties but also in reproducing the spatiotemporal correlations of actual multisite daily mean precipitation amounts, which indicates that the VAR model does add value to the NHMM downscaling method.
... The econometrics applied in this study include the Box-Jenkins technique (Box et al., 1994), or the seasonal autoregressive integrated moving average (SARIMA). The forecasting method can be divided into five steps as follows (Rueangrit et al., 2020; Cumroon et al., 2021; Ratsaminet et al., 2021). ...
... Step 2: To identify the forecasting model, the SARIMA of Box-Jenkins (Box et al., 1994) is applied using the correlogram. The orders of autoregression and seasonal autoregression, AR(p) and SAR(P), are identified using the partial autocorrelation function (PACF). ...
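A sketch of this identification step (Python with statsmodels and matplotlib; white noise stands in for the 168-month import series):

```python
# Sketch: correlogram-based identification; PACF spikes at seasonal lags (12, 24, ...)
# suggest SAR(P) terms, while spikes at low lags suggest AR(p) terms.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

y = np.random.default_rng(7).normal(size=168)  # placeholder for 168 monthly imports

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=36, ax=axes[0])
plot_pacf(y, lags=36, ax=axes[1])
plt.show()
```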
Article
Full-text available
This study aims to forecast the import demand for chemical fertilizer in Thailand, namely nitrogen fertilizer, potassium fertilizer, and compound fertilizer. Secondary time series data are employed, covering the period from January 2008 to December 2021, or 168 months. The forecasting technique used is the seasonal autoregressive integrated moving average (SARIMA), or Box-Jenkins, method. The empirical results show that the appropriate models for forecasting the import demand for nitrogen fertilizer, potassium fertilizer, and compound fertilizer, based on the lowest values of the Akaike criterion and Schwarz criterion, are SARIMA(0,0,0)(0,1,1)_12, SARIMA(2,0,1)(0,1,1)_12, and SARIMA(1,0,0)(2,1,0)_12, respectively. The forecasts for the next 12 months (January to December 2022) reveal that, compared to the previous year, the total import demand for nitrogen fertilizer increased by 5.12%, potassium fertilizer decreased by 8.74%, and compound fertilizer increased by 4.74%.
Book
Full-text available
The book further developed the theory of spectral analysis of currents and voltages of converters based on the Fourier series of several variables and the calculation of parameters of electricity quality in a condensed analytical form. Taken together, the results are a theoretical generalization and a new solution to the problem of improving the analysis and control of semiconductor converters based on the theory of spectral analysis and synthesis of control laws based on the use of Fourier series of several variables.
... The EDM-Simplex algorithm was first introduced to distinguish chaotic determinism from randomness. In time-series analysis, models are traditionally evaluated by checking the residual distribution and their linear autocorrelation (Box et al., 2008). Regardless of the model type used to forecast the recession or other important hydrological dynamics, EDM-Simplex can be used to forecast the residuals and check if there remains any low-dimensional deterministic pattern. ...
Thesis
Full-text available
Hydrological systems seem simple, "everything flows", but prove to be even more complex when one tries to differentiate and characterize flows in detail. Hydrology has thus developed a plurality of models that testify to the complexity of hydrological systems and the variety of their causal representations, from the simplest to the most sophisticated. Beyond a subjective complexity linked to our difficulty in understanding or our attention to detail, hydrological systems also present an intrinsic complexity. These include the nonlinearity of processes and interactions between variables, the number of variables in the system, or dimension, and how they are organized to further simplify or complicate the system's dynamics. The thesis addresses these aspects of hydrological complexity. An epistemological and historical analysis of the concept of causality explores the human understanding of hydrological systems. Based on empirical approaches applied to the limestone karstic system of the Lhomme at Rochefort in Belgium, the thesis then studies methods to analyze the nonlinearity of the Lhomme river recession and associate it with the geomorphological complexity of the watershed. The thesis also handles the discrimination of dominant dynamic behaviors in the hydrological continuum of the Rochefort caves subsurface, based on an electrical resistivity model of the subsurface and clustering methods grouping time series according to their similarity. Ref: Delforge, Damien. Causal analysis of hydrological systems: case study of the Lhomme karst system, Belgium. Prom.: Vanclooster, Marnik; Van Camp, Michel. Permalink: http://hdl.handle.net/2078.1/240635
... For the study of time series, exponential smoothing models and models from the Box-Jenkins methodology have been proposed (Box et al., 2008). Among these, the autoregressive moving average (ARMA) models, used for stationary series, also stand out. ...
... Models with a multiplicative error are only useful when the data are strictly positive, and they prove to be numerically unstable when the data contain zeros or negative values. Thus, when a time series is not strictly positive, only the additive models can be applied [38], [20]-[23]. This instability is caused by the need for a logarithmic transformation in the purely multiplicative model. ...
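A short sketch contrasting the two error structures (Python with statsmodels; the synthetic humidity-like series is an assumption): the additive specification below is always applicable, while switching to seasonal="mul" presumes strictly positive data.

```python
# Sketch: additive Holt-Winters; seasonal="mul" would require strictly positive data.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("1996-01", periods=312, freq="MS")
y = pd.Series(80 + 10 * np.sin(2 * np.pi * idx.month / 12)
              + np.random.default_rng(8).normal(0, 2, len(idx)), index=idx)

fit = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12).fit()
print(fit.forecast(12))  # 12-month-ahead forecast
```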
Preprint
Full-text available
As its capital, Jakarta plays a critical role in boosting Indonesia's economic growth and setting the precedent for broader change outside the city. One crucial avenue of inquiry to better understand, and prepare for, the future of a country so heavily impacted by disastrous weather events is understanding the effects of climate change through data. This study investigates meteorological data collected from 1996 to 2021 and compares the SARIMA and Holt-Winters methods for predicting the future influence of climatic parameters on Jakarta's weather. The SARIMA method is shown to provide better results than the Holt-Winters models, and both methods perform best when forecasting the humidity data. The forecast results demonstrate the characteristics of Jakarta's climate, with a dry season from May to October and a wet season from November to April.
... Since the expectations of the base and spike components in (20) are calculated with respect to the real time t and the simulation time k, their notation needs to change to E_{k|t}{M} and E_{k|t}{M̆} to reflect the time dependence. Further, it follows that at the real time t and the simulation time k > 1 the expected values are computed as predictions E_{k|t}{M} = M̂_{k|t}, which are generated by an autoregressive integrated moving average (ARIMA) [30] model. For k = 1 the expected value of the base price is equal to the observed market price at time t, i.e. ...
Article
Full-text available
This paper presents a control strategy for residential battery energy storage systems exposed to a volatile electricity market and a daily cyclic load. The proposed control strategy aims to achieve optimal economic benefits for customers in changeable conditions. A modification of a conventional model predictive control of battery energy storage systems is at the core of the proposed control strategy. For daily cycling load, the equality terminal constraint on the variable length horizon is considered. The proposed optimization method is convex, and it guarantees a globally optimal solution. For better modelling of the effects of volatility on the customer benefit, a new cost function is introduced. The newly presented cost function models a probabilistic relation between power exchanged with grid, the net load and electricity market. The probabilistic calculation of the cost function shows dependence on the mathematical expectation of the market price and the net load. Computational techniques to calculate this value are presented. The proposed strategy differs from stochastic and robust model predictive control because the cost is calculated across the market price and the net load variations rather than model constraints and parameter variations. Index Terms-Optimal control, model predictive control, energy market, nonlinear constrained optimization , revenue for battery energy storage system, a Gaussian mixture model, an autoregressive integrated moving average model.
... This test has good power and allows for consistent estimation. To model long memory features, different models have been proposed to capture the slow power-decay characteristics. The fractionally differenced white noise model and its ARFIMA extension, displaying a hyperbolic decay, are among the most widely adopted models (Box, Jenkins, & Reinsel, 2008; Granger & Joyeux, 1980; Hosking, 1981; Palma, 2007; Baillie, 1996). ...
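For concreteness, a small sketch of the weights of the fractional difference operator (1 - B)^d, whose hyperbolic decay underlies ARFIMA long memory (Python; d = 0.3 is an arbitrary illustrative value):

```python
# Sketch: weights of (1 - B)^d from the recursion w_k = -w_{k-1} * (d - k + 1) / k;
# they decay hyperbolically (~ k^(-d-1)), the signature of long memory.
import numpy as np

def fracdiff_weights(d: float, n: int) -> np.ndarray:
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return w

print(fracdiff_weights(d=0.3, n=10))
```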
Article
In this paper, we study the long memory behavior of Bitcoin, Litecoin, Ethereum, Ripple, Monero, and Dash with a focus on the COVID-19 period. Initially, we apply a time-varying Lifting method to estimate the Hurst exponent for each cryptocurrency. Then we test for a change in persistence over time. To model the multivariate connectivity, the wavelet-based multivariate long memory approach proposed by Achard and Gannaz (2016) is implemented. Our results indicate a change in the long-range dependence for the majority of cryptocurrencies, with a noticeable downward trend in persistence after the 2017 bubble and then a dramatic drop after the outbreak of COVID-19. The drop in persistence after COVID-19 is further illustrated by the Fractal connectivity matrix obtained from the Wavelet long-memory model. Our findings provide important implications regarding the evolution of market efficiency in the cryptocurrency market and the associated fractal structure and dynamics of the crypto prices over time.
... The corpus descriptive analysis characterized the documents' authors and co-authors. Regarding co-citation, the following authors stand out: Lu (1999), Box (1994), Alwan (1988), and Montgomery (1991), with 10, 6, 11, and 9 citations, respectively. ...
Article
Full-text available
This research aims to present a literature review (LR) on control charts for autocorrelated processes, intending to contribute to scientific knowledge in the process management area. The article was constructed around a research question defined ex-ante and elaborated based on the literature, with a research protocol adapted from Tranfield, Denyer, and Smart (2003), systematized according to the methodological rigor demanded in a literature review, which resulted in the composition of the research corpus. The research corpus was evaluated in detail using the bibliometric packages HistCite, VOSviewer, the R package, and Iramuteq. The validity of three bibliometric laws was verified: Lotka's Law, Bradford's Law, and Zipf's Law, based on author citation and co-citation techniques. This study is considered relevant and original since it is the first literature review on control charts for autocorrelated processes. The results confirmed the three classical bibliometric laws for the researched corpus. As a practical implication, this review provides support for quality management scholars to unlock potential research gaps. The main limitation of the work is the composition of the textual corpus, given that databases of national journals were not consulted due to compatibility difficulties with the software used in this research.
... The autoregressive integrated moving average (ARIMA) model was introduced by Box and Jenkins (Box and Jenkins 1976). It is one of the most robust linear models for time series prediction. ...
Article
Full-text available
In this study, statistical and soft-computing methods are compared in forecasting groundwater levels under Shared Socioeconomic Pathways (SSPs) SSP1-2.6, SSP2-4.5, and SSP5-8.5 from Coupled Model Intercomparison Project Phase 6 (CMIP6) in Tasuj Plain, Iran, for a near future period (2022–2027). A combination of general circulation models (GCMs) was used in the projection of precipitation and temperature in the future period. The estimation of climate variables for 2020–2044 period indicated that the temperature will increase, while the precipitation will decline. In simulation temporal groundwater level, wavelet-nonlinear autoregressive network with exogenous inputs (NARX), autoregressive integrated moving average (ARIMA), and wavelet-adaptive neuro-fuzzy inference system (ANFIS) models were used. The comparison of performance criteria for these models in the simulation of groundwater level demonstrated that the wavelet-NARX model with the R² of 0.99 has shown better efficiency. Finally, the simulation of groundwater level through the wavelet-NARX model was carried out for different scenarios. The results indicated that future groundwater levels in Tasuj Plain would continue to decline by 3.12 m, 3.96 m, and 4.79 m, for SSP1-2.6, SSP2-4.5, and SSP5-8.5, respectively.
... Also, the lag should be a fraction of the sequence length. For example, the Stata implementation [19] uses the rule m = min(n/2, 40), while Box et al. [20] suggest m = 20, and Tsay [21] suggests m = ln(n), warning that when seasonal behavior is expected it needs to be taken into consideration, and lag values at multiples of the seasonality are more important. Escanciano and Lobato [22] present a portmanteau test ...
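A sketch applying these lag rules with the Ljung-Box test (Python with statsmodels; the white-noise series and the length n = 300 are assumptions):

```python
# Sketch: Ljung-Box Q at the cited lag choices: min(n/2, 40), 20, and ln(n).
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

x = np.random.default_rng(9).normal(size=300)
n = len(x)

lags = sorted({min(n // 2, 40), 20, max(1, int(np.log(n)))})
print(acorr_ljungbox(x, lags=lags))  # DataFrame of Q statistics and p-values per lag
```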
Conference Paper
Full-text available
Organizations collect data from various sources, and these datasets may have characteristics that are unknown. Selecting the appropriate statistical and machine learning algorithm for data analytics benefits from understanding these characteristics, such as whether the data contain temporal attributes. This paper presents a theoretical basis for automatically determining the presence of temporal data in a dataset given no prior knowledge about its attributes. We use a method to classify an attribute as temporal, non-temporal, or hidden temporal. A hidden (grouping) temporal attribute can only be treated as temporal if its values are categorized in groups. Our method uses a Ljung-Box test for autocorrelation as well as a set of metrics we propose based on the classification statistics. Our approach detects all temporal and hidden temporal attributes in 15 datasets from various domains.
... Statistical properties of the image noise were analyzed for each set of fixed power levels. To verify the hypothesis of uncorrelatedness (necessary for independence), the one-dimensional Ljung-Box Q (LBQ) test [49,50] was applied along both the fast axis and the slow axis (see figure 3a) of each image (k = 1, ..., 10). To accept the hypothesis of uncorrelatedness for the considered directions (fast axis and slow axis), the percentage of confirmation was fixed at 95%. ...
Article
Full-text available
In the literature on SRS microscopy, hardware characterization usually remains separate from image processing. In this paper, we consider both of these aspects together with the statistical analysis of image noise, which plays the vital role of linking them. Firstly, we perform hardware characterization by systematic measurements of noise sources, demonstrating that our in-house built microscope is shot-noise limited. Secondly, we analyze the statistical properties of the overall image noise, and we prove that the noise distribution can depend on image direction, the origin of which is the use of a lock-in time constant longer than the pixel dwell time. Finally, we compare the performance of two widespread general algorithms, i.e., Singular Value Decomposition (SVD) and Discrete Wavelet Transform (DWT), with a method, Singular Spectrum Analysis (SSA), which has been adapted for SRS images. To validate our algorithms, lipid droplets (LDs) have been used in our investigations, and we demonstrate that the adapted SSA method provides an improvement in image denoising.
... To solve it, a forecasting model has to deal with short-and long-term dynamics as well as a trend and variable variance. Classical statistical methods such as autoregressive moving average (ARMA) and exponential smoothing methods can be extended to multiple seasonal cycles [1,2] but they suffer from many drawbacks. The most important of these are: their linear nature, limited adaptability, limited ability to model complex seasonal patterns, problems with capturing long-term dependencies and problems with introducing exogenous variables. ...
Preprint
Full-text available
This paper compares recurrent neural networks (RNNs) with different types of gated cells for forecasting time series with multiple seasonality. The cells we compare include classical long short term memory (LSTM), gated recurrent unit (GRU), modified LSTM with dilation, and two new cells we proposed recently, which are equipped with dilation and attention mechanisms. To model the temporal dependencies of different scales, our RNN architecture has multiple dilated recurrent layers stacked with hierarchical dilations. The proposed RNN produces both point forecasts and predictive intervals (PIs) for them. An empirical study concerning short-term electrical load forecasting for 35 European countries confirmed that the new gated cells with dilation and attention performed best.
... An intrinsic feature of the time-domain approach is that, typically, adjacent points in time are correlated and that future values are related to past and present values. Autoregressive integrated moving average (ARIMA) modeling is one of the most widely implemented methods for analyzing univariate time series data (Box and Jenkins, 1976). In order to understand the modelling procedure, it is useful to briefly introduce the following basic models. ...
Article
Full-text available
This paper investigates the exchange strength of the Nigerian naira with respect to the United States dollar and fits an appropriate model to the data using the Box-Jenkins approach; the data span the period 1972 to 2014. The results revealed that the exchange rate of the naira to the U.S. dollar was relatively stable from 1972 to 1985, followed by a continuous upward trend from 1985 to 2014. The series was slightly stationary after the 1st difference and sufficiently stationary after the 2nd difference, meaning that the series was either I(1) or I(2). Based on the AIC and SIC selection criteria, the best model explaining the series was found to be ARIMA(0,2,1). A test of model adequacy confirmed that the model is adequate (i.e., the errors were white noise), a forecast for a period of six (6) years was made, and the forecasted values were all within the confidence limits, indicating that if positive measures are not taken, the value of the naira will continue to depreciate.
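A hedged sketch of refitting such a model and producing an interval forecast (Python with statsmodels; the exponential toy series mimics a depreciating rate and is not the actual naira data):

```python
# Sketch: ARIMA(0,2,1) with a six-step interval forecast on annual toy data (1972-2014).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

idx = pd.period_range("1972", "2014", freq="Y")
rate = pd.Series(np.exp(np.linspace(0, 5, len(idx)))
                 + np.random.default_rng(10).normal(0, 2, len(idx)), index=idx)

res = ARIMA(rate, order=(0, 2, 1)).fit()
fc = res.get_forecast(steps=6)
print(pd.concat([fc.predicted_mean, fc.conf_int()], axis=1))  # forecasts with 95% limits
```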
... ∆t was selected by calculating the ACF cutoff time (t_cut) for the HR time series. The ACF defines how data points in a time series are related, on average, to the preceding data points [27]. The red point is the intersection between the ACF and the upper border of the confidence interval. ...
Article
Full-text available
VO2max index has a significant impact on overall health. Its estimation through wearables notifies the user of his level of fitness but cannot provide a detailed analysis of the time intervals in which heartbeat dynamics are changed and/or fatigue is emerging. Here, we developed a multiple modality biosignal processing method to investigate running sessions to characterize in real time heartbeat dynamics in response to external energy demand. We isolated dynamic regimes whose fraction increases with the VO2max and with the emergence of neuromuscular fatigue. This analysis can be extremely valuable by providing personalized feedback about the user’s fitness level improvement that can be realized by developing personalized exercise plans aimed to target a contextual increase in the dynamic regime fraction related to VO2max increase, at the expense of the dynamic regime fraction related to the emergence of fatigue. These strategies can ultimately result in the reduction in cardiovascular risk.
... used the start_value_rho() function from the itsadug package (van Rij et al., 2017) to determine the starting lag values (Box et al., 1994). Models were fitted using restricted maximum-likelihood estimation (REML) with Gaussian or gamma error distributions and identity or log-link functions. Each covariate was fitted with a single penalised smoothing function based on thin plate regression splines (specified by bs='ts'), representing an average fixed effect across all fishID levels. ...
Article
Internal seiches are common in stratified lakes, with significant effects on stratification patterns, hydrodynamics and vertical nutrient transport. In particular, seiches can change the vertical distribution of the thermocline and the cold hypolimnetic and warm epilimnetic water masses by several metres on a timescale of a few hours, leading to rapid and strong changes in temperature profiles and oxygen availability, with profound effects on mobile and sessile organisms. This could affect fish communities directly, through physiological stress and elevated mortality, and indirectly, through prey distribution. The aim of this study was to analyse the effects of internal seiche dynamics on lacustrine fish behaviour, and to characterise fish reaction patterns, with the main focus on vertical movement of fish in the vicinity of a shifting thermocline, and avoidance of cold hypolimnetic water. The analysis was based on acoustic telemetry data from Lake Milada, a post‐mining lake in the Czech Republic, with a total of 61 tracked individuals of four species: northern pike (Esox lucius), wels catfish (Silurus glanis), tench (Tinca tinca) and rudd (Scardinius erythropthalmus). The effects of seiche dynamics on the four species studied were weak but significant during the day, while at night they affected only rudd. Upward seiches elicited stronger responses in fish than downward seiches, and the impacts occurred only during the strongest seiche events. Thermocline shifting during seiche events may induce a transient reduction in habitat for seiche‐reacting species, and thus affect predation and other inter‐ and intra‐specific interactions, as well as fish community dynamics. Seiche had a significant effect on the four studied species during the day, but only on rudd during the night.
... SARIMA is a general multiplicative seasonal ARIMA model which was presented by Box et al. (1994). In other words, SARIMA is an ARIMA(p, d, q)(P, D, Q)_s model, in which (p, d, q) is the non-seasonal part of the model and (P, D, Q)_s is the seasonal part of the model (Equation 9): ...
... A common category of non-stationary time series models is the autoregressive integrated moving average model. This model has three main parameters: the autoregressive order p, the moving average order q, and the differencing order d; setting q and d equal to zero reduces it to a pure autoregressive model [20]. ...
Article
Full-text available
Multifractal analysis has been applied to investigate various stylized facts of the financial market, including market efficiency, financial crises, risk evaluation and crash prediction. This paper examines the daily return series of the stock index of the NASDAQ stock exchange. We also test the efficient market hypothesis and the fractal features of the NASDAQ stock exchange. Previous studies have mostly used technical analysis methods for the stock market, including K-line charts, moving averages, etc. These methods are generally based on statistical data, while the stock market is in fact a nonlinear and chaotic system that depends on political, economic and psychological factors. In this research we modeled the daily stock index of the NASDAQ stock exchange using an ARMA-GARCH model from 2000 until the end of 2016. After running the model, we found the best model for the time series of the daily stock index. In the next step, we forecasted stock index values for 2017, and our findings show that the ARMA-GARCH model forecasts very well at the 1% error level. The results also show that a correlation exists between the stock price indexes over time scales, and that the NASDAQ stock exchange is an efficient and non-fractal market.
... Apart from this way, with the development of artificial intelligence and computer technology, many researchers have recently paid tremendous attention to develop neural network techniques. Neural networks have been broadly used in many research fields such as pattern recognition [23,24], speech recognition [10,16], image processing [12,27,30], forecasting [2,3], classification [20,26], etc. For this reason, lots of neural network methods are currently developed and widely used. ...
Article
Full-text available
In this paper, we present a numerical method to solve ordinary differential equations (ODEs) by using neural network techniques in a deferred correction method framework. Similar to the deferred or error correction techniques, a provisional solution of the ODE is preferentially calculated by any lower-order scheme to satisfy given initial conditions, and the corresponding error is investigated by fully connected neural networks and structured to obtain sufficient magnitude of the error. Numerical examples are illustrated to demonstrate the efficiency of the proposed scheme.
... The second set, consisting of 13 data points from the period of January 2019 to January 2020, was used to compare the accuracy of the forecasting models; the lowest mean absolute percentage error and root mean square error were used as the comparison criteria. The results show that the most accurate model is SARIMA(1,2,1)(1,1,0) (Box, et al. 1994 ...
Article
Full-text available
The objective of this study was to examine guidelines for the promotion and development of fighting fish culture: a case study of the Siamese fighting fish farmers group, Bang Muang Subdistrict, Mueang District, Nakhon Sawan Province. A set of questionnaires was used for data collection, administered to a population of 55 farmers. Obtained data were analyzed with descriptive statistics performed using a statistical software package: frequency distribution, percentage, mean and standard deviation. The results showed that the average age of farmers was 64.45 years. The average household labor used in fish culture was 2.39 persons. The average monthly household income was 36,189 Baht. The average culture area was 532.14 m², comprising on average 12.78 concrete ponds and 1,424 plastic buckets. The results revealed that frequent problems included lack and/or discontinuity of support from the government, unknown causes of fish death, fish diseases and underpriced sales. The need for promotion and development of fighting fish culture was at a high level for prevention and treatment of disease. Promotion and development of fighting fish culture should be individualized, and government officials should visit the farms and give suggestions regularly.
... After the treatment of hourly marginals, we now have a look at the dependence structure of the time series (M d,h ). In time series analysis, one very often uses Gaussian processes like ARMA processes after transforming the data such that one can assume the marginal distributions to be Gaussian, see Box et al. (2008). However, this approach has serious drawbacks in this case, as Gaussian processes always have the property of tail independence, as we will describe below. ...
Article
The increasing importance of solar power for electricity generation leads to increasing demand for probabilistic forecasting of local and aggregated photovoltaic (PV) yields. Based on publicly available irradiation data, this paper uses an indirect modeling approach for hourly medium to long-term local PV yields. We suggest a time series model for global horizontal irradiation that allows for multivariate probabilistic forecasts for arbitrary time horizons. It features several important stylized facts. Sharp time-dependent lower and upper bounds of global horizontal irradiations are estimated. The parameters of the beta distributed marginals of the transformed data are allowed to be time-dependent. A copula-based time series model is introduced for the hourly and daily dependence structure based on simple vine copulas with so-called tail dependence. Evaluation methods based on scoring rules are used to compare the model’s power for multivariate probabilistic forecasting with other models used in the literature showing that our model outperforms other models in many respects.
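One building block of such an indirect model, the transformation of bounded data through fitted beta marginals into uniform pseudo-observations for the copula stage, can be sketched roughly as follows; the synthetic data and the fixed [0, 1] bounds are assumptions, and the paper's time-dependent parameters and vine copulas are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-in for hourly irradiation scaled to its estimated bounds, so it
# lives in (0, 1); the data here are purely synthetic.
x = rng.beta(2.0, 5.0, size=2000)

# Fit a beta marginal with location/scale fixed to the known bounds.
a, b, loc, scale = stats.beta.fit(x, floc=0.0, fscale=1.0)

# Probability integral transform: uniform pseudo-observations that a
# copula model of the temporal dependence structure would consume.
u = stats.beta.cdf(x, a, b, loc=loc, scale=scale)
print(u.min(), u.max())
```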
... For this purpose, a graphical inspection of the socalled autocorrelation (ACF) and partial autocorrelation function (PACF) is most often used, which shows a specific shape in the case of non-stationarity. Thus, for a non-stationary time series, the first PACF value is close to one while the other values are close to zero; on the contrary, the ACF values show a slow linear decrease from the first value, which is of course also close to one (Box et al., 2008). ...
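This diagnostic is easy to reproduce; here is a hedged sketch with statsmodels, using a synthetic random walk as the non-stationary series.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(500))   # non-stationary random walk

print(pacf(walk, nlags=10))   # first value close to 1, the rest near 0
print(acf(walk, nlags=10))    # slow, near-linear decay from a value near 1
print(acf(np.diff(walk), nlags=10))  # after differencing: near 0 everywhere
```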
Article
Public libraries represent a specific sector of public service provision, where library management is limited in its ability to influence consumers’ perceptions of the value of borrowed books. This study expands previous research on consumers’ perceived value and its measurement and focuses on the nature of the data examined, which has not yet received much attention. We fill this research gap and examine whether the perceived value of book borrowing services remains stationary over time, considering a sample of readers from the Municipal Library in Prague, Czech Republic. Moreover, we analyse whether the Covid-19 pandemic has affected the perceived value of book borrowing services. Our results contribute an important finding to the discussion: consumers’ perceptions of book borrowing services are stable and do not change over time. Interestingly, we also find that the Covid-19 pandemic has not led to a change in consumers’ perceived value. This study thus makes both theoretical and practical contributions and leads to several practical implications for managers of (public) library organisations. JEL: L86, H39, H44
... One way to devise counterfactuals that account for autocorrelation is to use time series methods widely applied in engineering and in the natural and social sciences to systematically detect and mathematically model temporal patterning (22,23). Patterns detected by these methods include seasonality and other cycles as well as linear trends. ...
Article
The epidemiologic literature estimating the indirect or secondary effects of the COVID-19 pandemic on pregnant people and gestation continues to grow. Our assessment of this scholarship, however, leads us to suspect that the methods most commonly used may lead researchers to spurious inferences. This suspicion arises because the methods do not account for temporal patterning in perinatal outcomes when deriving counterfactuals, or estimates of the outcomes had the pandemic not occurred. We illustrate the problem in two ways. First, using monthly data from US birth certificates, we describe temporal patterning in five commonly used perinatal outcomes. Notably, for all but one outcome, temporal patterns appear more complex than much of the emerging literature assumes. Second, using data from France, we show that using counterfactuals that ignore this complexity produces spurious results. We recommend that subsequent investigations on COVID-19 and other perturbations use widely available time-series methods to derive counterfactuals that account for strong temporal patterning in perinatal outcomes.
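A minimal sketch of such a counterfactual, assuming synthetic monthly data and a seasonal ARIMA fit in statsmodels (one of many time-series methods of the kind the authors recommend, not necessarily the one they used): fit on pre-pandemic months only, then project forward with uncertainty intervals.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=75, freq="MS")
# Synthetic perinatal outcome with trend and seasonality (illustrative only).
rate = pd.Series(8 + 0.01 * np.arange(75) + 0.5 * np.sin(2 * np.pi * idx.month / 12)
                 + rng.normal(0, 0.1, 75), index=idx)

pre = rate[idx < "2020-03-01"]               # fit on pre-pandemic data only
fit = SARIMAX(pre, order=(1, 0, 0), seasonal_order=(1, 0, 0, 12)).fit(disp=False)

# The forecast is the counterfactual: expected outcomes had the pandemic
# not occurred, with uncertainty bands for judging observed departures.
cf = fit.get_forecast(steps=13)
print(cf.predicted_mean.head())
print(cf.conf_int().head())
```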
... Initially, approaches that enable the reduction of dimensional complexity in time-series data (Ali et al., 2019), including ARIMA (Autoregressive Integrated Moving Average) (McCleary and Hay, 1980; Box et al., 2008), Discrete Wavelet Analysis (Percival and Walden, 2000; Abuadbba and Khalil, 2015; Aldrich, 2020) and change point analysis (Killick and Eckley, 2014), were considered. Additionally, Discrete Fourier Transform (DFT) and Wavelet Transform methods were reviewed as these can detect periodicity, yet neither technique can reveal long-term trends or anomalies (Zhu and Guo, 2017). ...
Thesis
Full-text available
The UK high street is constantly changing and evolving in response to, for example, online sales, out-of-town developments, and economic crises. With over 10 years of hourly footfall counts from sensors across the UK, this study was an opportunity to perform a longitudinal and quantitative investigation to diagnose how these changes are reflected in the changing patterns of pedestrian activity. Footfall provides a recognised performance measure of place vitality. However, through a lack of data availability due to historic manual counting methods, few opportunities to contextualise the temporal patterns longitudinally have existed. This study therefore investigates daily, weekly, and annual footfall patterns, to diagnose the similarities and differences between places as social activity patterns from UK high streets evolve over time. Theoretically, footfall is conceptualised within the framework of Territorology and Assemblage Theory, conceptually underpinning a quantitative approach to represent the collective meso-level (street and town-centre) patterns of footfall (social) activity. To explore the data, the periodic signatures of daily, weekly, and annual footfall are extracted using STL (seasonal trend decomposition using Loess) algorithms and the outputs are then analysed using fuzzy clustering techniques. The analyses successfully identify daily, weekly, and annual periodic patterns and diagnose the varying social activity patterns for different urban place types and how places, both individually and collectively are changing. Footfall is demonstrated to be a performance measure of meso-scale changes in collective social activity. For place management, the fuzzy analysis provides an analytical tool to monitor the annual, weekly, and daily footfall signatures providing an evidence-based diagnostic of how places are changing over time. The place manager is therefore better able to identify place specific interventions that correspond to the usage patterns of visitors and adapt these interventions as behaviours change.
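The STL step of this workflow can be sketched briefly; the synthetic hourly series below stands in for real footfall counts, and only the daily signature extraction is shown (weekly and annual signatures follow by changing the period, and the fuzzy clustering stage is omitted).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=24 * 365, freq="h")
# Synthetic stand-in for hourly footfall: daily cycle + slow trend + noise.
footfall = pd.Series(
    100 + 40 * np.sin(2 * np.pi * idx.hour / 24)
    + 0.01 * np.arange(len(idx))
    + rng.normal(0, 5, len(idx)),
    index=idx,
)

# STL extracts the daily periodic signature; use period=24*7 for the
# weekly signature. The components feed downstream fuzzy clustering.
res = STL(footfall, period=24).fit()
daily_signature, trend = res.seasonal, res.trend
```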
... Meanwhile, the challenges in neural network forecasting include determining the right pre-processing method and the appropriate network architecture, and finding optimal parameter values. Thus, when a time series model exists, the most reasonable approach is to pre-process the inputs using the partial autocorrelation function (PACF), which quantifies the correlation of a stationary time series with its own lagged values [20][21][22]. ...
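A hedged sketch of this PACF-based input selection, using a synthetic AR(2) series in place of the pollutant data: lags whose partial autocorrelations exceed the approximate 95% confidence bound become the lagged inputs of the network.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(0)
# Synthetic stationary AR(2) series as a stand-in for the real data.
x = np.zeros(1000)
for t in range(2, 1000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

# Lags whose partial autocorrelation exceeds the ~95% confidence bound
# become the lagged input features of the neural network.
values = pacf(x, nlags=20)
bound = 1.96 / np.sqrt(len(x))
significant_lags = [lag for lag in range(1, 21) if abs(values[lag]) > bound]
print(significant_lags)  # expected to include lags 1 and 2
```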
Article
Full-text available
Background: The generalized space-time autoregressive (GSTAR) model is one of the most widely used models for modeling and forecasting time series and location data. Methods: The GSTAR model assumes that the research locations are heterogeneous, with the differences between locations expressed in the form of a weighting matrix. The novelty of this paper is a hybrid GSTAR time-series model that uses a cascade neural network and obtains the best parameters from particle swarm optimization. Results and conclusion: This hybrid model forecasts PM2.5, PM10, NOx, and SO2 with high accuracy, as evidenced by a mean absolute percentage error (MAPE) of around 0.01%.
... The main objective of an AR model is to reconstruct a specified time series. The linear AR model of order p (Wold, 1938; Box et al., 1994) is given by X_t = c + ∑_{i=1}^{p} φ_i X_{t−i} + ε_t, where the φ_i are the autoregressive coefficients and ε_t is white noise. ...
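For concreteness, a minimal sketch of fitting such a linear AR model, using statsmodels on a synthetic AR(1) series (data and parameter values are illustrative only):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
# Synthetic AR(1) series: X_t = 0.8 * X_{t-1} + noise.
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

res = AutoReg(x, lags=1).fit()          # estimates c and phi_1
print(res.params)                        # approximately [0, 0.8]
print(res.predict(start=490, end=499))   # in-sample reconstruction
```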
Thesis
The safety of a car occupant during an accident depends on several factors, such as the seatbelt condition, the number of occupants in the car, and the structure of the car. Usually, in a frontal crash, a large part of the initial kinetic energy is dissipated in the compression of the crash boxes, and the dissipated energy may be uncertain due to several uncertain parameters. A crash box is investigated in this thesis considering several uncertain parameters. The main challenge of the thesis is the propagation of the uncertain parameters through a crash problem. The conventional approach to uncertainty quantification (UQ) is Monte Carlo simulation (MCS). However, MCS requires a large number of model evaluations, which prohibits applying this approach to a complex problem (i.e. a crash problem). To overcome this issue, the surrogate modeling approach is investigated in this thesis, which maintains a trade-off between accuracy and efficiency. As the crash problem is dynamic, a first surrogate model called sparse KNARX is developed: it combines the Kriging model and a Nonlinear Auto-Regressive with eXogenous input (NARX) model. The sparse KNARX model performs very well for UQ of several nonlinear dynamical systems using a low number of model evaluations. However, the sparse KNARX model is unable to identify an impact oscillator (a representative system for a crash problem) due to its non-smooth behavior. For the impact problem, a new surrogate model is formulated, which decouples the time domain and the randomness by proper orthogonal decomposition (POD), while the uncertain parameters are propagated by the polynomial chaos expansion (PCE) approach: the resulting surrogate is called the POD-PCE model. It is then applied to an impact oscillator for UQ. The POD-PCE and PCE models perform well with quite a low number of model evaluations compared to the MCS approach. In the POD-PCE model, the PCE is constructed only for the reduced set of proper orthogonal modes, whereas in the plain PCE model the coefficients must be computed at each time step. Although the results are quite good, some non-physical negative contact forces are predicted by the POD-PCE model, which may be reduced by using a high-degree polynomial. At the same time, a high-degree polynomial is prohibitive for the PCE model as it requires a large number of model evaluations. For that reason, a sparse variational Bayesian (SVB) based PCE model is proposed in this thesis: it selects the important terms in the polynomial basis and subsequently reduces the chance of overfitting with a low number of model evaluations. It is observed that the non-physical negative forces can be reduced to some extent using the POD-SVB-PCE model. However, it is impossible to fully mitigate the non-physical forces due to the non-smooth behavior of the impact oscillator. Further, it is important to formulate an adaptive framework such that the number of model evaluations and the polynomial degree are selected adaptively. For that reason, an adaptive SVB-PCE model is formulated and coupled with the POD approach to obtain an adaptive POD-SVB-PCE model. This adaptive framework is applied to a crash box under impact loading for UQ and global sensitivity analysis (GSA). The results show that the adaptive POD-SVB-PCE model can predict good results with a low number of model evaluations for most of the responses.
However, it is quite difficult to achieve good accuracy for the contact force with the adaptive POD-SVB-PCE model, even using the maximum allocated number of model evaluations; the predicted accuracy for the contact force nevertheless remains acceptable. The time-dependent GSA is performed by post-processing the adaptive POD-SVB-PCE model parameters, which is quite efficient as it incurs no additional computational cost.
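The POD step that decouples time and randomness can be sketched with a plain SVD; the snapshot matrix below is synthetic, and the PCE stage built on the mode coefficients is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshot matrix: each column is one time history of the response for one
# random sample of the uncertain parameters (purely synthetic stand-in).
n_time, n_samples = 200, 50
t = np.linspace(0, 1, n_time)
snapshots = np.array(
    [np.sin(2 * np.pi * t * (1 + 0.1 * rng.standard_normal()))
     for _ in range(n_samples)]
).T

# POD = SVD of the centered snapshot matrix; the left singular vectors are
# the proper orthogonal modes, and each sample reduces to a few mode
# coefficients, on which a PCE surrogate would then be built.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1  # energy cutoff
coeffs = U[:, :r].T @ (snapshots - mean)   # reduced coordinates per sample
print(r, coeffs.shape)
```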
... In fact, carbon emissions in a country do not necessarily depend on its income level alone; financial development may be another source. To evaluate this, goodness-of-fit tests [67] were carried out to compare the adequacy of the models with and without the variable GDP. Models with both variables are preferred according to the adjusted R-squared, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the log-likelihood (see Tables 5-10). ...
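This kind of comparison can be sketched as follows, with wholly hypothetical data and statsmodels OLS standing in for the panel models actually estimated:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# Hypothetical data: financial development (fd), GDP, and emissions.
fd = rng.normal(size=n)
gdp = rng.normal(size=n)
emissions = 1.0 + 0.5 * fd + 0.3 * gdp + rng.normal(scale=0.5, size=n)

with_gdp = sm.OLS(emissions, sm.add_constant(np.column_stack([fd, gdp]))).fit()
without_gdp = sm.OLS(emissions, sm.add_constant(fd)).fit()

# Prefer the model with higher adjusted R^2 / log-likelihood
# and lower AIC / BIC.
for name, m in [("with GDP", with_gdp), ("without GDP", without_gdp)]:
    print(name, m.rsquared_adj, m.aic, m.bic, m.llf)
```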
Article
Full-text available
This paper analyses the impact of financial development on the environmental quality and sustainability for the group of G7 countries over the period 1990–2019 based on static panel data-fixed effect models. The objective is to explore if there exists a non-linear relationship between the whole financial system development and a wide array of measures of environmental sustainability and degradation, namely adjusted net savings, greenhouse gas, CO2, methane, nitrous oxide emissions and ecological footprint. We define a new Financial Environmental Kuznets Curve (FEKC) by introducing the square term of financial development on the environment-finance relationship. Empirical results prove the existence of non-linear relationships between the composite index of financial development and environmental degradation for the group of advanced economies. In the case of methane, we validate the presence of an inverted-U shape association in line with the FEKC hypothesis, while for greenhouse gas and CO2 the link follows a U-shaped pattern. The impact of financial development on environmental sustainability is monotonically positive and statistically significant while the ecological footprint is not statistically linked with the level of financial development within G7 countries. Economic growth, human capital, population density and primary energy consumption appear as significant drivers of environmental quality and sustainability.
Chapter
In view of the dynamically distributed nature of space big data with very long time series, traditional time series models struggle to effectively approximate, let alone fully realize, state prediction and trend analysis of telemetry data covering the whole period. In this paper, concept drift is combined with autoregressive theory from time series analysis, hidden patterns in telemetry data are mined at multiple levels and time scales, and a new solution for telemetry sequence prediction is proposed. The paper designs a concept-drift-based prediction algorithm for telemetry sequences: concept drift points in the telemetry sequence are detected, and on this basis a multivariable autoregressive model is used to predict the future concept and its duration, thereby realizing prediction of the telemetry sequence. The experimental results show that the proposed method achieves a more accurate prediction.
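As a rough stand-in for the drift-point detection step (the chapter does not specify its algorithm), offline change-point detection on a synthetic telemetry signal can be sketched with the third-party ruptures library:

```python
import numpy as np
import ruptures as rpt  # change-point library, used here only as a
                        # stand-in for the chapter's drift-point detection

rng = np.random.default_rng(0)
# Synthetic telemetry with two regime ("concept") changes in the mean.
signal = np.concatenate([
    rng.normal(0.0, 1.0, 300),
    rng.normal(3.0, 1.0, 300),
    rng.normal(1.0, 1.0, 300),
])

algo = rpt.Pelt(model="rbf").fit(signal)
drift_points = algo.predict(pen=10)   # candidate concept-drift indices
print(drift_points)                   # roughly [300, 600, 900]
# Each detected segment could then feed a (multivariable) autoregressive
# model to predict the next concept and its duration.
```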
Book
Full-text available
Six Sigma has come a long way since its introduction in the mid-1980s. Our association with the subject began in the 1990s when a number of multinational corporations in Singapore began to deploy Six Sigma in pursuit of business excellence. Prior to this, some of us had been working on statistical quality improvement techniques for more than two decades. It was apparent at the outset that the strength of Six Sigma is not in introducing new statistical techniques as it relies on well-established and proven tools; Six Sigma derives its power from the way corporate mindsets are changed towards the application of statistical tools, from top business leaders to those on the production floor. We are privileged to be part of this force for change through our involvement in Six Sigma programs with many companies in the Asia-Pacific region.
Article
Full-text available
Patients with acquired brain injury (ABI) often experience symptoms of anxiety and depression. Until now, evidence-based treatment has been scarce. This study aimed to investigate the effectiveness of Acceptance and Commitment Therapy (ACT) for patients with ABI. To evaluate the effect of ACT for people with ABI, a non-concurrent multiple baseline design across four cases was used. Participants were randomly assigned to a baseline period, followed by treatment and follow-up phases. Anxiety and depressive symptoms were measured repeatedly. At six measurement moments over a year, participants filled in questionnaires measuring anxiety, depression, stress, participation, quality of life, and ACT-related processes. Randomization tests and NAP scores were used to calculate the level of change across phases. Clinically significant change was defined with the Reliable Change Index. Three out of four participants showed medium to large decreases in anxiety and depressive symptoms (NAP = 0.85 to 0.99). Furthermore, participants showed improvements regarding stress, cognitive fusion, and quality of life. There were no improvements regarding psychological flexibility, value-driven behaviour, or social participation. This study shows that ACT is possibly an effective treatment option for people experiencing ABI-related anxiety and depression symptoms. Replication with single-case or large-scale group studies is needed to confirm these findings.
Conference Paper
Full-text available
Water is foremost among the natural resources essential for sustaining life. For this reason, managing water resources, meeting water demand, supporting agricultural and industrial activities, and preventing and taking precautions against natural disasters such as floods and droughts are all highly important. River flow forecasting has therefore become a topic requiring research and development to carry out these activities. However, no single method exists for river flow, so new methods are continually being developed, or existing methods improved, to obtain better results. In this study, the autocorrelation structure of flow data from three stations in the Dicle (Tigris) Basin was first examined; then the Gradient Boosting Machine (GBM) algorithm, which builds strong decision trees by combining weak prediction models, was compared with the Extreme Gradient Boosting (XGBoost) algorithm, an optimized variant of GBM. In addition, the hyperparameters of both the XGBoost and GBM algorithms were optimized, both to eliminate overfitting and to reach the best result. Model performance was evaluated with the most commonly used and well-tested metrics: root mean square error (RMSE), mean absolute error (MAE), and the correlation coefficient R. Since all of these metrics carry equal weight for model performance, a rank analysis was performed to find the best model structure and data combination. The study concluded that both algorithms are adequate for river flow modeling, but that the XGBoost model performs better.
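A hedged sketch of the XGBoost workflow the abstract describes, assuming synthetic flow data, lagged-flow features, and scikit-learn's grid search over time-ordered folds; the stations' actual data and hyperparameter grids are not reproduced:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for station flow data: predict flow from lagged flows.
flow = np.cumsum(rng.standard_normal(1000)) + 50
X = np.column_stack([flow[2:-1], flow[1:-2], flow[:-3]])  # lags 1-3
y = flow[3:]

# Hyperparameter search over time-ordered folds to curb overfitting.
grid = GridSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    {"n_estimators": [100, 300], "max_depth": [3, 5], "learning_rate": [0.05, 0.1]},
    cv=TimeSeriesSplit(n_splits=5),
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)  # best RMSE on held-out folds
```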
Article
Background Mountain pine beetle (MPB) is a native disturbance agent across most pine forests in the western US. Climate changes will directly and indirectly impact frequencies and severities of MPB outbreaks, which can then alter fuel characteristics and wildland fire dynamics via changes in stand structure and composition. To investigate the importance of MPB to past and future landscape dynamics, we used the mechanistic, spatially explicit ecosystem process model FireBGCv2 to quantify interactions among climate, MPB, wildfire, fire suppression, and fuel management under historical and projected future climates for three western US landscapes. We compared simulated FireBGCv2 output from three MPB modules (none, simple empirical, and complex mechanistic) using three focus variables and six exploratory variables to evaluate the importance of MPB to landscape dynamics. Results We found that inclusion of MPB (empirical or mechanistic) in the simulations significantly changed past and future landscape dynamics and that the mechanistic MPB module had more cross-scale interactions that increased variability, and perhaps realism, of simulation results. We also evaluated impacts of fire and fuel management on MPB dynamics and found that fire suppression influenced fuel loadings more than MPB disturbance, but at a landscape scale, most fuel treatment programs did little to change fuel loadings, MPB dynamics, and burned area, except under high fire suppression. Conclusions Synergistic interactions of climate, MPB, and wildfire catalyzed landscape-scale changes in vegetation distributions, fuels, and fire regimes in FireBGCv2 simulations. Models that simulate climate change on pine-dominated landscapes may be improved by including mechanistic MPB simulations to account for potentially important ecological interactions.
Article
Many technical means are used to transform organizational inputs into outputs that contribute to knowledge management and development. Technologies are the most important determinant of knowledge management: institutions that employ technologies most effectively to manage knowledge will be best able to survive and endure amid the current competition in the knowledge services market. Knowledge management techniques serve to collect, classify, store, and communicate or share knowledge between people and institutions, and to improve employees' ability to communicate with each other, since barriers of place, time, and job level are removed, in addition to providing more flexibility in knowledge sharing. In light of this, this study seeks to shed light on recent trends in employing knowledge management techniques in Saudi higher education institutions. Disciplinary: Knowledge Management.
Article
Full-text available
The purposes of this research were to (1) study the situation of production, marketing, and trade of green soybean, and (2) forecast the export quantity of green soybean by Lanna Agro Industry Co., Ltd. to Japan. This quantitative research used secondary data in the form of a monthly time series from January 2013 to December 2020, 96 months in total. The data analysis applied econometric methods, including stationarity tests and the Box-Jenkins forecasting approach. The results showed that (1) Japan imported more than 75,000 tons of green soybean per year. Lanna Agro Industry Co., Ltd. exported 11,076 tons, or 14.76 percent, of those green soybeans to Japan in 2020, which accounted for 70 percent of the company's total exports. The company's other green soybean export markets were America, Europe, and Middle Eastern countries. (2) The appropriate forecasting model was SARIMA(0,1,1)(0,1,1)12. The forecast of green soybean exports by Lanna Agro Industry Co., Ltd. to Japan showed a decrease of 4.44% in 2021 compared to 2020. This reduction was caused by the slowdown of the Japanese economy due to the COVID-19 outbreak, which reduced consumer expenditure and hence the quantity of green soybean imported from Lanna Agro Industry Co., Ltd.
Article
Full-text available
Background: This research addresses two questions: (1) how the El Niño Southern Oscillation (ENSO) affects climate variability and how it influences dengue transmission in the Metropolitan Region of Recife (MRR), and (2) whether the epidemics in MRR municipalities show any connection and synchronicity. Methods: Wavelet analysis and cross-correlation were applied to characterize seasonality, multiyear cycles, and relative delays between the series. This study covered two distinct periods. We first analyzed periodic dengue incidence and intercity epidemic synchronism from 2001 to 2017; we then used the period from 2001 to 2016 to analyze the periodicity of climatic variables and their coherence with dengue incidence. Results: Our results showed systematic cycles of 3-4 years, with a recent shortening trend to 2-3 years. Climatic variability, such as positive temperature anomalies and reduced rainfall due to changes in sea surface temperature (SST), is partially linked to the changing epidemiology of the disease, as these conditions provide suitable environments for the Aedes aegypti life cycle. Conclusion: ENSO may have influenced dengue temporal patterns in the MRR, transiently shortening its main mode of multiyear variability from 3-4 years to 2-3 years. Furthermore, when an epidemic coincided with El Niño years, it spread regionally and was highly synchronized.
Article
Background: The impact of the COVID-19 pandemic on the management of ambulatory care sensitive conditions (ACSCs) remains unknown. Objectives: To compare observed and expected (projected based on previous years) trends in all-cause mortality and healthcare use for ACSCs in the first year of the pandemic (March 2020 to March 2021). Design, setting and participants: We conducted a population-based study using provincial health administrative data on the general adult population (Ontario, Canada). Outcomes and measures: Monthly all-cause mortality, and hospitalization, emergency department (ED), and outpatient visit rates (per 100,000 people at risk) for seven combined ACSCs (asthma, chronic obstructive pulmonary disease, angina, congestive heart failure, hypertension, diabetes, and epilepsy) during the first year were compared with similar periods in previous years (2016-2019) by fitting monthly time series autoregressive integrated moving-average models. Results: Compared to previous years, all-cause mortality rates increased at the beginning of the pandemic (observed rate in March to May 2020 of 79.98 vs. projected 71.24 [66.35-76.50]) and then returned to expected levels in June 2020, except among immigrants and people with mental health conditions, for whom they remained elevated. Hospitalization and ED visit rates for ACSCs remained lower than projected throughout the first year: observed hospitalization rate of 37.29 versus projected 52.07 (47.84-56.68); observed ED visit rate of 92.55 versus projected 134.72 (124.89-145.33). ACSC outpatient visit rates decreased initially (observed rate of 4299.57 vs. projected 5060.23 [4712.64-5433.46]) and then returned to expected levels in June 2020.