Article

Time Series Analysis, Forecasting, and Control

... For time-series forecasting, algorithms such as ARIMA, SARIMA, and LSTM are commonly employed due to their ability to handle various patterns in data. ARIMA and SARIMA, being statistical models, excel in capturing linear trends and seasonal variations, while LSTM, a type of recurrent neural network, is adept at modeling complex and non-linear dependencies in time-series data (Hyndman & Athanasopoulos, 2018; Box, Jenkins, & Reinsel, 2015). ...
... Accurate forecasting of transaction volumes is crucial for effective resource management and operational planning. Traditional forecasting methods, such as ARIMA (AutoRegressive Integrated Moving Average) (Hyndman & Athanasopoulos, 2018) and SARIMA (Seasonal ARIMA) models, have been widely used for predicting time series data due to their robust statistical properties (Box et al., 2015). ...
... Hyndman and Athanasopoulos (2018) offer foundational insights into time-series forecasting, discussing a variety of models, including ARIMA and SARIMA, that are widely used in predicting trends and seasonality in e-commerce data. Similarly, Box, Jenkins, and Reinsel (2015) delve into the mathematical intricacies of these models, emphasizing their robustness in capturing patterns in univariate time-series data. While ARIMA and SARIMA models have been successful in traditional forecasting applications, they often face limitations in capturing complex, nonlinear patterns, particularly in fast-evolving e-commerce environments. ...
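Taken together, these excerpts contrast linear Box-Jenkins models with nonlinear learners such as LSTM. As a hedged illustration of the statistical side only, the sketch below (assuming statsmodels is installed; the monthly series is synthetic, invented for this example) fits a non-seasonal ARIMA and a seasonal SARIMA to the same trend-plus-seasonality data and compares them by AIC.

```python
# Minimal sketch, assuming statsmodels >= 0.12 and a synthetic monthly series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(120)
# Linear trend + 12-month seasonality + noise, standing in for real data.
y = 0.3 * t + 5.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0.0, 1.0, t.size)

arima = ARIMA(y, order=(1, 1, 1)).fit()
sarima = ARIMA(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit()

print(f"ARIMA(1,1,1) AIC:            {arima.aic:.1f}")
print(f"SARIMA(1,1,1)(1,0,1,12) AIC: {sarima.aic:.1f}")
print("12-step SARIMA forecast:", sarima.forecast(steps=12).round(2))
```

On seasonal data, the seasonal specification typically wins on AIC, mirroring the excerpts' point that SARIMA extends ARIMA to seasonal variation while neither captures strongly nonlinear dependencies.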
Article
Full-text available
Purpose: The purpose of this study is to enhance forecasting accuracy and optimize the performance of Payment Service Providers (PSPs) in the e-commerce sector. By integrating Management Information Systems (MIS) with advanced AI-driven time-series models such as ARIMA, SARIMA, and LSTM, this research aims to improve transaction volume forecasts and key performance indicators, ultimately contributing to better decision-making and operational efficiency.

Methodology: This study employs a quantitative research methodology using historical transaction data to develop and test forecasting models. The approach integrates ARIMA, SARIMA, and LSTM models for comparative analysis. AI algorithms, particularly LSTM networks, are utilized for their ability to capture complex, non-linear dependencies in time-series data. Additionally, MIS is employed to systematically gather, process, and analyze data, providing real-time insights for decision-making. Sensitivity analysis is conducted to assess the robustness and adaptability of the AI-driven LSTM model in various scenarios.

Findings: The analysis reveals that AI-powered LSTM outperforms ARIMA and SARIMA, achieving a Mean Absolute Percentage Error (MAPE) of 2.9%, compared to 5.1% and 4.8% for ARIMA and SARIMA, respectively. The integration of MIS contributes to a 5.7% increase in approval rates and a reduction in business and technical declines by 2.5% and 2.0%, respectively. These findings demonstrate that leveraging AI-driven LSTM models combined with MIS enhances forecasting accuracy and operational efficiency, leading to optimized PSP performance in e-commerce.

Originality: This study is original in its approach by integrating AI-driven time-series forecasting models, MIS, and predictive analytics to create a comprehensive framework for PSP optimization in e-commerce. While previous research has explored these components individually, this paper is one of the first to combine them in an integrated manner for a holistic impact on e-commerce transaction management.

Research limitations: The main limitation of this study is the reliance on historical transaction data from a specific e-commerce context, which may not fully represent all market conditions. Future research could expand to include a variety of data sources and apply the model to different e-commerce sectors to generalize findings.

Practical implications: Practically, the study provides e-commerce managers and PSPs with actionable insights on utilizing advanced AI-driven forecasting models and MIS to make data-driven decisions that improve transaction approval rates and reduce declines. This approach can lead to better resource allocation and operational planning.

Social implications: The improvements in transaction volume forecasting and operational efficiency, driven by AI, could contribute to a more reliable and seamless online shopping experience for consumers, enhancing trust and engagement in digital payment systems.
... This interaction between HPC and the cortex is yet to be described in spectral and causal terms, although some progress has been made. Maingret et al. [3] identified a temporal coordination of activity between SWRs in HPC followed by delta waves (0.1-4 Hz) and spindles (10-20 Hz) in PFC, which, when electrically stimulated, improved the performance of a rodent executing a learned task. Furthermore, Rothschild et al. ...
... The model order can be determined by calculating the Akaike information criterion (AIC) [17] as a function of P. ...
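The AIC-as-a-function-of-P recipe in this excerpt can be sketched in a few lines (assumptions: statsmodels is available, and a synthetic AR(2) series stands in for the real data; this is not the thesis's actual code):

```python
# Hedged sketch of AIC-based AR order selection.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
y = np.zeros(500)
for i in range(2, y.size):  # simulate an AR(2) process as toy data
    y[i] = 0.6 * y[i - 1] - 0.3 * y[i - 2] + rng.normal()

# Fit AR(P) for P = 1..10 and pick the order minimizing AIC.
aic_by_order = {p: AutoReg(y, lags=p).fit().aic for p in range(1, 11)}
best_p = min(aic_by_order, key=aic_by_order.get)
print(f"AIC-selected model order P = {best_p}")  # expected: 2
```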
Thesis
Full-text available
Memory consolidation is a complex process which involves the interaction between the hippocampus and cortical regions of the brain. It is known that these brain regions interact during slow-wave sleep, driven by a dynamical process in the hippocampus called the sharp-wave ripple (SWR). Currently, the interactions between the neural oscillations of these regions have not been sufficiently described; in particular, their variations in the presence and absence of learning are not yet well known. In this study we make use of several spectral analysis techniques to describe the changes in power and causal interactions between the memory-related areas during memory consolidation after spatial learning, compared to a non-learning condition. The results show a significant increase of high-gamma power in the parietal lobe and a decrease in the delta range in the prefrontal cortex whenever learning took place. A spectral Granger causality analysis revealed causal influences of the prefrontal cortex and parietal lobe over the hippocampus. Other observations were fewer ripples and less high-gamma activity in the hippocampus for the learning session.
... To demonstrate the practical contribution of the MS−BL model in financial time series modelling, we examine the daily closing prices of International Business Machines (IBM) common stock from May 17, 1961, to November 2, 1962, comprising 369 observations sourced from [40]. This financial time series is nonlinear and has been analyzed previously by Box et al. [40], Wong and Li [41], Aknouche and Rabehi [16], among others. Figure 3 displays both the original series Y t and the first-differenced series X t = Y t − Y t−1 (referred to as DIBM hereafter), along with their respective histograms. ...
... Similarly, Hepbasli and Kalinci (2009) reviewed water heating systems and underscored the inadequacy of static models in dynamic scenarios. Time-series forecasting models, discussed by Box et al. (1994), have been widely applied but remain limited by their dependence on historical data trends. Multiobjective optimization techniques, introduced by Rahman Azari (2016), provided a more holistic approach to system design but were constrained by the computational demands of integrating these methods with conventional practices. ...
... These studies highlight the limitations of traditional approaches, which often necessitate a transition toward more adaptive and efficient methodologies (Bejan, 1995; Abdelaziz and Shen, 2012; Hepbasli and Kalinci, 2009; Box et al., 1994; Rahman Azari, 2016). ...
Article
Full-text available
The integration of artificial intelligence (AI) has revolutionized engineering design and optimization, offering unparalleled precision and efficiency. This research focuses on AI-enhanced optimization of heat pump sizing and design for application-specific needs, addressing the limitations of conventional methods. The study systematically combines advanced machine learning techniques with domain-specific heuristics to optimize heat pump configurations for performance, energy efficiency, and cost. By leveraging a dataset representing diverse applications, we demonstrate that AI algorithms significantly outperform traditional approaches in achieving tailored solutions. Key results indicate up to a 25% improvement in energy efficiency and a notable reduction in computational time, underscoring AI's transformative potential in thermal system design. This work provides a comprehensive framework for implementing AI-driven optimization, supported by comparative analyses, visualizations, and real-world validations.
... , i_m) model may be selected by examining a plot of ζ_k ± 1.96 EstSd(ζ_k) vs k. This modified partial autocorrelation plot is generally more useful than the customary one (Box, Jenkins and Reinsel, 1994). The use of this partial autocorrelation plot is illustrated in §3.1−3.2. ...
... This diagnostic plot is illustrated in §3.1. Cleveland (1971) identified an AR_φ(1, 2, 7) and Unnikrishnan (2004) identified an AR_φ(1, 3, 7) model for Series A (Box, Jenkins and Reinsel, 1994). Either directly from the partial autocorrelation plot in Figure 1 or using the BIC_ζ algorithm in §2.3 with L = 20 and M = 10, an AR_ζ(1, 2, 7) subset is selected. ...
Preprint
A new version of the partial autocorrelation plot and a new family of subset autoregressive models are introduced. A comprehensive approach to model identification, estimation and diagnostic checking is developed for these models. These models are better suited to efficient model building of high-order autoregressions with long time series. Several illustrative examples are given.
... The ARIMA method is a statistical model that combines autoregressive, moving-average, and differencing components to capture temporal patterns and seasonality in the data. According to Box et al. (2015), this method requires the series to be stationary (stationary series are temporal datasets whose statistical properties, such as mean and variance, remain constant over time, which facilitates statistical analysis and modeling). ...
... The Box-Jenkins (ARIMA) approach to forecasting consists of three steps, as described by Análise Macro (2023) and Box et al. (2015): I - Stationarity identification: check whether the series is stationary. If it is not, differencing is applied until the series becomes stationary. ...
Article
Predicting stock prices is a critical challenge in the financial world due to the numerous variables and complexities involved. In this context, this study addresses the choice between Autoregressive Integrated Moving Average (ARIMA) and Artificial Neural Network (ANN) methods for forecasting São Paulo stock exchange (BOVESPA) stock prices, to identify which one demonstrates higher accuracy and lower prediction errors. Weekly historical stock price data from 2018 to 2022, along with information on the US Dollar, the SELIC interest rate, the Dow Jones, the Ibovespa, and the IPCA, were collected. Both ARIMA and ANN models were applied, with extensive testing and adjustment conducted for optimization. The findings revealed that the ARIMA method outperformed the ANN in terms of accuracy and prediction error. This superiority may be linked to the specific configuration of the ANN model, which was not fully explored. The study underscores the importance of a thorough analysis of the ANN model, considering data complexity and the particularities of the problem when choosing between prediction methods. Furthermore, it emphasizes the need for ongoing research to improve the interoperability of ANN models and the quality of financial predictions. The pursuit of more robust and accurate forecasting models is crucial in a dynamic and highly competitive financial environment. Keywords: Financial Market. Financial Forecasting. Model Analysis.
... Each data point depends on previous ones, influencing future values. Time series analysis aims to understand these dependencies, identify patterns or trends, and make forecasts based on historical data [9]. Techniques in time series analysis include identifying trends, recognizing seasonal patterns, forecasting, and analyzing temporal correlations. ...
Article
Full-text available
Stock price movements in a dynamic economic environment require investors and companies to be able to predict future price changes. One method that can be used for prediction is the Autoregressive Integrated Moving Average (ARIMA). Applying the ARIMA method to forecast the share price of PT Aneka Tambang Tbk (ANTM) over 4 weeks yields ARIMA(3,1,0) as the best model.
... It enables the prediction of future sales based on historical trends and has been shown to be particularly effective in handling time-based data, such as retail sales. One of the key advantages of the ARIMA model is its ability to forecast future trends, allowing businesses to accurately schedule marketing activities (Box, Jenkins, & Reinsel, 2015). Analyzing retail sales data not only evaluates the current situation but also predicts future trends. ...
Chapter
Full-text available
This study aims to develop an approach to enhance the effectiveness of marketing strategies by analyzing retail sales data and utilizing the ARIMA model. Through exploratory data analysis, the structure and trends of the dataset will be uncovered, while predictive modeling will be employed to forecast future sales. The findings will provide businesses with strategic insights to improve the effectiveness of their marketing activities, enabling significant enhancements in areas such as marketing, inventory management, and supplier relationships. As the results are based on the specific dataset used in this study, they may not be generalizable to all sectors and businesses. However, the methodology employed can serve as a roadmap for businesses to evaluate their own data effectively.
... Moreover, the study tries to show the advantages for modelling and forecasting of de-noising the series using wavelet shrinkage, and to examine whether wavelet shrinkage makes it possible to lower the order of the estimated model. The batch chemical process data of [3] were used for the analysis. ...
Article
Full-text available
In this paper, the estimated linear Box-Jenkins models of time series observations are compared before and after wavelet shrinkage filtering; the order of the model estimated from the filtered observations is then reduced (while preserving the accuracy and suitability of the estimated models) and re-compared with the estimated linear model of the original observations, according to several statistical criteria, through a practical application to time series using statistical programs such as Statgraphics, NCSS and MATLAB. The results showed the efficiency of wavelet shrinkage filters in solving the noise problem and obtaining efficient estimated models, specifically the wavelet shrinkage filter (dmey) with a soft threshold whose level was estimated using the Fixed Form method on the filtered observations, and the possibility of obtaining linear models of the filtered observations with lower orders and higher efficiency compared with the corresponding estimated model of the original observations.
... In some applications, the AR or MA models become cumbersome because one may need a higher-order model with many parameters to adequately describe the dynamic structure of the series. An autoregressive moving average (ARMA) model mixes the AR and MA models into compact form so that the number of parameters used is kept small (Box et al., 1994). A mixed ARMA model with p autoregressive terms and q moving average terms, denoted by ARMA(p, q), can be expressed as: ...
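The equation itself is elided in the excerpt; for reference, the standard Box-Jenkins form of an ARMA(p, q) model, with z_t the (mean-adjusted) series and a_t white noise, is presumably:

```latex
z_t = \phi_1 z_{t-1} + \cdots + \phi_p z_{t-p}
      + a_t - \theta_1 a_{t-1} - \cdots - \theta_q a_{t-q}
```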
... The Autoregressive Moving Average (ARMA) model is a special case of the Autoregressive Moving Average with Extra Input (ARMAX) model with no input signals (Box et al. 1994; Wold 1939). The ARMA single-output model structure is given by the following equation (Mathworks 2023a). For the ARMA time-series model, which has no input, the polynomial orders and delays for the ARMAX model [na nb nc nk] reduce to [na nc] (Mathworks 2023a). ...
Article
Full-text available
Groundwater is an essential resource for agriculture and domestic use in drought-prone regions, particularly in northwestern Bangladesh. Accurate forecast of groundwater level (GWL) fluctuations is crucial for sustainable water regulation. This work investigates the application of deep learning and dynamic system response models to forecast GWL changes in this vulnerable area. The models employed include Long Short-Term Memory (LSTM) networks, Autoregressive Moving Average (ARMA), Discrete-Time State-Space Model (n4sid), Continuous-Time State-Space Model (SSEST), Discrete-Time State-Space Model through a Regularized ARX Model Reduction (SSREGEST), and coupled ARMA-state-space models. A total of eight models were trained and tested on historical GWL data from 19 observation wells. The top-performing models at various locations delivered satisfactory results, with C, IOA, NRMSE, and MAD values ranging from 0.53 to 0.92, 0.62 to 0.95, 0.01 to 0.25, and 0.08 m to 1.09 m, respectively. Model comparison using the Entropy-Distance from Average Solution (Entropy-EDAS) method revealed that LSTM networks outperformed traditional time series (ARMA), system dynamic (n4sid, SSEST, SSREGEST), and coupled ARMA-state-space models (ARMA-n4sid, ARMA-SSEST, and ARMA-SSREGEST) in most locations, while other models exhibited varying performances across different observation wells. The varying performance across different observation wells highlights that prediction accuracy depends not only on the modeling algorithms but also on the quantity and quality of the learning and testing data. The projections generated by the best models effectively captured historical trends, providing the first-ever five-year forecasts of GWL fluctuations for the region. These projections offer valuable insights for water resource management and planning in areas vulnerable to drought and climate variability.
... Statistical measures such as Colour Co-occurrence Matrix (CCM) [1], Local Binary Patterns (LBP) [2], Grey Level Co-occurrence Matrix (GLCM) [3], and Spatial Grey Level Dependence Matrix (SGLDM) [4] may be used to derive textural features. Model-based methods such as Auto-Regressive (AR) [5] and Markov Random Field (MRF) [6] structures are also used to extract useful features. ...
Preprint
Full-text available
Over the past decade, several image-processing methods and algorithms have been proposed for identifying plant diseases based on visual data. DNN (Deep Neural Networks) have recently become popular for this task. Both traditional image processing and DNN-based methods encounter significant performance issues in real-time detection owing to computational limitations and a broad spectrum of plant disease features. This article proposes a novel technique for identifying and localising plant disease based on the Quad-Tree decomposition of an image and feature learning simultaneously. The proposed algorithm significantly improves accuracy and faster convergence in high-resolution images with relatively low computational load. Hence it is ideal for deploying the algorithm in a standalone processor in a remotely operated image acquisition and disease detection system, ideally mounted on drones and robots working on large agricultural fields. The technique proposed in this article is hybrid as it exploits the advantages of traditional image processing methods and DNN-based models at different scales, resulting in faster inference. The F1 score is approximately 0.80 for four disease classes corresponding to potato and tomato crops.
... Various time series models, such as ARIMA, SARIMA (Seasonal ARIMA), and exponential smoothing, have been widely applied to air quality data to forecast trends, assess seasonal variations, and predict future pollutant concentrations [17,5]. ARIMA models are particularly useful in capturing temporal dependencies and providing accurate forecasts by incorporating the autoregressive (AR), moving average (MA), and differencing (I) components of the time series [4]. In urban air quality studies, ARIMA models have been applied successfully in cities like Beijing [5], New Delhi [17], and São Paulo [15] to predict pollution trends and provide actionable insights for public health and policy planning. ...
Article
Full-text available
Air pollution is a significant environmental and public health issue in rapidly urbanizing cities, particularly in developing countries like Nigeria. This study analyzes air quality trends in five major Nigerian cities (Abuja, Lagos, Kano, Port Harcourt, and Enugu) using satellite-based remote sensing data from January 2021 to December 2023. Key pollutants, including PM2.5, PM10, CO, NO2, SO2, and O3, were analyzed using time series models (ARIMA, SARIMA), seasonal decomposition (STL), and correlation analysis. The results reveal that Lagos and Kano experience the highest pollution levels, particularly during the Harmattan season, when Saharan dust exacerbates particulate matter. Abuja also sees significant pollution spikes, while Port Harcourt and Enugu show moderate pollution driven by industrial emissions and traffic. The study underscores the need for better air quality monitoring, seasonal interventions, and policies to reduce pollution, particularly during Harmattan.
... Against the backdrop of the global energy transition, this paper uses China as a case study to estimate future power supply trends using an ARIMA-GM model and to forecast the trends in PV and other clean energy sources from 2024 to 2060 using a GM model. This paper reflects the broader importance of energy forecasting, as emphasized by Box and Jenkins through the development of the ARIMA model [6] and the subsequent refinement with Grey theory by Deng [7], showcasing the evolution and impact of forecasting methodologies in energy management and contributing to global energy development and environmental conservation. ...
Article
This paper proposes a predictive outlook for the future of China's energy development based on the current development status of photovoltaic power stations. This paper uses GM(1,1) and ARIMA models to predict power generation, and then constructs an ARIMA-GM combination prediction model obtained by a weighting method. The residual p-value of the ARIMA model is 0.953, and the rank-ratio interval of the GM model is (0.917, 1.091). At the same time, the relative error rate of the ARIMA-GM model used in this article is 0.207, and its fit is better than that of the ARIMA model (0.249) and the GM model (0.262), indicating the effectiveness of the method used and the results shown in this paper.
... Trends in frequency were computed using ordinary least squares regression, with the significance of those trends relative to the null hypothesis of zero trend evaluated using a two-tailed Student's t-test at the 95% confidence level. Though the length of the time series at each gridbox is S = 70 years, a reduced effective independent sample size S′ = S(1 − ρ)/(1 + ρ), where ρ is the lag-1 autocorrelation, was used for significance testing of both trends and correlations (Box et al., 2015). ...
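The quoted adjustment is straightforward to compute; a minimal sketch (numpy only, with a synthetic autocorrelated series in place of the gridbox data) is:

```python
# Effective sample size S' = S(1 - rho)/(1 + rho), rho = lag-1 autocorrelation.
import numpy as np

def effective_sample_size(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    rho = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation
    return x.size * (1.0 - rho) / (1.0 + rho)

rng = np.random.default_rng(2)
x = np.empty(70)  # one 70-year series, as in the excerpt
x[0] = rng.normal()
for i in range(1, x.size):  # AR(1) toy data with positive autocorrelation
    x[i] = 0.5 * x[i - 1] + rng.normal()

print(f"S = {x.size}, effective S' = {effective_sample_size(x):.1f}")
```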
Article
Full-text available
Shipboard present‐weather reports from 1950 to 2019 are aggregated and composited yearly and seasonally on a 1°×1° grid to characterize the global climatology and apparent long‐term trends in the relative frequency of four categories of oceanic precipitation: drizzle; moderate and heavy non‐drizzle; precipitation associated with thunderstorms and deep convection; and frozen‐phase precipitation. Although ship reports are susceptible to subjective interpretation, the inferred distributions of these phenomena are consistent with observations from other platforms such as satellites and coastal surface stations. These distributions highlight widespread 70‐year trends that are often consistent across both annual and seasonal frequencies, with statistical significance at 95% confidence. The relative frequency of ship‐reported drizzle has largely increased in the tropics annually and seasonally, with linear best‐fit relative increases by as much as 15% per decade. Decreased relative frequencies have been observed in parts of the subtropics and at higher latitudes. Heavier precipitation has encompassed a growing fraction of non‐drizzle precipitation reports over the subtropical North Pacific and Mediterranean. The relative frequency of thunderstorm reports has declined over the open Atlantic but shows positive trends over the Mediterranean and the western Atlantic. The trends in relative frozen precipitation occurrence suggest a poleward retreat of areas receiving frozen precipitation in the Northern Hemisphere. Possible mechanisms for these ship‐observed trends are discussed and placed in the context of the modeled effects of climate change on global precipitation.
... Traditional time series forecasting techniques such as ARIMA and Prophet have been extensively utilized in various applications due to their simplicity and ease of implementation. ARIMA models, for instance, rely on the combination of autoregressive and moving average components to predict future values based on past data, assuming linear relationships and stationarity [1]. Prophet, developed by Facebook, is known for its flexibility in handling seasonality and trend changes, making it popular for business and economic forecasting [2]. ...
Conference Paper
Full-text available
The accurate prediction of heart rate is critical for the proactive monitoring and management of cardiovascular health, a leading concern worldwide due to the prevalence of cardiovascular diseases. Traditional time series forecasting methods, such as ARIMA and Prophet, often fall short in addressing the complex, non-linear nature of heart rate data, which is inherently noisy and highly variable. This paper provides a comprehensive review of contemporary neural network architectures that have shown promise in this domain, specifically focusing on Long Short-Term Memory (LSTM) networks, transformer-based models (PatchTST and iTransformer), Tiny Time Mixers (TTMs), MOMENT models, and deep reinforcement learning. We delve into the architectural intricacies of these models, their training processes, and the performance metrics used to evaluate them. Our analysis highlights the unique strengths and limitations of each approach, emphasizing their suitability for heart rate time series forecasting. Through empirical evidence and comparative analysis, we demonstrate that transformer-based models, TTMs, MOMENT models and deep reinforcement learning significantly enhance forecasting accuracy and efficiency over traditional methods. This review aims to provide a detailed understanding of these advanced techniques, offering valuable insights for future research and practical applications in the field of cardiovascular health monitoring.
... The four options are the single RMSE criterion, RMSE and MAE, RMSE and MaxAE, or RMSE together with MAE and MaxAE. Second, an analysis of MaxAE versus MAE will be carried out after forecasts are made by the ARIMA approach [1,2,6,7]. Finally, the selection of criteria should be justified, followed by an appropriate conclusion. ...
Article
In time series forecasting, a commonly accepted criterion of forecasting quality is the root-mean-square error (RMSE). Sometimes only the RMSE is used. In other cases, another measure of forecasting accuracy, the mean absolute error (MAE), is used along with the RMSE. Although RMSE and MAE are the common criteria of time series forecasting quality, they both register information about averaged errors. However, averaging may remove information about volatility, which is typical for time series, in a few points (outliers) or narrow intervals. Information about outliers in time series forecasts (with respect to test data) can be registered by the maximum absolute error (MaxAE). The MaxAE criterion does not involve averaging; it registers information about the worst outlier instead. Therefore, the goal is to ascertain the best criteria of time series forecasting quality, wherein the RMSE criterion is always present. First, 12 types of benchmark time series are defined to test and select criteria. Each time series has 168 points, and the last third of the series is forecasted. After generating 200 time series for each of those 12 types, ARIMA forecasts are made at 56 points of every series. All 2400 RMSEs are sorted in ascending order, whereupon the respective MAEs and MaxAEs are re-arranged as well. The interrelation between the RMSE and MAE/MaxAE is studied by their intercorrelation function. RMSEs and MaxAEs are "more different" than RMSEs and MAEs, because the correlation between the RMSE and MAE is stronger. Consequently, the MAE criterion is useless, as it nearly replicates the information about forecasting quality from the RMSE criterion. Inasmuch as the MaxAE criterion can import additional information about the forecasting quality, the best criteria are RMSE and MaxAE.
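The three criteria discussed in this abstract are easy to state side by side; a minimal numpy sketch (with placeholder arrays, not the paper's benchmark series) is:

```python
# RMSE, MAE and MaxAE on a toy forecast with one large outlier miss.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def max_ae(y_true, y_pred):
    return float(np.max(np.abs(y_true - y_pred)))  # worst outlier, no averaging

y_true = np.array([10.0, 12.0, 11.0, 13.0, 30.0])
y_pred = np.array([10.5, 11.5, 11.2, 12.6, 20.0])  # last point badly missed
print(rmse(y_true, y_pred), mae(y_true, y_pred), max_ae(y_true, y_pred))
```

The last point dominates MaxAE while the averaged criteria dilute it, which is exactly the information the abstract argues MaxAE adds to RMSE.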
... We close this overview by noting that there is a vast literature on time series in the statistics community (cf. Box, Jenkins, and Reinsel [30]), and we make no attempt to survey that literature here. These investigations include considerable work on robust approaches (cf. ...
Preprint
Demand forecasting plays an important role in many inventory control problems. To mitigate the potential harms of model misspecification, various forms of distributionally robust optimization have been applied. Although many of these methodologies suffer from the problem of time-inconsistency, the work of Klabjan et al. established a general time-consistent framework for such problems by connecting to the literature on robust Markov decision processes. Motivated by the fact that many forecasting models exhibit special structure, as well as a desire to understand the impact of positing different dependency structures, in this paper we formulate and solve a time-consistent distributionally robust multi-stage newsvendor model which naturally unifies and robustifies several inventory models with forecasting. In particular, many simple models of demand forecasting have the feature that demand evolves as a martingale. We consider a robust variant of such models, in which the sequence of future demands may be any martingale with given mean and support. Under such a model, past realizations of demand are naturally incorporated into the structure of the uncertainty set going forwards. We explicitly compute the minimax optimal policy (and worst-case distribution) in closed form, by combining ideas from convexity, probability, and dynamic programming. We prove that at optimality the worst-case demand distribution corresponds to the setting in which inventory may become obsolete, a scenario of practical interest. To gain further insight, we prove weak convergence (as the time horizon grows large) to a simple and intuitive process. We also compare to the analogous setting in which demand is independent across periods (analyzed previously by Shapiro), and identify interesting differences between these models, in the spirit of the price of correlations studied by Agrawal et al.
... The standard linear models for time series are the autoregressive model (AR) and autoregressive moving average model (ARMA) (Box et al., 1994). Statistical measures of fitting and prediction with AR or ARMA can be computed for varying model orders (for the AR part and the moving average, MA, part) and prediction steps. ...
Preprint
In many applications, such as physiology and finance, large time series databases are to be analyzed, requiring the computation of linear, nonlinear and other measures. Such measures have been developed and implemented in commercial and freeware software rather selectively and independently. The Measures of Analysis of Time Series (MATS) MATLAB toolkit is designed to handle an arbitrarily large set of scalar time series and compute a large variety of measures on them, allowing for the specification of varying measure parameters as well. The variety of options, with added facilities for visualization of the results, supports different settings of time series analysis, such as the detection of dynamics changes in long data records, resampling (surrogate or bootstrap) tests for independence and linearity with various test statistics, and discrimination power of different measures and for different combinations of their parameters. The basic features of MATS are presented and the implemented measures are briefly described. The usefulness of MATS is illustrated on some empirical examples along with screenshots.
... In the statistical analysis, PDFs of the original time traces are computed along with those of the modelled time traces using the AutoRegressive Integrated Moving Average (ARIMA) framework. In this case, the time evolution of the ion heat flux is treated as a time series, to which we apply standard Box-Jenkins (ARIMA) modelling [9]. This mathematical procedure effectively removes deterministic autocorrelations from the system, allowing for the statistical interpretation of the residual part, which a posteriori turns out to be relevant for comparison with the analytical theory. ...
Preprint
A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the Auto-Regressive Integrated Moving Average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.
... tion, the contribution due to white noise moves towards the finer scales because the process of differentiation converts the uncorrelated stochastic process to a first-order moving average process and thereby distributes more energy to the finer scales. That differentiation of white noise brings about this behavior is known from the Fourier spectrum (Box et al., 1994). It may be noted that the nature and effectiveness of the separation depend on the wavelet basis function chosen and also on the properties of the derivatives of the WT, which is in itself a highly interesting and not fully understood subject (Strang and Nguyen, 1996). For the signals studied in this paper, model as well as experimental, the wa ...
Preprint
We have presented a new and alternative algorithm for noise reduction using the methods of discrete wavelet transform and numerical differentiation of the data. In our method the threshold for reducing noise comes out automatically. The algorithm has been applied to three model flow systems - Lorenz, Autocatalator, and Rossler systems - all evolving chaotically. The method is seen to work well for a wide range of noise strengths, even as large as 10% of the signal level. We have also applied the method successfully to noisy time series data obtained from the measurement of pressure fluctuations in a fluidized bed, and also to that obtained by conductivity measurement in a liquid surfactant experiment. In all the illustrations we have been able to observe that there is a clean separation in the frequencies covered by the differentiated signal and white noise.
... Having specified the power spectrum or, correspondingly, the autocorrelation function for sequences of Gaussian random numbers means to have fixed all parameters of a linear stochastic process. Hence, in principle, the coefficients of an autoregressive (AR(r)) or moving average (MA(r)) process can be uniquely determined, where, due to the power-law nature of the spectrum and autocorrelation function, the orders r of either of these models have to be infinite [22]. Hence, the following results are valid for the class of linear long-term correlated processes [2]. ...
Preprint
The recurrence times between extreme events have been the central point of statistical analyses in many different areas of science. Simultaneously, the Poincaré recurrence time has been extensively used to characterize nonlinear dynamical systems. We compare the main properties of these statistical methods, pointing out their consequences for the recurrence analysis performed in time series. In particular, we analyze the dependence of the mean recurrence time and of the recurrence time statistics on the probability density function, on the interval whereto the recurrences are observed, and on the temporal correlations of time series. In the case of long-term correlations, we verify the validity of the stretched exponential distribution, which is uniquely defined by the exponent γ, at the same time showing that it is restricted to the class of linear long-term correlated processes. Simple transformations are able to modify the correlations of time series, leading to stretched exponential recurrence time statistics with different γ, which shows a lack of invariance under the change of observables.
... Given n consecutive observations from this time series model, z 1 , . . . , z n , the log-likelihood function was discussed by Box, Jenkins and Reinsel (1994), as well as many other authors. Other asymptotically first-order efficient methods are available, such as the HR algorithm (Hannan and Rissanen, 1982) but many researchers prefer methods of estimation and inference based on the likelihood function (Barnard, Jenkins and Winsten, 1962;Fisher, 1973;Box and Luceño, 1997, §12B) and Taniguchi (1983) has shown that MLE is second-order efficient. ...
Preprint
A new likelihood-based AR approximation is given for ARMA models. The usual algorithms for the computation of the likelihood of an ARMA model require O(n) flops per function evaluation. Using our new approximation, an algorithm is developed which requires only O(1) flops in repeated likelihood evaluations. In most cases, the new algorithm gives results identical to or very close to the exact maximum likelihood estimate (MLE). This algorithm is easily implemented in high-level Quantitative Programming Environments (QPEs) such as Mathematica, MATLAB and R. In order to obtain reasonable speed, previous ARMA maximum likelihood algorithms are usually implemented in C or some other machine-efficient language. With our algorithm it is easy to do maximum likelihood estimation for long time series directly in the QPE of your choice. The new algorithm is extended to obtain the MLE for the mean parameter. Simulation experiments which illustrate the effectiveness of the new algorithm are discussed. Mathematica and R packages which implement the algorithm discussed in this paper are available (McLeod and Zhang, 2007). Based on these package implementations, it is expected that the interested researcher would be able to implement this algorithm in other QPEs.
... Standard time series literature is dominated by parametric models like autoregressive integrated moving average models [Box et al., 2013], the more recent autoregressive conditional heteroskedasticity models for time-varying volatility [Engle, 1982; Bollerslev, 1986], state-space models [Durbin and Koopman, 2012], and Markov switching models [Bauwens et al., 2000]. In particular, Bayesian time series analysis [Steel, 2008] is inherently parametric in that a completely specified likelihood function is needed. ...
Preprint
The Whittle likelihood is widely used for Bayesian nonparametric estimation of the spectral density of stationary time series. However, the loss of efficiency for non-Gaussian time series can be substantial. On the other hand, parametric methods are more powerful if the model is well-specified, but may fail entirely otherwise. Therefore, we suggest a nonparametric correction of a parametric likelihood taking advantage of the efficiency of parametric models while mitigating sensitivities through a nonparametric amendment. Using a Bernstein-Dirichlet prior for the nonparametric spectral correction, we show posterior consistency and illustrate the performance of our procedure in a simulation study and with LIGO gravitational wave data.
... Time series with trends are described by integrated ARMA sequences (ARIMA) and seasonal time series, which are examples of stochastic sequences with stationary increments. These models are properly described in the book by Box, Jenkins, and Reinsel [2]. Granger [8] introduced a concept of cointegrated sequences, namely, the integrated sequences such that some linear combination of them has a lower order of integration. ...
Preprint
We consider the problem of optimal estimation of the linear functional $A_N\xi=\sum_{k=0}^{N}a(k)\xi(k)$ depending on the unknown values of a stochastic sequence $\xi(m)$ with stationary increments from observations of the sequence $\xi(m)+\eta(m)$ at points of the set $\mathbb{Z}\setminus\{0,1,2,\ldots,N\}$, where $\eta(m)$ is a stationary sequence uncorrelated with $\xi(m)$. We propose formulas for calculating the mean square error and the spectral characteristic of the optimal linear estimate of the functional in the case of spectral certainty, where the spectral densities of the sequences are exactly known. We also consider the problem for a class of cointegrated sequences. We propose relations that determine the least favorable spectral densities and the minimax spectral characteristics in the case of spectral uncertainty, where the spectral densities are not exactly known while a set of admissible spectral densities is specified.
... In the analysis we will make use of different types of distributions to retro-fit the PDFs of simulation results, mainly using the Laplace distribution (χ = 1.0) and the Gaussian distribution (χ = 2.0). We focus on the time traces (averaged in the poloidal y-direction) at five fixed radial points located at x = 40, 80, 100, 140, 180. Each set of data describes the time evolution of the potential and vorticity, to which we apply standard Box-Jenkins modelling [24]. This mathematical procedure effectively removes deterministic autocorrelations from the system, allowing for the statistical interpretation of the residual part, which a posteriori turns out to be relevant for comparison with the analytical theory. ...
Preprint
Full-text available
Resistive drift wave turbulence is a multipurpose paradigm that can be used to understand transport at the edge of fusion devices. The Hasegawa-Wakatani model captures the essential physics of drift turbulence while retaining the simplicity needed to gain a qualitative understanding of this process. We provide a theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent events in Hasegawa-Wakatani turbulence with enforced equipartition of energy in large scale zonal flows and small scale drift turbulence. We find that for a wide range of adiabatic index values the stochastic component representing the small scale turbulent eddies of the flow, obtained from the ARIMA model, exhibits super-diffusive statistics, consistent with intermittent transport. The PDFs of large events (above one standard deviation) are well approximated by the Laplace distribution, while small events often exhibit a Gaussian character. Furthermore, there exists a strong influence of zonal flows, for example via shearing followed by viscous dissipation, which maintains the sub-diffusive character of the fluxes.
... The Autoregressive Integrated Moving Average (ARIMA) model, proposed by Box and Jenkins in the early 1970s, is primarily used for analyzing and forecasting time series data [36]. The ARIMA model combines three elements: autoregression, integration, and moving average. ...
[Fig. 6: Structural framework of the self-attention model]
Article
Full-text available
Background With the increasing impact of tuberculosis on public health, accurately predicting future tuberculosis cases is crucial for optimizing health resources and medical service allocation. This study applies a self-attention mechanism to predict the number of tuberculosis cases, aiming to evaluate its effectiveness in forecasting. Methods Monthly tuberculosis case data from Changde City between 2010 and 2021 were used to construct a self-attention model, a long short-term memory (LSTM) model, and an autoregressive integrated moving average (ARIMA) model. The performance of these models was evaluated using three metrics: root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). Results The self-attention model outperformed the other models in terms of prediction accuracy. On the test set, the RMSE of the self-attention model was approximately 7.41% lower than that of the LSTM model, MAE was reduced by about 10.99%, and MAPE was reduced by approximately 9.87%. Compared to the ARIMA model, RMSE was reduced by about 28.86%, MAE by about 32.22%, and MAPE by approximately 29.89%. Conclusion The self-attention model can effectively improve the prediction accuracy of tuberculosis cases, providing guidance for health departments in optimizing health resources and medical service allocation.
... A suitable alternative method for modelling nonstationary data is to add trend and seasonal components as independent variables to the regression model. The trend is fitted by a polynomial in time and the seasonal component by Fourier terms including sine and cosine (Box et al., 2003). The formula to be included in the regression model is as follows (Stolwijk et al., 1999), ...
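A hedged sketch of this regression approach (numpy least squares, with an assumed monthly period s = 12 and two Fourier harmonics; not the paper's actual formula) is:

```python
# Regression on a polynomial trend plus Fourier seasonal terms.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(96, dtype=float)
y = 0.05 * t + 3.0 * np.cos(2 * np.pi * t / 12) + rng.normal(0.0, 0.5, t.size)

s, K = 12, 2  # seasonal period and number of sine/cosine harmonics (assumed)
cols = [np.ones_like(t), t, t ** 2]  # quadratic time polynomial for the trend
for k in range(1, K + 1):
    cols.append(np.sin(2 * np.pi * k * t / s))
    cols.append(np.cos(2 * np.pi * k * t / s))
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares fit
print("fitted coefficients:", beta.round(3))
```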
Article
Full-text available
Early forewarning of crop pests based on weather variables provides lead time to manage impending pest attacks, minimizing crop loss, decreasing the cost of pesticides and enhancing crop yield. This paper is an attempt to forewarn the incidence of cotton pests using weather variables. Pest incidence data from 2015 to 2023 for Aphids, Jassids, Thrips, and Whiteflies have been used for the study. Pest incidence being a count variable, different count regression models, such as zero-inflated Poisson and negative binomial, hurdle Poisson and negative binomial, negative binomial and generalized Poisson regression models, have been developed for forewarning of pests. Results indicated that the zero-inflated Poisson regression model outperformed the other models, with improved performance of nearly 30 to 75%. Thus, the zero-inflated Poisson regression model is a reliable tool for the prediction of cotton pests, thereby aiding better pest management strategies.
... There are various techniques for estimating missing values in time series, including statistical methods such as autoregressive integrated moving average (ARIMA) [8], simple moving average (SMA), linear weighted moving average (LWMA), and exponentially weighted moving average (EWMA) [9], and interpolation-based methods such as spline ... This study has several limitations. First, it relies on only two PM2.5 datasets. ...
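As one illustration of the statistical fills listed in this excerpt, a short pandas sketch of EWMA-based gap filling (the series and the alpha value are invented for the example; this is not the paper's proposed model):

```python
# Fill short gaps in an hourly PM2.5 series with an EWMA of past values.
import numpy as np
import pandas as pd

pm25 = pd.Series([35.0, 38.0, np.nan, np.nan, 41.0, 40.0, np.nan, 39.0])
ewma = pm25.ewm(alpha=0.5, ignore_na=True).mean()  # carries the mean over gaps
filled = pm25.fillna(ewma)  # only the missing positions take the EWMA value
print(filled.round(1).tolist())
```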
Article
Full-text available
In this work, a novel model for hourly PM2.5 time series imputation is proposed for the estimation of missing values over different gap sizes, including 1, 3, 6, 12, and 24 h. The proposed model is based on statistical techniques such as moving averages, linear interpolation smoothing, and linear interpolation. For the experimentation stage, two datasets were selected in Ilo City in southern Peru. Also, five benchmark models were implemented for comparison with the proposed model; the benchmark models include exponentially weighted moving average (EWMA), autoregressive integrated moving average (ARIMA), long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional GRU (BiGRU). The results show that, in terms of average MAPE, the proposed model outperforms the best deep learning model (GRU) by between 26.61% and 90.69%, and the best statistical model (ARIMA) by between 2.33% and 6.67%. The proposed model is thus a good alternative for the estimation of missing values in PM2.5 time series.
... For large N, we follow common practice and ignore L_P, which corresponds to using the conditional likelihood (Box et al., 2008, Section 7.1.2). ...
Article
Full-text available
This paper proposes algorithms for estimating parameters in Earth System Models (ESMs), specifically focusing on simulations that have not yet achieved statistical equilibrium and display climate drift. The basic idea is to treat ESM time series as outputs of an autoregressive process, with parameters that depend on those of the ESM. The maximum likelihood estimate of the parameters and the associated uncertainties are derived. This method requires solving a nonlinear system of equations and often results in unsatisfactory parameter estimates, especially in short simulations. This paper explores a strategy for overcoming this limitation by dividing the estimation process into two linear phases. This algorithm is applied to estimate parameters in the convection scheme of the Community Earth System Model version 2 (CESM2). The modified algorithm can produce accurate estimates from perturbation runs as short as 2 years, including those exhibiting climate drift. Despite accounting for climate drift, the accuracy of these estimates is comparable to that of algorithms that do not. While these initial results are not optimal, the autoregressive approach presented here remains a promising strategy for model tuning since it explicitly accounts for climate drift in a rigorous statistical framework. The current performance issues are believed to be technical in nature and potentially solvable through further investigation.
... Nevertheless, the computational demands of transformers are considerable, as the self-attention mechanism scales quadratically with input length. Additionally, transformers necessitate a considerable quantity of data for optimal functionality, which poses difficulties in domains with limited data availability [8]. ...
Article
Full-text available
The development of the stock market has been characterised by a lack of dynamism, particularly in the context of an economic slowdown, policy adjustments and global uncertainties. As a result, corporate earnings have declined, stock demand has decreased, stock prices have fallen and investors are uncertain about the future. This article demonstrates the construction of an autoregressive integrated moving average (ARIMA) model in the Python programming language, using CSI 300 index data from 2012 to 2022 sourced from the Kaggle website, to predict the index's fluctuations, generate a line graph, and compare the prediction with the actual trend. In the final stage of the analysis, four diagnostic graphs are deployed to ascertain the suitability of the model. The results show that the ARIMA model effectively forecasts stock trends, with the predicted upward and downward trends after 2022 largely aligning with the actual trends and the two lines largely overlapping. Investors can use this model to determine the best investment direction and reduce the risk of failure.
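A minimal sketch of the workflow this abstract describes (assumptions: statsmodels and pandas are available; "csi300.csv" with a "close" column is a hypothetical stand-in for the Kaggle CSI 300 dataset, and the (1, 1, 1) order is illustrative, not the paper's fitted order):

```python
# Fit an ARIMA model to CSI 300 closing prices and forecast forward.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

prices = pd.read_csv("csi300.csv", index_col=0, parse_dates=True)["close"]
model = ARIMA(prices, order=(1, 1, 1)).fit()
forecast = model.get_forecast(steps=30)
print(forecast.predicted_mean.tail())  # predicted trend line
print(forecast.conf_int().tail())      # uncertainty band around it
```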
... In later sections, this assumption is relaxed with the introduction of phase-mediator interaction terms. Time series data often exhibit autocorrelated residuals (Box et al., 2008; Shadish & Sullivan, 2011). For example, a lag-1 autocorrelation can be represented as e_{m1,t} = ρ e_{m1,t-1} + u_{m1,t}, where e_{m1,t} is the residual at time point t, e_{m1,t-1} is the residual at time point t-1, u_{m1,t} is a white noise term at time point t that is uncorrelated with both e_{m1,t} and e_{m1,t-1}, and ρ represents the lag-1 correlation between the residual at time point t-1 and the residual at time point t. ...
Article
Single-Case Experimental Designs (SCEDs), or N-of-1 trials, are commonly used to estimate intervention effects in many disciplines including in the treatment of youth mental health problems. SCEDs consist of repeated measurements of an outcome over time for a single case (e.g., student or patient) throughout one or more baseline phases and throughout one or more intervention phases. The manipulation of the baseline and intervention phase make the SCED a type of interrupted time series design, which is considered one of the most effective experimental designs for causal inference. An important step towards understanding why interventions are effective at producing a change in the outcome is through the investigation of mediating mechanisms. Hypotheses of mediating mechanisms involve an intervention variable which is hypothesized to affect an outcome through its effect on a mediating variable. Little work has attempted to combine mediation analysis and ABAB reversal designs. Therefore, the goals of this paper are to define, estimate, and interpret mediation effects for ABAB reversal designs. An empirical example is used to demonstrate how to estimate and interpret the mediation effects. R code is provided for researchers interested in estimating mediation effects in single-case reversal designs.
... where X_i is the measured value of a certain observation i, N is the total number of observations in the averaging period (i.e. the time window) and w is half the window width (Box et al., 1994). The longer the selected observation period, the more stable the Poisson-distributed signal becomes; however, it also becomes less responsive to rapid changes. ...
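Given the definitions quoted (N observations in the window, w half its width), the elided formula is presumably the standard centered moving average:

```latex
\bar{X}_i = \frac{1}{N}\sum_{j=i-w}^{i+w} X_j , \qquad N = 2w + 1
```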
Article
Full-text available
Geophysical and remote sensing products that rely on Poisson‐distributed measurement signals, such as cosmic‐ray neutron sensing (CRNS) and gamma spectrometry, often face challenges due to inherent Poisson noise. Common techniques to enhance signal stability include data aggregation or smoothing (e.g., moving averages and interpolation). However, these methods typically reduce the ability to resolve detailed temporal (stationary data) and spatial (mobile data) features. In this study, we introduced a method for contextual noise suppression tailored to Poisson‐distributed data, utilizing a discrete score attribution system. This score filter evaluates each observation against eight different criteria to assess its consistency with surrounding values, assigning a score between 0 (very unlikely) and 8 (very likely) to indicate whether the observation is likely to act as noise. These scores can then be used to flag or remove data points based on user‐defined thresholds. We tested the score filter's effectiveness on both stationary and mobile CRNS data, as well as on gamma‐ray spectrometry and electromagnetic induction (EMI) recordings. In our examples, the score filter consistently outperformed established filters, for example Savitzky–Golay and Kalman, in direct competition when applied to CRNS time series data. Additionally, the score filter substantially reduced Poisson noise in mobile CRNS, gamma‐ray spectrometry and EMI data. The scoring system also provides a context‐sensitive evaluation of individual observations or aggregates, assessing their conformity within the dataset. Given its general applicability, customizable criteria and very low computational demands, the proposed filter is easy to implement and holds promise as a valuable tool for denoising geophysical data and applications in other fields.
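The paper's eight criteria are not listed in this summary, so the sketch below substitutes three hypothetical neighbour-consistency checks purely to illustrate the scoring idea: each passed check adds one point, and low totals flag likely noise.

```python
import numpy as np

def score_observations(x, w=5, k=2.0):
    """Toy score filter: count how many consistency checks each point passes.

    The published filter uses eight criteria; the three below (deviation from
    the local median, from the left neighbourhood, and from the right
    neighbourhood) are hypothetical stand-ins for illustration only.
    """
    x = np.asarray(x, dtype=float)
    scores = np.zeros(len(x), dtype=int)
    for i in range(w, len(x) - w):
        left, right = x[i - w:i], x[i + 1:i + w + 1]
        local = np.concatenate([left, right])
        sigma = np.sqrt(max(local.mean(), 1.0))   # Poisson: variance ~ mean
        checks = [
            abs(x[i] - np.median(local)) <= k * sigma,
            abs(x[i] - left.mean()) <= k * sigma,
            abs(x[i] - right.mean()) <= k * sigma,
        ]
        scores[i] = sum(checks)
    return scores  # low scores flag observations likely to act as noise

counts = np.random.default_rng(2).poisson(lam=30, size=200)
counts[100] = 90                        # inject an outlier
print(score_observations(counts)[98:103])
```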
... Furthermore, in their authoritative work on time series analysis, Box, Jenkins, and Reinsel provided a thorough explanation of the ARIMA and SARIMA models. They highlighted the SARIMA model's effectiveness in handling periodic data [17]. This theoretical foundation supports the use of SARIMA in this study to improve the accuracy of rainfall predictions in Pakistan, where seasonal variations play a key role. ...
Article
Full-text available
This study aims to improve the accuracy of rainfall forecasting in Pakistan by comparatively analyzing the performance of two models, ARIMA and SARIMA, to optimize the forecasting methodology. The study points out that although the ARIMA model performs well in time series analysis, it has shortcomings in handling data with significant seasonal variations. Therefore, the SARIMA model was introduced, and it performed better in forecasting seasonal variations. Future research should consider combining the SARIMA model with models that can explain global climate phenomena such as El Niño and La Niña to enhance the accuracy of forecasts. In addition, ways to automate and improve the selection of model parameters should be explored to make the SARIMA model more efficient and accurate. The introduction of the SARIMA model has significantly improved prediction accuracy and contributed to more efficient planning and management of water resources. Areas where improvements can be made include reserving water resources in advance during the dry season or allocating water resources appropriately during the rainy season to support irrigation agriculture, urban water supply, flood control measures, etc. These enhanced forecasting methods help Pakistan cope with climate change challenges.
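A minimal SARIMA sketch using statsmodels; the file name rainfall.csv and both orders are placeholders rather than the orders selected in the study:

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical file of monthly rainfall totals; not the study's dataset.
rain = pd.read_csv("rainfall.csv", index_col="month", parse_dates=True)["mm"]

model = SARIMAX(rain,
                order=(1, 0, 1),               # non-seasonal (p, d, q)
                seasonal_order=(1, 1, 1, 12)   # seasonal (P, D, Q, s), s = 12 months
                ).fit(disp=False)

print(model.summary())
print(model.forecast(steps=12))  # rainfall forecast one year ahead
```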
Article
Communication traffic prediction is of great guiding significance for communication planning, management, and the improvement of communication service quality. However, due to the complex spatiotemporal correlations and uncertainty arising from the spatial topology and dynamic temporal characteristics of mobile communication networks, traffic prediction faces enormous challenges. We propose a mobile traffic prediction method using a dynamic spatiotemporal synchronous graph convolutional network (DSSGCN). DSSGCN comprises multiple components that effectively capture the heterogeneity in local spatiotemporal graphs. More specifically, the network not only models the dynamic characteristics of the nodes in the spatiotemporal graph of network traffic, but also captures the dynamic spatiotemporal characteristics of the edges of mobile service data with different time stamps. The outputs of these two components are fused by collaborative convolution to obtain the prediction results. Experiments on two real-world mobile traffic datasets show that our DSSGCN model achieves good prediction performance.
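DSSGCN's components are not specified in detail here; the sketch below shows only the basic graph-convolution step that such networks build on, with a toy four-node topology standing in for real base stations:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: propagate node features over the graph.

    A: (n, n) adjacency matrix, X: (n, f) node features, W: (f, h) weights.
    Uses the common normalisation D^{-1/2} (A + I) D^{-1/2} plus a ReLU.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0)

# Toy example: 4 nodes, 3 traffic features each, 8 hidden units.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.default_rng(3).normal(size=(4, 3))
W = np.random.default_rng(4).normal(size=(3, 8))
print(gcn_layer(A, X, W).shape)  # (4, 8)
```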
Article
Full-text available
Forecasting using historical time series data has become increasingly important in today's world. This paper aims to assess the potential for stable positive development within the wholesale and retail trade sector (SK NACE Section G) and the operations of HORTI, Ltd. (Košice, Slovakia), a company within this industry (SK NACE 46.31, wholesale of fruit and vegetables), by predicting three financial indicators: costs, revenues, and earnings before taxes (EBT) (or earnings after taxes (EAT)). We analyze quarterly data from Q1 2009 to Q4 2023 for the sector and monthly data from January 2013 to December 2022 for HORTI, Ltd. Through time series analysis, we aim to identify the most suitable model for forecasting the trends in these financial indicators. The study demonstrates that simple legacy forecasting methods, such as exponential smoothing and the Box–Jenkins methodology, are sufficient for accurately predicting financial indicators. These models were selected for their simplicity, interpretability, and efficiency in capturing stable trends and seasonality, especially in sectors with relatively stable financial behavior. The results confirm that traditional Holt–Winters' and Autoregressive Integrated Moving Average (ARIMA) models can provide reliable forecasts without the need for more complex approaches. While advanced methods, such as GARCH or machine learning, could improve predictions in volatile conditions, the traditional models offer robust, interpretable results that support managerial decision-making. The findings can help managers estimate the financial health of the company and assess risks such as bankruptcy or insolvency, while also acknowledging the limitations of these models in predicting large shifts due to external factors or market disruptions.
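A minimal Holt-Winters sketch with statsmodels; the file name revenue.csv and the additive specification with a 12-month cycle are illustrative assumptions, not necessarily the paper's configuration:

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly revenue series; the study's data are not reproduced here.
revenue = pd.read_csv("revenue.csv", index_col="month", parse_dates=True)["eur"]

# Additive trend and seasonality with a 12-month cycle: a common default
# for stable monthly business data.
hw = ExponentialSmoothing(revenue, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
print(hw.forecast(steps=4))  # forecast four months ahead
```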
Article
Full-text available
Accurate and timely air quality forecasting is crucial for mitigating pollution-related hazards and protecting public health. Recently, there has been a growing interest in integrating visual data for air quality prediction. However, some limitations remain in existing literature, such as their focus on coarse-grained classification, single-moment estimation, or reliance on indirect and unintuitive information from visual images. Here we present a dual-channel deep learning model, integrating surveillance images and multi-source numerical data for air quality forecasting. Our model, which combines a single-channel hybrid network consisting of VGG16 and LSTM (named VGG16-LSTM) with a single-channel Long Short-Term Memory (LSTM) network, efficiently captures detailed spatiotemporal features from surveillance image sequences and temporal features from atmospheric, meteorological, and temporal data, enabling accurate time-series forecasting of PM2.5 and PM10 concentrations. Experiments conducted on the 2021 Shanghai dataset demonstrate that the proposed model significantly outperforms traditional machine learning methods in terms of accuracy and robustness for time-series forecasting, achieving R2 values of 0.9459 and 0.9045 and RMSE values of 4.79 μg/m3 and 11.51 μg/m3 for PM2.5 and PM10, respectively. Furthermore, validation results on the datasets from two stations in Kaohsiung, Taiwan, with average R2 values of 0.9728 and 0.9365 and average RMSE values of 1.89 μg/m3 and 5.69 μg/m3 for PM2.5 and PM10 using a pretrain–finetune training strategy, confirm the model’s adaptability across diverse geographical contexts. These findings highlight the potential of integrating surveillance images to enhance air quality prediction, offering an effective supplement to ground-level environmental monitoring. Future work will focus on expanding datasets and optimizing network architectures to further improve forecasting accuracy and computational efficiency, enhancing the model’s scalability for broader regional air quality management.
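As a rough structural sketch (not the authors' VGG16-LSTM architecture), a dual-channel model can fuse the final hidden states of two LSTM branches, one over per-frame image features and one over numerical sequences; all dimensions below are hypothetical:

```python
import torch
import torch.nn as nn

class DualChannelPM(nn.Module):
    def __init__(self, img_feat_dim=512, num_feat_dim=8, hidden=64):
        super().__init__()
        # Channel 1: LSTM over per-frame image feature vectors (assumed to be
        # extracted beforehand by a CNN backbone such as VGG16).
        self.img_lstm = nn.LSTM(img_feat_dim, hidden, batch_first=True)
        # Channel 2: LSTM over numerical (meteorological/temporal) sequences.
        self.num_lstm = nn.LSTM(num_feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)  # predict PM2.5 and PM10

    def forward(self, img_seq, num_seq):
        _, (h_img, _) = self.img_lstm(img_seq)   # h_img: (1, B, hidden)
        _, (h_num, _) = self.num_lstm(num_seq)
        fused = torch.cat([h_img[-1], h_num[-1]], dim=1)  # (B, 2*hidden)
        return self.head(fused)

# Batch of 2 samples, 6 time steps each; feature sizes are hypothetical.
img_seq = torch.randn(2, 6, 512)
num_seq = torch.randn(2, 6, 8)
print(DualChannelPM()(img_seq, num_seq).shape)  # torch.Size([2, 2])
```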
Article
The article develops a methodology for calculating the effectiveness of launching a highly innovative product onto a foreign market in the modern digital economy. Russian-made X-ray cargo inspection systems (IDK), model CT-2630M, are considered as the innovative product. These systems greatly increase the speed of vehicle checks at security points, as they produce an X-ray image of the cargo transported by a vehicle without the need to open it and conduct a manual inspection. The authors use a feed-forward neural network method, adapted by them, to predict the economic data of IDK placement, with the forecasting error checked against an autoregressive integrated moving average (ARIMA) model. The analyzed global market comprises 243 countries; the study covers thirteen parameters related to the quantitative assessment of potential IDK placement sites and more than twenty economic parameters of the countries. The countries are also segmented into three groups: highly competitive, low-competitive, and mixed. The article examines only the mixed segment, which is optimal for this product both in terms of the level of competition and innovation and in terms of the countries' solvency. Finally, the economic efficiency of entering the market with the innovative product is calculated for the identified segment of countries using the net present value (NPV) formula. The NPV calculation takes into account parameters such as the country's current inflation rate, the literacy rate of the population (which affects training costs), and the logistics costs of delivering the equipment. The stability of the resulting solution is also analyzed by introducing a small variation in the input data.
Article
Full-text available
The prediction of stock market movements is a critical task for investors, financial analysts, and researchers. In recent years, significant advancements have been made in the field of stock prediction, driven by the integration of machine learning and data analysis techniques. Although stock market predictions are highly sought after, many factors contribute to market volatility, and a broad study of predictive techniques is needed to investigate the scenarios that trigger it. This paper reviews the latest methodologies employed for predicting stock prices, with a particular focus on the Australian stock market. Key techniques, such as time series analysis (ARIMA and GARCH), machine learning models (SVM, LSTM, and neural networks), and sentiment analysis, are discussed, highlighting their applications, key strengths, and some limitations.
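For the volatility side of that toolbox, a minimal GARCH(1,1) sketch using the arch package, fit to synthetic returns standing in for real stock data:

```python
import numpy as np
from arch import arch_model

# Synthetic daily percentage returns; a stand-in for real market data.
returns = 100 * np.random.default_rng(5).normal(scale=0.01, size=1000)

# GARCH(1,1): today's conditional variance depends on yesterday's squared
# shock and yesterday's variance.
am = arch_model(returns, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.summary())
print(res.forecast(horizon=5).variance.iloc[-1])  # 5-day variance forecast
```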