Conference Paper

Short-Term Traffic Forecasting in Optical Network using Linear Discriminant Analysis Machine Learning Classifier

... The main assumptions are analogous to our previous papers focused on SS-FONs [49], [50]. In more detail, a transceiver supports transmission of a signal on an optical carrier (OC) that consists of three adjacent frequency slots (i.e., the OC is 37.5 GHz wide). ...
... This article can be treated as a continuation and extension of the work begun in [50], but also as a separate set of new ideas that connect to concepts already researched. Work [50] introduced the concept of a reallocation period and briefly tested it in a smaller set of scenarios. In contrast to the above work, this article takes into account more topologies, types of traffic, routing algorithms, and different types of reallocation periods. ...
Article
Full-text available
The overall traffic volume in backbone optical networks is constantly growing, composed of many smaller network services with increasing trends and various seasonalities. Thanks to recent advances in machine learning algorithms, short- and long-term traffic fluctuations can be forecasted. Consequently, the backbone optical network can be adapted to traffic changes with the aim of improving its performance. However, an important challenge lies in developing effective optimization methods capable of leveraging this knowledge about traffic changes. To this end, this paper addresses time-varying traffic in spectrally-spatially flexible optical networks (SS-FONs), a promising technology for meeting the vast traffic volumes that backbone networks must transmit. The main contribution of this paper is twofold. Firstly, we introduce a new traffic prediction method using multioutput regression and temporal features to forecast traffic between all node pairs, and integrate this prediction method into an optimization framework developed for dynamic resource allocation in translucent SS-FONs with time-varying traffic. Secondly, we evaluate potential network performance improvements from periodic lightpath reallocation through extensive numerical experiments. According to experiments run on two representative optical network topologies, the proposed approach with periodic resource allocation reduces bandwidth blocking by up to 7.8 percentage points compared to the reference scenario without reallocation. Consequently, the network requires up to 23.4% fewer transceivers to deliver the same traffic in the considered scenarios, which in turn provides power consumption savings.
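The prediction step lends itself to a compact illustration. Below is a minimal sketch, assuming scikit-learn's MultiOutputRegressor over lagged temporal features; the number of node pairs, the 24-step window, and the random-forest base learner are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: forecasting traffic for all node pairs at once with
# multioutput regression over lagged temporal features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_steps, n_pairs, lag = 500, 6, 24            # 6 node pairs, 24-step history window

# Synthetic stand-in for a traffic-matrix time series with daily-like seasonality.
base = 50 + 20 * np.sin(2 * np.pi * np.arange(n_steps)[:, None] / 48 + np.arange(n_pairs))
traffic = base + rng.normal(0, 2, (n_steps, n_pairs))

# Build lagged temporal features: flatten the last `lag` traffic matrices into one row.
X = np.stack([traffic[t - lag:t].ravel() for t in range(lag, n_steps)])
y = traffic[lag:]                             # next-step traffic for every pair

model = MultiOutputRegressor(RandomForestRegressor(n_estimators=50, random_state=0))
model.fit(X[:-50], y[:-50])
pred = model.predict(X[-50:])                 # one forecast per node pair per step
print(pred.shape)                             # (50, 6)
```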
... In the state-of-the-art optical networks, traffic is typically represented by demands. [137][138][139] The optical network operates on a time scale that can be divided into time steps or iterations. In particular, in each time step/iteration, a number of demands arrive at the network, some of which are established. ...
... Furthermore, this was extended by including the prediction of traffic volume and holding time [138]. They observed that the best classifier for such tasks was LDA [137]. Additionally, Gui et al. [143] benchmarked their GCN-GRU based traffic prediction against several approaches, including LSTM, CNN, and GRU, and the results suggested that GCN-GRU offers higher prediction quality than these other approaches. ...
Article
Full-text available
Optical networks generate a vast amount of diagnostic, control and performance monitoring data. When information is extracted from this data, reconfigurable network elements and reconfigurable transceivers allow the network to adapt both to changes in the physical infrastructure and to changing traffic conditions. Machine learning is emerging as a disruptive technology for extracting useful information from this raw data to enable enhanced planning, monitoring and dynamic control. We provide a survey of the recent literature and highlight numerous promising avenues for machine learning applied to optical networks, including explainable machine learning, digital twins, and approaches in which we embed our knowledge into the machine learning, such as physics-informed machine learning for the physical layer and graph-based machine learning for the networking layer.
... That information may then be utilized, for instance, to plan the routing rules for approaching traffic demands. By these means, it is possible to increase the ratio of accepted demands while the demand switching process is realized faster [11,12]. In long-term prediction, which is much more challenging, the forecast is made for a longer period (i.e., several upcoming hours or even days/months). ...
... It is worth mentioning that SIX is currently one of the most popular traffic datasets in the research community, as it shares extensive and diverse statistics. It is especially widely applied to the task of traffic prediction in various configurations [11,12]. Similarly, several other platforms publish general information regarding observed traffic at Internet exchange points. ...
Article
Full-text available
The paper studies efficient modeling and prediction of daily traffic patterns in transport telecommunication networks. The investigation is carried out using two historical datasets, namely WASK and SIX, which collect flows from edge nodes of two networks of different sizes. WASK is a novel dataset introduced and analyzed for the first time in this paper, while SIX is a well-known source of network flows. For the considered datasets, the paper proposes traffic modeling and prediction methods. For traffic modeling, the Fourier Transform is applied. For traffic prediction, two approaches are proposed: modeling-based (the forecasting model is generated from historical traffic models) and machine learning-based (network traffic is handled as a data stream to which chunk-based regression methods are applied for forecasting). Extensive simulations are then performed to verify the efficiency of the approaches and to compare them. The proposed modeling method revealed high efficiency, especially for the SIX dataset, where the average error was lower than 0.1%. The efficiency of the two forecasting approaches differs across datasets: modeling-based methods achieved lower errors for SIX, while machine learning-based methods did so for WASK. The average prediction error for SIX reached 3.36%, while forecasting for WASK turned out to be extremely challenging.
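A minimal sketch of the Fourier-based modeling idea may help: the day's traffic is transformed with the FFT, only the strongest harmonics are kept, and the inverse transform yields a smooth daily model. The 5-minute resolution and the number of retained harmonics are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of Fourier-based modeling of a daily traffic pattern:
# keep the strongest harmonics of the FFT and reconstruct the series.
import numpy as np

t = np.arange(288)                               # one day at 5-minute resolution
daily = 100 + 40 * np.sin(2 * np.pi * t / 288) + np.random.default_rng(1).normal(0, 5, 288)

spectrum = np.fft.rfft(daily)
keep = 4                                         # retain only the strongest components
idx = np.argsort(np.abs(spectrum))[:-keep]       # indices of the weaker coefficients
spectrum[idx] = 0                                # zero everything except `keep` components
model = np.fft.irfft(spectrum, n=len(daily))     # smooth Fourier model of the day

err = np.mean(np.abs(model - daily) / daily) * 100
print(f"mean relative modeling error: {err:.2f}%")
```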
... Such a concept follows from the characteristics of optical networks and other transport network technologies. To the best of our knowledge, the only paper that focuses on the prediction of traffic levels in optical networks is our previous paper [16], where we presented some preliminary results on this problem. To fill the research gap, in this paper we introduce and examine two supervised ML approaches: classification and regression. ...
... All experiments were conducted using datasets generated based on real traffic characteristics. The results reported in the next sections prove that the proposed approaches outperform the methods described in [16]. ...
Article
Full-text available
Rapid growth of network traffic creates the need for the development of new network technologies. Artificial intelligence provides suitable tools to improve currently used network optimization methods. In this paper, we propose a procedure for network traffic prediction. Based on the characteristics of optical networks (and other network technologies), we focus on the prediction of fixed bitrate levels called traffic levels. We develop and evaluate two approaches based on different supervised machine learning (ML) methods: classification and regression. We examine four different ML models with various selected features. The tested datasets are based on real traffic patterns provided by the Seattle Internet Exchange Point (SIX). The obtained results are analyzed using a new quality metric, which allows researchers to find the best forecasting algorithm in terms of network resource usage and operational costs. Our research shows that regression provides better results than classification for all analyzed datasets. Additionally, the final choice of the most appropriate ML algorithm and model should depend on the network operator's expectations.
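The two approaches can be contrasted in a few lines. The sketch below, under assumed bitrate levels and lag features, predicts a fixed traffic level either directly with an LDA classifier or by regressing the bitrate and snapping it to the nearest level; it illustrates the problem framing only, not the paper's exact models or datasets.

```python
# Illustrative sketch: traffic-level prediction via classification vs. regression.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
levels = np.array([100, 200, 400, 800])           # hypothetical bitrate levels (Gbps)

# Persistent level sequence: the current level is usually kept, sometimes re-drawn.
idx = np.zeros(600, dtype=int)
for t in range(1, 600):
    idx[t] = idx[t - 1] if rng.random() < 0.9 else rng.integers(0, 4)
series = levels[idx] + rng.normal(0, 10, 600)     # noisy observed bitrate

lag = 8
X = np.stack([series[t - lag:t] for t in range(lag, len(series))])
y_true = series[lag:]
y_class = np.abs(levels[None, :] - y_true[:, None]).argmin(axis=1)  # nearest-level label

clf = LinearDiscriminantAnalysis().fit(X[:-100], y_class[:-100])
reg = LinearRegression().fit(X[:-100], y_true[:-100])

cls_level = levels[clf.predict(X[-100:])]
reg_level = levels[np.abs(levels[None, :] - reg.predict(X[-100:])[:, None]).argmin(axis=1)]
true_level = levels[y_class[-100:]]
print("classification accuracy:", (cls_level == true_level).mean())
print("regression accuracy:   ", (reg_level == true_level).mean())
```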
... Linear discriminant analysis, or LDA for short, is a fundamental approach to machine learning and pattern recognition for dimensionality reduction. LDA is one of the classical algorithms for subspace discriminant analysis [23]. It has been widely implemented in several research domains. ...
Article
Full-text available
This research aims to propose a machine learning approach to automatically evaluate or categorize hospital quality status using quality indicator data. The research was divided into six stages: data collection, pre-processing, feature engineering, data training, data testing, and evaluation. In 2020, we collected 5,542 data values for quality indicators from 658 Indonesian hospitals. However, we analyzed data from only 275 hospitals due to inadequate submissions. We employed machine learning methods such as decision tree (DT), Gaussian naïve Bayes (GNB), logistic regression (LR), k-nearest neighbors (KNN), support vector machine (SVM), linear discriminant analysis (LDA) and neural network (NN) for research archive purposes. Logistic regression achieved a 70% accuracy rate, SVM 68%, and the neural network 59.34%. Moreover, k-nearest neighbors achieved 54% accuracy and the decision tree 41%. Gaussian NB achieved a 32% accuracy rate. Linear discriminant analysis achieved the highest accuracy with 71%. It can be concluded that linear discriminant analysis is the algorithm best suited to the hospital quality data in this research.
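A hedged sketch of this kind of benchmark is shown below, with synthetic data standing in for the (non-public) hospital quality indicators; the feature count and cross-validation setup are assumptions for illustration.

```python
# Illustrative benchmark loop: several classifiers, LDA among them,
# evaluated on one dataset with cross-validation.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=275, n_features=8, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "GNB": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()   # mean 5-fold accuracy
    print(f"{name}: {acc:.3f}")
```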
... Their research claims that the regression model outperforms the classification model in traffic prediction. D. Szostak et al. [10] reformulated the traffic prediction problem as a classification problem, and their proposed classification model predicts traffic bitrate levels rather than exact traffic volumes. According to their experimental results, the Linear Discriminant Analysis (LDA) classifier they utilized to anticipate future traffic was 93 percent accurate. ...
Article
Full-text available
The ISP (Internet Service Provider) industry relies heavily on internet traffic forecasting (ITF) for long-term business strategy planning and proactive network management. Effective ITF frameworks are necessary to manage these networks and prevent network congestion and over-provisioning. This study introduces an ITF model designed for proactive network management. It innovatively combines outlier detection and mitigation techniques with advanced gradient descent and boosting algorithms, including Gradient Boosting Regressor (GBR), Extreme Gradient Boosting (XGB), Light Gradient Boosting Machine (LGB), CatBoost Regressor (CBR), and Stochastic Gradient Descent (SGD). In contrast to traditional methods that rely on synthetic datasets, our model addresses the problems caused by real aberrant ISP traffic data. We evaluated our model across varying forecast horizons—six, nine, and twelve steps—demonstrating its adaptability and superior predictive accuracy compared to traditional forecasting models. The integration of the outlier detection and mitigation module significantly enhances the model’s performance, ensuring robust and accurate predictions even in the presence of data volatility and anomalies. To guarantee that our suggested model works in real-world situations, our research is based on an extensive experimental setup that uses real internet traffic monitoring from high-speed ISP networks.
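The pipeline the abstract outlines can be sketched as follows: detect outliers, mitigate them, then forecast with a boosting regressor. The IQR rule and rolling-median replacement used here are common choices assumed for illustration, not necessarily the paper's module.

```python
# Hedged sketch: outlier detection/mitigation followed by boosting-based forecasting.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
traffic = pd.Series(100 + 10 * np.sin(np.arange(400) / 10) + rng.normal(0, 2, 400))
traffic.iloc[[50, 180, 300]] = [400, -50, 390]        # injected anomalies

# IQR-based outlier detection, rolling-median mitigation (assumed choices).
q1, q3 = traffic.quantile([0.25, 0.75])
iqr = q3 - q1
outliers = (traffic < q1 - 1.5 * iqr) | (traffic > q3 + 1.5 * iqr)
clean = traffic.mask(outliers, traffic.rolling(5, center=True, min_periods=1).median())

lag = 12                                              # 12-step feature window
X = np.stack([clean.values[t - lag:t] for t in range(lag, len(clean))])
y = clean.values[lag:]
model = GradientBoostingRegressor().fit(X[:-60], y[:-60])
print(model.predict(X[-60:])[:5])                     # first few held-out forecasts
```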
... To the best of our knowledge, long-term traffic forecasting has not been addressed in the literature in the context of prediction of traffic level. Additionally, short-term traffic forecasting using traffic levels was described only in articles [14], [25] and [26]. To fill the research gap, this work introduces, formulates, and examines the long-term forecasting problem as a prediction of fixed traffic levels. ...
... a comparative analysis of ML approaches for categorization of network traffic. D. Szostak et al. [17] showed that, to provide services to users, network operators must prioritize efficient resource allocation. Optimizing it provides cost savings, the required service quality, and detection of anomalies in data flows. ...
Conference Paper
Full-text available
In order to spot potential security threats or performance problems, Network Traffic Analysis (NTA) involves monitoring and analyzing network traffic. Machine Learning (ML) methods are frequently used to automate NTA. Network traffic classification, anomaly detection, and malicious activity detection can all be done using ML techniques. They can also be utilized to forecast future traffic patterns in order to enhance network performance. ML algorithms come in a variety of forms and can be applied to NTA. Support vector machines (SVM), decision trees, and random forests are the most used methods. The choice of algorithm depends on the particular application: SVMs, for instance, are frequently used for classification tasks, whereas decision trees are frequently utilized for anomaly detection jobs. Network performance and security can be enhanced using NTA employing ML; it can aid in the detection of possible risks, the prevention of data breaches, and the enhancement of network performance. The proposed work discusses real-time NTA using ML and DL algorithms, along with tools and applications. A random forest algorithm is implemented, obtaining an accuracy of 99.31%. The benefits of applying ML to NTA include increased accuracy, as ML algorithms have the potential to be more precise than conventional rule-based techniques in spotting dangers and anomalies; fewer false positives, as ML algorithms can be customized to produce fewer false positives, saving time and money; and enhanced scalability, as ML algorithms can be scaled to manage high levels of network traffic.
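As a minimal illustration of ML-based NTA, the sketch below trains a random forest to separate normal flow records from a volumetric anomaly; the two synthetic flow features are assumptions for the example.

```python
# Minimal sketch: random-forest classification of flow records
# into "normal" vs. "anomalous" traffic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
normal = rng.normal([500, 20], [100, 5], size=(900, 2))     # bytes/s, connections/s
attack = rng.normal([3000, 200], [300, 30], size=(100, 2))  # volumetric anomaly
X = np.vstack([normal, attack])
y = np.array([0] * 900 + [1] * 100)                         # 0 = normal, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.4f}")
```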
... In order to obtain an accurate assessment of the performance of the model, we carried out the stratified 10-fold cross-validation 100 times. In the beginning, traditional machine learning modeling was carried out with the assistance of the LightGBM Classifier (LGBM) [57], Gradient Boosting Classifier (GBC) [58], XGBoost Classifier (XGB) [59], Extra Tree Classifier (ETC) [60], Decision Tree Classifier (DT) [61], Random Forest Classifier (RF) [62], Linear Discriminant Analysis (LDA) [63], and Logistic Regression (LR) [64]. Following this, the three models that had the best overall effectiveness within these records were chosen for stacking modeling. ...
Article
Full-text available
Background: Frailty is a serious physical disorder affecting the elderly all over the world. However, the frail elderly have low physical fitness, which limits the effectiveness of current exercise programs. Inspired by this, we attempted to integrate Baduanjin with strength and endurance exercises into an exercise program to improve physical fitness and alleviate frailty among the elderly. Additionally, to achieve the goals of personalized medicine, machine learning simulations were performed to predict post-intervention frailty. Methods: A total of 171 frail elderly individuals completed the experiment, divided into a Baduanjin group (BDJ), a strength and endurance training group (SE), and a combined Baduanjin and strength and endurance training group (BDJSE); the intervention lasted 24 weeks. Physical fitness was evaluated by the 10-meter maximum walk speed (10 m MWS), grip strength, the timed up-and-go test (TUGT), and the 6 min walk test (6 min WT). A one-way analysis of variance (ANOVA), chi-square test, and two-way repeated-measures ANOVA were carried out to analyze the experimental data. In addition, nine machine learning models were utilized to predict frailty status after the intervention. Results: In 10 m MWS and TUGT, there was a significant interaction effect between group and time. Compared with the BDJ and SE groups, participants in the BDJSE group demonstrated the greatest gains in 10 m MWS and TUGT after 24 weeks of intervention. The stacking model surpassed the other algorithms in performance, with accuracy and precision rates of 75.5% and 77.1%, respectively. Conclusion: The hybrid exercise program combining Baduanjin with strength and endurance training proved more effective at improving fitness and reversing frailty in elderly individuals. Based on the stacking model, it is possible to predict whether an elderly person will exhibit reversed frailty following an exercise program.
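The stacking step described in the excerpt above can be sketched with scikit-learn's StackingClassifier. The base models and logistic meta-learner below are assumptions; the study stacks its own three best-performing models.

```python
# Hedged sketch: stacking base classifiers through a meta-learner,
# evaluated with (stratified) 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=10,                                    # mirrors the 10-fold strategy in the study
)
scores = cross_val_score(stack, X, y, cv=10)  # stratified 10-fold for classifiers
print(f"mean accuracy: {scores.mean():.3f}")
```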
... Relatively speaking, machine learning methods such as the support vector machine (SVM) algorithm have strong generalization ability and global optimality. They can capture and analyze the dependencies in network traffic to predict future network behavior [8], and they work for small samples. ...
Article
Full-text available
With the development of network function virtualization (NFV), the resource management of service function chains (SFC) in the virtualized environment has gradually become a research hotspot. Usually, users hope that they can get the network services they want anytime and anywhere. The network service requests are dynamic and real-time, which requires that the SFC in the NFV environment can also meet the dynamically changing network service requests. In this regard, this paper proposes an SFC deployment method based on traffic prediction and adaptive virtual network function (VNF) scaling. Firstly, an improved network traffic prediction method is proposed to improve its prediction accuracy for dynamically changing network traffic. Secondly, the predicted traffic data is processed for the subsequent scaling of the VNF. Finally, an adaptive VNF scaling method is designed for the purpose of dynamic management of network virtual resources. The experimental results show that the method proposed in this paper can manage the network resources in dynamic scenarios. It can effectively improve the availability of network services, reduce the operating overhead and achieve a good optimization effect.
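A toy sketch of the adaptive-scaling idea: predicted traffic is translated into a number of VNF instances to deploy, with headroom against prediction error. The per-instance capacity and the 20% headroom are assumed values, not parameters from the paper.

```python
# Illustrative sketch: sizing an SFC from predicted traffic.
import math

VNF_CAPACITY_MBPS = 500          # hypothetical throughput of one VNF instance
HEADROOM = 1.2                   # over-provision 20% against prediction error

def instances_needed(predicted_mbps: float) -> int:
    """Scale the service function chain to cover predicted load plus headroom."""
    return max(1, math.ceil(predicted_mbps * HEADROOM / VNF_CAPACITY_MBPS))

for forecast in [120.0, 900.0, 2300.0]:
    print(f"predicted {forecast:7.1f} Mbps -> {instances_needed(forecast)} instance(s)")
```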
... We used a 10-fold cross-validation training strategy, and each model was trained one hundred times, separately. First, we train ten classical machine learning models using LightGBM Classifier (LGBM) [50], Gradient Boosting Classifier (GBC) [51], XGBoost Classifier (XGB) [52], Extra Tree Classifier (ETC) [53], k Neighbors Classifier (KNN) [54], Decision Tree Classifier (DT) [55], Random Forest Classifier (RF) [56], Linear Discriminant Analysis (LDA) [57], Support Vector Classifier (SVC) [58], and Logistic Regression (LR) [59]. Then, we used the three models with the highest average accuracy for stacking. ...
Article
Full-text available
Background: Sarcopenia is a geriatric syndrome characterized by decreased skeletal muscle mass and function with age. It is well-established that resistance exercise and Yi Jin Jing improve the skeletal muscle mass of older adults with sarcopenia. Accordingly, we designed an exercise program incorporating resistance exercise and Yi Jin Jing to increase skeletal muscle mass and reverse sarcopenia in older adults. Additionally, machine learning simulations were used to predict sarcopenia status after the intervention. Method: This randomized controlled trial assessed the effects of the interventions on sarcopenia in older adults. For 24 weeks, 90 older adults with sarcopenia were divided into intervention groups, including the Yi Jin Jing and resistance training group (YR, n = 30), the resistance training group (RT, n = 30), and the control group (CG, n = 30). Computed tomography (CT) scans of the abdomen were used to quantify the skeletal muscle cross-sectional area at the third lumbar vertebra (L3 SMA). Participants' age, body mass, stature, and BMI characteristics were analyzed by one-way ANOVA and the chi-squared test for categorical data. This study explored the improvement effect of the three interventions on participants' L3 SMA, skeletal muscle density at the third lumbar vertebra (L3 SMD), skeletal muscle interstitial fat area at the third lumbar vertebra region of interest (L3 SMFA), skeletal muscle interstitial fat density at the third lumbar vertebra (L3 SMFD), relative skeletal muscle mass index (RSMI), muscle fat infiltration (MFI), and handgrip strength. Experimental data were analyzed using two-way repeated-measures ANOVA. Eleven machine learning models were trained and tested 100 times to assess their performance in predicting whether sarcopenia could be reversed following the intervention. Results: There was a significant interaction in L3 SMA (p < 0.05), RSMI (p < 0.05), MFI (p < 0.05), and handgrip strength (p < 0.05). After the intervention, participants in the YR and RT groups showed significant improvements in L3 SMA, RSMI, and handgrip strength. Post hoc tests showed that the YR group yielded significantly better L3 SMA and RSMI than the RT group (p < 0.05) and CG group (p < 0.05) after the intervention. Compared with other models, the stacking model exhibits the best performance in terms of accuracy (85.7%) and F1 (75.3%). Conclusion: A hybrid exercise program with Yi Jin Jing and resistance exercise training can improve skeletal muscle area among older adults with sarcopenia. Accordingly, it is possible to predict whether sarcopenia can be reversed in older adults based on our stacking model.
... The increase in Internet traffic growth motivates many efforts focused on increasing the capacity of communication networks in order to provide better service to users. Thus, optical access systems were established as a promising technology able to support large volumes of data traffic, subject to adequate levels of QoS (quality of service) [4]. ...
... D. Szostak et al. [12] formulated the traffic prediction problem as a classification problem, and their proposed classification model predicts traffic bitrate levels instead of exact traffic volumes. They used a Linear Discriminant Analysis (LDA) classifier to forecast future traffic, which shows 93% accuracy according to their experimental results. ...
Preprint
Full-text available
Prediction of network traffic behavior is significant for the effective management of modern telecommunication networks. However, the intuitive approach of predicting network traffic using administrative experience and market analysis data is inadequate for an efficient forecast framework. As a result, many different mathematical models have been studied to capture the general trend of network traffic and predict accordingly. However, a comprehensive performance analysis of varying regression models and their ensembles for analyzing real-world anomalous traffic has not been conducted before. In this paper, several regression models such as Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Stochastic Gradient Descent (SGD), Gradient Boosting Regressor (GBR), and CatBoost Regressor were analyzed to predict real traffic with and without outliers, showing the significance of outlier detection in real-world traffic prediction. We also showed that the ensemble regression model outperforms the individual prediction models. We compared the performance of the different regression models based on five feature sets of lengths 6, 9, 12, 15, and 18. Our ensemble regression model achieved a minimum average gap of 5.04% between actual and predicted traffic with nine outlier-adjusted inputs. In general, our experimental results indicate that outliers in the data can significantly impact the quality of the prediction. Thus, outlier detection and mitigation assist the regression model in learning the general trend and making better predictions.
... In [3], the authors presented a machine learning approach for short-term traffic forecasting. Linear Discriminant Analysis (LDA) is used to predict real traffic, for which 93% accuracy is claimed by the authors. ...
Article
Full-text available
In this paper, we use three machine learning techniques to recognize objects and human faces: Linear Discriminant Analysis (LDA) along different eigenvectors of an image, a Fuzzy Inference System (FIS), and Fuzzy c-means clustering (FCM). Fuzzy c-means clustering is also combined with multiple linear regression (MLR) to reduce four-dimensional variables to two dimensions, capturing the influence of all variables on the scatterplot. To keep outliers within a narrow range, MLR is again applied in logistic regression. Each individual method is found suitable for a particular type of object recognition but does not deliver a consistent recognition rate across all types of objects. For example, LDA along eigenvectors provides high accuracy for human face recognition but performs very poorly on discrete objects like chairs, butterflies, etc. FCM and FIS provide moderate results in all kinds of object detection, but the combination of the three methods provides the expected results with low processing time compared to a deep learning neural network.
... Although some papers are covering the application of ML and DL to optical networks, it is still in its early stages. In related works researchers are focusing mostly on the following aspects of network layer domain: traffic prediction in the time-space domain to effectively plan and operate communication networks [4][5][6][7][8][9], Virtual Topology Design (VTD) and reconfiguration along with solving Routing, Modulation and Spectrum Assignment (RMSA) problem [10], [11], failure detection and localization to maintain service status and meet Service Level Agreements (SLAs) [12][13][14], traffic flow classification that will allow flows differentiation to guarantee proper Quality of Service [15][16][17][18][19][20], and path computation for enabling fast path selection and more efficient service provisioning [21]. ...
Conference Paper
Full-text available
Nowadays, artificial intelligence provides an excellent opportunity for scientists to improve the efficiency of resource allocation in communication networks. In this paper, we focus on applying two methods, Long Short-Term Memory and Monte Carlo Tree Search, to solve the problem of cloud resource allocation in dynamic, real-time traffic scenarios. We use a framework of Software Defined Elastic Optical Networks and cloud resources available from Amazon Web Services. Results show that the application of Monte Carlo Tree Search and Long Short-Term Memory provides superior performance, which is an excellent opportunity for network operators to achieve better utilization of their networks with lower operational costs.
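A minimal sketch of the forecasting half of such a scheme, assuming a small Keras LSTM over a sliding window; the window size, layer width, and synthetic series are illustrative only.

```python
# Hedged sketch: an LSTM forecaster over a sliding window of past samples.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(10)
series = np.sin(np.arange(600) / 10) + 0.1 * rng.normal(size=600)  # toy demand trace

lag = 20
X = np.stack([series[t - lag:t] for t in range(lag, len(series))])[..., None]
y = series[lag:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(lag, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),                 # next-step demand estimate
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[-3:], verbose=0).ravel())
```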
... Due to the ever-evolving nature of Internet usage and a vast number of variables involved, developing efficient algorithms is a complex task that is difficult to tackle. In recent years ML and DL have been leveraged in some research studies to the RSA problem and traffic prediction [23], [24], [25], [26], [27] with promising results. Although some papers are covering the application of ML to optical networks, it is still in its early stage [28]. ...
Conference Paper
Full-text available
Responding to the rapid growth of network traffic, the development of new backbone network technologies is essential. One of the solutions is to use Elastic Optical Networks (EONs) instead of traditional Wavelength Division Multiplexing (WDM). EONs provide the means to allocate spectrum bandwidth flexibly in highly congested networks. However, due to physical limitations, many EON nodes require the use of regenerators. To optimize the cost and efficiency of signal regeneration, we propose the use of a deep learning model. We use it to calculate the best allocation of regenerators in the network so that, even with limited resources, EONs can still efficiently route traffic. After numerous experiments, our approach shows significant improvement in reducing both spectrum and regenerator blocking rates.
Article
Full-text available
Background Sarcopenia is characterized by the loss of skeletal muscle mass and muscle function with increasing age. The skeletal muscle mass of older people who endure sarcopenia may be improved via the practice of strength training and tai chi. However, it remains unclear whether hybridizing strength exercise training and traditional Chinese exercise has a better effect. Objective We designed a strength training and tai chi exercise hybrid program to improve sarcopenia in older people. Moreover, explainable artificial intelligence was used to predict postintervention sarcopenic status and quantify the feature contributions. Methods To assess the influence of sarcopenia in older people, 93 participants took part in a 24-week randomized controlled trial and were randomized into 3 intervention groups, namely the tai chi exercise and strength training hybrid group (TCSG; n=33), the strength training group (STG; n=30), and the control group (n=30). Abdominal computed tomography was used to evaluate the skeletal muscle mass at the third lumbar (L3) vertebra. Analysis of demographic characteristics of participants at baseline used 1-way ANOVA and χ² tests, and repeated-measures ANOVA was used to analyze experimental data. In addition, 10 machine-learning classification models were used to calculate whether these participants could reverse the degree of sarcopenia after the intervention. Results A significant interaction effect was found in skeletal muscle density at the L3 vertebra, skeletal muscle area at the L3 vertebra (L3 SMA), grip strength, muscle fat infiltration, and relative skeletal muscle mass index (all P values were <.05). Grip strength, relative skeletal muscle mass index, and L3 SMA were significantly improved after the intervention for participants in the TCSG and STG (all P values were <.05). After post hoc tests, we found that participants in the TCSG experienced a better effect on L3 SMA than those in the STG and the control group. The LightGBM classification model had the greatest performance in accuracy (88.4%), recall score (74%), and F1-score (76.1%). Conclusions The skeletal muscle area of older adults with sarcopenia may be improved by a hybrid exercise program composed of strength training and tai chi. In addition, we identified that the LightGBM classification model had the best performance in predicting the reversion of sarcopenia.
Article
The declining physical condition of older adults is a pressing issue. Wu Qin Xi exercise, despite being low-intensity, is highly effective among older adults. Inspired by its characteristics, we designed a new exercise program for frail older adults, combining strength, endurance, and Wu Qin Xi. Furthermore, we employed machine learning to predict whether frailty can be reversed in older adults after the intervention. A total of 181 community-dwelling frail older adults aged 65 years or older participated in this single-center, randomized controlled study, with 54.7% (n=99) being female. The study assessed the effectiveness of several exercise modalities in reversing frailty. Fried's frailty criterion was used to assess the degree of frailty of the subjects. Participants were assigned a three-digit code 001–163 and randomly assigned (1:1:1) by computer, based on the study participant number, to three different groups: the Wu Qin Xi group (WQX), the strength exercise mixed with endurance exercise training group (SE), and the WQXSE hybrid exercise group, which incorporated the above two. Body composition and frailty-related physical fitness factors were measured before and after a 24-week intervention. The measurements included body height, body mass, the Timed Up and Go Test (TUGT), grip strength assessment (GS), the 6 min walk test (6 min WT), and the 10 m maximum walk speed (10 m MWS). Data were analyzed using repeated-measures ANOVA to determine group and time interaction effects, and machine learning models were used to predict program effectiveness. A total of 163 participants completed the study, with 53.9% (n=88) of them being female. Two items, 10 m MWS and grip strength, were significantly affected by the interaction of group and time. Compared to the other two groups, the WQXSE group showed the most improvement in 10 m MWS. In addition, following 24 weeks of training, 68 (41.7%) of the initially frail older adults had reversed their frailty status. Among them, 19 (36.5%) were in the WQX group, 24 (44.4%) were in the WQXSE group, and 25 (43.9%) were in the SE group. The stacking model exhibited superior performance when compared to other algorithms. A hybrid exercise regimen comprising the Wu Qin Xi routine and exercises focused on both strength and endurance holds the potential to yield greater improvements in the physical fitness of older adults, as well as reducing frailty. Leveraging a stacking model, it is possible to forecast the likelihood of older adults successfully reversing their frailty status following participation in a prevention exercise program.
Article
Full-text available
During the last decade, optical networks have become "smart networks". Software-defined networks, software-defined optical networks, and elastic optical networks are some emerging technologies that provide a basis for promising innovations in the functioning and operation of optical networks. Machine learning algorithms provide the possibility to develop this promising study area: since machine learning can learn from the large amounts of data available from network elements, it can find a suitable solution for any environment and thus create more dynamic and flexible networks that improve the user experience. This paper performs a systematic mapping that provides an overview of machine learning in optical networks, identifies opportunities, and suggests future research lines. The study analyzed 96 papers from the 841 publications on this topic to find information about the use of machine learning techniques to solve problems related to the functioning and operation of optical networks. It is concluded that supervised machine learning techniques are mainly used for resource management, network monitoring, fault management, and traffic classification and prediction in optical networks. However, specific challenges need to be solved to successfully deploy this type of method in real communication systems, since most of the research has been carried out in controlled experimental environments.
Article
Recent studies have shown that a large amount of the energy consumption in the communication medium occurs at the data processing plane. The data plane executes the forwarding decisions in routers and switches. Initial studies identified the line card as the most granular component of the router; later, the forwarding engine, a subcomponent of the line card, was recognized as the most power-intensive component of routers. Fundamental techniques like link rate adaptation, sleep, and wake-up have been applied to reduce the energy consumed by line cards and forwarding engines. Meanwhile, router architectures have undergone drastic changes, such as empowering the data plane with parallel processing, due to the rise of gigabit routers. In this paper, to increase the degree of local autonomy and improve energy savings, we have adopted the parallel processing design and imposed a variable amount of traffic on different line cards. Existing parallel forwarding engines perform load balancing in a synchronous manner. In our design, asynchronous behavior is imposed such that every line card can independently decide to wake up or turn off its forwarding engine based on the traffic parameters. We have run simulations and witnessed increased energy savings (average sleeping time) without compromising the completion of the lookup process.
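The asynchronous decision rule can be illustrated with a toy model in which each line card independently decides, from locally observed load, whether to turn its forwarding engine off or on; the hysteresis thresholds below are assumptions, not values from the paper.

```python
# Toy sketch: per-line-card asynchronous sleep/wake decisions with hysteresis.
from dataclasses import dataclass

@dataclass
class LineCard:
    name: str
    awake: bool = True

    def decide(self, load: float, sleep_below: float = 0.2, wake_above: float = 0.4) -> None:
        # Hysteresis band: sleep only under light load, wake well before saturation.
        if self.awake and load < sleep_below:
            self.awake = False                  # turn off the forwarding engine
        elif not self.awake and load > wake_above:
            self.awake = True                   # wake up to absorb rising traffic

cards = [LineCard("lc0"), LineCard("lc1")]
for load in [0.5, 0.1, 0.3, 0.6]:               # per-interval utilization samples
    for card in cards:
        card.decide(load)                       # each card decides independently
    print(load, [card.awake for card in cards])
```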
Article
To provide suitable operation in optical code division multiple access (OCDMA) networks, it is paramount to balance the powers received at the destination optical node. This work presents a solution strategy for the power allocation (PA) problem in OCDMA by rewriting it as a linear programming (LP) problem and applying two LP methods based on the Simplex method and the Interior Point method (IPM). The LP methods proposed in the OCDMA PA context were compared with two methods available in the literature: a) the hybrid ALPSO PA method, which is based on the particle swarm optimization (PSO) strategy combined with the augmented Lagrangian (AL) analytical method and the GUROBI solver; b) the high-complexity benchmark matrix inversion method (MIM), which is used to verify the quality of the Simplex and IPM solutions. Numerical results in terms of floating-point operations (FLOPS), normalized mean squared error (NMSE), convergence, and the evolution of the allocated power reveal the effectiveness and efficiency of both LP methods compared with the literature methods, mainly for higher network dimensions (≥ 32 optical nodes), achieving better accuracy-complexity tradeoffs. For 32 to 512 users, the IPM achieved perfect feasibility and low complexity, i.e., an order of magnitude fewer FLOPS than Simplex, two to five times fewer than the MIM procedure, and at least four orders of magnitude less complexity than the hybrid analytical-heuristic ALPSO method.
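The LP formulation can be illustrated with SciPy's linprog, whose HiGHS backend offers simplex- and interior-point-style solvers in the spirit of the two methods compared above. The attenuation factors and target received power below are illustrative numbers, and the constraint set is deliberately simplified.

```python
# Hedged sketch: power allocation cast as a linear program.
import numpy as np
from scipy.optimize import linprog

n = 4                                            # optical nodes
rng = np.random.default_rng(5)
loss = rng.uniform(0.2, 0.9, size=n)             # per-path attenuation factors (assumed)
target = 1.0                                     # required received power (a.u.)

# minimize total transmitted power  sum(p)
# subject to  loss_i * p_i >= target   (written as -loss_i * p_i <= -target)
res = linprog(
    c=np.ones(n),
    A_ub=-np.diag(loss),
    b_ub=-target * np.ones(n),
    bounds=[(0, 10)] * n,
    method="highs",                              # HiGHS: simplex/interior-point variants
)
print(res.x)                                     # allocated transmit powers
```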
Chapter
This work focuses on finding an efficient Machine Learning (ML) method for traffic prediction in optical networks. Considering optical networks' characteristics, we predict fixed bitrate levels. For the considered problem, we propose two ML approaches, namely classification and regression, for which we compare the performance of single ML algorithms and ensemble methods. Features for dataset instances are selected based on autocorrelation analysis. Prediction quality is analyzed using a new metric, which evaluates algorithms in terms of network resource usage and operational costs. Results obtained for datasets generated using real-world traffic characteristics show that, for the examined prediction problem, the regression approach yields better results than classification. Additionally, ensemble methods outperform single ML algorithms, i.e., the best committee returns 5.3% better results than the reference algorithm.
Article
Full-text available
Today’s telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users’ behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and take decisions pertaining to the proper functioning of the networks from the network-generated data. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper proposing new possible research directions.
Article
Full-text available
Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.
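As an analogue of the stream-mining style SAMOA provides, the sketch below uses scikit-learn's incremental API to update a regressor chunk by chunk instead of training on the full dataset at once; the chunk size and toy target are assumptions.

```python
# Illustrative analogue of chunk-based stream learning with constant memory.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(11)
stream = (rng.normal(size=(100, 3)) for _ in range(50))   # 50 chunks of 100 rows

model = SGDRegressor()
for chunk in stream:
    X, y = chunk[:, :2], chunk[:, 2] + chunk[:, 0]        # toy target with signal + noise
    model.partial_fit(X, y)                               # incremental, chunk-wise update
print(model.coef_)                                        # should approach [1, 0]
```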
Article
Full-text available
In this paper, a multiresolution finite-impulse-response (FIR) neural-network-based learning algorithm using the maximal overlap discrete wavelet transform (MODWT) is proposed. The multiresolution learning algorithm employs the analysis framework of wavelet theory, which decomposes a signal into wavelet coefficients and scaling coefficients. The translation-invariant property of the MODWT allows alignment of events in a multiresolution analysis with respect to the original time series and, therefore, preserving the integrity of some transient events. A learning algorithm is also derived for adapting the gain of the activation functions at each level of resolution. The proposed multiresolution FIR neural-network-based learning algorithm is applied to network traffic prediction (real-world aggregate Ethernet traffic data) with comparable results. These results indicate that the generalization ability of the FIR neural network is improved by the proposed multiresolution learning algorithm.
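A hedged sketch of the multiresolution idea follows, using PyWavelets' stationary wavelet transform (a close, translation-invariant relative of the MODWT) to build subband features for an FIR-style tapped-delay linear predictor. The SWT filters here are non-causal, so this is purely illustrative; window sizes and wavelet choice are assumptions.

```python
# Hedged sketch: wavelet subband features feeding a tapped-delay (FIR-style)
# linear predictor of the original series.
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
x = np.sin(np.arange(256) / 8) + 0.2 * rng.normal(size=256)   # toy traffic trace

coeffs = pywt.swt(x, "db2", level=2)          # (cA, cD) pairs, each length 256
subbands = [c for pair in coeffs for c in pair]

lag, horizon = 16, 32
X = np.hstack([np.stack([b[t - lag:t] for t in range(lag, len(x))]) for b in subbands])
y = x[lag:]

fir = LinearRegression().fit(X[:-horizon], y[:-horizon])      # linear taps = FIR filter
pred = fir.predict(X[-horizon:])
print(np.mean((pred - y[-horizon:]) ** 2))    # held-out mean squared error
```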
Article
Full-text available
In most industrial systems, forecasts of external demand or predictions of the future system state are necessary to achieve optimal management and control. Forecasting tasks can be formulated as different classes of problems, such as function approximation or classification, to which neural network techniques can be applied. In this paper, some examples of forecasting applications using neural networks are presented. Through critical analysis of these applications, our aim is to show the applicability of neural network techniques to forecasting problems, their constraints and limits, and also their advantages and drawbacks as compared to other techniques. Keywords: Artificial neural networks, Forecasting, Machine learning, Statistical modeling.
Article
Traffic engineering with traffic prediction is a promising approach to accommodate time-varying traffic without frequent route changes. In this approach, routes are decided so as to avoid congestion on the basis of the predicted traffic. However, if the range of variation, including temporal traffic changes within the next control interval, is not appropriately decided, the route cannot accommodate the shorter-term variation and congestion still occurs. To solve this problem, we propose a prediction procedure that considers both short-term and longer-term future traffic demands. Our method predicts the longer-term traffic variation from the monitored traffic data. We then take account of the short-term traffic variation in order to accommodate prediction uncertainty incurred by temporal traffic changes and prediction errors, using the standard deviation to estimate the range of short-term fluctuation. Through simulations using actual traffic traces on a backbone network of Internet2, we show that traffic engineering using the traffic information predicted by our method can set up routes that accommodate traffic variation for several hours or more with efficient load balancing. As a result, we can reduce the required bandwidth by 18.9% using SARIMA with a trend component compared with existing traffic engineering methods.
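The two-part procedure can be sketched as follows: a seasonal ARIMA model supplies the longer-term forecast, and the standard deviation of recent residuals adds a margin for short-term fluctuation. The model orders, seasonal period, and two-sigma margin are assumptions for illustration.

```python
# Hedged sketch: SARIMA forecast plus a standard-deviation margin
# for short-term variation.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)
t = np.arange(480)
traffic = 50 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, 480)

model = SARIMAX(traffic, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24), trend="ct")
fit = model.fit(disp=False)

horizon = 24
forecast = fit.forecast(steps=horizon)                 # longer-term demand
margin = np.std(fit.resid[-96:])                       # short-term variation estimate
provision = forecast + 2 * margin                      # bandwidth to reserve per step
print(provision[:5])
```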
Conference Paper
Many Internet events exhibit periodic patterns. Such events include the availability of end-hosts, usage of inter-network links for balancing load and the cost of transit, traffic shaping during peak hours, etc. Internet monitoring systems that collect huge amounts of data can leverage periodicity information to improve resource utilization. However, automatic periodicity inference is a non-trivial task, especially when facing measurement “noise”. In this paper we present two methods for assessing the periodicity of network events and inferring their periodic patterns. The first method uses Power Spectral Density for inferring a single dominant period that exists in a signal representing the sampling process. This method is highly robust to noise, but is most useful for single-period processes. Thus, we present a novel method for detecting multiple periods that comprise a single process, using iterative relaxation of the time-domain autocorrelation function. We evaluate these methods using extensive simulations and show their applicability on real Internet measurements of end-host availability and IP address alternations.
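A minimal sketch of the first method: estimate the dominant period of a noisy on/off availability signal from its power spectral density and report the reciprocal of the peak frequency. The sampling setup and signal shape are illustrative assumptions.

```python
# Minimal sketch: dominant-period inference from the power spectral density.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(8)
n, period = 2048, 96                           # e.g. a daily pattern at 15-min samples
signal = ((np.arange(n) % period) < period // 2).astype(float)  # on/off availability
signal += 0.3 * rng.normal(size=n)             # measurement "noise"

freqs, psd = periodogram(signal, fs=1.0)
dominant = freqs[1:][np.argmax(psd[1:])]       # skip the DC bin at index 0
print(f"inferred period: {1 / dominant:.1f} samples (true: {period})")
```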
Article
This article presents three methods to forecast accurately the amount of traffic in TCP/IP based networks: a novel neural network ensemble approach and two important adapted time series methods (ARIMA and Holt-Winters). In order to assess their accuracy, several experiments were held using real-world data from two large Internet service providers. In addition, different time scales (5 min, 1 h and 1 day) and distinct forecasting lookaheads were analysed. The experiments with the neural ensemble achieved the best results for 5 min and hourly data, while the Holt-Winters is the best option for the daily forecasts. This research opens possibilities for the development of more efficient traffic engineering and anomaly detection tools, which will result in financial gains from better network resource management.
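One of the adapted methods, Holt-Winters, is readily sketched with statsmodels' exponential smoothing; the daily seasonal period of 24 for hourly data is an assumption matching the time scales mentioned above.

```python
# Sketch: Holt-Winters (triple exponential smoothing) for hourly traffic
# with a daily seasonal cycle.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(9)
t = np.arange(24 * 21)                               # three weeks of hourly samples
traffic = 100 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, len(t))

fit = ExponentialSmoothing(traffic, trend="add", seasonal="add", seasonal_periods=24).fit()
print(fit.forecast(24)[:6])                          # next day's first hours
```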
Conference Paper
Traffic anomalies such as failures and attacks are commonplace in today's network, and identifying them rapidly and accurately is critical for large network operators. The detection typically treats the traffic as a collection of flows that need to be examined for significant changes in traffic pattern (e.g., volume, number of connections). However, as link speeds and the number of flows increase, keeping per-flow state is either too expensive or too slow. We propose building compact summaries of the traffic data using the notion of sketches. We have designed a variant of the sketch data structure, k-ary sketch, which uses a constant, small amount of memory, and has constant per-record update and reconstruction cost. Its linearity property enables us to summarize traffic at various levels. We then implement a variety of time series forecast models (ARIMA, Holt-Winters, etc.) on top of such summaries and detect significant changes by looking for flows with large forecast errors. We also present heuristics for automatically configuring the model parameters. Using a large amount of real Internet traffic data from an operational tier-1 ISP, we demonstrate that our sketch-based change detection method is highly accurate, and can be implemented at low computation and memory costs. Our preliminary results are promising and hint at the possibility of using our method as a building block for network anomaly detection and traffic measurement.
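A hedged sketch of the k-ary sketch idea: H rows of K counters, constant-cost updates, and an estimate formed from each row's counter minus the row mean; change detection then flags keys whose estimates diverge from a forecast. The multiplicative hashing and table sizes are simplifications of the paper's construction.

```python
# Hedged sketch: a k-ary sketch with constant-cost updates and
# forecast-error-based change detection.
import numpy as np

H, K = 4, 1024                                    # hash rows x counters per row
SEEDS = [0x9E3779B1 * (i + 1) for i in range(H)]  # simplistic multiplicative hashing
table = np.zeros((H, K))

def update(key: int, value: float) -> None:
    """Constant per-record cost: add value to one counter in each row."""
    for i, s in enumerate(SEEDS):
        table[i, (key * s) % K] += value

def estimate(key: int) -> float:
    """Per-row counter minus row mean, rescaled; take the median over rows."""
    vals = [(table[i, (key * s) % K] - table[i].mean()) * K / (K - 1)
            for i, s in enumerate(SEEDS)]
    return float(np.median(vals))

update(42, 100.0)                                 # flow 42 contributes 100 traffic units
forecast = 10.0                                   # stand-in for an ARIMA/Holt-Winters value
error = estimate(42) - forecast
print(f"forecast error for flow 42: {error:.1f}") # a large error flags a candidate anomaly
```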