Article

Viewpoint: self-similarity upsets data traffic assumptions


... According to [16], the random-arrival assumption is incorrect: traffic in computer networks is congested in patterns that recur over specific time intervals. Thus, while it may be useful, the random-arrival assumption may not be entirely accurate for the analysis of computer networks [17]. However, even when packet arrival times can be estimated with adequate precision, network performance evaluation remains far from solved because of the hybrid access modes of random-access protocols. ...
... This is why mesh networks are fault-tolerant: if a network node fails or interference disrupts a communication, the network keeps operating; the data is simply sent along an alternate route [17]. On the other hand, the use of mesh networks brings with it another research topic, routing in ad hoc (infrastructure-less) networks. The first WiMAX deployments were planned mainly around a point-to-multipoint topology. ...
... If the video traffic possesses long-range dependence, then the buffers needed at switches and multiplexers must be larger than those predicted by the traditional (short-range dependent) queueing models, which might create greater delays. Also, in traditional network traffic engineering, it is assumed that linear increases in buffer size will produce nearly exponential decreases in cell loss [Stallings, 1997]. With long-range dependent traffic, this assumption may be false: the decrease in cell loss with an increase in buffer size is far less than expected [Stallings, 1997]. ...
... Also, in traditional network traffic engineering, it is assumed that linear increases in buffer size will produce nearly exponential decreases in cell loss [Stallings, 1997]. With long-range dependent traffic, this assumption may be false: the decrease in cell loss with an increase in buffer size is far less than expected [Stallings, 1997]. Other aspects of network design, like call admission control, quality of service (QoS), and congestion control, will also require rethinking. ...
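The contrast between the two decay regimes can be illustrated numerically. The sketch below is a toy comparison, not Stallings' analysis; the rate, Hurst, and scale constants are arbitrary assumptions. It contrasts the roughly exponential loss decay predicted by short-range dependent queueing models with the much slower Weibull-like decay suggested by asymptotic results for long-range dependent input:

```python
import math

def srd_loss(buffer_size, rate=0.05):
    """Cell-loss estimate under a short-range dependent (Markovian)
    model: roughly exponential decay in buffer size."""
    return math.exp(-rate * buffer_size)

def lrd_loss(buffer_size, hurst=0.8, scale=2.0):
    """Cell-loss estimate under long-range dependent input:
    Weibull-like decay exp(-c * b^(2-2H)), far slower for H near 1."""
    return math.exp(-scale * buffer_size ** (2.0 - 2.0 * hurst))

# Growing the buffer 10x slashes the SRD estimate by many orders of
# magnitude, but buys comparatively little under LRD traffic.
for b in (100, 1000):
    print(f"buffer={b:5d}  SRD={srd_loss(b):.3e}  LRD={lrd_loss(b):.3e}")
```

The point is the asymptotic shape, not the absolute numbers: under these assumed parameters, multiplying the buffer by ten shrinks the SRD loss estimate far more than the LRD one.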
Article
Traditionally, Markovian models have been used to statistically represent variable bit rate (VBR) video traffic. These models exhibit short-range dependence (SRD). However, recent measurements show that VBR video traffic possesses self-similar characteristics, meaning that long-range dependence (LRD) in the traffic stream lasts much longer than traditional models can capture. This paper investigates the relative performance impacts of SRD and LRD in VBR video traffic, by conducting a simulation study using four sets of VBR video data: (1) a real video trace, with both SRD and LRD, (2) a synthetic video trace with only LRD, (3) a synthetic video trace with only SRD, and (4) a synthetic video trace without SRD and LRD. The performance study suggests that video traffic models must capture both SRD and LRD. 1 Introduction Variable bit rate (VBR) video is expected to become one of the major applications used on high-speed networks such as ATM-based B-ISDN [Huang et al., 1995], [Heyman and...
... In 1994, as the fruit of research carried out in the late 1980s and early 1990s, the article [1] entitled "On the self-similar nature of Ethernet traffic" appeared (winner in 1995 of the William J. Bennett Award of the IEEE Communications Society, and in 1996 of the IEEE W. R. G. Baker Prize Award); from that article and that date on, the world of traffic engineering for today's data networks, and of models for its characterization, would never be the same [2] [3]. The old paradigms that treated network traffic behavior as obeying Poisson laws or, more generally, Markovian rules, were placed in a highly questioned position. ...
... This paper focuses on exploring a periodic load-balancing method. The observation that network flow exhibits a wave-like form is discussed in [10]: that is, a wave of heavy flow appears after a long stretch of light flow. ...
... It is a self-adaptive decomposition method, which assumes that the original data may have many different modes of oscillations coexisting at the same time. The main purpose of EMD method is to decompose the original data into a series of oscillatory functions, namely, intrinsic mode functions (IMFs), which contain the local information embedded in the time series [10]. ...
Article
Annual runoff forecasting is one of the most important applications for effective reservoir management. Given the time-varying and non-linear characteristics of river runoff data, a novel hybrid model is proposed to improve the forecasting accuracy. First, the original data of runoff is decomposed into a number of intrinsic mode functions (IMFs) and one residual term using the ensemble empirical mode decomposition (EEMD) method. Then, these sub-series are modeled respectively by radial basis function network (RBFN) model. Finally, the prediction results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff time series. The proposed hybrid model is examined by predicting the annual runoff of Maduwang station in Bahe River, China. The comparison results indicate that the proposed hybrid model can effectively enhance RBFN approach for annual runoff series forecasting accuracy and it is superior to the commonly used model like auto-regressive integrated moving average (ARIMA) and back-propagation network (BPNN).
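The decompose-model-recombine pipeline above can be sketched compactly. The code below is a minimal stand-in, not the paper's method: a moving-average split plays the role of EEMD, and a naive persistence forecast stands in for the per-component RBFN models; only the pipeline shape is the point, and the sample series is invented.

```python
def moving_average(x, w=5):
    # Crude low-pass filter standing in for the EEMD residual/trend.
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def decompose(x):
    # Split the series into a detail term and a smooth term; EEMD
    # would instead yield several IMFs plus one residual, but the
    # key property is the same: the components sum to the original.
    trend = moving_average(x)
    detail = [xi - ti for xi, ti in zip(x, trend)]
    return [detail, trend]

def forecast_component(series):
    # Persistence forecast standing in for a trained RBFN per component.
    return series[-1]

def hybrid_forecast(x):
    # Model each sub-series separately, then sum the predictions.
    return sum(forecast_component(c) for c in decompose(x))

runoff = [5.0, 7.0, 6.0, 9.0, 8.0, 10.0, 9.5, 11.0]  # toy annual series
print(hybrid_forecast(runoff))
```

The additivity of the decomposition is what makes summing the per-component forecasts a legitimate ensemble forecast for the original series.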
... and therefore faster. For Poisson, Markov and Gaussian traffic models the decay is exponential, which is much faster than the decay predicted by self-similar models. The fact that larger buffers are needed for non-Gaussian heavy-traffic conditions explains the observation that, in real network environments, ATM switches often do not meet their specifications, whose derivation is based on non-self-similar or short-tailed models [70]. ...
... In 1994, as the fruit of research carried out in the late 1980s and early 1990s, the article [1] entitled "On the self-similar nature of Ethernet traffic" appeared (winner in 1995 of the William J. Bennett Award of the IEEE Communications Society, and in 1996 of the IEEE W. R. G. Baker Prize Award); from that article and that date onward, the world of network traffic engineering and of models for characterizing data traffic would never be the same [2] [3]. The old paradigms that treated network traffic behavior as obeying Poisson laws or, more generally, Markovian rules, were placed in a highly questionable position. ...
Article
ABSTRACT In the recent past, the revelation of a new paradigm in traffic characterization for high-speed data networks was quite surprising and at the same time complex, owing to the lack of mathematical analysis tools for designing and dimensioning today's high-performance network systems under this new model. Much work is being done to obtain a set of analysis tools that covers both short-range dependent and long-range dependent traffic characteristics. The purpose of this article is to show clearly and objectively the differences between these two groups of traffic models, that is, the classical models and the current ones, from a statistical point of view, as a final and complete compendium of this fascinating topic in traffic modeling.
... It has been observed recently that packet loss and delay are more serious than expected because network traffic is more bursty and exhibits greater variability than previously suspected. This phenomenon has led to the discovery of network traffic's self-similar, or fractal, characteristic [6]. A covariance-stationary process X(t) is called self-similar if X(t) − X(0) and r^H (X(u) − X(0)) are identical in distribution, where the time t is rescaled in the ratio r, i.e., u = t/r. ...
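This scaling definition can be checked numerically for the simplest case, ordinary Brownian motion, where H = 1/2. The sketch below is an illustration of the definition itself, not of network traffic; horizon, ratio, and sample counts are arbitrary. It compares the variance of X(t) − X(0) against that of r^H (X(t/r) − X(0)):

```python
import random
import statistics

random.seed(1)

H, r = 0.5, 4          # Brownian motion has Hurst parameter H = 1/2
t, n = 400, 2000       # horizon in steps; number of sample paths

def endpoint(steps):
    # X(steps) - X(0) for a Gaussian random walk (discrete Brownian motion).
    return sum(random.gauss(0.0, 1.0) for _ in range(steps))

var_full = statistics.variance(endpoint(t) for _ in range(n))
var_rescaled = statistics.variance(r ** H * endpoint(t // r) for _ in range(n))

# Self-similarity predicts the two variances agree (both close to t here).
print(var_full, var_rescaled)
```

For a genuinely long-range dependent process one would repeat the experiment with fractional Brownian motion and H > 1/2; the variance-matching check is the same.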
Conference Paper
This paper presents a new self-similar traffic model derived from the arrival processes of an M/G/∞ queue. It has a structure similar to that of a fractional ARIMA, with a driving process of fBm (fractional Brownian motion). The coefficients of the fBm are derived from the Pareto distribution of the active periods of the arrival process. When applied to a single server with self-similar input, the model yields an explicit buffer level equation which matches Norros’ storage model, so the method can also serve as a verification of Norros’ assumptions. The effectiveness of the proposed model has been verified by some practical examples.
... File transfer and email). The objective of these traffic models is to create the superposition of bursts from individual mobile VC's, as occurs in practice [15]. ...
Conference Paper
The performance of Wireless Asynchronous Transfer Mode (WATM) networks is influenced by both cell level and call level issues. These can introduce delay and loss into the services that use the WATM Access Points (APs). At the cell or burst level, buffering and scheduling schemes in the APs, and the instantaneous load applied to the radio cells, can have an impact on the Quality of Service (QoS) offered. At the call level, the number of calls blocked at setup or handoff contributes to the Grade of Service (GoS) offered. The GoS and QoS must be considered together when assessing the performance of multimedia services in a WATM network. A novel modelling approach is presented here, whereby an integrated call level-cell level model allows realistic traffic patterns to be generated and applied at both levels. This approach permits analysis of a WATM AP architecture in terms of delay (at the cell level) and blocked calls (at the call level). Results presented show the performance of different buffering techniques in the WATM APs. Additionally, call level performance parameters allow the GoS offered to clients in the radio picocells to be adjusted across a range of traffic classes in order to optimise the QoS for time-sensitive applications.
... The applicability of these to the multimedia Internet world is now being questioned. William Stallings (Stallings, 1997) describes the phenomenon of self-similarity and how it affects buffers and links. The data traffic is not only burstier than voice traffic, but the bursts come in clusters. ...
Conference Paper
Networks are evolving rapidly into huge, omnipresent, multiservice entities. They are connected worldwide into an Internet that has many different administrations, purposes, resource owners, and users. As the network grows, design parameters are exceeded and new vulnerabilities are introduced. Network security solutions must accommodate enormous changes in the network itself, in the network security requirements, and in the mechanisms and constraints that drive appropriate security mechanisms. As the network serves a larger and more diverse group of users, multiple, flexible security approaches will be necessary to meet their requirements.
... For example, the exponentially distributed ON-OFF traffic appears smooth if visualized on a large time scale, which is not the case for the observed data network traffic. The most important consequence of self-similarity is that the long tail in distribution results in a larger than expected delay and blocking, which necessitates larger buffers for switching nodes or more radios for base stations [16]. Self-similar traffic exhibits long-range dependence (known as the "Joseph effect") which, as proved by Willinger et al. [10], can be constructed by aggregating sources with infinite variance (having the "Noah effect"). ...
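The Willinger et al. construction can be sketched directly: superpose many ON/OFF sources whose ON periods are Pareto-distributed with infinite variance (the "Noah effect"), and the aggregate exhibits long-range dependence (the "Joseph effect"). The toy generator below uses arbitrary, purely illustrative parameters:

```python
import random

random.seed(7)

def pareto(alpha=1.5, xm=1.0):
    # Heavy-tailed ON duration: infinite variance for 1 < alpha < 2,
    # i.e. the "Noah effect" at the level of a single source.
    return xm / (random.random() ** (1.0 / alpha))

def onoff_source(horizon):
    # One source alternating Pareto ON periods (emitting 1 unit per
    # slot) with exponential OFF periods; returns a per-slot trace.
    trace, t, on = [0] * horizon, 0, True
    while t < horizon:
        dur = int(pareto()) + 1 if on else int(random.expovariate(0.5)) + 1
        if on:
            for i in range(t, min(t + dur, horizon)):
                trace[i] = 1
        t, on = t + dur, not on
    return trace

def aggregate(n_sources, horizon):
    # Superposing many such sources yields long-range dependent
    # ("Joseph effect") aggregate traffic, per Willinger et al.
    total = [0] * horizon
    for _ in range(n_sources):
        total = [a + b for a, b in zip(total, onoff_source(horizon))]
    return total

traffic = aggregate(50, 1000)
```

Unlike the exponentially distributed ON-OFF case, this aggregate stays visibly bursty even when rebinned onto coarser time scales.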
Article
The advanced cellular Internet service (ACIS) is targeted for applications such as Web browsing with a peak downlink transmission rate on the order of 1-2 Mbits/s using a wide-area cellular infrastructure. In order to provide bandwidth on demand using scarce radio spectrum, the medium-access control (MAC) protocol must: 1) handle dynamic and diverse traffic with high throughput, and 2) efficiently reuse limited spectrum with high peak rates and good quality. Most of the existing approaches do not sufficiently address the second aspect. This paper proposes a dynamic packet assignment (DPA) scheme which, without coordinating base stations, allocates spectrum on demand with no collisions and low interference to provide high downlink throughput. Interference sensing and priority ordering are employed to reduce interference probability. A staggered frame assignment schedule is also proposed to prevent adjacent base stations from allocating the same channel to multiple mobiles at the same time. Simulation results based on a packet data traffic model derived from wide-area network traffic statistics, which exhibit a “self-similar” property when aggregating multiple sources, confirm that this method is able to reuse spectrum efficiently in a large cellular system having many users with short active periods. Distributed iterative power control further enhances spectrum efficiency such that the same channel can be simultaneously reused in every base station
Article
Future switches and routers must not only forward packets at high-speed, but also provide support for a diverse range of traffic services and at the same time, maintain the highest possible link utilization. Effects such as shaping, connection interference, and cell clumping, which can be induced by various buffering and scheduling schemes in switches, can directly influence the quality of service (QoS) provided by the network. A network of multiple asynchronous transfer mode (ATM) switches has been modelled to analyze the accumulated effects of switch delays and cell jitter. Degradation in the QoS across several switches and the relative amounts of degradation contributed by switch types, are obtained from the simulations. The results presented show that this analysis may be used to achieve better understanding of cell level QoS issues within a large network. Network and switch complexity may be reduced by exploiting information regarding the traffic mixture, switch type, and network topology. Such an approach allows architectures to be redesigned in an intelligent manner with regard to QoS issues and the functional classification of the switches within the network.
Chapter
Full-text available
Ethernet is one of the most popular LAN technologies. The capacities of Ethernet have steadily increased to Gbps and it is also being studied for MAN implementation. With the discovery that real network traffic is self-similar and long-range dependent, new models are needed for performance evaluation of these networks. One of the most important methods of modelling self-similar traffic is pseudo self-similar processes. The foundations are based on the theory of decomposability, which was developed approximately 20 years ago. Many researchers have revisited this theory recently and it is one of the building blocks for self-similar models derived from short-range dependent processes. In this paper we review LANs, self-similarity, and several modelling methods applied to LAN modelling, and focus on pseudo self-similar models. Keywords: Ethernet, self-similarity, decomposability, pseudo self-similar processes
Article
Full-text available
Tremendous advances in technology have made possible Giga- and Terabit networks today. Similar advances need to be made in the management and control of these networks if these technologies are to be successfully accepted in the market place. Although years of research have been expended at designing control mechanisms necessary for fair resource allocation as well as guaranteeing Quality of Service, the discovery of the self-similar nature of traffic flows in all packet networks and services, irrespective of topology, technology or protocols, leads one to wonder whether these control mechanisms are applicable in the real world. In an attempt to answer this question we have designed network simulators consisting of realistic client/server interactions over various protocol stacks and network topologies. Using this testbed we present some preliminary results which show that simple flow control mechanisms and bounded resources cannot alter the heavy-tail nature of the offered traffic. We also discuss methods by which application level models can be designed and their impacts on network performance can be studied.
Conference Paper
From a pure signal point of view, jitter can be defined as interference on an analog line caused by variation of the signal from its reference timing slot. The same effect can be experienced within an ATM switch buffer at the cell level, because many traffic streams are competing to be served. Therefore, a theoretical approach is presented for a single quality of service (QoS) constant bit rate (CBR) cell stream multiplexed first with elastic short-range dependent (SRD) background traffic and thereafter with long-range dependent (LRD) traffic. Results show that the jitter experienced by the CBR cell stream is extremely high when it is multiplexed with LRD traffic; this is not the case when SRD traffic is taken into account. Furthermore, this work shows a very interesting result: for low Hurst parameter values (H < 0.70), a sort of cross-effect boundary is visible, meaning that there could exist a threshold below which self-similarity has no adverse effect on the network.
Conference Paper
The main objective of this paper is to develop a tractable model for self-similar traffic and to apply it to ATM networks. We develop a new traffic model derived from arrival processes of the type M/G/∞. This modeling method not only provides an explicit and analytical expression for the self-similar traffic processes, but also sets up a connection between the two most popular self-similar processes. The model has a structure similar to that of a fractional ARIMA, with a driving process of fBm (fractional Brownian motion), but the coefficients of the fBm are derived from the Pareto distribution of the active periods of the arrival process. We also derive an explicit buffer level equation based on the proposed traffic model, which matches Norros' (1994) storage model, so this method can also serve as a verification of Norros' assumptions. The queueing behavior of a single server with self-similar input can be analytically investigated with the proposed equation. The effectiveness of these methods has been demonstrated by some practical applications
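The underlying M/G/∞ arrival process is straightforward to simulate: Poisson arrivals per time slot, each customer holding a server for a Pareto-distributed number of slots, with the per-slot count of busy servers taken as the traffic process. The sketch below follows that recipe with arbitrary, assumed parameters; the heavy-tailed holding times are what induce the long-range dependence:

```python
import heapq
import math
import random

random.seed(3)

def poisson(lam):
    # Knuth's method for a Poisson-distributed arrival count.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def pareto_slots(alpha=1.5):
    # Pareto holding time (infinite variance for alpha < 2),
    # rounded up to a whole number of slots.
    return int(1.0 / (random.random() ** (1.0 / alpha))) + 1

def mg_infinity_trace(horizon, lam=5.0):
    # Per-slot number of busy servers in an M/G/infinity queue:
    # this count sequence is the self-similar traffic process.
    departures, trace = [], []   # min-heap of pending departure slots
    for t in range(horizon):
        while departures and departures[0] <= t:
            heapq.heappop(departures)
        for _ in range(poisson(lam)):
            heapq.heappush(departures, t + pareto_slots())
        trace.append(len(departures))
    return trace

trace = mg_infinity_trace(1000)
```

Feeding such a trace into a single-server queue simulation is the usual way to study the buffer behavior that the paper treats analytically.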
Conference Paper
Providing guaranteed quality of service (QoS) in cell and packet based networks places additional demands upon the design of switch fabrics. This paper considers the application of various fundamental buffering approaches in ATM, for support of predictable QoS. The paper identifies the functional requirements of the switch fabric at the ATM layer, and make recommendations for an improved architecture that considers QoS at the switch fabric level. The relative merits of each buffering scheme has been determined by applying realistic broadband traffic loads involving CBR, VBR and UBR traffic to simulated models of the buffer strategies. Simulation results are presented, along with a performance analysis for each scheme. The resultant architecture allows switches to be realised which have reliable and predictable parameters in the ATM layer and which guarantee QoS as well as reducing connection admission control complexity
Conference Paper
Full-text available
Network Dispatcher (ND) is a software tool that “routes” TCP connections to multiple TCP servers that share their workload. It exports a set of virtual IP addresses that are concealed and shared by the servers. It implements a novel dynamic load-sharing algorithm for allocation of TCP connections among servers according to their real-time load and responsiveness. ND forwards packets to the servers without performing any TCP/IP header translations, consequently outgoing server-to-client packets are not handled, and can follow a separate network route to the clients. Its allocation method was proven to be efficient in live tests, supporting Internet sites that served millions of TCP connections per hour. This paper describes the load management features of ND
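The shape of such a weight-driven allocator can be sketched. The code below is a hypothetical illustration, not ND's actual algorithm: the load/responsiveness metric, the inverse scoring, and the deficit-based pick are invented stand-ins for the paper's dynamic scheme.

```python
def compute_weights(servers):
    # Toy weighting: score each server by the inverse of a combined
    # load/responsiveness metric, then normalize to fractions.
    scores = {name: 1.0 / (load + rtt) for name, (load, rtt) in servers.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

def pick_server(weights, counts):
    # Deterministic weighted selection: choose the server whose served
    # fraction lags its target weight the most (smooth weighted round-robin).
    total = sum(counts.values()) or 1
    return max(weights, key=lambda name: weights[name] - counts[name] / total)

# Hypothetical (load, response-time) measurements per server.
servers = {"A": (0.2, 10.0), "B": (0.5, 30.0), "C": (0.9, 80.0)}
weights = compute_weights(servers)
counts = {name: 0 for name in servers}
for _ in range(1000):                 # route 1000 connection requests
    counts[pick_server(weights, counts)] += 1
print(counts)
```

Over many requests the connection counts track the weights, so the lightly loaded, most responsive server receives the largest share.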
Conference Paper
The dynamic evolution of ecological systems in which predators and prey compete for survival has been investigated by applying suitable mathematical models. This kind of mathematical framework has been shown to be suited to describing the evolution of economic systems as well, where instead of predators and prey there are consumers and resources. We believe that this kind of system, called a dynamic system, could be usefully applied in an informatics context, for example to model the dynamic interactions of client/server systems, such as Internet users and Web servers. We present the general mathematical model, often referred to in the biological and economics literature, and show how to apply dynamic systems theory to model client/server interactions under different system load conditions. Web clients compete to obtain some URLs from the server, and the efficiency of the server decreases as the number of clients increases, in a way similar to predator and prey populations. The feasibility of this approach is supported by experimental results.
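The analogy can be made concrete with the classical Lotka-Volterra equations, treating free server capacity as "prey" and active clients as "predators". The forward-Euler sketch below uses invented coefficients purely for illustration; it is not the paper's calibrated model:

```python
def simulate(capacity, clients, steps=3000, dt=0.01,
             a=1.0, b=0.02, c=1.0, d=0.01):
    # Lotka-Volterra dynamics: capacity regrows (rate a) and is consumed
    # by clients (rate b); clients grow on available capacity (rate d)
    # and back off without it (rate c). Forward-Euler integration.
    history = [(capacity, clients)]
    for _ in range(steps):
        dcap = (a - b * clients) * capacity
        dcli = (d * capacity - c) * clients
        capacity += dcap * dt
        clients += dcli * dt
        history.append((capacity, clients))
    return history

# Start off-equilibrium (equilibrium here: capacity = c/d = 100,
# clients = a/b = 50); both populations then cycle around it.
history = simulate(capacity=120.0, clients=40.0)
```

The characteristic predator-prey signature is the sustained oscillation: a surge in clients depresses available capacity, which in turn thins out the clients and lets capacity recover.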
Conference Paper
Full-text available
ShockAbsorber is a software router of TCP connections that supports load sharing across multiple TCP servers that share a set of virtual IP addresses. It consists of the executor, an O/S kernel extension that supports fast IP packet forwarding, and a user level manager process that controls it. The manager implements a novel dynamic load-sharing algorithm for allocation of TCP connections among servers according to their real-time load and responsiveness. This algorithm produces weights that are used by the executor to quickly select a server for each new connection request. The executor forwards client TCP packets to the servers without performing any TCP/IP header translations. Outgoing server-to-client packets are not handled by ShockAbsorber and can follow a separate network route to the clients. Depending on the workload traffic, the performance benefit of this half-connection method can be significant. Prototypes of ShockAbsorber were used to scale up several large and high-load Internet sites serving millions of TCP connections per hour
Conference Paper
Advanced cellular Internet service (ACIS) is targeted for applications such as Web browsing with a peak downlink rate of the order of 1-2 Mb/s using wide-area cellular infrastructure. In order to provide bandwidth on demand using scarce spectrum, the medium access control (MAC) protocol must: (1) handle dynamic and diverse traffic with high throughput and (2) efficiently reuse limited spectrum with high peak rates and good quality. Most of the existing approaches do not sufficiently address the second aspect. This paper proposes a dynamic packet assignment (DPA) scheme which, without coordinating base stations, allocates spectrum on demand with no collisions and low interference to provide high downlink throughput. Interference sensing and priority ordering are employed to reduce the interference probability. A staggered frame assignment schedule is also proposed to prevent adjacent base stations from allocating the same channel to multiple mobiles at the same time. Simulation results based on a packet data traffic model derived from wide-area network traffic statistics, which exhibit a “self-similar” property when aggregating multiple sources, confirm that this method is able to reuse spectrum efficiently in a large cellular system having many users with short active periods. Distributed iterative power control further enhances spectrum efficiency such that the same channel can be reused in every base station
Conference Paper
Asynchronous Transfer Mode (ATM) is a high-speed network technology that transmits various types of information, such as voice, video, image, and data, across networks. In an ATM network the basic data units, called cells, are routed through switched or permanent virtual circuits (virtual channels). A Smart Permanent Virtual Circuit is a connection that looks like a Permanent Virtual Circuit at the local and remote endpoints, with a Switched Virtual Circuit in the middle. If a link carrying a Smart Permanent Virtual Circuit goes down and there is an alternate route, the network automatically reroutes the Smart Permanent Virtual Circuit around the link. As a result of the rerouting, the network may not be able to deliver the guaranteed quality of service as it was negotiated, and may have to change the quality of service parameters negotiated for other connections. The objective of this paper is to apply Distributed Artificial Intelligence (DAI) methodologies, “intelligent agents”, to ATM network management. The paper presents a search algorithm that helps the agents learn from previous interactions and experience. Agents can evaluate alternate paths in order to maintain as many connections as possible with the quality of service guaranteed originally.
Article
Teletraffic engineers provide models allowing communications networks to be planned and systems to be designed to meet the performance needs of users within a reasonable cost. The successful modeler combines analytical or simulation skills with a deep understanding of the technology. In the emerging information networking environment comprising new technologies such as ATM, Internet, wireless, etc., and new services such as video, multimedia, data and personal communications services, the old paradigms of circuit-switched calls and Erlang distributions have been severely challenged. The confluence of the shifts in technologies and services along with the convergence of computing, telecommunications, consumer electronics, and electronic media industries, and the shift from a monopolistic to competitive business paradigm, has created a tremendously rich lode of fundamental problems that need to be addressed by teletraffic engineers. In this article the author describes the historical role of the teletraffic engineer, reviews several of the major paradigm shifts, and discusses some of the challenges facing the teletraffic community with an emphasis on modeling wireless communications systems