Article

Internet Growth: Is There a 'Moore's Law' for Data Traffic?

Authors: K. G. Coffman, A. M. Odlyzko

Abstract

Internet traffic is approximately doubling each year. This growth rate applies not only to the entire Internet, but to a large range of individual institutions. For a few places we have records going back several years that exhibit this regular rate of growth. Even when there are no obvious bottlenecks, traffic tends not to grow much faster. This reflects complicated interactions of technology, economics, and sociology, similar to those that have produced "Moore's Law" in semiconductors. A doubling of traffic each year represents extremely fast growth, much faster than the increases in other communication services. If it continues, data traffic will surpass voice traffic around the year 2002. However, this rate of growth is slower than the frequently heard claims of a doubling of traffic every three or four months. Such spectacular growth rates apparently did prevail over a two-year period 1995-6. Ever since, though, growth appears to have reverted to the Internet's historical pattern...
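To put these rates on a common footing, the following back-of-the-envelope comparison (not part of the original abstract) compounds each claim over a two-year horizon:

\[ 2^{24/12} = 4\times \quad \text{(doubling once a year)} \]
\[ 2^{24/4} = 64\times \quad \text{to} \quad 2^{24/3} = 256\times \quad \text{(doubling every 3 to 4 months)} \]

The latter range brackets the roughly 100-fold explosion attributed to 1995-1996, while annual doubling compounds to only about a 1000-fold increase per decade.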


... The use of web-based applications has resulted in a rapid increase in Internet traffic [1]. Efficient utilization of network resources, such as network bandwidth, is essential to deal with this high volume of traffic. ...
... The cost function of Fortz and Thorup was formally defined as: minimize $\Phi = \sum_{a \in A} \Phi_a(l_a)$ (1), subject to the constraints: ...
... Figure 1 shows a topology with weights assigned to each arc. These weights are in the range [1,20]. A solution for this topology can be (18,1,7,15,3,17,14,19,13,18,4,16,16). ...
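As an illustration of the cost function cited above, the following minimal Python sketch implements the piecewise-linear link cost commonly attributed to Fortz and Thorup, assuming the standard utilization breakpoints (1/3, 2/3, 9/10, 1, 11/10) and slopes (1, 3, 10, 70, 500, 5000); the example loads and capacities are hypothetical.

```python
# Piecewise-linear link cost phi_a(l_a) in the style of Fortz and Thorup,
# expressed here over normalized utilization u = l_a / c_a (a simplification).
def link_cost(load, capacity):
    u = load / capacity
    # (upper bound on utilization, slope) pairs; the last segment is open-ended.
    segments = [(1/3, 1), (2/3, 3), (9/10, 10), (1.0, 70), (11/10, 500), (float("inf"), 5000)]
    prev, prev_cost = 0.0, 0.0
    for upper, slope in segments:
        if u <= upper:
            return prev_cost + slope * (u - prev)
        prev_cost += slope * (upper - prev)
        prev = upper

def total_cost(loads, capacities):
    # Phi = sum over arcs a of phi_a(l_a), the objective minimized in (1).
    return sum(link_cost(l, c) for l, c in zip(loads, capacities))

# Hypothetical example: three arcs with unit capacity.
print(total_cost([0.2, 0.5, 0.95], [1.0, 1.0, 1.0]))
```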
Article
Full-text available
The open shortest path first (OSPF) routing protocol is a well-known approach for routing packets from a source node to a destination node. The protocol assigns weights (or costs) to the links of a network. These weights are used to determine the shortest paths from all source nodes to all destination nodes. Assignment of these weights to the links is classified as an NP-hard problem. The aim behind the solution to the OSPF weight setting problem is to obtain optimized routing paths that enhance the utilization of the network. This paper formulates the above problem as a multi-objective optimization problem. The optimization metrics are maximum utilization, number of congested links, and number of unused links. These metrics are conflicting in nature, which motivates the use of fuzzy logic as a tool to aggregate them into a scalar cost function. This scalar cost function is then optimized using a fuzzy particle swarm optimization (FPSO) algorithm developed in this paper. A modified variant of the proposed PSO, namely, fuzzy evolutionary PSO (FEPSO), is also developed. FEPSO incorporates the characteristics of the simulated evolution heuristic into FPSO. Experimentation is done using 12 test cases reported in the literature. These test cases consist of 50 and 100 nodes, with the number of arcs ranging from 148 to 503. Empirical results have been obtained and analyzed for different values of FPSO parameters. Results also suggest that FEPSO outperformed FPSO in terms of quality of solution, achieving improvements between 7% and 31%. Furthermore, comparison of FEPSO with various other algorithms, such as Pareto-dominance PSO, weighted aggregation PSO, NSGA-II, simulated evolution, and simulated annealing, revealed that FEPSO performed better than all of them, achieving the best results for two or all three objectives.
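To make the aggregation step concrete, here is a generic illustration (not the paper's exact membership functions or fuzzy operator) of how the three conflicting metrics could be folded into one scalar cost; all normalization ranges below are assumptions.

```python
# Generic fuzzy aggregation of the three OSPF weight-setting metrics into a scalar cost.
def membership(value, worst, best):
    """Linear membership: 1.0 at 'best', 0.0 at 'worst' (assumed shape)."""
    m = (worst - value) / (worst - best)
    return max(0.0, min(1.0, m))

def fuzzy_cost(max_util, congested_links, unused_links, num_links):
    mu_util = membership(max_util, worst=1.0, best=0.0)
    mu_cong = membership(congested_links, worst=num_links, best=0)
    mu_unused = membership(unused_links, worst=num_links, best=0)
    # Simple average of memberships as a stand-in for the paper's fuzzy operator;
    # higher is better, so the swarm would maximize this value.
    return (mu_util + mu_cong + mu_unused) / 3.0

# Hypothetical candidate solution evaluated on a 20-link topology.
print(fuzzy_cost(max_util=0.7, congested_links=2, unused_links=3, num_links=20))
```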
... European, Asian and Australian users, a large percentage of their Internet traffic is actually with US networks. According to an estimate [52], 60% of Telstra's (the dominant Australian provider) Internet traffic had been with US provider networks. Therefore, it is reasonable to assume that a large portion of end-to-end traffic traverses multiple provider networks. ...
... In studies conducted by Coffman and Odlyzko [9,52], it was shown that in 1997 the traffic carried by private lines and the Internet was 3,000-5,000 TB/month. "Killer" applications: network engineering and deployment take time. History tells us that user traffic triggered by some "killer" applications could potentially "flood" the network before the providers can respond and adjust their network capacity. ...
... History tells us that user traffic triggered by some "killer" applications could potentially "flood" the network before the providers can respond and adjust their network capacity. For instance, due to Web browsing, Internet traffic showed abnormal growth [52], doubling every three or four months in 1995 and 1996, which represents a roughly 100-fold traffic explosion over a two-year span. Another traffic explosion took place more recently. ...
Article
Resource reservation protocols were originally designed to signal end hosts and network routers to provide quality of service to individual real-time flows. More recently, Internet Service Providers (ISPs) have been using the same signaling mechanisms to set up provider-level Virtual Private Networks (VPNs) in the form of MPLS Label Switched Paths (LSPs). It is likely that the need for reservation signaling protocols will increase, and these protocols will eventually become an indispensable part of Internet service. Therefore, reservation signaling must scale well with the rapidly growing size of the Internet. Over the years, there have been debates over whether or not there is a need for resource reservation. Some people have been advocating over-provisioning as the means to solve link congestion and end-to-end delay problems. The over-provisioning argument is largely driven by the expectation that the bandwidth price will drop drastically. From our investigation, however, we found that many end users have not been benefiting from over-provisioning: the current Internet has bandwidth bottleneck links that can cause long-lasting congestion and delay. At the same time, leased-line cost has not been reduced sufficiently in a timely manner for many network providers to deploy high-speed links everywhere.
... • The Internet has experienced record growth [6,7], measured in the number of connected nodes and also in terms of popularity, i.e., users and offered services. We consider that the future development of the Internet will continue to be driven mainly by the three aspects mentioned: portable computers, smart devices and wireless networks. ...
... VNAT does not include a mechanism allowing the two hosts to find each other again in case of simultaneous movement. There is also no address-conflict resolution mechanism for the case where an IP address is reused and two different connections sharing a common endpoint come into conflict. The authors do not propose a mechanism for negotiating another virtual address, and simply suggest interrupting one of the two connections. ...
... However, this standard procedure is specified in the IEEE 802.11f recommendation [26], although it is not present in all implementations. (And several other aliases, among them drakkar.imag.fr.) ...
Article
This thesis was motivated by the new ubiquitous context of ambient wireless networks. The Internet protocols were designed 30 years ago without taking into consideration the nomadic use of networks and do not respond to the new constraints of mobility. We aimed to conceive mechanisms to make mobility transparent to applications and users. A part of our work focused on improving the handoff delay of the 802.11 physical layer to about 20 ms. Host mobility and network changes often require the reconfiguration of several parameters at the IP layer. For local mobility we chose to allow hosts to keep their IP addresses unchanged and to propagate host routes in the local domain. On the contrary, for global mobility, optimal routing in the Internet forces us to conceive a solution where mobile hosts change their IP addresses according to the subnet they are connected to. Our solution is based on the "end-to-end" paradigm, where the two hosts involved in a connection are the only ones responsible for transferring the connection to the new attachment points. It uses interception of calls to the socket library and of DNS requests, as well as local address translation, to virtualize IP addresses into stable host identifiers that are presented to upper layers.
... Every IP routing protocol has a cost associated with the links in the network. In MPLS devices that support IP forwarding, the IP routing tables are used to build the IP forwarding tables, also called the forwarding information base (FIB) [6,7]. After the IP routing tables have been built, MPLS labels are assigned to individual entries in the IP routing table (individual IP prefixes) and propagated to adjacent MPLS devices through a Label Distribution Protocol (LDP). ...
... MPLS-TE allows for a TE scheme where the head-end router of a label switched path (LSP) can calculate the most efficient route through the network toward the tail-end router of the LSP [4,6,10]. TE consists of three main steps: measure, model and control. ...
... The operator measures the physical layout of the network, which is necessary for tasks like capacity planning and network visualization, followed by estimation of possible link settings, knowing how much an IGP setting affects the traffic flow. An IGP is a routing protocol used within a group of IP networks under the control of one entity, providing a common routing policy for that part of the Internet [6,11]. The Cisco IOS IP Service Level Agreements (SLAs) Internet Control Message Protocol (ICMP) echo operation is used to monitor end-to-end response time between a Cisco router and devices using IP. ...
Article
Full-text available
The need for improved network performance to provide reliable services in the face of growing demand on enterprise networks and Internet services across all sectors of the economy has become paramount. Latency and packet loss as quality of service (QoS) metrics are issues of concern, since different multimedia applications, voice and data packets have to be delivered to end systems over long distances. This study investigated the technology behind the delivery of the packets by comparing the performance of IP, MPLS and MPLS-TE on the same congested WAN design. The results showed that MPLS-TE had the least latency and barely any packet loss.
... demands [1,18]. Such services are specialized in managing two main actors: (1) candidates, people with specific characteristics and competencies looking for new employment opportunities; and (2) recruiters, companies that offer job vacancies and look for the best professionals in the market. In general, the goal is to identify the ideal candidates for each job, using search and recommendation algorithms that explore candidates' profiles (i.e., curriculum) and the skills required by different job vacancies [3]. ...
Conference Paper
Online recruitment services have attracted an increasing number of candidates and recruiters who are looking for better job opportunities and the best professionals in their respective areas. These services, through search and recommendation systems, explore candidate and job profiles to identify the ideal candidates for each job vacancy. There are many challenges in this scenario, such as reciprocal matching between vacancies and candidates, temporal dynamics (the candidate/vacancy relationship varies over time) and imbalances between demand and supply across areas. Modeling the preferences and behavior of candidates and recruiters is an essential task through which improvements can be proposed by these services to mitigate these challenges. We present in this work a methodology that aims to help answer questions about users' preferences and behavior, extracting information that leads to improvements in existing functionality and the creation of new features. We applied our methodology to actual data and questions, which were provided by Catho, the leading Latin American company in this segment. In the analysis of the results, we present opportunities for improvement in online recruitment services, such as the creation of a tool to help register job vacancies and resumes.
... These predictions show a rapid and continuous increase in Internet traffic, to the point that some see an analogy with Moore's law [3]. Moreover, these two graphs show that the predictions seem fairly reliable: in 2011, 6.6 zettabytes were predicted for 2016, against 6 zettabytes actually exchanged in 2016, a difference of about 10%. ...
... with $\varepsilon_{Si}$ the permittivity of Si, $q$ the elementary charge, $N$ the dopant concentration and $kT$ the product of the Boltzmann constant and the temperature. For doping levels between $10^{17}$ at/cm$^3$ and $10^{18}$ at/cm$^3$ ...
Thesis
The continuous increase in Internet traffic puts pressure on digital data centers. Silicon photonics is an attractive solution for realizing the interconnections. Modulation in silicon relies on the plasma dispersion effect. It is mainly achieved by integrating a PN junction within a waveguide. However, this approach is relatively inefficient. In this work, two alternative solutions are studied to improve modulation efficiency. On the one hand, I investigated the use of a capacitive electronic structure. On the other hand, strained SiGe on silicon exhibits larger electro-optic coefficients than silicon. Numerical simulations are carried out to estimate the performance achievable with these approaches. The simulations rely on a perturbative treatment of the problem, under the assumption that free charges have little influence on the propagation of the optical mode. The results obtained show that capacitive modulators outperform PN modulators for 100 Gb/s applications. Moreover, these results show that strained SiGe improves performance for Ge contents below 0.25. Capacitive modulators are then fabricated in the STMicroelectronics photonics platform. The fabrication remains as close as possible to the existing fabrication processes. Particular attention is paid to the SiGe deposition and to the integration of a polysilicon level. To this end, additional fabrication steps are introduced in the STMicroelectronics and CEA-Leti foundries. Finally, the fabricated components are characterized. Efficiencies of 37°/mm at 2 V are measured, with electro-optic losses reaching 1.9 dB/mm. Electro-optic bandwidths above 15 GHz are measured for the fastest components, corresponding to the components with polysilicon annealed at 1050°C.
... This resulted in the terms Big Data and Big Data Analytics, certainly a long-term development as stated by Coffman and Odlyzko [2]. This development opens new opportunities for business and society and drives innovation, such as for our envisioned Smart Cities [3,4]. However, this development also poses a challenge for data scientists and real nightmares for network security analysts. ...
Chapter
In order to make our envisioned Smart Cities become reality one day, it is essential that confidentiality, integrity and availability of all assets within the digital ecosystem can be ensured at any time. The challenge of securing such complex and highly critical systems is, however, tremendous. Today, network services are confronted with a growing amount and diversity of attacks and, at the same time, detecting them is getting more complex. This is mainly a result of more sophisticated attacks and a consequence of the more ubiquitous and overall more complex IT ecosystem. The resulting rapidly increasing network traffic makes it extremely hard to detect and prevent attacks in traditional ways. This paper proposes Security Information Management (SIM) enhancements considering Big Data analysis principles. In the context of cyber security, the blueprint and implementation presented can be adopted in organisations or Smart City contexts. After devising a blueprint for Big Data enhanced SIM based on the latest research, the system architecture and the resulting implementation are presented. The blueprint and implementation have been field-tested in a real-world, large-scale SIM environment and evaluated with real network security logs. Our research is timely, since the application of Big Data principles to SIM environments has rarely been investigated so far, and there exists the need for a general concept of enhancement possibilities.
... Process enhancement aims to extend an existing process model by using information of the actual process that is recorded in event logs. Two things have made process mining one of the hottest topics in workflow technology (Li et al. 2016): one is the advancement of multicore and parallel technology that leads to a digital universe (Coffman and Odlyzko 2002) and the other is the growing digital universe that is well-aligned with processes to record and analyze more events (Cook and Wolf 1998). Recently, process mining has been widely used in several domains, including web log mining (Das and Turkoglu 2009), software development (Caldeira and e Abreu 2016), health care (Mans et al. 2013b), and education (Bannert et al. 2014). ...
Article
There has been a long debate on how to measure design productivity. Compared to construction productivity, design productivity is much more difficult to measure because design is an iterative and innovative process. Today, with the rapid extension of building information modeling (BIM) applications, tremendous volumes of design logs have been generated by design software systems, such as Autodesk Revit. A systematic approach composed of a detailed step-by-step procedure is developed to deeply mine design logs in order to monitor and measure the productivity of the design process. A pattern retrieval algorithm is proposed to identify the most frequent design sequential patterns in building design projects. A novel metric for measuring design productivity based on the discovered sequential patterns is put forward. A large data set of design logs, provided by a large international design firm, is used as a case study to demonstrate the feasibility and applicability of the developed approach. Results indicate that: (1) typically, each designer executes specific commands more than any other commands; for instance, it is shown for a designer that the accumulative frequency of three commands can reach up to 56.15% of the entire number of commands executed by the designer; (2) a particular sequential pattern of design commands ("pick lines" → "trim/extend two lines or walls to make a corner" → "finish sketch") has been executed 2,219 times, accounting for 46.75% of instances associated with the top five discovered sequential patterns of design commands; (3) the identified sequential patterns can be used as a project control means to detect outlier performers that may require additional attention from project leaders; and (4) productivity performance within the discovered sequential patterns varies significantly among different designers; for instance, one of the designers (Designer #6 in the case study) is identified as the most productive designer in executing both Patterns I and II, whereas another designer (Designer #1) is found to be the most productive designer in executing both Patterns III and IV. It is also uncovered that designers, on average, spend less time running the most observed sequential patterns of design commands as they gain more experience. This research contributes: (1) to the body of knowledge by providing a novel approach to monitoring, measuring, and analyzing design productivity; and (2) to the state of practice by providing new insights into what additional design process information can be retrieved from Revit journal files.
... The market for optical networks has grown by diversifying the routes of optical fiber communication. The density of fiber lines has grown along with an exponential increase in data traffic and data transmission [1][2][3]. The capacity of commercial light-wave systems increased from roughly 1 Gb/s in the mid-1980s to 1 Tb/s by 2000 [4]. ...
Article
Full-text available
A handheld line information reader and a line information generator were developed for the efficient management of optical communication lines. The line information reader consists of a photo diode, trans-impedance amplifier, voltage amplifier, microcontroller unit, display panel, and communication modules. The line information generator consists of a laser diode, laser driving circuits, microcontroller unit, and communication modules. The line information reader can detect the optical radiation field of the test line by bending the optical fiber. To enhance the sensitivity of the line information reader, an additional lens was used with a focal length of 4.51 mm. Moreover, the simulation results obtained through BeamPROP® software from Synopsys, Inc. demonstrated a stronger optical radiation field of the fiber due to a longer transmission wavelength and larger bending angle of the fiber. Therefore, the developed devices can be considered as useful tools for the efficient management of optical communication lines.
... A drop in demand has negative consequences for the communication industry, particularly when the related investment has already been made. This happened in the context of the telecom crisis of 2002, when it was assumed that the exponential rise in traffic flow would continue for several years [4]. A correct data flow model might have averted the crisis. ...
Conference Paper
We have modeled data flow in a communication link using the random motion of a particle, which results in a Gaussian pattern of traffic flow over a period of time. The varying degrees of spectral deviation present a coherent model of data flow for wired links. We have considered multiple-link systems and presented an n-dimensional representation of the traffic model using a Gaussian function governed by n parameters. The model opens new insights towards analyzing and predicting bandwidth requirements in communication links and their prospective failure.
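As a minimal sketch (not the authors' model) of why aggregating many independent random contributions yields an approximately Gaussian traffic pattern, consider the following simulation; the number of sources and samples are arbitrary assumptions.

```python
import random

# Treat a link's offered load as the sum of many small, independent random
# contributions (a 1-D random walk); by the central limit theorem the
# aggregate is approximately Gaussian.
def simulate_offered_load(num_sources=1000, num_samples=5000):
    samples = []
    for _ in range(num_samples):
        load = sum(random.choice((-1.0, 1.0)) for _ in range(num_sources))
        samples.append(load)
    return samples

samples = simulate_offered_load()
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"mean={mean:.2f}, std={std:.2f}  (std should be close to 1000**0.5 ~ 31.6)")
```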
... The Internet is growing continuously, and the availability of sophisticated attack tools makes the Internet a very dangerous place. The Internet is doubling every two years according to Moore's law of data traffic [1], and the number of hosts is tripling every two years [2]. Threats are also growing proportionally over the years. ...
Article
Full-text available
IP filtering is a technique used to control the flow of IP packets into and out of a network, where a filter engine inspects the source and destination IP addresses of incoming and outgoing packets. Here the filter engine is designed to improve the performance of the filter, i.e., to reduce the processing time of the filtering mechanism. The data structure used in the IP filter is a hash table; for a large number of hosts and various ranges of IP networks, hashing provides much better performance than a linked list. Here the hash function for the hash table is based on valid IP ranges. With the hash table technique, matching can be done with a minimum number of comparisons.
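The following minimal sketch illustrates the general idea of a hash-based IP filter; the /24 bucketing, rule set and default action are assumptions for illustration, not the paper's design.

```python
import ipaddress

# Minimal hash-based IP filter: rules are /24 networks, and a packet is checked
# by hashing its own /24 prefix, so a lookup is O(1) instead of walking a
# linked list of all rules.
class HashIPFilter:
    def __init__(self, default_action="DENY"):
        self.default_action = default_action
        self.buckets = {}  # Python dict == hash table: /24 network -> action

    def add_rule(self, cidr, action):
        net = ipaddress.ip_network(cidr, strict=False)
        self.buckets[net] = action

    def check(self, ip):
        # Hash key: the /24 network the packet's address belongs to.
        key = ipaddress.ip_network(f"{ip}/24", strict=False)
        return self.buckets.get(key, self.default_action)

fw = HashIPFilter()
fw.add_rule("192.0.2.0/24", "ALLOW")
fw.add_rule("198.51.100.0/24", "DENY")
print(fw.check("192.0.2.17"))   # ALLOW
print(fw.check("203.0.113.5"))  # DENY (default)
```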
... The router will forward IP packets to the indicated interface after the forwarding interface information is retrieved from the routing table. The Internet is doubling every two years according to Moore's law of data traffic [1], and the number of hosts is tripling every two years [2]. Threats are also growing proportionally over the years. ...
Article
Full-text available
The main task of a router is to route Internet Protocol (IP) packets. Routing is achieved with the help of IP lookup. A router stores information about networks and interfaces in data structures commonly called routing tables. Comparing the IP address of an incoming packet with the IPs stored in the routing table to obtain route information is IP lookup. IP lookup is performed by longest IP prefix matching. The performance of an IP router is based on the speed of prefix matching, and IP lookup is a major bottleneck in router performance. Various algorithms and data structures are available for IP lookup. This paper reviews various tree-based structures and evaluates their performance.
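As a minimal sketch of longest-prefix matching with a tree-based structure (an illustrative binary trie, not one of the specific structures surveyed in the paper; the prefixes and next hops below are hypothetical):

```python
# Minimal binary trie for longest-prefix match.
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = {}    # bit ('0' or '1') -> TrieNode
        self.next_hop = None  # set when a prefix ends here

def ip_to_bits(ip):
    return "".join(f"{int(octet):08b}" for octet in ip.split("."))

class IPLookupTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, length, next_hop):
        node = self.root
        for bit in ip_to_bits(prefix)[:length]:
            node = node.children.setdefault(bit, TrieNode())
        node.next_hop = next_hop

    def lookup(self, ip):
        node, best = self.root, None
        for bit in ip_to_bits(ip):
            node = node.children.get(bit)
            if node is None:
                break
            if node.next_hop is not None:
                best = node.next_hop  # remember the longest match seen so far
        return best

table = IPLookupTrie()
table.insert("10.0.0.0", 8, "eth0")
table.insert("10.1.0.0", 16, "eth1")
print(table.lookup("10.1.2.3"))  # eth1 (longest matching prefix wins)
print(table.lookup("10.9.9.9"))  # eth0
```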
... In the US, Internet traffic growth is estimated to be more than 100% per annum [32,33,34], regardless of conditions in the financial market [35]. With the growth in the number of Internet hosts, the same trend is expected in the rest of the world as well [36,37,38]. ...
Article
Full-text available
In this thesis, we present a framework to compare and evaluate alternative topologies and architectures for future optical backbone networks. The most advanced form of currently deployed optical network, a point-to-point WDM network, has IP routers connected with Wavelength Division Multiplexed (WDM) links. A potential bottleneck in this type of network is the capacity of the IP routers as traffic loads increase. An alternative architecture that aims to address this limitation is an Automatically Switched Optical Network (ASON). The design of an optical network involves determining the number of network elements and their interconnection topology for a given traffic demand and the capacity constraint of each network element. We present a linear algorithm to design an ASON. Using this algorithm, we identify the bottlenecks in an ASON, and compare its cost to that of a point-to-point WDM network. Traffic grooming is the aggregation of low level traffic flows into a higher level traffic flow. We develop a scheme that can be used to perform waveband grooming for several different topologies of an ASON that uses single-layer multigranular Optical Cross-Connects (MG-OXCs). We also investigate how different traffic grooming schemes can be used to eliminate the bottlenecks in an ASON. Also in this thesis, we develop a new modeling approach for ASONs, and evaluate the cost and scalability of different architectures of point-to-point WDM networks and ASONs as a function of traffic load. Through this modeling approach, we identify that an ASON is lower in cost than a point-to-point WDM network for low traffic loads. We also demonstrate that an ASON needs IP routers to be lower in cost for high traffic loads. We also analyzed how the cost of an ASON is affected by other factors such as the use of 40 Gb/s versus 10 Gb/s lightpaths, and reductions in network element cost over time.
... Nowadays, as cloud computing and other data intensive applications such as video streaming gain more and more importance, the amount of data processed in networks and compute centers is growing. Moore's law for data traffic [16] states that the overall data traffic doubles each year. This yields unique challenges for resource management, particularly bandwidth allocation. ...
Preprint
Full-text available
In \emph{bandwidth allocation games} (BAGs), the strategy of a player consists of various demands on different resources. The player's utility is at most the sum of these demands, provided they are fully satisfied. Every resource has a limited capacity and if it is exceeded by the total demand, it has to be split between the players. Since these games generally do not have pure Nash equilibria, we consider approximate pure Nash equilibria, in which no player can improve her utility by more than some fixed factor $\alpha$ through unilateral strategy changes. There is a threshold $\alpha_\delta$ (where $\delta$ is a parameter that limits the demand of each player on a specific resource) such that $\alpha$-approximate pure Nash equilibria always exist for $\alpha \geq \alpha_\delta$, but not for $\alpha < \alpha_\delta$. We give both upper and lower bounds on this threshold $\alpha_\delta$ and show that the corresponding decision problem is ${\sf NP}$-hard. We also show that the $\alpha$-approximate price of anarchy for BAGs is $\alpha+1$. For a restricted version of the game, where demands of players only differ slightly from each other (e.g. symmetric games), we show that approximate Nash equilibria can be reached (and thus also be computed) in polynomial time using the best-response dynamic. Finally, we show that a broader class of utility-maximization games (which includes BAGs) converges quickly towards states whose social welfare is close to the optimum.
... For example, the number of hosts advertised in the Domain Name Service (DNS) increased from 1.3 million in January 1993 to 171.6 million in January 2003 [isc], a more than 130-fold expansion in a 10-year time span; the number of users online grew from 16 million in December 1995 to 580.78 million in May 2002 [nua] in only a six-and-a-half-year period; and the volume of traffic carried by the Internet increases at a rate of approximately 100 percent each year [CO01]. All of these speak for the great success of the Internet in the last decade. ...
... The evolution of technology in communications and processing capabilities, as well as other sociological factors (e.g., social progress, global networking, working connected on remote devices), has led to the fact that both the entire Internet and a large range of individual institutions have nearly doubled their network traffic each year since 1997 [16,61]. ...
... Fourth, the possibility exists that the Internet will never attain a true extended state of bandwidth glut or bandwidth scarcity. Analysis conducted in the early 2000s indicated that there was an effect similar to Cooper's Law in terms of Internet growth, and that demand for the bandwidth would continue to increase into the future, doubling roughly once every year [Coffman and Odlyzko, 2000]. It also held that due to this, data transmission capabilities would continue to grow with demand. ...
Article
Full-text available
Global Internet usage has fueled much of the technological innovation seen during the first decade of the twenty-first century. Unsurprisingly, this has led to a commensurate increase in consumption of bandwidth, the measure of how much information the Internet can transmit. However, bandwidth is not an inexhaustible resource. Wired communications require physical infrastructure, requiring considerable investment and construction to expand, and wireless communications require sections of electromagnetic spectrum, which has grown much more crowded. This article examines the current bandwidth situation in light of networking trends and events as of 2010. Findings indicate that, although there is no immediate bandwidth crisis, one may eventually come, especially in the wireless spectrum, and, although technological innovation may provide a considerable hedge against the crippling implications of such a shortage, care must be taken to manage growth in bandwidth usage to maintain it at acceptable levels while accounting for the needs of all concerned parties.
... Some authors predict (see [1]) that over-engineering (providing enough capacity to meet possible peak demands) will eliminate network congestion phenomena. Indeed, the present Internet traffic growth of about 100% per year [2] may be compensated for by available technology solutions. Thus the question may be asked whether it is reasonable to waste time dealing with reactive flow control issues, where one of the main targets is to provide the means to avoid congestion - perhaps it would be better to wait till tomorrow. ...
Article
Full-text available
The paper deals with reactive flow control in a communication network, where the objective of the control is to maximize the total utility of all sources over their transmission rates. The control mechanism is derived as a price adjustment algorithm, formally solving the dual problem of the price method. The paper examines the workability of implementing various proposed price adjustment algorithms in IP-based networks and discusses the feasibility of using prices generated during optimization periods as a basis for charging the users. Detailed simulation results are presented.
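As a concrete illustration of such a price adjustment mechanism, the following minimal sketch applies a gradient-style dual update to a single shared link; the logarithmic utilities, step size and capacity are assumptions for illustration, not the paper's exact algorithm.

```python
# Price-method sketch for one shared link: each source chooses the rate that
# maximizes w_s*log(x) - p*x (so x_s = w_s / p), and the link adjusts its price
# in proportion to the excess demand (a gradient step on the dual problem).
def price_adjustment(weights, capacity, gamma=0.01, iterations=5000):
    p = 1.0  # initial link price
    rates = [0.0] * len(weights)
    for _ in range(iterations):
        rates = [w / p for w in weights]      # each source's best response
        excess = sum(rates) - capacity        # aggregate demand minus capacity
        p = max(1e-6, p + gamma * excess)     # dual (price) update, kept positive
    return p, rates

price, rates = price_adjustment(weights=[1.0, 2.0, 3.0], capacity=10.0)
print(f"price={price:.3f}, rates={[round(r, 2) for r in rates]}, total={sum(rates):.2f}")
```

At the fixed point the price settles where total demand equals capacity, so the sources' rates sum to the link capacity in proportion to their weights.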
... Instead, we are. Our scarce human attention (Simon, 1971) is simply incapable of managing the millions of gigabytes that are sent and received every day (Coffman & Odlyzko, 2000). In 1998, there were 87.2 billion pieces of direct mail delivered to US mailboxes. ...
... However, it is an undeniable fact that we are awash in a flood of data today. Coffman et al. [27] envisage that Internet traffic doubles each year, and Parkinson's law states that, as long as there is storage, data will keep expanding [28]. In today's Internet era, around 40% of the world population is connected, and this is one of the reasons for the abundance of global IP traffic [29]. ...
Article
Full-text available
Summarization has been proven to be a useful and effective technique for supporting the analysis of large amounts of data. Knowledge discovery from data (KDD) is time-consuming, and summarization is an important step to expedite KDD tasks by intelligently reducing the size of the processed data. In this paper, different summarization techniques for structured and unstructured data are discussed. The key finding of this survey is that not all summarization techniques create a summary suitable for further analysis. It is highlighted that sampling techniques are a viable way of creating a summary for further knowledge discovery, such as anomaly detection from a summary. Different summary evaluation metrics are also discussed.
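Since the survey highlights sampling as a way to build a summary suitable for further analysis, here is a minimal reservoir-sampling sketch (a generic technique, not one of the specific methods surveyed):

```python
import random

# Reservoir sampling: keep a fixed-size uniform random summary of a stream
# whose total length is unknown in advance.
def reservoir_sample(stream, k):
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)   # each item survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Summarize a "stream" of 1,000,000 records down to 10 items for later analysis.
summary = reservoir_sample(range(1_000_000), k=10)
print(summary)
```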
... The Internet has become the worldwide interconnection of billions of individuals through providers operated by government, industry, academia, and private parties. Supporting the fast growth of the Internet is a challenge [1], not only due to technical issues, but also because of economic factors. For instance, in order to decrease and/or postpone investments in the network infrastructure, an Internet Service Provider (ISP) may employ discriminatory traffic management techniques [2]. ...
Article
Full-text available
Network Neutrality is becoming increasingly important as the global debate intensifies and governments worldwide implement and withdraw regulations. According to this principle, all Internet traffic must be processed without differentiation, regardless of origin, destination and/or content. Neutrality supporters claim that traffic differentiation can compromise innovation, fair competition and freedom of choice. However, detecting that an ISP is not employing traffic differentiation practices is still a challenge. This work presents a survey of strategies and tools for detecting traffic differentiation on the Internet. After presenting basic neutrality definitions as well as an overview of the worldwide debate, we describe ways that can be used by an ISP to implement traffic differentiation, and define the problem of differentiation detection. This is followed by a description of multiple existing strategies and tools. These solutions differ mainly in how they execute network measurements, the metrics employed, traffic generation techniques, and statistical methods. We also present a taxonomy for the different types of traffic differentiation and the different types of detection. Finally, we identify open challenges and future research directions.
... The importance of the Internet in modern society has increased significantly as the number of users and services available on the network grows [K.G. Coffman 2002]. Adapting and maintaining the network structure to meet this growing demand is a challenge, especially because, in addition to technological aspects, economic aspects must also be considered. Access providers (Internet Service Providers, ISPs) may employ network traffic management techniques to reduce and/or postpone investments ...
Chapter
Full-text available
Network Neutrality (NN) is becoming increasingly important as the global debate intensifies and governments worldwide implement regulations. According to NN, all types of traffic must be processed without discrimination, regardless of origin, destination and/or content. The discrimination between different types of traffic compromises innovation, fair competition and the freedom of choice of consumers. However, ensuring that ISPs are not employing discriminating practices is still a challenge. This tutorial presents an overview of several existing solutions to detect "traffic differentiation". These solutions differ mainly in the monitoring topology, metrics and statistical methods employed. An introduction to the global debate around NN is also presented, as well as an overview of different regulations defined in Brazil and other countries around the world.
... In the big data era, modern enterprise data and Internet traffic have been exploding exponentially, with a per-year growth that exceeds the total amount of data in past years [1]. That exerts tremendous pressure on the existing ...
Article
Full-text available
The huge amount of data enforces great pressure on the processing efficiency of database systems. By leveraging the in-situ computing ability of emerging nonvolatile memory, processing-in-memory (PIM) technology shows great potential in accelerating database operations against traditional architectures without data movement overheads. In this article, we introduce ReSQM, a novel ReCAM-based accelerator, which can dramatically reduce the response time of database systems. The key novelty of ReSQM is that some commonly used database queries that would be otherwise processed inefficiently in previous studies can be in-situ accomplished with massively high parallelism by exploiting the PIM-enabled ReCAM array. ReSQM supports some typical database queries (such as SELECTION, SORT, and JOIN) effectively based on the limited computational mode of the ReCAM array. ReSQM is also equipped with a series of hardware-algorithm co-designs to maximize efficiency. We present a new data mapping mechanism that allows enjoying in-situ in-memory computations for SELECTION operating upon intermediate results. We also develop a count-based ReCAM-specific algorithm to enable the in-memory sorting without any row swapping. The relational comparisons are integrated for accelerating inequality join by making a few modifications to the ReCAM cells with negligible hardware overhead. The experimental results show that ReSQM can improve the (energy) efficiency by $611\times$ ($193\times$), $19\times$ ($17\times$), $59\times$ ($43\times$), and $307\times$ ($181\times$) in comparison to a 10-core Intel Xeon E5-2630v4 processor for SELECTION, SORT, equi-join, and inequality join, respectively. In contrast to state-of-the-art CMOS-based CAM, GPU, FPGA, NDP, and PIM solutions, ReSQM can also offer $2.2\times$ to $39\times$ speedups.
... For example, we can mention the need to increase the capacity of communication channels. By the 2000s, Internet traffic roughly doubled each year [1]. Today, the emergence of new services, such as the virtual private network (VPN) [2], virtual/augmented reality [3], and new technologies, such as 8K [4], is driving future demand for bandwidth [5]. ...
Article
The growth of transmission rates in optical fibers can increase the demand for devices that perform network node processing. Usually, such devices achieve complex optical signal processing through high non-linearity effects and optoelectronic devices. In this work, we present the numerical acquisition of a configurable multi-function logic gate in which the OR and AND gates can be enabled based on the logic values entered in a selector. Our device consists of a single piece of three-core PCF, with linear pulse propagation, and without the need for any other mechanisms. This result presents evidence that information processing within functional fibers is possible and might be achieved using only fiber design.
... In addition, e-commerce became more widespread and increasingly popular around the world in the 1980s, followed by the Internet and e-mail in the 1990s. By this time, the rapidly adopted Internet and e-mail technologies had begun to facilitate mass data storage and analysis (Coffman and Odlyzko, 2002). ...
Article
Full-text available
It is inevitable that industrial revolutions, as in every era, affect land, soil, buildings and the related environment. While technological developments around the world change with ever-increasing momentum, it is unthinkable that the real estate sector would remain unaffected by the changing world order, as is seen in many sectors under the influence of technology. In an environment shaped by Industry 4.0, and in which opinions about Industry 5.0 are already circulating, it will be inevitable for the real estate field to also benefit from the advantages of technology. In the world and in Turkey, there are a limited number of elements that fall under the definition of real estate. These elements are affected by technological developments mostly through their physical characteristics, and they adapt to the steps of the digital world primarily through these characteristics. This study presents Real Estate 4.0, an important reflection of Industry 4.0 in the real estate sector, and the transformation that digitalization in the field of property management has undergone together with the development of real estate technology. A distinct property management vision is formed, interpreted from a futurist perspective on the basic components and general characteristics of Real Estate 4.0, its contributions to the city and to people, and the future of Real Estate 4.0. The study also discusses the measures taken in the real estate sector in connection with the COVID-19 pandemic, which occupies the world agenda, and presents the defense mechanism that the real estate sector has developed against global changes with the support of technology.
... However, this is not the topic discussed here. The second is to focus on the amount of information conveyed by each communication activity [4]. One may approach this issue in an algorithmic way. ...
Chapter
This research introduces a new method to evaluate and make the most practical use of the growth of information on the Internet. The method is based on the "Internet in Real Time" statistics delivered by the WebFX and Worldometer tools, and develops a combination of these two options. A method-based application is deployed with the following five live counters: Internet traffic, Tweets sent, Google searches, Emails sent and Tumblr posts, to measure the dynamics of the overall trend in user activity. Four parts of a day and three categories of days are examined as corresponding discrete inputs. In the search for the most effective web advertisement, the display strategy will vary directly with the level of users' activity in the form of live counters over the 12 months of 2018. Two competent surveys consolidate the outcome of our work and demonstrate how a company can identify the web advertising strategy closest to its business needs and interests. We conclude with a discussion of whether the considered method can lead to a superior efficiency level for all other communication activities.
... In addition to regular backups, incremental backup to remote cloud servers and data compression techniques ensure that typically <5% of the total production data needs to be backed up during any given disaster [23,29]. Moore's law for Big Data states that Internet traffic is likely to continue doubling each year for the next decade [39]. Assuming that large enterprises like Google processed up to 100 PB of data daily in their DCs in 2015 [23,40], this would translate to roughly 400 PB in 2017. ...
Article
Full-text available
Network failures caused by disasters (both natural and man-made, such as earthquakes, floods, cyclones, electromagnetic pulse attacks, etc.) result in communication disruption and huge amounts of data loss in backbone datacenter (DC) networks. To prevent such large-scale network disruptions and quickly resume connectivity after a disaster, network operators require improved and efficient data-transfer algorithms in geographically distributed (geo-distributed) optical inter-DC networks. Minimizing loss of infrastructure and preventing network disruption requires estimating the damage from a possible disaster. In this study, the authors consider a mutual backup model, where DCs can serve as backup sites for each other, thereby significantly reducing the backup duration (i.e. the DC-Backup-Window (DC-B-Wnd)). They specifically consider the joint optimization of probabilistic backup site selection and the amount of data to be backed up. They propose mixed-integer linear programming models for backup time minimization using a single DC as well as dual DCs at backup sites. Further, they investigate the trade-off between DC-B-Wnd and the computational complexity of the proposed algorithms and perform extensive numerical simulations to show that, in the case of disasters, single and dual DC backups with risk-aware probabilistic path selection give shorter backup windows compared to existing algorithms.
... Moore's law for Big Data states that Internet traffic is likely to continue doubling each year for the next decade [16]. Assuming large enterprises like Google processed up to 100 PB of data daily in their DCs in 2015 [12], this would translate to roughly 400 PB of data at present. ...
... Evidence seems to abound that the time-honored four P's (product, place, price, and promotion) are increasingly coming under pressure. Internet traffic is approximately doubling each year, which represents extremely fast growth, much faster than the increases in other communication services (Coffman and Odlyzko 2000). In cyberspace, producing or service-providing firms also have to stress relationship-based marketing plans in order to achieve customer loyalty. ...
Article
Full-text available
This study constructs frameworks for the analysis of e-commerce and Internet marketing in Bangladesh. The study aims at analyzing the impact of e-commerce and the Internet on the usual phenomena of traditional marketing in Bangladesh. The study reflects the market satisfaction level towards the existing services and performance of Internet marketing, and states the relationship between e-commerce and the Internet market in Bangladesh. Although Bangladesh is a developing country, the use of technologies has a great impact on the minds of customers as well as businessmen. The prospects of electronic technologies are optimistic in Bangladesh. This study is descriptive in nature. In the study, a total of 160 respondents were selected from various regions of Bangladesh and from different professions, and primary data were collected from these respondents using questionnaires. Secondary data were also used, collected from journals, books, websites, etc. The data were analyzed using statistical tools. The study finds that current customers of e-commerce and Internet marketing in Bangladesh are satisfied with most of the attributes but dissatisfied with a small number of them. Finally, the study characterizes customers' attitudes and satisfaction levels toward e-commerce and Internet marketing in Bangladesh.
...  This dataset is made available to give correct facts and figures on Internet data traffic in a Nigerian university campus that is driven by Information and Communication Technologies (ICTs) [5,6]. ...
Article
Full-text available
In this data article, a robust data exploration is performed on daily Internet data traffic generated in a smart university campus over a period of twelve (12) consecutive months (January–December, 2017). For each day of the one-year study period, Internet data download traffic and Internet data upload traffic at Covenant University, Nigeria were monitored and properly logged using the required application software, namely FreeRADIUS, the Radius Manager Web application, and Mikrotik Hotspot Manager. A comprehensive dataset with detailed information is provided as supplementary material to this data article for easy research utility and validation. For each month, descriptive statistics of daily Internet data download traffic and daily Internet data upload traffic are presented in tables. Boxplot representations and time series plots are provided to show the trends of data download and upload traffic volume within the smart campus throughout the 12-month period. Frequency distributions of the dataset are illustrated using histograms. In addition, correlation and regression analyses are performed and the results are presented using a scatter plot. Probability Density Functions (PDFs) and Cumulative Distribution Functions (CDFs) of the dataset are also computed. Furthermore, Analysis of Variance (ANOVA) and multiple post-hoc tests are conducted to understand the statistical difference(s) in the Internet traffic volume, if any, across the 12-month period. The robust data exploration provided in this data article will help Internet Service Providers (ISPs) and network administrators in smart campuses to develop empirical models for optimal Quality of Service (QoS), Internet traffic forecasting, and budgeting.
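A minimal sketch of the kind of exploration described above, using hypothetical daily download volumes (the real dataset's values, fields and units will differ):

```python
import statistics

# Hypothetical daily download volumes (GB) for two weeks of campus traffic.
daily_download_gb = [310.2, 295.8, 402.1, 388.4, 275.0, 150.3, 142.9,
                     320.7, 305.5, 410.9, 395.2, 280.1, 155.6, 148.0]

print(f"mean  = {statistics.mean(daily_download_gb):.1f} GB/day")
print(f"stdev = {statistics.stdev(daily_download_gb):.1f} GB/day")

# Empirical CDF: fraction of days with download volume <= x.
def ecdf(data, x):
    return sum(1 for v in data if v <= x) / len(data)

print(f"P(download <= 300 GB) = {ecdf(daily_download_gb, 300):.2f}")
```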
... However, these measurement data are strongly influenced not only by the number of users, but also by the techniques and technologies employed and by their changes. At the same time, much more realistic estimates of the growth of data traffic have recently appeared compared to earlier ones (see Coffman and Odlyzko 2001). The European data show a mixed picture. ...
Article
From the very beginning, the avant-garde of digital culture received with resentment and suspicion the efforts of states and of international political and economic organizations to force the operation of the World Wide Web within rules dictated from outside. The communication taking place there already has written and unwritten rules. These technical and community rules, from the TCP/IP protocol connecting computers and local networks, through the top-level domain names based on country codes and generic codes, to the charter of fair computer use and communication, were created by volunteers and are based on the consensus of the users. After all, the World Wide Web has never had, and still does not have, a government that could impose anything on its community by virtue of special authority. These rules, of course, express the political culture and values of the creators of the World Wide Web and of its first few hundred thousand users, which can be clearly and unambiguously recognized in the discourse they created and used among themselves. If we had to find words to characterize the values of digital culture, the first to come to mind would be openness, decentralization, interactivity, consensus-based regulation, free choice of identity, non-locality, and the rejection of hierarchy, privileges and authorities. According to the creators of digital culture, these words mark out the new horizon of our knowledge, our culture, our human relationships and our political actions. This is also why the influential theorists of the Internet like to see themselves and the society of the World Wide Web as continuing the libertarian political-philosophical traditions of Jefferson, Washington, Paine, Mill, Madison, Tocqueville, Brandeis, Holmes and others, radicalizing them and bringing them to fulfillment in cyberspace (Barbrook and Cameron 1997: 41–59; Barlow 1990: 45–57; Sobchack 1995: 11–28; Sterling 1994). (...)
Thesis
Full-text available
Network Neutrality is becoming increasingly important as the global debate intensifies and governments worldwide implement and withdraw regulations. According to this principle, all traffic must be processed without differentiation, regardless of origin, destination and/or content. Traffic Differentiation (TD) practices should be transparent, regardless of regulations, since they can significantly affect end-users. It is thus essential to monitor TD in the Internet. Several solutions have been proposed to detect TD. These solutions are based on network measurements and statistical inference. However, there are still open challenges. This thesis has three main objectives: (i) to consolidate the state of the art regarding the problem of detecting TD; (ii) to investigate TD in contexts not yet explored, in particular the Internet of Things (IoT); and (iii) to propose new solutions regarding TD detection that address open challenges, in particular locating the source of TD. We first describe the current state of the art, including a description of multiple solutions for detecting TD. We also propose a taxonomy for the different types of TD and the different types of detection, and identify open challenges. Then, we evaluate the impact of TD on IoT, by simulating TD on different IoT traffic patterns. Results show that even a small prioritization may have a significant impact on the performance of IoT devices. Next, we propose a solution for detecting TD in the Internet. This solution relies on a new strategy of combining several metrics to detect different types of TD. Simulation results show that this strategy is capable of detecting TD under several conditions. We then propose a general model for continuously monitoring TD on the Internet, which aims at unifying current and future TD detection solutions, while taking advantage of current and emerging technologies. In this context, a new solution for locating the source of TD in the Internet is proposed. The goal of this proposal is both to enable the implementation of our general model and to address the problem of locating TD. The proposal takes advantage of properties of Internet peering to identify in which Autonomous System (AS) TD occurs. Probes from multiple vantage points are combined, and the source of TD is inferred based on the AS-level routes between the measurement points. To evaluate this proposal, we first ran several experiments to confirm that Internet routes do indeed present the required properties. Then, several simulations were performed to assess the efficiency of the proposal for locating TD. The results show that, for several different scenarios, issuing probes from a few end-hosts in core Internet ASes achieves results similar to those obtained from numerous end-hosts at the edge.
Conference Paper
Generally, the voice quality of a VoIP call can be analyzed through the measurement of suitable metrics at the application layer of the International Organization for Standardization/Open Systems Interconnection (ISO/OSI) protocol stack. However, the peculiarities of this kind of measurement make it very difficult to provide each user with a value representative of the quality of each VoIP call.
Article
Optimal utilization of resources in present-day communication networks is a challenging task. Routing plays an important role in achieving optimal resource utilization. The open shortest path first (OSPF) routing protocol is widely used for routing packets from a source node to a destination node. This protocol assigns weights (or costs) to the links of a network. These weights are used to determine the shortest paths from all source nodes to all destination nodes. Assignment of these weights to the links is classified as an NP-hard problem. This paper formulates the OSPF weight setting problem as a multi-objective optimization problem, with maximum utilization, number of congested links, and number of unused links as the optimization objectives. Since the objectives are conflicting in nature, an efficient approach is needed to balance the trade-off between these objectives. Fuzzy logic has been shown to efficiently solve multi-objective optimization problems. A fuzzy cost function for the OSPF weight setting problem is developed in this paper based on the Unified And-OR (UAO) operator. Two iterative heuristics, namely, simulated annealing (SA) and simulated evolution (SimE), have been implemented to solve the multi-objective OSPF weight setting problem using a fuzzy cost function. Results are compared with those found using other cost functions proposed in the literature (Sqalli et al. in Network Operations and Management Symposium, NOMS, 2006). Results suggest that, overall, the fuzzy cost function performs better than existing cost functions, with respect to both SA and SimE. Furthermore, SimE shows superior performance compared to SA. In addition, a comparison of SimE with NSGA-II shows that, overall, SimE demonstrates slightly better performance in terms of quality of solutions.
Article
High-speed routers rely on well-designed packet buffers that support multiple queues, provide large capacity, and offer short response times. Some researchers have suggested combined SRAM/DRAM hierarchical buffer architectures to meet these challenges. However, these architectures suffer from either large SRAM requirements or high time complexity in memory management. In this paper, we present a scalable, efficient, and novel Data Dissemination architecture. Two fundamental issues need to be addressed to make this architecture feasible: (1) how to minimize the overhead of an individual packet buffer; and (2) how to design scalable packet buffers using independent buffer subsystems. We address these issues by first designing an efficient compact buffer that reduces the SRAM size requirement by (k – 1)/k. Then, we introduce a feasible way of coordinating multiple subsystems with a load-balancing algorithm that maximizes overall system performance. Both theoretical analysis and experimental results demonstrate that our load-balancing algorithm and the Data Dissemination architecture can easily scale to meet the buffering needs of high-bandwidth links and satisfy the requirements of scale and support for multiple queues.
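The following is a schematic model of the SRAM/DRAM hierarchy discussed above, not the paper's Data Dissemination design: each queue keeps small head and tail caches in fast memory and moves packets to and from bulk DRAM in batches of size b, so only a handful of packets per queue occupy SRAM. The class and parameter names are invented for illustration.

```python
from collections import deque

# Schematic model of a hierarchical SRAM/DRAM packet buffer (illustrative,
# not the paper's Data Dissemination architecture): per-queue head and tail
# caches live in fast SRAM, the bulk of the queue lives in slow DRAM, and
# packets move between them in batches of size b.

class HierarchicalQueue:
    def __init__(self, b=4):
        self.b = b
        self.tail_sram = deque()  # most recently arrived packets
        self.dram = deque()       # bulk storage
        self.head_sram = deque()  # next packets to depart

    def enqueue(self, pkt):
        self.tail_sram.append(pkt)
        if len(self.tail_sram) >= self.b:          # flush a full batch to DRAM
            for _ in range(self.b):
                self.dram.append(self.tail_sram.popleft())

    def dequeue(self):
        if not self.head_sram:                     # refill head cache from DRAM
            for _ in range(min(self.b, len(self.dram))):
                self.head_sram.append(self.dram.popleft())
        if not self.head_sram and self.tail_sram:  # short queue: bypass DRAM
            self.head_sram.append(self.tail_sram.popleft())
        return self.head_sram.popleft() if self.head_sram else None

if __name__ == "__main__":
    q = HierarchicalQueue(b=4)
    for i in range(10):
        q.enqueue(f"pkt{i}")
    print([q.dequeue() for _ in range(10)])  # packets come out in FIFO order
```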
Article
Full-text available
A number of network operators have recently claimed (1) that their costs are exploding due to increased Internet broadband traffic associated with video; (2) that, due to market defects, consumers need not and do not pay the increased costs of the broadband service; and (3) that it may therefore become necessary for content providers to subsidise the cost of the consumer's Internet service - especially as networks evolve to fibre-based Next Generation Access (NGA). Under close scrutiny, none of these claims is persuasive. (1) Internet traffic is indeed increasing, but usage-based cost per subscriber in the fixed network is fairly constant - technological improvements are in balance with the growth in traffic (which is in fact considerably less, in percentage terms, than it was in past years). (2) Prices for fixed broadband service are stable because costs are stable - this is a success of the competitive market, not a failure. In those instances where costs truly are increasing, network operators seem to be able to raise prices accordingly. (3) The argument for cross-subsidies rests on the theory of two-sided markets, but that theory does not necessarily imply that subsidies should be flowing from content providers to network operators. If the greatest challenge to NGA migration is that the incremental willingness of consumers to pay for ultra-fast broadband is insufficient to fund the corresponding network upgrades, then what is apparently needed is more high value high bandwidth content. One could just as well argue that subsidies should flow into the content provision industry as out of it - a detailed examination would be needed.
Research
Full-text available
Internet Protocol (IP) traffic has been growing strongly, increasing the likelihood of significant IP traffic implications in the future. The Cisco Visual Networking Index (VNI) is designed to provide a customized view and qualitative analysis of IP traffic growth across various categories and types of global IP network within a specified period of time. This paper focuses on the state of global IP traffic growth based on the Cisco VNI forecast for 2009 to 2014, further considering the mobile data forecast from 2010 to 2015. From these, it was confirmed that global IP traffic will quadruple from 2009 to 2014. IP traffic will continue to be dominated by video and, by 2015, almost 66% of the world's mobile data traffic will be video. The Middle East and Africa are expected to generate significant IP traffic in the near future, growing at the fastest pace of all regions. Effective IP bandwidth management will therefore be needed to keep pace with this remarkable growth in IP traffic.
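As a quick sanity check on the headline figure, quadrupling over the five-year window corresponds to a compound annual growth rate of roughly 32%, as the short calculation below shows.

```python
# Back-of-the-envelope check of the VNI figure quoted above: traffic
# quadrupling over the five years 2009-2014 corresponds to a compound
# annual growth rate (CAGR) of 4**(1/5) - 1, i.e. roughly 32% per year.

growth_factor, years = 4.0, 5
cagr = growth_factor ** (1.0 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # ~32.0% per year
```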
Thesis
Full-text available
The subject of this thesis is the development of a social-science methodology for analysing symbolic orders. These are understood here as the structures and systems of socially standardized signs (symbols) that form in the process of communication.
Article
Full-text available
IP filtering is a technique used to control the flow of IP packets into and out of a network, in which a filter engine inspects the source and destination IP addresses of incoming and outgoing packets. Here, the filter engine is designed to improve the performance of the filter, i.e. to reduce the processing time of the filtering mechanism. The data structure used in the IP filter is a hash table; for larger numbers of hosts spread across a variety of IP network ranges, hashing provides much better performance than a linked list. The hash function for the hash table is based on the valid IP classes and their host capacities, i.e. class A, class B, and class C. The IP filter engine has to compare the source and destination IP addresses of each IP packet; with the hash table technique, this comparison can be done with a minimum number of comparisons.
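A minimal sketch of a class-bucketed, hash-based filter of the kind described above (not the paper's exact engine) is shown below; the rule format and class boundaries are simplified for illustration.

```python
import ipaddress

# Minimal sketch of a hash-based IP filter of the kind described above
# (illustrative, not the paper's exact engine): rules are bucketed by
# address class (A/B/C) and looked up in a hash table, so each packet
# needs only a constant number of comparisons instead of a list scan.

def ip_class(addr):
    first_octet = int(str(addr).split(".")[0])
    if first_octet < 128:
        return "A"
    if first_octet < 192:
        return "B"
    return "C"   # simplification: everything >= 192.0.0.0 lands in the C bucket

class IPFilter:
    def __init__(self):
        # class letter -> set of blocked (src, dst) pairs
        self.rules = {"A": set(), "B": set(), "C": set()}

    def block(self, src, dst):
        src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        self.rules[ip_class(src)].add((src, dst))

    def allowed(self, src, dst):
        src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        return (src, dst) not in self.rules[ip_class(src)]

if __name__ == "__main__":
    f = IPFilter()
    f.block("10.0.0.5", "203.0.113.9")
    print(f.allowed("10.0.0.5", "203.0.113.9"))    # False: blocked
    print(f.allowed("172.16.0.1", "203.0.113.9"))  # True: no matching rule
```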
Article
The applicability of network-based computing depends on the availability of the underlying network bandwidth. The growing gap between the capacity of the backbone network and end users' needs results in a serious bottleneck in the access network in between, and ISPs incur business disadvantages as a result. If an ISP knows about this situation in advance, or is able to predict traffic volume and the high-load zones of end-to-end links, both the ISP and end users can reduce the gap and improve ISP service quality. In this paper, simulation tools such as ACE, ADM, and Flow Analysis were used to predict traffic volume and end-to-end link high-load zones. Using these simulation tools, we were able to estimate sequential transactions in a real network for e-Commerce. We also imported the estimated network data into a virtual network environment and created background traffic. In this virtual network environment, we obtained simulation results for traffic volume prediction and end-to-end link high-load zones as the number of users increases.
Article
Full-text available
Internet traffic measurement and analysis generate datasets that are indicators of usage trends, and such datasets can be used for traffic prediction via various statistical analyses. In this study, an extensive analysis was carried out on the daily Internet traffic data generated from January to December 2017 in a smart university in Nigeria. The dataset analysed contains seven key features: the month, the week, the day of the week, the daily IP traffic for the previous day, the average daily IP traffic for the two previous days, the traffic status classification (TSC) for the download traffic, and the TSC for the upload traffic. The data mining analysis was performed using four learning algorithms (Decision Tree, Tree Ensemble, Random Forest, and Naïve Bayes) on the KNIME (Konstanz Information Miner) platform, and the kNN, Neural Network, Random Forest, Naïve Bayes, and CN2 Rule Inducer algorithms on the Orange platform. A comparative performance analysis of the models is presented using the confusion matrix, Cohen's Kappa value, the accuracy of each model, the area under the ROC curve, and other measures. A minimum accuracy of 55.66% was observed for both the upload and the download data on the KNIME platform, while minimum accuracies of 57.3% and 51.4%, respectively, were observed on the Orange platform.
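For readers who want to reproduce a comparable workflow outside KNIME or Orange, the sketch below trains a decision tree on synthetic data shaped like the features listed above, assuming scikit-learn and NumPy are available; the data, feature values, and resulting scores are illustrative, not the study's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Sketch of the classification workflow described above, but with
# scikit-learn instead of KNIME/Orange and with synthetic data standing in
# for the university's daily IP traffic records. Features mirror the ones
# listed in the abstract: month, week, day of week, previous day's traffic,
# and the two-day average; the label is a binary traffic-status class.

rng = np.random.default_rng(0)
n = 365
month = rng.integers(1, 13, n)
week = rng.integers(1, 53, n)
day_of_week = rng.integers(0, 7, n)
prev_day = rng.gamma(shape=3.0, scale=100.0, size=n)      # GB/day, synthetic
two_day_avg = prev_day * rng.uniform(0.8, 1.2, n)
X = np.column_stack([month, week, day_of_week, prev_day, two_day_avg])
y = (prev_day > np.median(prev_day)).astype(int)          # "high"/"low" status

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("accuracy:", round(accuracy_score(y_te, pred), 3))
print("Cohen's kappa:", round(cohen_kappa_score(y_te, pred), 3))
```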
Book
Full-text available
To the average user, the Internet appears as a source of boundless information, numerous conveniences and applications, and vast resources of entertainment content. Every day, billions of people enjoy the benefits of the Internet without considering that most of its resources, delivered as virtual products, are free of charge. Valuable virtual products are supplied at zero prices, which contradicts the economic principle of scarcity. From an economic point of view, this is a paradox. This book attempts to explain it within the framework of three research programmes: neoclassical economics, transaction cost economics, and social exchange theory. The considerations presented here are both practical, showing the business strategies of entrepreneurs who make virtual products available free of charge, and theoretical, concerning the explanatory power of the individual research programmes.
Article
Full-text available
This article addresses the question of the scarcity of information goods, with particular emphasis on their digital form. Its aim is to consider scarcity in the context of the building blocks of digital information goods, namely bits. The empirical material for the study of scarcity consists of data on hard disk prices from 1980 to 2017. The research method is based on theoretical considerations of scarcity in economics and on an analogy to the relationships described by Moore's Law, which is subjected to critical analysis. The article analyses changes in the unit cost of hard disks per megabyte in order to show the relationship between the scarcity of physical storage media and the non-scarcity of the intangible content of information goods. The results show that, in relative terms, digital information goods may be non-scarce.
Article
Full-text available
The academic performance of a student in a university is determined by a number of factors, both academic and non-academic. A student who previously excelled at the secondary school level may lose focus due to peer pressure and social lifestyle, while one who previously struggled due to family distractions may be able to focus away from home and, as a result, excel at the university. University admission in Nigeria is typically based on the cognitive entry characteristics of a student, which are mostly academic and may not necessarily translate to excellence once in the university. In this study, the relationship between cognitive admission entry requirements and the academic performance of students in their first year, using their CGPA and class of degree, was examined using six data mining algorithms on the KNIME and Orange platforms. Maximum accuracies of 50.23% and 51.9%, respectively, were observed, and the results were verified using regression models, with R2 values of 0.207 and 0.232 recorded, indicating that students' performance in their first year is not fully explained by cognitive entry requirements.
Conference Paper
In this thesis multiple approaches are presented which demonstrate the effectiveness of mathematical modelling for the study of terrorism and counter-terrorism strategies. In particular, theories of crime science are quantified to obtain objective outcomes. The research findings are laid out in four parts. The first model studied is a Hawkes point process. This model describes events where past occurrences can lead to an increase in future events. In the context of this thesis, a point process is used to capture dependence among terrorist attacks committed by the Provisional Irish Republican Army (PIRA) during "The Troubles" in Northern Ireland. The Hawkes process is adapted to produce a method capable of quantitatively determining temporally distinct phases within the PIRA movement. Expanding on the Hawkes model, the next area of research introduces a time-varying background rate. In particular, a sinusoidal background rate is derived using the Fast Fourier Transform. This model then enables a study of seasonal trends in the attack profile of the Al Shabaab (AS) group. To study the spatial dynamics of terrorist activity, a Dirichlet Process Mixture (DPM) model is examined. The DPM is used in a novel setting by considering the influence of improvised explosive device (IED) factory closures on PIRA attacks. The final research area studied in this thesis is data collection methods. An information retrieval (IR) tool is designed which can automatically obtain terrorist event details. Machine learning techniques are used to compare this IR data to a manually collected dataset. Future research ideas are introduced for each of the topics covered in this dissertation.
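As a minimal illustration of the first two models described above (not the thesis's fitted models), the sketch below evaluates the conditional intensity of a Hawkes process with an exponential excitation kernel and a sinusoidal background rate; all parameter values and event times are made up.

```python
import math

# Illustrative evaluation of the conditional intensity of a Hawkes process
# with an exponential kernel and a sinusoidal background rate, mirroring the
# two models described above. All parameter values are invented for the demo.

def background_rate(t, mu0=0.5, amplitude=0.2, period=365.0, phase=0.0):
    """Seasonally varying background rate mu(t) >= 0 (events per day)."""
    return max(0.0, mu0 + amplitude * math.sin(2 * math.pi * t / period + phase))

def hawkes_intensity(t, history, alpha=0.8, beta=1.2):
    """lambda(t) = mu(t) + sum over past events t_i < t of alpha*exp(-beta*(t - t_i))."""
    excitation = sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)
    return background_rate(t) + excitation

if __name__ == "__main__":
    past_attacks = [10.0, 12.5, 13.0, 40.0]          # event times in days
    for t in (13.5, 20.0, 41.0):
        print(f"lambda({t}) = {hawkes_intensity(t, past_attacks):.3f}")
```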
Book
The goal of this research was to assess whether the current legal framework of obligations related to personal data breaches under the GDPR is purposefully applicable in the context of the Internet of Things and, if so, which changes can help overcome any discovered challenges or obstacles. This issue is studied from four perspectives. The topic is introduced from the cyber security perspective: the term personal data breach is defined and explained in relation to the term security incident, possible forms of personal data breach are presented, evidence for the scope and frequency of this phenomenon is offered, and the future trend of its development is outlined. The potential harm to individuals from a personal data breach is then explained. After that, the topic is approached from the legal perspective, with a comprehensive analysis of the legal frameworks containing obligations aimed at the prevention or mitigation of personal data breaches in the EU as well as in the United States. These are then discussed with the aim of identifying the challenges and limits that apply to them. The next chapter introduces the impact of the technological change of context defined by the term Internet of Things, focusing on the new challenges it brings to personal data processing. The variety of situations that fall under this term is captured through three partial scenarios: automated machine-to-machine communication, the smart city environment, and the changing role of microenterprises. These views are complemented by an economic perspective, which is used to model the decision-making of the obliged parties regarding their compliance with the obligations related to personal data breaches. Subsequently, the presented perspectives are merged, the findings regarding personal data breaches in the context of the Internet of Things are summarized, and possible solutions for the discovered compliance challenges are discussed.
Article
Full-text available
A simple observation, made over 30 years ago, on the growth in the number of devices per silicon die has become the central driving force of one of the most dynamic of the world's industries. Because of the accuracy with which Moore's Law has predicted past growth in IC complexity, it is viewed as a reliable method of calculating future trends as well, setting the pace of innovation, and defining the rules and the very nature of competition. And since the semiconductor portion of electronic consumer products keeps growing by leaps and bounds, the Law has aroused in users and consumers an expectation of a continuous stream of faster, better, and cheaper high-technology products. Even the policy implications of Moore's Law are significant: it is used as the baseline assumption in the industry's strategic road map for the next decade and a half
Article
Full-text available
Suitable pricing models for Internet services represent one of the main prerequisites for a successfully running implementation of a charging and accounting tool. This paper introduces general aspects influencing the choice of a pricing model and presents a survey of relevant approaches to be found in the scientific literature. Based on cost model investigations some detailed insight into price and cost issues from an Internet Service Provider's (ISP) point of view is given. Moreover, current challenges as well as problems are discussed in a practical context as investigated in the Swiss National Science Foundation project CATI -- Charging and Accounting Technology for the Internet.
Article
Full-text available
This document describes the architectural and engineering issues of building a wide-area optical Internet network as part of the CANARIE advanced networks program. Recent developments in high-density Wave Division Multiplexing (WDM) fiber systems allow for the deployment of a dedicated optical Internet network for large-volume backbone pipes that does not require an underlying multi-service SONET/SDH and ATM transport protocol. Some intrinsic characteristics of Internet traffic, such as its self-similar nature, server-bound congestion, and routing and data asymmetry, allow for highly optimized, traffic-engineered networks using individual wavelengths. By transmitting Gigabit Ethernet or SONET/SDH frames natively over WDM wavelengths that directly interconnect high-performance routers, the original concept of the Internet as an intrinsically survivable datagram network becomes possible. Traffic engineering, restoral, protection, and bandwidth management of the network must now be carried out at the IP layer...
Article
How much information is there in the world? This paper makes various estimates and compares the answers with estimates of disk and tape sales and the size of all human memory. There may be a few thousand petabytes of information all told, and the production of tape and disk will reach that level by the year 2000. So in only a few years, (a) we will be able to save everything; no information will have to be thrown out, and (b) the typical piece of information will never be looked at by a human being. A chart of the current amount of online storage compares both commercial servers [Tenopir 1997] and the Web [Markoff 1997; Mauldin 1995] with the Library of Congress. These numbers involve ASCII text files only. This chart suggests that next year the Web will be as large as LC.
Article
Recent developments in high-density Wave Division Multiplexing (WDM) fiber systems allow for the deployment of a dedicated optical Internet network for large-volume backbone pipes that does not require an underlying multi-service SONET/SDH and ATM transport protocol. Some intrinsic characteristics of Internet traffic, such as its self-similar nature, server-bound congestion, and routing and data asymmetry, allow for highly optimized, traffic-engineered networks using individual wavelengths. By transmitting Gigabit Ethernet or SONET/SDH frames natively over WDM wavelengths that directly interconnect high-performance routers, the original concept of the Internet as an intrinsically survivable datagram network becomes possible. Traffic engineering, restoral, protection, and bandwidth management of the network must now be carried out at the IP layer, and so new routing or switching protocols such as MPLS, which allow for unidirectional paths with fast restoral and protection at the IP layer, become essential for a reliable production network. The deployment of high-density WDM municipal and campus networks also gives carriers and ISPs the flexibility to offer customers an integrated and seamless set of optical Internet services.
Article
Communication, connectivity, education, entertainment, e-commerce: across a broad spectrum of activities, the commodity Internet has made a strong impact on the way we live, work, and play. Nevertheless, many classes of applications do not yet run well, and some don't run at all, over the commodity net. As new applications are developed in disciplines from medicine to engineering to the arts and sciences, their success increasingly depends on an ability to use networks effectively. In research and education collaborations all over the world, efforts are under way to make use of new network technologies and develop network services that will facilitate these advanced applications. One such effort in the United States is called the Internet2 Project [1]. The Internet2 Project was started in 1996 by 34 U.S. research universities. It has since grown to over 140 universities, and includes several corporate members and international partners. This article examines network technology used in Internet2, and looks at some of the engineering challenges involved in facilitating applications being developed by Internet2 members.
Article
Costs of communications networks are determined largely by the maximal capacities of those networks. On the other hand, the traffic those networks carry depends on how heavily those networks are used. Hence, utilization rates and utilization patterns determine the costs of providing services and, therefore, are crucial in understanding the economics of communications networks. A comparison of utilization rates and costs of various networks helps disprove many popular myths about the Internet. Although packet networks are often extolled for the efficiency of their transport, it often costs more to send data over internal corporate networks than using modems on the switched voice network. Packet networks are growing explosively not because they utilize underlying transport capacity more efficiently but because they provide much greater flexibility in offering new services. Study of utilization patterns shows there are large opportunities for increasing the efficiency of data transport and making the Internet less expensive and more useful. On the other hand, many popular techniques, such as some Quality of Service measures and ATM, are likely to be of limited usefulness.
Article
The simple model on which the Internet has operated, with all packets treated equally, and charges only for access links to the network, has contributed to its explosive growth. However, there is wide dissatisfaction with the delays and losses in current transmission. Further, new services, such as packet telephony, require assurance of considerably better service. These factors have stimulated the development of methods for providing Quality of Service (QoS), and this will make the Internet more complicated. Differential quality will also force differential pricing, and this will further increase the complexity of the system. The solution of simply putting in more capacity is widely regarded as impractical. However, it appears that we are about to enter a period of rapidly declining transmission costs. The implications of such an environment are explored by considering models with two types of demands for data transport, differing in sensitivity to congestion. Three network configurations are considered: (1) with separate networks for the two types of traffic, (2) with a single network that provides uniformly high QoS, and (3) with a single physical network that provides differential QoS. The best solution depends on the assumptions made about demand and technological progress. However, we show that the provision of uniformly high QoS to all traffic may well be best in the long run. Even when it is not the least expensive, the additional costs it imposes are usually not large. In a dynamic environment of rapid growth in traffic and decreasing prices, these costs may well be worth paying to attain the simplicity of a single network that treats all packets equally and has a simple charging mechanism.
Article
The public Internet is currently far smaller, in both capacity and traffic, than the switched voice network. The private line networks are considerably larger in aggregate capacity than the Internet. They are about as large as the voice network in the U.S., but carry less traffic. On the other hand, the growth rate of traffic on the public Internet, while lower than is often cited, is still about 100% per year, much higher than for traffic on other networks. Hence, if present growth trends continue, data traffic in the U.S. will overtake voice traffic around the year 2002 and will be dominated by the Internet. 1. Introduction: There are many predictions of when data traffic will overtake voice. It either happened yesterday, or will happen today, tomorrow, next week, or perhaps only in 2007. There are also wildly differing estimates for the growth rate of the Internet. The number of Internet users is variously given as increasing at 20 or 50 percent per year, and the traffic on the I...
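To make the arithmetic behind the crossover claim concrete, here is a small, purely illustrative calculation, assuming data traffic doubles yearly while voice grows about 10% per year from an assumed 20:1 voice-to-data starting ratio in 1997; these starting figures are assumptions for the demo, not numbers taken from the paper.

```python
import math

# Rough crossover calculation behind the claim above: if data traffic grows
# ~100% per year and voice only a few percent, the year data overtakes voice
# depends mainly on the starting ratio. The 20:1 voice-to-data ratio in 1997
# and the 10% voice growth rate below are illustrative assumptions, not
# figures taken from the paper.

data_growth, voice_growth = 2.00, 1.10   # yearly multipliers
voice_over_data_1997 = 20.0              # assumed: voice is 20x data in 1997

years_to_crossover = math.log(voice_over_data_1997) / math.log(data_growth / voice_growth)
print(f"crossover after ~{years_to_crossover:.1f} years, i.e. around "
      f"{1997 + years_to_crossover:.0f}")
```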
Article
"In the future data communication networks interoperability becomes critical from both technological and business strategy perspective. Significance of interoperability has to be evaluated in terms of the overall economic performance of the system. In this paper we will present our view of the future of data communication networks, challenges in interoperability, and the economic challenges that will arise in this 'real time' economy. We will provide insights derived from general equilibrium approach to these networks, e.g., what are the impacts of competition and interoperability on the competing entities which will own different parts of the network. We believe that potential excessive congestion is the single biggest obstacle in the feasibility of a global, interoperable network. We will discuss the simulation experiments we have carried out to determine approximate priority prices in real-time and discuss the potential benefits in managing congestion through such a pricing scheme. We define a framework for the policy research for an interoperable network which may facilitate electronic commerce. We also discuss the issues related to the market structure such as monopoly, duopoly, and more competitive ownership of the parts of the network and its impact on interoperability, efficiency, and economic performance of the system."
Conference Paper
This paper reexamines the rules of thumb for the design of data storage systems. Briefly, it looks at storage, processing, and networking costs, ratios, and trends, with a particular focus on performance and price/performance. Amdahl's ratio laws for system design need only slight revision after 35 years, the major change being the increased use of RAM. An analysis also indicates storage should be used to cache both database and Web data to save disk bandwidth, network bandwidth, and people's time. Surprisingly, the 5-minute rule for disk caching becomes a cache-everything rule for Web caching.
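For context, one common formulation of the break-even interval behind the 5-minute rule can be computed directly; the page size, disk throughput, and prices below are illustrative 1997-era assumptions rather than figures from the paper.

```python
# One common formulation of the break-even interval behind the "5-minute
# rule" mentioned above: keep a page cached in RAM if it is re-referenced
# more often than this interval. The prices and disk parameters below are
# illustrative assumptions, not figures from the paper.

pages_per_mb_of_ram = 128          # 8 KB pages
accesses_per_second_per_disk = 64  # random I/Os per second
price_per_disk_drive = 2000.0      # dollars
price_per_mb_of_ram = 15.0         # dollars

break_even_seconds = (
    (pages_per_mb_of_ram / accesses_per_second_per_disk)
    * (price_per_disk_drive / price_per_mb_of_ram)
)
print(f"break-even reference interval: ~{break_even_seconds / 60:.0f} minutes")
```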
Article
A recently completed single-site study has yielded information about how Internet traffic will evolve as new users discover the Internet and existing users find new ways to incorporate the Internet into their work patterns. The author reviews existing statistics and studies of network growth, which show that network traffic generally grows exponentially with time, at least until the network carrying capacity is reached. He then describes how he captured and reduced the data used in this study. The following points are also addressed: the overall growth in the site's wide-area traffic; the appearance of periodic traffic; the growth in network use by individual computers or users; and the changing geographic profile of the traffic. The implications and limitations of the results are also summarized.
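As a sketch of how the "traffic grows exponentially with time" observation is typically checked (not the author's actual method), the snippet below fits an exponential growth model to synthetic monthly traffic by linear regression on the logarithm, assuming NumPy is available.

```python
import numpy as np

# Sketch of checking the "traffic grows exponentially with time" observation:
# fit a straight line to log2(traffic) versus time and read the growth rate
# off the slope. The monthly measurements below are synthetic.

months = np.arange(36)
true_doubling_time = 12.0                          # months, i.e. ~2x per year
traffic = 50.0 * 2 ** (months / true_doubling_time)
traffic *= np.exp(np.random.default_rng(1).normal(0, 0.05, months.size))  # noise

slope, _ = np.polyfit(months, np.log2(traffic), 1)
print(f"estimated doubling time: {1.0 / slope:.1f} months")
print(f"estimated yearly growth factor: {2 ** (12 * slope):.2f}x")
```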
Article
Moore's (1965) law, which predicts that the number of transistors on an integrated circuit doubles every 18-24 months, has held remarkably well over three decades of semiconductor device production. Using historical data, a similar prediction is made for modem speeds, and in particular for the speed at which data can be transmitted over twisted wire pairs, where it is found that modem data rates have historically doubled every 1.9 years. This result suggests that rapid increases in bandwidth delivered to subscribers over the coming decades will have profound societal and economic effects, just as the development of the integrated circuit and the microprocessor have had. Nevertheless, the regulated aspect of telecommunications may limit the growth of bandwidth and the deployment of high-speed modems which deliver services over existing twisted wire pairs. The cable environment, which from a data transmission perspective is, generally speaking, less regulated, may allow for deployment of modems which support rates in excess of what is predicted by our Moore's law analogy. In this article we examine Moore's law as applied to modem technology, and how regulation may affect the deployment of broadband services
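The stated trend can be written as a simple compound-growth relation, rate(t) = rate_0 * 2^((t - t_0)/1.9); the short sketch below (illustrative only) evaluates the growth factors this implies over several horizons.

```python
# The "modem data rates double every 1.9 years" trend quoted above, expressed
# as compound growth: rate(t) = rate_0 * 2 ** ((t - t_0) / 1.9). The loop
# prints the growth factor this implies over a few horizons.

doubling_years = 1.9
for horizon in (1, 5, 10):
    factor = 2 ** (horizon / doubling_years)
    print(f"{horizon:2d} year(s): about {factor:.1f}x faster")
```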
Article
The Internet is the latest in a long succession of communication technologies. The goal of this work is to draw lessons from the evolution of all these services. Little attention is paid to technology as such, since that has changed radically many times. Instead, the stress is on the steady growth in volume of communication, the evolution in the type of traffic sent, the qualitative change this growth produces in how people treat communication, and the evolution of pricing. The focus is on the user, and in particular on how quality and price differentiation have been used by service providers to influence consumer behavior, and how consumers have reacted.
Article
The popular press often extols packet networks as much more efficient than switched voice networks in utilizing transmission lines. This impression is reinforced by the delays experienced on the Internet and the famous graphs for traffic patterns through the major exchange points on the Internet, which suggest that networks are running at full capacity. This paper shows the popular impression is incorrect; data networks are very lightly utilized compared to the telephone network. Even the backbones of the Internet are run at lower fractions (10% to 15%) of their capacity than the switched voice network (which operates at over 30% of capacity on average). Private line networks are utilized far less intensively (at 3% to 5%). Further, this situation is likely to persist. The low utilization of data networks compared to voice phone networks is not a symptom of waste. It comes from different patterns of use, lumpy capacity of transmission facilities, and the high growth rate of the indu...