ACM SIGCOMM Computer Communication Review

Published by Association for Computing Machinery
Online ISSN: 0146-4833
Publications
Conference Paper
Recent advances in network technology have significantly increased network bandwidth and enriched the set of network functions, enabling a rich set of computer-assisted collaborations. DiCE (Distributed Collaborative Environment) is a multimedia collaboration environment that is being developed at the IBM T. J. Watson Research Center. The challenge of DiCE is to exploit the advanced functionality enabled by high-speed networking and multimedia computing technologies to develop effective computer-assisted systems that significantly increase the productivity of collaborations. The main objectives of DiCE are, first, to design an efficient multimedia collaboration environment, and then to prototype the design as the base for the development of a workstation-based, hub-free (fully distributed) multimedia conferencing system.
 
Conference Paper
This paper describes definitive and systematic rules for selecting HDTV still test pictures to be used for properly assessing high-quality factors. Using the Semantic Differential Method, we extracted three elementary psychological factors, "vividness," "comfortableness," and "sharpness," that dominate the impression of HDTV still pictures. HVC-signal characteristics in Munsell color space that correlate closely with these factors were then defined as selection guidelines for pictures symbolizing high-quality TV.
 
Conference Paper
A novel message-passing protocol that guarantees at-most-once message delivery, without requiring communication to establish connections, is described. The authors discuss how to use these messages to implement higher-level primitives such as at-most-once remote procedure calls (RPC) and describe an implementation of at-most-once RPCs using their method. Performance measurements indicate that at-most-once RPCs can be provided at the same cost as less desirable RPCs that do not guarantee at-most-once execution. The method is based on the assumption that clocks throughout the system are loosely synchronized. Modern protocols provide good bounds on clock skew with high probability; the present method depends on the bound for performance but not for correctness.
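A minimal sketch of the clock-based duplicate detection, assuming each message carries the sender's clock value and that DELTA bounds clock skew plus maximum message lifetime (the names and values are illustrative, not the paper's):

```python
import time

# Illustrative at-most-once receiver based on loosely synchronized clocks.
# Assumption: DELTA bounds clock skew plus the maximum message lifetime, so
# per-sender state older than DELTA can be forgotten safely.
DELTA = 5.0  # seconds (assumed bound)

class AtMostOnceReceiver:
    def __init__(self):
        self.last_seen = {}   # sender id -> latest accepted timestamp

    def accept(self, sender, ts):
        now = time.time()
        if ts < now - DELTA:
            return False      # too old: its state may already be discarded
        if ts <= self.last_seen.get(sender, now - DELTA):
            return False      # duplicate or stale copy of an accepted message
        self.last_seen[sender] = ts
        return True

    def garbage_collect(self):
        cutoff = time.time() - DELTA
        self.last_seen = {s: t for s, t in self.last_seen.items() if t >= cutoff}
```

Rejecting duplicates relies only on the retained per-sender state; the clock bound merely decides how soon that state can be reclaimed, which mirrors the abstract's point that the skew bound affects performance rather than correctness.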
 
Conference Paper
The authors describe the architecture and give details on the hardware implementation of a special packet video transceiver, which is appropriate for the communication of broadcast television signals between two stations of an integrated services packet network. The system performs line processing and can detect and/or correct various kinds of errors concerning the reception of packets at the receiver, basically by replacing a wrong or missing packet or line with previously received information from the preceding scan line. Additional information is given about the design and operation of a special frequency-regulation system which the receiver uses to trigger a video digital-to-analog converter that reconverts the digitized video signal into analog form, as well as for a master clock source. This system evaluates the transmitter scanning frequency, basing the evaluation on the timing of the incoming packets. These packets are transmitted through a noisy channel (isochronous network) and are affected by random but bounded delay (jitter).
 
Conference Paper
This work outlines the development and performance validation of an architecture for distributed shared memory in a loosely coupled distributed computing environment. This distributed shared memory may be used for communication and data exchange between communicants on different computing sites; the mechanism will operate transparently and in a distributed manner. This paper describes the architecture of this mechanism and the metrics that will be used to measure its performance. We also discuss a number of issues related to the overall design and the research contribution such an implementation can provide to the computer science field.
 
Conference Paper
Two basic synchronization problems are involved in distributed multimedia systems. One of them is simultaneous real-time data delivery. Simultaneous Real-Time Data Delivery (SRTDD) refers to delivering multimedia data in different data streams belonging to the same time interval simultaneously. In this paper, a complete SRTDD control scheme for multimedia transmission is established. We propose a layered architecture, which can be mapped to the OSI model, to support SRTDD control. Segment delivery protocols for real-time multimedia data streams are addressed. A practical segmentation method is also developed for real-time voice and video.
 
Article
This paper describes a strategy that was designed, implemented, and presented at the Mobile Ad Hoc Networking Interoperability and Cooperation (MANIAC) Challenge 2013. The theme of the MANIAC Challenge 2013 was "Mobile Data Offloading," and it consisted of developing and comparatively evaluating strategies to offload infrastructure access points via customer ad hoc forwarding using handheld devices (e.g., tablets and smartphones). According to the challenge rules, a hop-by-hop bidding contest (or "auction") should decide the path of each data packet towards its destination. Consequently, each team should rely on other teams' willingness to forward packets for them in order to get their traffic across the network. In this application scenario, the incentive for customers to join the ad hoc network is discounted monthly fees, while operators should benefit from decreased infrastructure costs. Following the rules of the MANIAC Challenge 2013, this paper proposes a strategy based on the concept of how "tight" a node is to successfully deliver a packet to its destination within a given deadline. This "tightness" idea relies on a shortest-path analysis of the underlying network graph, and it is used to define three sub-strategies that specify (a) how to participate in an auction; (b) how to announce an auction; and (c) how to decide who wins the announced auction. The proposed strategy seeks to minimize network resource utilization and to promote cooperative behavior among participating nodes.
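As a purely illustrative reading of the "tightness" idea (the paper's exact definition may differ), one can relate the packet's remaining deadline to the time a shortest-path delivery would need from the current node; the per-hop delay below is an arbitrary assumption:

```python
import heapq

def shortest_hops(graph, src, dst):
    """graph: dict node -> list of neighbours; returns hop count or None."""
    dist, frontier = {src: 0}, [(0, src)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u == dst:
            return d
        for v in graph[u]:
            if v not in dist or d + 1 < dist[v]:
                dist[v] = d + 1
                heapq.heappush(frontier, (d + 1, v))
    return None

def tightness(graph, node, dst, remaining_deadline, per_hop_delay=0.05):
    hops = shortest_hops(graph, node, dst)
    if hops is None:
        return float("inf")                 # unreachable: infinitely tight
    return hops * per_hop_delay / remaining_deadline   # > 1 means likely to miss

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(tightness(graph, "a", "d", remaining_deadline=0.5))   # 0.3: comfortable slack
```

A node could then bid aggressively in auctions for packets it is "loose" on and decline those it is too tight to deliver in time.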
 
Figure: Targeted keyword-based advertising.
Article
Online advertising is currently the richest source of revenue for many Internet giants. The increased number of online businesses, specialized websites, and modern profiling techniques have all contributed to an explosion in the income ad brokers draw from online advertising. The single biggest threat to this growth, however, is click-fraud. Trained botnets and individuals are hired by click-fraud specialists either to maximize the revenue certain users derive from the ads they publish on their websites, or to launch attacks between competing businesses. In this note we wish to raise the awareness of the networking research community about potential research areas within the online advertising field. As an example strategy, we present Bluff ads: a class of ads that join forces in order to increase the effort level for click-fraud spammers. Bluff ads are either targeted ads with irrelevant display text, or ads with highly relevant display text but irrelevant targeting information. They act as a litmus test for the legitimacy of the individual clicking on the ads. Together with standard threshold-based methods, such fake ads help to decrease click-fraud levels.
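The litmus-test idea can be made concrete with a toy detector (ours, not the paper's system): legitimate users almost never click an ad whose text or targeting is irrelevant, so an unusually high click-through rate on bluff ads flags a suspicious visitor. The thresholds are arbitrary assumptions:

```python
# Toy bluff-ad detector: flag visitors whose click-through rate on bluff ads
# is far above what a legitimate user would produce. Thresholds are assumed.
MIN_IMPRESSIONS = 20    # need enough bluff-ad impressions before judging
BLUFF_CTR_LIMIT = 0.05  # click-through rate on bluff ads above this => flag

def is_suspicious(impressions):
    """impressions: list of (is_bluff, was_clicked) pairs for one visitor."""
    bluff = [(b, c) for b, c in impressions if b]
    if len(bluff) < MIN_IMPRESSIONS:
        return False                          # not enough evidence yet
    bluff_ctr = sum(c for _, c in bluff) / len(bluff)
    return bluff_ctr > BLUFF_CTR_LIMIT

# A bot that clicks every fifth ad regardless of relevance:
bot = [(i % 7 == 0, i % 5 == 0) for i in range(200)]
print(is_suspicious(bot))   # True: roughly 21% CTR on bluff ads
```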
 
Article
For multicomputer applications, the most important performance parameter of a network is the latency for short messages. In this paper, we present an analysis of communication latency using measurements of the Nectar system. Nectar is a high-performance multicomputer built around a high-bandwidth crosspoint network. Nodes are connected to the Nectar network using network coprocessors that are primarily responsible for protocol processing, but which can also execute application code. This architecture allows us to analyse message latency both between workstations with an outboard protocol engine and between lightweight nodes with a minimal runtime system and a fast, simple network interface (the coprocessors). We study how much context switching, buffer management, and protocol processing contribute to the communication latency, and discuss how the latency is influenced by the protocol's implementation. We also discuss and analyse two other network performance measures: communication overhead and throughput.
 
Figure: Throughput of the RandomReset-CSMA policy versus p_0, for j = 0.
Article
Throughput improvement of wireless LANs has been a constant area of research. Most of the work in this area focuses on designing throughput-optimal schemes for fully connected networks (no hidden nodes). However, we demonstrate that the proposed schemes, though they perform optimally in fully connected networks, achieve significantly lower throughput than even standard IEEE 802.11 in a network with hidden nodes. This motivates the need for designing schemes that provide near-optimal performance even when hidden nodes are present. The primary reason for the failure of existing protocols in the presence of hidden nodes is that these protocols are based on the model developed by Bianchi; however, this model does not hold when hidden nodes exist. Moreover, analyzing networks with hidden nodes is still an open problem. Thus, designing throughput-optimal schemes in networks with hidden nodes is particularly challenging. The novelty of our approach is that it is not based on any underlying mathematical model; rather, it directly tunes the control variables so as to maximize the throughput. We demonstrate that this model-independent approach achieves maximum throughput in networks with hidden terminals as well. Apart from this major contribution, we present stochastic-approximation-based algorithms for achieving weighted fairness in fully connected networks. We also present a throughput-optimal exponential-backoff-based random access algorithm. We demonstrate that the exponential-backoff-based scheme may outperform an optimal p-persistent scheme in networks with hidden terminals. This demonstrates the merit of exponential-backoff-based random access schemes, which was deemed unnecessary by the results shown by Bianchi. Comment: 16 pages, 13 figures
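A minimal sketch of the model-free tuning idea (ours, not the authors' algorithm): adjust the attempt probability by stochastic approximation using only measured throughput, with a toy throughput function standing in for an actual measurement interval:

```python
import random

def measure_throughput(p):
    # Toy stand-in for measuring throughput with attempt probability p
    # (slotted-ALOHA-like expression for 10 saturated nodes).
    n = 10
    return n * p * (1 - p) ** (n - 1)

def tune(p=0.5, rounds=200):
    for k in range(1, rounds + 1):
        a = 0.5 / k                      # decreasing step size
        c = 0.1 / k ** 0.25              # decreasing perturbation size
        delta = random.choice([-1, 1])
        up = measure_throughput(min(1.0, p + c * delta))
        down = measure_throughput(max(0.0, p - c * delta))
        grad = (up - down) / (2 * c * delta)   # SPSA-style gradient estimate
        p = min(1.0, max(0.001, p + a * grad))
    return p

print(tune())   # should drift toward the optimum p = 1/10 for this toy model
```

No collision or queueing model enters the update; the controller only needs a throughput measurement per interval, which is what lets the same loop keep working when hidden terminals break the analytical models.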
 
Article
In heterogeneous networks, achieving congestion avoidance is difficult because congestion feedback from one subnetwork may have no meaning to sources on other subnetworks. We propose using changes in round-trip delay as an implicit feedback signal. Using a black-box model of the network, we derive an expression for the optimal window as a function of the gradient of the delay-window curve. The problems of the selfish optimum and the social optimum are also addressed. It is shown that without careful design it is possible to get into a race condition during heavy congestion, where each user wants more resources than others, thereby leading to diverging congestion. It is shown that congestion avoidance using round-trip delay is a promising approach. The approach has the advantage that there is absolutely no overhead for the network itself. It is exemplified by a simple scheme. The performance of the scheme is analyzed using a simulation model. The scheme is shown to be efficient, fair, convergent, and adaptive to changes in network configuration. The scheme as described works only for networks that can be modelled as queueing servers with constant service times. Further research is required to extend it for implementation in practical networks. Several directions for future research are suggested.
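A minimal sketch of delay-gradient window control in this spirit (the threshold and decrease factor are our assumptions, not the paper's): grow the window while an extra unit of window still buys a proportionate gain, and back off once the normalized delay gradient says otherwise:

```python
class DelayGradientWindow:
    # Illustrative congestion-avoidance loop driven only by round-trip delay.
    def __init__(self, window=1.0):
        self.window = window
        self.prev_window = None
        self.prev_rtt = None

    def update(self, rtt):
        """Call once per round trip with the measured round-trip delay."""
        if self.prev_rtt is not None and self.window != self.prev_window:
            # Normalized delay gradient: (dD/dW) * (W/D)
            ndg = ((rtt - self.prev_rtt) / (self.window - self.prev_window)
                   * self.window / rtt)
            increase = ndg <= 0.5          # assumed threshold between regimes
        else:
            increase = True
        self.prev_window, self.prev_rtt = self.window, rtt
        self.window = self.window + 1 if increase else max(1.0, self.window * 0.875)
        return self.window
```

Because the control signal is the sender's own delay measurement, nothing needs to be added to packets or routers, matching the abstract's claim of zero overhead for the network itself.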
 
Article
As the rollout of secure route origin authentication with the RPKI slowly gains traction among network operators, there is a push to standardize secure path validation for BGP (i.e., S*BGP: S-BGP, soBGP, BGPSEC, etc.). Origin authentication already does much to improve routing security. Moreover, the transition to S*BGP is expected to be long and slow, with S*BGP coexisting in "partial deployment" alongside BGP for a long time. We therefore use a theoretical and experimental approach to study the security benefits provided by partially deployed S*BGP, vis-a-vis those already provided by origin authentication. Because routing policies have a profound impact on routing security, we use a survey of 100 network operators to find the policies that are likely to be most popular during partial S*BGP deployment. We find that S*BGP provides only meagre benefits over origin authentication when these popular policies are used. We also study the security benefits of other routing policies, provide prescriptive guidelines for partially deployed S*BGP, and show how interactions between S*BGP and BGP can introduce new vulnerabilities into the routing system.
 
Article
Content-Centric Networking (CCN) is an alternative to today's IP-based, packet-switched, host-centric Internet networking. One key feature of CCN is its focus on content distribution, which dominates current Internet traffic and is not well served by IP. Named Data Networking (NDN) is an instance of CCN; it is an ongoing research effort aiming to design and develop a full-blown candidate future Internet architecture. Although NDN emphasizes content distribution, it must also support other types of traffic, such as conferencing (audio, video) as well as more traditional applications, such as remote login and file transfer. However, the suitability of NDN for applications that are not obviously or primarily content-centric has not been fully explored. We believe that such applications are not going away any time soon. In this paper, we explore NDN in the context of a class of applications that involve low-latency bi-directional (point-to-point) communication. Specifically, we propose a few architectural amendments to NDN that provide significantly better throughput and lower latency for this class of applications by reducing routing and forwarding costs. The proposed approach is validated via experiments.
 
Table: The experiment traces.
Table: Model parameters for content classes 1–5.
Figure: Cache size vs. hit probability under the LRU policy for TRACE 4 (similar results are obtained for all traces).
Article
The dimensioning of caching systems represents a difficult task in the design of infrastructures for content distribution in the current Internet. This paper addresses the problem of defining a realistic arrival process for the content requests generated by users, due to its critical importance for both analytical and simulative evaluations of the performance of caching systems. First, with the aid of YouTube traces collected inside operational residential networks, we identify the characteristics of real traffic that need to be considered, or that can be safely neglected, in order to accurately predict the performance of a cache. Second, we propose a new parsimonious traffic model, named the Shot Noise Model (SNM), that natively captures the dynamics of content popularity, whilst still being simple enough to be employed effectively for both analytical and scalable simulative studies of caching systems. Finally, our results show that the SNM accounts for the temporal locality observed in real traffic much better than existing approaches.
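The request-generation idea can be sketched as follows (our simplification with arbitrary parameters, not the paper's calibrated model): contents appear as a Poisson process, and each content then draws requests from a Poisson process whose rate is a rectangular "shot" of duration L and mean volume V:

```python
import random

CONTENT_ARRIVAL_RATE = 2.0   # new contents per time unit (assumed)
MEAN_VOLUME = 50             # average requests per content, V (assumed)
LIFESPAN = 10.0              # duration of the popularity pulse, L (assumed)
HORIZON = 100.0

def generate_requests():
    requests, t, content_id = [], 0.0, 0
    while t < HORIZON:
        t += random.expovariate(CONTENT_ARRIVAL_RATE)    # content birth
        content_id += 1
        r = t
        while True:                                      # requests at rate V / L
            r += random.expovariate(MEAN_VOLUME / LIFESPAN)
            if r > t + LIFESPAN:
                break
            requests.append((r, content_id))
    requests.sort()
    return requests

trace = generate_requests()
print(len(trace), "requests generated over", HORIZON, "time units")
```

Temporal locality falls out naturally: a content is requested only while its shot is active, unlike a request process with a fixed per-content popularity.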
 
Article
We present a new, systematic approach for analyzing network topologies. We first introduce the dK-series of probability distributions specifying all degree correlations within d-sized subgraphs of a given graph G. Increasing values of d capture progressively more properties of G at the cost of more complex representation of the probability distribution. Using this series, we can quantitatively measure the distance between two graphs and construct random graphs that accurately reproduce virtually all metrics proposed in the literature. The nature of the dK-series implies that it will also capture any future metrics that may be proposed. Using our approach, we construct graphs for d=0,1,2,3 and demonstrate that these graphs reproduce, with increasing accuracy, important properties of measured and modeled Internet topologies. We find that the d=2 case is sufficient for most practical purposes, while d=3 essentially reconstructs the Internet AS- and router-level topologies exactly. We hope that a systematic method to analyze and synthesize topologies offers a significant improvement to the set of tools available to network topology and protocol researchers.
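To make the first members of the series concrete (our own sketch): d = 0 is the average degree, d = 1 the degree distribution, and d = 2 the joint degree distribution of the endpoints of each edge:

```python
from collections import Counter

def dk1_dk2(edges):
    """Compute the 1K and 2K statistics of an undirected edge list."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    dk1 = Counter(degree.values())                        # degree -> number of nodes
    dk2 = Counter(tuple(sorted((degree[u], degree[v])))   # (k1, k2) -> number of edges
                  for u, v in edges)
    return dk1, dk2

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]   # arbitrary example graph
dk1, dk2 = dk1_dk2(edges)
print(dict(dk1))   # {3: 1, 2: 3, 1: 1}: one degree-3 node, three degree-2, one degree-1
print(dict(dk2))   # {(2, 3): 3, (2, 2): 1, (1, 2): 1}
```

Graphs constructed to match the chosen statistic reproduce more of the original topology as d grows, which is the sense in which d = 3 essentially pins down the measured Internet graphs.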
 
Figure: An overview of the main CON features: content routing, caching, and content signatures. Content is addressed by name (x).
Figure: Topology of cache attacks.
Article
As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and we analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality.
 
Figure: A Virtual Private Network (VPN) based on Quantum Key Distribution.
Figure: Our first quantum cryptographic link, based on interferometric phase modulation of single photons.
Figure: The QKD protocol stack.
Figure: Simplified schematic of the IKE/IPsec architecture for Virtual Private Networks.
Article
BBN, Harvard, and Boston University are building the DARPA Quantum Network, the world's first network that delivers end-to-end network security via high-speed Quantum Key Distribution, and are testing that network against sophisticated eavesdropping attacks. The first network link has been up and steadily operational in our laboratory since December 2002. It provides a Virtual Private Network between private enclaves, with user traffic protected by a weak-coherent implementation of quantum cryptography. This prototype is suitable for deployment in metro-size areas via standard telecom (dark) fiber. In this paper, we introduce quantum cryptography, discuss its relation to modern secure networks, and describe its unusual physical layer, its specialized quantum cryptographic protocol suite (quite interesting in its own right), and our extensions to IPsec to integrate it with quantum cryptography.
 
Article
Several users of our AS relationship inference data (http://www.caida.org/data/active/as-relationships/), released with cs/0604017, asked us why it contained AS relationship cycles, e.g., cases where AS A is a provider of AS B, B is a provider of C, and C is a provider of A, or other cycle types. Having been answering these questions in private communications, we have eventually decided to write down our answers here for future reference.
 
Figure: Quarterly trunk calls on weekdays in the United Kingdom, December 1975.
Article
We explore the problem of sharing network resources when users' preferences lead to temporally concentrated loads, resulting in an inefficient use of the network. In such cases external incentives can be supplied to smooth out demand, obviating the need for expensive technological mechanisms. Taking a game-theoretic approach, we consider a setting in which bandwidth or access to service is available during different time slots at a fixed cost, but all agents have a natural preference for choosing the same time slot. We present four mechanisms that motivate users to distribute the load by probabilistically waiving the cost for each time slot, and analyze the equilibria that arise under these mechanisms.
 
Article
Denial of Service (DoS) attacks frequently happen on the Internet, paralyzing Internet services and causing millions of dollars of financial loss. This work presents NetFence, a scalable DoS-resistant network architecture. NetFence uses a novel mechanism, secure congestion policing feedback, to enable robust congestion policing inside the network. Bottleneck routers update the feedback in packet headers to signal congestion, and access routers use it to police senders' traffic. Targeted DoS victims can use the secure congestion policing feedback as capability tokens to suppress unwanted traffic. When compromised senders and receivers organize into pairs to congest a network link, NetFence provably guarantees a legitimate sender its fair share of network resources without keeping per-host state at the congested link. We use a Linux implementation, ns-2 simulations, and theoretical analysis to show that NetFence is an effective and scalable DoS solution: it reduces the amount of state maintained by a congested router from per-host to at most per-(Autonomous System). Comment: The original paper is published in SIGCOMM 2010
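One way to make such feedback unforgeable, sketched here with hypothetical field names and simplistic key handling (not NetFence's actual header format or key management), is for the bottleneck router to cover its congestion signal with a MAC that the sender's access router can later verify:

```python
import hmac, hashlib

SHARED_KEY = b"demo-key-shared-between-routers"   # assumed out-of-band key

def stamp_feedback(src, dst, ts, congested):
    """Bottleneck router: write the congestion signal and protect it with a MAC."""
    msg = f"{src}|{dst}|{ts}|{int(congested)}".encode()
    mac = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:8]
    return {"src": src, "dst": dst, "ts": ts, "congested": congested, "mac": mac}

def verify_feedback(fb):
    """Access router: accept the feedback only if the MAC checks out."""
    msg = f"{fb['src']}|{fb['dst']}|{fb['ts']}|{int(fb['congested'])}".encode()
    mac = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(mac, fb["mac"])

fb = stamp_feedback("10.0.0.1", "10.0.1.9", 1700000000, congested=True)
print(verify_feedback(fb))    # True
fb["congested"] = False       # a compromised host cannot clear the signal...
print(verify_feedback(fb))    # False: ...without invalidating the MAC
```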
 
Article
Distributed denial of service attacks are often considered a security problem. While this may be the way to view the problem with today's Internet, new network architectures attempting to address the issue should view it as a scalability problem. In addition, they need to address the problem based on a rigorous foundation.
 
Article
In a virtualized infrastructure where physical resources are shared, a single physical server failure will terminate several virtual servers, crippling the virtual infrastructures that contained those virtual servers. In the worst case, more failures may cascade as the remaining servers become overloaded. To guarantee some level of reliability, each virtual infrastructure should, at instantiation, be augmented with backup virtual nodes and links that have sufficient capacity. This ensures that, when physical failures occur, sufficient computing resources are available and the virtual network topology is preserved. However, doing so may greatly reduce the utilization of the physical infrastructure. This can be circumvented if backup resources are pooled, shared across multiple virtual infrastructures, and intelligently embedded in the physical infrastructure. These techniques can reduce the physical footprint of virtual backups while guaranteeing reliability.
 
Figure: WLAN-probe decision tree.
Figure: Probability ratio (p_c/p_u) used to distinguish between low-SNR and SHT conditions.
Article
Common WLAN pathologies include low signal-to-noise ratio, congestion, hidden terminals, and interference from non-802.11 devices and phenomena. Prior work has focused on the detection and diagnosis of such problems using layer-2 information from 802.11 devices and special-purpose access points and monitors, which may not be generally available. Here, we investigate a user-level approach: is it possible to detect and diagnose 802.11 pathologies with strictly user-level active probing, without any cooperation from, and without any visibility into, layer-2 devices? In this paper, we present preliminary but promising results indicating that such diagnostics are feasible.
 
Article
In this document we study the application of weighted proportional fairness to data flows in the Internet. We let the users set the weights of their connections in order to maximise the utility they get from the network. When combined with a pricing scheme where connections are billed by weight and time, such a system is known to maximise the total utility of the network. Our case study is a national Web cache server connected to long-distance links. We propose two ways of weighting TCP connections by manipulating some parameters of the protocol, and we present results from simulations and prototypes. We finally discuss how proportional fairness could be used to implement an Internet with differentiated services.
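For a single bottleneck of capacity C, the weighted proportionally fair allocation, i.e. the rates maximising the sum of w_i log x_i subject to the capacity constraint, gives each connection a share proportional to its weight; a tiny sketch:

```python
def weighted_pf_shares(weights, capacity):
    """Weighted proportionally fair shares on one bottleneck link."""
    total = sum(weights.values())
    return {conn: capacity * w / total for conn, w in weights.items()}

# Example: three connections on a 10 Mb/s link; doubling a weight doubles the
# share (and, under the per-weight-and-time pricing discussed above, the bill).
print(weighted_pf_shares({"a": 1, "b": 1, "c": 2}, 10.0))
# {'a': 2.5, 'b': 2.5, 'c': 5.0}
```

In practice the paper weights TCP connections by manipulating protocol parameters rather than computing shares centrally; the sketch only shows the allocation such weighting aims for.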
 
Article
Modeling Internet growth is important both for understanding the current network and for predicting and improving its future. To date, Internet models have typically attempted to explain a subset of the following characteristics: network structure, traffic flow, geography, and economy. In this paper we present a discrete, agent-based model that integrates all of them. We show that the model generates networks with topologies, dynamics, and (more speculatively) spatial distributions that are similar to those of the Internet.
 
Article
Today's data centers face extreme challenges in providing low latency. However, fair sharing, a principle commonly adopted in current congestion control protocols, is far from optimal for satisfying latency requirements. We propose Preemptive Distributed Quick (PDQ) flow scheduling, a protocol designed to complete flows quickly and meet flow deadlines. PDQ enables flow preemption to approximate a range of scheduling disciplines. For example, PDQ can emulate a shortest job first algorithm to give priority to the short flows by pausing the contending flows. PDQ borrows ideas from centralized scheduling disciplines and implements them in a fully distributed manner, making it scalable to today's data centers. Further, we develop a multipath version of PDQ to exploit path diversity. Through extensive packet-level and flow-level simulation, we demonstrate that PDQ significantly outperforms TCP, RCP and D³ in data center environments. We further show that PDQ is stable, resilient to packet loss, and preserves nearly all its performance gains even given inaccurate flow information.
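The preemption idea can be illustrated with a toy centralized scheduler (ours; PDQ itself realizes this in a fully distributed, switch-assisted way): at each step the most critical flows, here ranked by remaining size as in shortest-job-first, receive the link while contending flows are paused:

```python
def schedule_step(flows, capacity):
    """flows: dict name -> remaining size, in the units the link sends per step.
    Returns the per-flow rate for this step."""
    rates = {name: 0.0 for name in flows}          # contending flows stay paused
    remaining = capacity
    for name in sorted(flows, key=flows.get):      # smallest remaining job first
        if remaining <= 0:
            break
        rates[name] = min(remaining, flows[name])  # short flows may share one step
        remaining -= rates[name]
    return rates

flows = {"short": 2.0, "medium": 10.0, "long": 50.0}
print(schedule_step(flows, capacity=10.0))
# {'short': 2.0, 'medium': 8.0, 'long': 0.0}: the long flow is preempted
```

Ranking by deadline instead of remaining size would approximate earliest-deadline-first, which is how deadlines can be honoured within the same pause-and-resume machinery.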
 
Article
Efficient data retrieval in an unstructured peer-to-peer system like Freenet is a challenging problem. In this paper, we study the impact of workload on the performance of Freenet. We find that there is a steep reduction in the hit ratio of document requests with increasing load in Freenet. We show that a slight modification of Freenet's routing-table cache replacement scheme (from LRU to a replacement scheme that enforces clustering in the key space) can significantly improve performance. Our modification is based on intuition from small-world models and theoretical results by Kleinberg; our replacement scheme forces the routing tables to resemble neighbor relationships in a small-world acquaintance graph, namely clustering with light randomness. Our simulations show that this new scheme improves the request hit ratio dramatically while keeping the small average number of hops per successful request comparable to LRU. A simple, highly idealized model of Freenet under clustering with light randomness proves that the expected message delivery time in Freenet is O(log n) if the routing tables satisfy the small-world model and have size Θ(log² n).
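A minimal sketch of the replacement rule as we read it from the abstract (not Freenet's actual code): the incoming entry usually evicts the cached key farthest from the node's own key, clustering the routing table around it, while an occasional random eviction supplies the long-range links of a small-world graph:

```python
import random

RANDOMNESS = 0.1      # assumed probability of a random (non-clustered) eviction
KEY_SPACE = 2 ** 16   # toy circular key space

def distance(a, b):
    d = abs(a - b) % KEY_SPACE
    return min(d, KEY_SPACE - d)

def insert(table, node_key, new_key, new_ref, capacity):
    """table: dict key -> reference to the node believed to hold that key."""
    if new_key in table or len(table) < capacity:
        table[new_key] = new_ref
        return
    if random.random() < RANDOMNESS:
        victim = random.choice(list(table))                        # light randomness
    else:
        victim = max(table, key=lambda k: distance(k, node_key))   # clustering
    del table[victim]
    table[new_key] = new_ref
```

With mostly clustered tables plus a few random far links, greedy key-based routing behaves like Kleinberg's small-world construction, which is what underlies the O(log n) delivery-time claim.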
 
Article
With the ongoing exhaustion of free address pools at the registries serving the global demand for IPv4 address space, scarcity has become reality. Networks in need of address space can no longer get more address allocations from their respective registries. In this work we frame the fundamentals of the IPv4 address exhaustion phenomenon and connected issues. We elaborate on how the current ecosystem of IPv4 address space has evolved since the standardization of IPv4, leading to the rather complex and opaque scenario we face today. We outline the evolution in address space management as well as address space use patterns, identifying key factors of the scarcity issues. We characterize the possible solution space to overcome these issues and open the perspective of address blocks as virtual resources, which involves issues such as differentiation between address blocks, the need for resource certification, and issues arising when transferring address space between networks.
 
Figure: The parse graph and structure of a TPP. We chose 0x6666 as the ethertype and source UDP port that uniquely identifies a TPP. With a programmable switch parser, this choice can be reprogrammed at any time.
Figure: End-host stack for creating and managing TPP-enabled applications. Arrows denote packet flow paths through the stack and communication paths between the end-host and the network control plane.
Figure: Maximum attainable application-level and network throughput with 260-byte TPPs inserted on a fraction of packets (1500-byte MTU and 1240-byte MSS). A sampling frequency of ∞ depicts the baseline performance where no TPPs are inserted. Error bars denote the standard deviation.
Article
This paper presents a practical approach to rapidly introduce new dataplane functionality into networks: end-hosts embed tiny programs into packets to actively query and manipulate a network's internal state. We show how this "tiny packet program" (TPP) interface gives end-hosts unprecedented visibility into network behavior, enabling them to work with the network to achieve a common goal. Our design leverages what each component does best: (a) switches forward and execute tiny packet programs (at most 5 instructions) at line rate, and (b) end-hosts perform arbitrary computation on network state, which is easy to evolve. Using a hardware prototype on a NetFPGA, we show our design is feasible at a reasonable cost. By implementing three different research proposals, we show that TPPs are also useful. Finally, we present an architecture in which they can be made secure.
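A schematic model of the execution flow (our toy, not the paper's instruction set or header format, apart from the 0x6666 ethertype quoted in the figure caption above): the program rides in the packet, each switch runs its few instructions against local state, and the results travel back to the end-host inside the packet itself:

```python
TPP_ETHERTYPE = 0x6666     # value quoted in the paper's parse-graph figure

class Switch:
    def __init__(self, state):
        self.state = state  # e.g. {"queue_occupancy": 17, "link_util": 0.42}

    def execute(self, packet):
        if packet.get("ethertype") != TPP_ETHERTYPE:
            return packet   # ordinary packets are forwarded untouched
        for instr, arg in packet["program"][:5]:    # at most 5 instructions
            if instr == "PUSH":                     # copy switch state into the packet
                packet["results"].append(self.state.get(arg))
        return packet

pkt = {"ethertype": TPP_ETHERTYPE,
       "program": [("PUSH", "queue_occupancy"), ("PUSH", "link_util")],
       "results": []}
for sw in [Switch({"queue_occupancy": 3, "link_util": 0.10}),
           Switch({"queue_occupancy": 42, "link_util": 0.93})]:
    pkt = sw.execute(pkt)
print(pkt["results"])   # [3, 0.1, 42, 0.93]: per-hop state collected for the end-host
```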
 
Top-cited authors
Hari Balakrishnan
  • Massachusetts Institute of Technology
Nick McKeown
  • Stanford University
Ion Stoica
  • University of California, Berkeley
Jennifer Rexford
  • Princeton University
Mark Handley
  • University College London