Injong Rhee

Duke University, Durham, North Carolina, United States

Publications (46)

  • ABSTRACT: Spectrum sensing, the task of discovering spectrum usage at a given location, is a fundamental problem in dynamic spectrum access networks. While sensing in narrow spectrum bands is well studied in prior work, wideband spectrum sensing is challenging because a wideband radio is generally too expensive and power-hungry for mobile devices. Sequential scan, on the other hand, can be very slow if the wide spectrum band contains many narrow channels. In this paper, we propose an analog-filter-based spectrum sensing technique that is much faster than sequential scan and much cheaper than using a wideband radio. The key insight is that if the sum of energy on a contiguous band is low, we can conclude that all channels in the band are clear with just one measurement. Based on this insight, we design an intelligent search algorithm that minimizes the total number of measurements. We prove that the algorithm has the same asymptotic complexity as compressed sensing, while our design is much simpler and easily implementable in real hardware. We demonstrate the feasibility of our technique using hardware devices that include analog filters and analog energy detectors. Our extensive evaluation using real TV “white space” signals shows the effectiveness of our technique.
    INFOCOM, 2013 Proceedings IEEE; 01/2013
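    A minimal sketch of the band-splitting search, assuming a hypothetical measure_energy(lo, hi) primitive that stands in for one analog-filter measurement of a contiguous band; the recursion is a generic divide-and-conquer illustration of the insight above, not the paper's exact algorithm:

      def find_occupied(measure_energy, lo, hi, threshold):
          """Recursively locate occupied channels in [lo, hi).

          measure_energy(lo, hi) is assumed to return the aggregate
          energy over channels lo..hi-1 in a single measurement.
          """
          if measure_energy(lo, hi) < threshold:
              return []                    # whole band clear: one measurement
          if hi - lo == 1:
              return [lo]                  # a single occupied channel
          mid = (lo + hi) // 2
          return (find_occupied(measure_energy, lo, mid, threshold) +
                  find_occupied(measure_energy, mid, hi, threshold))

      # Toy usage: channels 3 and 12 occupied in a 16-channel band.
      energy = [0.0] * 16
      energy[3] = energy[12] = 5.0
      probe = lambda lo, hi: sum(energy[lo:hi])
      print(find_occupied(probe, 0, 16, threshold=1.0))   # -> [3, 12]

    With k occupied channels among n, this sketch issues O(k log n) measurements instead of n sequential scans, matching the flavor of the complexity claim above.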
  • ABSTRACT: Contrabass, a practical protocol that jointly considers the PHY and MAC for MIMO-based concurrent transmissions in wireless ad hoc networks, is presented. Concurrent transmissions refer to simultaneous transmissions by multiple nodes over the same carrier frequency within the same interference range. Contrabass is the first open-loop concurrent transmission protocol to date that implements simultaneous channel training for concurrently transmitting links without any control message exchange. Its MAC protocol lets each active transmitter independently decide whether to transmit, with near-optimal transmission probability. Contrabass maximizes the number of successful concurrent transmissions, achieving very high aggregate throughput, low delay, and scalability even in dynamic environments. The design choices of Contrabass are deliberately made to enable practical implementation, which is demonstrated through a GNU Radio implementation and experimentation.
    INFOCOM, 2011 Proceedings IEEE; 05/2011
  • ABSTRACT: This paper identifies the possibility of using the electronic compasses and accelerometers in mobile phones as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or aircraft navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns and matching them against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11 m in regions where today's localization services are unsatisfactory or unavailable.
    INFOCOM, 2010 Proceedings IEEE; 04/2010
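    A toy illustration of the matching idea, under invented assumptions (the paper's matcher is more sophisticated): the sensed walk is reduced to (heading, step-count) segments from the compass and accelerometer, then scored against candidate path signatures generated from the map.

      def signature_distance(walk, path):
          """Score a sensed walk against one candidate map path.

          Both are lists of (heading_degrees, steps) segments; the toy
          metric pads the shorter list and sums per-segment mismatch.
          """
          n = max(len(walk), len(path))
          walk = walk + [(0, 0)] * (n - len(walk))
          path = path + [(0, 0)] * (n - len(path))
          d = 0.0
          for (h1, s1), (h2, s2) in zip(walk, path):
              dh = min(abs(h1 - h2), 360 - abs(h1 - h2))  # circular gap
              d += dh / 180.0 + abs(s1 - s2)
          return d

      # Sensed: ~38 steps heading east, then ~22 steps heading north.
      walk = [(92, 38), (358, 22)]
      candidates = {"corridor A": [(90, 40), (0, 20)],
                    "corridor B": [(180, 40), (90, 20)]}
      print(min(candidates, key=lambda k: signature_distance(walk, candidates[k])))
      # -> corridor A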
  • ABSTRACT: In this paper, we present two novel methods to enable concurrent communications in MIMO networks. The first method enables an 802.11n MIMO-OFDM receiver to decode independent data streams from two independent 802.11n transmitters concurrently. It is implemented on real-time 802.11n-based MIMO-OFDM testbeds, and the performance of the technique is examined through both simulation and field trials. The second method is a low-overhead concurrent communications scheme based on adaptive interference cancellation (CC-AIC), which does not require any explicit channel state information (CSI) exchange between the nodes in the network.
    Signals, Systems and Computers, 2009 Conference Record of the Forty-Third Asilomar Conference on; 12/2009
  • ABSTRACT: In P2P file sharing systems, free-riders who use others' resources without sharing their own cause system-wide performance degradation. Existing techniques to counter free-riders are either complex (and thus not widely deployed) or easy to bypass (and therefore not effective). This paper proposes a simple yet highly effective free-rider prevention scheme using (t,n) threshold secret sharing. A peer must upload encrypted file pieces to obtain the subkeys necessary to decrypt a file it has downloaded, i.e., subkeys are swapped for file pieces. No centralized monitoring or control is required. This scheme is called "treat-before-trick" (TBeT). TBeT penalizes free-riding with increased file completion times (the time to download a file and the necessary subkeys). TBeT counters known free-riding strategies, incentivizes peers to donate more upload bandwidth, and increases the overall system capacity for compliant peers. TBeT has been implemented as an extension to BitTorrent, and results of an experimental evaluation are presented.
    23rd IEEE International Symposium on Parallel and Distributed Processing, IPDPS 2009, Rome, Italy, May 23-29, 2009; 01/2009
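    The (t,n) primitive TBeT builds on can be illustrated with a textbook Shamir secret-sharing sketch (the primitive, not TBeT's own implementation): a subkey is split into n shares such that any t of them reconstruct it, so a peer must earn at least t shares by uploading before it can decrypt.

      import random

      P = 2**127 - 1   # a Mersenne prime; the field for this toy example

      def make_shares(secret, t, n):
          """Split `secret` into n shares; any t of them reconstruct it."""
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
          return [(x, f(x)) for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange interpolation at x = 0 recovers the secret."""
          secret = 0
          for xj, yj in shares:
              num = den = 1
              for xm, _ in shares:
                  if xm != xj:
                      num = num * -xm % P
                      den = den * (xj - xm) % P
              secret = (secret + yj * num * pow(den, P - 2, P)) % P
          return secret

      subkey = 123456789
      shares = make_shares(subkey, t=3, n=5)
      print(reconstruct(shares[:3]) == subkey)   # any 3 of 5 suffice -> True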
  • ABSTRACT: Multicast is an efficient means of transmitting the same content to multiple receivers while minimizing network resource usage. Applications that can benefit from multicast, such as multimedia streaming and download, are now being deployed over 3G wireless data networks. Existing multicast schemes transmit data at a fixed rate that can accommodate the most distant users in a cell. However, users belonging to the same multicast group can have widely different channel conditions; existing schemes are thus too conservative, limiting the throughput of users close to the base station. We propose two proportional fair multicast scheduling algorithms that can adapt to dynamic channel states in cellular data networks that use time division multiplexing: Inter-group Proportional Fairness (IPF) and Multicast Proportional Fairness (MPF). These scheduling algorithms take into account (1) reported data rate requests from users, which change dynamically to match their link states to the base station, and (2) the average received throughput of each user inside its cell. This information is used by the base station to select an appropriate data rate for each group. We prove that IPF and MPF achieve proportional fairness among groups and among all users in a group inside a cell, respectively. Through extensive packet-level simulations, we demonstrate that these algorithms achieve a good balance between throughput and fairness among users and groups.
    IEEE Transactions on Wireless Communications 01/2009; 8:4540-4549. · 2.42 Impact Factor
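    A rough sketch in the spirit of the MPF decision, under simplified assumptions not taken from the paper: per scheduling slot, the base station picks the group rate r maximizing the proportional-fair metric summed over users whose reported feasible rate is at least r.

      def mpf_rate(reported_rates, avg_throughputs):
          """Pick a multicast rate by a proportional-fair criterion.

          reported_rates[u]:  highest rate user u's channel supports now
          avg_throughputs[u]: running average of u's received throughput
          A user is served in this slot only if its feasible rate >= r.
          """
          best_rate, best_metric = None, float("-inf")
          for r in sorted(set(reported_rates.values())):
              served = [u for u, ru in reported_rates.items() if ru >= r]
              metric = sum(r / avg_throughputs[u] for u in served)
              if metric > best_metric:
                  best_rate, best_metric = r, metric
          return best_rate

      rates = {"u1": 2.4, "u2": 1.2, "u3": 0.6}    # Mb/s feasible this slot
      thrpt = {"u1": 1.0, "u2": 0.8, "u3": 0.3}    # running averages
      print(mpf_rate(rates, thrpt))                # -> 0.6 (serves everyone)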
  • ABSTRACT: This paper examines congestion control issues for TCP flows that require in-network processing on the fly in network elements such as gateways, proxies, firewalls, and even routers. Such flows will be increasingly abundant in the future as the Internet evolves. Since these flows consume CPU cycles in network elements, both bandwidth and CPU resources can become bottlenecks, and congestion control must therefore deal with "congestion" on both resources. In this paper, we show that conventional TCP/AQM schemes can lose significant throughput and suffer harmful unfairness in this environment, particularly when CPU cycles become more scarce (which is likely the trend, given the recent explosive growth rate of bandwidth). As a solution to this problem, we establish a notion of dual-resource proportional fairness and propose an AQM scheme, called Dual-Resource Queue (DRQ), that closely approximates proportional fairness for TCP Reno sources with in-network processing requirements. DRQ is scalable because it maintains no per-flow state and minimizes communication among different resource queues, and it is incrementally deployable because it requires no change to TCP stacks. Our simulation study shows that DRQ approximates proportional fairness without much implementation cost, and that even an incremental deployment of DRQ at the edge of the Internet improves the fairness and throughput of these TCP flows. Our work is at an early stage and may lead to interesting developments in congestion control research.
    IEEE/ACM Transactions on Networking 05/2008; 16(2):435-449. · 2.01 Impact Factor
  • INFOCOM 2007. 26th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 6-12 May 2007, Anchorage, Alaska, USA; 01/2007
  • J. Martin, A. Nilsson, Injong Rhee
    ABSTRACT: The set of TCP congestion control algorithms associated with TCP-Reno (e.g., slow-start and congestion avoidance) has been crucial to ensuring the stability of the Internet. Algorithms such as TCP-NewReno (which has been deployed) and TCP-Vegas (which has not been deployed) represent incrementally deployable enhancements to TCP, as they have been shown to improve a TCP connection's throughput without degrading the performance of competing flows. Our research focuses on delay-based congestion avoidance (DCA) algorithms, like TCP-Vegas, which attempt to utilize the congestion information contained in packet round-trip time (RTT) samples. Through measurement and simulation, we show evidence suggesting that a single deployment of DCA (i.e., a TCP connection enhanced with a DCA algorithm) is not a viable enhancement to TCP over high-speed paths. We define several performance metrics that quantify the level of correlation between packet loss and RTT. Based on our measurement analysis, we find that, although there is useful congestion information contained within RTT samples, the correlation between an increase in RTT and packet loss is not strong enough to allow a TCP sender to improve throughput reliably. While DCA is able to reduce the packet loss rate experienced by a connection, in its attempts to avoid packet loss the algorithm reacts unnecessarily to RTT variation that is not associated with packet loss. The result is degraded throughput compared to a similar flow that does not use DCA.
    IEEE/ACM Transactions on Networking 07/2003; · 2.01 Impact Factor
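    For readers unfamiliar with DCA, a generic Vegas-style step (an illustration of the algorithm class studied, not the exact scheme): the sender estimates queued packets from the gap between expected and actual throughput, and backs off before loss occurs.

      def vegas_adjust(cwnd, base_rtt, sample_rtt, alpha=2.0, beta=4.0):
          """One Vegas-style congestion-avoidance step.

          diff estimates packets queued in the network: expected rate
          (cwnd/base_rtt) minus actual (cwnd/sample_rtt), times base_rtt.
          """
          diff = (cwnd / base_rtt - cwnd / sample_rtt) * base_rtt
          if diff < alpha:
              return cwnd + 1      # path looks idle: grow the window
          if diff > beta:
              return cwnd - 1      # queue building: back off before loss
          return cwnd              # inside the target band: hold

      print(vegas_adjust(10.0, base_rtt=0.050, sample_rtt=0.100))  # -> 9.0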
  • Ranjith S Jayaram, Injong Rhee
    ABSTRACT: This paper presents an early experience with a CDMA 2.5G wireless network commercially deployed in South Korea. It finds that high signal losses and latency are commonly present in packet delivery, causing TCP to significantly under-utilize the available network bandwidth. In this environment, there is an inherent limitation in using packet losses as congestion indicators because of the lack of correlation between congestion and packet losses. To remedy this problem, a new flow control protocol that uses delay hysteresis as a congestion indicator is presented. The protocol actively manages delays to keep them within a certain bound, throttling its transmission rate when network delays tend to increase and probing for more bandwidth when network delays tend to decrease. A variant of the protocol is currently incorporated in a cellular-phone-based video-on-demand system as the main transport protocol for video file download and for streaming. Our experimental results suggest that the protocol achieves higher and more consistent throughput than TCP, and exhibits some degree of fairness to its own flows and to TCP.
    01/2003;
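    A minimal sketch of the delay-hysteresis idea with invented constants (the abstract does not give the protocol's actual rules or parameters): keep a smoothed delay inside a band above the minimum observed delay, throttling when it rises out of the band and probing when it falls well below it.

      class DelayHysteresisController:
          """Toy rate controller keyed to a delay band, not packet loss."""

          def __init__(self, rate_kbps=200.0, band_ms=40.0):
              self.rate = rate_kbps
              self.band = band_ms
              self.min_delay = float("inf")
              self.smoothed = None

          def on_delay_sample(self, delay_ms):
              self.min_delay = min(self.min_delay, delay_ms)
              self.smoothed = (delay_ms if self.smoothed is None
                               else 0.9 * self.smoothed + 0.1 * delay_ms)
              if self.smoothed > self.min_delay + self.band:
                  self.rate *= 0.875       # delays trending up: throttle
              elif self.smoothed < self.min_delay + self.band / 2:
                  self.rate += 10.0        # headroom left: probe for more
              return self.rate

      c = DelayHysteresisController()
      for d in [100, 105, 110, 180, 200, 220]:   # one-way delay samples, ms
          print(round(c.on_delay_sample(d)))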
  • ABSTRACT: For each ingress-egress traffic aggregate, the ingress router measures the bulk transfer capacity (BTC) along the paths as well as the number of flows, and reports them to a central traffic allocation server that runs the weighted max-min fair allocation algorithm, a slight variation of the max-min algorithm [2]. Each path (i.e., traffic trunk) is represented by only one flow in the algorithm. The trunk allocations are returned to the ingress routers. The fair rate of an ingress-egress traffic aggregate is the sum of the fair rates of its trunks. The aggregate fair rate is then used as the set point for the token buckets.
    09/2002;
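    The trunk allocation step can be sketched with standard weighted water-filling (a generic formulation, not the server's code): capacity is repeatedly handed out in proportion to weights (e.g., per-trunk flow counts), freezing any trunk that reaches its cap, such as its measured BTC, and redistributing the remainder.

      def weighted_max_min(capacity, demands, weights):
          """Weighted max-min (water-filling) share for each trunk.

          demands[i] caps trunk i (e.g., its measured BTC); weights[i]
          is, e.g., its flow count. Returns per-trunk allocations.
          """
          alloc = [0.0] * len(demands)
          active = set(range(len(demands)))
          remaining = capacity
          while active and remaining > 1e-9:
              fair = remaining / sum(weights[i] for i in active)
              capped = [i for i in active
                        if demands[i] - alloc[i] <= fair * weights[i]]
              if not capped:               # no one capped: split and stop
                  for i in active:
                      alloc[i] += fair * weights[i]
                  break
              for i in capped:             # freeze at demand, redistribute
                  remaining -= demands[i] - alloc[i]
                  alloc[i] = demands[i]
                  active.remove(i)
          return alloc

      # Three trunks sharing a 10 Mb/s budget, weighted by flow count.
      print(weighted_max_min(10.0, demands=[2.0, 8.0, 8.0], weights=[1, 1, 2]))
      # -> [2.0, 2.67, 5.33] (approximately)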
  • ABSTRACT: … Moreover, the i.i.d. distribution of loss events is realistic in networks deploying randomized packet discard mechanisms such as RED. Given these assumptions, flow throughput is determined by its packet size and RTT. A change in the number of flows crossing the bottleneck results in a change in the loss rate. Let λ'_i be the throughput of connection i after the change; then the throughput of all connections increases (or decreases) by a constant factor α, that is, λ'_i = α·λ_i. An increase in the number of flows keeps the link congested; equally, a decrease results in higher throughput for the remaining flows, thus maintaining the state of the link as a bottleneck. Let C denote the bottleneck capacity; then Σ_i λ'_i = C, or α = C / Σ_i λ_i, and substituting in (2), λ'_i = C·λ_i / Σ_j λ_j. Given the throughput of TCP flows through the bottleneck before the change, and given that the new flows are known to have the same end-to-end path characteristics as some existing flows, each flow's new share of bottleneck capacity can be computed.
    09/2002;
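    The rescaling argument reconstructed above reduces to one line of arithmetic; a worked example with invented numbers:

      def rescaled_shares(capacity, old_rates, clones):
          """Predict per-flow shares after new flows join a saturated link.

          old_rates: measured throughput of each existing flow.
          clones: for each new flow, the index of the existing flow whose
                  end-to-end path characteristics it shares.
          Every flow is rescaled by alpha = C / (sum of all weights).
          """
          weights = list(old_rates) + [old_rates[i] for i in clones]
          alpha = capacity / sum(weights)
          return [alpha * w for w in weights]

      # 10 Mb/s bottleneck, three flows, one newcomer mirroring flow 1.
      print(rescaled_shares(10.0, [4.0, 3.5, 2.5], clones=[1]))
      # -> [2.96, 2.59, 1.85, 2.59] (approximately); still sums to C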
  • ABSTRACT: … The algorithm in Figure 1 achieves this goal by converging on the correct number of flows in a logarithmic number of epochs. Even if the number of flows in the FEC keeps increasing, its maximum rate of increase (as was shown in [1]) is much slower than the convergence speed of the algorithm. The following experiments demonstrate the effectiveness of the volume estimation algorithm in allowing Aequitas to achieve efficient equilibria where rate policing does not leave links underutilized. In the topology of Figure 2, FECs 1 and 2 consist of 20 FTP flows each, based on NewReno TCP. FEC 1 packets are forwarded along trunks 1 and 2, while FEC 2 packets are forwarded along trunks 3 and 4. Once again, all links are 20 Mb/s with 1 ms propagation delay. Output queues are RED with default parameters, except that random drop is used instead of drop tail. The capacity and propagation delay of the external link 0-1 are varied to emulate congestion on the end-to-end path of FEC 2 outside the local domain. Initial …
    09/2002;
  • ABSTRACT: We consider the problem of traffic allocation in multipath IP networks. We postulate that the desired equilibrium distribution of network bandwidth is one where every flow acquires a TCP-fair share along each of the routes connecting its ingress and egress. This characterization calls for connection striping at the ingress to balance the load among the available routes. Still, the desired equilibrium is not reachable when flows are allowed to compete freely for bandwidth, in which case striping is known to reduce the throughput of TCP below the fair share on the least-congested path. Using the fairness properties of TCP congestion control, we prove that under connection load balancing and ingress-egress traffic isolation, multipath networks tend to reach the desired equilibrium at steady state, despite the interaction among flows within traffic aggregates. The practical value of this result stems from the accuracy with which ingress-egress bandwidth allocations can be approximated using token buckets as rate limiters at the ingress routers, and from the availability of an efficient method for the accurate estimation of TCP-fair shares.
    09/2002;
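    The ingress rate limiting mentioned above can be approximated with a standard token bucket (a generic sketch, not the paper's implementation): tokens accrue at the allocated TCP-fair rate, and a packet passes only if enough tokens are available.

      import time

      class TokenBucket:
          """Classic token bucket: `rate` tokens/s, burst-capped."""

          def __init__(self, rate_bytes_per_s, burst_bytes):
              self.rate = rate_bytes_per_s
              self.burst = burst_bytes
              self.tokens = burst_bytes
              self.last = time.monotonic()

          def allow(self, packet_bytes):
              now = time.monotonic()
              self.tokens = min(self.burst,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= packet_bytes:
                  self.tokens -= packet_bytes
                  return True      # within the allocated fair rate
              return False         # exceeds allocation: drop or mark

      bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=15_000)
      print(bucket.allow(1500), bucket.allow(100_000))   # -> True False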
  • ABSTRACT: We present MTCP, a congestion control scheme for large-scale reliable multicast. Congestion control for reliable multicast is important because of its wide applications in multimedia and collaborative computing, yet non-trivial because of the potentially large number of receivers involved. Many schemes have been proposed to handle the recovery of lost packets in a scalable manner, but there is little work on the design and implementation of congestion control schemes for reliable multicast. We propose new techniques that can effectively handle instances of congestion occurring simultaneously at various parts of a multicast tree. Our protocol incorporates several novel features: (1) hierarchical congestion status reports that distribute the load of processing feedback from all receivers across the multicast group, (2) the relative time delay concept, which overcomes the difficulty of estimating round-trip times in tree-based multicast environments, (3) window-based control that prevents the sender from transmitting faster than packets leave the bottleneck link on the multicast path through which the sender's traffic flows, (4) a retransmission window that regulates the flow of repair packets to prevent local recovery from causing congestion, and (5) a selective acknowledgment scheme that prevents independent (i.e., non-congestion-related) packet loss from reducing the sender's transmission rate. We have implemented MTCP both on UDP in SunOS 5.6 and on the simulator ns, and we have conducted extensive Internet experiments and simulations to test the scalability and inter-fairness properties of the protocol. The encouraging results we have obtained support our confidence that TCP-like congestion control for large-scale reliable multicast is within our grasp.
    Computer Networks 04/2002;
  • S. Ramesh, I. Rhee, K. Guo
    ABSTRACT: A closed-loop (demand-driven) approach to video-on-demand services, called multicast cache (Mcache), is discussed. Servers use multicast to reduce their bandwidth usage by allowing multiple requests to be served with a single data stream. However, this requires clients to delay receiving the movie until the multicast starts. Using regional cache servers deployed at many strategic locations, Mcache can remove the initial playout delays of clients in multicast-based video streaming. While requests are batched together for a multicast, clients can receive the prefix of a requested movie clip from caches located in their own regions. The multicast containing the later portion of the movie can wait until the prefix is played out. While this use of regional caches has been proposed previously, the novelty of our scheme lies in the fact that requests coming after the multicast starts can still be batched together to be served by multicast patches without any playout delays. The use of patches was proposed before, but only with unicast or with playout delays. Mcache effectively marries the idea of a multicast patch with caches to provide a truly adaptive video-on-demand service whose bandwidth usage is on par with the best known open-loop schemes under high request rates, while using only minimal bandwidth under low request rates. In addition, the efficient use of multicast and caches removes the need for a priori knowledge of client disk storage requirements, which some existing schemes assume. This makes Mcache ideal for current heterogeneous Internet environments where those parameters are hard to predict. We further propose the Segmented Mcache (SMcache) scheme, a generalized and improved version of Mcache in which the clip is partitioned into several segments in order to preserve the advantages of the original Mcache scheme with nearly the same server bandwidth requirement as the open-loop schemes under high request rates.
    IEEE Transactions on Circuits and Systems for Video Technology 04/2001; · 1.82 Impact Factor
  • S. Ramesh, Injong Rhee, K. Guo
    ABSTRACT: This paper presents a closed-loop (demand-driven) approach to VoD services, called multicast with caching (Mcache). Servers use multicast to reduce bandwidth usage by serving multiple requests with a single data stream. However, this requires clients to delay receiving the movie until the multicast starts. Using regional cache servers, Mcache removes initial playout delays at the clients, because the clients can receive the prefix of a requested clip from regional caches while waiting for the multicast to start. In addition, the multicast containing the later portion of the movie can wait until the prefix is played out. While this use of caches has been proposed before, the novelty of our scheme lies in the fact that requests coming after the multicast starts can still be batched together to be served by multicast patches without any playout delays. The use of patches has been proposed before, either with unicast or with playout delays. Mcache effectively marries the idea of a multicast patch with caches to provide a truly adaptive VoD service whose bandwidth usage is on par with the best known open-loop schemes under high request rates, while using only minimal bandwidth under low request rates. In addition, the efficient use of multicast and caches removes the need for a priori knowledge of client request rates and client disk storage requirements, which some existing schemes assume. This makes Mcache ideal for current heterogeneous Internet environments where those parameters are hard to predict.
    INFOCOM 2001. Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE; 02/2001
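    A toy timeline of the batching-plus-patching idea shared by both Mcache papers (invented parameters, not the papers' analysis): a client arriving after the multicast has started plays the cached prefix immediately and receives a patch covering only the portion of the multicast it missed.

      def streams_for_request(arrival, mcast_start, prefix_len):
          """What a client must receive to start playback with no delay.

          arrival, mcast_start: seconds; prefix_len: seconds of the clip
          held in the regional cache. The multicast carries the suffix
          starting at position prefix_len. Returns (source, from, to).
          """
          plan = [("cache prefix", 0.0, prefix_len)]
          if arrival <= mcast_start:
              plan.append(("join multicast", prefix_len, None))
          else:
              missed = arrival - mcast_start   # suffix seconds missed
              plan.append(("patch", prefix_len, prefix_len + missed))
              plan.append(("join ongoing multicast", prefix_len + missed, None))
          return plan

      for t in (5.0, 42.0):                    # multicast starts at t = 30 s
          print(t, streams_for_request(t, mcast_start=30.0, prefix_len=30.0))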
  • I. Rhee, S.R. Joshi
    ABSTRACT: Real-time interactive video transmission in the current Internet has mediocre quality because of high packet loss rates. Loss of packets in a video frame manifests itself not only in the reduced quality of that frame but also in the propagation of that distortion to successive frames. This error propagation problem is inherent in any motion compensation-based video codec. In this paper, we present a new error recovery scheme, called recovery from error spread using continuous updates (RESCU), that effectively alleviates error propagation in the transmission of interactive video. The main benefit of RESCU is that it allows more time for transport-level recovery, such as retransmission and forward error correction, to succeed, and it effectively masks out the delays in recovering lost packets without introducing any playout delays, making it suitable for interactive video communication. Through simulation and real Internet experiments, we study the effectiveness and limitations of the proposed techniques and compare their performance to that of existing video error recovery techniques, including H.263+ (NEWPRED). The study indicates that RESCU is effective in alleviating the error spread problem and can sustain much better video quality with less bit overhead than existing video error recovery techniques under various network environments.
    IEEE Journal on Selected Areas in Communications 07/2000; · 3.12 Impact Factor
  • Injong Rhee
    ABSTRACT: A new retransmission-based error control technique is presented that does not incur any additional latency in frame playout times and is hence suitable for interactive applications. It takes advantage of the motion prediction loop employed in most motion compensation-based codecs. By correcting errors in a reference frame caused by earlier packet loss, it prevents error propagation. The technique rearranges the temporal dependency of frames so that a displayed frame is referenced for the decoding of its succeeding dependent frames much later than its display time. Thus, the delay in repairing lost packets can be effectively masked out. The developed technique is combined with layered video coding to maintain consistently good video quality even under heavy packet loss. Through the results of extensive Internet experiments, the paper shows that layered coding can be very effective when combined with the retransmission-based error control technique for low-bit-rate transmission over best-effort networks.
    02/2000;
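    The temporal-dependency rearrangement can be pictured with a tiny helper (an illustration of the idea, not codec logic): with reference distance k, frame i predicts from frame i-k rather than i-1, giving a lost reference k frame-times to be repaired by retransmission before any frame needs it.

      def reference_schedule(num_frames, k):
          """Map each frame to its prediction reference, distance k apart.

          Frames 0..k-1 are intra-coded (no reference); frame i >= k is
          predicted from frame i-k, so a lost reference has k frame-times
          of masked repair time (~100 ms for k = 3 at 30 fps).
          """
          return {i: (i - k if i >= k else None) for i in range(num_frames)}

      for frame, ref in reference_schedule(8, 3).items():
          src = "intra" if ref is None else f"frame {ref}"
          print(f"frame {frame} <- {src}")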

Publication Stats

744 Citations
14.92 Total Impact Points

Institutions

  • 2010
    • Duke University
      • Department of Computer Science
      Durham, North Carolina, United States
  • 1997–2010
    • North Carolina State University
      • Department of Computer Science
      Raleigh, North Carolina, United States
  • 2008
    • Korea Advanced Institute of Science and Technology
Seoul, South Korea
  • 2003
    • Clemson University
      Clemson, South Carolina, United States
  • 2000
    • AT&T Labs
      Austin, Texas, United States
  • 1996–1997
    • Emory University
      • Department of Mathematics and Computer Science
Atlanta, Georgia, United States
  • 1995–1996
    • The University of Warwick
      • Department of Computer Science
Warwick, England, United Kingdom
  • 1992–1994
    • University of North Carolina at Chapel Hill
      • Department of Computer Science
Chapel Hill, North Carolina, United States